query_id (string, 32 chars) | query (string, 6 to 5.38k chars) | positive_passages (list, 1 to 22 items) | negative_passages (list, 9 to 100 items) | subset (string, 7 classes) |
---|---|---|---|---|
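Each row below follows this schema. As a rough illustration only (the file name "scidocsrr.jsonl" and the JSON Lines layout are assumptions made for this sketch, not something stated in the listing), a record with these fields could be read like this:

```python
import json

# Minimal sketch: iterate over retrieval records matching the schema above.
# Assumes one JSON object per line; adapt to however the data is actually stored.
with open("scidocsrr.jsonl", encoding="utf-8") as f:
    for line in f:
        row = json.loads(line)
        query_id = row["query_id"]            # 32-character identifier
        query = row["query"]                  # free-text query
        positives = row["positive_passages"]  # list of {"docid", "text", "title"} dicts
        negatives = row["negative_passages"]  # list of {"docid", "text", "title"} dicts
        subset = row["subset"]                # one of 7 subset names, e.g. "scidocsrr"
        print(query_id, len(positives), len(negatives), subset)
```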
beb37d2caa692c70065197f037373e2a
|
Security Testing for Chatbots
|
[
{
"docid": "e2e99eca77da211cac64ab69931ed1f4",
"text": "Cross-site scripting (XSS) and SQL injection errors are two prominent examples of taint-based vulnerabilities that have been responsible for a large number of security breaches in recent years. This paper presents QED, a goal-directed model-checking system that automatically generates attacks exploiting taint-based vulnerabilities in large Java web applications. This is the first time where model checking has been used successfully on real-life Java programs to create attack sequences that consist of multiple HTTP requests. QED accepts any Java web application that is written to the standard servlet specification. The analyst specifies the vulnerability of interest in a specification that looks like a Java code fragment, along with a range of values for form parameters. QED then generates a goal-directed analysis from the specification to perform session-aware tests, optimizes to eliminate inputs that are not of interest, and feeds the remainder to a model checker. The checker will systematically explore the remaining state space and report example attacks if the vulnerability specification is matched. QED provides better results than traditional analyses because it does not generate any false positive warnings. It proves the existence of errors by providing an example attack and a program trace showing how the code is compromised. Past experience suggests this is important because it makes it easy for the application maintainer to recognize the errors and to make the necessary fixes. In addition, for a class of applications, QED can guarantee that it has found all the potential bugs in the program. We have run QED over 3 Java web applications totaling 130,000 lines of code. We found 10 SQL injections and 13 cross-site scripting errors.",
"title": ""
},
{
"docid": "eec68f5cf24838f6885636dc72a057e6",
"text": "This electronic file may not be altered in any way. The author(s) of this article is/are permitted to use this PDF file to generate printed copies to be used by way of offprints, for their personal use only. Permission is granted by the publishers to post this file on a closed server which is accessible to members (students and staff) only of the author’s/s’ institute. For any other use of this material prior written permission should be obtained from the publishers or through the Copyright Clearance Center (for USA: www.copyright.com). Please contact rights@benjamins.nl or consult our website: www.benjamins.com",
"title": ""
}
] |
[
{
"docid": "d46329330906d2ea997cb63cb465bec0",
"text": "We consider a multilingual weakly supervised learning scenario where knowledge from annotated corpora in a resource-rich language is transferred via bitext to guide the learning in other languages. Past approaches project labels across bitext and use them as features or gold labels for training. We propose a new method that projects model expectations rather than labels, which facilities transfer of model uncertainty across language boundaries. We encode expectations as constraints and train a discriminative CRF model using Generalized Expectation Criteria (Mann and McCallum, 2010). Evaluated on standard Chinese-English and German-English NER datasets, our method demonstrates F1 scores of 64% and 60% when no labeled data is used. Attaining the same accuracy with supervised CRFs requires 12k and 1.5k labeled sentences. Furthermore, when combined with labeled examples, our method yields significant improvements over state-of-the-art supervised methods, achieving best reported numbers to date on Chinese OntoNotes and German CoNLL-03 datasets.",
"title": ""
},
{
"docid": "b726c5812eecb0e846f9089edc64deca",
"text": "Distant metastases harbor unique genomic characteristics not detectable in the corresponding primary tumor of the same patient and metastases located at different sites show a considerable intrapatient heterogeneity. Thus, the mere analysis of the resected primary tumor alone (current standard practice in oncology) or, if possible, even reevaluation of tumor characteristics based on the biopsy of the most accessible metastasis may not reveal sufficient information for treatment decisions. Here, we propose that this dilemma can be solved by a new diagnostic concept: liquid biopsy, that is, analysis of therapeutic targets and drug resistance-conferring gene mutations on circulating tumor cells (CTC) and cell-free circulating tumor DNA (ctDNA) released into the peripheral blood from metastatic deposits. We discuss the current challenges and future perspectives of CTCs and ctDNA as biomarkers in clinical oncology. Both CTCs and ctDNA are interesting complementary technologies that can be used in parallel in future trials assessing new drugs or drug combinations. We postulate that the liquid biopsy concept will contribute to a better understanding and clinical management of drug resistance in patients with cancer.",
"title": ""
},
{
"docid": "724734077fbc469f1bbcad4d7c3b0cbc",
"text": "Most efforts to improve cyber security focus primarily on incorporating new technological approaches in products and processes. However, a key element of improvement involves acknowledging the importance of human behavior when designing, building and using cyber security technology. In this survey paper, we describe why incorporating an understanding of human behavior into cyber security products and processes can lead to more effective technology. We present two examples: the first demonstrates how leveraging behavioral science leads to clear improvements, and the other illustrates how behavioral science offers the potential for significant increases in the effectiveness of cyber security. Based on feedback collected from practitioners in preliminary interviews, we narrow our focus to two important behavioral aspects: cognitive load and bias. Next, we identify proven and potential behavioral science findings that have cyber security relevance, not only related to cognitive load and bias but also to heuristics and behavioral science models. We conclude by suggesting several next steps for incorporating behavioral science findings in our technological design, development and use.",
"title": ""
},
{
"docid": "299c0b60f9803c4eb60cc900b196a689",
"text": "The exponentially growing production of data and the social trend towards openness and sharing are powerful forces that are changing the global economy and society. Governments around the world have become active participants in this evolution, opening up their data for access and re-use by public and private agents alike. The phenomenon of Open Government Data has spread around the world in the last four years, driven by the widely held belief that use of Open Government Data has the ability to generate both economic and social value. However, a cursory review of the popular press, as well as an investigation of academic research and empirical data, reveals the need to further understand the relationship between Open Government Data and value. In this paper, we focus on how use of Open Government Data can bring about new innovative solutions that can generate social and economic value. We apply a critical realist approach to a case study analysis to uncover the mechanisms that can explain how data is transformed to value. We explore the case of Opower, a pioneer in using and transforming data to induce a behavioral change that has resulted in a considerable reduction in energy use over the last six years.",
"title": ""
},
{
"docid": "9e13ee2693415e6597c54660d45a93bd",
"text": "Visual perception is a challenging problem in part due to illumination variations. A possible solution is to first estimate an illumination invariant representation before using it for recognition. The object albedo and surface normals are examples of such representations. In this paper, we introduce a multilayer generative model where the latent variables include the albedo, surface normals, and the light source. Combining Deep Belief Nets with the Lambertian reflectance assumption, our model can learn good priors over the albedo from 2D images. Illumination variations can be explained by changing only the lighting latent variable in our model. By transferring learned knowledge from similar objects, albedo and surface normals estimation from a single image is possible in our model. Experiments demonstrate that our model is able to generalize as well as improve over standard baselines in one-shot face recognition.",
"title": ""
},
{
"docid": "2bb78e27f9546b938caf8be04f1a8b99",
"text": "While there has been an explosion of impressive, datadriven AI applications in recent years, machines still largely lack a deeper understanding of the world to answer questions that go beyond information explicitly stated in text, and to explain and discuss those answers. To reach this next generation of AI applications, it is imperative to make faster progress in areas of knowledge, modeling, reasoning, and language. Standardized tests have often been proposed as a driver for such progress, with good reason: Many of the questions require sophisticated understanding of both language and the world, pushing the boundaries of AI, while other questions are easier, supporting incremental progress. In Project Aristo at the Allen Institute for AI, we are working on a specific version of this challenge, namely having the computer pass Elementary School Science and Math exams. Even at this level there is a rich variety of problems and question types, the most difficult requiring significant progress in AI. Here we propose this task as a challenge problem for the community, and are providing supporting datasets. Solutions to many of these problems would have a major impact on the field so we encourage you: Take the Aristo Challenge!",
"title": ""
},
{
"docid": "3feb565be1dc3439fd2fdf6b0e25d65b",
"text": "Previous research demonstrated that a single amnesic patient could acquire complex knowledge and processes required for the performance of a computer data-entry task. The present study extends the earlier work to a larger group of brain-damaged patients with memory disorders of varying severity and of various etiologies and with other accompanying cognitive deficits. All patients were able to learn both the data-entry procedures and the factual information associated with the task. Declarative knowledge was acquired by patients at a much slower rate than normal whereas procedural learning proceeded at approximately the same rate in patients and control subjects. Patients also showed evidence of transfer of declarative knowledge to the procedural task, as well as transfer of the data-entry procedures across changes in materials.",
"title": ""
},
{
"docid": "600203272eace7a02d6f4cbdc591e0b9",
"text": "Algebraic manipulation covers branches of software, particularly list processing, mathematics, notably logic and number theory, and applications largely in physics. The lectures will deal with all of these to a varying extent. The mathematical content will be kept to a minimum.",
"title": ""
},
{
"docid": "0bd30308a11711f1dc71b8ff8ae8e80c",
"text": "Cloud Computing has been envisioned as the next-generation architecture of IT Enterprise. It moves the application software and databases to the centralized large data centers, where the management of the data and services may not be fully trustworthy. This unique paradigm brings about many new security challenges, which have not been well understood. This work studies the problem of ensuring the integrity of data storage in Cloud Computing. In particular, we consider the task of allowing a third party auditor (TPA), on behalf of the cloud client, to verify the integrity of the dynamic data stored in the cloud. The introduction of TPA eliminates the involvement of the client through the auditing of whether his data stored in the cloud are indeed intact, which can be important in achieving economies of scale for Cloud Computing. The support for data dynamics via the most general forms of data operation, such as block modification, insertion, and deletion, is also a significant step toward practicality, since services in Cloud Computing are not limited to archive or backup data only. While prior works on ensuring remote data integrity often lacks the support of either public auditability or dynamic data operations, this paper achieves both. We first identify the difficulties and potential security problems of direct extensions with fully dynamic data updates from prior works and then show how to construct an elegant verification scheme for the seamless integration of these two salient features in our protocol design. In particular, to achieve efficient data dynamics, we improve the existing proof of storage models by manipulating the classic Merkle Hash Tree construction for block tag authentication. To support efficient handling of multiple auditing tasks, we further explore the technique of bilinear aggregate signature to extend our main result into a multiuser setting, where TPA can perform multiple auditing tasks simultaneously. Extensive security and performance analysis show that the proposed schemes are highly efficient and provably secure.",
"title": ""
},
{
"docid": "94e5d19f134670a6ae982311e6c1ccc1",
"text": "In mobile ad hoc networks, it is usually assumed that all the nodes belong to the same authority; therefore, they are expected to cooperate in order to support the basic functions of the network such as routing. In this paper, we consider the case in which each node is its own authority and tries to maximize the bene ts it gets from the network. In order to stimulate cooperation, we introduce a virtual currency and detail the way it can be protected against theft and forgery. We show that this mechanism ful lls our expectations without signi cantly decreasing the performance of the network.",
"title": ""
},
{
"docid": "dfb78a96f9af81aa3f4be1a28e4ce0a2",
"text": "This paper presents two ultra-high-speed SerDes dedicated for PAM4 and NRZ data. The PAM4 TX incorporates an output driver with 3-tap FFE and adjustable weighting to deliver clean outputs at 4 levels, and the PAM4 RX employs a purely linear full-rate CDR and CTLE/1-tap DFE combination to recover and demultiplex the data. NRZ TX includes a tree-structure MUX with built-in PLL and phase aligner. NRZ RX adopts linear PD with special vernier technique to handle the 56 Gb/s input data. All chips have been verified in silicon with reasonable performance, providing prospective design examples for next-generation 400 GbE.",
"title": ""
},
{
"docid": "e3caf8dcb01139ae780616c022e1810d",
"text": "The relative age effect (RAE) and its relationships with maturation, anthropometry, and physical performance characteristics were examined across a representative sample of English youth soccer development programmes. Birth dates of 1,212 players, chronologically age-grouped (i.e., U9's-U18's), representing 17 professional clubs (i.e., playing in Leagues 1 & 2) were obtained and categorised into relative age quartiles from the start of the selection year (Q1 = Sep-Nov; Q2 = Dec-Feb; Q3 = Mar-May; Q4 = Jun-Aug). Players were measured for somatic maturation and performed a battery of physical tests to determine aerobic fitness (Multi-Stage Fitness Test [MSFT]), Maximal Vertical Jump (MVJ), sprint (10 & 20m), and agility (T-Test) performance capabilities. Odds ratio's (OR) revealed Q1 players were 5.3 times (95% confidence intervals [CI]: 4.08-6.83) more likely to be selected than Q4's, with a particularly strong RAE bias observed in U9 (OR: 5.56) and U13-U16 squads (OR: 5.45-6.13). Multivariate statistical models identified few between quartile differences in anthropometric and fitness characteristics, and confirmed chronological age-group and estimated age at peak height velocity (APHV) as covariates. Assessment of practical significance using magnitude-based inferences demonstrated body size advantages in relatively older players (Q1 vs. Q4) that were very-likely small (Effect Size [ES]: 0.53-0.57), and likely to very-likely moderate (ES: 0.62-0.72) in U12 and U14 squads, respectively. Relatively older U12-U14 players also demonstrated small advantages in 10m (ES: 0.31-0.45) and 20m sprint performance (ES: 0.36-0.46). The data identify a strong RAE bias at the entry-point to English soccer developmental programmes. RAE was also stronger circa-PHV, and relatively older players demonstrated anaerobic performance advantages during the pubescent period. Talent selectors should consider motor function and maturation status assessments to avoid premature and unwarranted drop-out of soccer players within youth development programmes.",
"title": ""
},
{
"docid": "b220f5c44e1fa81f65807ffb73869edf",
"text": "Increasing demand and costs for healthcare, exacerbated by ageing populations and a great shortage of doctors, are serious concerns worldwide. Consequently, this has generated a great amount of motivation in providing better healthcare through smarter healthcare systems. Management and processing of healthcare data are challenging due to various factors that are inherent in the data itself such as high-dimensionality, irregularity and sparsity. A long stream of research has been proposed to address these problems and provide more efficient and scalable healthcare systems and solutions. In this chapter, we shall examine the challenges in designing algorithms and systems for healthcare analytics and applications, followed by a survey on various relevant solutions. We shall also discuss next-generation healthcare applications, services and systems, that are related to big healthcare data analytics.",
"title": ""
},
{
"docid": "b37fb73811110ec7a095e98df66f0ee0",
"text": "This paper looks into recent developments and research trends in collision avoidance/warning systems and automation of vehicle longitudinal/lateral control tasks. It is an attempt to provide a bigger picture of the very diverse, detailed and highly multidisciplinary research in this area. Based on diversely selected research, this paper explains the initiatives for automation in different levels of transportation system with a specific emphasis on the vehicle-level automation. Human factor studies and legal issues are analyzed as well as control algorithms. Drivers’ comfort and well being, increased safety, and increased highway capacity are among the most important initiatives counted for automation. However, sometimes these are contradictory requirements. Relying on an analytical survey of the published research, we will try to provide a more clear understanding of the impact of automation/warning systems on each of the above-mentioned factors. The discussion of sensory issues requires a dedicated paper due to its broad range and is not addressed in this paper.",
"title": ""
},
{
"docid": "3665a82c20eb55c8afd2c7f35b68f49f",
"text": "The formulation and delivery of biopharmaceutical drugs, such as monoclonal antibodies and recombinant proteins, poses substantial challenges owing to their large size and susceptibility to degradation. In this Review we highlight recent advances in formulation and delivery strategies — such as the use of microsphere-based controlled-release technologies, protein modification methods that make use of polyethylene glycol and other polymers, and genetic manipulation of biopharmaceutical drugs — and discuss their advantages and limitations. We also highlight current and emerging delivery routes that provide an alternative to injection, including transdermal, oral and pulmonary delivery routes. In addition, the potential of targeted and intracellular protein delivery is discussed.",
"title": ""
},
{
"docid": "3184a80254b1fe01d5b102d659fc76b0",
"text": "Competitive computer gaming or eSports is a phenomenon that has become a fundamental element in today’s digital youth culture. So far very little effort has been made to study eSports in particular with respect to its potentials to positively influence research developments in other areas. This paper therefore tries to lay a foundation for a proper academic treatment of eSports. It presents a short overview on the history of eSports, provides a definition that is suitable for academic studies on eSports related issues and discusses first approaches to this topic that might lead to results that are applicable to problems in seemingly unrelated fields such as strategic decision making or management training.",
"title": ""
},
{
"docid": "adba3380818a72270aea9452d2b77af2",
"text": "Web-based programming exercises are a useful way for students to practice and master essential concepts and techniques presented in introductory programming courses. Although these systems are used fairly widely, we have a limited understanding of how students use these systems, and what can be learned from the data collected by these systems.\n In this paper, we perform a preliminary exploratory analysis of data collected by the CloudCoder programming exercise system from five introductory courses taught in two programming languages across three colleges and universities. We explore a number of interesting correlations in the data that confirm existing hypotheses. Finally, and perhaps most importantly, we demonstrate the effectiveness and future potential of systems like CloudCoder to help us study novice programmers.",
"title": ""
},
{
"docid": "c3b099d2499346314657257ec35e8d78",
"text": "In the fuzzy clustering literature, two main types of membership are usually considered: A relative type, termed probabilistic, and an absolute or possibilistic type, indicating the strength of the attribution to any cluster independent from the rest. There are works addressing the unification of the two schemes. Here, we focus on providing a model for the transition from one schema to the other, to exploit the dual information given by the two schemes, and to add flexibility for the interpretation of results. We apply an uncertainty model based on interval values to memberships in the clustering framework, obtaining a framework that we term graded possibility. We outline a basic example of graded possibilistic clustering algorithm and add some practical remarks about its implementation. The experimental demonstrations presented highlight the different properties attainable through appropriate implementation of a suitable graded possibilistic model. An interesting application is found in automated segmentation of diagnostic medical images, where the model provides an interactive visualization tool for this task",
"title": ""
},
{
"docid": "19d2e8cfa7787a139ca8117a0522b044",
"text": "We give here a comprehensive treatment of the mathematical theory of per-turbative renormalization (in the minimal subtraction scheme with dimensional regularization), in the framework of the Riemann–Hilbert correspondence and motivic Galois theory. We give a detailed overview of the work of Connes– Kreimer [31], [32]. We also cover some background material on affine group schemes, Tannakian categories, the Riemann–Hilbert problem in the regular singular and irregular case, and a brief introduction to motives and motivic Ga-lois theory. We then give a complete account of our results on renormalization and motivic Galois theory announced in [35]. Our main goal is to show how the divergences of quantum field theory, which may at first appear as the undesired effect of a mathematically ill-formulated theory, in fact reveal the presence of a very rich deeper mathematical structure, which manifests itself through the action of a hidden \" cosmic Galois group \" 1 , which is of an arithmetic nature, related to motivic Galois theory. Historically, perturbative renormalization has always appeared as one of the most elaborate recipes created by modern physics, capable of producing numerical quantities of great physical relevance out of a priori meaningless mathematical expressions. In this respect, it is fascinating for mathematicians and physicists alike. The depth of its origin in quantum field theory and the precision with which it is confirmed by experiments undoubtedly make it into one of the jewels of modern theoretical physics. For a mathematician in quest of \" meaning \" rather than heavy formalism, the attempts to cast the perturbative renormalization technique in a conceptual framework were so far falling short of accounting for the main computational aspects, used for instance in QED. These have to do with the subtleties involved in the subtraction of infinities in the evaluation of Feynman graphs and do not fall under the range of \" asymptotically free theories \" for which constructive quantum field theory can provide a mathematically satisfactory formulation., where the conceptual meaning of the detailed computational devices used in perturbative renormalization is analysed. Their work shows that the recursive procedure used by physicists is in fact identical to a mathematical method of extraction of finite values known as the Birkhoff decomposition, applied to a loop γ(z) with values in a complex pro-unipotent Lie group G.",
"title": ""
}
] |
scidocsrr
|
74fb43790aba64959e60d231169b5bb4
|
Three-dimensional bipedal walking control using Divergent Component of Motion
|
[
{
"docid": "02322377d048f2469928a71290cf1566",
"text": "In order to interact with human environments, humanoid robots require safe and compliant control which can be achieved through force-controlled joints. In this paper, full body step recovery control for robots with force-controlled joints is achieved by adding model-based feed-forward controls. Push Recovery Model Predictive Control (PR-MPC) is presented as a method for generating full-body step recovery motions after a large disturbance. Results are presented from experiments on the Sarcos Primus humanoid robot that uses hydraulic actuators instrumented with force feedback control.",
"title": ""
}
] |
[
{
"docid": "7d3642cc1714951ccd9ec1928a340d81",
"text": "Electrical fuse (eFUSE) has become a popular choice to enable memory redundancy, chip identification and authentication, analog device trimming, and other applications. We will review the evolution and applications of electrical fuse solutions for 180 nm to 45 nm technologies at IBM, and provide some insight into future uses in 32 nm technology and beyond with the eFUSE as a building block for the autonomic chip of the future.",
"title": ""
},
{
"docid": "680923cf5cd1801c8fff7935ce30a5d4",
"text": "Recommender systems have to deal with the cold start problem as new users and/or items are always present. Rating elicitation is a common approach for handling cold start. However, there still lacks a principled model for guiding how to select the most useful ratings. In this paper, we propose a principled approach to identify representative users and items using representative-based matrix factorization. Not only do we show that the selected representatives are superior to other competing methods in terms of achieving good balance between coverage and diversity, but we also demonstrate that ratings on the selected representatives are much more useful for making recommendations (about 10% better than competing methods). In addition to illustrating how representatives help solve the cold start problem, we also argue that the problem of finding representatives itself is an important problem that would deserve further investigations, for both its practical values and technical challenges.",
"title": ""
},
{
"docid": "b94d33cc0366703b48d75ad844422c85",
"text": "We propose a dataflow architecture, called HyperFlow, that offers a supporting infrastructure that creates an abstraction layer over computation resources and naturally exposes heterogeneous computation to dataflow processing. In order to show the efficiency of our system as well as testing it, we have included a set of synthetic and real-case applications. First, we designed a general suite of micro-benchmarks that captures main parallel pipeline structures and allows evaluation of HyperFlow under different stress conditions. Finally, we demonstrate the potential of our system with relevant applications in visualization. Implementations in HyperFlow are shown to have greater performance than actual hand-tuning codes, yet still providing high scalability on different platforms.",
"title": ""
},
{
"docid": "d84b91baa39a01ce84235793b089206d",
"text": "A comprehensive chip-package-system (CPS) electrostatic discharge (ESD) simulation methodology is developed for addressing IEC61000-4-2 testing conditions. An innovative chip ESD compact model is proposed, combined with full-wave models of the ESD gun, ESD protection devices, PCB wires/vias and connectors for CPS analysis. Two examples of CPS ESD application are illustrated demonstrating good correlation with measurement.",
"title": ""
},
{
"docid": "a56d8c2f4c4488c1c805fe803859cde2",
"text": "We study variational autoencoders for text data to build a generative model that can be used to conditionally generate text. We introduce a mutual information criterion to encourage the model to put semantic information into the latent representation, and compare its efficacy with other tricks explored in literature such as KL divergence cost annealing and word dropout. We compare the log-likelihood lowerbounds on held-out data using variational autoencoders with the log-likelihoods on an unconditional language model. We notice our models quickly learn to exploit the grammatical redundancies in our dataset, but it is more challenging to encode semantic information in the latent representation.",
"title": ""
},
{
"docid": "866abb0de36960fba889282d67ce9dbd",
"text": "We present our experience with the use of local fasciocutaneous V-Y advancement flaps in the reconstruction of 10 axillae in 6 patients for large defects following wide excision of long-standing Hidradenitis suppurativa of the axilla. The defects were closed with local V-Y subcutaneous island flaps. A single flap from the chest wall was sufficient for moderate defects. However, for larger defects, an additional flap was taken from the medial side of the ipsilateral arm. The donor defects could be closed primarily in all the patients. The local areas of the lateral chest wall and the medial side of the arm have a plentiful supply of cutaneous perforators and the flaps can be designed in a V-Y fashion without resorting to preoperative marking of the perforator. The flaps were freed sufficiently to allow adequate movement for closure of the defects. Although no attempt was made to identify the perforators specifically, many perforators were seen entering the flap. Some perforators can be safely divided to increase reach of the flap. All the flaps survived completely. A follow up of 2.5 years is presented.",
"title": ""
},
{
"docid": "70eed1677463969a4ed443988d8d7521",
"text": "Security, privacy, and fairness have become critical in the era of data science and machine learning. More and more we see that achieving universally secure, private, and fair systems is practically impossible. We have seen for example how generative adversarial networks can be used to learn about the expected private training data; how the exploitation of additional data can reveal private information in the original one; and how what looks like unrelated features can teach us about each other. Confronted with this challenge, in this paper we open a new line of research, where the security, privacy, and fairness is learned and used in a closed environment. The goal is to ensure that a given entity (e.g., the company or the government), trusted to infer certain information with our data, is blocked from inferring protected information from it. For example, a hospital might be allowed to produce diagnosis on the patient (the positive task), without being able to infer the gender of the subject (negative task). Similarly, a company can guarantee that internally it is not using the provided data for any undesired task, an important goal that is not contradicting the virtually impossible challenge of blocking everybody from the undesired task. We design a system that learns to succeed on the positive task while simultaneously fail at the negative one, and illustrate this with challenging cases where the positive task is actually harder than the negative one being blocked. Fairness, to the information in the negative task, is often automatically obtained as a result of this proposed approach. The particular framework and examples open the door to security, privacy, and fairness in very important closed scenarios, ranging from private data accumulation companies like social networks to law-enforcement and hospitals. J. Sokolić and M. R. D. Rodrigues are with the Department of Electronic and Electrical Engineering, Univeristy College London, London, UK (e-mail: {jure.sokolic.13, m.rodrigues}@ucl.ac.uk). Q. Qiu and G. Sapiro are with the Department of Electrical and Computer Engineering, Duke University, NC, USA (e-mail: {qiang.qiu, guillermo.sapiro}@duke.edu). The work of Guillermo Sapiro was partially supported by NSF, ONR, ARO, NGA. May 24, 2017 DRAFT ar X iv :1 70 5. 08 19 7v 1 [ st at .M L ] 2 3 M ay 2 01 7",
"title": ""
},
{
"docid": "73e4fed83bf8b1f473768ce15d6a6a86",
"text": "Improving science, technology, engineering, and mathematics (STEM) education, especially for traditionally disadvantaged groups, is widely recognized as pivotal to the U.S.'s long-term economic growth and security. In this article, we review and discuss current research on STEM education in the U.S., drawing on recent research in sociology and related fields. The reviewed literature shows that different social factors affect the two major components of STEM education attainment: (1) attainment of education in general, and (2) attainment of STEM education relative to non-STEM education conditional on educational attainment. Cognitive and social psychological characteristics matter for both major components, as do structural influences at the neighborhood, school, and broader cultural levels. However, while commonly used measures of socioeconomic status (SES) predict the attainment of general education, social psychological factors are more important influences on participation and achievement in STEM versus non-STEM education. Domestically, disparities by family SES, race, and gender persist in STEM education. Internationally, American students lag behind those in some countries with less economic resources. Explanations for group disparities within the U.S. and the mediocre international ranking of US student performance require more research, a task that is best accomplished through interdisciplinary approaches.",
"title": ""
},
{
"docid": "109e5a5d42a1aa105eb1ff1d694a3c4a",
"text": "Using fiducial markers ensures reliable detection and identification of planar features in images. Fiducials are used in a wide range of applications, especially when a reliable visual reference is needed, e.g., to track the camera in cluttered or textureless environments. A marker designed for such applications must be robust to partial occlusions, varying distances and angles of view, and fast camera motions. In this paper, we present a robust, highly accurate fiducial system, whose markers consist of concentric rings, along with its theoretical foundations. Relying on projective properties, it allows to robustly localize the imaged marker and to accurately detect the position of the image of the (common) circle center. We demonstrate that our system can detect and accurately localize these circular fiducials under very challenging conditions and the experimental results reveal that it outperforms other recent fiducial systems.",
"title": ""
},
{
"docid": "a3d7a0a672e9090072d0e3e7834844a2",
"text": "Hyper spectral remote sensors collect image data for a large number of narrow, adjacent spectral bands. Every pixel in hyperspectral image involves a continuous spectrum that is used to classify the objects with great detail and precision. This paper presents hyperspectral image classification mechanism using genetic algorithm with empirical mode decomposition and image fusion used in preprocessing stage. 2-D Empirical mode decomposition method is used to remove any noisy components in each band of the hyperspectral data. After filtering, image fusion is performed on the hyperspectral bands to selectively merge the maximum possible features from the source images to form a single image. This fused image is classified using genetic algorithm. Different indices, such as K-means (KMI), Davies-Bouldin Index (DBI), and Xie-Beni Index (XBI) are used as objective functions. This method increases classification accuracy of hyperspectral image.",
"title": ""
},
{
"docid": "5428ed5b458b8bae73d58c8069ad3cfd",
"text": "Software-Defined Radio (SDR) is a technique using software to make the radio functions hardware independent. SDR is starting to be the basis of advanced wireless communication systems such as Joint Tactical Radio System (JTRS). More interestingly, the adoption of SDR technology by JTRS program is followed by military satellite communications programs. In the development of the SDR implementation, GNU Radio emerged as an open source tool that provides functions to support SDR. Later, Universal Software Radio Peripheral (USRP) was developed as a low cost, high-speed SDR platform. USRP in conjunction with GNU Radio is a very powerful tool to develop SDR based wireless communication system. This paper discusses the employment of GNU Radio and USRP for developing software based wireless transmission system. Furthermore, retransmission scheme, buffering and Leaky Bucket Algorithm are implemented to solve the transmission error and environment interference problems found during the implementation.",
"title": ""
},
{
"docid": "2f0da2f7461043476d5ba82ae9cf77bf",
"text": "Recently two emerging areas of research, attosecond and nanoscale physics, have started to come together. Attosecond physics deals with phenomena occurring when ultrashort laser pulses, with duration on the femto- and sub-femtosecond time scales, interact with atoms, molecules or solids. The laser-induced electron dynamics occurs natively on a timescale down to a few hundred or even tens of attoseconds (1 attosecond = 1 as = 10-18 s), which is comparable with the optical field. For comparison, the revolution of an electron on a 1s orbital of a hydrogen atom is ∼152 as. On the other hand, the second branch involves the manipulation and engineering of mesoscopic systems, such as solids, metals and dielectrics, with nanometric precision. Although nano-engineering is a vast and well-established research field on its own, the merger with intense laser physics is relatively recent. In this report on progress we present a comprehensive experimental and theoretical overview of physics that takes place when short and intense laser pulses interact with nanosystems, such as metallic and dielectric nanostructures. In particular we elucidate how the spatially inhomogeneous laser induced fields at a nanometer scale modify the laser-driven electron dynamics. Consequently, this has important impact on pivotal processes such as above-threshold ionization and high-order harmonic generation. The deep understanding of the coupled dynamics between these spatially inhomogeneous fields and matter configures a promising way to new avenues of research and applications. Thanks to the maturity that attosecond physics has reached, together with the tremendous advance in material engineering and manipulation techniques, the age of atto-nanophysics has begun, but it is in the initial stage. We present thus some of the open questions, challenges and prospects for experimental confirmation of theoretical predictions, as well as experiments aimed at characterizing the induced fields and the unique electron dynamics initiated by them with high temporal and spatial resolution.",
"title": ""
},
{
"docid": "f0217e1579461afbfea5eccb2b3a4567",
"text": "There is an industry-driven public obsession with antioxidants, which are equated to safe, health-giving molecules to be swallowed as mega-dose supplements or in fortified foods. Sometimes they are good for you, but sometimes they may not be, and pro-oxidants can be better for you in some circumstances. This article re-examines and challenges some basic assumptions in the nutritional antioxidant field.",
"title": ""
},
{
"docid": "a0a01e96e9fe31a797c55b94e9a12cea",
"text": "This thesis broadens the space of rich yet practical models for structured prediction. We introduce a general framework for modeling with four ingredients: (1) latent variables, (2) structural constraints, (3) learned (neural) feature representations of the inputs, and (4) training that takes the approximations made during inference into account. The thesis builds up to this framework through an empirical study of three NLP tasks: semantic role labeling, relation extraction, and dependency parsing—obtaining state-of-the-art results on the former two. We apply the resulting graphical models with structured and neural factors, and approximation-aware learning to jointly model part-of-speech tags, a syntactic dependency parse, and semantic roles in a low-resource setting where the syntax is unobserved. We present an alternative view of these models as neural networks with a topology inspired by inference on graphical models that encode our intuitions about the data.",
"title": ""
},
{
"docid": "2cc86d07183f5197febbd527852a527a",
"text": "In this paper, I review two studies (Roschelle, 1996; Baker, Hansen, Joiner, & Traum, 1999) which I believe to represent paradigmatic examples of CSCL research. I offer a critique of these studies based on the theory of inquiry developed by the American pragmatist philosopher John Dewey. Inquiry, for Dewey, represented an exceedingly broad category of activity of which joint problem solving is a special case. I conclude by proposing a description of what I think research in CSCL is, or at least should be, about. This description can be used to distinguish what is done in this field from traditional research in education on learning outcomes, research based on classical information processing theory, and conventional research on social interaction.",
"title": ""
},
{
"docid": "7cebca46f584b2f31fd9d2c8ef004f17",
"text": "Wirelessly networked systems of intra-body sensors and actuators could enable revolutionary applications at the intersection between biomedical science, networking, and control with a strong potential to advance medical treatment of major diseases of our times. Yet, most research to date has focused on communications along the body surface among devices interconnected through traditional electromagnetic radio-frequency (RF) carrier waves; while the underlying root challenge of enabling networked intra-body miniaturized sensors and actuators that communicate through body tissues is substantially unaddressed. The main obstacle to enabling this vision of networked implantable devices is posed by the physical nature of propagation in the human body. The human body is composed primarily (65 percent) of water, a medium through which RF electromagnetic waves do not easily propagate, even at relatively low frequencies. Therefore, in this article we take a different perspective and propose to investigate and study the use of ultrasonic waves to wirelessly internetwork intra-body devices. We discuss the fundamentals of ultrasonic propagation in tissues, and explore important tradeoffs, including the choice of a transmission frequency, transmission power, and transducer size. Then, we discuss future research challenges for ultrasonic networking of intra-body devices at the physical, medium access and network layers of the protocol stack.",
"title": ""
},
{
"docid": "2f7edc539bc61f8fc07bc6f5f8e496e0",
"text": "We investigate the contextual multi-armed bandit problem in an adversarial setting and introduce an online algorithm that asymptotically achieves the performance of the best contextual bandit arm selection strategy under certain conditions. We show that our algorithm is highly efficient and provides significantly improved performance with a guaranteed performance upper bound in a strong mathematical sense. We have no statistical assumptions on the context vectors and the loss of the bandit arms, hence our results are guaranteed to hold even in adversarial environments. We use a tree notion in order to partition the space of context vectors in a nested structure. Using this tree, we construct a large class of context dependent bandit arm selection strategies and adaptively combine them to achieve the performance of the best strategy. We use the hierarchical nature of introduced tree to implement this combination with a significantly low computational complexity, thus our algorithm can be efficiently used in applications involving big data. Through extensive set of experiments involving synthetic and real data, we demonstrate significant performance gains achieved by the proposed algorithm with respect to the state-of-the-art adversarial bandit algorithms.",
"title": ""
},
{
"docid": "8fc91742d6b3a4de76182b32fd89c440",
"text": "This paper is to enable the Delta PLC (Programmable Logic Control) DVP14SS to communicate with the Visual Basic 6.0. The communication between DVP14SS and Visual Basic 6.0 is via Modbus Serial Protocol. Computers are used as a link between humans and PLC systems as they have more graphics and visual capabilities. These are nothing but SCADA systems widely used for determining plant setups and displaying plant status on high quality screens. They also record/log the system data for long period .The SCADA software's are the software packages needs to be purchased from vendors and the cost depends on tag count. Visual Basic 6.0 platform can be used develop the SCADA application effectively. Using VB 6.0 we integrate software and hardware across spectrum of vendors easily. Here we show simple approach to communicate Delta PLC with visual Basic using MSComm control in visual basic. By means of Visual Basic cost effective solution is possible as Visual Basic we do not need to purchase licenses and is cheaper than SCADA packages. It also has the advantages like flexibility. Keywords—MSComm control and PLC, plc, communication between PLC and VB 6.0, DVP14SS.",
"title": ""
},
{
"docid": "86ded82f634443fae2353c7c715e2f5f",
"text": "Articles R emote sensing has facilitated extraordinary advances in the modeling, mapping, and understanding of ecosystems. Typical applications of remote sensing involve either images from passive optical systems, such as aerial photography and Landsat Thematic Mapper (Goward and Williams 1997), or to a lesser degree, active radar sensors such as RADARSAT (Waring et al. 1995). These types of sensors have proven to be satisfactory for many ecological applications , such as mapping land cover into broad classes and, in some biomes, estimating aboveground biomass and leaf area index (LAI). Moreover, they enable researchers to analyze the spatial pattern of these images. However, conventional sensors have significant limitations for ecological applications. The sensitivity and accuracy of these devices have repeatedly been shown to fall with increasing aboveground biomass and leaf area index (Waring et al. 1995, Carlson and Ripley 1997, Turner et al. 1999). They are also limited in their ability to represent spatial patterns: They produce only two-dimensional (x and y) images, which cannot fully represent the three-dimensional structure of, for instance, an old-growth forest canopy.Yet ecologists have long understood that the presence of specific organisms, and the overall richness of wildlife communities, can be highly dependent on the three-dimensional spatial pattern of vegetation (MacArthur and MacArthur 1961), especially in systems where biomass accumulation is significant (Hansen and Rotella 2000). Individual bird species, in particular, are often associated with specific three-dimensional features in forests (Carey et al. 1991). In addition, other functional aspects of forests, such as productivity, may be related to forest canopy structure. Laser altimetry, or lidar (light detection and ranging), is an alternative remote sensing technology that promises to both increase the accuracy of biophysical measurements and extend spatial analysis into the third (z) dimension. Lidar sensors directly measure the three-dimensional distribution of plant canopies as well as subcanopy topography, thus providing high-resolution topographic maps and highly accurate estimates of vegetation height, cover, and canopy structure. In addition , lidar has been shown to accurately estimate LAI and aboveground biomass even in those high-biomass ecosystems where passive optical and active radar sensors typically fail to do so. The basic measurement made by a lidar device is the distance between the sensor and a target surface, obtained by determining the elapsed time between the emission of a short-duration laser pulse and the arrival of the reflection of that pulse (the return signal) at the sensor's receiver. Multiplying this …",
"title": ""
},
{
"docid": "dad7dbbb31f0d9d6268bfdc8303d1c9c",
"text": "This letter proposes a reconfigurable microstrip patch antenna with polarization states being switched among linear polarization (LP), left-hand (LH) and right-hand (RH) circular polarizations (CP). The CP waves are excited by two perturbation elements of loop slots in the ground plane. A p-i-n diode is placed on every slot to alter the current direction, which determines the polarization state. The influences of the slots and p-i-n diodes on antenna performance are minimized because the slots and diodes are not on the patch. The simulated and measured results verified the effectiveness of the proposed antenna configuration. The experimental bandwidths of the -10-dB reflection coefficient for LHCP and RHCP are about 60 MHz, while for LP is about 30 MHz. The bandwidths of the 3-dB axial ratio for both CP states are 20 MHz with best value of 0.5 dB at the center frequency on the broadside direction. Gains for two CP operations are 6.4 dB, and that for the LP one is 5.83 dB. This reconfigurable patch antenna with agile polarization has good performance and concise structure, which can be used for 2.4 GHz wireless communication systems.",
"title": ""
}
] |
scidocsrr
|
beb84755c6a4f867b512785a5c81bd35
|
Extended list of stop words: Does it work for keyphrase extraction from short texts?
|
[
{
"docid": "8f916f7be3048ae2a367096f4f82207d",
"text": "Existing methods for single document keyphrase extraction usually make use of only the information contained in the specified document. This paper proposes to use a small number of nearest neighbor documents to provide more knowledge to improve single document keyphrase extraction. A specified document is expanded to a small document set by adding a few neighbor documents close to the document, and the graph-based ranking algorithm is then applied on the expanded document set to make use of both the local information in the specified document and the global information in the neighbor documents. Experimental results demonstrate the good effectiveness and robustness of our proposed approach.",
"title": ""
},
{
"docid": "fb2028ca0e836452862a2cb1fa707d28",
"text": "State-of-the-art approaches for unsupervised keyphrase extraction are typically evaluated on a single dataset with a single parameter setting. Consequently, it is unclear how effective these approaches are on a new dataset from a different domain, and how sensitive they are to changes in parameter settings. To gain a better understanding of state-of-the-art unsupervised keyphrase extraction algorithms, we conduct a systematic evaluation and analysis of these algorithms on a variety of standard evaluation datasets.",
"title": ""
}
] |
[
{
"docid": "cdeaf14d18c32ca534e8e76b9025db42",
"text": "A broadband dual-polarized base station antenna with sturdy construction is presented in this letter. The antenna mainly contains four parts: main radiator, feeding baluns, bedframe, and reflector. First, two orthogonal dipoles are etched on a substrate as main radiator forming dual polarization. Two baluns are then introduced to excite the printed dipoles. Each balun has four bumps on the edges for electrical connection and fixation. The bedframe is designed to facilitate the installation, and the reflector is finally used to gain unidirectional radiation. Measured results show that the antenna has a 48% impedance bandwidth with reflection coefficient less than –15 dB and port isolation more than 22 dB. A four-element antenna array with 6° ± 2° electrical down tilt is also investigated for wideband base station application. The antenna and its array have the advantages of sturdy construction, high machining accuracy, ease of integration, and low cost. They can be used for broadband base station in the next-generation wireless communication system.",
"title": ""
},
{
"docid": "f74dd570fd04512dc82aac9d62930992",
"text": "A compact microstrip-line ultra-wideband (UWB) bandpass filter (BPF) using the proposed stub-loaded multiple-mode resonator (MMR) is presented. This MMR is formed by loading three open-ended stubs in shunt to a simple stepped-impedance resonator in center and two symmetrical locations, respectively. By properly adjusting the lengths of these stubs, the first four resonant modes of this MMR can be evenly allocated within the 3.1-to-10.6 GHz UWB band while the fifth resonant frequency is raised above 15.0GHz. It results in the formulation of a novel UWB BPF with compact-size and widened upper-stopband by incorporating this MMR with two interdigital parallel-coupled feed lines. Simulated and measured results are found in good agreement with each other, showing improved UWB bandpass behaviors with the insertion loss lower than 0.8dB, return loss higher than 14.3dB, and maximum group delay variation less than 0.64ns in the realized UWB passband",
"title": ""
},
{
"docid": "c5c14f0e6008db7d4f5cd31546b7f414",
"text": "Source code examples are used by developers to implement unfamiliar tasks by learning from existing solutions. To better support developers in finding existing solutions, code search engines are designed to locate and rank code examples relevant to user’s queries. Essentially, a code search engine provides a ranking schema, which combines a set of ranking features to calculate the relevance between a query and candidate code examples. Consequently, the ranking schema places relevant code examples at the top of the result list. However, it is difficult to determine the configurations of the ranking schemas subjectively. In this paper, we propose a code example search approach that applies a machine learning technique to automatically train a ranking schema. We use the trained ranking schema to rank candidate code examples for new queries at run-time. We evaluate the ranking performance of our approach using a corpus of over 360,000 code snippets crawled from 586 open-source Android projects. The performance evaluation study shows that the learning-to-rank approach can effectively rank code examples, and outperform the existing ranking schemas by about 35.65 % and 48.42 % in terms of normalized discounted cumulative gain (NDCG) and expected reciprocal rank (ERR) measures respectively.",
"title": ""
},
{
"docid": "d2694577861e75535e59e316bd6a9015",
"text": "Despite being a new term, ‘fake news’ has evolved rapidly. This paper argues that it should be reserved for cases of deliberate presentation of (typically) false or misleading claims as news, where these are misleading by design. The phrase ‘by design’ here refers to systemic features of the design of the sources and channels by which fake news propagates and, thereby, manipulates the audience’s cognitive processes. This prospective definition is then tested: first, by contrasting fake news with other forms of public disinformation; second, by considering whether it helps pinpoint conditions for the (recent) proliferation of fake news. Résumé: En dépit de son utilisation récente, l’expression «fausses nouvelles» a évolué rapidement. Cet article soutient qu'elle devrait être réservée aux présentations intentionnelles d’allégations (typiquement) fausses ou trompeuses comme si elles étaient des nouvelles véridiques et où elles sont faussées à dessein. L'expression «à dessein» fait ici référence à des caractéristiques systémiques de la conception des sources et des canaux par lesquels les fausses nouvelles se propagent et par conséquent, manipulent les processus cognitifs du public. Cette définition prospective est ensuite mise à l’épreuve: d'abord, en opposant les fausses nouvelles à d'autres formes de désinformation publique; deuxièmement, en examinant si elle aide à cerner les conditions de la prolifération (récente) de fausses nou-",
"title": ""
},
{
"docid": "08c5b829ff5f65baf3ec0b9376686c5a",
"text": "With the continuing growth of E-commerce, credit card fraud has evolved exponentially, where people are using more on-line services to conduct their daily transactions. Fraudsters masquerade normal behaviour of customers to achieve unlawful gains. Fraud patterns are changing rapidly where fraud detection needs to be re-evaluated from a reactive to a proactive approach. In recent years Deep Learning has gained lot of popularity in image recognition, speech recognition and natural language processing. This paper seeks to understand how Deep Learning can be helpful in finding fraud in credit card transactions and compare Deep Learning against several state of the art algorithms (RF, GBM, GLM) and sampling methods (Over, Under, Hybrid, SMOTE and ROSE) used in fraud detection. The results show that Deep Learning performed best with the highest Recall (accuracy of identifying fraudulent transactions), which means lowest financial losses to the company. However, Deep Learning achieved the lowest Precision rate (classified more legitimate transactions as fraudulent), which can cause customer dissatisfaction. Among other chosen classifiers, oversampling method performed best in terms of AUC, precision was highest for GLM and F-Score was highest for model trained using ROSE sampling method. Recall and Precision both have high cost, so there cannot be any trade of one against the other. Selecting the best classifier to identify fraud is based on the business goal. Keywords—Imbalanced class, Data mining, Sampling methods, H2O, Gradient Boosting, Random Forest, Generalized Linear Models, Deep Learning, Grid search, Hyper parameter optimization, Ensemble methods.",
"title": ""
},
{
"docid": "7e7651261be84e2e05cde0ac9df69e6d",
"text": "Searching a large database to find a sequence that is most similar to a query can be prohibitively expensive, particularly if individual sequence comparisons involve complex operations such as warping. To achieve scalability, \"pruning\" heuristics are typically employed to minimize the portion of the database that must be searched with more complex matching. We present an approximate pruning technique which involves embedding sequences in a Euclidean space. Sequences are embedded using a convolutional network with a form of attention that integrates over time, trained on matching and non-matching pairs of sequences. By using fixed-length embeddings, our pruning method effectively runs in constant time, making it many orders of magnitude faster than full dynamic time warping-based matching for large datasets. We demonstrate our approach on a large-scale musical score-to-audio recording retrieval task.",
"title": ""
},
{
"docid": "1ca822578f9e23f7715f7c2d763e984f",
"text": "Instance Search (INS) is a fundamental problem for many applications, while it is more challenging comparing to traditional image search since the relevancy is defined at the instance level. Existing works have demonstrated the success of many complex ensemble systems that are typically conducted by firstly generating object proposals, and then extracting handcrafted and/or CNN features of each proposal for matching. However, object bounding box proposals and feature extraction are often conducted in two separated steps, thus the effectiveness of these methods collapses. Also, due to the large amount of generated proposals, matching speed becomes the bottleneck that limits its application to large-scale datasets. To tackle these issues, in this paper we propose an effective and efficient Deep Region Hashing (DRH) approach for large-scale INS using an image patch as the query. Specifically, DRH is an end-toend deep neural network which consists of object proposal, feature extraction, and hash code generation. DRH shares full-image convolutional feature map with the region proposal network, thus enabling nearly cost-free region proposals. Also, each high-dimensional, real-valued region features are mapped onto a low-dimensional, compact binary codes for the efficient object region level matching on large-scale dataset. Experimental results on four datasets show that our DRH can achieve even better performance than the state-of-the-arts in terms of MAP, while the efficiency is improved by nearly 100 times.",
"title": ""
},
{
"docid": "73be48e8d9d50c04e6b3652953bc47de",
"text": "Student video-watching behavior and quiz performance are studied in two Massive Open Online Courses (MOOCs). In doing so, two frameworks are presented by which video-watching clickstreams can be represented: one based on the sequence of events created, and another on the sequence of positions visited. With the event-based framework, recurring subsequences of student behavior are extracted, which contain fundamental characteristics such as reflecting (i.e., repeatedly playing and pausing) and revising (i.e., plays and skip backs). It is found that some of these behaviors are significantly correlated with changes in the likelihood that a student will be Correct on First Attempt (CFA) or not in answering quiz questions, and in ways that are not necessarily intuitive. Then, with the position-based framework, models of quiz performance are devised based on positions visited in a video. In evaluating these models through CFA prediction, it is found that three of them can substantially improve prediction quality, which underlines the ability to relate this type of behavior to quiz scores. Since this prediction considers videos individually, these benefits also suggest that these models are useful in situations where there is limited training data, e.g., for early detection or in short courses.",
"title": ""
},
{
"docid": "90bb7ab528877c922758b44b102bf4e8",
"text": "Labeling training data is increasingly the largest bottleneck in deploying machine learning systems. We present Snorkel, a first-of-its-kind system that enables users to train state-of- the-art models without hand labeling any training data. Instead, users write labeling functions that express arbitrary heuristics, which can have unknown accuracies and correlations. Snorkel denoises their outputs without access to ground truth by incorporating the first end-to-end implementation of our recently proposed machine learning paradigm, data programming. We present a flexible interface layer for writing labeling functions based on our experience over the past year collaborating with companies, agencies, and research labs. In a user study, subject matter experts build models 2.8× faster and increase predictive performance an average 45.5% versus seven hours of hand labeling. We study the modeling tradeoffs in this new setting and propose an optimizer for automating tradeoff decisions that gives up to 1.8× speedup per pipeline execution. In two collaborations, with the U.S. Department of Veterans Affairs and the U.S. Food and Drug Administration, and on four open-source text and image data sets representative of other deployments, Snorkel provides 132% average improvements to predictive performance over prior heuristic approaches and comes within an average 3.60% of the predictive performance of large hand-curated training sets.",
"title": ""
},
{
"docid": "fddadfbc6c1b34a8ac14f8973f052da5",
"text": "Abstract. Centroidal Voronoi tessellations are useful for subdividing a region in Euclidean space into Voronoi regions whose generators are also the centers of mass, with respect to a prescribed density function, of the regions. Their extensions to general spaces and sets are also available; for example, tessellations of surfaces in a Euclidean space may be considered. In this paper, a precise definition of such constrained centroidal Voronoi tessellations (CCVTs) is given and a number of their properties are derived, including their characterization as minimizers of an “energy.” Deterministic and probabilistic algorithms for the construction of CCVTs are presented and some analytical results for one of the algorithms are given. Computational examples are provided which serve to illustrate the high quality of CCVT point sets. Finally, CCVT point sets are applied to polynomial interpolation and numerical integration on the sphere.",
"title": ""
},
{
"docid": "d65dc7b231f52980b07f9166d8371198",
"text": "Lip augmentation has become increasingly popular in recent years as a reflection of cultural trends emphasizing youth and beauty. Techniques to enhance the appearance of the lips have evolved with advances in biotechnology. An understanding of lip anatomy and aesthetics forms the basis for successful results. We outline the pertinent anatomy and aesthetics of the preoperative evaluation. A summary of various filler materials available is provided. Augmentation options include both injectable and open surgical techniques. The procedures and materials currently favored by the authors are described in greater detail.",
"title": ""
},
{
"docid": "915ad4f43eef7db8fb24080f8389b424",
"text": "This paper details the design and architecture of a series elastic actuated snake robot, the SEA Snake. The robot consists of a series chain of 1-DOF modules that are capable of torque, velocity and position control. Additionally, each module includes a high-speed Ethernet communications bus, internal IMU, modular electro-mechanical interface, and ARM based on-board control electronics.",
"title": ""
},
{
"docid": "cbed0b87ebae159115277322b21299ca",
"text": "The present work describes a classification schema for irony detection in Greek political tweets. Our hypothesis states that humorous political tweets could predict actual election results. The irony detection concept is based on subjective perceptions, so only relying on human-annotator driven labor might not be the best route. The proposed approach relies on limited labeled training data, thus a semi-supervised approach is followed, where collective-learning algorithms take both labeled and unlabeled data into consideration. We compare the semi-supervised results with the supervised ones from a previous research of ours. The hypothesis is evaluated via a correlation study between the irony that a party receives on Twitter, its respective actual election results during the Greek parliamentary elections of May 2012, and the difference between these results and the ones of the preceding elections of 2009. & 2016 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "91cb34751e9126eac5ad068d8297b6cd",
"text": "Text, as one of the most influential inventions of humanity, has played an important role in human life, so far from ancient times. The rich and precise information embodied in text is very useful in a wide range of vision-based applications, therefore text detection and recognition in natural scenes have become important and active research topics in computer vision and document analysis. Especially in recent years, the community has seen a surge of research efforts and substantial progresses in these fields, though a variety of challenges (e.g. noise, blur, distortion, occlusion and variation) still remain. The purposes of this survey are three-fold: 1) introduce up-to-date works, 2) identify state-of-the-art algorithms, and 3) predict potential research directions in the future. Moreover, this paper provides comprehensive links to publicly available resources, including benchmark datasets, source codes, and online demos. In summary, this literature review can serve as a good reference for researchers in the areas of scene text detection and recognition.",
"title": ""
},
{
"docid": "d4cd46d9c8f0c225d4fe7e34b308e8f1",
"text": "In this paper, a 10 kW current-fed DC-DC converter using resonant push-pull topology is demonstrated and analyzed. The grounds for component dimensioning are given and the advantages and disadvantages of the resonant push-pull topology are discussed. The converter characteristics and efficiencies are demonstrated by calculations and prototype measurements.",
"title": ""
},
{
"docid": "2cc1afe86873bb7d83e919d25fbd5954",
"text": "Cellular Automata (CA) have attracted growing attention in urban simulation because their capability in spatial modelling is not fully developed in GIS. This paper discusses how cellular automata (CA) can be extended and integrated with GIS to help planners to search for better urban forms for sustainable development. The cellular automata model is built within a grid-GIS system to facilitate easy access to GIS databases for constructing the constraints. The essence of the model is that constraint space is used to regulate cellular space. Local, regional and global constraints play important roles in a ecting modelling results. In addition, ‘grey’ cells are de ned to represent the degrees or percentages of urban land development during the iterations of modelling for more accurate results. The model can be easily controlled by the parameter k using a power transformation function for calculating the constraint scores. It can be used as a useful planning tool to test the e ects of di erent urban development scenarios. 1. Cellular automata and GIS for urban simulation Cellular automata (CA) were developed by Ulam in the 1940s and soon used by Von Neumann to investigate the logical nature of self-reproducible systems (White and Engelen 1993). A CA system usually consists of four elements—cells, states, neighbourhoods and rules. Cells are the smallest units which must manifest some adjacency or proximity. The state of a cell can change according to transition rules which are de ned in terms of neighbourhood functions. The notion of neighbourhood is central to the CA paradigm (Couclelis 1997), but the de nition of neighbourhood is rather relaxed. CA are cell-based methods that can model two-dimensional space. Because of this underlying feature, it does not take long for geographers to apply CA to simulate land use change, urban development and other changes of geographical phenomena. CA have become especially, useful as a tool for modelling urban spatial dynamics and encouraging results have been documented (Deadman et al. 1993, Batty and Xie 1994a, Batty and Xie 1997, White and Engelen 1997). The advantages are that the future trajectory of urban morphology can be shown virtually during the simulation processes. The rapid development of GIS helps to foster the application of CA in urban Internationa l Journal of Geographica l Information Science ISSN 1365-8816 print/ISSN 1362-3087 online © 2000 Taylor & Francis Ltd http://www.tandf.co.uk/journals/tf/13658816.html X. L i and A. G. Yeh 132 simulation. Some researches indicate that cell-based GIS may indeed serve as a useful tool for implementing cellular automata models for the purposes of geographical analysis (Itami 1994). Although current GIS are not designed for fast iterative computation, cellular automata can still be used by creating batch ® les that contain iterative command sequences. While linking cellular automata to GIS can overcome some of the limitations of current GIS (White and Engelen 1997), CA can bene® t from the useful information provided by GIS in de® ning transition rules. The data realism requirement of CA can be best satis® ed with the aid of GIS (Couclelis 1997). Space no longer needs to be uniform since the spatial di erence equations can be easily developed in the context of GIS (Batty and Xie 1994b). Most current GIS techniques have limitations in modelling changes in the landscape over time, but the integration of CA and GIS has demonstrated considerable potential (Itami 1988, Deadman et al. 1993). 
The limitations of contemporary GIS include its poor ability to handle dynamic spatial models, poor performance for many operations, and poor handling of the temporal dimension (Park and Wagner 1997 ). In coupling GIS with CA, CA can serves as an analytical engine to provide a ̄ exible framework for the programming and running of dynamic spatial models. 2. Constrained CA for the planning of sustainable urban development Interest in sustainable urban development has increased rapidly in recent years. Unfortunately, the concept of sustainable urban development is debatable because unique de® nitions and scopes do not exist (Haughton and Hunter 1994). However, this concept is very important to our society in dealing with its increasingly pressing resource and environmental problems. As more nations are implementing this concept in their development plans, it has created important impacts on national policies and urban planning. The concern over sustainable urban development will continue to grow, especially in the developing countries which are undergoing rapid urbanization. A useful way to clarify its ambiguity is to set up some working de® nitions. Some speci® c and narrow de® nitions do exist for special circumstances but there are no commonly accepted de® nitions. The working de® nitions can help to eliminate ambiguities and ® nd out solutions and better alternatives to existing development patterns. The conversion of agricultural land into urban land uses in the urbanization processes has become a serious issue for sustainable urban development in the developing countries. Take China as an example, it cannot a ord to lose a signi® cant amount of its valuable agricultural land because it has a huge growing population to feed. Unfortunately, in recent years, a large amount of such land have been unnecessarily lost and the forms of existing urban development cannot help to sustain its further development (Yeh and Li 1997, Yeh and Li 1998). The complete depletion of agricultural land resources would not be far away in some fast growing areas if such development trends continued. The main issue of sustainable urban development is to search for better urban forms that can help to sustain development, especially the minimization of unnecessary agricultural land loss. Four operational criteria for sustainable urban forms can be used: (1 ) not to convert too much agricultural land at the early stages of development; (2 ) to decide the amount of land consumption based on available land resources and population growth; (3 ) to guide urban development to sites which are less important for food production; and (4 ) to maintain compact development patterns. The objective of this research is to develop an operational CA model for Modelling sustainable urban development 133 sustainable urban development. A number of advantages have been identi® ed in the application of CA in urban simulation (Wolfram 1984, Itami 1988). Cellular automata are seen not only as a framework for dynamic spatial modelling but as a paradigm for thinking about complex spatial-temporal phenomena and an experimental laboratory for testing ideas (Itami 1994 ). Formally, standard cellular automata may be generalised as follows: St+1 = f (St, N ) (1 ) where S is a set of all possible states of the cellular automata, N is a neighbourhood of all cells providing input values for the function f, and f is a transition function that de® nes the change of the state from t to t+1. Standard cellular automata apply a b̀ottom-up’ approach. 
The approach argues that local rules can create complex patterns by running the models in iterations. It is central to the idea that cities should work from particular to general, and that they should seek to understand the small scale in order to understand the large (Batty and Xie 1994a). It is amazing to see that real urban systems can be modelled based on microscopic behaviour that may be the CA model’s most useful advantage . However, the t̀op-down’ critique nevertheless needs to be taken seriously. An example is that central governments have the power to control overall land development patterns and the amount of land consumption. With the implementations of sustainable elements into cellular automata, a new paradigm for thinking about urban planning emerges. It is possible to embed some constraints in the transition rules of cellular automata so that urban growth can be rationalised according to a set of pre-de® ned sustainable criteria. However, such experiments are very limited since many researchers just focus on the simulation of possible urban evolution and the understanding of growth mechanisms using CA techniques. The constrained cellular automata should be able to provide much better alternatives to actual development patterns. A good example is to produce a c̀ompact’ urban form using CA models. The need for sustainable cities is readily apparent in recent years. A particular issue is to seek the most suitable form for sustainable urban development. The growing spread of urban areas accelerating at an alarming rate in the last few decades re ̄ ects the dramatic pressure of human development on nature. The steady rise in urban areas and decline in agricultural land have led to the worsening of food production and other environmental problems. Urban development towards a compact form has been proposed as a means to alleviate the increasingly intensi® ed land use con ̄ icts. The morphology of a city is an important feature in the c̀ompact city theory’ (Jenks et al. 1996). Evidence indicates a strong link between urban form and sustainable development, although it is not simple and straightforward. Compact urban form can be a major means in guiding urban development to sustainability, especially in reducing the negative e ects of the present dispersed development in Western cities. However, one of the frequent problems in the compact city debate is the lack of proper tools to ensure successful implementation of the compact city because of its complexity (Burton et al. 1996). This study demonstrates that the constrained CA can be used to model compact cities and sustainable urban forms based on local, regional and global constraints. 3. Suitability and constraints for sustainable urban forms using CA In this constrained CA model, there are three important aspects of sustainable urban forms that need to be consideredÐ compact patterns, land q",
"title": ""
},
{
"docid": "8a33040d6464f7792b3eeee1e0760925",
"text": "We live in a data abundance era. Availability of large volume of diverse multimedia data streams (ranging from video, to tweets, to activity, and to PM2.5) can now be used to solve many critical societal problems. Causal modeling across multimedia data streams is essential to reap the potential of this data. However, effective frameworks combining formal abstract approaches with practical computational algorithms for causal inference from such data are needed to utilize available data from diverse sensors. We propose a causal modeling framework that builds on data-driven techniques while emphasizing and including the appropriate human knowledge in causal inference. We show that this formal framework can help in designing a causal model with a systematic approach that facilitates framing sharper scientific questions, incorporating expert's knowledge as causal assumptions, and evaluating the plausibility of these assumptions. We show the applicability of the framework in a an important Asthma management application using meteorological and pollution data streams.",
"title": ""
},
{
"docid": "6a938ceeec7601c7a7bf1ff0107f0163",
"text": "We have been developing a 4DOF exoskeleton robot system in order to assist shoulder vertical motion, shoulder horizontal motion, elbow motion, and forearm motion of physically weak persons such as elderly, injured, or disabled persons. The robot is directly attached to a user's body and activated based on EMG (electromyogram) signals of the user's muscles, since the EMG signals directly reflect the user's motion intention. A neuro-fuzzy controller has been applied to control the exoskeleton robot system. In this paper, controller adaptation method to user's EMG signals is proposed. A motion indicator is introduced to indicate the motion intention of the user for the controller adaptation. The experimental results show the effectiveness of the proposed method.",
"title": ""
},
{
"docid": "88b562679f217affe489b6914bbc342b",
"text": "The measurement of functional gene abundance in diverse microbial communities often employs quantitative PCR (qPCR) with highly degenerate oligonucleotide primers. While degenerate PCR primers have been demonstrated to cause template-specific bias in PCR applications, the effect of such bias on qPCR has been less well explored. We used a set of diverse, full-length nifH gene standards to test the performance of several universal nifH primer sets in qPCR. We found significant template-specific bias in all but the PolF/PolR primer set. Template-specific bias caused more than 1000-fold mis-estimation of nifH gene copy number for three of the primer sets and one primer set resulted in more than 10,000-fold mis-estimation. Furthermore, such template-specific bias will cause qPCR estimates to vary in response to beta-diversity, thereby causing mis-estimation of changes in gene copy number. A reduction in bias was achieved by increasing the primer concentration. We conclude that degenerate primers should be evaluated across a range of templates, annealing temperatures, and primer concentrations to evaluate the potential for template-specific bias prior to their use in qPCR.",
"title": ""
}
] |
scidocsrr
|
fd94f02e0203547d87d2e4181b4a4c37
|
An improved DNN-based approach to mispronunciation detection and diagnosis of L2 learners' speech
|
[
{
"docid": "6c2a033b374b4318cd94f0a617ec705a",
"text": "In this paper, we propose to use Deep Neural Net (DNN), which has been recently shown to reduce speech recognition errors significantly, in Computer-Aided Language Learning (CALL) to evaluate English learners’ pronunciations. Multi-layer, stacked Restricted Boltzman Machines (RBMs), are first trained as nonlinear basis functions to represent speech signals succinctly, and the output layer is discriminatively trained to optimize the posterior probabilities of correct, sub-phonemic “senone” states. Three Goodness of Pronunciation (GOP) scores, including: the likelihood-based posterior probability, averaged framelevel posteriors of the DNN output layer “senone” nodes, and log likelihood ratio of correct and competing models, are tested with recordings of both native and non-native speakers, along with manual grading of pronunciation quality. The experimental results show that the GOP estimated by averaged frame-level posteriors of “senones” correlate with human scores the best. Comparing with GOPs estimated with non-DNN, i.e. GMMHMM, based models, the new approach can improve the correlations relatively by 22.0% or 15.6%, at word or sentence levels, respectively. In addition, the frame-level posteriors, which doesn’t need a decoding lattice and its corresponding forwardbackward computations, is suitable for supporting fast, on-line, multi-channel applications.",
"title": ""
}
] |
[
{
"docid": "014759efa636aec38aa35287b61e44a4",
"text": "Outlier detection is an important topic in machine learning and has been used in a wide range of applications. In this paper, we approach outlier detection as a binary-classification issue by sampling potential outliers from a uniform reference distribution. However, due to the sparsity of data in high-dimensional space, a limited number of potential outliers may fail to provide sufficient information to assist the classifier in describing a boundary that can separate outliers from normal data effectively. To address this, we propose a novel Single-Objective Generative Adversarial Active Learning (SO-GAAL) method for outlier detection, which can directly generate informative potential outliers based on the mini-max game between a generator and a discriminator. Moreover, to prevent the generator from falling into the mode collapsing problem, the stop node of training should be determined when SO-GAAL is able to provide sufficient information. But without any prior information, it is extremely difficult for SO-GAAL. Therefore, we expand the network structure of SO-GAAL from a single generator to multiple generators with different objectives (MO-GAAL), which can generate a reasonable reference distribution for the whole dataset. We empirically compare the proposed approach with several state-of-the-art outlier detection methods on both synthetic and real-world datasets. The results show that MO-GAAL outperforms its competitors in the majority of cases, especially for datasets with various cluster types or high irrelevant variable ratio. The experiment codes are available at: https://github.com/leibinghe/GAAL-based-outlier-detection",
"title": ""
},
{
"docid": "d258a14fc9e64ba612f2c8ea77f85d08",
"text": "In this paper we report exploratory analyses of high-density oligonucleotide array data from the Affymetrix GeneChip system with the objective of improving upon currently used measures of gene expression. Our analyses make use of three data sets: a small experimental study consisting of five MGU74A mouse GeneChip arrays, part of the data from an extensive spike-in study conducted by Gene Logic and Wyeth's Genetics Institute involving 95 HG-U95A human GeneChip arrays; and part of a dilution study conducted by Gene Logic involving 75 HG-U95A GeneChip arrays. We display some familiar features of the perfect match and mismatch probe (PM and MM) values of these data, and examine the variance-mean relationship with probe-level data from probes believed to be defective, and so delivering noise only. We explain why we need to normalize the arrays to one another using probe level intensities. We then examine the behavior of the PM and MM using spike-in data and assess three commonly used summary measures: Affymetrix's (i) average difference (AvDiff) and (ii) MAS 5.0 signal, and (iii) the Li and Wong multiplicative model-based expression index (MBEI). The exploratory data analyses of the probe level data motivate a new summary measure that is a robust multi-array average (RMA) of background-adjusted, normalized, and log-transformed PM values. We evaluate the four expression summary measures using the dilution study data, assessing their behavior in terms of bias, variance and (for MBEI and RMA) model fit. Finally, we evaluate the algorithms in terms of their ability to detect known levels of differential expression using the spike-in data. We conclude that there is no obvious downside to using RMA and attaching a standard error (SE) to this quantity using a linear model which removes probe-specific affinities.",
"title": ""
},
{
"docid": "9e32ff523f592d1988b79b5a8a56ef81",
"text": "We propose a semi-automatic method to obtain foreground object masks for a large set of related images. We develop a stagewise active approach to propagation: in each stage, we actively determine the images that appear most valuable for human annotation, then revise the foreground estimates in all unlabeled images accordingly. In order to identify images that, once annotated, will propagate well to other examples, we introduce an active selection procedure that operates on the joint segmentation graph over all images. It prioritizes human intervention for those images that are uncertain and influential in the graph, while also mutually diverse. We apply our method to obtain foreground masks for over 1 million images. Our method yields state-of-the-art accuracy on the ImageNet and MIT Object Discovery datasets, and it focuses human attention more effectively than existing propagation strategies.",
"title": ""
},
{
"docid": "fb8fc8a881ff11d997e9bb763234aa78",
"text": "To determine the elasticity characteristics of focal liver lesions (FLLs) by shearwave elastography (SWE). We used SWE in 108 patients with 161 FLLs and in the adjacent liver for quantitative and qualitative FLLs stiffness assessment. The Mann–Whitney test was used to assess the difference between the groups of lesions where a P value less than 0.05 was considered significant. SWE acquisitions failed in 22 nodules (14 %) in 13 patients. For the 139 lesions successfully evaluated, SWE values were (in kPa), for the 3 focal fatty sparings (FFS) 6.6 ± 0.3, for the 10 adenomas 9.4 ± 4.3, for the 22 haemangiomas 13.8 ± −5.5, for the 16 focal nodular hyperplasias (FNHs) 33 ± −14.7, for the 2 scars 53.7 ± 4.7, for the 26 HCCs 14.86 ± 10, for the 53 metastasis 28.8 ± 16, and for the 7 cholangiocarcinomas 56.9 ± 25.6. FNHs had significant differences in stiffness compared with adenomas (P = 0.0002). Fifty percent of the FNHs had a radial pattern of elevated elasticity. A significant difference was also found between HCCs and cholangiocarcinomas elasticity (P = 0.0004). SWE could be useful in differentiating FNHs and adenomas, or HCCs and cholangiocarcinomas by ultrasound. • Elastography is becoming quite widely used as an adjunct to conventional ultrasound • Shearwave elastography (SWE) could help differentiate adenomas from fibrous nodular hyperplasia • SWE could also be helpful in distinguishing between hepatocellular carcinomas and cholangiocarcinomas • SWE could improve the identify hepatocellular carcinomas in cirrhotic livers",
"title": ""
},
{
"docid": "72d1acfae576c88b5b40d94f2239146c",
"text": "In recent years, large amounts of financial data have become available for analysis. We propose exploring returns from 21 European stock markets by model-based clustering of regime switching models. These econometric models identify clusters of time series with similar dynamic patterns and moreover allow relaxing assumptions of existing approaches, such as the assumption of conditional Gaussian returns. The proposed model handles simultaneously the heterogeneity across stock markets and over time, i.e., time-constant and timevarying discrete latent variables capture unobserved heterogeneity between and within stock markets, respectively. The results show a clear distinction between two groups of stock markets, each one characterized by different regime switching dynamics that correspond to different expected return-risk patterns. We identify three regimes: the so-called bull and bear regimes, as well as a stable regime with returns close to 0, which turns out to be the most frequently occurring regime. This is consistent with stylized facts in financial econometrics.",
"title": ""
},
{
"docid": "5ecde325c3d01dc62bc179bc21fc8a0d",
"text": "Rapid access to situation-sensitive data through social media networks creates new opportunities to address a number of real-world problems. Damage assessment during disasters is a core situational awareness task for many humanitarian organizations that traditionally takes weeks and months. In this work, we analyze images posted on social media platforms during natural disasters to determine the level of damage caused by the disasters. We employ state-of-the-art machine learning techniques to perform an extensive experimentation of damage assessment using images from four major natural disasters. We show that the domain-specific fine-tuning of deep Convolutional Neural Networks (CNN) outperforms other state-of-the-art techniques such as Bag-of-Visual-Words (BoVW). High classification accuracy under both event-specific and cross-event test settings demonstrate that the proposed approach can effectively adapt deep-CNN features to identify the severity of destruction from social media images taken after a disaster strikes.",
"title": ""
},
{
"docid": "18ab36acafc5e0d39d02cecb0db2f7b3",
"text": "Trigeminal trophic syndrome is a rare complication after peripheral or central damage to the trigeminal nerve, characterized by sensorial impairment in the trigeminal nerve territory and self-induced nasal ulceration. Conditions that can affect the trigeminal nerve include brainstem cerebrovascular disease, diabetes, tabes, syringomyelia, and postencephalopathic parkinsonism; it can also occur following the surgical management of trigeminal neuralgia. Trigeminal trophic syndrome may develop months to years after trigeminal nerve insult. Its most common presentation is a crescent-shaped ulceration within the trigeminal sensory territory. The ala nasi is the most frequently affected site. Trigeminal trophic syndrome is notoriously difficult to diagnose and manage. A clear history is of paramount importance, with exclusion of malignant, fungal, granulomatous, vasculitic, or infective causes. We present a case of ulceration of the left ala nasi after brainstem cerebrovascular accident.",
"title": ""
},
{
"docid": "c7b92058dd9aee5217725a55ca1b56ff",
"text": "For the autonomous navigation of mobile robots, robust and fast visual localization is a challenging task. Although some end-to-end deep neural networks for 6-DoF Visual Odometry (VO) have been reported with promising results, they are still unable to solve the drift problem in long-range navigation. In this paper, we propose the deep global-relative networks (DGRNets), which is a novel global and relative fusion framework based on Recurrent Convolutional Neural Networks (RCNNs). It is designed to jointly estimate global pose and relative localization from consecutive monocular images. DGRNets include feature extraction sub-networks for discriminative feature selection, RCNNs-type relative pose estimation subnetworks for smoothing the VO trajectory and RCNNs-type global pose regression sub-networks for avoiding the accumulation of pose errors. We also propose two loss functions: the first one consists of Cross Transformation Constraints (CTC) that utilize geometric consistency of the adjacent frames to train a more accurate relative sub-networks, and the second one is composed of CTC and Mean Square Error (MSE) between the predicted pose and ground truth used to train the end-to-end DGRNets. The competitive experiments on indoor Microsoft 7-Scenes and outdoor KITTI dataset show that our DGRNets outperform other learning-based monocular VO methods in terms of pose accuracy.",
"title": ""
},
{
"docid": "6393d61b229e7230e256922445534bdb",
"text": "Recently, region based methods for estimating the 3D pose of an object from a 2D image have gained increasing popularity. They do not require prior knowledge of the object’s texture, making them particularity attractive when the object’s texture is unknown a priori. Region based methods estimate the 3D pose of an object by finding the pose which maximizes the image segmentation in to foreground and background regions. Typically the foreground and background regions are described using global appearance models, and an energy function measuring their fit quality is optimized with respect to the pose parameters. Applying a region based approach on standard 2D-3D pose estimation databases shows its performance is strongly dependent on the scene complexity. In simple scenes, where the statistical properties of the foreground and background do not spatially vary, it performs well. However, in more complex scenes, where the statistical properties of the foreground or background vary, the performance strongly degrades. The global appearance models used to segment the image do not sufficiently capture the spatial variation. Inspired by ideas from local active contours, we propose a framework for simultaneous image segmentation and pose estimation using multiple local appearance models. The local appearance models are capable of capturing spatial variation in statistical properties, where global appearance models are limited. We derive an energy function, measuring the image segmentation, using multiple local regions and optimize it with respect to the pose parameters. Our experiments show a substantially higher probability of estimating the correct pose for heterogeneous objects, whereas for homogeneous objects there is minor improvement.",
"title": ""
},
{
"docid": "e17558c5a39f3e231aa6d09c8e2124fc",
"text": "Surveys of child sexual abuse in large nonclinical populations of adults have been conducted in at least 19 countries in addition to the United States and Canada, including 10 national probability samples. All studies have found rates in line with comparable North American research, ranging from 7% to 36% for women and 3% to 29% for men. Most studies found females to be abused at 1 1/2 to 3 times the rate for males. Few comparisons among countries are possible because of methodological and definitional differences. However, they clearly confirm sexual abuse to be an international problem.",
"title": ""
},
{
"docid": "061e91fba7571b8e601b54e1cfc1d71e",
"text": "The training of medical image analysis systems using machine learning approaches follows a common script: collect and annotate a large dataset, train the classifier on the training set, and test it on a hold-out test set. This process bears no direct resemblance with radiologist training, which is based on solving a series of tasks of increasing difficulty, where each task involves the use of significantly smaller datasets than those used in machine learning. In this paper, we propose a novel training approach inspired by how radiologists are trained. In particular, we explore the use of meta-training that models a classifier based on a series of tasks. Tasks are selected using teacher-student curriculum learning, where each task consists of simple classification problems containing small training sets. We hypothesize that our proposed meta-training approach can be used to pre-train medical image analysis models. This hypothesis is tested on the automatic breast screening classification from DCE-MRI trained with weakly labeled datasets. The classification performance achieved by our approach is shown to be the best in the field for that application, compared to state of art baseline approaches: DenseNet, multiple instance learning and multi-task learning.",
"title": ""
},
{
"docid": "8acd56a71630abb371d3c91e61abbafb",
"text": "Ontologies are used to represent domain knowledge with help of object, their behaviour and properties. This paper represents web enabled approach on agriculture semantic web using SPARQL and specified tools to increase productivity of farmers. This work focuses on assessment of query optimization tools and results predicted from them to determine the suitability of each method for different users where structured ontologies are used as querying aids for agriculture based dataset.",
"title": ""
},
{
"docid": "3753bd82d038b2b2b7f03812480fdacd",
"text": "BACKGROUND\nDuring the last few years, an increasing number of unstable thoracolumbar fractures, especially in elderly patients, has been treated by dorsal instrumentation combined with a balloon kyphoplasty. This combination provides additional stabilization to the anterior spinal column without any need for a second ventral approach.\n\n\nCASE PRESENTATION\nWe report the case of a 97-year-old male patient with a lumbar burst fracture (type A3-1.1 according to the AO Classification) who presented prolonged neurological deficits of the lower limbs - grade C according to the modified Frankel/ASIA score. After a posterior realignment of the fractured vertebra with an internal screw fixation and after an augmentation with non-absorbable cement in combination with a balloon kyphoplasty, the patient regained his mobility without any neurological restrictions.\n\n\nCONCLUSION\nEspecially in older patients, the presented technique of PMMA-augmented pedicle screw instrumentation combined with balloon-assisted kyphoplasty could be an option to address unstable vertebral fractures in \"a minor-invasive way\". The standard procedure of a two-step dorsoventral approach could be reduced to a one-step procedure.",
"title": ""
},
{
"docid": "73b858944ed6daea63bb40cc674b6c62",
"text": "This paper gives a method of establishing small-signal model for full-bridge inverter arc welding power supply via state-space averaging treatment and linearization method. On the basis of the model, the frequency response reflecting dynamical characteristics of arc welding power supply is analyzed by means of MATLAB. The simulation result has agreement with experiment result as regards dynamical property. The small signal frequent mathematical model established in the paper can reflect basic characteristic of practical arc welding power system.",
"title": ""
},
{
"docid": "b0f13c59bb4ba0f81ebc86373ad80d81",
"text": "3D-stacked memory devices with processing logic can help alleviate the memory bandwidth bottleneck in GPUs. However, in order for such Near-Data Processing (NDP) memory stacks to be used for different GPU architectures, it is desirable to standardize the NDP architecture. Our proposal enables this standardization by allowing data to be spread across multiple memory stacks as is the norm in high-performance systems without an MMU on the NDP stack. The keys to this architecture are the ability to move data between memory stacks as required for computation, and a partitioned execution mechanism that offloads memory-intensive application segments onto the NDP stack and decouples address translation from DRAM accesses. By enhancing this system with a smart offload selection mechanism that is cognizant of the compute capability of the NDP and cache locality on the host processor, system performance and energy are improved by up to 66.8% and 37.6%, respectively.",
"title": ""
},
{
"docid": "d0c43cf66df910094195bc3476cb8fa7",
"text": "Global information systems development has become increasingly prevalent and is facing a variety of challenges, including the challenge of cross-cultural management. However, research on exactly how cross-cultural factors affect global information systems development work is limited, especially with respect to distributed collaborative work between the U.S. and China. This paper draws on the interviews of Chinese IT professionals and discusses three emergent themes relevant to cross-cultural challenges: the complexity of language issues, culture and communication styles and work behaviors, and cultural understandings at different levels. Implications drawn from our findings will provide actionable knowledge to organizational management entities.",
"title": ""
},
{
"docid": "ced0328f339248158e8414c3315330c5",
"text": "Novel inline coplanar-waveguide (CPW) bandpass filters composed of quarter-wavelength stepped-impedance resonators are proposed, using loaded air-bridge enhanced capacitors and broadside-coupled microstrip-to-CPW transition structures for both wideband spurious suppression and size miniaturization. First, by suitably designing the loaded capacitor implemented by enhancing the air bridges printed over the CPW structure and the resonator parameters, the lower order spurious passbands of the proposed filter may effectively be suppressed. Next, by adopting the broadside-coupled microstrip-to-CPW transitions as the fed structures to provide required input and output coupling capacitances and high attenuation level in the upper stopband, the filter with suppressed higher order spurious responses may be achieved. In this study, two second- and fourth-order inline bandpass filters with wide rejection band are implemented and thoughtfully examined. Specifically, the proposed second-order filter has its stopband extended up to 13.3f 0, where f0 stands for the passband center frequency, and the fourth-order filter even possesses better stopband up to 19.04f0 with a satisfactory rejection greater than 30 dB",
"title": ""
},
{
"docid": "c320f54a692aa3750326a2ac5859fedb",
"text": "Nitrous oxide is an important greenhouse gas and ozone-depleting-substance. Its sources are diffuse and poorly characterized, complicating efforts to understand anthropogenic impacts and develop mitigation policies. Online, spectroscopic analysis of N2O isotopic composition can provide continuous measurements at high time resolution, giving new insight into N2O sources, sinks, and chemistry. We present a new preconcentration unit, \"Stheno II\", coupled to a tunable infrared laser direct absorption spectroscopy (TILDAS) instrument, to measure ambient-level variations in (18)O and site-specific (15)N N2O isotopic composition at remote sites with a temporal resolution of <1 h. Trapping of N2O is quantitative up to a sample size of ∼4 L, with an optimal sample size of 1200-1800 mL at a sampling frequency of 28 min. Line shape variations with the partial pressure of the major matrix gases N2/O2 and CO2 are measured, and show that characterization of both pressure broadening and Dicke narrowing is necessary for an optimal spectral fit. Partial pressure variations of CO2 and bath gas result in a linear isotopic measurement offset of 2.6-6.0 ‰ mbar(-1). Comparison of IR MS and TILDAS measurements shows that the TILDAS technique is accurate and precise, and less susceptible to interferences than IR MS measurements. Two weeks of measurements of N2O isotopic composition from Cambridge, MA, in May 2013 are presented. The measurements show significant short-term variability in N2O isotopic composition larger than the measurement precision, in response to meteorological parameters such as atmospheric pressure and temperature.",
"title": ""
},
{
"docid": "3ce021aa52dac518e1437d397c63bf68",
"text": "Malaria is a common and sometimes fatal disease caused by infection with Plasmodium parasites. Cerebral malaria (CM) is a most severe complication of infection with Plasmodium falciparum parasites which features a complex immunopathology that includes a prominent neuroinflammation. The experimental mouse model of cerebral malaria (ECM) induced by infection with Plasmodium berghei ANKA has been used abundantly to study the role of single genes, proteins and pathways in the pathogenesis of CM, including a possible contribution to neuroinflammation. In this review, we discuss the Plasmodium berghei ANKA infection model to study human CM, and we provide a summary of all host genetic effects (mapped loci, single genes) whose role in CM pathogenesis has been assessed in this model. Taken together, the reviewed studies document the many aspects of the immune system that are required for pathological inflammation in ECM, but also identify novel avenues for potential therapeutic intervention in CM and in diseases which feature neuroinflammation.",
"title": ""
},
{
"docid": "e4a1f577cb232f6f76fba149a69db58f",
"text": "During software development, the activities of requirements analysis, functional specification, and architectural design all require a team of developers to converge on a common vision of what they are developing. There have been remarkably few studies of conceptual design during real projects. In this paper, we describe a detailed field study of a large industrial software project. We observed the development team's conceptual design activities for three months with follow-up observations and discussions over the following eight months. In this paper, we emphasize the organization of the project and how patterns of collaboration affected the team's convergence on a common vision. Three observations stand out: First, convergence on a common vision was not only painfully slow but was punctuated by several reorientations of direction; second, the design process seemed to be inherently forgetful, involving repeated resurfacing of previously discussed issues; finally, a conflict of values persisted between team members responsible for system development and those responsible for overseeing the development process. These findings have clear implications for collaborative support tools and process interventions.",
"title": ""
}
] |
scidocsrr
|
5d1c878f24ec3527b2abf995d3e3a751
|
Order preserving hashing for approximate nearest neighbor search
|
[
{
"docid": "f70ff7f71ff2424fbcfea69d63a19de0",
"text": "We propose a method for learning similaritypreserving hash functions that map highdimensional data onto binary codes. The formulation is based on structured prediction with latent variables and a hinge-like loss function. It is efficient to train for large datasets, scales well to large code lengths, and outperforms state-of-the-art methods.",
"title": ""
},
{
"docid": "51da24a6bdd2b42c68c4465624d2c344",
"text": "Hashing based Approximate Nearest Neighbor (ANN) search has attracted much attention due to its fast query time and drastically reduced storage. However, most of the hashing methods either use random projections or extract principal directions from the data to derive hash functions. The resulting embedding suffers from poor discrimination when compact codes are used. In this paper, we propose a novel data-dependent projection learning method such that each hash function is designed to correct the errors made by the previous one sequentially. The proposed method easily adapts to both unsupervised and semi-supervised scenarios and shows significant performance gains over the state-ofthe-art methods on two large datasets containing up to 1 million points.",
"title": ""
}
] |
[
{
"docid": "dba2eb41963729637d6e043c79bce839",
"text": "The recent popularity of cryptocurrencies has highlighted the versatility and applications of a decentralized, public blockchain. Blockchains provide a data structure that can guarantee both the integrity and non-repudiation of data, as well as providing provenance pertaining to such data. Our novel Lightweight Mining (LWM) algorithm provides these guarantees with minimal resource requirements. Our approach to blockchain-based data provenance, paired with the LWM algorithm, provides the legal and ethical framework for auditors to validate clinical trials, expediting the research process, and saving lives and money in the process. Contributions of this paper include the following: we explain how to adapt and apply a novel, blockchain-based provenance system to enhance clinical trial data integrity and nonrepudiation. We explain the key features of the Scrybe system that enable this outcome, and we describe resilience of the system to denial of service attacks and repudiation. We conclude that Scrybe can provide a system and method for secure data provenance for clinical trials, consistent with the legal and ethical requirements for the lifecycle management of such data.",
"title": ""
},
{
"docid": "112a1483acf7fae119036ea231fcbe85",
"text": "Part of the long lasting cultural heritage of China is the classical ancient Chinese poems which follow strict formats and complicated linguistic rules. Automatic Chinese poetry composition by programs is considered as a challenging problem in computational linguistics and requires high Artificial Intelligence assistance, and has not been well addressed. In this paper, we formulate the poetry composition task as an optimization problem based on a generative summarization framework under several constraints. Given the user specified writing intents, the system retrieves candidate terms out of a large poem corpus, and then orders these terms to fit into poetry formats, satisfying tonal and rhythm requirements. The optimization process under constraints is conducted via iterative term substitutions till convergence, and outputs the subset with the highest utility as the generated poem. For experiments, we perform generation on large datasets of 61,960 classic poems from Tang and Song Dynasty of China. A comprehensive evaluation, using both human judgments and ROUGE scores, has demonstrated the effectiveness of our proposed approach.",
"title": ""
},
{
"docid": "2bb1c5b4c159c19b23b38b42d1c440cb",
"text": "This paper presents a prediction and planning framework for analysing the safety and interaction of moving objects in complex road scenes. Rather than detecting specific, known, dangerous configurations, we simulate all the possible motion and interaction of objects. This simulation is used to detect dangerous situations, and to select the best path. The best path can be chosen according to a number of different criterion, such as: smoothest motion, largest avoiding distance, or quickest path. This framework can be applied, either as a driver warning system (open loop), or as an action recommendation system (human in the loop), or as an intelligent cruise control system (closed loop). This framework is evaluated using synthetic data, using simple and complex road scenes.",
"title": ""
},
{
"docid": "ca9e2cbebd44b3c345bdfead24bda0ee",
"text": "At present, robot simulators have robust physics engine, high-quality graphics, convenient customer and graphical interfaces, that gives rich opportunities to substitute the real robot by its simulation model, providing the calculation of a robot locomotion by odometry and sensor data. This paper aims at describing a Gazebo simulation approach of simultaneous localization and mapping (SLAM) based on the Robotic operating system (ROS) for a simulated mobile robot with a system of two scanning lasers, which moves in a 3D model of realistic indoor environment. The image-based 3D model of a real room with obstacles was obtained by camera shots and reconstructed by Autodesk 123D Catch software with meshing in MeshLab software. We use the existing Gazebo simulation of the Willow Garage Personal Robot 2 (PR2) with its sensor system, which facilitates the simulation of robot locomotion and sensor measurements for SLAM and navigation tasks. The ROS-based SLAM approach applies Rao-Blackwellized particle filters and laser data to locate the PR2 robot in unknown environment and build a map. The Gazebo simulation of the PR2 robot locomotion, sensor data and SLAM algorithm is considered in details. The results qualitatively demonstrate the fidelity of the simulated 3D room with obstacles to the ROS-calculated map obtained from the robot laser system. It proves the feasibility of ROS-based SLAM of a Gazebo-simulated mobile robot to its usage in camera-based 3D model of a realistic indoor environment. This approach can be spread to further ROS-based robotic simulations with Gazebo, e.g. concerning a Russian android robot AR-601M.",
"title": ""
},
{
"docid": "278e83a20dc4f34df316ff408232cdf8",
"text": "We present a Multi View Stereo approach for huge unstructured image datasets that can deal with large variations in surface sampling rate of single images. Our method reconstructs surface parts always in the best available resolution. It considers scaling not only for large scale differences, but also between arbitrary small ones for a weighted merging of the best partial reconstructions. We create depth maps with our GPU based depth map algorithm, that also performs normal optimization. It matches several images that are found with a heuristic image selection method, to a reference image. We remove outliers by comparing depth maps against each other with a fast but reliable GPU approach. Then, we merge the different reconstructions from depth maps in 3D space by selecting the best points and optimizing them with not selected points. Finally, we create the surface by using a Delaunay graph cut.",
"title": ""
},
{
"docid": "7621e0dcdad12367dc2cfcd12d31c719",
"text": "Microblogging sites have emerged as major platforms for bloggers to create and consume posts as well as to follow other bloggers and get informed of their updates. Due to the large number of users, and the huge amount of posts they create, it becomes extremely difficult to identify relevant and interesting blog posts. In this paper, we propose a novel convex collective matrix completion (CCMC) method that effectively utilizes user-item matrix and incorporates additional user activity and topic-based signals to recommend relevant content. The key advantage of CCMC over existing methods is that it can obtain a globally optimal solution and can easily scale to large-scale matrices using Hazan’s algorithm. To the best of our knowledge, this is the first work which applies and studies CCMC as a recommendation method in social media. We conduct a large scale study and show significant improvement over existing state-ofthe-art approaches.",
"title": ""
},
{
"docid": "b01c56bb7d95cefbb499eee6717def74",
"text": "School shootings have become more common in the United States in recent years. Yet, as media portrayals of these ‘rampages’ shock the public, the characterisation of this violence obscures an important point: many of these crimes culminate in suicide, and they are almost universally committed by males. We examine three recent American cases, which involve suicide, to elucidate how the culture of hegemonic masculinity in the US creates a sense of aggrieved entitlement conducive to violence. This sense of entitlement simultaneously frames suicide as an appropriate, instrumental behaviour for these males to underscore their violent enactment of masculinity.",
"title": ""
},
{
"docid": "c83d4f1136b07797912a4c4722b685a1",
"text": "In agriculture research of automatic leaf disease detection is essential research topic as it may prove benefits in monitoring large fields of crops, and thus automatically detect symptoms of disease as soon as they appear on plant leaves. The term disease is usually used only for destruction of live plants. This paper provides various methods used to study of leaf disease detection using image processing. The methods studies are for increasing throughput and reduction subjectiveness arising from human experts in detecting the leaf disease[1].digital image processing is a technique used for enhancement of the image. To improve agricultural products automatic detection of symptoms is beneficial. Keyword— Leaf disease, Image processing.",
"title": ""
},
{
"docid": "32a4c17a53643042a5c19180bffd7c21",
"text": "Although mobile, tablet, large display, and tabletop computers increasingly present opportunities for using pen, finger, and wand gestures in user interfaces, implementing gesture recognition largely has been the privilege of pattern matching experts, not user interface prototypers. Although some user interface libraries and toolkits offer gesture recognizers, such infrastructure is often unavailable in design-oriented environments like Flash, scripting environments like JavaScript, or brand new off-desktop prototyping environments. To enable novice programmers to incorporate gestures into their UI prototypes, we present a \"$1 recognizer\" that is easy, cheap, and usable almost anywhere in about 100 lines of code. In a study comparing our $1 recognizer, Dynamic Time Warping, and the Rubine classifier on user-supplied gestures, we found that $1 obtains over 97% accuracy with only 1 loaded template and 99% accuracy with 3+ loaded templates. These results were nearly identical to DTW and superior to Rubine. In addition, we found that medium-speed gestures, in which users balanced speed and accuracy, were recognized better than slow or fast gestures for all three recognizers. We also discuss the effect that the number of templates or training examples has on recognition, the score falloff along recognizers' N-best lists, and results for individual gestures. We include detailed pseudocode of the $1 recognizer to aid development, inspection, extension, and testing.",
"title": ""
},
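The record above describes the $1 recognizer concretely enough to sketch its core steps. The following Python fragment is a minimal, hypothetical re-implementation (resample, rotate to the indicative angle, scale, translate, then path-distance matching); it omits the golden-section search over candidate rotations used in the full algorithm, and all constants are assumptions rather than values taken from the paper.

```python
import math

N = 64  # resampled points per gesture (assumed small constant)

def path_length(pts):
    return sum(math.dist(a, b) for a, b in zip(pts, pts[1:]))

def resample(pts, n=N):
    # Resample the stroke to n equidistantly spaced points along its path.
    interval = path_length(pts) / (n - 1)
    pts, out, d, i = list(pts), [pts[0]], 0.0, 1
    while i < len(pts):
        dist = math.dist(pts[i - 1], pts[i])
        if d + dist >= interval:
            t = (interval - d) / dist
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            out.append(q)
            pts.insert(i, q)  # the new point starts the next segment
            d = 0.0
        else:
            d += dist
        i += 1
    while len(out) < n:        # guard against floating-point shortfall
        out.append(pts[-1])
    return out

def centroid(pts):
    return (sum(p[0] for p in pts) / len(pts), sum(p[1] for p in pts) / len(pts))

def rotate_to_zero(pts):
    # Rotate so the angle from the centroid to the first point is zero.
    cx, cy = centroid(pts)
    theta = math.atan2(pts[0][1] - cy, pts[0][0] - cx)
    c, s = math.cos(-theta), math.sin(-theta)
    return [((x - cx) * c - (y - cy) * s + cx,
             (x - cx) * s + (y - cy) * c + cy) for x, y in pts]

def scale_and_translate(pts, size=250.0):
    xs, ys = [p[0] for p in pts], [p[1] for p in pts]
    w, h = max(xs) - min(xs), max(ys) - min(ys)
    pts = [(x * size / w, y * size / h) for x, y in pts]
    cx, cy = centroid(pts)
    return [(x - cx, y - cy) for x, y in pts]

def normalize(raw_points):
    return scale_and_translate(rotate_to_zero(resample(raw_points)))

def path_distance(a, b):
    return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)

def recognize(points, templates):
    # templates: dict name -> already-normalized point list (loaded by the caller)
    candidate = normalize(points)
    return min(templates.items(), key=lambda kv: path_distance(candidate, kv[1]))[0]
```

As a usage note, a prototyper would call `normalize` once per stored template and then `recognize` on each completed stroke; the nearest template name is returned.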
{
"docid": "ea4e0cb8ac63a26319e5567e53b1a053",
"text": "Markov chains are widely used in the context of performance and reliability evaluation of systems of various nature. Model checking of such chains with respect to a given (branching) temporal logic formula has been proposed for both the discrete [17, 6] and the continuous time setting [4, 8]. In this paper, we describe a prototype model checker for discrete and continuous-time Markov chains, the Erlangen–Twente Markov Chain Checker (E T MC), where properties are expressed in appropriate extensions of CTL. We illustrate the general benefits of this approach and discuss the structure of the tool. Furthermore we report on first successful applications of the tool to non-trivial examples, highlighting lessons learned during development and application of E T MC.",
"title": ""
},
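The record above concerns checking branching-time formulas over Markov chains. As a toy illustration only (not the tool's algorithm), the sketch below computes the probability of a bounded-until property P(a U<=k b) on a discrete-time Markov chain by backward value iteration; the example chain and labels are made up.

```python
import numpy as np

def bounded_until_prob(P, in_a, in_b, k):
    """Per-state probability of reaching a b-state within k steps
    while passing only through a-states (PCTL bounded until)."""
    prob = np.where(in_b, 1.0, 0.0)   # b-states satisfy immediately
    stay = in_a & ~in_b               # states where the formula is still open
    for _ in range(k):
        prob = np.where(in_b, 1.0, np.where(stay, P @ prob, prob))
    return prob

# Hypothetical 3-state chain: s0 branches, s1 -> s2, s2 absorbing.
P = np.array([[0.5, 0.3, 0.2],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 1.0]])
in_a = np.array([True, True, False])   # label 'a' on s0, s1
in_b = np.array([False, False, True])  # label 'b' on s2
print(bounded_until_prob(P, in_a, in_b, k=3))
```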
{
"docid": "c4a7e413a12e62b66ec7512ac137ed31",
"text": "Methylammonium lead iodide (CH3NH3PbI3) (MAPI)-embedded β-phase comprising porous poly(vinylidene fluoride) (PVDF) composite (MPC) films turns to an excellent material for energy harvester and photodetector (PD). MAPI enables to nucleate up to ∼91% of electroactive phase in PVDF to make it suitable for piezoelectric-based mechanical energy harvesters (PEHs), sensors, and actuators. The piezoelectric energy generation from PEH made with MPC film has been demonstrated under a simple human finger touch motion. In addition, the feasibility of photosensitive properties of MPC films are manifested under the illumination of nonmonochromatic light, which also promises the application as organic photodetectors. Furthermore, fast rising time and instant increase in the current under light illumination have been observed in an MPC-based photodetector (PD), which indicates of its potential utility in efficient photoactive device. Owing to the photoresponsive and electroactive nature of MPC films, a new class of stand-alone self-powered flexible photoactive piezoelectric energy harvester (PPEH) has been fabricated. The simultaneous mechanical energy-harvesting and visible light detection capability of the PPEH is promising in piezo-phototronics technology.",
"title": ""
},
{
"docid": "e38f369fb206e1a8034ce00a0ec25869",
"text": "A large body of research work and efforts have been focused on detecting fake news and building online fact-check systems in order to debunk fake news as soon as possible. Despite the existence of these systems, fake news is still wildly shared by online users. It indicates that these systems may not be fully utilized. After detecting fake news, what is the next step to stop people from sharing it? How can we improve the utilization of these fact-check systems? To fill this gap, in this paper, we (i) collect and analyze online users called guardians, who correct misinformation and fake news in online discussions by referring fact-checking URLs; and (ii) propose a novel fact-checking URL recommendation model to encourage the guardians to engage more in fact-checking activities. We found that the guardians usually took less than one day to reply to claims in online conversations and took another day to spread verified information to hundreds of millions of followers. Our proposed recommendation model outperformed four state-of-the-art models by 11%~33%. Our source code and dataset are available at http://web.cs.wpi.edu/~kmlee/data/gau.html.",
"title": ""
},
{
"docid": "b17325ed7fd45b6e8bd47303dbc52fb7",
"text": "The compute-intensive and power-efficient brain has been a source of inspiration for a broad range of neural networks to solve recognition and classification tasks. Compared to the supervised deep neural networks (DNNs) that have been very successful on well-defined labeled datasets, bio-plausible spiking neural networks (SNNs) with unsupervised learning rules could be well-suited for training and learning representations from the massive amount of unlabeled data. To design dense and low-power hardware for such unsupervised SNNs, we employ digital CMOS circuits for neuromorphic processors, which can exploit transistor scaling and dynamic voltage scaling to the utmost. As exemplary works, we present two neuromorphic processor designs. First, a 45nm neuromorphic chip is designed for a small-scale network of spiking neurons. Through tight integration of memory (64k SRAM synapses) and computation (256 digital neurons), the chip demonstrates on-chip learning on pattern recognition tasks down to 0.53V supply. Secondly, a 65nm neuromorphic processor that performs unsupervised on-line spike-clustering for brain sensing applications is implemented with 1.2k digital neurons and 4.7k latch-based synapses. The processor exhibits a power consumption of 9.3μW/ch at 0.3V supply. Synapse hardware precision, efficient synapse memory array access, overfitting, and voltage scaling will be discussed for dense and power-efficient on-chip learning for CMOS spiking neural networks.",
"title": ""
},
{
"docid": "dd96d664a4a9f31db0f2ae3814cf666d",
"text": "This paper investigates the application of flexible fast-convolution (FC) filtering scheme for multiplexing orthogonal frequency-division multiplexing (OFDM) physical resource blocks (PRBs) in a spectrally well-localized manner. This scheme is able to suppress interference leakage between adjacent PRBs, thus supporting independent waveform parametrization and numerologies for different PRBs, as well as asynchronous multiuser operation. These are considered as important features in the 5G waveform development. This contribution focuses on optimizing FC based OFDM transmultiplexers such that the in-band interference is minimized subject to the given out-of-band emission constraint. The performance of the optimized designs is demonstrated using resource block groups (RBGs) of different sizes and with various design parameters. The proposed scheme has great flexibility in tuning the filtering bandwidths dynamically according the resource allocation to different users with different requirements regarding the OFDM waveform numerology. Also the computational complexity is competitive with existing time-domain windowing approaches and becomes superior when the number of filtering bands is increased.",
"title": ""
},
{
"docid": "5293dc28da110096fee7be1da7bf52b2",
"text": "The function of brown adipose tissue is to transfer energy from food into heat; physiologically, both the heat produced and the resulting decrease in metabolic efficiency can be of significance. Both the acute activity of the tissue, i.e., the heat production, and the recruitment process in the tissue (that results in a higher thermogenic capacity) are under the control of norepinephrine released from sympathetic nerves. In thermoregulatory thermogenesis, brown adipose tissue is essential for classical nonshivering thermogenesis (this phenomenon does not exist in the absence of functional brown adipose tissue), as well as for the cold acclimation-recruited norepinephrine-induced thermogenesis. Heat production from brown adipose tissue is activated whenever the organism is in need of extra heat, e.g., postnatally, during entry into a febrile state, and during arousal from hibernation, and the rate of thermogenesis is centrally controlled via a pathway initiated in the hypothalamus. Feeding as such also results in activation of brown adipose tissue; a series of diets, apparently all characterized by being low in protein, result in a leptin-dependent recruitment of the tissue; this metaboloregulatory thermogenesis is also under hypothalamic control. When the tissue is active, high amounts of lipids and glucose are combusted in the tissue. The development of brown adipose tissue with its characteristic protein, uncoupling protein-1 (UCP1), was probably determinative for the evolutionary success of mammals, as its thermogenesis enhances neonatal survival and allows for active life even in cold surroundings.",
"title": ""
},
{
"docid": "e0919ddaddfbf307f33b7442ee99cbad",
"text": "With the ever-increasing diffusion of computers into the society, it is widely believed that present popular mode of interactions with computers (mouse and keyboard) will become a bottleneck in the effective utilization of information flow between the computers and the human. Vision based Gesture recognition has the potential to be a natural and powerful tool supporting efficient and intuitive interaction between the human and the computer. Visual interpretation of hand gestures can help in achieving the ease and naturalness desired for Human Computer Interaction (HCI). This has motivated many researchers in computer vision-based analysis and interpretation of hand gestures as a very active research area. We surveyed the literature on visual interpretation of hand gestures in the context of its role in HCI and various seminal works of researchers are emphasized. The purpose of this review is to introduce the field of gesture recognition as a mechanism for interaction with computers.",
"title": ""
},
{
"docid": "03fcf9cd39c516332be9f10ee948a07f",
"text": "Cloud application performance is heavily reliant on the hit rate of datacenter key-value caches. Key-value caches typically use least recently used (LRU) as their eviction policy, but LRU’s hit rate is far from optimal under real workloads. Prior research has proposed many eviction policies that improve on LRU, but these policies make restrictive assumptions that hurt their hit rate, and they can be difficult to implement efficiently. We introduce least hit density (LHD), a novel eviction policy for key-value caches. LHD predicts each object’s expected hits-per-space-consumed (hit density), filtering objects that contribute little to the cache’s hit rate. Unlike prior eviction policies, LHD does not rely on heuristics, but rather rigorously models objects’ behavior using conditional probability to adapt its behavior in real time. To make LHD practical, we design and implement RankCache, an efficient key-value cache based on memcached. We evaluate RankCache and LHD on commercial memcached and enterprise storage traces, where LHD consistently achieves better hit rates than prior policies. LHD requires much less space than prior policies to match their hit rate, on average 8× less than LRU and 2–3× less than recently proposed policies. Moreover, RankCache requires no synchronization in the common case, improving request throughput at 16 threads by 8× over LRU and by 2× over CLOCK.",
"title": ""
},
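The record above explains LHD's central idea: rank cached objects by expected hits per unit of space-time and evict the lowest. The fragment below is a much-simplified, hypothetical illustration of that ranking, using age-binned hit and eviction counts to form a hit-density score; it is not the RankCache implementation, and all class and parameter names are invented.

```python
from collections import defaultdict

class HitDensityEstimator:
    """Toy estimator: bucket reference events by object age and derive a score
    proportional to P(future hit) / E[remaining space-time]. The normalizing
    total event count cancels between numerator and denominator."""

    def __init__(self, max_age=64):
        self.max_age = max_age
        self.hits = defaultdict(float)       # age -> observed hits
        self.evictions = defaultdict(float)  # age -> observed evictions

    def record_hit(self, age):
        self.hits[min(age, self.max_age)] += 1

    def record_eviction(self, age):
        self.evictions[min(age, self.max_age)] += 1

    def hit_density(self, age, size):
        future_hits = sum(self.hits[a] for a in range(age, self.max_age + 1))
        # Unnormalized expected remaining lifetime, conditioned on surviving to `age`.
        weighted_lifetime = sum(
            (a - age + 1) * (self.hits[a] + self.evictions[a])
            for a in range(age, self.max_age + 1))
        if weighted_lifetime == 0:
            return 0.0
        return future_hits / (size * weighted_lifetime)

def choose_victim(cache, estimator, now):
    # cache: key -> (size, last_access_time); evict the lowest hit density.
    return min(cache, key=lambda k: estimator.hit_density(now - cache[k][1], cache[k][0]))
```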
{
"docid": "96e24fabd3567a896e8366abdfaad78e",
"text": "Interior permanent magnet synchronous motor (IPMSM) is usually applied to traction motor in the hybrid electric vehicle (HEV). All motors including IPMSM have different parameters and characteristics with various combinations of the number of poles and slots. The proper combination can improve characteristics of traction system ultimately. This paper deals with analysis of the characteristics of IPMSM for mild type HEV according to the combinations of number of poles and slots. The specific models with 16-pole/18-slot, 16-pole/24-slot and 12-pole/18-slot combinations are introduced. And the advantages and disadvantages of these three models are compared. The characteristics of each model are computed in d-q axis equivalent circuit analysis and finite element analysis. After then, the proper combination of the number of poles and slots for HEV traction motor is presented after comparing these three models.",
"title": ""
},
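The d-q equivalent-circuit analysis mentioned in the record above rests on the standard IPMSM torque relation (magnet torque plus reluctance torque). The snippet below simply states that textbook relation for reference; the numeric parameter values are placeholders and are not taken from the paper's machines.

```python
def ipmsm_torque(p_pairs, psi_m, L_d, L_q, i_d, i_q):
    """Electromagnetic torque of an IPMSM in the rotor d-q frame:
    T = 1.5 * p * (psi_m * iq + (Ld - Lq) * id * iq)."""
    return 1.5 * p_pairs * (psi_m * i_q + (L_d - L_q) * i_d * i_q)

# Placeholder numbers for illustration only.
print(ipmsm_torque(p_pairs=8, psi_m=0.08, L_d=0.3e-3, L_q=0.8e-3, i_d=-150.0, i_q=250.0))
```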
{
"docid": "fead5d31f441dd95ce3ec0fafab4e3e7",
"text": "Texts that convey the same or close meaning can be written in many different ways. On the other hand, computer programs are not good at algorithmically processing meaning equivalence of short texts, without relying on knowledge. Toward addressing this problem, researchers have been investigating methods for automatically acquiring paraphrase templates from a corpus. The goal of this thesis work is to develop a paraphrase acquisition framework that can acquire lexically-diverse paraphrase templates, given small (5-20) seed instances and a small (1-10GB) plain monolingual corpus. The framework works in an iterative fashion where the seed instances are used to find paraphrase patterns from the corpus, and the patterns are used to harvest more seed instances to be used in the next iteration. Unlike previous works, lexical diversity of resulting paraphrase patterns can be controlled with a parameter. Our corpus requirement is decent as compared to previous works that require a parallel/comparable corpus or a huge parsed monolingual corpus, which is ideal for languageand domain-portability.",
"title": ""
},
{
"docid": "45260b1efb4858e231c8c15879db89d1",
"text": "Distributed denial-of-service (DDoS) is a rapidly growing problem. The multitude and variety of both the attacks and the defense approaches is overwhelming. This paper presents two taxonomies for classifying attacks and defenses, and thus provides researchers with a better understanding of the problem and the current solution space. The attack classification criteria was selected to highlight commonalities and important features of attack strategies, that define challenges and dictate the design of countermeasures. The defense taxonomy classifies the body of existing DDoS defenses based on their design decisions; it then shows how these decisions dictate the advantages and deficiencies of proposed solutions.",
"title": ""
}
] |
scidocsrr
|
4208ecfc595953a69e44245546e51427
|
A GPU-based WFST Decoder with Exact Lattice Generation
|
[
{
"docid": "9ade6407ce2603e27744df1b03728bfc",
"text": "We describe a large vocabulary speech recognition system that is accurate, has low latency, and yet has a small enough memory and computational footprint to run faster than real-time on a Nexus 5 Android smartphone. We employ a quantized Long Short-Term Memory (LSTM) acoustic model trained with connectionist temporal classification (CTC) to directly predict phoneme targets, and further reduce its memory footprint using an SVD-based compression scheme. Additionally, we minimize our memory footprint by using a single language model for both dictation and voice command domains, constructed using Bayesian interpolation. Finally, in order to properly handle device-specific information, such as proper names and other context-dependent information, we inject vocabulary items into the decoder graph and bias the language model on-the-fly. Our system achieves 13.5% word error rate on an open-ended dictation task, running with a median speed that is seven times faster than real-time.",
"title": ""
},
{
"docid": "1738a8ccb1860e5b85e2364f437d4058",
"text": "We describe a new algorithm for finding the hypothesis in a recognition lattice that is expected to minimize the word er ror rate (WER). Our approach thus overcomes the mismatch between the word-based performance metric and the standard MAP scoring paradigm that is sentence-based, and that can le ad to sub-optimal recognition results. To this end we first find a complete alignment of all words in the recognition lattice, identifying mutually supporting and competing word hypotheses . Finally, a new sentence hypothesis is formed by concatenating the words with maximal posterior probabilities. Experimental ly, this approach leads to a significant WER reduction in a large vocab ulary recognition task.",
"title": ""
},
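The record above describes picking, within each aligned slot of the lattice, the word with the highest posterior. A minimal sketch of that final step is given below, assuming the lattice has already been aligned into a confusion network of slots mapping word -> posterior; the alignment itself is the hard part and is not shown, and the data layout here is an assumption.

```python
def consensus_hypothesis(confusion_network, eps="<eps>"):
    """confusion_network: list of slots, each a dict word -> posterior probability.
    Returns the word sequence formed by the per-slot posterior maxima."""
    words = []
    for slot in confusion_network:
        best_word = max(slot, key=slot.get)
        if best_word != eps:          # a slot may prefer "no word" (epsilon)
            words.append(best_word)
    return words

# Hypothetical three-slot network.
cn = [{"a": 0.6, "the": 0.4},
      {"<eps>": 0.55, "very": 0.45},
      {"test": 0.9, "text": 0.1}]
print(consensus_hypothesis(cn))   # -> ['a', 'test']
```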
{
"docid": "3d2ff2e7bd3dbcaf7839afe98920391d",
"text": "The performance-intensive part of a large-vocabulary continuous speech-recognition system is the Viterbi computation that determines the sequence of words that are most likely to generate the acoustic-state scores extracted from an input utterance. This paper presents an efficient parallel algorithm for Viterbi. The key idea is to partition the per-frame computation among threads to minimize inter-thread communication despite traversing a large irregular acoustic and language model graphs. Together with a per-thread beam search, load balancing language-model lookups, and memory optimizations, we achieve a 6.67× speedup over an highly-optimized production-quality WFST-based speech decoder. On a 200,000 word vocabulary and a 59 million ngram model, our decoder runs at 0.27× real time while achieving a word-error rate of 14.81% on 6214 labeled utterances from Voice Search data.",
"title": ""
}
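The record above parallelizes a beam-pruned Viterbi search over a WFST-like graph. As a point of reference only, the sketch below shows the single-threaded, beam-pruned forward recursion that such a decoder builds on; the graph and score formats are assumptions, backpointers and lattice generation are omitted, and this is not the paper's parallel algorithm.

```python
import math
from collections import defaultdict

def beam_viterbi(arcs, start, acoustic_scores, beam=10.0):
    """arcs: dict state -> list of (next_state, label, graph_cost).
    acoustic_scores: per frame, dict label -> negative log-likelihood.
    Returns the surviving state costs after the last frame (backpointers omitted)."""
    active = {start: 0.0}
    for frame_scores in acoustic_scores:
        nxt = defaultdict(lambda: math.inf)
        for state, cost in active.items():
            for next_state, label, graph_cost in arcs.get(state, []):
                total = cost + graph_cost + frame_scores.get(label, math.inf)
                if total < nxt[next_state]:
                    nxt[next_state] = total
        if not nxt:
            return {}
        best = min(nxt.values())
        active = {s: c for s, c in nxt.items() if c <= best + beam}  # beam pruning
    return active
```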
] |
[
{
"docid": "11af505fd6448ec1e232c02a838a70bf",
"text": "A great interest is recently paid to Electric Vehicles (EV) and their integration into electricity grids. EV can potentially play an important role in power system operation, however, the EV charging infrastructures have been only partly defined, considering them as limited to individual charging points, randomly distributed into the networks. This paper addresses the planning of public central charging stations (CCS) that can be integrated in low-voltage (LV) networks for EV parallel charging. The concepts of AC and DC architectures of CCS are proposed and a comparison is given on their investment cost. Investigation on location and size of CCS is conducted, analyzing two LV grids of different capacity. The results enlighten that a public CCS should be preferably located in the range of 100 m from the transformer. The AC charging levels of 11 kW and 22 kW have the highest potential in LV grids. The option of DC fast-charging is only possible in the larger capacity grids, withstanding the parallel charge of one or two vehicles.",
"title": ""
},
{
"docid": "450808fb3512ffd3bac692523e785c73",
"text": "This paper focuses on approaches to building a text automatic summarization model for news articles, generating a one-sentence summarization that mimics the style of a news title given some paragraphs. We managed to build and train two relatively complex deep learning models that outperformed our baseline model, which is a simple feed forward neural network. We explored Recurrent Neural Network models with encoder-decoder using LSTM and GRU cells, and with/without attention. We obtained some results that we then measured by calculating their respective ROUGE scores with respect to the actual references. For future work, we believe abstractive method of text summarization is a power way of summarizing texts, and we will continue with this approach. We think that the deficiencies currently embedded in our language model can be improved by better fine-tuning the model, more deep-learning method exploration, as well as larger training dataset.",
"title": ""
},
{
"docid": "b7944edc9e6704cbf59489f112f46c11",
"text": "The basic paradigm of asset pricing is in vibrant f lux. The purely rational approach is being subsumed by a broader approach based upon the psychology of investors. In this approach, security expected returns are determined by both risk and misvaluation. This survey sketches a framework for understanding decision biases, evaluates the a priori arguments and the capital market evidence bearing on the importance of investor psychology for security prices, and reviews recent models. The best plan is . . . to profit by the folly of others. — Pliny the Elder, from John Bartlett, comp. Familiar Quotations, 9th ed. 1901. IN THE MUDDLED DAYS BEFORE THE RISE of modern finance, some otherwisereputable economists, such as Adam Smith, Irving Fisher, John Maynard Keynes, and Harry Markowitz, thought that individual psychology affects prices.1 What if the creators of asset-pricing theory had followed this thread? Picture a school of sociologists at the University of Chicago proposing the Deficient Markets Hypothesis: that prices inaccurately ref lect all available information. A brilliant Stanford psychologist, call him Bill Blunte, invents the Deranged Anticipation and Perception Model ~or DAPM!, in which proxies for market misvaluation are used to predict security returns. Imagine the euphoria when researchers discovered that these mispricing proxies ~such * Hirshleifer is from the Fisher College of Business, The Ohio State University. This survey was written for presentation at the American Finance Association Annual Meetings in New Orleans, January, 2001. I especially thank the editor, George Constantinides, for valuable comments and suggestions. I also thank Franklin Allen, the discussant, Nicholas Barberis, Robert Bloomfield, Michael Brennan, Markus Brunnermeier, Joshua Coval, Kent Daniel, Ming Dong, Jack Hirshleifer, Harrison Hong, Soeren Hvidkjaer, Ravi Jagannathan, Narasimhan Jegadeesh, Andrew Karolyi, Charles Lee, Seongyeon Lim, Deborah Lucas, Rajnish Mehra, Norbert Schwarz, Jayanta Sen, Tyler Shumway, René Stulz, Avanidhar Subrahmanyam, Siew Hong Teoh, Sheridan Titman, Yue Wang, Ivo Welch, and participants of the Dice Finance Seminar at Ohio State University for very helpful discussions and comments. 1 Smith analyzed how the “overweening conceit” of mankind caused labor to be underpriced in more enterprising pursuits. Young workers do not arbitrage away pay differentials because they are prone to overestimate their ability to succeed. Fisher wrote a book on money illusion; in The Theory of Interest ~~1930!, pp. 493–494! he argued that nominal interest rates systematically fail to adjust sufficiently for inf lation, and explained savings behavior in relation to self-control, foresight, and habits. Keynes ~1936! famously commented on animal spirits in stock markets. Markowitz ~1952! proposed that people focus on gains and losses relative to reference points, and that this helps explain the pricing of insurance and lotteries. THE JOURNAL OF FINANCE • VOL. LVI, NO. 4 • AUGUST 2001",
"title": ""
},
{
"docid": "bc58f2f9f6f5773f5f8b2696d9902281",
"text": "Software development is a complicated process and requires careful planning to produce high quality software. In large software development projects, release planning may involve a lot of unique challenges. Due to time, budget and some other constraints, potentially there are many problems that may possibly occur. Subsequently, project managers have been trying to identify and understand release planning, challenges and possible resolutions which might help them in developing more effective and successful software products. This paper presents the findings from an empirical study which investigates release planning challenges. It takes a qualitative approach using interviews and observations with practitioners and project managers at five large software banking projects in Informatics Services Corporation (ISC) in Iran. The main objective of this study is to explore and increase the understanding of software release planning challenges in several software companies in a developing country. A number of challenges were elaborated and discussed in this study within the domain of software banking projects. These major challenges are classified into two main categories: the human-originated including people cooperation, disciplines and abilities; and the system-oriented including systematic approaches, resource constraints, complexity, and interdependency among the systems.",
"title": ""
},
{
"docid": "9c05452b964c67b8f79ce7dfda4a72e5",
"text": "The Internet is evolving rapidly toward the future Internet of Things (IoT) which will potentially connect billions or even trillions of edge devices which could generate huge amount of data at a very high speed and some of the applications may require very low latency. The traditional cloud infrastructure will run into a series of difficulties due to centralized computation, storage, and networking in a small number of datacenters, and due to the relative long distance between the edge devices and the remote datacenters. To tackle this challenge, edge cloud and edge computing seem to be a promising possibility which provides resources closer to the resource-poor edge IoT devices and potentially can nurture a new IoT innovation ecosystem. Such prospect is enabled by a series of emerging technologies, including network function virtualization and software defined networking. In this survey paper, we investigate the key rationale, the state-of-the-art efforts, the key enabling technologies and research topics, and typical IoT applications benefiting from edge cloud. We aim to draw an overall picture of both ongoing research efforts and future possible research directions through comprehensive discussions.",
"title": ""
},
{
"docid": "3c1cc57db29b8c86de4f314163ccaca0",
"text": "We are motivated by the need for a generic object proposal generation algorithm which achieves good balance between object detection recall, proposal localization quality and computational efficiency. We propose a novel object proposal algorithm, BING++, which inherits the virtue of good computational efficiency of BING [1] but significantly improves its proposal localization quality. At high level we formulate the problem of object proposal generation from a novel probabilistic perspective, based on which our BING++ manages to improve the localization quality by employing edges and segments to estimate object boundaries and update the proposals sequentially. We propose learning the parameters efficiently by searching for approximate solutions in a quantized parameter space for complexity reduction. We demonstrate the generalization of BING++ with the same fixed parameters across different object classes and datasets. Empirically our BING++ can run at half speed of BING on CPU, but significantly improve the localization quality by 18.5 and 16.7 percent on both VOC2007 and Microhsoft COCO datasets, respectively. Compared with other state-of-the-art approaches, BING++ can achieve comparable performance, but run significantly faster.",
"title": ""
},
{
"docid": "029ce02521ddc1c7377749ff48bb54b0",
"text": "Recent progresses on deep discriminative and generative modeling have shown promising results on texture synthesis. However, existing feed-forward based methods trade off generality for efficiency, which suffer from many issues, such as shortage of generality (i.e., build one network per texture), lack of diversity (i.e., always produce visually identical output) and suboptimality (i.e., generate less satisfying visual effects). In this work, we focus on solving these issues for improved texture synthesis. We propose a deep generative feed-forward network which enables efficient synthesis of multiple textures within one single network and meaningful interpolation between them. Meanwhile, a suite of important techniques are introduced to achieve better convergence and diversity. With extensive experiments, we demonstrate the effectiveness of the proposed model and techniques for synthesizing a large number of textures and show its applications with the stylization.",
"title": ""
},
{
"docid": "e582ba5e904eafa0fa17c03a0335f7a6",
"text": "In this paper, a methodology to apply space vector modulation (SVM) to a three-level neutral point clamped (NPC) inverter based on volt-second balancing principle is presented. The proposed scheme, switching-state selection of the three-level NPC inverter by using a nearest three vector scheme, is fully described. A mathematical analysis of the duty cycle is determined by utilizing the calculated switching time method in order to generate the duty cycle signals for both the continuous SVM (CSVM) and the discontinuous SVM (DSVM) conditions. Compared with the CSVM scheme, the DSVM scheme is the reduction in the number of commutations of switches by one third which leads to a further reduction in switching losses. Simulated results are presented to validate the proposal, which demonstrates good steady-state and dynamic performance.",
"title": ""
},
{
"docid": "1c7131fcb031497b2c1487f9b25d8d4e",
"text": "Biases in information processing undoubtedly play an important role in the maintenance of emotion and emotional disorders. In an attentional cueing paradigm, threat words and angry faces had no advantage over positive or neutral words (or faces) in attracting attention to their own location, even for people who were highly state-anxious. In contrast, the presence of threatening cues (words and faces) had a strong impact on the disengagement of attention. When a threat cue was presented and a target subsequently presented in another location, high state-anxious individuals took longer to detect the target relative to when either a positive or a neutral cue was presented. It is concluded that threat-related stimuli affect attentional dwell time and the disengage component of attention, leaving the question of whether threat stimuli affect the shift component of attention open to debate.",
"title": ""
},
{
"docid": "dc53e2bf9576fd3fb7670b0860eae754",
"text": "In the field of ADAS and self-driving car, lane and drivable road detection play an essential role in reliably accomplishing other tasks, such as objects detection. For monocular vision based semantic segmentation of lane and road, we propose a dilated feature pyramid network (FPN) with feature aggregation, called DFFA, where feature aggregation is employed to combine multi-level features enhanced with dilated convolution operations and FPN under the framework of ResNet. Experimental results validate effectiveness and efficiency of the proposed deep learning model for semantic segmentation of lane and drivable road. Our DFFA achieves the best performance both on Lane Estimation Evaluation and Behavior Evaluation tasks in KITTI-ROAD and take the second place on UU ROAD task.",
"title": ""
},
{
"docid": "8304509377e6abecfc62f5bcc76be519",
"text": "This paper developed a practical split-window (SW) algorithm to estimate land surface temperature (LST) from Thermal Infrared Sensor (TIRS) aboard Landsat 8. The coefficients of the SW algorithm were determined based on atmospheric water vapor sub-ranges, which were obtained through a modified split-window covariance–variance ratio method. The channel emissivities were acquired from newly released global land cover products at 30 m and from a fraction of the vegetation cover calculated from visible and near-infrared images aboard Landsat 8. Simulation results showed that the new algorithm can obtain LST with an accuracy of better than 1.0 K. The model consistency to the noise of the brightness temperature, emissivity and water vapor was conducted, which indicated the robustness of the new algorithm in LST retrieval. Furthermore, based on comparisons, the new algorithm performed better than the existing algorithms in retrieving LST from TIRS data. Finally, the SW algorithm was proven to be reliable through application in different regions. To further confirm the credibility of the SW algorithm, the LST will be validated in the future.",
"title": ""
},
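The record above applies a split-window algorithm whose coefficients are fitted per atmospheric water-vapor sub-range. The sketch below only states a generic split-window form built from the two brightness temperatures, the mean emissivity, and the emissivity difference; the coefficient vector is a placeholder and is not the set of coefficients derived in the paper.

```python
def split_window_lst(t_i, t_j, emis_i, emis_j, c):
    """Generic split-window form: land surface temperature (K) from two thermal-band
    brightness temperatures (K) and band emissivities. `c` holds fitted coefficients
    (placeholders here)."""
    eps = 0.5 * (emis_i + emis_j)          # mean emissivity
    d_eps = emis_i - emis_j                # emissivity difference
    return (c[0]
            + (c[1] + c[2] * (1 - eps) / eps + c[3] * d_eps / eps**2) * 0.5 * (t_i + t_j)
            + (c[4] + c[5] * (1 - eps) / eps + c[6] * d_eps / eps**2) * 0.5 * (t_i - t_j)
            + c[7] * (t_i - t_j) ** 2)
```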
{
"docid": "3df69e5ce63d3a3b51ad6f2b254e12b6",
"text": "This paper presents three approaches to creating corpora that we are working on for speech-to-speech translation in the travel conversation task. The first approach is to collect sentences that bilingual travel experts consider useful for people going-to/coming-from another country. The resulting English-Japanese aligned corpora are collectively called the basic travel expression corpus (BTEC), which is now being translated into several other languages. The second approach tries to expand this corpus by generating many \"synonymous\" expressions for each sentence. Although we can create large corpora by the above two approaches relatively cheaply, they may be different from utterances in actual conversation. Thus, as the third approach, we are collecting dialogue corpora by letting two people talk, each in his/her native language, through a speech-to-speech translation system. To concentrate on translation modules, we have replaced speech recognition modules with human typists. We will report some of the characteristics of these corpora as well.",
"title": ""
},
{
"docid": "9f7fef4255c3b1c3c240938057ce6583",
"text": "Drug side-effects become a worldwide public health concern, which are the fourth leading cause of death in the United States. Pharmaceutical industry has paid tremendous effort to identify drug side-effects during the drug development. However, it is impossible and impractical to identify all of them. Fortunately, drug side-effects can also be reported on heterogeneous platforms (i.e., data sources), such as FDA Adverse Event Reporting System and various online communities. However, existing supervised and semi-supervised approaches are not practical as annotating labels are expensive in the medical field. In this paper, we propose a novel and effective unsupervised model Sifter to automatically discover drug side-effects. Sifter enhances the estimation on drug side-effects by learning from various online platforms and measuring platform-level and user-level quality simultaneously. In this way, Sifter demonstrates better performance compared with existing approaches in terms of correctly identifying drug side-effects. Experimental results on five real-world datasets show that Sifter can significantly improve the performance of identifying side-effects compared with the state-of-the-art approaches.",
"title": ""
},
{
"docid": "d6976361b44aab044c563e75056744d6",
"text": "Five adrenoceptor subtypes are involved in the adrenergic regulation of white and brown fat cell function. The effects on cAMP production and cAMP-related cellular responses are mediated through the control of adenylyl cyclase activity by the stimulatory beta 1-, beta 2-, and beta 3-adrenergic receptors and the inhibitory alpha 2-adrenoceptors. Activation of alpha 1-adrenoceptors stimulates phosphoinositidase C activity leading to inositol 1,4,5-triphosphate and diacylglycerol formation with a consequent mobilization of intracellular Ca2+ stores and protein kinase C activation which trigger cell responsiveness. The balance between the various adrenoceptor subtypes is the point of regulation that determines the final effect of physiological amines on adipocytes in vitro and in vivo. Large species-specific differences exist in brown and white fat cell adrenoceptor distribution and in their relative importance in the control of the fat cell. Functional beta 3-adrenoceptors coexist with beta 1- and beta 2-adrenoceptors in a number of fat cells; they are weakly active in guinea pig, primate, and human fat cells. Physiological hormones and transmitters operate, in fact, through differential recruitment of all these multiple alpha- and beta-adrenoceptors on the basis of their relative affinity for the different subtypes. The affinity of the beta 3-adrenoceptor for catecholamines is less than that of the classical beta 1- and beta 2-adrenoceptors. Conversely, epinephrine and norepinephrine have a higher affinity for the alpha 2-adrenoceptors than for beta 1-, 2-, or 3-adrenoceptors. Antagonistic actions exist between alpha 2- and beta-adrenoceptor-mediated effects in white fat cells while positive cooperation has been revealed between alpha 1- and beta-adrenoceptors in brown fat cells. Homologous down-regulation of beta 1- and beta 2-adrenoceptors is observed after administration of physiological amines and beta-agonists. Conversely, beta 3- and alpha 2-adrenoceptors are much more resistant to agonist-induced desensitization and down-regulation. Heterologous regulation of beta-adrenoceptors was reported with glucocorticoids while sex-steroid hormones were shown to regulate alpha 2-adrenoceptor expression (androgens) and to alter adenylyl cyclase activity (estrogens).",
"title": ""
},
{
"docid": "07575ce75d921d6af72674e1fe563ff7",
"text": "With a growing body of literature linking systems of high-performance work practices to organizational performance outcomes, recent research has pushed for examinations of the underlying mechanisms that enable this connection. In this study, based on a large sample of Welsh public-sector employees, we explored the role of several individual-level attitudinal factors--job satisfaction, organizational commitment, and psychological empowerment--as well as organizational citizenship behaviors that have the potential to provide insights into how human resource systems influence the performance of organizational units. The results support a unit-level path model, such that department-level, high-performance work system utilization is associated with enhanced levels of job satisfaction, organizational commitment, and psychological empowerment. In turn, these attitudinal variables were found to be positively linked to enhanced organizational citizenship behaviors, which are further related to a second-order construct measuring departmental performance.",
"title": ""
},
{
"docid": "e7b3656170f6e6302d9b765d820bb0a9",
"text": "Due to the fast development of social media on the Web, Twitter has become one of the major platforms for people to express themselves. Because of the wide adoption of Twitter, events like breaking news and release of popular videos can easily catch people’s attention and spread rapidly on Twitter, and the number of relevant tweets approximately reflects the impact of an event. Event identification and analysis on Twitter has thus become an important task. Recently the Recurrent Chinese Restaurant Process (RCRP) has been successfully used for event identification from news streams and news-centric social media streams. However, these models cannot be directly applied to Twitter based on our preliminary experiments mainly for two reasons: (1) Events emerge and die out fast on Twitter, while existing models ignore this burstiness property. (2) Most Twitter posts are personal interest oriented while only a small fraction is event related. Motivated by these challenges, we propose a new nonparametric model which considers burstiness. We further combine this model with traditional topic models to identify both events and topics simultaneously. Our quantitative evaluation provides sufficient evidence that our model can accurately detect meaningful events. Our qualitative evaluation also shows interesting analysis for events on Twitter.",
"title": ""
},
{
"docid": "27e1d29dc8d252081e80f93186a14660",
"text": "Over the last several years there has been an increasing focus on early detection of Autism Spectrum Disorder (ASD), not only from the scientific field but also from professional associations and public health systems all across Europe. Not surprisingly, in order to offer better services and quality of life for both children with ASD and their families, different screening procedures and tools have been developed for early assessment and intervention. However, current evidence is needed for healthcare providers and policy makers to be able to implement specific measures and increase autism awareness in European communities. The general aim of this review is to address the latest and most relevant issues related to early detection and treatments. The specific objectives are (1) analyse the impact, describing advantages and drawbacks, of screening procedures based on standardized tests, surveillance programmes, or other observational measures; and (2) provide a European framework of early intervention programmes and practices and what has been learnt from implementing them in public or private settings. This analysis is then discussed and best practices are suggested to help professionals, health systems and policy makers to improve their local procedures or to develop new proposals for early detection and intervention programmes.",
"title": ""
},
{
"docid": "9cd85689d30771a8b11a1d8c9d9d1785",
"text": "Plug-in electric vehicles (PEVs) can behave either as loads or as distributed energy sources in a concept known as vehicle-to-grid (V2G). The V2G concept can improve the performance of the electricity grid in areas such as efficiency, stability, and reliability. A V2G-capable vehicle offers reactive power support, active power regulation, tracking of variable renewable energy sources, load balancing, and current harmonic filtering. These technologies can enable ancillary services, such as voltage and frequency control and spinning reserve. Costs of V2G include battery degradation, the need for intensive communication between the vehicles and the grid, effects on grid distribution equipment, infrastructure changes, and social, political, cultural and technical obstacles. Although V2G operation can reduce the lifetime of PEVs, it is projected to be more economical for vehicle owners and grid operators. This paper reviews these benefits and challenges of V2G technology for both individual vehicles and vehicle fleets.",
"title": ""
},
{
"docid": "9e6bfc7b5cc87f687a699c62da013083",
"text": "In order to establish low-cost and strongly-immersive desktop virtual experiment system, a solution based on Kinect and Unity3D engine technology was herein proposed, with a view to applying Kinect gesture recognition and triggering more spontaneous human-computer interactions in three-dimensional virtual environment. A kind of algorithm tailored to the detection of concave-convex points of fingers is put forward to identify various gestures and interaction semantics. In the context of Unity3D, Finite-State Machine (FSM) programming was applied in intelligent management for experimental logic tasks. A “Virtual Experiment System for Electrician Training” was designed and put into practice by these methods. The applications of “Lighting Circuit” module prove that these methods can be satisfyingly helpful to complete virtual experimental tasks and improve user experience. Compared with traditional WIMP interaction, Kinect somatosensory interaction is combined with Unity3D so that three-dimensional virtual system with strong immersion can be established.",
"title": ""
},
{
"docid": "a3f9ca7067ccfe944f614f775918498f",
"text": "The goal of this survey is to review the major idiosyncrasies of the commodity markets and the methods which have been proposed to handle them in spot and forward price models. We devote special attention to the most idiosyncratic of all: electricity markets. Following a discussion of traded instruments, market features, historical perspectives, recent developments and various modelling approaches, we focus on the important role of other energy prices and fundamental factors in setting the power price. In doing so, we present a detailed analysis of the structural approach for electricity, arguing for its merits over traditional reduced-form models. Building on several recent articles, we advocate a broad and flexible structural framework for spot prices, incorporating demand, capacity and fuel prices in several ways, while calculating closed-form forward prices throughout.",
"title": ""
}
] |
scidocsrr
|
c25f61d2eca07f1c3f89a0d1f4e32c27
|
Frame-Recurrent Video Super-Resolution
|
[
{
"docid": "784dc5ac8e639e3ba4103b4b8653b1ff",
"text": "Super-resolution reconstruction produces one or a set of high-resolution images from a set of low-resolution images. In the last two decades, a variety of super-resolution methods have been proposed. These methods are usually very sensitive to their assumed model of data and noise, which limits their utility. This paper reviews some of these methods and addresses their shortcomings. We propose an alternate approach using L/sub 1/ norm minimization and robust regularization based on a bilateral prior to deal with different data and noise models. This computationally inexpensive method is robust to errors in motion and blur estimation and results in images with sharp edges. Simulation results confirm the effectiveness of our method and demonstrate its superiority to other super-resolution methods.",
"title": ""
},
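The record above advocates an L1-norm data term with a bilateral-prior regularizer for robust super-resolution. The fragment below is a simplified, hypothetical sketch of one steepest-descent update of that kind of objective: blur is omitted, inter-frame motion is reduced to circular integer shifts, and decimation is plain subsampling, so this is not the paper's full observation model or implementation.

```python
import numpy as np

def shift(img, dy, dx):
    return np.roll(np.roll(img, dy, axis=0), dx, axis=1)  # circular shift (simplification)

def decimate(img, r):
    return img[::r, ::r]

def upsample_zeros(img, r, shape):
    out = np.zeros(shape)
    out[::r, ::r] = img
    return out

def sr_step(X, lr_frames, offsets, r, lam=0.01, alpha=0.6, P=2, step=1.0):
    """One steepest-descent step of an L1 data term plus bilateral-TV prior.
    lr_frames[k] is modeled (simplistically) as decimate(shift(X, *offsets[k]), r)."""
    grad = np.zeros_like(X)
    # L1 data fidelity: sign of the residual, pushed back through the forward model.
    for Y, (dy, dx) in zip(lr_frames, offsets):
        resid_sign = np.sign(decimate(shift(X, dy, dx), r) - Y)
        grad += shift(upsample_zeros(resid_sign, r, X.shape), -dy, -dx)
    # Bilateral total-variation regularizer over a (2P+1)x(2P+1) shift neighborhood.
    for l in range(-P, P + 1):
        for m in range(-P, P + 1):
            if l == 0 and m == 0:
                continue
            s = np.sign(X - shift(X, l, m))
            grad += lam * alpha ** (abs(l) + abs(m)) * (s - shift(s, -l, -m))
    return X - step * grad
```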
{
"docid": "33de1981b2d9a0aa1955602006d09db9",
"text": "The FlowNet demonstrated that optical flow estimation can be cast as a learning problem. However, the state of the art with regard to the quality of the flow has still been defined by traditional methods. Particularly on small displacements and real-world data, FlowNet cannot compete with variational methods. In this paper, we advance the concept of end-to-end learning of optical flow and make it work really well. The large improvements in quality and speed are caused by three major contributions: first, we focus on the training data and show that the schedule of presenting data during training is very important. Second, we develop a stacked architecture that includes warping of the second image with intermediate optical flow. Third, we elaborate on small displacements by introducing a subnetwork specializing on small motions. FlowNet 2.0 is only marginally slower than the original FlowNet but decreases the estimation error by more than 50%. It performs on par with state-of-the-art methods, while running at interactive frame rates. Moreover, we present faster variants that allow optical flow computation at up to 140fps with accuracy matching the original FlowNet.",
"title": ""
}
] |
[
{
"docid": "89ead93b4f234e50b6d6e70ad4f54d67",
"text": "Clinical impressions of metabolic disease problems in dairy herds can be corroborated with herd-based metabolic testing. Ruminal pH should be evaluated in herds showing clinical signs associated with SARA (lame cows, thin cows, high herd removals or death loss across all stages of lactation, or milk fat depression). Testing a herd for the prevalence of SCK via blood BHB sampling in early lactation is useful in almost any dairy herd, and particularly if the herd is experiencing a high incidence of displaced abomasum or high removal rates of early lactation cows. If cows are experiencing SCK within the first 3 weeks of lactation, then consider NEFA testing of the prefresh cows to corroborate prefresh negative energy balance. Finally, monitoring cows on the day of calving for parturient hypocalcemia can provide early detection of diet-induced problems in calcium homeostasis. If hypocalcemia problems are present despite supplementing anionic salts before calving, then it may be helpful to evaluate mean urinary pH of a group of the prefresh cows. Quantitative testing strategies based on statistical analyses can be used to establish minimum sample sizes and interpretation guidelines for all of these tests.",
"title": ""
},
{
"docid": "5857805620b43cafa7a18461dfb74363",
"text": "In this paper, we give an overview for the shared task at the 5th CCF Conference on Natural Language Processing & Chinese Computing (NLPCC 2016): Chinese word segmentation for micro-blog texts. Different with the popular used newswire datasets, the dataset of this shared task consists of the relatively informal micro-texts. Besides, we also use a new psychometric-inspired evaluation metric for Chinese word segmentation, which addresses to balance the very skewed word distribution at different levels of difficulty. The data and evaluation codes can be downloaded from https://github.com/FudanNLP/ NLPCC-WordSeg-Weibo.",
"title": ""
},
{
"docid": "166b9cb75f8f81e3f143a44b1b3e0b99",
"text": "This study aimed to classify different emotional states by means of EEG-based functional connectivity patterns. Forty young participants viewed film clips that evoked the following emotional states: neutral, positive, or negative. Three connectivity indices, including correlation, coherence, and phase synchronization, were used to estimate brain functional connectivity in EEG signals. Following each film clip, participants were asked to report on their subjective affect. The results indicated that the EEG-based functional connectivity change was significantly different among emotional states. Furthermore, the connectivity pattern was detected by pattern classification analysis using Quadratic Discriminant Analysis. The results indicated that the classification rate was better than chance. We conclude that estimating EEG-based functional connectivity provides a useful tool for studying the relationship between brain activity and emotional states.",
"title": ""
},
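One of the connectivity indices named in the record above is phase synchronization. A minimal sketch of the phase-locking value (PLV) between two channels, estimated via the analytic signal from a Hilbert transform, is given below; the example signals are synthetic and the sampling parameters are assumptions.

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(x, y):
    """Phase synchronization index between two equal-length signals:
    magnitude of the mean unit phasor of the instantaneous phase difference."""
    phase_x = np.angle(hilbert(x))
    phase_y = np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))

# Hypothetical example: two noisy signals sharing a 10 Hz component.
t = np.arange(0, 4, 1 / 256)
x = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)
y = np.sin(2 * np.pi * 10 * t + 0.3) + 0.5 * np.random.randn(t.size)
print(phase_locking_value(x, y))
```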
{
"docid": "14049dd7ee7a07107702c531fec4ff61",
"text": "Reducing errors and improving quality are an integral part of Pathology and Laboratory Medicine. The rate of errors is reviewed for the pre-analytical, analytical, and post-analytical phases for a specimen. The quality systems in place in pathology today are identified and compared with benchmarks for quality. The types and frequency of errors and quality systems are reviewed for surgical pathology, cytopathology, clinical chemistry, hematology, microbiology, molecular biology, and transfusion medicine. Seven recommendations are made to reduce errors in future for Pathology and Laboratory Medicine.",
"title": ""
},
{
"docid": "af63f1e1efbb15f2f41a91deb6ec1e32",
"text": "OBJECTIVES\n: A systematic review of the literature to determine the ability of dynamic changes in arterial waveform-derived variables to predict fluid responsiveness and compare these with static indices of fluid responsiveness. The assessment of a patient's intravascular volume is one of the most difficult tasks in critical care medicine. Conventional static hemodynamic variables have proven unreliable as predictors of volume responsiveness. Dynamic changes in systolic pressure, pulse pressure, and stroke volume in patients undergoing mechanical ventilation have emerged as useful techniques to assess volume responsiveness.\n\n\nDATA SOURCES\n: MEDLINE, EMBASE, Cochrane Register of Controlled Trials and citation review of relevant primary and review articles.\n\n\nSTUDY SELECTION\n: Clinical studies that evaluated the association between stroke volume variation, pulse pressure variation, and/or stroke volume variation and the change in stroke volume/cardiac index after a fluid or positive end-expiratory pressure challenge.\n\n\nDATA EXTRACTION AND SYNTHESIS\n: Data were abstracted on study design, study size, study setting, patient population, and the correlation coefficient and/or receiver operating characteristic between the baseline systolic pressure variation, stroke volume variation, and/or pulse pressure variation and the change in stroke index/cardiac index after a fluid challenge. When reported, the receiver operating characteristic of the central venous pressure, global end-diastolic volume index, and left ventricular end-diastolic area index were also recorded. Meta-analytic techniques were used to summarize the data. Twenty-nine studies (which enrolled 685 patients) met our inclusion criteria. Overall, 56% of patients responded to a fluid challenge. The pooled correlation coefficients between the baseline pulse pressure variation, stroke volume variation, systolic pressure variation, and the change in stroke/cardiac index were 0.78, 0.72, and 0.72, respectively. The area under the receiver operating characteristic curves were 0.94, 0.84, and 0.86, respectively, compared with 0.55 for the central venous pressure, 0.56 for the global end-diastolic volume index, and 0.64 for the left ventricular end-diastolic area index. The mean threshold values were 12.5 +/- 1.6% for the pulse pressure variation and 11.6 +/- 1.9% for the stroke volume variation. The sensitivity, specificity, and diagnostic odds ratio were 0.89, 0.88, and 59.86 for the pulse pressure variation and 0.82, 0.86, and 27.34 for the stroke volume variation, respectively.\n\n\nCONCLUSIONS\n: Dynamic changes of arterial waveform-derived variables during mechanical ventilation are highly accurate in predicting volume responsiveness in critically ill patients with an accuracy greater than that of traditional static indices of volume responsiveness. This technique, however, is limited to patients who receive controlled ventilation and who are not breathing spontaneously.",
"title": ""
},
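The dynamic index central to the record above, pulse pressure variation, follows a simple formula over the beats of one respiratory cycle. The sketch below states that standard computation; the beat values are hypothetical, and the ~12.5% threshold quoted in the comment is the mean threshold reported in the review.

```python
def pulse_pressure_variation(pulse_pressures):
    """Pulse pressure variation (%) over one respiratory cycle:
    100 * (PPmax - PPmin) / mean(PPmax, PPmin).
    `pulse_pressures` is the list of beat-to-beat pulse pressures (mmHg)."""
    pp_max, pp_min = max(pulse_pressures), min(pulse_pressures)
    return 100.0 * (pp_max - pp_min) / ((pp_max + pp_min) / 2.0)

# Hypothetical beats within one mechanical breath; ~23% exceeds the ~12.5% threshold.
print(pulse_pressure_variation([42, 45, 48, 41, 38, 40]))
```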
{
"docid": "b07cbf3da9e3ff9691dcb49040c7e6a5",
"text": "Few years ago, the information flow in library was relatively simple and the application of technology was limited. However, as we progress into a more integrated world where technology has become an integral part of the business processes, the process of transfer of information has become more complicated. Today, one of the biggest challenges that libraries face is the explosive growth of library data and to use this data to improve the quality of managerial decisions. Data mining techniques are analytical tools that can be used to extract meaningful knowledge from large data sets. This paper addresses the applications of data mining in library to extract useful information from the huge data sets and providing analytical tool to view and use this information for decision making processes by taking real life examples.",
"title": ""
},
{
"docid": "1224987c5fdd228cc38bf1ee3aeb6f2d",
"text": "Many existing studies of social media focus on only one platform, but the reality of users' lived experiences is that most users incorporate multiple platforms into their communication practices in order to access the people and networks they desire to influence. In order to better understand how people make sharing decisions across multiple sites, we asked our participants (N=29) to categorize all modes of communication they used, with the goal of surfacing their mental models about managing sharing across platforms. Our interview data suggest that people simultaneously consider \"audience\" and \"content\" when sharing and these needs sometimes compete with one another; that they have the strong desire to both maintain boundaries between platforms as well as allowing content and audience to permeate across these boundaries; and that they strive to stabilize their own communication ecosystem yet need to respond to changes necessitated by the emergence of new tools, practices, and contacts. We unpack the implications of these tensions and suggest future design possibilities.",
"title": ""
},
{
"docid": "f71b608183cb3673ca15e1348e670791",
"text": "Representing knowledge about researchers and research communities is a prime use case for distributed, locally maintained, interlinked and highly structured information in the spirit of the Semantic Web. In this paper we describe the publicly available ‘Semantic Web for Research Communities’ (SWRC) ontology, in which research communities and relevant related concepts are modelled. We describe the design decisions that underlie the ontology and report on both experiences with and known usages of the SWRC Ontology. We believe that for making the Semantic Web reality the re-usage of ontologies and their continuous improvement by user communities is crucial. Our contribution aims to provide a description and usage guidelines to make the value of the SWRC explicit and to facilitate",
"title": ""
},
{
"docid": "7304805b7f5f8d22ef9f3ce02f8954e6",
"text": "A novel inductor switching technique is used to design and implement a wideband LC voltage controlled oscillator (VCO) in 0.13µm CMOS. The VCO has a tuning range of 87.2% between 3.3 and 8.4 GHz with phase noise ranging from −122 to −117.2 dBc/Hz at 1MHz offset. The power varies between 6.5 and 15.4 mW over the tuning range. This results in a Power-Frequency-Tuning Normalized figure of merit (PFTN) between 6.6 and 10.2 dB which is one of the best reported to date.",
"title": ""
},
{
"docid": "521b77c549c0fdb7edb609fbde7f6abc",
"text": "User recommender systems are a key component in any on-line social networking platform: they help the users growing their network faster, thus driving engagement and loyalty.\n In this paper we study link prediction with explanations for user recommendation in social networks. For this problem we propose WTFW (\"Who to Follow and Why\"), a stochastic topic model for link prediction over directed and nodes-attributed graphs. Our model not only predicts links, but for each predicted link it decides whether it is a \"topical\" or a \"social\" link, and depending on this decision it produces a different type of explanation.\n A topical link is recommended between a user interested in a topic and a user authoritative in that topic: the explanation in this case is a set of binary features describing the topic responsible of the link creation. A social link is recommended between users which share a large social neighborhood: in this case the explanation is the set of neighbors which are more likely to be responsible for the link creation.\n Our experimental assessment on real-world data confirms the accuracy of WTFW in the link prediction and the quality of the associated explanations.",
"title": ""
},
{
"docid": "576c215649f09f2f6fb75369344ce17f",
"text": "The emergence of two new technologies, namely, software defined networking (SDN) and network function virtualization (NFV), have radically changed the development of network functions and the evolution of network architectures. These two technologies bring to mobile operators the promises of reducing costs, enhancing network flexibility and scalability, and shortening the time-to-market of new applications and services. With the advent of SDN and NFV and their offered benefits, the mobile operators are gradually changing the way how they architect their mobile networks to cope with ever-increasing growth of data traffic, massive number of new devices and network accesses, and to pave the way toward the upcoming fifth generation networking. This survey aims at providing a comprehensive survey of state-of-the-art research work, which leverages SDN and NFV into the most recent mobile packet core network architecture, evolved packet core. The research work is categorized into smaller groups according to a proposed four-dimensional taxonomy reflecting the: 1) architectural approach, 2) technology adoption, 3) functional implementation, and 4) deployment strategy. Thereafter, the research work is exhaustively compared based on the proposed taxonomy and some added attributes and criteria. Finally, this survey identifies and discusses some major challenges and open issues, such as scalability and reliability, optimal resource scheduling and allocation, management and orchestration, and network sharing and slicing that raise from the taxonomy and comparison tables that need to be further investigated and explored.",
"title": ""
},
{
"docid": "0458cd9d19a2837c5890977adb7fbcc8",
"text": "Nowadays, the development of traditional business models become more and more mature that people use them to guide various kinds of E-business activities. Internet of things (IoT), being an innovative revolution over the Internet, becomes a new platform for E-business. However, old business models could hardly fit for the E-business on the IoT. In this article, we 1) propose an IoT E-business model, which is specially designed for the IoT E-business; 2) redesign many elements in traditional E-business models; 3) realize the transaction of smart property and paid data on the IoT with the help of P2P trade based on the Blockchain and smart contract. We also experiment our design and make a comprehensive discuss.",
"title": ""
},
{
"docid": "bc726085dace24ccdf33a2cd58ab8016",
"text": "The output of high-level synthesis typically consists of a netlist of generic RTL components and a state sequencing table. While module generators and logic synthesis tools can be used to map RTL components into standard cells or layout geometries, they cannot provide technology mapping into the data book libraries of functional RTL cells used commonly throughout the industrial design community. In this paper, we introduce an approach to implementing generic RTL components with technology-specific RTL library cells. This approach addresses the criticism of designers who feel that high-level synthesis tools should be used in conjunction with existing RTL data books. We describe how GENUS, a library of generic RTL components, is organized for use in high-level synthesis and how DTAS, a functional synthesis system, is used to map GENUS components into RTL library cells.",
"title": ""
},
{
"docid": "a2013a7c9212829187fff9bfa42665e5",
"text": "As companies increase their efforts in retaining customers, being able to predict accurately ahead of time, whether a customer will churn in the foreseeable future is an extremely powerful tool for any marketing team. The paper describes in depth the application of Deep Learning in the problem of churn prediction. Using abstract feature vectors, that can generated on any subscription based company’s user event logs, the paper proves that through the use of the intrinsic property of Deep Neural Networks (learning secondary features in an unsupervised manner), the complete pipeline can be applied to any subscription based company with extremely good churn predictive performance. Furthermore the research documented in the paper was performed for Framed Data (a company that sells churn prediction as a service for other companies) in conjunction with the Data Science Institute at Lancaster University, UK. This paper is the intellectual property of Framed Data.",
"title": ""
},
{
"docid": "fb0648489dcf41e98ad617657725a66e",
"text": "In this paper, a triple active bridge converter is proposed. The topology is capable of achieving ZVS across the full load range with wide input voltage while minimizing heavy load conduction losses to increase overall efficiency. This topology comprises three full bridges coupled by a three-winding transformer. At light load, by adjusting the phase shift between two input bridges, all switching devices can maintain ZVS due to a controlled circulating current. At heavy load, the two input bridges work in parallel to reduce conduction loss. The operation principles of this topology are introduced and the ZVS boundaries are derived. Based on analytical models of power loss, a 200W laboratory prototype has been built to verify theoretical considerations.",
"title": ""
},
{
"docid": "556c0c1662a64f484aff9d7556b2d0b5",
"text": "In this paper, we investigate the Chinese calligraphy synthesis problem: synthesizing Chinese calligraphy images with specified style from standard font(eg. Hei font) images (Fig. 1(a)). Recent works mostly follow the stroke extraction and assemble pipeline which is complex in the process and limited by the effect of stroke extraction. In this work we treat the calligraphy synthesis problem as an image-to-image translation problem and propose a deep neural network based model which can generate calligraphy images from standard font images directly. Besides, we also construct a large scale benchmark that contains various styles for Chinese calligraphy synthesis. We evaluate our method as well as some baseline methods on the proposed dataset, and the experimental results demonstrate the effectiveness of our proposed model.",
"title": ""
},
{
"docid": "cf707baa5b30bcdb75d2f3a6e01862e8",
"text": "Engagement in the arts1 is an important component of participation in cultural activities, but remains a largely unaddressed challenge for people with sensory disabilities. Visual arts are generally inaccessible to people with visual impairments due to their inherently visual nature. To address this, we present Eyes-Free Art, a design probe to explore the use of proxemic audio for interactive sonic experiences with 2D art work. The proxemic audio interface allows a user to move closer and further away from a painting to experience background music, a novel sonification, sound effects, and a detailed verbal description. We conducted a lab study by creating interpretations of five paintings with 13 people with visual impairments and found that participants enjoyed interacting with the artwork. We then created a live installation with a visually impaired artist to iterate on this concept to account for multiple users and paintings. We learned that a proxemic audio interface allows for people to feel immersed in the artwork. Proxemic audio interfaces are similar to visual because they increase in detail with closer proximity, but are different because they need a descriptive verbal overview to give context. We present future research directions in the space of proxemic audio interactions.",
"title": ""
},
{
"docid": "d2ce66a758efcb045e42e8accd7ba292",
"text": "Incorporating a human computer interaction (HCI) perspective into the systems development life cycle (SDLC) is critical to information systems (IS) success and in turn to the success of businesses. However, modern SDLC models are based more on organizational needs than human needs. The human interaction aspect of an information system is considered far too little (only the screen interface) and far too late in the IS development process (only at the design stage). Thus there is often a gap between satisfying organizational needs and supporting and enriching human users as they use the system for their tasks. This problem can be fixed by carefully integrating HCI development into the SDLC process to achieve a truly human-centered IS development approach. This tutorial presents a methodology for such human-centered IS development where human requirements for the whole system are emphasized. An example of applying such methodology is used to illustrate the important concepts and techniques.",
"title": ""
},
{
"docid": "ac4d208a022717f6389d8b754abba80b",
"text": "This paper presents a new approach to detect tabular structures present in document images and in low resolution video images. The algorithm for table detection is based on identifying the unique table start pattern and table trailer pattern. We have formulated perceptual attributes to characterize the patterns. The performance of our table detection system is tested on a set of document images picked from UW-III (University of Washington) dataset, UNLV dataset, video images of NPTEL videos, and our own dataset. Our approach demonstrates improved detection for different types of table layouts, with or without ruling lines. We have obtained correct table localization on pages with multiple tables aligned side-by-side.",
"title": ""
},
{
"docid": "13f1b9cf251b3b37de00cb68b17652c0",
"text": "This is an updated and expanded version of TR2000-26, but it is still in draft form. More importantly, our analysis lets us build on the progress made in statistical physics since Bethe’s approximation was introduced in 1935. Kikuchi and others have shown how to construct more accurate free energy approximations, of which Bethe’s approximation is the simplest. Exploiting the insights from our analysis, we derive generalized belief propagation (GBP) versions of these Kikuchi approximations. These new message passing algorithms can be significantly more accurate than ordinary BP, at an adjustable increase in complexity. We illustrate such a new GBP algorithm on a grid Markov network and show that it gives much more accurate marginal probabilities than those found using ordinary BP. This work may not be copied or reproduced in whole or in part for any commercial purpose. Permission to copy in whole or in part without payment of fee is granted for nonprofit educational and research purposes provided that all such whole or partial copies include the following: a notice that such copying is by permission of Mitsubishi Electric Research Laboratories, Inc.; an acknowledgment of the authors and individual contributions to the work; and all applicable portions of the copyright notice. Copying, reproduction, or republishing for any other purpose shall require a license with payment of fee to Mitsubishi Electric Research Laboratories, Inc. All rights reserved. Copyright c ©Mitsubishi Electric Research Laboratories, Inc., 2001 201 Broadway, Cambridge, Massachusetts 02139",
"title": ""
}
] |
scidocsrr
|
965929afe3b466d809918951997102ed
|
Modelling input texts: from Tree Kernels to Deep Learning
|
[
{
"docid": "e8dbadf197a6222bdcd8604915811bb8",
"text": "In this paper we present SenTube – a dataset of user-generated comments on YouTube videos annotated for information content and sentiment polarity. It contains annotations that allow to develop classifiers for several important NLP tasks: (i) sentiment analysis, (ii) text categorization (relatedness of a comment to video and/or product), (iii) spam detection, and (iv) prediction of comment informativeness. The SenTube corpus favors the development of research on indexing and searching YouTube videos exploiting information derived from comments. The corpus will cover several languages: at the moment, we focus on English and Italian, with Spanish and Dutch parts scheduled for the later stages of the project. For all the languages, we collect videos for the same set of products, thus offering possibilities for multiand cross-lingual experiments. The paper provides annotation guidelines, corpus statistics and annotator agreement details.",
"title": ""
}
] |
[
{
"docid": "0d5b33ce7e1a1af17751559c96fdcf0a",
"text": "Urban-related data and geographic information are becoming mainstream in the Linked Data community due also to the popularity of Location-based Services. In this paper, we introduce the UrbanMatch game, a mobile gaming application that joins data linkage and data quality/trustworthiness assessment in an urban environment. By putting together Linked Data and Human Computation, we create a new interaction paradigm to consume and produce location-specific linked data by involving and engaging the final user. The UrbanMatch game is also offered as an example of value proposition and business model of a new family of linked data applications based on gaming in Smart Cities.",
"title": ""
},
{
"docid": "234fcc911f6d94b6bbb0af237ad5f34f",
"text": "Contamination of samples with DNA is still a major problem in microbiology laboratories, despite the wide acceptance of PCR and other amplification techniques for the detection of frequently low amounts of target DNA. This review focuses on the implications of contamination in the diagnosis and research of infectious diseases, possible sources of contaminants, strategies for prevention and destruction, and quality control. Contamination of samples in diagnostic PCR can have far-reaching consequences for patients, as illustrated by several examples in this review. Furthermore, it appears that the (sometimes very unexpected) sources of contaminants are diverse (including water, reagents, disposables, sample carry over, and amplicon), and contaminants can also be introduced by unrelated activities in neighboring laboratories. Therefore, lack of communication between researchers using the same laboratory space can be considered a risk factor. Only a very limited number of multicenter quality control studies have been published so far, but these showed false-positive rates of 9–57%. The overall conclusion is that although nucleic acid amplification assays are basically useful both in research and in the clinic, their accuracy depends on awareness of risk factors and the proper use of procedures for the prevention of nucleic acid contamination. The discussion of prevention and destruction strategies included in this review may serve as a guide to help improve laboratory practices and reduce the number of false-positive amplification results.",
"title": ""
},
{
"docid": "99e604a84b6d56d2f42efe7b0a2ddec8",
"text": "This work aims at providing a RLCG modeling ofthe 10 µm fine-pitch microbump type interconnects in the 100 MHz-40 GHz frequency band based on characterization approach. RF measurements are performed on two-port test structures within a short-loop with chip to wafer assembly using the fine pitch 10 µm Cu-pillar on a 10 Ohm.cm substrate resistivity silicon interposer. Accuracy is obtained thanks to a coplanar transmission line using 44 Cu-pillar transitions. To the author knowledge, it is the first time that RLCG modeling of fine-pitch Cu-pillar is extracted from experimental results. Another goal of this work is to get a better understanding of the main physical effects over a wide frequency range, especially concerning the key parameter of fine pitch Cu-pillar, i.e. the resistance. Finally, analysis based on the proposed RLCG modeling are performed to optimize over frequency the resistive interposer-to-chip link thanks to process modifications mitigating high frequency parasitic effects.",
"title": ""
},
{
"docid": "c1d436c01088c2295b35a1a37e922bee",
"text": "Tourism is an important part of national economy. On the other hand it can also be a source of some negative externalities. These are mainly environmental externalities, resulting in increased pollution, aesthetic or architectural damages. High concentration of visitors may also lead to increased crime, or aggressiveness. These may have negative effects on quality of life of residents and negative experience of visitors. The paper deals with the influence of tourism on destination environment. It highlights the necessity of sustainable forms of tourism and activities to prevent negative implication of tourism, such as education activities and tourism monitoring. Key-words: Tourism, Mass Tourism, Development, Sustainability, Tourism Impact, Monitoring.",
"title": ""
},
{
"docid": "4eafe7f60154fa2bed78530735a08878",
"text": "Although Android's permission system is intended to allow users to make informed decisions about their privacy, it is often ineffective at conveying meaningful, useful information on how a user's privacy might be impacted by using an application. We present an alternate approach to providing users the knowledge needed to make informed decisions about the applications they install. First, we create a knowledge base of mappings between API calls and fine-grained privacy-related behaviors. We then use this knowledge base to produce, through static analysis, high-level behavior profiles of application behavior. We have analyzed almost 80,000 applications to date and have made the resulting behavior profiles available both through an Android application and online. Nearly 1500 users have used this application to date. Based on 2782 pieces of application-specific feedback, we analyze users' opinions about how applications affect their privacy and demonstrate that these profiles have had a substantial impact on their understanding of those applications. We also show the benefit of these profiles in understanding large-scale trends in how applications behave and the implications for user privacy.",
"title": ""
},
{
"docid": "e0d040efd131db568d875b80c6adc111",
"text": "Familism is a cultural value that emphasizes interdependent family relationships that are warm, close, and supportive. We theorized that familism values can be beneficial for romantic relationships and tested whether (a) familism would be positively associated with romantic relationship quality and (b) this association would be mediated by less attachment avoidance. Evidence indicates that familism is particularly relevant for U.S. Latinos but is also relevant for non-Latinos. Thus, we expected to observe the hypothesized pattern in Latinos and explored whether the pattern extended to non-Latinos of European and East Asian cultural background. A sample of U.S. participants of Latino (n 1⁄4 140), European (n 1⁄4 176), and East Asian (n 1⁄4 199) cultural background currently in a romantic relationship completed measures of familism, attachment, and two indices of romantic relationship quality, namely, partner support and partner closeness. As predicted, higher familism was associated with higher partner support and partner closeness, and these associations were mediated by lower attachment avoidance in the Latino sample. This pattern was not observed in the European or East Asian background samples. The implications of familism for relationships and psychological processes relevant to relationships in Latinos and non-Latinos are discussed. 1 University of California, Irvine, USA 2 University of California, Los Angeles, USA Corresponding author: Belinda Campos, Department of Chicano/Latino Studies, University of California, Irvine, 3151 Social Sciences Plaza A, Irvine, CA 92697, USA. Email: bcampos@uci.edu Journal of Social and Personal Relationships 1–20 a The Author(s) 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav DOI: 10.1177/0265407514562564 spr.sagepub.com J S P R at UNIV CALIFORNIA IRVINE on January 5, 2015 spr.sagepub.com Downloaded from",
"title": ""
},
{
"docid": "cb196f1bd373110cf7428a46f73f3a8f",
"text": "We present Corona, a wearable device that allows constant high-voltage electrostatic charge to be continuously accumulated in the human body. We propose the usages of Corona for three basic functions; generating haptic sensations, generating electric power from body static charge and near-body electric field, and inducing physical force near the body. We describe detailed principle of operation, analysis of produced energy and force, discussion on safety issues, as well as demonstration of proof-of-concept applications for aforementioned basic functions. We conclude with discussion of our experiments using the prototype and applications, which also involves a study to gather user feedbacks. To the best of our knowledge, Corona is the first work to exploit continuous high-voltage static charge on the human body for Human-Computer Interaction purposes.",
"title": ""
},
{
"docid": "fdb3afefbb8e96eed2e35e8c2a3fd015",
"text": "BACKGROUND\nAvoidant/Restrictive Food Intake Disorder (ARFID) is a \"new\" diagnosis in the recently published DSM-5, but there is very little literature on patients with ARFID. Our objectives were to determine the prevalence of ARFID in children and adolescents undergoing day treatment for an eating disorder, and to compare ARFID patients to other eating disorder patients in the same cohort.\n\n\nMETHODS\nA retrospective chart review of 7-17 year olds admitted to a day program for younger patients with eating disorders between 2008 and 2012 was performed. Patients with ARFID were compared to those with anorexia nervosa, bulimia nervosa, and other specified feeding or eating disorder/unspecified feeding or eating disorder with respect to demographics, anthropometrics, clinical symptoms, and psychometric testing, using Chi-square, ANOVA, and post-hoc analysis.\n\n\nRESULTS\n39/173 (22.5%) patients met ARFID criteria. The ARFID group was younger than the non-ARFID group and had a greater proportion of males. Similar degrees of weight loss and malnutrition were found between groups. Patients with ARFID reported greater fears of vomiting and/or choking and food texture issues than those with other eating disorders, as well as greater dependency on nutritional supplements at intake. Children's Eating Attitudes Test scores were lower for children with than without ARFID. A higher comorbidity of anxiety disorders, pervasive developmental disorder, and learning disorders, and a lower comorbidity of depression, were found in those with ARFID.\n\n\nCONCLUSIONS\nThis study demonstrates that there are significant demographic and clinical characteristics that differentiate children with ARFID from those with other eating disorders in a day treatment program, and helps substantiate the recognition of ARFID as a distinct eating disorder diagnosis in the DSM-5.",
"title": ""
},
{
"docid": "4f66bf9da23c0beb562dfaeb3af18d93",
"text": "Cloud computing concept has been envisioned as architecture of the next generation for Information Technology (IT) enterprise. The Cloud computing idea offers with dynamic scalable resources provisioned as examine on the Internet. It allows access to remote computing services and users only have to pay for what they want to use, when they want to use it. But the security of the information which is stored in the cloud is the major issue for a cloud user. Cloud computing has been flourishing in past years because of its ability to provide users with on-demand, flexible, reliable, and low-cost services. With more and more cloud applications being available, data security becomes an important issue to the cloud. In order to make sure security of the information at cloud data storage end, a design and implementation of an algorithm to enhance cloud security is proposed. With a concept, where the proposed algorithm (PA) combines features of two other existing algorithms named Ceaser cipher and Attribute based cryptography (ABC). In this research work, text information are encrypting using "Caesar Cipher" then produced cipher text again encrypted by using proposed algorithm (PA) with the help of private key of 128 bits. And in the last step of encryption process, based on ABC, attribute related to cipher text is stored along with cipher text",
"title": ""
},
{
"docid": "51bc87524f064f715bb5876f21468d9d",
"text": "Cloud computing provides an effective business model for the deployment of IT infrastructure, platform, and software services. Often, facilities are outsourced to cloud providers and this offers the service consumer virtualization technologies without the added cost burden of development. However, virtualization introduces serious threats to service delivery such as Denial of Service (DoS) attacks, Cross-VM Cache Side Channel attacks, Hypervisor Escape and Hyper-jacking. One of the most sophisticated forms of attack is the cross-VM cache side channel attack that exploits shared cache memory between VMs. A cache side channel attack results in side channel data leakage, such as cryptographic keys. Various techniques used by the attackers to launch cache side channel attack are presented, as is a critical analysis of countermeasures against cache side channel attacks.",
"title": ""
},
{
"docid": "2c251c8f1fcf15510a5c82de33daced3",
"text": "BACKGROUND\nOverall diet quality measurements have been suggested as a useful tool to assess diet-disease relationships. Oxidative stress has been related to the development of obesity and other chronic diseases. Furthermore, antioxidant intake is being considered as protective against cell oxidative damage and related metabolic complications.\n\n\nOBJECTIVE\nTo evaluate potential associations between the dietary total antioxidant capacity of foods (TAC), the energy density of the diet, and other relevant nutritional quality indexes in healthy young adults.\n\n\nMETHODS\nSeveral anthropometric variables from 153 healthy participants (20.8 +/- 2.7 years) included in this study were measured. Dietary intake was assessed by a validated food-frequency questionnaire, which was also used to calculate the dietary TAC and for daily energy intake adjustment.\n\n\nRESULTS\nPositive significant associations were found between dietary TAC and Mediterranean energy density hypothesis-oriented dietary scores (Mediterranean Diet Score, Alternate Mediterranean Diet Score, Modified Mediterranean Diet Score), non-Mediterranean hypothesis-oriented dietary scores (Healthy Eating Index, Alternate Healthy Eating Index, Diet Quality Index-International, Diet Quality Index-Revised), and diversity of food intake indicators (Recommended Food Score, Quantitative Index for Dietary Diversity in terms of total energy intake). The Mediterranean Diet Quality Index and Diet Quality Index scores (a Mediterranean and a non-Mediterranean hypothesis-oriented dietary score, respectively), whose lower values refer to a higher diet quality, decreased with higher values of dietary TAC. Energy density was also inversely associated with dietary TAC.\n\n\nCONCLUSION\nThese data suggest that dietary TAC, as a measure of antioxidant intake, may also be a potential marker of diet quality in healthy subjects, providing a novel approach to assess the role of antioxidant intake on health promotion and diet-based therapies.",
"title": ""
},
{
"docid": "d6039a3f998b33c08b07696dfb1c2ca9",
"text": "In this paper, we propose a platform surveillance monitoring system using image processing technology for passenger safety in railway station. The proposed system monitors almost entire length of the track line in the platform by using multiple cameras, and determines in real-time whether a human or dangerous obstacle is in the preset monitoring area by using image processing technology. According to the experimental results, we verity system performance in real condition. Detection of train state and object is conducted robustly by using proposed image processing algorithm. Moreover, to deal with the accident immediately, the system provides local station, central control room and train with the video information and alarm message.",
"title": ""
},
{
"docid": "631c42f3f0aa5ccbb26d52ad185505bf",
"text": "Roll-to-roll embossing is able to produce functional optical films with high throughput and lower cost, compared to other conventional manufacturing technologies such like injection molding. In roll-to-roll ultraviolet (UV) embossing, the functional microstructures on an optical film are directly replicated from a high-precision roller mold; the roller mold requires strict control of profile accuracy and optical surface quality which have to be achieved using ultra-precision machining technology. In this study, optical films with linear Fresnel lens array have been developed by using ultra-precision diamond tuning for mold fabrication and roll-to-roll UV embossing for transferring the features from mold to the flexible plastic substrates. The roller mould for making Fresnel lens array was designed in such a way that one Fresnel lens was composed of 166 tiny prisms and all prisms have an equal-feature-height of 0.1 mm, resulting in a focal length of 82.5 mm. With the optimized roll-to-roll UV embossing process, the Fresnel lens film produced in large area demonstrated strong light concentration effect on a solar simulation system, with light intensity increased by more than 6 times.",
"title": ""
},
{
"docid": "e45cb7f8283a25aa5485acf970ccfdd7",
"text": "Excitatory control of inhibitory neurons is poorly understood due to the difficulty of studying synaptic connectivity in vivo. We inferred such connectivity through analysis of spike timing and validated this inference using juxtacellular and optogenetic control of presynaptic spikes in behaving mice. We observed that neighboring CA1 neurons had stronger connections and that superficial pyramidal cells projected more to deep interneurons. Connection probability and strength were skewed, with a minority of highly connected hubs. Divergent presynaptic connections led to synchrony between interneurons. Synchrony of convergent presynaptic inputs boosted postsynaptic drive. Presynaptic firing frequency was read out by postsynaptic neurons through short-term depression and facilitation, with individual pyramidal cells and interneurons displaying a diversity of spike transmission filters. Additionally, spike transmission was strongly modulated by prior spike timing of the postsynaptic cell. These results bridge anatomical structure with physiological function.",
"title": ""
},
{
"docid": "55d08da55a64d35f3115911f3cc22e82",
"text": "Process modeling has become an essential part of many organizations for documenting, analyzing and redesigning their business operations and to support them with suitable information systems. In order to serve this purpose, it is important for process models to be well grounded in formal and precise semantics. While behavioural semantics of process models are well understood, there is a considerable gap of research into the semantic aspects of their text labels and natural language descriptions. The aim of this paper is to make this research gap more transparent. To this end, we clarify the role of textual content in process models and the challenges that are associated with the interpretation, analysis, and improvement of their natural language parts. More specifically, we discuss particular use cases of semantic process modeling to identify 25 challenges. For each challenge, we identify prior research and discuss directions for addressing them.",
"title": ""
},
{
"docid": "a99785b0563ca5922da304f69aa370c0",
"text": "Marcel Fritz, Christian Schlereth, Stefan Figge Empirical Evaluation of Fair Use Flat Rate Strategies for Mobile Internet The fair use flat rate is a promising tariff concept for the mobile telecommunication industry. Similar to classical flat rates it allows unlimited usage at a fixed monthly fee. Contrary to classical flat rates it limits the access speed once a certain usage threshold is exceeded. Due to the current global roll-out of the LTE (Long Term Evolution) technology and the related economic changes for telecommunication providers, the application of fair use flat rates needs a reassessment. We therefore propose a simulation model to evaluate different pricing strategies and their contribution margin impact. The key input element of the model is provided by socalled discrete choice experiments that allow the estimation of customer preferences. Based on this customer information and the simulation results, the article provides the following recommendations. Classical flat rates do not allow profitable provisioning of mobile Internet access. Instead, operators should apply fair use flat rates with a lower usage threshold of 1 or 3 GB which leads to an improved contribution margin. Bandwidth and speed are secondary and do merely impact customer preferences. The main motivation for new mobile technologies such as LTE should therefore be to improve the cost structure of an operator rather than using it to skim an assumed higher willingness to pay of mobile subscribers.",
"title": ""
},
{
"docid": "18f9eb6f90eb09393395e4cb5a12ea01",
"text": " Topic modeling refers to the process of algorithmically sorting documents into categories based on some common relationship between the documents. This common relationship between the documents is considered the “topic” of the documents. Sentiment analysis refers to the process of algorithmically sorting a document into a positive or negative category depending whether this document expresses a positive or negative opinion on its respective topic. In this paper, I consider the open problem of document classification into a topic category, as well as a sentiment category. This has a direct application to the retail industry where companies may want to scour the web in order to find documents (blogs, Amazon reviews, etc.) which both speak about their product, and give an opinion on their product (positive, negative or neutral). My solution to this problem uses a Non-negative Matrix Factorization (NMF) technique in order to determine the topic classifications of a document set, and further factors the matrix in order to discover the sentiment behind this category of product. Introduction to Sentiment Analysis: In the United States, the incredible accessibility of the internet gives a voice to every consumer. Furthermore, internet blogs and common review sites are the first places the majority of consumers turn to when researching the pros and cons behind a product they are looking to purchase. Discovering sentiment and insights behind a company's products is hardly a new challenge, but a constantly evolving one given the complexity and sheer number of reviews buried inside a multitude of internet domains. Internet reviews, perhaps more frequently than a traditionally written and formatted magazine or newspaper review, are riddled with sarcasm, abbreviations, and slang. Simple text classification techniques based on analyzing the number of positive and negative words that occur in a document are error prone because of this. The internet requires a new solution to the trend of “reviews in 140 characters or less” which will necessitate unsupervised or semi-supervised machine learning and natural language processing techniques. Observed in [Ng et al., 2009] semi-supervised dictionary based approaches yield unsatisfactory results, with resulting lexicons of large coverage and low precision, or limited coverage and higher precision. In this paper, I will attempt to utilize these previously created dictionaries (of positive and negative words) and incorporate them into a machine learning approach to classify unlabeled documents. Introduction to Non-negative Matrix Factorization and Topic Modeling: Non-negative Matrix Factorization has applications to many fields such as computer vision, but we are interested in the specific application to topic modeling (often referred to as document clustering). NMF is the process of factoring a matrix into (usually) two parts where a [A x B] matrix is approximated by [A x r] x [r x B] where r is a chosen value, less than A and B. Every element in [A x r] and [r x B] must be non-negative throughout this process.",
"title": ""
},
{
"docid": "cebd215c11e4c73266e70950ac5af2ff",
"text": "Precision oncology is predicated upon the ability to detect specific actionable genomic alterations and to monitor their adaptive evolution during treatment to counter resistance. Because of spatial and temporal heterogeneity and comorbidities associated with obtaining tumor tissues, especially in the case of metastatic disease, traditional methods for tumor sampling are impractical for this application. Known to be present in the blood of cancer patients for decades, cell-free DNA (cfDNA) is beginning to inform on tumor genetics, tumor burden, and mechanisms of progression and drug resistance. This substrate is amenable for inexpensive noninvasive testing and thus presents a viable approach to serial sampling for screening and monitoring tumor progression. The fragmentation, low yield, and variable admixture of normal DNA present formidable technical challenges for realization of this potential. This review summarizes the history of cfDNA discovery, its biological properties, and explores emerging technologies for clinically relevant sequence-based analysis of cfDNA in cancer patients. Molecular barcoding (or Unique Molecular Identifier, UMI)-based methods currently appear to offer an optimal balance between sensitivity, flexibility, and cost and constitute a promising approach for clinically relevant assays for near real-time monitoring of treatment-induced mutational adaptations to guide evidence-based precision oncology. Mol Cancer Res; 14(10); 898-908. ©2016 AACR.",
"title": ""
},
{
"docid": "afd378cf5e492a9627e746254586763b",
"text": "Gradient-based optimization has enabled dramatic advances in computational imaging through techniques like deep learning and nonlinear optimization. These methods require gradients not just of simple mathematical functions, but of general programs which encode complex transformations of images and graphical data. Unfortunately, practitioners have traditionally been limited to either hand-deriving gradients of complex computations, or composing programs from a limited set of coarse-grained operators in deep learning frameworks. At the same time, writing programs with the level of performance needed for imaging and deep learning is prohibitively difficult for most programmers.\n We extend the image processing language Halide with general reverse-mode automatic differentiation (AD), and the ability to automatically optimize the implementation of gradient computations. This enables automatic computation of the gradients of arbitrary Halide programs, at high performance, with little programmer effort. A key challenge is to structure the gradient code to retain parallelism. We define a simple algorithm to automatically schedule these pipelines, and show how Halide's existing scheduling primitives can express and extend the key AD optimization of \"checkpointing.\"\n Using this new tool, we show how to easily define new neural network layers which automatically compile to high-performance GPU implementations, and how to solve nonlinear inverse problems from computational imaging. Finally, we show how differentiable programming enables dramatically improving the quality of even traditional, feed-forward image processing algorithms, blurring the distinction between classical and deep methods.",
"title": ""
}
] |
scidocsrr
|
0cfff37214194f41ec10a43e8d937c3c
|
Autonomous robotic monitoring of underground cable systems
|
[
{
"docid": "3ae884fc5b7c090fc386367f9a2c6fa2",
"text": "This review paper focuses on interdigital electrodes-a geometric structure encountered in a wide variety of sensor and transducer designs. Physical and chemical principles behind the operation of these devices vary so much across different fields of science and technology that the common features present in all devices are often overlooked. This paper attempts to bring under one umbrella capacitive, inductive, dielectric, piezoacoustic, chemical, biological, and microelectromechanical interdigital sensors and transducers. The paper also provides historical perspective, discusses fabrication techniques, modeling of sensor parameters, application examples, and directions of future research.",
"title": ""
}
] |
[
{
"docid": "123b35d403447a29eaf509fa707eddaa",
"text": "Technology is the vital criteria to boosting the quality of life for everyone from new-borns to senior citizens. Thus, any technology to enhance the quality of life society has a value that is priceless. Nowadays Smart Wearable Technology (SWTs) innovation has been coming up to different sectors and is gaining momentum to be implemented in everyday objects. The successful adoption of SWTs by consumers will allow the production of new generations of innovative and high value-added products. The study attempts to predict the dynamics that play a role in the process through which consumers accept wearable technology. The research build an integrated model based on UTAUT2 and some external variables in order to investigate the direct and moderating effects of human expectation and behaviour on the awareness and adoption of smart products such as watch and wristband fitness. Survey will be chosen in order to test our model based on consumers. In addition, our study focus on different rate of adoption and expectation differences between early adopters and early majority in order to explore those differences and propose techniques to successfully cross the chasm between these two groups according to “Chasm theory”. For this aim and due to lack of prior research, Semi-structured focus groups will be used to obtain qualitative data for our research. Originality/value: To date, a few research exists addressing the adoption of smart wearable technologies. Therefore, the examination of consumers behaviour towards SWTs may provide orientations into the future that are useful for managers who can monitor how consumers make choices, how manufacturers should design successful market strategies, and how regulators can proscribe manipulative behaviour in this industry.",
"title": ""
},
{
"docid": "33bc830ab66c9864fd4c45c463c2c9da",
"text": "We present a novel system for sketch-based face image editing, enabling users to edit images intuitively by sketching a few strokes on a region of interest. Our interface features tools to express a desired image manipulation by providing both geometry and color constraints as user-drawn strokes. As an alternative to the direct user input, our proposed system naturally supports a copy-paste mode, which allows users to edit a given image region by using parts of another exemplar image without the need of hand-drawn sketching at all. The proposed interface runs in real-time and facilitates an interactive and iterative workflow to quickly express the intended edits. Our system is based on a novel sketch domain and a convolutional neural network trained end-to-end to automatically learn to render image regions corresponding to the input strokes. To achieve high quality and semantically consistent results we train our neural network on two simultaneous tasks, namely image completion and image translation. To the best of our knowledge, we are the first to combine these two tasks in a unified framework for interactive image editing. Our results show that the proposed sketch domain, network architecture, and training procedure generalize well to real user input and enable high quality synthesis results without additional post-processing.",
"title": ""
},
{
"docid": "969a8e447fb70d22a7cbabe7fc47a9c9",
"text": "A novel multi-level AC six-phase motor drive is developed in this paper. The scheme is based on three conventional 2-level three-phase voltage source inverters (VSIs) supplying the open-end windings of a dual three-phase motor (six-phase induction machine). The proposed inverter is capable of supply the machine with multi-level voltage waveforms. The developed system is compared with the conventional solution and it is demonstrated that the drive system permits to reduce the harmonic distortion of the machine currents, to reduce the total semiconductor losses and to decrease the power processed by converter switches. The system model and the Pulse-Width Modulation (PWM) strategy are presented. The experimental verification was obtained by using IGBTs with dedicated drives and a digital signal processor (DSP) with plug-in boards and sensors.",
"title": ""
},
{
"docid": "612e460c0f6e328d7516bfba7b674517",
"text": "There is universality in the transactional-transformational leadership paradigm. That is, the same conception of phenomena and relationships can be observed in a wide range of organizations and cultures. Exceptions can be understood as a consequence of unusual attributes of the organizations or cultures. Three corollaries are discussed. Supportive evidence has been gathered in studies conducted in organizations in business, education, the military, the government, and the independent sector. Likewise, supportive evidence has been accumulated from all but 1 continent to document the applicability of the paradigm.",
"title": ""
},
{
"docid": "668912813f92c3d8bd152a7cf4b3fc3d",
"text": "Since the advent of computers, the natural and engineering sciences have enormously progressed. Computer simulations allow one to understand interactions of physical particles and make sense of astronomical observations, to describe many chemical properties ab initio, and to design energy-efficient aircrafts and safer cars. Today, the use of computational devices is pervasive. Offices, administrations, financial trading, economic exchange, the control of infrastructure networks, and a large share of our communication would not be conceivable without the use of computers anymore. Hence, it would be very surprising, if computers could not make a contribution to a better understanding of social and economic systems. While relevant also for the statistical analysis of data and data-driven efforts to reveal patterns of human interaction [1], we will focus here on the prospects of computer simulation of social and economic systems. More specifically, we will discuss the techniques of agent-based modeling (ABM) and multi-agent simulation (MAS), including the challenges, perspectives and limitations of the approach. In doing so, we will discuss a number of issues, which have not been covered by the excellent books and review papers available so far [2–10]. In particular, we will describe the different steps belonging to a thorough agent-based simulation study, and try to explain, how to do them right from a scientific perspective. To some extent, computer simulation can be seen as experimental technique for hypothesis testing and scenario analysis, which can be used complementary and in combination with experiments in real-life, the lab or the Web.",
"title": ""
},
{
"docid": "13d7abc974d44c8c3723c3b9c8534fec",
"text": "We propose a novel approach to automatically produce multiple colorized versions of a grayscale image. Our method results from the observation that the task of automated colorization is relatively easy given a low-resolution version of the color image. We first train a conditional PixelCNN to generate a low resolution color for a given grayscale image. Then, given the generated low-resolution color image and the original grayscale image as inputs, we train a second CNN to generate a high-resolution colorization of an image. We demonstrate that our approach produces more diverse and plausible colorizations than existing methods, as judged by human raters in a ”Visual Turing Test”.",
"title": ""
},
{
"docid": "9b3adf0f3c15a42ac1ee82a38d451988",
"text": "Four novel color Local Binary Pattern (LBP) descriptors are presented in this paper for scene image and image texture classification with applications to image search and retrieval. The oRGB-LBP descriptor is derived by concatenating the LBP features of the component images in the oRGB color space. The Color LBP Fusion (CLF) descriptor is constructed by integrating the LBP descriptors from different color spaces; the Color Grayscale LBP Fusion (CGLF) descriptor is derived by integrating the grayscale-LBP descriptor and the CLF descriptor; and the CGLF+PHOG descriptor is obtained by integrating the Pyramid of Histogram of Orientation Gradients (PHOG) and the CGLF descriptor. Feature extraction applies the Enhanced Fisher Model (EFM) and image classification is based on the nearest neighbor classification rule (EFM-NN). The proposed image descriptors and the feature extraction and classification methods are evaluated using three grand challenge databases and are shown to improve upon the classification performance of existing methods.",
"title": ""
},
{
"docid": "0fe3923ed3c6fffa3c910e661a79d722",
"text": "Unconstrained face recognition performance evaluations have traditionally focused on Labeled Faces in the Wild (LFW) dataset for imagery and the YouTubeFaces (YTF) dataset for videos in the last couple of years. Spectacular progress in this field has resulted in a saturation on verification and identification accuracies for those benchmark datasets. In this paper, we propose a unified learning framework named transferred deep feature fusion targeting at the new IARPA Janus Bechmark A (IJB-A) face recognition dataset released by NIST face challenge. The IJB-A dataset includes real-world unconstrained faces from 500 subjects with full pose and illumination variations which are much harder than the LFW and YTF datasets. Inspired by transfer learning, we train two advanced deep convolutional neural networks (DCNN) with two different large datasets in source domain, respectively. By exploring the complementarity of two distinct DCNNs, deep feature fusion is utilized after feature extraction in target domain. Then, template specific linear SVMs is adopted to enhance the discrimination of framework. Finally, multiple matching scores corresponding different templates are merged as the final results. This simple unified framework outperforms the state-of-the-art by a wide margin on IJB-A dataset. Based on the proposed approach, we have submitted our IJB-A results to National Institute of Standards and Technology (NIST) for official evaluation.",
"title": ""
},
{
"docid": "f83a16d393c78d6ba0e65a4659446e7e",
"text": "Temporal action localization is an important yet challenging problem. Given a long, untrimmed video consisting of multiple action instances and complex background contents, we need not only to recognize their action categories, but also to localize the start time and end time of each instance. Many state-of-the-art systems use segment-level classifiers to select and rank proposal segments of pre-determined boundaries. However, a desirable model should move beyond segment-level and make dense predictions at a fine granularity in time to determine precise temporal boundaries. To this end, we design a novel Convolutional-De-Convolutional (CDC) network that places CDC filters on top of 3D ConvNets, which have been shown to be effective for abstracting action semantics but reduce the temporal length of the input data. The proposed CDC filter performs the required temporal upsampling and spatial downsampling operations simultaneously to predict actions at the frame-level granularity. It is unique in jointly modeling action semantics in space-time and fine-grained temporal dynamics. We train the CDC network in an end-to-end manner efficiently. Our model not only achieves superior performance in detecting actions in every frame, but also significantly boosts the precision of localizing temporal boundaries. Finally, the CDC network demonstrates a very high efficiency with the ability to process 500 frames per second on a single GPU server. Source code and trained models are available online at https://bitbucket.org/columbiadvmm/cdc.",
"title": ""
},
{
"docid": "9d8adcc7b2fa2a3dd7953c0cede7b81d",
"text": "A new nonlinear disturbance observer (NDO) for robotic manipulators is derived in this paper. The global exponential stability of the proposed disturbance observer (DO) is guaranteed by selecting design parameters, which depend on the maximum velocity and physical parameters of robotic manipulators. This new observer overcomes the disadvantages of existing DO’s, which are designed or analyzed by linear system techniques. It can be applied in robotic manipulators for various purposes such as friction compensation, independent joint control, sensorless torque control, and fault diagnosis. The performance of the proposed observer is demonstrated by the friction estimation and compensation for a two-link robotic manipulator. Both simulation and experimental results show the NDO works well.",
"title": ""
},
{
"docid": "5aab6cd36899f3d5e3c93cf166563a3e",
"text": "Vein images generally appear darker with low contrast, which require contrast enhancement during preprocessing to design satisfactory hand vein recognition system. However, the modification introduced by contrast enhancement (CE) is reported to bring side effects through pixel intensity distribution adjustments. Furthermore, the inevitable results of fake vein generation or information loss occur and make nearly all vein recognition systems unconvinced. In this paper, a “CE-free” quality-specific vein recognition system is proposed, and three improvements are involved. First, a high-quality lab-vein capturing device is designed to solve the problem of low contrast from the view of hardware improvement. Then, a high quality lab-made database is established. Second, CFISH score, a fast and effective measurement for vein image quality evaluation, is proposed to obtain quality index of lab-made vein images. Then, unsupervised $K$ -means with optimized initialization and convergence condition is designed with the quality index to obtain the grouping results of the database, namely, low quality (LQ) and high quality (HQ). Finally, discriminative local binary pattern (DLBP) is adopted as the basis for feature extraction. For the HQ image, DLBP is adopted directly for feature extraction, and for the LQ one. CE_DLBP could be utilized for discriminative feature extraction for LQ images. Based on the lab-made database, rigorous experiments are conducted to demonstrate the effectiveness and feasibility of the proposed system. What is more, an additional experiment with PolyU database illustrates its generalization ability and robustness.",
"title": ""
},
{
"docid": "e6332297afd2883e41888be243b27d1d",
"text": "The 2018 Nucleic Acids Research Database Issue contains 181 papers spanning molecular biology. Among them, 82 are new and 84 are updates describing resources that appeared in the Issue previously. The remaining 15 cover databases most recently published elsewhere. Databases in the area of nucleic acids include 3DIV for visualisation of data on genome 3D structure and RNArchitecture, a hierarchical classification of RNA families. Protein databases include the established SMART, ELM and MEROPS while GPCRdb and the newcomer STCRDab cover families of biomedical interest. In the area of metabolism, HMDB and Reactome both report new features while PULDB appears in NAR for the first time. This issue also contains reports on genomics resources including Ensembl, the UCSC Genome Browser and ENCODE. Update papers from the IUPHAR/BPS Guide to Pharmacology and DrugBank are highlights of the drug and drug target section while a number of proteomics databases including proteomicsDB are also covered. The entire Database Issue is freely available online on the Nucleic Acids Research website (https://academic.oup.com/nar). The NAR online Molecular Biology Database Collection has been updated, reviewing 138 entries, adding 88 new resources and eliminating 47 discontinued URLs, bringing the current total to 1737 databases. It is available at http://www.oxfordjournals.org/nar/database/c/.",
"title": ""
},
{
"docid": "f70ce9d95ac15fc0800b8e6ac60247cb",
"text": "Many systems for the parallel processing of big data are available today. Yet, few users can tell by intuition which system, or combination of systems, is \"best\" for a given workflow. Porting workflows between systems is tedious. Hence, users become \"locked in\", despite faster or more efficient systems being available. This is a direct consequence of the tight coupling between user-facing front-ends that express workflows (e.g., Hive, SparkSQL, Lindi, GraphLINQ) and the back-end execution engines that run them (e.g., MapReduce, Spark, PowerGraph, Naiad).\n We argue that the ways that workflows are defined should be decoupled from the manner in which they are executed. To explore this idea, we have built Musketeer, a workflow manager which can dynamically map front-end workflow descriptions to a broad range of back-end execution engines.\n Our prototype maps workflows expressed in four high-level query languages to seven different popular data processing systems. Musketeer speeds up realistic workflows by up to 9x by targeting different execution engines, without requiring any manual effort. Its automatically generated back-end code comes within 5%--30% of the performance of hand-optimized implementations.",
"title": ""
},
{
"docid": "57416ef0f8ec577433898fb1a9e46bee",
"text": "New types of synthetic cannabinoid designer drugs are constantly introduced to the illicit drug market to circumvent legislation. Recently, N-(1-Adamantyl)-1-(5-fluoropentyl)-1H-indazole-3-carboxamide (5F-AKB-48), also known as 5F-APINACA, was identified as an adulterant in herbal products. This compound deviates from earlier JHW-type synthetic cannabinoids by having an indazole ring connected to an adamantyl group via a carboxamide linkage. Synthetic cannabinoids are completely metabolized, and identification of the metabolites is thus crucial when using urine as the sample matrix. Using an authentic urine sample and high-resolution accurate-mass Fourier transform Orbitrap mass spectrometry, we identified 16 phase-I metabolites of 5F-AKB-48. The modifications included mono-, di-, and trihydroxylation on the adamantyl ring alone or in combination with hydroxylation on the N-fluoropentylindazole moiety, dealkylation of the N-fluoropentyl side chain, and oxidative loss of fluorine as well as combinations thereof. The results were compared to human liver microsomal (HLM) incubations, which predominantly showed time-dependent formation of mono-, di-, and trihydroxylated metabolites having the hydroxyl groups on the adamantyl ring. The results presented here may be used to select metabolites specific of 5F-AKB-48 for use in clinical and forensic screening.",
"title": ""
},
{
"docid": "587f6e73ca6653860cda66238d2ba146",
"text": "Cable-suspended robots are structurally similar to parallel actuated robots but with the fundamental difference that cables can only pull the end-effector but not push it. From a scientific point of view, this feature makes feedback control of cable-suspended robots more challenging than their counterpart parallel-actuated robots. In the case with redundant cables, feedback control laws can be designed to make all tensions positive while attaining desired control performance. This paper presents approaches to design positive tension controllers for cable suspended robots with redundant cables. Their effectiveness is demonstrated through simulations and experiments on a three degree-of-freedom cable suspended robots.",
"title": ""
},
{
"docid": "ccaba0b30fc1a0c7d55d00003b07725a",
"text": "We collect a corpus of 1554 online news articles from 23 RSS feeds and analyze it in terms of controversy and sentiment. We use several existing sentiment lexicons and lists of controversial terms to perform a number of statistical analyses that explore how sentiment and controversy are related. We conclude that the negative sentiment and controversy are not necessarily positively correlated as has been claimed in the past. In addition, we apply an information theoretic approach and suggest that entropy might be a good predictor of controversy.",
"title": ""
},
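As a rough illustration of the entropy-based signal suggested in the passage above, one way to score a document is the Shannon entropy of its term distribution; the sketch below (hypothetical helper and example text, not the authors' code) computes it with a plain Python counter:

import math
from collections import Counter

def term_entropy(tokens):
    """Shannon entropy (in bits) of the token frequency distribution."""
    counts = Counter(tokens)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

doc = "the court ruling sparked praise and outrage and more outrage".split()
print(round(term_entropy(doc), 3))   # higher entropy would be read as a controversy signal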
{
"docid": "2ab51bd16640532e17f19f9df3880a1a",
"text": "monitor retail store shelves M. Marder S. Harary A. Ribak Y. Tzur S. Alpert A. Tzadok Using image analytics to monitor the contents and status of retail store shelves is an emerging trend with increasing business importance. Detecting and identifying multiple objects on store shelves involves a number of technical challenges. The particular nature of product package design, the arrangement of products on shelves, and the requirement to operate in unconstrained environments are just a few of the issues that must be addressed. We explain how we addressed these challenges in a system for monitoring planogram compliance, developed as part of a project with Tesco, a large multinational retailer. The new system offers store personnel an instant view of shelf status and a list of action items for restocking shelves. The core of the system is based on its ability to achieve high rates of product recognition, despite the very small visual differences between some products. This paper covers how state-of-the-art methods for object detection behave when applied to this problem. We also describe the innovative aspects of our implementation for size-scale-invariant product recognition and fine-grained classification.",
"title": ""
},
{
"docid": "19f9e643decc8047d73a20d664eb458d",
"text": "There is considerable federal interest in disaster resilience as a mechanism for mitigating the impacts to local communities, yet the identification of metrics and standards for measuring resilience remain a challenge. This paper provides a methodology and a set of indicators for measuring baseline characteristics of communities that foster resilience. By establishing baseline conditions, it becomes possible to monitor changes in resilience over time in particular places and to compare one place to another. We apply our methodology to counties within the Southeastern United States as a proof of concept. The results show that spatial variations in disaster resilience exist and are especially evident in the rural/urban divide, where metropolitan areas have higher levels of resilience than rural counties. However, the individual drivers of the disaster resilience (or lack thereof)—social, economic, institutional, infrastructure, and community capacities—vary",
"title": ""
},
{
"docid": "0c8fe60982ae7516c2c248f902bf5f71",
"text": "Our work investigates the use of gaze and multitouch to fluidly perform rotate-scale-translate (RST) tasks on large displays. The work specifically aims to understand if gaze can provide benefit in such a task, how task complexity affects performance, and how gaze and multitouch can be combined to create an integral input structure suited to the task of RST. We present four techniques that individually strike a different balance between gaze-based and touch-based translation while maintaining concurrent rotation and scaling operations. A 16 participant empirical evaluation revealed that three of our four techniques present viable options for this scenario, and that larger distances and rotation/scaling operations can significantly affect a gaze-based translation configuration. Furthermore we uncover new insights regarding multimodal integrality, finding that gaze and touch can be combined into configurations that pertain to integral or separable input structures.",
"title": ""
},
{
"docid": "9dfaf1984bbe52394e115509c340be4d",
"text": "Internet of Things (IoT) can be thought of as the next big step in internet technology. It is enabled by the latest developments in communication technologies and internet protocols. This paper surveys IoT in respect of layer architecture, enabling technologies, related protocols and challenges.",
"title": ""
}
] |
scidocsrr
|
b6a59289dae5f1995adc6173b3928c57
|
Blockchain consensus mechanisms-the case of natural disasters
|
[
{
"docid": "ed41127bf43b4f792f8cbe1ec652f7b2",
"text": "Today, more than 100 blockchain projects created to transform government systems are being conducted in more than 30 countries. What leads countries rapidly initiate blockchain projects? I argue that it is because blockchain is a technology directly related to social organization; Unlike other technologies, a consensus mechanism form the core of blockchain. Traditionally, consensus is not the domain of machines but rather humankind. However, blockchain operates through a consensus algorithm with human intervention; once that consensus is made, it cannot be modified or forged. Through utilization of Lawrence Lessig’s proposition that “Code is law,” I suggest that blockchain creates “absolute law” that cannot be violated. This characteristic of blockchain makes it possible to implement social technology that can replace existing social apparatuses including bureaucracy. In addition, there are three close similarities between blockchain and bureaucracy. First, both of them are defined by the rules and execute predetermined rules. Second, both of them work as information processing machines for society. Third, both of them work as trust machines for society. Therefore, I posit that it is possible and moreover unavoidable to replace bureaucracy with blockchain systems. In conclusion, I suggest five principles that should be adhered to when we replace bureaucracy with the blockchain system: 1) introducing Blockchain Statute law; 2) transparent disclosure of data and source code; 3) implementing autonomous executing administration; 4) building a governance system based on direct democracy and 5) making Distributed Autonomous Government(DAG).",
"title": ""
}
] |
[
{
"docid": "3ef661f930df369767a7da8a192df85f",
"text": "We present MVE, the Multi-View Environment. MVE is an end-to-end multi-view geometry reconstruction software which takes photos of a scene as input and produces a surface triangle mesh as result. The system covers a structure-from-motion algorithm, multi-view stereo reconstruction, generation of extremely dense point clouds, and reconstruction of surfaces from point clouds. In contrast to most image-based geometry reconstruction approaches, our system is focused on reconstruction of multi-scale scenes, an important aspect in many areas such as cultural heritage. It allows to reconstruct large datasets containing some detailed regions with much higher resolution than the rest of the scene. Our system provides a graphical user interface for structure-from-motion reconstruction, visual inspection of images, depth maps, and rendering of scenes and meshes.",
"title": ""
},
{
"docid": "cc3b36d8026396a7a931f07ef9d3bcfb",
"text": "Planning an itinerary before traveling to a city is one of the most important travel preparation activities. In this paper, we propose a novel framework called TripPlanner, leveraging a combination of location-based social network (i.e., LBSN) and taxi GPS digital footprints to achieve personalized, interactive, and traffic-aware trip planning. First, we construct a dynamic point-of-interest network model by extracting relevant information from crowdsourced LBSN and taxi GPS traces. Then, we propose a two-phase approach for personalized trip planning. In the route search phase, TripPlanner works interactively with users to generate candidate routes with specified venues. In the route augmentation phase, TripPlanner applies heuristic algorithms to add user's preferred venues iteratively to the candidate routes, with the objective of maximizing the route score while satisfying both the venue visiting time and total travel time constraints. To validate the efficiency and effectiveness of the proposed approach, extensive empirical studies were performed on two real-world data sets from the city of San Francisco, which contain more than 391 900 passenger delivery trips generated by 536 taxis in a month and 110 214 check-ins left by 15 680 Foursquare users in six months.",
"title": ""
},
{
"docid": "82b628f4ce9e3d4a7ef8db114340e191",
"text": "Cervical cancer (CC) is a leading cause of death in women worldwide. Radiation therapy (RT) for CC is an effective alternative, but its toxicity remains challenging. Blueberry is amongst the most commonly consumed berries in the United States. We previously showed that resveratrol, a compound in red grapes, can be used as a radiosensitizer for prostate cancer. In this study, we found that the percentage of colonies, PCNA expression level and the OD value of cells from the CC cell line SiHa were all decreased in RT/Blueberry Extract (BE) group when compared to those in the RT alone group. Furthermore, TUNEL+ cells and the relative caspase-3 activity in the CC cells were increased in the RT/BE group compared to those in the RT alone group. The anti-proliferative effect of RT/BE on cancer cells correlated with downregulation of pro-proliferative molecules cyclin D and cyclin E. The pro-apoptotic effect of RT/BE correlated with upregulation of the pro-apoptotic molecule TRAIL. Thus, BE sensitizes SiHa cells to RT by inhibition of proliferation and promotion of apoptosis, suggesting that blueberry might be used as a potential radiosensitizer to treat CC.",
"title": ""
},
{
"docid": "6723049ea783b15426dc5335872e4f75",
"text": "A method of using magnetic torque rods to do 3axis spacecraft attitude control has been developed. The goal of this system is to achieve a nadir pointing accuracy on the order of 0.1 to 1.0 deg without the need for thrusters or wheels. The open-loop system is under-actuated because magnetic torque rods cannot torque about the local magnetic field direction. This direction moves in space as the spacecraft moves along an inclined orbit, and the resulting system is roughly periodic. Periodic controllers are designed using an asymptotic linear quadratic regulator technique. The control laws include integral action and saturation logic. This system's performance has been studied via analysis and simulation. The resulting closed-loop systems are robust with respect to parametric modeling uncertainty. They converge from initial attitude errors of 30 deg per axis, and they achieve steady-state pointing errors on the order of 0.5 to 1.0 deg in the presence of drag torques and unmodeled residual dipole moments. Introduction All spacecraft have an attitude stabilization system. They range from passive spin-stabilized 1 or gravitygradient stabilized 2 systems to fully active three-axis controlled systems . Pointing accuracies for such systems may range from 10 deg down to 10 deg or better, depending on the spacecraft design and on the types of sensors and actuators that it carries. The most accurate designs normally include momentum wheels or reaction wheels. This paper develops an active 3-axis attitude stabilization system for a nadir-pointing spacecraft. It uses only magnetic torque rods as actuators. Additional components of the system include appropriate attitude sensors and a magnetometer. The goal of this system is to achieve pointing accuracy that is better than a gravity gradient stabilization system, on the order of 0.1 to 1 deg. Such a system will weigh less than either a gravity-gradient system or a wheelbased system, and it will use less power than a wheel∗ Associate Professor, Sibley School of Mech. & Aero. Engr. Associate Fellow, AIAA. Copyright 2000 by Mark L. Psiaki. Published by the American Institute of Aeronautics and Astronautics, Inc., with permission. based system. Thus, it will be ideal for small satellite applications, where weight and power budgets are severely restricted. There are two classic uses of magnetic torque rods in attitude control. One is for momentum management of wheel-based systems . The other is for angularmomentum and nutation control of spinning , momentum-biased , and dual-spin spacecraft . The present study is one of a growing number that consider active 3-axis magnetic attitude stabilization of a nadir-pointing spacecraft . Reference 5 also should be classified with this group because it uses similar techniques. Reference 7, the earliest such study, presents a 3-axis proportional-derivative control law. It computes a desired torque and projects it perpendicular to the Earth's magnetic field in order to determine the actual torque. Projection is necessary because the magnetic torque, nm, takes the form nm = m × b (1) where m is the magnetic dipole moment vector of the torque rods and b is the Earth's magnetic field. Equation (1) highlights the principal problem of magnetic-torque-based 3-axis attitude control: the system is under-actuated. A rigid spacecraft has 3 rotational degrees of freedom, but the torque rods can only torque about the 2 axes that are perpendicular to the magnetic field vector. 
The system is controllable if the orbit is inclined because the Earth's magnetic field vector rotates in space as the spacecraft moves around its orbit. It is a time-varying system that is approximately periodic. This system's under-actuation and its periodicity combine to create a challenging feedback controller design problem. The present problem is different from the problem of attitude control when thrusters or reaction wheels provide torque only about 2 axes. References 15 and 16 and others have addressed this alternate problem, in which the un-actuated direction is defined in spacecraft coordinates. For magnetic torques, the un-actuated direction does not rotate with the spacecraft. Various control laws have been considered for magnetic attitude control systems. Some of the controllers are similar to the original controller of Martel et al. . Time-varying Linear Quadratic Regulator (LQR) formulations have been tried , as has fuzzy control 9 and sliding-mode control . References 9 and 13 patch together solutions of time-",
"title": ""
},
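The projection step described in the passage above follows directly from the torque law n_m = m × b: commanding the dipole m = b × n_des / ||b||^2 produces exactly the component of the desired torque n_des that is perpendicular to the field. A small numpy check of that identity (illustrative values only, not from the paper) is sketched below:

import numpy as np

def torquer_dipole(n_des, b):
    """Dipole command whose torque m x b equals n_des minus its along-field part."""
    return np.cross(b, n_des) / np.dot(b, b)

b = np.array([2.0e-5, -1.0e-5, 4.0e-5])      # Earth field in body frame [T], illustrative
n_des = np.array([1.0e-3, 5.0e-4, -2.0e-4])  # desired torque [N*m], illustrative
m = torquer_dipole(n_des, b)
n_actual = np.cross(m, b)
b_hat = b / np.linalg.norm(b)
n_perp = n_des - np.dot(n_des, b_hat) * b_hat
print(np.allclose(n_actual, n_perp))          # True: only the perpendicular part is realized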
{
"docid": "8519922a8cbb71f4c9ba8959731ce61d",
"text": "Convolutional neural networks (CNNs) have recently been applied successfully in large scale image classification competitions for photographs found on the Internet. As our brains are able to recognize objects in the images, there must be some regularities in the data that a neural network can utilize. These regularities are difficult to find an explicit set of rules for. However, by using a CNN and the backpropagation algorithm for learning, the neural network can learn to pick up on the features in the images that are characteristic for each class. Also, data regularities that are not visually obvious to us can be learned. CNNs are particularly useful for classifying data containing some spatial structure, like photographs and speech. In this paper, the technique is tested on SAR images of ships in harbour. The tests indicate that CNNs are promising methods for discriminating between targets in SAR images. However, the false alarm rate is quite high when introducing confusers in the tests. A big challenge in the development of target classification algorithms, especially in the case of SAR, is the lack of real data. This paper also describes tests using simulated SAR images of the same target classes as the real data in order to fill this data gap. The simulated images are made with the MOCEM software (developed by DGA), based on CAD models of the targets. The tests performed here indicate that simulated data can indeed be helpful in training a convolutional neural network to classify real SAR images.",
"title": ""
},
{
"docid": "abf47e7d497c83b015ad0ba818e17847",
"text": "The staggering amounts of content readily available to us via digital channels can often appear overwhelming. While much research has focused on aiding people at selecting relevant articles to read, only few approaches have been developed to assist readers in more efficiently reading an individual text. In this paper, we present HiText, a simple yet effective way of dynamically marking parts of a document in accordance with their salience. Rather than skimming a text by focusing on randomly chosen sentences, students and other readers can direct their attention to sentences determined to be important by our system. For this, we rely on a deep learning-based sentence ranking method. Our experiments show that this results in marked increases in user satisfaction and reading efficiency, as assessed using TOEFL-style reading comprehension tests.",
"title": ""
},
{
"docid": "9c41df95c11ec4bed3e0b19b20f912bb",
"text": "Text mining has been defined as “the discovery by computer of new, previously unknown information, by automatically extracting information from different written resources” [6]. Many other industries and areas can also benefit from the text mining tools that are being developed by a number of companies. This paper provides an overview of the text mining tools and technologies that are being developed and is intended to be a guide for organizations who are looking for the most appropriate text mining techniques for their situation. This paper also concentrates to design text and data mining tool to extract the valuable information from curriculum vitae according to concerned requirements. The tool clusters the curriculum vitae into several segments which will help the public and private concerns for their recruitment. Rule based approach is used to develop the algorithm for mining and also it is implemented to extract the valuable information from the curriculum vitae on the web. Analysis of Curriculum vitae is until now, a costly and manual activity. It is subject to all typical variations and limitations in its quality, depending of who is doing it. Automating this analysis using algorithms might deliver much more consistency and preciseness to support the human experts. The experiments involve cooperation with many people having their CV online, as well as several recruiters etc. The algorithms must be developed and improved for processing of existing sets of semi-structured documents information retrieval under uncertainity about quality of the sources.",
"title": ""
},
{
"docid": "e9d987351816570b29d0144a6a7bd2ae",
"text": "One’s state of mind will influence her perception of the world and people within it. In this paper, we explore attitudes and behaviors toward online social media based on whether one is depressed or not. We conducted semistructured face-to-face interviews with 14 active Twitter users, half of whom were depressed and the other half non-depressed. Our results highlight key differences between the two groups in terms of perception towards online social media and behaviors within such systems. Non-depressed individuals perceived Twitter as an information consuming and sharing tool, while depressed individuals perceived it as a tool for social awareness and emotional interaction. We discuss several design implications for future social networks that could better accommodate users with depression and provide insights towards helping depressed users meet their needs through online social media.",
"title": ""
},
{
"docid": "f1ac14dd7efc1ef56d5aa51de465ee50",
"text": "The problem of discovering association rules has received considerable research attention and several fast algorithms for mining association rules have been developed. In practice, users are often interested in a subset of association rules. For example, they may only want rules that contain a specific item or rules that contain children of a specific item in a hierarchy. While such constraints can be applied as a postprocessing step, integrating them into the mining algorithm can dramatically reduce the execution time. We consider the problem of integrating constraints that n..,, l.....l,.... ,....,,....:,,, -1.~.. cl., -..s..a..-m e.. ..l.“,“, CUG Y”“Ac;Qu GnpLz:I)DIVua “YGI “Us: pGYaLcG “I OLJDciliLG of items into the association discovery algorithm. We present three integrated algorithms for mining association rules with item constraints and discuss their tradeoffs.",
"title": ""
},
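The paper above integrates item constraints into mining itself; the toy sketch below (a simplified Apriori-style enumeration with a Boolean item constraint, hypothetical data, and not one of the paper's three algorithms) only shows the naive post-processing baseline that such integration is meant to beat:

from itertools import combinations

def frequent_itemsets(transactions, min_support, constraint):
    """Enumerate frequent itemsets, then keep those satisfying the item constraint."""
    items = sorted({i for t in transactions for i in t})
    frequent = []
    for k in range(1, len(items) + 1):
        level = []
        for cand in combinations(items, k):
            support = sum(1 for t in transactions if set(cand) <= t) / len(transactions)
            if support >= min_support:
                level.append((cand, support))
        if not level:
            break
        frequent.extend(level)
    return [(s, sup) for s, sup in frequent if constraint(set(s))]

tx = [{"bread", "milk"}, {"bread", "beer"}, {"milk", "beer"}, {"bread", "milk", "beer"}]
# constraint: only itemsets that mention "bread"
print(frequent_itemsets(tx, min_support=0.5, constraint=lambda s: "bread" in s))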
{
"docid": "fdb23d6b43ef07761d90c3faeaefce5d",
"text": "With the advent of big data phenomenon in the world of data and its related technologies, the developments on the NoSQL databases are highly regarded. It has been claimed that these databases outperform their SQL counterparts. The aim of this study is to investigate the claim by evaluating the document-oriented MongoDB database with SQL in terms of the performance of common aggregated and non-aggregate queries. We designed a set of experiments with a huge number of operations such as read, write, delete, and select from various aspects in the two databases and on the same data for a typical e-commerce schema. The results show that MongoDB performs better for most operations excluding some aggregate functions. The results can be a good source for commercial and non-commercial companies eager to change the structure of the database used to provide their line-of-business services.",
"title": ""
},
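To make the comparison above concrete, an aggregate query of the kind benchmarked in such studies can be expressed in both systems; the snippet below is a generic illustration (hypothetical collection and column names, not the schema used in the paper):

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
orders = client["shop"]["orders"]

# MongoDB aggregation: total revenue per product category
pipeline = [
    {"$group": {"_id": "$category", "revenue": {"$sum": "$price"}}},
    {"$sort": {"revenue": -1}},
]
mongo_result = list(orders.aggregate(pipeline))

# Equivalent SQL for the relational counterpart of the same data
sql = "SELECT category, SUM(price) AS revenue FROM orders GROUP BY category ORDER BY revenue DESC;"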
{
"docid": "9f87ea8fd766f4b208ac142dcbbed4b2",
"text": "The dynamic marketplace in online advertising calls for ranking systems that are optimized to consistently promote and capitalize better performing ads. The streaming nature of online data inevitably makes an advertising system choose between maximizing its expected revenue according to its current knowledge in short term (exploitation) and trying to learn more about the unknown to improve its knowledge (exploration), since the latter might increase its revenue in the future. The exploitation and exploration (EE) tradeoff has been extensively studied in the reinforcement learning community, however, not been paid much attention in online advertising until recently. In this paper, we develop two novel EE strategies for online advertising. Specifically, our methods can adaptively balance the two aspects of EE by automatically learning the optimal tradeoff and incorporating confidence metrics of historical performance. Within a deliberately designed offline simulation framework we apply our algorithms to an industry leading performance based contextual advertising system and conduct extensive evaluations with real online event log data. The experimental results and detailed analysis reveal several important findings of EE behaviors in online advertising and demonstrate that our algorithms perform superiorly in terms of ad reach and click-through-rate (CTR).",
"title": ""
},
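The abstract above does not spell out its two strategies, so as general background only, a standard way to balance exploration and exploitation with confidence metrics is an upper-confidence-bound rule; the following sketch (generic UCB1 with illustrative counts, not the authors' algorithms) picks the ad with the highest optimistic CTR estimate:

import math

def ucb1_pick(clicks, impressions):
    """Pick the ad index with the highest UCB1 score; show each ad once first."""
    total = sum(impressions)
    for i, n in enumerate(impressions):
        if n == 0:
            return i
    scores = [clicks[i] / impressions[i] + math.sqrt(2 * math.log(total) / impressions[i])
              for i in range(len(impressions))]
    return max(range(len(scores)), key=scores.__getitem__)

clicks = [12, 3, 0]          # illustrative click counts per ad
impressions = [400, 60, 0]   # illustrative impression counts per ad
print(ucb1_pick(clicks, impressions))   # ad 2 has never been shown, so it is explored first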
{
"docid": "398effb89faa1ac819ee5ae489908ed1",
"text": "There are many interpretations of quantum mechanics, and new ones continue to appear. The Many-Worlds Interpretation (MWI) introduced by Everett (1957) impresses me as the best candidate for the interpretation of quantum theory. My belief is not based on a philosophical affinity for the idea of plurality of worlds as in Lewis (1986), but on a judgment that the physical difficulties of other interpretations are more serious. However, the scope of this paper does not allow a comparative analysis of all alternatives, and my main purpose here is to present my version of MWI, to explain why I believe it is true, and to answer some common criticisms of MWI. The MWI is not a theory about many objective “worlds”. A mathematical formalism by itself does not define the concept of a “world”. The “world” is a subjective concept of a sentient observer. All (subjective) worlds are incorporated in one objective Universe. I think, however, that the name Many-Worlds Interpretation does represent this theory fairly well. Indeed, according to MWI (and contrary to the standard approach) there are many worlds of the sort we call in everyday life “the world”. And although MWI is not just an interpretation of quantum theory – it differs from the standard quantum theory in certain experimental predictions – interpretation is an essential part of MWI; it explains the tremendous gap between what we experience as our world and what appears in the formalism of the quantum state of the Universe. Schrödinger’s equation (the basic equation of quantum theory) predicts very accurately the results of experiments performed on microscopic systems. I shall argue in what follows that it also implies the existence of many worlds. The purpose of addition of the collapse postulate, which represents the difference between MWI and the standard approach, is to escape the implications of Schrödinger’s equation for the existence of many worlds. Today’s technology does not allow us to test the existence of the “other” worlds. So only God or “superman” (i.e., a superintelligence equipped with supertechnology) can take full",
"title": ""
},
{
"docid": "44f0a3e73ce1da840546600fde7fbabd",
"text": "Suggested Citation: Berens, Johannes; Oster, Simon; Schneider, Kerstin; Burghoff, Julian (2018) : Early Detection of Students at Risk Predicting Student Dropouts Using Administrative Student Data and Machine Learning Methods, Schumpeter Discussion Papers, No. 2018-006, University of Wuppertal, Schumpeter School of Business and Economics, Wuppertal, http://nbn-resolving.de/urn:nbn:de:hbz:468-20180719-085420-5",
"title": ""
},
{
"docid": "3abf10f8539840b1830f14d83a7d3ab0",
"text": "We consider two questions at the heart of machine learning; how can we predict if a minimum will generalize to the test set, and why does stochastic gradient descent find minima that generalize well? Our work responds to Zhang et al. (2016), who showed deep neural networks can easily memorize randomly labeled training data, despite generalizing well on real labels of the same inputs. We show that the same phenomenon occurs in small linear models. These observations are explained by the Bayesian evidence, which penalizes sharp minima but is invariant to model parameterization. We also demonstrate that, when one holds the learning rate fixed, there is an optimum batch size which maximizes the test set accuracy. We propose that the noise introduced by small mini-batches drives the parameters towards minima whose evidence is large. Interpreting stochastic gradient descent as a stochastic differential equation, we identify the “noise scale” g = (NB −1) ≈ N/B, where is the learning rate, N the training set size and B the batch size. Consequently the optimum batch size is proportional to both the learning rate and the size of the training set, Bopt ∝ N . We verify these predictions empirically.",
"title": ""
},
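The scaling relation quoted above is simple to evaluate; the snippet below (illustrative values only) computes the noise scale g = ε(N/B − 1) ≈ εN/B and shows how the implied optimal batch size moves linearly with the learning rate and dataset size:

def noise_scale(lr, train_size, batch_size):
    """g = lr * (N / B - 1), approximately lr * N / B when B << N."""
    return lr * (train_size / batch_size - 1)

N = 50_000                        # training set size (illustrative)
for lr in (0.1, 0.2):
    for B in (128, 256):
        print(f"lr={lr}, B={B}, g={noise_scale(lr, N, B):.1f}")
# Keeping g fixed while doubling lr requires doubling B, i.e. B_opt ∝ lr * N.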
{
"docid": "70f672268ae0b3e0e344a4f515057e6b",
"text": "Murder-suicide, homicide-suicide, and dyadic death all refer to an incident where a homicide is committed followed by the perpetrator's suicide almost immediately or soon after the homicide. Homicide-suicides are relatively uncommon and vary from region to region. In the selected literature that we reviewed, shooting was the common method of killing and suicide, and only 3 cases of homicidal hanging involving child victims were identified. We present a case of dyadic death where the method of killing and suicide was hanging, and the victim was a young woman.",
"title": ""
},
{
"docid": "89d59a76e93339e1d779146d9ffbd41a",
"text": "Serious Games (SGs) are gaining an ever increasing interest for education and training. Exploiting the latest simulation and visualization technologies, SGs are able to contextualize the player’s experience in challenging, realistic environments, supporting situated cognition. However, we still miss methods and tools for effectively and deeply infusing pedagogy and instruction inside digital games. After presenting an overview of the state of the art of the SG taxonomies, the paper introduces the pedagogical theories and models most relevant to SGs and their implications on SG design. We also present a schema for a proper integration of games in education, supporting different goals in different steps of a formal education process. By analyzing a set of well-established SGs and formats, the paper presents the main mechanics and models that are being used in SG designs, with a particular focus on assessment, feedback and learning analytics. An overview of tools and models for SG design is also presented. Finally, based on the performed analysis, indications for future research in the field are provided.",
"title": ""
},
{
"docid": "c9f6de422e349ac1319b1017d2a6547b",
"text": "This paper attempts a preliminary analysis of the global desirability of different forms of openness in AI development (including openness about source code, science, data, safety techniques, capabilities, and goals). Short-term impacts of increased openness appear mostly socially beneficial in expectation. The strategic implications of medium and long-term impacts are complex. The evaluation of long-term impacts, in particular, may depend on whether the objective is to benefit the present generation or to promote a time-neutral aggregate of well-being of future generations. Some forms of openness are plausibly positive on both counts (openness about safety measures, openness about goals). Others (openness about source code, science, and possibly capability) could lead to a tightening of the competitive situation around the time of the introduction of advanced AI, increasing the probability that winning the AI race is incompatible with using any safety method that incurs a delay or limits performance. We identify several key factors that must be taken into account by any well-founded opinion on the matter. Policy Implications • The global desirability of openness in AI development – sharing e.g. source code, algorithms, or scientific insights – depends – on complex tradeoffs. • A central concern is that openness could exacerbate a racing dynamic: competitors trying to be the first to develop advanced (superintelligent) AI may accept higher levels of existential risk in order to accelerate progress. • Openness may reduce the probability of AI benefits being monopolized by a small group, but other potential political consequences are more problematic. • Partial openness that enables outsiders to contribute to an AI project’s safety work and to supervise organizational plans and goals appears desirable. The goal of this paper is to conduct a preliminary analysis of the long-term strategic implications of openness in AI development. What effects would increased openness in AI development have, on the margin, on the long-term impacts of AI? Is the expected value for society of these effects positive or negative? Since it is typically impossible to provide definitive answers to this type of question, our ambition here is more modest: to introduce some relevant considerations and develop some thoughts on their weight and plausibility. Given recent interest in the topic of openness in AI and the absence (to our knowledge) of any academic work directly addressing this issue, even this modest ambition would offer scope for a worthwhile contribution. Openness in AI development can refer to various things. For example, we could use this phrase to refer to open source code, open science, open data, or to openness about safety techniques, capabilities, and organizational goals, or to a non-proprietary development regime generally. We will have something to say about each of those different aspects of openness – they do not all have the same strategic implications. But unless we specify otherwise, we will use the shorthand ‘openness’ to refer to the practice of releasing into the public domain (continuously and as promptly as is practicable) all relevant source code and platforms and publishing freely about algorithms and scientific insights and ideas gained in the course of the research. Currently, most leading AI developers operate with a high but not maximal degree of openness. 
AI researchers at Google, Facebook, Microsoft and Baidu regularly present their latest work at technical conferences and post it on preprint servers. So do researchers in academia. Sometimes, but not always, these publications are accompanied by a release of source code, which makes it easier for outside researchers to replicate the work and build on it. Each of the aforementioned companies has developed and released, under open source licences, source code for platforms that help researchers (and students and other interested folk) implement machine learning architectures. The movement of staff and interns is another important vector for the spread of ideas. The recently announced OpenAI initiative even has openness explicitly built into its brand identity.",
"title": ""
},
{
"docid": "69093927f11b5028f86322b458889596",
"text": "Although artificial neural network (ANN) usually reaches high classification accuracy, the obtained results sometimes may be incomprehensible. This fact is causing a serious problem in data mining applications. The rules that are derived from ANN are needed to be formed to solve this problem and various methods have been improved to extract these rules. Activation function is critical as the behavior and performance of an ANN model largely depends on it. So far there have been limited studies with emphasis on setting a few free parameters in the neuron activation function. ANN’s with such activation function seem to provide better fitting properties than classical architectures with fixed activation function neurons [Xu, S., & Zhang, M. (2005). Data mining – An adaptive neural network model for financial analysis. In Proceedings of the third international conference on information technology and applications]. In this study a new method that uses artificial immune systems (AIS) algorithm has been presented to extract rules from trained adaptive neural network. Two real time problems data were investigated for determining applicability of the proposed method. The data were obtained from University of California at Irvine (UCI) machine learning repository. The datasets were obtained from Breast Cancer disease and ECG data. The proposed method achieved accuracy values 94.59% and 92.31% for ECG and Breast Cancer dataset, respectively. It has been observed that these results are one of the best results comparing with results obtained from related previous studies and reported in UCI web sites. 2007 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "28bb04440e9f5d0bfe465ec9fe685eda",
"text": "Model transformations are at the heart of model driven engineering (MDE) and can be used in many different application scenarios. For instance, model transformations are used to integrate very large models. As a consequence, they are becoming more and more complex. However, these transformations are still developed manually. Several code patterns are implemented repetitively, increasing the probability of programming errors and reducing code reusability. There is not yet a complete solution that automates the development of model transformations. In this paper we propose a novel approach that uses matching transformations and weaving models to semi-automate the development of transformations. Matching transformations are a special kind of transformations that implement heuristics and algorithms to create weaving models. Weaving models are models that capture different kinds of relationships between models. Our solution enables to rapidly implement and to customize these heuristics. We combine different heuristics, and we propose a new metamodel-based heuristic that exploits metamodel data to automatically produce weaving models. The weaving models are derived into model integration transformations.",
"title": ""
},
{
"docid": "1919bad34819f8f1d92b53c04b6a3c85",
"text": "Reviews keep playing an increasingly important role in the decision process of buying products and booking hotels. However, the large amount of available information can be confusing to users. A more succinct interface, gathering only the most helpful reviews, can reduce information processing time and save effort. To create such an interface in real time, we need reliable prediction algorithms to classify and predict new reviews which have not been voted but are potentially helpful. So far such helpfulness prediction algorithms have benefited from structural aspects, such as the length and readability score. Since emotional words are at the heart of our written communication and are powerful to trigger listeners’ attention, we believe that emotional words can serve as important parameters for predicting helpfulness of review text. Using GALC, a general lexicon of emotional words associated with a model representing 20 different categories, we extracted the emotionality from the review text and applied supervised classification method to derive the emotion-based helpful review prediction. As the second contribution, we propose an evaluation framework comparing three different real-world datasets extracted from the most well-known product review websites. This framework shows that emotion-based methods are outperforming the structure-based approach, by up to 9%.",
"title": ""
}
] |
scidocsrr
|
8b4dbd7566f03dbd45870ab5d974fa59
|
A video database of moving faces and people
|
[
{
"docid": "6ec538cd952641e0847eeef03909f936",
"text": "Automated recognition of facial expression is an important addition to computer vision research because of its relevance to the study of psychological phenomena and the development of human-computer interaction (HCI). We developed a computer vision system that automatically recognizes individual action units or action unit combinations in the upper face using Hidden Markov Models (HMMs). Our approach to facial expression recognition is based on the Facial Action Coding System (FACS), which separates expressions into upper and lower face action. In this paper, we use three approaches to extract facial expression information: (1) facial feature point tracking, (2) dense flow tracking with principal component analysis (PCA), and (3) high gradient component detection (i.e., furrow detection). The recognition results of the upper face expressions using feature point tracking, dense flow tracking, and high gradient component detection are 85%, 93%, and 85%,",
"title": ""
}
] |
[
{
"docid": "62a611b7f5a5d3bb99659c4ee9e5e4a3",
"text": "Transmissible spongioform enchephalopathies (TSE's), include bovine spongiform encephalopathy (also called BSE or \"mad cow disease\"), Creutzfeldt-Jakob disease (CJD) in humans, and scrapie in sheep. They remain a mystery, their cause hotly debated. But between 1994 and 1996, 12 people in England came down with CJD, the human form of mad cow, and all had eaten beef from suspect cows. Current mad cow diagnosis lies solely in the detection of late appearing \"prions\", an acronym for hypothesized, gene-less, misfolded proteins, somehow claimed to cause the disease. Yet laboratory preparations of prions contain other things, which could include unidentified bacteria or viruses. Furthermore, the rigors of prion purification alone, might, in and of themselves, have killed the causative virus or bacteria. Therefore, even if samples appear to infect animals, it is impossible to prove that prions are causative. Manuelidis found viral-like particles, which even when separated from prions, were responsible for spongiform STE's. Subsequently, Lasmezas's study showed that 55% of mice injected with cattle BSE, and who came down with disease, had no detectable prions. Still, incredibly, prions, are held as existing TSE dogma and Heino Dringer, who did pioneer work on their nature, candidly predicts \"it will turn out that the prion concept is wrong.\" Many animals that die of spongiform TSE's never show evidence of misfolded proteins, and Dr. Frank Bastian, of Tulane, an authority, thinks the disorder is caused by the bacterial DNA he found in this group of diseases. Recently, Roels and Walravens isolated Mycobacterium bovis it from the brain of a cow with the clinical and histopathological signs of mad cow. Moreover, epidemiologic maps of the origins and peak incidence of BSE in the UK, suggestively match those of England's areas of highest bovine tuberculosis, the Southwest, where Britain's mad cow epidemic began. The neurotoxic potential for cow tuberculosis was shown in pre-1960 England, where one quarter of all tuberculous meningitis victims suffered from Mycobacterium bovis infection. And Harley's study showed pathology identical to \"mad cow\" from systemic M. bovis in cattle, causing a tuberculous spongiform encephalitis. In addition to M. bovis, Mycobacterium avium subspecies paratuberculosis (fowl tuberculosis) causes Johne's disease, a problem known and neglected in cattle and sheep for almost a century, and rapidly emerging as the disease of the new millennium. Not only has M. paratuberculosis been found in human Crohn's disease, but both Crohn's and Johne's both cross-react with the antigens of cattle paratuberculosis. Furthermore, central neurologic manifestations of Crohn's disease are not unknown. There is no known disease which better fits into what is occurring in Mad Cow and the spongiform enchephalopathies than bovine tuberculosis and its blood-brain barrier penetrating, virus-like, cell-wall-deficient forms. It is for these reasons that future research needs to be aimed in this direction.",
"title": ""
},
{
"docid": "10b6b29254236c600040d27498f40feb",
"text": "Large-scale clustering has been widely used in many applications, and has received much attention. Most existing clustering methods suffer from both expensive computation and memory costs when applied to large-scale datasets. In this paper, we propose a novel clustering method, dubbed compressed k-means (CKM), for fast large-scale clustering. Specifically, high-dimensional data are compressed into short binary codes, which are well suited for fast clustering. CKM enjoys two key benefits: 1) storage can be significantly reduced by representing data points as binary codes; 2) distance computation is very efficient using Hamming metric between binary codes. We propose to jointly learn binary codes and clusters within one framework. Extensive experimental results on four large-scale datasets, including two million-scale datasets demonstrate that CKM outperforms the state-of-theart large-scale clustering methods in terms of both computation and memory cost, while achieving comparable clustering accuracy.",
"title": ""
},
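The speed argument above rests on Hamming distances between short binary codes being cheap to compute; a minimal illustration (packed integer codes with hypothetical values) is an XOR followed by a popcount:

def hamming(code_a: int, code_b: int) -> int:
    """Number of differing bits between two binary codes stored as integers."""
    return bin(code_a ^ code_b).count("1")

def nearest_center(code: int, centers: list[int]) -> int:
    """Index of the binary cluster center closest in Hamming distance."""
    return min(range(len(centers)), key=lambda k: hamming(code, centers[k]))

centers = [0b1010_1100, 0b0111_0001]          # two 8-bit cluster centers (illustrative)
print(nearest_center(0b1010_1111, centers))   # -> 0 (differs from center 0 in 2 bits)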
{
"docid": "3c135cae8654812b2a4f805cec78132e",
"text": "Binarized Neural Network (BNN) removes bitwidth redundancy in classical CNN by using a single bit (-1/+1) for network parameters and intermediate representations, which has greatly reduced the off-chip data transfer and storage overhead. However, a large amount of computation redundancy still exists in BNN inference. By analyzing local properties of images and the learned BNN kernel weights, we observe an average of ~78% input similarity and ~59% weight similarity among weight kernels, measured by our proposed metric in common network architectures. Thus there does exist redundancy that can be exploited to further reduce the amount of on-chip computations.\n Motivated by the observation, in this paper, we proposed two types of fast and energy-efficient architectures for BNN inference. We also provide analysis and insights to pick the better strategy of these two for different datasets and network models. By reusing the results from previous computation, much cycles for data buffer access and computations can be skipped. By experiments, we demonstrate that 80% of the computation and 40% of the buffer access can be skipped by exploiting BNN similarity. Thus, our design can achieve 17% reduction in total power consumption, 54% reduction in on-chip power consumption and 2.4× maximum speedup, compared to the baseline without applying our reuse technique. Our design also shows 1.9× more area-efficiency compared to state-of-the-art BNN inference design. We believe our deployment of BNN on FPGA leads to a promising future of running deep learning models on mobile devices.",
"title": ""
},
{
"docid": "a027c9dd3b4522cdf09a2238bfa4c37e",
"text": "Distributed word representations, or word vectors, have recently been applied to many tasks in natural language processing, leading to state-of-the-art performance. A key ingredient to the successful application of these representations is to train them on very large corpora, and use these pre-trained models in downstream tasks. In this paper, we describe how we trained such high quality word representations for 157 languages. We used two sources of data to train these models: the free online encyclopedia Wikipedia and data from the common crawl project. We also introduce three new word analogy datasets to evaluate these word vectors, for French, Hindi and Polish. Finally, we evaluate our pre-trained word vectors on 10 languages for which evaluation datasets exists, showing very strong performance compared to previous models.",
"title": ""
},
{
"docid": "e0117deae4c8ba64c338e56f08fb0968",
"text": "Denoising auto-encoders (DAEs) have been successfully used to learn new representations for a wide range of machine learning tasks. During training, DAEs make many passes over the training dataset and reconstruct it from partial corruption generated from a pre-specified corrupting distribution. This process learns robust representation, though at the expense of requiring many training epochs, in which the data is explicitly corrupted. In this paper we present the marginalized Denoising Auto-encoder (mDAE), which (approximately) marginalizes out the corruption during training. Effectively, the mDAE takes into account infinitely many corrupted copies of the training data in every epoch, and therefore is able to match or outperform the DAE with much fewer training epochs. We analyze our proposed algorithm and show that it can be understood as a classic auto-encoder with a special form of regularization. In empirical evaluations we show that it attains 1-2 order-of-magnitude speedup in training time over other competing approaches.",
"title": ""
},
{
"docid": "09b51a81775f598abed9c401c8f5617d",
"text": "In this work, we propose a novel method to involve full-scale-features into the fully convolutional neural networks (FCNs) for Semantic Segmentation. Current works on FCN has brought great advances in the task of semantic segmentation, but the receptive field, which represents region areas of input volume connected to any output neuron, limits the available information of output neuron’s prediction accuracy. We investigate how to involve the full-scale or full-image features into FCNs to enrich the receptive field. Specially, the fullscale feature network (FFN) extends the full-connected network and makes an end-to-end unified training structure. It has two appealing properties. First, the introduction of full-scale-features is beneficial for prediction. We build a unified extracting network and explore several fusion functions for concatenating features. Amounts of experiments have been carried out to prove that full-scale-features makes fair accuracy raising. Second, FFN is applicable to many variants of FCN which could be regarded as a general strategy to improve the segmentation accuracy. Our proposed method is evaluated on PASCAL VOC 2012, and achieves a state-of-art result.",
"title": ""
},
{
"docid": "744d7ce024289df3f32c0d5d3ec6becf",
"text": "Three homeotic mutants, aristapedia (ssa and ssa-UCl) and Nasobemia (Ns) which involve antenna-leg transformations were analyzed with respect to their time of expression. In particular we studied the question of whether these mutations are expressed when the mutant cells pass through additional cell divisions in culture. Mutant antennal discs were cultured in vivo and allowed to duplicate the antennal anlage. Furthermore, regeneration of the mutant antennal anlage was obtained by culturing eye discs and a particular fragment of the eye disc. Both duplicated and regenerated antennae showed at least a partial transformation into leg structures which indicates that the mutant gene is expressed during proliferation in culture.",
"title": ""
},
{
"docid": "f811ec2ab6ce7e279e97241dc65de2a5",
"text": "Summary Kraljic's purchasing portfolio approach has inspired many academic writers to undertake further research into purchasing portfolio models. Although it is evident that power and dependence issues play an important role in the Kraljic matrix, scant quantitative research has been undertaken in this respect. In our study we have filled this gap by proposing quantitative measures for ‘relative power’ and ‘total interdependence’. By undertaking a comprehensive survey among Dutch purchasing professionals, we have empirically quantified ‘relative power’ and ‘total interdependence’ for each quadrant of the Kraljic portfolio matrix. We have compared theoretical expectations on power and dependence levels with our empirical findings. A remarkable finding is the observed supplier dominance in the strategic quadrant of the Kraljic matrix. This indicates that the supplier dominates even satisfactory partnerships. In the light of this finding future research cannot assume any longer that buyersupplier relationships in the strategic quadrant of the Kraljic matrix are necessarily characterised by symmetric power. 1 Marjolein C.J. Caniëls, Open University of the Netherlands (OUNL), Faculty of Management Sciences (MW), P.O. Box 2960, 6401 DL Heerlen, the Netherlands. Tel: +31 45 5762724; Fax: +31 45 5762103 E-mail: marjolein.caniels@ou.nl 2 Cees J. Gelderman, Open University of the Netherlands (OUNL), Faculty of Management Sciences (MW) P.O. Box 2960, 6401 DL Heerlen, the Netherlands. Tel: +31 45 5762590; Fax: +31 45 5762103 E-mail: kees.gelderman@ou.nl",
"title": ""
},
{
"docid": "f8b201105e3b92ed4ef2a884cb626c0d",
"text": "Several years of academic and industrial research efforts have converged to a common understanding on fundamental security building blocks for the upcoming vehicular communication (VC) systems. There is a growing consensus toward deploying a special-purpose identity and credential management infrastructure, i.e., a vehicular public-key infrastructure (VPKI), enabling pseudonymous authentication, with standardization efforts toward that direction. In spite of the progress made by standardization bodies (IEEE 1609.2 and ETSI) and harmonization efforts [Car2Car Communication Consortium (C2C-CC)], significant questions remain unanswered toward deploying a VPKI. Deep understanding of the VPKI, a central building block of secure and privacy-preserving VC systems, is still lacking. This paper contributes to the closing of this gap. We present SECMACE, a VPKI system, which is compatible with the IEEE 1609.2 and ETSI standards specifications. We provide a detailed description of our state-of-the-art VPKI that improves upon existing proposals in terms of security and privacy protection, and efficiency. SECMACE facilitates multi-domain operations in the VC systems and enhances user privacy, notably preventing linking pseudonyms based on timing information and offering increased protection even against honest-but-curious VPKI entities. We propose multiple policies for the vehicle–VPKI interactions and two large-scale mobility trace data sets, based on which we evaluate the full-blown implementation of SECMACE. With very little attention on the VPKI performance thus far, our results reveal that modest computing resources can support a large area of vehicles with very few delays and the most promising policy in terms of privacy protection can be supported with moderate overhead.",
"title": ""
},
{
"docid": "d214ef50a5c26fb65d8c06ea7db3d07c",
"text": "We introduce a method for learning to generate the surface of 3D shapes. Our approach represents a 3D shape as a collection of parametric surface elements and, in contrast to methods generating voxel grids or point clouds, naturally infers a surface representation of the shape. Beyond its novelty, our new shape generation framework, AtlasNet, comes with significant advantages, such as improved precision and generalization capabilities, and the possibility to generate a shape of arbitrary resolution without memory issues. We demonstrate these benefits and compare to strong baselines on the ShapeNet benchmark for two applications: (i) autoencoding shapes, and (ii) single-view reconstruction from a still image. We also provide results showing its potential for other applications, such as morphing, parametrization, super-resolution, matching, and co-segmentation.",
"title": ""
},
{
"docid": "ed7c4d2c562a4ad6d9e8d0fc0fc589e3",
"text": "The reported research extends classic findings that after briefly viewing structured, but not random, chess positions, chess masters reproduce these positions much more accurately than less-skilled players. Using a combination of the gaze-contingent window paradigm and the change blindness flicker paradigm, we documented dramatically larger visual spans for experts while processing structured, but not random, chess positions. In addition, in a check-detection task, a minimized 3 x 3 chessboard containing a King and potentially checking pieces was displayed. In this task, experts made fewer fixations per trial than less-skilled players, and had a greater proportion of fixations between individual pieces, rather than on pieces. Our results provide strong evidence for a perceptual encoding advantage for experts attributable to chess experience, rather than to a general perceptual or memory superiority.",
"title": ""
},
{
"docid": "a0e4080652269445c6e36b76d5c8cd09",
"text": "Enabling a computer to understand a document so that it can answer comprehension questions is a central, yet unsolved goal of NLP. A key factor impeding its solution by machine learned systems is the limited availability of human-annotated data. Hermann et al. (2015) seek to solve this problem by creating over a million training examples by pairing CNN and Daily Mail news articles with their summarized bullet points, and show that a neural network can then be trained to give good performance on this task. In this paper, we conduct a thorough examination of this new reading comprehension task. Our primary aim is to understand what depth of language understanding is required to do well on this task. We approach this from one side by doing a careful hand-analysis of a small subset of the problems and from the other by showing that simple, carefully designed systems can obtain accuracies of 72.4% and 75.8% on these two datasets, exceeding current state-of-the-art results by over 5% and approaching what we believe is the ceiling for performance on this task.1",
"title": ""
},
{
"docid": "22e5e34e2df4c02a4df3255dbabf1fcb",
"text": "In this paper, we propose to show how video data available in standard CCTV transportation systems can represent a useful source of information for transportation infrastructure management, optimization and planning if adequately analyzed (e.g. to facilitate equipment usage understanding, to ease diagnostic and planning for system managers). More precisely, we present two algorithms allowing to estimate the number of people in a camera view and to measure the platform time-occupancy by trains. A statistical analysis of the results of each algorithm provide interesting insights regarding station usage. It is also shown that combining information from the algorithms in different views provide a finer understanding of the station usage. An end-user point of view confirms the interest of the proposed analysis.",
"title": ""
},
{
"docid": "3f26885065251a6108072b4c0b4de5df",
"text": "We present a Few-Shot Relation Classification Dataset (FewRel), consisting of 70, 000 sentences on 100 relations derived from Wikipedia and annotated by crowdworkers. The relation of each sentence is first recognized by distant supervision methods, and then filtered by crowdworkers. We adapt the most recent state-of-the-art few-shot learning methods for relation classification and conduct thorough evaluation of these methods. Empirical results show that even the most competitive few-shot learning models struggle on this task, especially as compared with humans. We also show that a range of different reasoning skills are needed to solve our task. These results indicate that few-shot relation classification remains an open problem and still requires further research. Our detailed analysis points multiple directions for future research. All details and resources about the dataset and baselines are released on http://zhuhao.me/fewrel.",
"title": ""
},
{
"docid": "d650d20b0179eabd24e5d8381e9d5cc2",
"text": "Despite the massive popularity of probabilistic (association) football forecasting models, and the relative simplicity of the outcome of such forecasts (they require only three probability values corresponding to home win, draw, and away win) there is no agreed scoring rule to determine their forecast accuracy. Moreover, the various scoring rules used for validation in previous studies are inadequate since they fail to recognise that football outcomes represent a ranked (ordinal) scale. This raises severe concerns about the validity of conclusions from previous studies. There is a well-established generic scoring rule, the Rank Probability Score (RPS), which has been missed by previous researchers, but which properly assesses football forecasting models.",
"title": ""
},
{
"docid": "de052fc7092f8baa599cf8c79ecd8059",
"text": "In this paper, we propose a novel multi-task learning architecture, which incorporates recent advances in attention mechanisms. Our approach, the Multi-Task Attention Network (MTAN), consists of a single shared network containing a global feature pool, together with task-specific soft-attention modules, which are trainable in an end-to-end manner. These attention modules allow for learning of task-specific features from the global pool, whilst simultaneously allowing for features to be shared across different tasks. The architecture can be built upon any feed-forward neural network, is simple to implement, and is parameter efficient. Experiments on the CityScapes dataset show that our method outperforms several baselines in both single-task and multi-task learning, and is also more robust to the various weighting schemes in the multitask loss function. We further explore the effectiveness of our method through experiments over a range of task complexities, and show how our method scales well with task complexity compared to baselines.",
"title": ""
},
{
"docid": "f70b6d0a0b315a1ca87ccf5184c43da4",
"text": "Transmitting secret information through internet requires more security because of interception and improper manipulation by eavesdropper. One of the most desirable explications of this is “Steganography”. This paper proposes a technique of steganography using Advanced Encryption Standard (AES) with secured hash function in the blue channel of image. The embedding system is done by dynamic bit adjusting system in blue channel of RGB images. It embeds message bits to deeper into the image intensity which is very difficult for any type improper manipulation of hackers. Before embedding text is encrypted using AES with a hash function. For extraction the cipher text bit is found from image intensity using the bit adjusting extraction algorithm and then it is decrypted by AES with same hash function to get the real secret text. The proposed approach is better in Pick Signal to Noise Ratio (PSNR) value and less in histogram error between stego images and cover images than some existing systems. KeywordsAES-128, SHA-512, Cover Image, Stego image, Bit Adjusting, Blue Channel",
"title": ""
},
{
"docid": "8cb6a2a3014bd3a7f945abd4cb2ffe88",
"text": "In order to identify and explore the strength and weaknesses of particular organizational designs, a wide range of maturity models have been developed by both, practitioners and academics over the past years. However, a systematization and generalization of the procedure on how to design maturity models as well as a synthesis of design science research with the rather behavioural field of organization theory is still lacking. Trying to combine the best of both fields, a first design proposition of a situational maturity model is presented in this paper. The proposed maturity model design is illustrated with the help of an instantiation for the healthcare domain.",
"title": ""
},
{
"docid": "aa60d0d73efdf21adcc95c6ad7a7dbc3",
"text": "While hardware obfuscation has been used in industry for many years, very few scientific papers discuss layout-level obfuscation. The main aim of this paper is to start a discussion about hardware obfuscation in the academic community and point out open research problems. In particular, we introduce a very flexible layout-level obfuscation tool that we use as a case study for hardware obfuscation. In this obfuscation tool, a small custom-made obfuscell is used in conjunction with a standard cell to build a new obfuscated standard cell library called Obfusgates. This standard cell library can be used to synthesize any HDL code with standard synthesis tools, e.g. Synopsis Design Compiler. However, only obfuscating the functionality of individual gates is not enough. Not only the functionality of individual gates, but also their connectivity, leaks important important information about the design. In our tool we therefore designed the obfuscation gates to include a large number of \"dummy wires\". Due to these dummy wires, the connectivity of the gates in addition to their logic functionality is obfuscated. We argue that this aspect of obfuscation is of great importance in practice and that there are many interesting open research questions related to this.",
"title": ""
},
{
"docid": "44e0cd40b9a06abd5a4e54524b214dce",
"text": "A large majority of road accidents are relative to driver fatigue, distraction and drowsiness which are widely believed to be the largest contributors to fatalities and severe injuries, either as a direct cause of falling asleep at the wheel or as a contributing factor in lowering the attention and reaction time of a driver in critical situations. Thus to prevent road accidents, a countermeasure device has to be used. This paper illuminates and highlights the various measures that have been studied to detect drowsiness such as vehicle based, physiological based, and behavioural based measures. The main objective is to develop a real time non-contact system which will be able to identify driver’s drowsiness beforehand. The system uses an IR sensitive monochrome camera that detects the position and state of the eyes to calculate the drowsiness of a driver. Once the driver is detected as drowsy, the system will generate warning signals to alert the driver. In case the signal is not re-established the system will shut off the engine to prevent any mishap. Keywords— Drowsiness, Road Accidents, Eye Detection, Face Detection, Blink Pattern, PERCLOS, MATLAB, Arduino Nano",
"title": ""
}
] |
scidocsrr
|
e590c9f6a2d1fae86ca555f616c412ef
|
The Impact of Deep Hierarchical Discourse Structures in the Evaluation of Text Coherence
|
[
{
"docid": "ba129dec7a922884759bfec3f5f3048e",
"text": "Text-level discourse parsing remains a challenge. The current state-of-the-art overall accuracy in relation assignment is 55.73%, achieved by Joty et al. (2013). However, their model has a high order of time complexity, and thus cannot be applied in practice. In this work, we develop a much faster model whose time complexity is linear in the number of sentences. Our model adopts a greedy bottom-up approach, with two linear-chain CRFs applied in cascade as local classifiers. To enhance the accuracy of the pipeline, we add additional constraints in the Viterbi decoding of the first CRF. In addition to efficiency, our parser also significantly outperforms the state of the art. Moreover, our novel approach of post-editing, which modifies a fully-built tree by considering information from constituents on upper levels, can further improve the accuracy.",
"title": ""
}
] |
[
{
"docid": "d7d0fa6279b356d37c2f64197b3d721d",
"text": "Estimating the pose of a human in 3D given an image or a video has recently received significant attention from the scientific community. The main reasons for this trend are the ever increasing new range of applications (e.g., humanrobot interaction, gaming, sports performance analysis) which are driven by current technological advances. Although recent approaches have dealt with several challenges and have reported remarkable results, 3D pose estimation remains a largely unsolved problem because real-life applications impose several challenges which are not fully addressed by existing methods. For example, estimating the 3D pose of multiple people in an outdoor environment remains a largely unsolved problem. In this paper, we review the recent advances in 3D human pose estimation from RGB images or image sequences. We propose a taxonomy of the approaches based on the input (e.g., single image or video, monocular or multi-view) and in each case we categorize the methods according to their key characteristics. To provide an overview of the current capabilities, we conducted an extensive experimental evaluation of state-of-the-art approaches in a synthetic dataset created specifically for this task, which along with its ground truth is made publicly available for research purposes. Finally, we provide an in-depth discussion of the insights obtained from reviewing the literature and the results of our experiments. Future directions and challenges are identified.",
"title": ""
},
{
"docid": "ab47dbcafba637ae6e3b474642439bd3",
"text": "Ear detection from a profile face image is an important step in many applications including biometric recognition. But accurate and rapid detection of the ear for real-time applications is a challenging task, particularly in the presence of occlusions. In this work, a cascaded AdaBoost based ear detection approach is proposed. In an experiment with a test set of 203 profile face images, all the ears were accurately detected by the proposed detector with a very low (5 x 10-6) false positive rate. It is also very fast and relatively robust to the presence of occlusions and degradation of the ear images (e.g. motion blur). The detection process is fully automatic and does not require any manual intervention.",
"title": ""
},
{
"docid": "a33c723760f9870744ab004b693e8904",
"text": "Portfolio analysis of the publication profile of a unit of interest, ranging from individuals, organizations, to a scientific field or interdisciplinary programs, aims to inform analysts and decision makers about the position of the unit, where it has been, and where it may go in a complex adaptive environment. A portfolio analysis may aim to identify the gap between the current position of an organization and a goal that it intends to achieve or identify competencies of multiple institutions. We introduce a new visual analytic method for analyzing, comparing, and contrasting characteristics of publication portfolios. The new method introduces a novel design of dual-map thematic overlays on global maps of science. Each publication portfolio can be added as one layer of dual-map overlays over two related but distinct global maps of science, one for citing journals and the other for cited journals. We demonstrate how the new design facilitates a portfolio analysis in terms of patterns emerging from the distributions of citation threads and the dynamics of trajectories as a function of space and time. We first demonstrate the analysis of portfolios defined on a single source article. Then we contrast publication portfolios of multiple comparable units of interest, namely, colleges in universities, corporate research organizations. We also include examples of overlays of scientific fields. We expect the new method will provide new insights to portfolio analysis.",
"title": ""
},
{
"docid": "0e218dd5654ae9125d40bdd5c0a326d6",
"text": "Dynamic data race detection incurs heavy runtime overheads. Recently, many sampling techniques have been proposed to detect data races. However, some sampling techniques (e.g., Pacer) are based on traditional happens-before relation and incur a large basic overhead. Others utilize hardware to reduce their sampling overhead (e.g., DataCollider) and they, however, detect a race only when the race really occurs by delaying program executions. In this paper, we study the limitations of existing techniques and propose a new data race definition, named as Clock Races, for low overhead sampling purpose. The innovation of clock races is that the detection of them does not rely on concrete locks and also avoids heavy basic overhead from tracking happens-before relation. We further propose CRSampler (Clock Race Sampler) to detect clock races via hardware based sampling without directly delaying program executions, to further reduce runtime overhead. We evaluated CRSampler on Dacapo benchmarks. The results show that CRSampler incurred less than 5% overhead on average at 1% sampling rate. Whereas, Pacer and DataCollider incurred larger than 25% and 96% overhead, respectively. Besides, at the same sampling rate, CRSampler detected significantly more data races than that by Pacer and DataCollider.",
"title": ""
},
{
"docid": "4b0cf6392d84a0cc8ab80c6ed4796853",
"text": "This paper introduces the Finite-State TurnTaking Machine (FSTTM), a new model to control the turn-taking behavior of conversational agents. Based on a non-deterministic finite-state machine, the FSTTM uses a cost matrix and decision theoretic principles to select a turn-taking action at any time. We show how the model can be applied to the problem of end-of-turn detection. Evaluation results on a deployed spoken dialog system show that the FSTTM provides significantly higher responsiveness than previous approaches.",
"title": ""
},
{
"docid": "b4b5f751294ab44e913279ca883abe7b",
"text": "Personal and shared vision have a long history in management and organizational practices yet only recently have we begun to build a systematic body of empirical knowledge about the role of personal and shared vision in organizations. As the introductory paper for this special topic in Frontiers in Psychology, we present a theoretical argument as to the existence and critical role of two states in which a person, dyad, team, or organization may find themselves when engaging in the creation of a personal or shared vision: the positive emotional attractor (PEA) and the negative emotional attractor (NEA). These two primary states are strange attractors, each characterized by three dimensions: (1) positive versus negative emotional arousal; (2) endocrine arousal of the parasympathetic nervous system versus sympathetic nervous system; and (3) neurological activation of the default mode network versus the task positive network. We argue that arousing the PEA is critical when creating or affirming a personal vision (i.e., sense of one's purpose and ideal self). We begin our paper by reviewing the underpinnings of our PEA-NEA theory, briefly review each of the papers in this special issue, and conclude by discussing the practical implications of the theory.",
"title": ""
},
{
"docid": "9e188833829fafc941d199a59c4d627b",
"text": "The human brain is not a passive organ simply waiting to be activated by external stimuli. Instead, we propose that the brain continuously employs memory of past experiences to interpret sensory information and predict the immediately relevant future. The basic elements of this proposal include analogical mapping, associative representations and the generation of predictions. This review concentrates on visual recognition as the model system for developing and testing ideas about the role and mechanisms of top-down predictions in the brain. We cover relevant behavioral, computational and neural aspects, explore links to emotion and action preparation, and consider clinical implications for schizophrenia and dyslexia. We then discuss the extension of the general principles of this proposal to other cognitive domains.",
"title": ""
},
{
"docid": "ca6001c3ed273b4f23565f4d40ddeb29",
"text": "Learning semantic representations and tree structures of bilingual phrases is beneficial for statistical machine translation. In this paper, we propose a new neural network model called Bilingual Correspondence Recursive Autoencoder (BCorrRAE) to model bilingual phrases in translation. We incorporate word alignments into BCorrRAE to allow it freely access bilingual constraints at different levels. BCorrRAE minimizes a joint objective on the combination of a recursive autoencoder reconstruction error, a structural alignment consistency error and a crosslingual reconstruction error so as to not only generate alignment-consistent phrase structures, but also capture different levels of semantic relations within bilingual phrases. In order to examine the effectiveness of BCorrRAE, we incorporate both semantic and structural similarity features built on bilingual phrase representations and tree structures learned by BCorrRAE into a state-of-the-art SMT system. Experiments on NIST Chinese-English test sets show that our model achieves a substantial improvement of up to 1.55 BLEU points over the baseline.",
"title": ""
},
{
"docid": "b0838169b054977874418a40ef9b075d",
"text": "A high linearity CMOS RF (radio frequency) power amplifier, which has the advantage of being low-cost and easily integrated on chip has become the technology of choice for RFID readers. This work demonstrates a 3.3V single voltage self-biased 2.4GHz--2.5GHz high linearity RF power amplifier for RFID applications.",
"title": ""
},
{
"docid": "9cebd0ff0e218d742e44ebe05fb2e394",
"text": "Studies supporting the notion that physical activity and exercise can help alleviate the negative impact of age on the body and the mind abound. This literature review provides an overview of important findings in this fast growing research domain. Results from cross-sectional, longitudinal, and intervention studies with healthy older adults, frail patients, and persons suffering from mild cognitive impairment and dementia are reviewed and discussed. Together these finding suggest that physical exercise is a promising nonpharmaceutical intervention to prevent age-related cognitive decline and neurodegenerative diseases.",
"title": ""
},
{
"docid": "dfb16d97d293776e255397f1dc49bbbf",
"text": "Self-service automatic teller machines (ATMs) have dramatically altered the ways in which customers interact with banks. ATMs provide the convenience of completing some banking transactions remotely and at any time. AT&T Global Information Solutions (GIS) is the world's leading provider of ATMs. These machines support such familiar services as cash withdrawals and balance inquiries. Further technological development has extended the utility and convenience of ATMs produced by GIS by facilitating check cashing and depositing, as well as direct bill payment, using an on-line system. These enhanced services, discussed in this paper, are made possible primarily through sophisticated optical character recognition (OCR) technology. Developed by an AT&T team that included GIS, AT&T Bell Laboratories Quality, Engineering, Software, and Technologies (QUEST), and AT&T Bell Laboratories Research, OCR technology was crucial to the development of these advanced ATMs.",
"title": ""
},
{
"docid": "0ba9b70029eda6c7de02adcd71b817ff",
"text": "We present a real-time model-based line tracking approach with adaptive learning of image edge features that can handle partial occlusion and illumination changes. A CAD (VRML) model of the object to track is needed. First, the visible edges of the model with respect to the camera pose estimate are sorted out by a visibility test performed on standard graphics hardware. For every sample point of every projected visible 3D model line a search for gradient maxima in the image is then carried out in a direction perpendicular to that line. Multiple hypotheses of these maxima are considered as putative matches. The camera pose is updated by minimizing the distances between the projection of all sample points of the visible 3D model lines and the most likely matches found in the image. The state of every edge's visual properties is updated after each successful camera pose estimation. We evaluated the algorithm and showed the improvements compared to other tracking approaches.",
"title": ""
},
{
"docid": "3473417f1701c82a4a06c00545437a3c",
"text": "The eXtensible Markup Language (XML) and related technologies offer promise for (among other things) applying data management technology to documents, and also for providing a neutral syntax for interoperability among disparate systems. But like many new technologies, it has raised unrealistic expectations. We give an overview of XML and related standards, and offer opinions to help separate vaporware (with a chance of solidifying) from hype. In some areas, XML technologies may offer revolutionary improvements, such as in processing databases' outputs and extending data management to semi-structured data. For some goals, either a new class of DBMSs is required, or new standards must be built. For such tasks, progress will occur, but may be measured in ordinary years rather than Web time. For hierarchical formatted messages that do not need maximum compression (e.g., many military messages), XML may have considerable benefit. For interoperability among enterprise systems, XML's impact may be moderate as an improved basis for software, but great in generating enthusiasm for standardizing concepts and schemas.",
"title": ""
},
{
"docid": "56ea461f00ef3dd9d760f122d405da81",
"text": "Neuronal apoptosis sculpts the developing brain and has a potentially important role in neurodegenerative diseases. The principal molecular components of the apoptosis programme in neurons include Apaf-1 (apoptotic protease-activating factor 1) and proteins of the Bcl-2 and caspase families. Neurotrophins regulate neuronal apoptosis through the action of critical protein kinase cascades, such as the phosphoinositide 3-kinase/Akt and mitogen-activated protein kinase pathways. Similar cell-death-signalling pathways might be activated in neurodegenerative diseases by abnormal protein structures, such as amyloid fibrils in Alzheimer's disease. Elucidation of the cell death machinery in neurons promises to provide multiple points of therapeutic intervention in neurodegenerative diseases.",
"title": ""
},
{
"docid": "a7acd2da721136143ebd9608a041236b",
"text": "Mr M, a patient with semantic dementia — a neurodegenerative disease that is characterized by the gradual deterioration of semantic memory — was being driven through the countryside to visit a friend and was able to remind his wife where to turn along the not-recently-travelled route. Then, pointing at the sheep in the field, he asked her “What are those things?” Prior to the onset of symptoms in his late 40s, this man had normal semantic memory. What has gone wrong in his brain to produce this dramatic and selective erosion of conceptual knowledge?",
"title": ""
},
{
"docid": "261ef8b449727b615f8cd5bd458afa91",
"text": "Luck (2009) argues that gamers face a dilemma when it comes to performing certain virtual acts. Most gamers regularly commit acts of virtual murder, and take these acts to be morally permissible. They are permissible because unlike real murder, no one is harmed in performing them; their only victims are computer-controlled characters, and such characters are not moral patients. What Luck points out is that this justification equally applies to virtual pedophelia, but gamers intuitively think that such acts are not morally permissible. The result is a dilemma: either gamers must reject the intuition that virtual pedophelic acts are impermissible and so accept partaking in such acts, or they must reject the intuition that virtual murder acts are permissible, and so abstain from many (if not most) extant games. While the prevailing solution to this dilemma has been to try and find a morally relevant feature to distinguish the two cases, I argue that a different route should be pursued. It is neither the case that all acts of virtual murder are morally permissible, nor are all acts of virtual pedophelia impermissible. Our intuitions falter and produce this dilemma because they are not sensitive to the different contexts in which games present virtual acts.",
"title": ""
},
{
"docid": "aa8a60e058fa783aeaaec092fb2496f8",
"text": "Forthcoming Fifth Generation wireless network will be converged version of all the available wireless and wired networks including cognitive radio network, Wi-Fi, cellular, Wi-Max, WSN, IoT, Li-Fi, satellite communication and optical fiber network. Presently in the era of 4G, numbers of researchers have designed different antennas which can integrate maximum four services. This research work proposes design and implementation of Multiband Fractal smart antenna for at least seven converged wireless network services. The antenna works for several services such as GSM (0.89-0.96GHz), DCS (1.71-1.88GHz), WLAN/WSN (2.44-2.45GHz), LTE (1.7-1.9GHz), Wi-Fi/Wi-Max (2.68-6.45GHz), HIPER LAN2 (5.15-5.25GHz) and Ku Band (8-12GHz) application. The antenna planed to be designed and optimized by making using of commercially available software-High Frequency Structure Simulator (HFSS).",
"title": ""
},
{
"docid": "5f1abca1f9c3244b4f1655e34a7a9765",
"text": "This paper conceptualizes and develops valid measurements of the key dimensions of information systems development project (ISDP) complexity. A conceptual framework is proposed to define four components of ISDP complexity: structural organizational complexity, structural IT complexity, dynamic organizational complexity, and dynamic IT complexity. Measures of ISDP complexity are generated based on literature review, field interviews, focus group discussions and two pilot tests with 76 IS managers. The measures are then tested using both exploratory and confirmatory data analyses with survey responses from managers of 541 ISDPs. Results from both the exploratory and confirmatory analyses support the fourcomponent conceptualization of ISDP complexity. The final 20-item measurements of ISDP complexity are shown to adequately satisfy the criteria for unidimensionality, convergent validity, discriminant validity, reliability, factorial invariance across different types of ISDPs, and nomological validity. Implications of the study results to theory development and practice as well as future research directions are discussed.",
"title": ""
},
{
"docid": "c50af4826396403ad41c7ea041a81219",
"text": "The estimation of the human’s point of visual gaze is important for many applications. This information can be used in visual gaze based human-computer interaction, advertisement, human cognitive state analysis, attentive interfaces, human behavior analysis. Visual gaze direction can also provide high-level semantic cues such as who is speaking to whom, information on non-verbal communication and the mental state/attention of a human (e.g., a driver). Overall, the visual gaze direction is important to understand the human’s attention, motivation and intention. There is a tremendous amount of research concerned with the estimation of the point of visual gaze first attempts can be traced back 1989. Many visual gaze trackers are offered to this date, although they all share the same drawbacks either they are highly intrusive (head mounted systems, electro-ocolography methods, sclera search coil methods), either they restrict the user to keep his/her head as static as possible (PCCR methods, neural network methods, ellipse fitting methods) or they rely on expensive and sophisticated hardware and prior knowledge in order to grant the freedom of natural head movement to the user (stereo vision methods, 3D eye modeling methods). Furthermore, all proposed visual gaze trackers require an user specific calibration procedure that can be uncomfortable and erroneous, leading to a negative impact on the accuracy of the tracker. Although, some of these trackers achieve extremely high accuracy, they lack of simplicity and can not be efficiently used in everyday life. This manuscript investigates and proposes a visual gaze tracker that tackles the problem using only an ordinary web camera and no prior knowledge in any sense (scene set-up, camera intrinsic and/or extrinsic parameters). The tracker we propose is based on the observation that our desire to grant the freedom of natural head movement to the user requires 3D modeling of the scene set-up. Although, using a single low resolution web camera bounds us in dimensions (no depth can be recovered), we propose ways to cope with this drawback and model the scene in front of the user. We tackle this three-dimensional problem by",
"title": ""
},
{
"docid": "96973058d3ca943f3621dfe843baf631",
"text": "Many organizations are gradually catching up with the tide of adopting agile practices at workplace, but they seem to be struggling with how to choose the agile practices and mix them into their IT software project development and management. These organizations have already had their own development styles, many of which have adhered to the traditional plan-driven methods such as waterfall. The inherent corporate culture of resisting to change or hesitation to abandon what they have established for a whole new methodology hampers the process change. In this paper, we will review the current state of agile adoption in business organizations and propose a new approach to IT project development and management by blending Scrum, an agile method, into traditional plan-driven project development and management. The management activity involved in Scrum is discussed, the team and meeting composing of Scrum are investigated, the challenges and benefits of applying Scrum in traditional IT project development and management are analyzed, the blending structure is illustrated and discussed, and the iterative process with Scrum and planned process without Scrum are compared.",
"title": ""
}
] |
scidocsrr
|
f3dd57cd23d98c1ec32c354b58e32406
|
Public Review for Knowledge-Defined Networking
|
[
{
"docid": "ae4ffd43ea098581aa1d1980e61ebe6c",
"text": "In response to the new challenges in the design and operation of communication networks, and taking inspiration from how living beings deal with complexity and scalability, in this position paper we introduce an innovative system concept called COgnition-BAsed NETworkS (COBANETS). The proposed approach develops around the systematic application of advanced machine learning techniques and, in particular, unsupervised deep learning and probabilistic generative models for system-wide learning, modeling, optimization, and data representation. Moreover, in COBANETS we propose to combine the learning architecture with the emerging network virtualization paradigms, which make it possible to actuate automatic optimization and reconfiguration strategies at the system level, thus fully unleashing the potential of the learning approach. Compared to past and current research efforts in this area, the technical approach depicted in this paper is deeply interdisciplinary and more comprehensive, calling for the synergic combination of expertise of computer scientists, communications and networking engineers, and cognitive scientists, with the ultimate aim of breaking new ground through a profound rethinking of how the modern understanding of cognition can be used in the management and optimization of telecommunication networks.",
"title": ""
},
{
"docid": "bf0d5ee15b213c47d9d4a6a95d19e14a",
"text": "We propose a new objective for network research: to build a fundamentally different sort of network that can assemble itself given high level instructions, reassemble itself as requirements change, automatically discover when something goes wrong, and automatically fix a detected problem or explain why it cannot do so.We further argue that to achieve this goal, it is not sufficient to improve incrementally on the techniques and algorithms we know today. Instead, we propose a new construct, the Knowledge Plane, a pervasive system within the network that builds and maintains high-level models of what the network is supposed to do, in order to provide services and advice to other elements of the network. The knowledge plane is novel in its reliance on the tools of AI and cognitive systems. We argue that cognitive techniques, rather than traditional algorithmic approaches, are best suited to meeting the uncertainties and complexity of our objective.",
"title": ""
},
{
"docid": "8b3ad3d48da22c529e65c26447265372",
"text": "It is demonstrated that neural networks can be used effectively for the identification and control of nonlinear dynamical systems. The emphasis is on models for both identification and control. Static and dynamic backpropagation methods for the adjustment of parameters are discussed. In the models that are introduced, multilayer and recurrent networks are interconnected in novel configurations, and hence there is a real need to study them in a unified fashion. Simulation results reveal that the identification and adaptive control schemes suggested are practically feasible. Basic concepts and definitions are introduced throughout, and theoretical questions that have to be addressed are also described.",
"title": ""
}
] |
[
{
"docid": "fbac56ecc5d477586707c9bfc1bf8196",
"text": "This paper presents implementation of a highly dynamic running gait with a hierarchical controller on the",
"title": ""
},
{
"docid": "ca69aff379826d91f10cebb0dccf8f41",
"text": "The symbiotic organisms search (SOS) algorithm is an effective metaheuristic developed in 2014, which mimics the symbiotic relationship among the living beings, such as mutualism, commensalism, and parasitism, to survive in the ecosystem. In this study, three modified versions of the SOS algorithm are proposed by introducing adaptive benefit factors in the basic SOS algorithm to improve its efficiency. The basic SOS algorithm only considers benefit factors, whereas the proposed variants of the SOS algorithm, consider effective combinations of adaptive benefit factors and benefit factors to study their competence to lay down a good balance between exploration and exploitation of the search space. The proposed algorithms are tested to suit its applications to the engineering structures subjected to dynamic excitation, which may lead to undesirable vibrations. Structure optimization problems become more challenging if the shape and size variables are taken into account along with the frequency. To check the feasibility and effectiveness of the proposed algorithms, six different planar and space trusses are subjected to experimental analysis. The results obtained using the proposed methods are compared with those obtained using other optimization methods well established in the literature. The results reveal that the adaptive SOS algorithm is more reliable and efficient than the basic SOS algorithm and other state-of-the-art algorithms.",
"title": ""
},
{
"docid": "88de6047cec54692dea08abe752acd25",
"text": "Heap-based attacks depend on a combination of memory management error and an exploitable memory allocator. Many allocators include ad hoc countermeasures against particular exploits but their effectiveness against future exploits has been uncertain. This paper presents the first formal treatment of the impact of allocator design on security. It analyzes a range of widely-deployed memory allocators, including those used by Windows, Linux, FreeBSD and OpenBSD, and shows that they remain vulnerable to attack. It them presents DieHarder, a new allocator whose design was guided by this analysis. DieHarder provides the highest degree of security from heap-based attacks of any practical allocator of which we are aware while imposing modest performance overhead. In particular, the Firefox web browser runs as fast with DieHarder as with the Linux allocator.",
"title": ""
},
{
"docid": "09581c79829599090d8f838416058c05",
"text": "This paper proposes to tackle the AMR parsing bottleneck by improving two components of an AMR parser: concept identification and alignment. We first build a Bidirectional LSTM based concept identifier that is able to incorporate richer contextual information to learn sparse AMR concept labels. We then extend an HMM-based word-to-concept alignment model with graph distance distortion and a rescoring method during decoding to incorporate the structural information in the AMR graph. We show integrating the two components into an existing AMR parser results in consistently better performance over the state of the art on various datasets.",
"title": ""
},
{
"docid": "5baf5eb1c98a06ccf129fc65f539ea35",
"text": "In this paper, we propose to incorporate topic aspects information for online comments convincingness evaluation. Our model makes use of graph convolutional network to utilize implicit topic information within a discussion thread to assist the evaluation of convincingness of each single comment. In order to test the effectiveness of our proposed model, we annotate topic information on top of a public dataset for argument convincingness evaluation. Experimental results show that topic information is able to improve the performance for convincingness evaluation. We also make a move to detect topic aspects automatically.",
"title": ""
},
{
"docid": "7bd47ba6f139905b9cfa5af8cc66ddd3",
"text": "The impact of ICT (Information Communication Technology) hotel and hospitality industries has been widely recognized as one of the major changes in the last decade: new ways of communicating with guests, using ICT to improve services delivery to guest etc. The study tried to investigate the ICT Infrastructural Diffusion in hotels in Owerri, Imo State. In order to know the extent of spread, the study examine the current ICT infrastructures being used, the rate at which its being used and the factors affecting its adoption. The data collected was analyzed using SPSS Software and Regression model was estimated. The findings revealed that the rate at which hotels adopt and use ICT infrastructure is low and the most significant factor affecting the adoption and use of ICT is scope of activities the hotel is engaged in. It is therefore recommended that Government should increase the economic activities in the state so as to increase the adoption of ICT infrastructures.",
"title": ""
},
{
"docid": "754fb355da63d024e3464b4656ea5e8d",
"text": "Improvements in implant designs have helped advance successful immediate anterior implant placement into fresh extraction sockets. Clinical techniques described in this case enable practitioners to achieve predictable esthetic success using a method that limits the amount of buccal contour change of the extraction site ridge and potentially enhances the thickness of the peri-implant soft tissues coronal to the implant-abutment interface. This approach involves atraumatic tooth removal without flap elevation, and placing a bone graft into the residual gap around an immediate fresh-socket anterior implant with a screw-retained provisional restoration acting as a prosthetic socket seal device.",
"title": ""
},
{
"docid": "07bb0aec18894ae389eea9e2756443f8",
"text": "Generative Adversarial Networks (GANs) and their extensions have carved open many exciting ways to tackle well known and challenging medical image analysis problems such as medical image denoising, reconstruction, segmentation, data simulation, detection or classification. Furthermore, their ability to synthesize images at unprecedented levels of realism also gives hope that the chronic scarcity of labeled data in the medical field can be resolved with the help of these generative models. In this review paper, a broad overview of recent literature on GANs for medical applications is given, the shortcomings and opportunities of the proposed methods are thoroughly discussed and potential future work is elaborated. A total of 63 papers published until end of July 2018 are reviewed. For quick access, the papers and important details such as the underlying method, datasets and performance are summarized in tables.",
"title": ""
},
{
"docid": "f555d9c9aeb28059138527bc190a1a10",
"text": "This paper presents a novel method for entity disambiguation in anonymized graphs using local neighborhood structure. Most existing approaches leverage node information, which might not be available in several contexts due to privacy concerns, or information about the sources of the data. We consider this problem in the supervised setting where we are provided only with a base graph and a set of nodes labelled as ambiguous or unambiguous. We characterize the similarity between two nodes based on their local neighborhood structure using graph kernels; and solve the resulting classification task using SVMs. We give empirical evidence on two real-world datasets, comparing our approach to a state-of-the-art method, highlighting the advantages of our approach. We show that using less information, our method is significantly better in terms of either speed or accuracy or both. We also present extensions of two existing graphs kernels, namely, the direct product kernel and the shortest-path kernel, with significant improvements in accuracy. For the direct product kernel, our extension also provides significant computational benefits. Moreover, we design and implement the algorithms of our method to work in a distributed fashion using the GraphLab framework, ensuring high scalability.",
"title": ""
},
{
"docid": "66b680500240631b9a4b682b33a5bafa",
"text": "Multichannel customer management is “the design, deployment, and evaluation of channels to enhance customer value through effective customer acquisition, retention, and development” (Neslin, Scott A., D. Grewal, R. Leghorn, V. Shankar, M. L. Teerling, J. S. Thomas, P. C. Verhoef (2006), Challenges and Opportunities in Multichannel Management. Journal of Service Research 9(2) 95–113). Channels typically include the store, the Web, catalog, sales force, third party agency, call center and the like. In recent years, multichannel marketing has grown tremendously and is anticipated to grow even further. While we have developed a good understanding of certain issues such as the relative value of a multichannel customer over a single channel customer, several research and managerial questions still remain. We offer an overview of these emerging issues, present our future outlook, and suggest important avenues for future research. © 2009 Direct Marketing Educational Foundation, Inc. Published by Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "68f422172815df9fff6bf515bf7ea803",
"text": "Active learning (AL) promises to reduce the cost of annotating labeled datasets for trainable human language technologies. Contrary to expectations, when creating labeled training material for HPSG parse selection and latereusing it with other models, gains from AL may be negligible or even negative. This has serious implications for using AL, showing that additional cost-saving strategies may need to be adopted. We explore one such strategy: using a model during annotation to automate some of the decisions. Our best results show an 80% reduction in annotation cost compared with labeling randomly selected data with a single model.",
"title": ""
},
{
"docid": "69ab1b5f07c307397253f6619681a53f",
"text": "BACKGROUND\nIncreasing evidence demonstrates that motor-skill memories improve across a night of sleep, and that non-rapid eye movement (NREM) sleep commonly plays a role in orchestrating these consolidation enhancements. Here we show the benefit of a daytime nap on motor memory consolidation and its relationship not simply with global sleep-stage measures, but unique characteristics of sleep spindles at regionally specific locations; mapping to the corresponding memory representation.\n\n\nMETHODOLOGY/PRINCIPAL FINDINGS\nTwo groups of subjects trained on a motor-skill task using their left hand - a paradigm known to result in overnight plastic changes in the contralateral, right motor cortex. Both groups trained in the morning and were tested 8 hr later, with one group obtaining a 60-90 minute intervening midday nap, while the other group remained awake. At testing, subjects that did not nap showed no significant performance improvement, yet those that did nap expressed a highly significant consolidation enhancement. Within the nap group, the amount of offline improvement showed a significant correlation with the global measure of stage-2 NREM sleep. However, topographical sleep spindle analysis revealed more precise correlations. Specifically, when spindle activity at the central electrode of the non-learning hemisphere (left) was subtracted from that in the learning hemisphere (right), representing the homeostatic difference following learning, strong positive relationships with offline memory improvement emerged-correlations that were not evident for either hemisphere alone.\n\n\nCONCLUSIONS/SIGNIFICANCE\nThese results demonstrate that motor memories are dynamically facilitated across daytime naps, enhancements that are uniquely associated with electrophysiological events expressed at local, anatomically discrete locations of the brain.",
"title": ""
},
{
"docid": "5fc0622037e9b8e3802bc27d1ef44863",
"text": "The purpose of this study was to compare the effect of two different methods of organizing endurance training in trained cyclists. One group of cyclists performed block periodization, wherein the first week constituted five sessions of high-intensity aerobic training (HIT), followed by 3 weeks of one weekly HIT session and focus on low-intensity training (LIT) (BP; n = 10, VO2max = 62 ± 2 mL/kg/min). Another group of cyclists performed a more traditional organization, with 4 weeks of two weekly HIT sessions interspersed with LIT (TRAD; n = 9, VO2max = 63 ± 2 mL/kg/min). Similar volumes of both HIT and LIT was performed in the two groups. While BP increased VO2max , peak power output (Wmax) and power output at 2 mmol/L [la(-)] by 4.6 ± 3.7%, 2.1 ± 2.8%, and 10 ± 12%, respectively (P < 0.05), no changes occurred in TRAD. BP showed relative improvements in VO2max compared with TRAD (P < 0.05). Mean effect size (ES) of the relative improvement in VO2max , Wmax , and power output at 2 mmol/L [la(-)] revealed large to moderate effects of BP training compared with TRAD training (ES = 1.34, ES = 0.85, and ES = 0.71, respectively). The present study suggests that block periodization of training provides superior adaptations to traditional organization during a 4-week endurance training period, despite similar training volume and intensity.",
"title": ""
},
{
"docid": "5e0d5cf53369cc1065bdf0dedb74c557",
"text": "The automatic detection of diseases in images acquired through chest X-rays can be useful in clinical diagnosis because of a shortage of experienced doctors. Compared with natural images, those acquired through chest X-rays are obtained by using penetrating imaging technology, such that there are multiple levels of features in an image. It is thus difficult to extract the features of a disease for further diagnosis. In practice, healthy people are in a majority and the morbidities of different disease vary, because of which the obtained labels are imbalanced. The two main challenges of diagnosis though chest X-ray images are to extract discriminative features from X-ray images and handle the problem of imbalanced data distribution. In this paper, we propose a deep neural network called DeepCXray that simultaneously solves these two problems. An InceptionV3 model is trained to extract features from raw images, and a new objective function is designed to address the problem of imbalanced data distribution. The proposed objective function is a performance index based on cross entropy loss that automatically weights the ratio of positive to negative samples. In other words, the proposed loss function can automatically reduce the influence of an overwhelming number of negative samples by shrinking each cross entropy terms by a different extent. Extensive experiments highlight the promising performance of DeepCXray on the ChestXray14 dataset of the National Institutes of Health in terms of the area under the receiver operating characteristic curve.",
"title": ""
},
{
"docid": "d43017f76aa417595bcd9a764a6ed991",
"text": "Energy storage is traditionally well established in the form of large scale pumped-hydro systems, but nowadays is finding increased attraction in medium and smaller scale systems. Such expansion is entirely complementary to the forecasted wider integration of intermittent renewable resources in future electrical distribution systems (Smart Grids). This paper is intended to offer a useful tool for analyzing potential advantages of distributed energy storages in Smart Grids with reference to both different possible conceivable regulatory schemes and services to be provided. The Smart Grid Operator is assumed to have the ownership and operation of the energy storage systems, and a new cost-based optimization strategy for their optimal placement, sizing and control is proposed. The need to quantify benefits of both the Smart Grid where the energy storage devices are included and the external interconnected grid is explored. Numerical applications to a Medium Voltage test Smart Grid show the advantages of using storage systems related to different options in terms of incentives and services to be provided.",
"title": ""
},
{
"docid": "9c452434ad1c25d0fbe71138b6c39c4b",
"text": "Dual control frameworks for systems subject to uncertainties aim at simultaneously learning the unknown parameters while controlling the system dynamics. We propose a robust dual model predictive control algorithm for systems with bounded uncertainty with application to soft landing control. The algorithm exploits a robust control invariant set to guarantee constraint enforcement in spite of the uncertainty, and a constrained estimation algorithm to guarantee admissible parameter estimates. The impact of the control input on parameter learning is accounted for by including in the cost function a reference input, which is designed online to provide persistent excitation. The reference input design problem is non-convex, and here is solved by a sequence of relaxed convex problems. The results of the proposed method in a soft-landing control application in transportation systems are shown.",
"title": ""
},
{
"docid": "103ed71841db091f880cef60e24b3411",
"text": "An integrated equal-split Wilkinson power combiner/divider tailored for operation in the X-band is reported in this letter. The combiner features differential input/output ports with different characteristic impedances, thus embedding an impedance transformation feature. Over the frequency range from 8 to 14 GHz it shows insertion loss of 1.4dB, return loss greater than 12 dB and isolation greater than 10 dB. It is implemented in a SiGe bipolar technology, and it occupies an area of 0.12 mm2.",
"title": ""
},
{
"docid": "f779bf251b3d066e594867680e080ef4",
"text": "Machine Translation is area of research since six decades. It is gaining popularity since last decade due to better computational facilities available at personal computer systems. This paper presents different Machine Translation system where Sanskrit is involved as source, target or key support language. Researchers employ various techniques like Rule based, Corpus based, Direct for machine translation. The main aim to focus on Sanskrit in Machine Translation in this paper is to uncover the language suitability, its morphology and employ appropriate MT techniques.",
"title": ""
},
{
"docid": "f83d8a69a4078baf4048b207324e505f",
"text": "Low-dose computed tomography (LDCT) has attracted major attention in the medical imaging field, since CT-associated X-ray radiation carries health risks for patients. The reduction of the CT radiation dose, however, compromises the signal-to-noise ratio, which affects image quality and diagnostic performance. Recently, deep-learning-based algorithms have achieved promising results in LDCT denoising, especially convolutional neural network (CNN) and generative adversarial network (GAN) architectures. This paper introduces a conveying path-based convolutional encoder-decoder (CPCE) network in 2-D and 3-D configurations within the GAN framework for LDCT denoising. A novel feature of this approach is that an initial 3-D CPCE denoising model can be directly obtained by extending a trained 2-D CNN, which is then fine-tuned to incorporate 3-D spatial information from adjacent slices. Based on the transfer learning from 2-D to 3-D, the 3-D network converges faster and achieves a better denoising performance when compared with a training from scratch. By comparing the CPCE network with recently published work based on the simulated Mayo data set and the real MGH data set, we demonstrate that the 3-D CPCE denoising model has a better performance in that it suppresses image noise and preserves subtle structures.",
"title": ""
},
{
"docid": "e812bed02753b807d1e03a2e05e87cb8",
"text": "ion level. It is especially useful in the case of expert-based estimation, where it is easier for experts to embrace and estimate smaller pieces of project work. Moreover, the increased level of detail during estimation—for instance, by breaking down software products and processes—implies higher transparency of estimates. In practice, there is a good chance that the bottom estimates would be mixed below and above the actual effort. As a consequence, estimation errors at the bottom level will cancel each other out, resulting in smaller estimation error than if a top-down approach were used. This phenomenon is related to the mathematical law of large numbers. However, the more granular the individual estimates, the more time-consuming the overall estimation process becomes. In industrial practice, a top-down strategy usually provides reasonably accurate estimates at relatively low overhead and without too much technical expertise. Although bottom-up estimation usually provides more accurate estimates, it requires the estimators involved to have expertise regarding the bottom activities and related product components that they estimate directly. In principle, applying bottom-up estimation pays off when the decomposed tasks can be estimated more accurately than the whole task. For instance, a bottom-up strategy proved to provide better results when applied to high-uncertainty or complex estimation tasks, which are usually underestimated when considered as a whole. Furthermore, it is often easy to forget activities and/or underestimate the degree of unexpected events, which leads to underestimation of total effort. However, from the mathematical point of view (law of large numbers mentioned), dividing the project into smaller work packages provides better data for estimation and reduces overall estimation error. Experiences presented by Jørgensen (2004b) suggest that in the context of expert-based estimation, software companies should apply a bottom-up strategy unless the estimators have experience from, or access to, very similar projects. In the context of estimation based on human judgment, typical threats of individual and group estimation should be considered. Refer to Sect. 6.4 for an overview of the strengths and weaknesses of estimation based on human judgment.",
"title": ""
}
] |
scidocsrr
|
01f2765599e42489699f00ee9018dad1
|
ConFirm: Detecting firmware modifications in embedded systems using Hardware Performance Counters
|
[
{
"docid": "f9b6662dc19c47892bb7b95c5b7dc181",
"text": "The ability to update firmware is a feature that is found in nearly all modern embedded systems. We demonstrate how this feature can be exploited to allow attackers to inject malicious firmware modifications into vulnerable embedded devices. We discuss techniques for exploiting such vulnerable functionality and the implementation of a proof of concept printer malware capable of network reconnaissance, data exfiltration and propagation to general purpose computers and other embedded device types. We present a case study of the HP-RFU (Remote Firmware Update) LaserJet printer firmware modification vulnerability, which allows arbitrary injection of malware into the printer’s firmware via standard printed documents. We show vulnerable population data gathered by continuously tracking all publicly accessible printers discovered through an exhaustive scan of IPv4 space. To show that firmware update signing is not the panacea of embedded defense, we present an analysis of known vulnerabilities found in third-party libraries in 373 LaserJet firmware images. Prior research has shown that the design flaws and vulnerabilities presented in this paper are found in other modern embedded systems. Thus, the exploitation techniques presented in this paper can be generalized to compromise other embedded systems. Keywords-Embedded system exploitation; Firmware modification attack; Embedded system rootkit; HP-RFU vulnerability.",
"title": ""
},
{
"docid": "80e4ac10df91cbbc6bbce4d30ec5abcf",
"text": "Although many users are aware of the threats that malware pose, users are unaware that malware can infect peripheral devices. Many embedded devices support firmware update capabilities, yet they do not authenticate such updates; this allows adversaries to infect peripherals with malicious firmware. We present a case study of the Logitech G600 mouse, demonstrating attacks on networked systems which are also feasible against airgapped systems. If the target machine is air-gapped, we show that the Logitech G600 has enough space available to host an entire malware package inside its firmware. We also wrote a file transfer utility that transfers the malware from the mouse to the target machine. If the target is networked, the mouse can be used as a persistent threat that updates and reinstalls malware as desired. To mitigate these attacks, we implemented signature verification code which is essential to preventing malicious firmware from being installed on the mouse. We demonstrate that it is reasonable to include such signature verification code in the bootloader of the mouse.",
"title": ""
}
] |
[
{
"docid": "682254fdd4f79a1c04ce5ded334c4d99",
"text": "Measuring voice quality for telephony is not a new problem. However, packet-switched, best-effort networks such as the Internet present significant new challenges for the delivery of real-time voice traffic. Unlike the circuit-switched public switched telephone network (PSTN), Internet protocol (IP) networks guarantee neither sufficient bandwidth for the voice traffic nor a constant, acceptable delay. Dropped packets and varying delays introduce distortions not found in traditional telephony. In addition, if a low bitrate codec is used in voice over IP (VoIP) to achieve a high compression ratio, the original waveform can be significantly distorted. These new potential sources of signal distortion present significant challenges for objectively measuring speech quality. Measurement techniques designed for the PSTN may not perform well in VoIP environments. Our objective is to find a speech quality metric that accurately predicts subjective human perception under the conditions present in VoIP systems. To do this, we compared three types of measures: perceptually weighted distortion measures such as enhanced modified Bark spectral distance (EMBSD) and measuring normalizing blocks (MNB), word-error rates of continuous speech recognizers, and the ITU E-model. We tested the performance of these measures under conditions typical of a VoIP system. We found that the E-model had the highest correlation with mean opinion scores (MOS). The E-model is well-suited for online monitoring because it does not require the original (undistorted) signal to compute its quality metric and because it is computationally simple.",
"title": ""
},
{
"docid": "a1fda952f3be635444f3f27b0ec6a59c",
"text": "We propose an unsupervised model for novelty detection. The subject is treated as a density estimation problem, in which a deep neural network is employed to learn a parametric function that maximizes probabilities of training samples. This is achieved by equipping an autoencoder with a novel module, responsible for the maximization of compressed codes’ likelihood by means of autoregression. We illustrate design choices and proper layers to perform autoregressive density estimation when dealing with both image and video inputs. Despite a very general formulation, our model shows promising results in diverse one-class novelty detection and video anomaly detection benchmarks.",
"title": ""
},
{
"docid": "7c9c047055d123aff65c9c7a3db59dfc",
"text": "Organizations publish the individual’s information in order to utilize the data for the research purpose. But the confidential information about the individual is revealed by the adversary by combining the various releases of the several organizations. This is called as linkage attacks. This attack can be avoided by the SLOMS method which vertically partitions the single quasi table and multiple sensitive tables. The SLOMS method uses MSB-KACA algorithm to generalize the quasi identifier table in order to implement k-Anonymity and bucketizes the sensitive attribute table to implement l-diversity. But there is a chance of probabilistic inference attack due to bucketization. So, the method called t-closeness can be applied over MSB-KACA algorithm which compute the value using Earth Mover Distance(EMD) and set the minimum value as threshold in order to equally distribute the attributes in the table based on the threshold ’t’. Such that the probabilistic inference attack can be avoided. The performance of t-closeness gets improved and evaluated by Disclosure rate which becomes minimal while comparing with MSB-KACA algorithm.",
"title": ""
},
{
"docid": "3a0cac0050f40b9ce62bb0d4234ecf52",
"text": "The ephemeral nature of human communication via networks today poses interesting and challenging problems for information technologists. The Intelink intelligence network, for example, has a need to monitor chat-room conversations to ensure the integrity of sensitive data being transmitted via the network. However, the sheer volume of communication in venues such as email, newsgroups, and chat precludes manual techniques of information management. It has been estimated that over 430 million instant messages, for example, are exchanged each day on the America Online network [3]. Although a not insignificant fraction of such data may be temporarily archived (e.g., newsgroups), no systematic mechanisms exist for accumulating these artifacts of communication in a form that lends itself to the construction of models of semantics [12]. In essence, dynamic techniques of analysis are needed if textual data of this nature is to be effectively mined. This article reports our progress in developing a text mining tool for analysis of chat-room conversations. Central to our efforts is the development of functionality to answer questions such as \"What topics are being discussed in a chat-room?\", \"Who is discussing which topics?\" and \"Who is interacting with whom?\" The objective of our research is to develop technology that can automatically identify such patterns of interaction in both social and semantic terms. In this article we report our preliminary findings in identifying threads of conversation in multi-topic, multi-person chat-rooms. We have achieved promising results in terms of precision and recall by employing pattern recognition techniques based on finite state automata. We also report the design of our approach to building models of social and semantic interactions based on our HDDI text mining infrastructure [13].",
"title": ""
},
{
"docid": "d615916992e4b8a9b6f3040adace7b44",
"text": "The paper presents a new design of dual-mode dielectric-loaded rectangular cavity filters. The response of the filter is mainly controlled by the location and orientation of the coupling apertures with no intra-cavity coupling. Each dual-mode dielectric-loaded cavity generates and controls one transmission zero which can be placed on either side of the passband. Example filters which demonstrate the soundness of the design technique are presented.",
"title": ""
},
{
"docid": "f35d0784dc7ae4140754b3d0ab2b9c8c",
"text": "The future 5G wireless is triggered by the higher demand on wireless capacity. With Software Defined Network (SDN), the data layer can be separated from the control layer. The development of relevant studies about Network Function Virtualization (NFV) and cloud computing has the potential of offering a quicker and more reliable network access for growing data traffic. Under such circumstances, Software Defined Mobile Network (SDMN) is presented as a promising solution for meeting the wireless data demands. This paper provides a survey of SDMN and its related security problems. As SDMN integrates cloud computing, SDN, and NFV, and works on improving network functions, performance, flexibility, energy efficiency, and scalability, it is an important component of the next generation telecommunication networks. However, Yongfeng Qian yongfeng.hust@gmail.com Min Chen minchen@ieee.org Shiwen Mao smao@ieee.org Wan Tang tangwan@scuec.edu.cn Ximin Yang yangximin@scuec.edu.cn 1 Embedded and Pervasive Computing Lab, School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan 430074, China 2 Department of Electrical & Computer Engineering, Auburn University, 200 Broun Hall, Auburn, AL, 36849-5201, USA 3 College of Computer Science, South-Central University for Nationalities, Wuhan 430074, China the SDMN concept also raises new security concerns. We explore relevant security threats and their corresponding countermeasures with respect to the data layer, control layer, application layer, and communication protocols. We also adopt the STRIDE method to classify various security threats to better reveal them in the context of SDMN. This survey is concluded with a list of open security challenges in SDMN.",
"title": ""
},
{
"docid": "b1f3f0dac49d6613f381b30ebf5b0ad7",
"text": "In the current Web scenario a video browsing tool that produces on-the-fly storyboards is more and more a need. Video summary techniques can be helpful but, due to their long processing time, they are usually unsuitable for on-the-fly usage. Therefore, it is common to produce storyboards in advance, penalizing users customization. The lack of customization is more and more critical, as users have different demands and might access the Web with several different networking and device technologies. In this paper we propose STIMO, a summarization technique designed to produce on-the-fly video storyboards. STIMO produces still and moving storyboards and allows advanced users customization (e.g., users can select the storyboard length and the maximum time they are willing to wait to get the storyboard). STIMO is based on a fast clustering algorithm that selects the most representative video contents using HSV frame color distribution. Experimental results show that STIMO produces storyboards with good quality and in a time that makes on-the-fly usage possible.",
"title": ""
},
{
"docid": "25f67b19daa65a8c7ade4cabe1153c60",
"text": "This paper deals with feedback controller synthesis for Timed Event Graphs in dioids. We discuss here the existence and the computation of a controller which leads to a closed-loop system whose behavior is as close as possible to the one of a given reference model and which delays as much as possible the input of tokens inside the (controlled) system. The synthesis presented here is mainly based on residuation theory results and some Kleene star properties.",
"title": ""
},
{
"docid": "9d0a33f14d81cda52075348b44c29757",
"text": "We present PoseShop - a pipeline to construct segmented human image database with minimal manual intervention. By downloading, analyzing, and filtering massive amounts of human images from the Internet, we achieve a database which contains 400 thousands human figures that are segmented out of their background. The human figures are organized based on action semantic, clothes attributes, and indexed by the shape of their poses. They can be queried using either silhouette sketch or a skeleton to find a given pose. We demonstrate applications for this database for multiframe personalized content synthesis in the form of comic-strips, where the main character is the user or his/her friends. We address the two challenges of such synthesis, namely personalization and consistency over a set of frames, by introducing head swapping and clothes swapping techniques. We also demonstrate an action correlation analysis application to show the usefulness of the database for vision application.",
"title": ""
},
{
"docid": "c2aa1c74d0569a068b6e381f314aa1ff",
"text": "For the purpose of discovering security flaws in software, many dynamic and static taint analyzing techniques have been proposed. By analyzing information flow at runtime, dynamic taint analysis can precisely find security flaws of software. However, on one hand, it suffers from substantial runtime overhead and is incapable of discovering the potential threats. On the other hand, static taint analysis analyzes program’s code without actually executing it which incurs no runtime overhead, and can cover all the code, but it is often not accurate enough. In addition, since the source code of most software is hard to acquire and intruders simply do not attach target program’s source code in practice, software flaw tracking becomes rather complicated. In order to cope with these issues, this paper proposes HYBit, a novel hybrid framework which integrates dynamic and static taint analysis to diagnose the flaws or vulnerabilities for binary programs. In the framework, the source binary is first analyzed by the dynamic taint analyzer. Then, with the runtime information provided by its dynamic counterpart, the static taint analyzer can process the unexecuted part of the target program easily. Furthermore, a taint behavior filtration mechanism is proposed to optimize the performance of the framework. We evaluate our framework from three perspectives: efficiency, coverage, and effectiveness. The results are encouraging.",
"title": ""
},
{
"docid": "4fd19f75059fd8ec42cea3e70251d90f",
"text": "We report the case of C.L., an 8-year-old child who, following the surgical removal of an ependymoma from the left cerebral ventricle at the age of 4 years, developed significant difficulties in retaining day-to-day events and information. A thorough neuropsychological analysis documented in C.L. a severe anterograde amnesic syndrome, characterised by normal short-term memory, but poor performance on episodic long-term memory tests. In particular, C.L. demonstrated virtually no ability to recollect new verbal information several minutes after the presentation. As for semantic memory, C.L. demonstrated general semantic competencies, which, depending on the test, ranged from the level of a 6-year-old girl to a level corresponding to her actual chronological age. Finding a patient who, despite being severely impaired in the ability to recollect new episodic memories, still demonstrates at least partially preserved abilities to acquire new semantic knowledge suggests that neural circuits implicated in the memorisation of autobiographical events and factual information do not overlap completely. This case is examined in the light of growing literature concerned with the dissociation between episodic and semantic memory in childhood amnesia.",
"title": ""
},
{
"docid": "0d11c7f94973be05d906f94238d706e4",
"text": "Head-Mounted Displays (HMDs) combined with 3-or-more Degree-of-Freedom (DoF) input enable rapid manipulation of stereoscopic 3D content. However, such input is typically performed with hands in midair and therefore lacks precision and stability. Also, recent consumer-grade HMDs suffer from limited angular resolution and/or limited field-of-view as compared to a desktop monitor. We present the DualCAD system that implements two solutions to these problems. First, the user may freely switch at runtime between an augmented reality HMD mode, and a traditional desktop mode with precise 2D mouse input and an external desktop monitor. Second, while in the augmented reality HMD mode, the user holds a smartphone in their non-dominant hand that is tracked with 6 DoF, allowing it to be used as a complementary high-resolution display as well as an alternative input device for stylus or multitouch input. Two novel bimanual interaction techniques that leverage the properties of the smartphone are presented. We also report initial user feedback.",
"title": ""
},
{
"docid": "9cf59b5f67d07787da8eeae825066525",
"text": "Event correlation has become the cornerstone of many reactive applications, particularly in distributed systems. However, support for programming with complex events is still rather specific and rudimentary. This paper presents EventJava, an extension of Java with generic support for event-based distributed programming. EventJava seamlessly integrates events with methods, and broadcasting with unicasting of events; it supports reactions to combinations of events, and predicates guarding those reactions. EventJava is implemented as a framework to allow for customization of event semantics, matching, and dispatching. We present its implementation, based on a compiler transforming specific primitives to Java, along with a reference implementation of the framework. We discuss ordering properties of EventJava through a formalization of its core as an extension of Featherweight Java. In a performance evaluation, we show that EventJava compares favorably to a highly tuned database-backed event correlation engine as well as to a comparably lightweight concurrency mechanism.",
"title": ""
},
{
"docid": "51b766b0a7f1e3bc1f49d16df04a69f7",
"text": "This study reports the results of a biometrical genetical analysis of scores on a personality inventory (The Eysenck Personality Questionnaire, or EPQ), which purports to measure psychoticism, neuroticism, extraversion and dissimulation (Lie Scale). The subjects were 544 pairs of twins, from the Maudsley Twin Register. The purpose of the study was to test the applicability of various genotypeenvironmental models concerning the causation of P scores. Transformation of the raw scores is required to secure a scale on which the effects of genes and environment are additive. On such a scale 51% of the variation in P is due to environmental differences within families, but the greater part (77%) of this environmental variation is due to random effects which are unlikely to be controllable. . The genetical consequences ot'assortative mating were too slight to be detectable in this study, and the genetical variation is consistent with the hypothesis that gene effects are additive. This is a general finding for traits which have been subjected to stabilizing selection. Our model for P is consistent with these advanced elsewhere to explain the origin of certain kinds of psychopathology. The data provide little support for the view that the \"family environment\" (including the environmental influence of parents) plays a major part in the determination of individual differences in P, though we cite evidence suggesting that sibling competition effects are producing genotypeenvironmental covariation for the determinants of P in males. The genetical and environmental determinants of the covariation of P with other personality dimensions are considered. Assumptions are discussed and tested where possible.",
"title": ""
},
{
"docid": "09c27f3f680188637177e7f2913c1ef7",
"text": "The implementation of a monitoring and control system for the induction motor based on programmable logic controller (PLC) technology is described. Also, the implementation of the hardware and software for speed control and protection with the results obtained from tests on induction motor performance is provided. The PLC correlates the operational parameters to the speed requested by the user and monitors the system during normal operation and under trip conditions. Tests of the induction motor system driven by inverter and controlled by PLC prove a higher accuracy in speed regulation as compared to a conventional V/f control system. The efficiency of PLC control is increased at high speeds up to 95% of the synchronous speed. Thus, PLC proves themselves as a very versatile and effective tool in industrial control of electric drives.",
"title": ""
},
{
"docid": "83f970bc22a2ada558aaf8f6a7b5a387",
"text": "The imputeTS package specializes on univariate time series imputation. It offers multiple state-of-the-art imputation algorithm implementations along with plotting functions for time series missing data statistics. While imputation in general is a well-known problem and widely covered by R packages, finding packages able to fill missing values in univariate time series is more complicated. The reason for this lies in the fact, that most imputation algorithms rely on inter-attribute correlations, while univariate time series imputation instead needs to employ time dependencies. This paper provides an introduction to the imputeTS package and its provided algorithms and tools. Furthermore, it gives a short overview about univariate time series imputation in R. Introduction In almost every domain from industry (Billinton et al., 1996) to biology (Bar-Joseph et al., 2003), finance (Taylor, 2007) up to social science (Gottman, 1981) different time series data are measured. While the recorded datasets itself may be different, one common problem are missing values. Many analysis methods require missing values to be replaced with reasonable values up-front. In statistics this process of replacing missing values is called imputation. Time series imputation thereby is a special sub-field in the imputation research area. Most popular techniques like Multiple Imputation (Rubin, 1987), Expectation-Maximization (Dempster et al., 1977), Nearest Neighbor (Vacek and Ashikaga, 1980) and Hot Deck (Ford, 1983) rely on interattribute correlations to estimate values for the missing data. Since univariate time series do not possess more than one attribute, these algorithms cannot be applied directly. Effective univariate time series imputation algorithms instead need to employ the inter-time correlations. On CRAN there are several packages solving the problem of imputation of multivariate data. Most popular and mature (among others) are AMELIA (Honaker et al., 2011), mice (van Buuren and Groothuis-Oudshoorn, 2011), VIM (Kowarik and Templ, 2016) and missMDA (Josse and Husson, 2016). However, since these packages are designed for multivariate data imputation only they do not work for univariate time series. At the moment imputeTS (Moritz, 2016a) is the only package on CRAN that is solely dedicated to univariate time series imputation and includes multiple algorithms. Nevertheless, there are some other packages that include imputation functions as addition to their core package functionality. Most noteworthy being zoo (Zeileis and Grothendieck, 2005) and forecast (Hyndman, 2016). Both packages offer also some advanced time series imputation functions. The packages spacetime (Pebesma, 2012), timeSeries (Rmetrics Core Team et al., 2015) and xts (Ryan and Ulrich, 2014) should also be mentioned, since they contain some very simple but quick time series imputation methods. For a broader overview about available time series imputation packages in R see also (Moritz et al., 2015). In this technical report we evaluate the performance of several univariate imputation functions in R on different time series. This paper is structured as follows: Section Overview imputeTS package gives an overview, about all features and functions included in the imputeTS package. This is followed by Usage examples of the different provided functions. The paper ends with a Conclusions section. 
Overview imputeTS package The imputeTS package can be found on CRAN and is an easy to use package that offers several utilities for ’univariate, equi-spaced, numeric time series’. Univariate means there is just one attribute that is observed over time. This leads to a sequence of single observations o1, o2, o3, ... on at successive points t1, t2, t3, ... tn in time. Equi-spaced means that time increments between successive data points are equal |t1 − t2| = |t2 − t3| = ... = |tn−1 − tn|. Numeric means that the observations are measurable quantities that can be described as a number. In the first part of this section, a general overview about all available functions and datasets is given. This is followed by more detailed overviews about the three areas covered by the package: ’Plots & Statistics’, ’Imputation’ and ’Datasets’. General overview As can be seen in Table 1, beyond several imputation algorithm implementations the package also includes plotting functions and datasets. The imputation algorithms can be divided into rather simple but fast approaches like mean imputation and more advanced algorithms that need more computation time like kalman smoothing on a structural model. Simple Imputation Imputation Plots & Statistics Datasets na.locf na.interpolation plotNA.distribution tsAirgap na.mean na.kalman plotNA.distributionBar tsAirgapComplete na.random na.ma plotNA.gapsize tsHeating na.replace na.seadec plotNA.imputations tsHeatingComplete na.remove na.seasplit statsNA tsNH4 tsNH4Complete Table 1: General Overview imputeTS package As a whole, the package aims to support the user in the complete process of replacing missing values in time series. This process starts with analyzing the distribution of the missing values using the statsNA function and the plots of plotNA.distribution, plotNA.distributionBar, plotNA.gapsize. In the next step the actual imputation can take place with one of the several algorithm options. Finally, the imputation results can be visualized with the plotNA.imputations function. Additionally, the package contains three datasets, each in a version with and without missing values, that can be used to test imputation algorithms. Plots & Statistics functions An overview about the available plots and statistics functions can be found in Table 2. To get a good impression what the plots look like, the section Usage examples is recommended. Function Description plotNA.distribution Visualize Distribution of Missing Values plotNA.distributionBar Visualize Distribution of Missing Values (Barplot) plotNA.gapsize Visualize Distribution of NA gap sizes plotNA.imputations Visualize Imputed Values statsNA Print Statistics about the Missing Data Table 2: Overview Plots & Statistics The statsNA function calculates several missing data statistics of the input data. This includes overall percentage of missing values, absolute amount of missing values, amount of missing values in different sections of the data, longest series of consecutive NAs and occurrence of consecutive NAs. The plotNA.distribution function visualizes the distribution of NAs in a time series. This is done using a standard time series plot, in which areas with missing data are colored red. This enables the user to see at first sight where in the series most of the missing values are located. 
The plotNA.distributionBar function provides the same insights to users, but is designed for very large time series. This is necessary for time series with 1000 and more observations, where it is not possible to plot each observation as a single point. The plotNA.gapsize function provides information about consecutive NAs by showing the most common NA gap sizes in the time series. The plotNA.imputations function is designated for visual inspection of the results after applying an imputation algorithm. Therefore, newly imputed observations are shown in a different color than the rest of the series. Imputation functions An overview about all available imputation algorithms can be found in Table 3. Even if these functions are really easily applicable, some examples can be found later in section Usage examples. More detailed information about the theoretical background of the algorithms can be found in the imputeTS manual (Moritz, 2016b). Function Option Description na.interpolation linear Imputation by Linear Interpolation spline Imputation by Spline Interpolation stine Imputation by Stineman Interpolation na.kalman StructTS Imputation by Structural Model & Kalman Smoothing auto.arima Imputation by ARIMA State Space Representation & Kalman Sm. na.locf locf Imputation by Last Observation Carried Forward nocb Imputation by Next Observation Carried Backward na.ma simple Missing Value Imputation by Simple Moving Average linear Missing Value Imputation by Linear Weighted Moving Average exponential Missing Value Imputation by Exponential Weighted Moving Average na.mean mean Missing Value Imputation by Mean Value median Missing Value Imputation by Median Value mode Missing Value Imputation by Mode Value na.random Missing Value Imputation by Random Sample na.replace Replace Missing Values by a Defined Value na.seadec Seasonally Decomposed Missing Value Imputation na.seasplit Seasonally Splitted Missing Value Imputation na.remove Remove Missing Values Table 3: Overview Imputation Algorithms For convenience similar algorithms are available under one function name as parameter option. For example linear, spline and stineman interpolation are all included in the na.interpolation function. The na.mean, na.locf, na.replace, na.random functions are all simple and fast. In comparison, na.interpolation, na.kalman, na.ma, na.seasplit, na.seadec are more advanced algorithms that need more computation time. The na.remove function is a special case, since it only deletes all missing values. Thus, it is not really an imputation function. It should be handled with care since removing observations may corrupt the time information of the series. The na.seadec and na.seasplit functions are as well exceptions. These perform seasonal split / decomposition operations as a preprocessing step. For the imputation itself, one out of the other imputation algorithms can be used (which one can be set as option). Looking at all available imputation methods, no single overall best method can b",
"title": ""
},
{
"docid": "c2f340d9ac07783680b5dc96d1e26ae9",
"text": "Transportation plays a significant role in carbon dioxide (CO2) emissions, accounting for approximately a third of the United States’ inventory. In order to reduce CO2 emissions in the future, transportation policy makers are looking to make vehicles more efficient and increasing the use of carbon-neutral alternative fuels. In addition, CO2 emissions can be lowered by improving traffic operations, specifically through the reduction of traffic congestion. This paper examines traffic congestion and its impact on CO2 emissions using detailed energy and emission models and linking them to real-world driving patterns and traffic conditions. Using a typical traffic condition in Southern California as example, it has been found that CO2 emissions can be reduced by up to almost 20% through three different strategies: 1) congestion mitigation strategies that reduce severe congestion, allowing traffic to flow at better speeds; 2) speed management techniques that reduce excessively high free-flow speeds to more moderate conditions; and 3) shock wave suppression techniques that eliminate the acceleration/deceleration events associated with stop-and-go traffic that exists during congested conditions. Barth/Boriboonsomsin 3",
"title": ""
},
{
"docid": "15b0b080f27059cca6b137e71144712e",
"text": "The current study explored the elaborative retrieval hypothesis as an explanation for the testing effect: the tendency for a memory test to enhance retention more than restudying. In particular, the retrieval process during testing may activate elaborative information related to the target response, thereby increasing the chances that activation of any of this information will facilitate later retrieval of the target. In a test of this view, participants learned cue-target pairs, which were strongly associated (e.g., Toast: Bread) or weakly associated (e.g., Basket: Bread), through either a cued recall test (Toast: _____) or a restudy opportunity (Toast: Bread). A final test requiring free recall of the targets revealed that tested items were retained better than restudied items, and although strong cues facilitated recall of tested items initially, items recalled from weak cues were retained better over time, such that this advantage was eliminated or reversed at the time of the final test. Restudied items were retained at similar rates on the final test regardless of the strength of the cue-target relationship. These results indicate that the activation of elaborative information-which would occur to a greater extent during testing than restudying--may be one mechanism that underlies the testing effect.",
"title": ""
},
{
"docid": "273abcab379d49680db121022fba3e8f",
"text": "Current emotion recognition computational techniques have been successful on associating the emotional changes with the EEG signals, and so they can be identified and classified from EEG signals if appropriate stimuli are applied. However, automatic recognition is usually restricted to a small number of emotions classes mainly due to signal’s features and noise, EEG constraints and subject-dependent issues. In order to address these issues, in this paper a novel feature-based emotion recognition model is proposed for EEGbased Brain–Computer Interfaces. Unlike other approaches, our method explores a wider set of emotion types and incorporates additional features which are relevant for signal pre-processing and recognition classification tasks, based on a dimensional model of emotions: Valence and Arousal. It aims to improve the accuracy of the emotion classification task by combining mutual information based feature selection methods and kernel classifiers. Experiments using our approach for emotion classification which combines efficient feature selection methods and efficient kernel-based classifiers on standard EEG datasets show the promise of the approach when compared with state-of-the-art computational methods. © 2015 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
7f1c18cb5e5c74734dea4e223e85bd62
|
Detection and molecular characterization of naturally transmitted sheep associated malignant catarrhal fever in cattle in India
|
[
{
"docid": "7fe1cea4990acabf7bc3c199d3c071ce",
"text": "Comparative analysis of molecular sequence data is essential for reconstructing the evolutionary histories of species and inferring the nature and extent of selective forces shaping the evolution of genes and species. Here, we announce the release of Molecular Evolutionary Genetics Analysis version 5 (MEGA5), which is a user-friendly software for mining online databases, building sequence alignments and phylogenetic trees, and using methods of evolutionary bioinformatics in basic biology, biomedicine, and evolution. The newest addition in MEGA5 is a collection of maximum likelihood (ML) analyses for inferring evolutionary trees, selecting best-fit substitution models (nucleotide or amino acid), inferring ancestral states and sequences (along with probabilities), and estimating evolutionary rates site-by-site. In computer simulation analyses, ML tree inference algorithms in MEGA5 compared favorably with other software packages in terms of computational efficiency and the accuracy of the estimates of phylogenetic trees, substitution parameters, and rate variation among sites. The MEGA user interface has now been enhanced to be activity driven to make it easier for the use of both beginners and experienced scientists. This version of MEGA is intended for the Windows platform, and it has been configured for effective use on Mac OS X and Linux desktops. It is available free of charge from http://www.megasoftware.net.",
"title": ""
}
] |
[
{
"docid": "8d3c4598b7d6be5894a1098bea3ed81a",
"text": "Retrieval enhances long-term retention. However, reactivation of a memory also renders it susceptible to modifications as shown by studies on memory reconsolidation. The present study explored whether retrieval diminishes or enhances subsequent retroactive interference (RI) and intrusions. Participants learned a list of objects. Two days later, they were either asked to recall the objects, given a subtle reminder, or were not reminded of the first learning session. Then, participants learned a second list of objects or performed a distractor task. After another two days, retention of List 1 was tested. Although retrieval enhanced List 1 memory, learning a second list impaired memory in all conditions. This shows that testing did not protect memory from RI. While a subtle reminder before List 2 learning caused List 2 items to later intrude into List 1 recall, very few such intrusions were observed in the testing and the no reminder conditions. The findings are discussed in reference to the reconsolidation account and the testing effect literature, and implications for educational practice are outlined. © 2015 Elsevier Inc. All rights reserved. Retrieval practice or testing is one of the most powerful memory enhancers. Testing that follows shortly after learning benefits long-term retention more than studying the to-be-remembered material again (Roediger & Karpicke, 2006a, 2006b). This effect has been shown using a variety of materials and paradigms, such as text passages (e.g., Roediger & Karpicke, 2006a), paired associates (Allen, Mahler, & Estes, 1969), general knowledge questions (McDaniel & Fisher, 1991), and word and picture lists (e.g., McDaniel & Masson, 1985; Wheeler & Roediger, 1992; Wheeler, Ewers, & Buonanno, 2003). Testing effects have been observed in traditional lab as well as educational settings (Grimaldi & Karpicke, 2015; Larsen, Butler, & Roediger, 2008; McDaniel, Anderson, Derbish, & Morrisette, 2007). Testing not only improves long-term retention, it also enhances subsequent encoding (Pastötter, Schicker, Niedernhuber, & Bäuml, 2011), protects memories from the buildup of proactive interference (PI; Nunes & Weinstein, 2012; Wahlheim, 2014), and reduces the probability that the tested items intrude into subsequently studied lists (Szpunar, McDermott, & Roediger, 2008; Weinstein, McDermott, & Szpunar, 2011). The reduced PI and intrusion rates are assumed to reflect enhanced list discriminability or improved within-list organization. Enhanced list discriminability in turn helps participants distinguish different sets or sources of information and allows them to circumscribe the search set during retrieval to the relevant list (e.g., Congleton & Rajaram, 2012; Halamish & Bjork, 2011; Szpunar et al., 2008). ∗ Correspondence to: Department of Psychology, Lehigh University, 17 Memorial Drive East, Bethlehem, PA 18015, USA. E-mail address: hupbach@lehigh.edu http://dx.doi.org/10.1016/j.lmot.2015.01.004 0023-9690/© 2015 Elsevier Inc. All rights reserved. 24 A. Hupbach / Learning and Motivation 49 (2015) 23–30 If testing increases list discriminability, then it should also protect the tested list(s) from RI and intrusions from material that is encoded after retrieval practice. 
However, testing also necessarily reactivates a memory, and according to the reconsolidation account reactivation re-introduces plasticity into the memory trace, making it especially vulnerable to modifications (e.g., Dudai, 2004; Nader, Schafe, & LeDoux, 2000; for a recent review, see e.g., Hupbach, Gomez, & Nadel, 2013). Increased vulnerability to modification would suggest increased rather than reduced RI and intrusions. The few studies addressing this issue have yielded mixed results, with some suggesting that retrieval practice diminishes RI (Halamish & Bjork, 2011; Potts & Shanks, 2012), and others showing that retrieval practice can exacerbate the potential negative effects of post-retrieval learning (e.g., Chan & LaPaglia, 2013; Chan, Thomas, & Bulevich, 2009; Walker, Brakefield, Hobson, & Stickgold, 2003). Chan and colleagues (Chan & Langley, 2011; Chan et al., 2009; Thomas, Bulevich, & Chan, 2010) assessed the effects of testing on suggestibility in a misinformation paradigm. After watching a television episode, participants answered cuedrecall questions about it (retrieval practice) or performed an unrelated distractor task. Then, all participants read a narrative, which summarized the video but also contained some misleading information. A final cued-recall test revealed that participants in the retrieval practice condition recalled more misleading details and fewer correct details than participants in the distractor condition; that is, retrieval increased the misinformation effect (retrieval-enhanced suggestibility, RES). Chan et al. (2009) discuss two mechanisms that can explain this finding. First, since testing can potentiate subsequent new learning (e.g., Izawa, 1967; Tulving & Watkins, 1974), initial testing might have improved encoding of the misinformation. Indeed, when a modified final test was used, which encouraged the recall of both the correct information and the misinformation, participants in the retrieval practice condition recalled more misinformation than participants in the distractor condition (Chan et al., 2009). Second, retrieval might have rendered the memory more susceptible to interference by misinformation, an explanation that is in line with the reconsolidation account. Indeed, Chan and LaPaglia (2013) found reduced recognition of the correct information when retrieval preceded the presentation of misinformation (cf. Walker et al., 2003 for a similar effect in procedural memory). In contrast to Chan and colleagues’ findings, a study by Potts and Shanks (2012) suggests that testing protects memories from the negative influences of post-retrieval encoding of related material. Potts and Shanks asked participants to learn English–Swahili word pairs (List 1, A–B). One day later, one group of participants took a cued recall test of List 1 (testing condition) immediately before learning English–Finnish word pairs with the same English cues as were used in List 1 (List 2, A–C). Additionally, several control groups were implemented: one group was tested on List 1 without learning a second list, one group learned List 2 without prior retrieval practice, and one group did not participate in this session at all. On the third day, all participants took a final cued-recall test of List 1. Although retrieval practice per se did not enhance List 1 memory (i.e., no testing effect in the groups that did not learn List 2), it protected memory from RI (see Halamish & Bjork, 2011 for a similar result in a one-session study). 
Crucial for assessing the reconsolidation account is the comparison between the groups that learned List 2 either after List 1 recall or without prior List 1 recall. Contrary to the predictions derived from the reconsolidation account, final List 1 recall was enhanced when retrieval of List 1 preceded learning of List 2.1 While this clearly shows that testing counteracts RI, it would be premature to conclude that testing prevented the disruption of memory reconsolidation, because (a) retrieval practice without List 2 learning led to minimal forgetting between Day 2 and 3, while retrieval practice followed by List 2 learning led to significant memory decline, and (b) a reactivation condition that is independent from retrieval practice is missing. One could argue that repeating the cue words in List 2 likely reactivated memory for the original associations. It has been shown that the strength of reactivation (Detre, Natarajan, Gershman, & Norman, 2013) and the specific reminder structure (Forcato, Argibay, Pedreira, & Maldonado, 2009) determine whether or not a memory will be affected by post-reactivation procedures. The current study re-evaluates the question of how testing affects RI and intrusions. It uses a reconsolidation paradigm (Hupbach, Gomez, Hardt, & Nadel, 2007; Hupbach, Hardt, Gomez, & Nadel, 2008; Hupbach, Gomez, & Nadel, 2009; Hupbach, Gomez, & Nadel, 2011) to assess how testing in comparison to other reactivation procedures affects declarative memory. This paradigm will allow for a direct evaluation of the hypotheses that testing makes declarative memories vulnerable to interference, or that testing protects memories from the potential negative effects of subsequently learned material, as suggested by the list-separation hypothesis (e.g., Congleton & Rajaram, 2012; Halamish & Bjork, 2011; Szpunar et al., 2008). This question has important practical implications. For instance, when students test their memory while preparing for an exam, will such testing increase or reduce interference and intrusions from information that is learned afterwards?",
"title": ""
},
{
"docid": "4ad535f3b4f1afba4497a4026236424e",
"text": "We study the problem of noninvasively estimating Blood Pressure (BP) without using a cuff, which is attractive for continuous monitoring of BP over Body Area Networks. It has been shown that the Pulse Arrival Time (PAT) measured as the delay between the ECG peak and a point in the finger PPG waveform can be used to estimate systolic and diastolic BP. Our aim is to evaluate the performance of such a method using the available MIMIC database, while at the same time improve the performance of existing techniques. We propose an algorithm to estimate BP from a combination of PAT and heart rate, showing improvement over PAT alone. We also show how the method achieves recalibration using an RLS adaptive algorithm. Finally, we address the use case of ECG and PPG sensors wirelessly communicating to an aggregator and study the effect of skew and jitter on BP estimation.",
"title": ""
},
{
"docid": "4effadcb9af57e2df43ef830453e1c2e",
"text": "It is well-known that probiotics have a number of beneficial health effects in humans and animals, including the reduction of symptoms in lactose intolerance and enhancement of the bioavailability of nutrients. Probiotics have showed to possess antimutagenic, anticarcinogenic and hypocholesterolemic properties. Further, they were also observed to have antagonistic actions against intestinal and food-borne pathogens, to decrease the prevalence of allergies in susceptible individuals and to have immunomodulatory effects. Typically, the bacteria colonise the intestinal tract first and then reinforce the host defence systems by inducing a generalised mucosal immune response, balanced T-helper cell response, self-limited inflammatory response and secretion of polymeric IgA. Scientific reports showed that the Taiwan native lactic acid bacterium from newborn infant faeces identified as Lactobacillus paracasei subsp. paracasei NTU 101 and its fermented products proved to be effective for the management of blood cholesterol and pressure, prevention of gastric mucosal lesion development, immunomodulation and alleviation of allergies, anti-osteoporosis and inhibition the fat tissue accumulation. This review article describes that the beneficial effects of this Lactobacillus strains and derivative products may be suitable for human and animals.",
"title": ""
},
{
"docid": "2b8aa68835bc61f3d0b5da39441185c9",
"text": "This position paper explores the threat to individual privacy due to the widespread use of consumer drones. Present day consumer drones are equipped with sensors such as cameras and microphones, and their types and numbers can be well expected to increase in future. Drone operators have absolute control on where the drones fly and what the on-board sensors record with no options for bystanders to protect their privacy. This position paper proposes a policy language that allows homeowners, businesses, governments, and privacy-conscious individuals to specify location access-control for drones, and discusses how these policy-based controls might be realized in practice. This position paper also explores the potential future problem of managing consumer drone traffic that is likely to emerge with increasing use of consumer drones for various tasks. It proposes a privacy preserving traffic management protocol for directing drones towards their respective destinations without requiring drones to reveal their destinations.",
"title": ""
},
{
"docid": "aa29947afd004987f39166b0ad52823b",
"text": "The present study examined the temporal relationship between posttraumatic stress disorder (PTSD) and social support among 128 male veterans treated for chronic PTSD. Level of perceived interpersonal support and stressors were assessed at two time points (6 months apart) for four different potential sources of support: spouse, relatives, nonveteran friends, and veteran peers. Veteran peers provided relatively high perceived support and little interpersonal stress. Spouses were seen as both interpersonal resources and sources of interpersonal stress. More severe PTSD symptoms at Time 1 predicted greater erosion in perceived support from nonveteran friends, but not from relatives. Contrary to expectations, initial levels of perceived support and stressors did not predict the course of chronic PTSD symptoms.",
"title": ""
},
{
"docid": "2dc69fff31223cd46a0fed60264b2de1",
"text": "The authors offer a framework for conceptualizing collective identity that aims to clarify and make distinctions among dimensions of identification that have not always been clearly articulated. Elements of collective identification included in this framework are self-categorization, evaluation, importance, attachment and sense of interdependence, social embeddedness, behavioral involvement, and content and meaning. For each element, the authors take note of different labels that have been used to identify what appear to be conceptually equivalent constructs, provide examples of studies that illustrate the concept, and suggest measurement approaches. Further, they discuss the potential links between elements and outcomes and how context moderates these relationships. The authors illustrate the utility of the multidimensional organizing framework by analyzing the different configuration of elements in 4 major theories of identification.",
"title": ""
},
{
"docid": "98c9adda989991cc2d2ddbe27988a2cd",
"text": "Multi-user, touch-sensing input devices create opportunities for the use of cooperative gestures -- multi-user gestural interactions for single display groupware. Cooperative gestures are interactions where the system interprets the gestures of more than one user as contributing to a single, combined command. Cooperative gestures can be used to enhance users' sense of teamwork, increase awareness of important system events, facilitate reachability and access control on large, shared displays, or add a unique touch to an entertainment-oriented activity. This paper discusses motivating scenarios for the use of cooperative gesturing and describes some initial experiences with CollabDraw, a system for collaborative art and photo manipulation. We identify design issues relevant to cooperative gesturing interfaces, and present a preliminary design framework. We conclude by identifying directions for future research on cooperative gesturing interaction techniques.",
"title": ""
},
{
"docid": "e324d34ba582466ddf21457e28981644",
"text": "Writing was invented too recently to have influenced the human genome. Consequently, reading acquisition must rely on partial recycling of pre-existing brain systems. Prior fMRI evidence showed that in literates a left-hemispheric visual region increases its activation to written strings relative to illiterates and reduces its response to faces. Increasing literacy also leads to a stronger right-hemispheric lateralization for faces. Here, we evaluated whether this reorganization of the brain's face system has behavioral consequences for the processing of non-linguistic visual stimuli. Three groups of adult illiterates, ex-illiterates and literates were tested with the sequential composite face paradigm that evaluates the automaticity with which faces are processed as wholes. Illiterates were consistently more holistic than participants with reading experience in dealing with faces. A second experiment replicated this effect with both faces and houses. Brain reorganization induced by literacy seems to reduce the influence of automatic holistic processing of faces and houses by enabling the use of a more analytic and flexible processing strategy, at least when holistic processing is detrimental to the task.",
"title": ""
},
{
"docid": "391ee7fbe7c5a83c8dada4062b8c432d",
"text": "A crystal oscillator is proposed which can exhibit a frequency versus temperature stability comparable to that of the best atomic frequency standards.<<ETX>>",
"title": ""
},
{
"docid": "8b09387799c37a0131e6ba08715ed187",
"text": "Simulation optimization tools have the potential to provide an unprecedented level of support for the design and execution of operational control in Discrete Event Logistics Systems (DELS). While much of the simulation optimization literature has focused on developing and exploiting integration and syntactical interoperability between simulation and optimization tools, maximizing the effectiveness of these tools to support the design and execution of control behavior requires an even greater degree of interoperability than the current state of the art. In this paper, we propose a modeling methodology for operational control decision-making that can improve the interoperability between these two analysis methods and their associated tools in the context of DELS control. This methodology establishes a standard definition of operational control for both simulation and optimization methods and defines a mapping between decision variables (optimization) and execution mechanisms (simulation / base system). The goal is a standard for creating conforming simulation and optimization tools that are capable of meeting the functional needs of operational control decision making in DELS.",
"title": ""
},
{
"docid": "d7e6b07fee74d6efd97733ac0b22f92c",
"text": "Low level optimisations from conventional compiler technology often give very poor results when applied to code from lazy functional languages, mainly because of the completely diierent structure of the code, unknown control ow, etc. A novel approach to compiling laziness is needed. We describe a complete back end for lazy functional languages, which uses various interprocedural optimisations to produce highly optimised code. The main features of our new back end are the following. It uses a monadic intermediate code, called GRIN (Graph Reduction Intermediate Notation). This code has a very functional avourr, making it well suited for analysis and program transformations, but at the same time provides the low levell machinery needed to express many concrete implementation concerns. Using a heap points-to analysis, we are able to eliminate most unknown control ow due to evals (i.e., forcing of closures) and applications of higher order functions, in the program. A transformation machinery uses many, each very simple, GRIN program transformations to optimise the intermediate code. Eventually, the GRIN code is translated into RISC machine code, and we apply an interpro-cedural register allocation algorithm, followed by many other low level optimisations. The elimination of unknown control ow, made earlier, will help a lot in making the low level optimisations work well. Preliminary measurements look very promising: we are currently twice as fast as the Glasgow Haskell Compiler for some small programs. Our approach still gives us many opportunities for further optimisations (though yet unexplored).",
"title": ""
},
{
"docid": "0da5045988b5064544870e1ff0f7ba44",
"text": "Recently, a novel learning algorithm for single-hidden-layer feedforward neural networks (SLFNs) named extreme learning machine (ELM) was proposed by Huang et al. The essence of ELM is that the learning parameters of hidden nodes, including input weights and biases, are randomly assigned and need not be tuned while the output weights can be analytically determined by the simple generalized inverse operation. The only parameter needed to be defined is the number of hidden nodes. Compared with other traditional learning algorithms for SLFNs, ELM provides extremely faster learning speed, better generalization performance and with least human intervention. This paper firstly introduces a brief review of ELM, describing the principle and algorithm of ELM. Then, we put emphasis on the improved methods or the typical variants of ELM, especially on incremental ELM, pruning ELM, error-minimized ELM, two-stage ELM, online sequential ELM, evolutionary ELM, voting-based ELM, ordinal ELM, fully complex ELM, and symmetric ELM. Next, the paper summarized the applications of ELM on classification, regression, function approximation, pattern recognition, forecasting and diagnosis, and so on. In the last, the paper discussed several open issues of ELM, which may be worthy of exploring in the future.",
"title": ""
},
{
"docid": "f31138cf18018ef2df82cc121f6f2721",
"text": "In this paper, we present EPCBC, a lightweight cipher that has 96-bit key size and 48-bit/96-bit block size. This is suitable for Electronic Product Code (EPC) encryption, which uses low-cost passive RFID-tags and exactly 96 bits as a unique identifier on the item level. EPCBC is based on a generalized PRESENT with block size 48 and 96 bits for the main cipher structure and customized key schedule design which provides strong protection against related-key differential attacks, a recent class of powerful attacks on AES. Related-key attacks are especially relevant when a block cipher is used as a hash function. In the course of proving the security of EPCBC, we could leverage on the extensive security analyses of PRESENT, but we also obtain new results on the differential and linear cryptanalysis bounds for the generalized PRESENT when the block size is less than 64 bits, and much tighter bounds otherwise. Further, we analyze the resistance of EPCBC against integral cryptanalysis, statistical saturation attack, slide attack, algebraic attack and the latest higher-order differential cryptanalysis from FSE 2011 [11]. Our proposed cipher would be the most efficient at EPC encryption, since for other ciphers such as AES and PRESENT, it is necessary to encrypt 128-bit blocks (which results in a 33% overhead being incurred). The efficiency of our proposal therefore leads to huge market implications. Another contribution is an optimized implementation of PRESENT that is smaller and faster than previously published results.",
"title": ""
},
{
"docid": "5a077d1d4d6c212b7f817cc115bf31bd",
"text": "Focus group interviews are widely used in health research to explore phenomena and are accepted as a legitimate qualitative methodology. They are used to draw out interaction data from discussions among participants; researchers running these groups need to be skilled in interviewing and in managing groups, group dynamics and group discussions. This article follows Doody et al's (2013) article on the theory of focus group research; it addresses the preparation for focus groups relating to the research environment, interview process, duration, participation of group members and the role of the moderator. The article aims to assist researchers to prepare and plan for focus groups and to develop an understanding of them, so information from the groups can be used for academic studies or as part of a research proposal.",
"title": ""
},
{
"docid": "b9c40aa4c8ac9d4b6cbfb2411c542998",
"text": "This review will summarize molecular and genetic analyses aimed at identifying the mechanisms underlying the sequence of events during plant zygotic embryogenesis. These events are being studied in parallel with the histological and morphological analyses of somatic embryogenesis. The strength and limitations of somatic embryogenesis as a model system will be discussed briefly. The formation of the zygotic embryo has been described in some detail, but the molecular mechanisms controlling the differentiation of the various cell types are not understood. In recent years plant molecular and genetic studies have led to the identification and characterization of genes controlling the establishment of polarity, tissue differentiation and elaboration of patterns during embryo development. An investigation of the developmental basis of a number of mutant phenotypes has enabled the identification of gene activities promoting (1) asymmetric cell division and polarization leading to heterogeneous partitioning of the cytoplasmic determinants necessary for the initiation of embryogenesis (e.g. GNOM), (2) the determination of the apical-basal organization which is established independently of the differentiation of the tissues of the radial pattern elements (e.g. KNOLLE, FACKEL, ZWILLE), (3) the differentiation of meristems (e.g. SHOOT-MERISTEMLESS), and (4) the formation of a mature embryo characterized by the accumulation of LEA and storage proteins. The accumulation of these two types of proteins is controlled by ABA-dependent regulatory mechanisms as shown using both ABA-deficient and ABA-insensitive mutants (e.g. ABA, ABI3). Both types of embryogenesis have been studied by different techniques and common features have been identified between them. In spite of the relative difficulty of identifying the original cells involved in the developmental processes of somatic embryogenesis, common regulatory mechanisms are probably involved in the first stages up to the globular form. Signal molecules, such as growth regulators, have been shown to play a role during development of both types of embryos. The most promising method for identifying regulatory mechanisms responsible for the key events of embryogenesis will come from molecular and genetic analyses. The mutations already identified will shed light on the nature of the genes that affect developmental processes as well as elucidating the role of the various regulatory genes that control plant embryogenesis.",
"title": ""
},
{
"docid": "6c8cca245879bac1d024b665ab62ec92",
"text": "This study investigates the feasibility of a novel concept, contact-free support structures, for part overhangs in powder-bed metal additive manufacturing. The intent is to develop alternative support designs that require no or little post-processing, and yet, maintain effective in minimizing overhang distortions. The idea is to build, simultaneously during part fabrications, a heat sink (called “heat support”), underneath an overhang to alter adverse thermal behaviors. Thermomechanical modeling and simulations using finite element analysis were applied to numerically research the heat support effect on overhang distortions. Experimentally, a powderbed electron beam additive manufacturing system was utilized to fabricate heat support designs and examine their functions. The results prove the concept and demonstrate the effectiveness of contact-free heat supports. Moreover, the method has been tested with different heat support parameters and applied to various overhang geometries. It is concluded that the heat support proposed has potential to be implemented in industrial applications.",
"title": ""
},
{
"docid": "d4dc33b15df0a27259180fef3c28b546",
"text": "Author name ambiguity is one of the problems that decrease the quality and reliability of information retrieved from digital libraries. Existing methods have tried to solve this problem by predefining a feature set based on expert’s knowledge for a specific dataset. In this paper, we propose a new approach which uses deep neural network to learn features automatically for solving author name ambiguity. Additionally, we propose the general system architecture for author name disambiguation on any dataset. We evaluate the proposed method on a dataset containing Vietnamese author names. The results show that this method significantly outperforms other methods that use predefined feature set. The proposed method achieves 99.31% in terms of accuracy. Prediction error rate decreases from 1.83% to 0.69%, i.e., it decreases by 1.14%, or 62.3% relatively compared with other methods that use predefined feature set (Table 3).",
"title": ""
},
{
"docid": "627e4d3c2dfb8233f0e345410064f6d0",
"text": "Data clustering is an important task in many disciplines. A large number of studies have attempted to improve clustering by using the side information that is often encoded as pairwise constraints. However, these studies focus on designing special clustering algorithms that can effectively exploit the pairwise constraints. We present a boosting framework for data clustering,termed as BoostCluster, that is able to iteratively improve the accuracy of any given clustering algorithm by exploiting the pairwise constraints. The key challenge in designing a boosting framework for data clustering is how to influence an arbitrary clustering algorithm with the side information since clustering algorithms by definition are unsupervised. The proposed framework addresses this problem by dynamically generating new data representations at each iteration that are, on the one hand, adapted to the clustering results at previous iterations by the given algorithm, and on the other hand consistent with the given side information. Our empirical study shows that the proposed boosting framework is effective in improving the performance of a number of popular clustering algorithms (K-means, partitional SingleLink, spectral clustering), and its performance is comparable to the state-of-the-art algorithms for data clustering with side information.",
"title": ""
},
{
"docid": "c175910d1809ad6dc073f79e4ca15c0c",
"text": "The Global Positioning System (GPS) double-difference carrier-phase data are biased by an integer number of cycles. In this contribution a new method is introduced that enables very fast integer least-squares estimation of the ambiguities. The method makes use of an ambiguity transformation that allows one to reformulate the original ambiguity estimation problem as a new problem that is much easier to solve. The transformation aims at decorrelating the least-squares ambiguities and is based on an integer approximation of the conditional least-squares transformation. And through a flattening of the typical discontinuity in the GPS-spectrum of conditional variances of the ambiguities, the transformation returns new ambiguities that show a dramatic improvement in precision in comparison with the original double-difference ambiguities.",
"title": ""
},
{
"docid": "e4edffeea08d6eae4dfc89c05f4c7507",
"text": "A partially reflective surface (PRS) antenna design enabling 1-bit dynamic beamwidth control is presented. The antenna operates at X-band and is based on microelectromechanical systems (MEMS) technology. The reconfigurable PRS unit cell monolithically integrates MEMS elements, whose positions are chosen to reduce losses while allowing a considerable beamwidth variation. The combined use of the proposed PRS unit cell topology and MEMS technology allows achieving low loss in the reconfigurable PRS. In addition, the antenna operates in dual-linear polarization with independent beamwidth control of each polarization. An operative MEMS-based PRS unit cell is fabricated and measured upon reconfiguration, showing very good agreement with simulations. The complete antenna system performance is rigorously evaluated based on full-wave simulations and the unit cell measurements, demonstrating an 18° and 23° variation of the half-power beamwidth in the E-plane and the H-plane, respectively. The antenna radiation efficiency is better than 75% in all states of operation.",
"title": ""
}
] |
scidocsrr
|
e313e0fe6865830c7e8269f52ff133bc
|
Wideband Inline Coaxial to Ridge Waveguide Transition With Tuning Capability for Ridge Gap Waveguide
|
[
{
"docid": "7192e2ae32eb79aaefdf8e54cdbba715",
"text": "Recently, ridge gap waveguides are considered as guiding structures in high-frequency applications. One of the major problems facing this guiding structure is the limited ability of using all the possible bandwidths due to the limited bandwidth of the transition to the coaxial lines. Here, a review of the different excitation techniques associated with this guiding structure is presented. Next, some modifications are proposed to improve its response in order to cover the possible actual bandwidth. The major aim of this paper is to introduce a wideband coaxial to ridge gap waveguide transition based on five sections of matching networks. The introduced transition shows excellent return loss, which is better than 15 dB over the actual possible bandwidth for double transitions.",
"title": ""
},
{
"docid": "ee21b2744b26a11647c72d09025a6e11",
"text": "This paper presents the design of microstrip-ridge gap waveguide using via-holes in printed circuit boards, a solution for high-frequency circuits. The study includes how to define the numerical ports, pin sensitivity, losses, and also a comparison with performance of normal microstrip lines and inverted microstrip lines. The results are produced using commercially available electromagnetic simulators. A WR-15 to microstrip-ridge gap waveguide transition was also designed. The results are verified with measurements on microstrip-ridge gap waveguides with WR15 transitions at both ends.",
"title": ""
}
] |
[
{
"docid": "2d47bf032f9364ae56e93fa03079eae8",
"text": "Five studies tested two general hypotheses: Individuals differ in their use of emotion regulation strategies such as reappraisal and suppression, and these individual differences have implications for affect, well-being, and social relationships. Study 1 presents new measures of the habitual use of reappraisal and suppression. Study 2 examines convergent and discriminant validity. Study 3 shows that reappraisers experience and express greater positive emotion and lesser negative emotion, whereas suppressors experience and express lesser positive emotion, yet experience greater negative emotion. Study 4 indicates that using reappraisal is associated with better interpersonal functioning, whereas using suppression is associated with worse interpersonal functioning. Study 5 shows that using reappraisal is related positively to well-being, whereas using suppression is related negatively.",
"title": ""
},
{
"docid": "b31ebdbd7edc0b30b0529a85fab0b612",
"text": "In this paper, we present RFMS, the real-time flood monitoring system with wireless sensor networks, which is deployed in two volcanic islands Ulleung-do and Dok-do located in the East Sea near to the Korean peninsula and developed for flood monitoring. RFMS measures river and weather conditions through wireless sensor nodes equipped with different sensors. Measured information is employed for early-warning via diverse types of services such as SMS (short message service) and a Web service.",
"title": ""
},
{
"docid": "19a1aab60faad5a9376bb220352dc081",
"text": "BACKGROUND\nPatients with type 2 diabetes mellitus (T2DM) struggle with the management of their condition due to difficulty relating lifestyle behaviors with glycemic control. While self-monitoring of blood glucose (SMBG) has proven to be effective for those treated with insulin, it has been shown to be less beneficial for those only treated with oral medications or lifestyle modification. We hypothesized that the effective self-management of non-insulin treated T2DM requires a behavioral intervention that empowers patients with the ability to self-monitor, understand the impact of lifestyle behaviors on glycemic control, and adjust their self-care based on contextualized SMBG data.\n\n\nOBJECTIVE\nThe primary objective of this randomized controlled trial (RCT) is to determine the impact of bant2, an evidence-based, patient-centered, behavioral mobile app intervention, on the self-management of T2DM. Our second postulation is that automated feedback delivered through the mobile app will be as effective, less resource intensive, and more scalable than interventions involving additional health care provider feedback.\n\n\nMETHODS\nThis study is a 12-month, prospective, multicenter RCT in which 150 participants will be randomly assigned to one of two groups: the control group will receive current standard of care, and the intervention group will receive the mobile phone app system in addition to standard of care. The primary outcome measure is change in glycated hemoglobin A1c from baseline to 12 months.\n\n\nRESULTS\nThe first patient was enrolled on July 28, 2015, and we anticipate completing this study by September, 2018.\n\n\nCONCLUSIONS\nThis RCT is one of the first to evaluate an evidence-based mobile app that focuses on facilitating lifestyle behavior change driven by contextualized and structured SMBG. The results of this trial will provide insights regarding the usage of mobile tools and consumer-grade devices for diabetes self-care, the economic model of using incentives to motivate behavior change, and the consumption of test strips when following a rigorously structured approach for SMBG.\n\n\nTRIAL REGISTRATION\nClinicalTrials.gov NCT02370719; https://clinicaltrials.gov/ct2/show/NCT02370719 (Archived at http://www.webcitation.org/6jpyjfVRs).",
"title": ""
},
{
"docid": "2bcd273c197261cf1555b81fc3b5f107",
"text": "Deep Learning has managed to push boundaries in a wide variety of tasks. One area of interest is to tackle problems in reasoning and understanding, in an aim to emulate human intelligence. In this work, we describe a deep learning model that addresses the reasoning task of question-answering on bar graphs and pie charts. We introduce a novel architecture that learns to identify various plot elements, quantify the represented values and determine a relative ordering of these statistical values. We test our model on the recently released FigureQA dataset, which provides images and accompanying questions, for bar graphs and pie charts, augmented with rich annotations. Our approach outperforms the state-of-the-art Relation Networks baseline and traditional CNN-LSTM models when evaluated on this dataset. Our model also has a considerably faster training time of approximately 2 days on 1 GPU compared to the Relation Networks baseline which requires around two weeks to train on 4 GPUs.",
"title": ""
},
{
"docid": "c1bfef951e9775f6ffc949c5110e1bd1",
"text": "In the interest of more systematically documenting the early signs of autism, and of testing specific hypotheses regarding their underlying neurodevelopmental substrates, we have initiated a longitudinal study of high-risk infants, all of whom have an older sibling diagnosed with an autistic spectrum disorder. Our sample currently includes 150 infant siblings, including 65 who have been followed to age 24 months, who are the focus of this paper. We have also followed a comparison group of low-risk infants. Our measures include a novel observational scale (the first, to our knowledge, that is designed to assess autism-specific behavior in infants), a computerized visual orienting task, and standardized measures of temperament, cognitive and language development. Our preliminary results indicate that by 12 months of age, siblings who are later diagnosed with autism may be distinguished from other siblings and low-risk controls on the basis of: (1) several specific behavioral markers, including atypicalities in eye contact, visual tracking, disengagement of visual attention, orienting to name, imitation, social smiling, reactivity, social interest and affect, and sensory-oriented behaviors; (2) prolonged latency to disengage visual attention; (3) a characteristic pattern of early temperament, with marked passivity and decreased activity level at 6 months, followed by extreme distress reactions, a tendency to fixate on particular objects in the environment, and decreased expression of positive affect by 12 months; and (4) delayed expressive and receptive language. We discuss these findings in the context of various neural networks thought to underlie neurodevelopmental abnormalities in autism, including poor visual orienting. Over time, as we are able to prospectively study larger numbers and to examine interrelationships among both early-developing behaviors and biological indices of interest, we hope this work will advance current understanding of the neurodevelopmental origins of autism.",
"title": ""
},
{
"docid": "4fe3f01fef636f8f5cb3c7655a619390",
"text": "This paper replicates the results of Dai, Olah, and Le’s paper ”Document Embedding with Paragraph Vectors” and compares the performance of three unsupervised document modeling algorithms [1]. We built and compared the results of Paragraph Vector, Latent Dirichlet Allocation, and traditional Word2Vec models on Wikipedia browsing. We then built three extensions to the original Paragraph Vector model, finding that combinations of paragraph structures assist in optimizing Paragraph Vector training.",
"title": ""
},
{
"docid": "fb214dfd39c4fef19b6598b3b78a1730",
"text": "Social media users share billions of items per year, only a small fraction of which is geotagged. We present a data-driven approach for identifying non-geotagged content items that can be associated with a hyper-local geographic area by modeling the location distributions of n-grams that appear in the text. We explore the trade-off between accuracy and coverage of this method. Further, we explore differences across content received from multiple platforms and devices, and show, for example, that content shared via different sources and applications produces significantly different geographic distributions, and that it is preferred to model and predict location for items according to their source. Our findings show the potential and the bounds of a data-driven approach to assigning location data to short social media texts, and offer implications for all applications that use data-driven approaches to locate content.",
"title": ""
},
{
"docid": "09cc8bd6fec4123a174f78586ef587df",
"text": "Cloud computing technology is garnering success and wisdom-like stories of savings, ease of use, and increased flexibility in controlling how resources are used at any given time to deliver computing capability. This paper develops a preliminary decision framework to assist managers who are determining which cloud solution matches their specific requirements and evaluating the numerous commercial claims (in many cases unsubstantiated) of a cloud's value. This decision framework and research helps managers allocate investments and assess cloud alternatives that now compete with in-house data centers that previously stored, accessed, and processed data or with another company's (outsourced) data center resources. The hypothetically newly captured corporate value (from cloud) is that resources are no longer idle most of the time, and are now much more fully utilized (with lower unit costs). This reduces high ownership and support costs, improves capital leverage, and delivers increased flexibility in the use of resources.",
"title": ""
},
{
"docid": "ae7bcfb547c4dcb1f30cfc48dd1d494f",
"text": "Recently, authority ranking has received increasing interests in both academia and industry, and it is applicable to many problems such as discovering influential nodes and building recommendation systems. Various graph-based ranking approaches like PageRank have been used to rank authors and papers separately in homogeneous networks. In this paper, we take venue information into consideration and propose a novel graph-based ranking framework, Tri-Rank, to co-rank authors, papers and venues simultaneously in heterogeneous networks. This approach is a flexible framework and it ranks authors, papers and venues iteratively in a mutually reinforcing way to achieve a more synthetic, fair ranking result. We conduct extensive experiments using the data collected from ACM Digital Library. The experimental results show that Tri-Rank is more effective and efficient than the state-of-the-art baselines including PageRank, HITS and Co-Rank in ranking authors. The papers and venues ranked by Tri-Rank also demonstrate that Tri-Rank is rational.",
"title": ""
},
{
"docid": "273de3969dbe7d5b64cdb9e4ba19d2be",
"text": "Insole pressure systems are often more appropriate than force platforms for analysing center of pressure (CoP) as they are more flexible in use and indicate the position of the CoP that characterizes the contact foot/shoe during gait with shoes. However, these systems are typically not synchronized with 3D motion analysis systems. The present paper proposes a direct method that does not require a force platform for synchronizing an insole pressure system with a 3D motion analysis system. The distance separating 24 different CoPs measured optically and their equivalents measured by the insoles and transformed in the global coordinate system did not exceed 2 mm, confirming the suitability of the method proposed. Additionally, during static single limb stance, distances smaller than 7 mm and correlations higher than 0.94 were found between CoP trajectories measured with insoles and force platforms. Similar measurements were performed during gait to illustrate the characteristics of the CoP measured with each system. The distance separating the two CoPs was below 19 mm and the coefficient of correlation above 0.86. The proposed method offers the possibility to conduct new experiments, such as the investigation of proprioception in climbing stairs or in the presence of obstacles.",
"title": ""
},
{
"docid": "1bd9cedbbbd26d670dd718fe47c952e7",
"text": "Recent advances in conversational systems have changed the search paradigm. Traditionally, a user poses a query to a search engine that returns an answer based on its index, possibly leveraging external knowledge bases and conditioning the response on earlier interactions in the search session. In a natural conversation, there is an additional source of information to take into account: utterances produced earlier in a conversation can also be referred to and a conversational IR system has to keep track of information conveyed by the user during the conversation, even if it is implicit. We argue that the process of building a representation of the conversation can be framed as a machine reading task, where an automated system is presented with a number of statements about which it should answer questions. The questions should be answered solely by referring to the statements provided, without consulting external knowledge. The time is right for the information retrieval community to embrace this task, both as a stand-alone task and integrated in a broader conversational search setting. In this paper, we focus on machine reading as a stand-alone task and present the Attentive Memory Network (AMN), an end-to-end trainable machine reading algorithm. Its key contribution is in efficiency, achieved by having an hierarchical input encoder, iterating over the input only once. Speed is an important requirement in the setting of conversational search, as gaps between conversational turns have a detrimental effect on naturalness. On 20 datasets commonly used for evaluating machine reading algorithms we show that the AMN achieves performance comparable to the state-of-theart models, while using considerably fewer computations.",
"title": ""
},
{
"docid": "85856deb5bf7cafef8f68ad13414d4b1",
"text": "human health and safety, serving as an early-warning system for hazardous environmental conditions, such as poor air and water quality (e.g., Glasgow et al. 2004, Normander et al. 2008), and natural disasters, such as fires (e.g., Hefeeda and Bagheri 2009), floods (e.g., Young 2002), and earthquakes (e.g., Hart and Martinez 2006). Collectively, these changes in the technological landscape are altering the way that environmental conditions are monitored, creating a platform for new scientific discoveries (Porter et al. 2009). Although sensor networks can provide many benefits, they are susceptible to malfunctions that can result in lost or poor-quality data. Some level of sensor failure is inevitable; however, steps can be taken to minimize the risk of loss and to improve the overall quality of the data. In the ecological community, it has become common practice to post streaming sensor data online with limited or no quality control. That is, these data are often delivered to end users in a raw form, without any checks or evaluations having been performed. In such cases, the data are typically released provisionally with the understanding that they could change in the future. However, when provisional data are made publically available before they have been comprehensively checked, there is the potential for erroneous or misleading results. Streaming sensor networks have advanced ecological research by providing enormous quantities of data at fine temporal and spatial resolutions in near real time (Szewczyk et al. 2004, Porter et al. 2005, Collins et al. 2006). The advent of wireless technologies has enabled connections with sensors in remote locations, making it possible to transmit data instantaneously using communication devices such as cellular phones, radios, and local area networks. Advancements in cyberinfrastructure have improved data storage capacity, processing speed, and communication bandwidth, making it possible to deliver to end users the most current observations from sensors (e.g., within minutes after their collection). Recent technological developments have resulted in a new generation of in situ sensors that provide continuous data streams on the physical, chemical, optical, acoustical, and biological properties of ecosystems. These new types of sensors provide a window into natural patterns not obtainable with discrete measurements (Benson et al. 2010). Techniques for rapidly processing and interpreting digital data, such as webcam images in investigations of tree phenology (Richardson et al. 2009) and acoustic data in wildlife research (Szewczyk et al. 2004), have also enhanced our understanding of ecological processes. Access to near-real-time data has become important for",
"title": ""
},
{
"docid": "28389713b203129f0e8a2576928241ab",
"text": "Mobile Ad hoc Network (MANET) is a collection of wireless mobile nodes that dynamically form a network temporarily without any support of central management. Moreover, Every node in MANET moves arbitrarily making the multi-hop network topology to change randomly at uncertain times. There are several familiar routing protocols like AODV,DSR,DSDV etc... which have been proposed for providing communication among all the nodes in the wireless network. This paper presents a performance comparison and study of reactive and proactive protocols AODV,DSR and DSDV based on metrics such as throughput, control overhead ,packet delivery ratio and average end-toend delay by using the NS-2 simulator.",
"title": ""
},
{
"docid": "b9daaabfc245958b9dee7d4910e80431",
"text": "Strawberry fruits are highly valued for their taste and nutritional value. However, results describing the bioaccessibility and intestinal absorption of phenolic compounds from strawberries are still scarce. In our study, a combined in vitro digestion/Caco-2 absorption model was used to mimic physiological conditions in the gastrointestinal track and identify compounds transported across intestinal epithelium. In the course of digestion, the loss of anthocyanins was noted whilst pelargonidin-3-glucoside remained the most abundant compound, amounting to nearly 12 mg per 100 g of digested strawberries. Digestion increased the amount of ellagic acid available by nearly 50%, probably due to decomposition of ellagitannins. Only trace amounts of pelargonidin-3-glucoside were found to be absorbed in the intestine model. Dihydrocoumaric acid sulphate and p-coumaric acid were identified as metabolites formed in enterocytes and released at the serosal side of the model.",
"title": ""
},
{
"docid": "148f306c8c9a4170afcdc8a0b6ff902c",
"text": "Word vectors require significant amounts of memory and storage, posing issues to resource limited devices like mobile phones and GPUs. We show that high quality quantized word vectors using 1-2 bits per parameter can be learned by introducing a quantization function into Word2Vec. We furthermore show that training with the quantization function acts as a regularizer. We train word vectors on English Wikipedia (2017) and evaluate them on standard word similarity and analogy tasks and on question answering (SQuAD). Our quantized word vectors not only take 8-16x less space than full precision (32 bit) word vectors but also outperform them on word similarity tasks and question answering.",
"title": ""
},
{
"docid": "03e7070b1eb755d792564077f65ea012",
"text": "The widespread use of online social networks (OSNs) to disseminate information and exchange opinions, by the general public, news media, and political actors alike, has enabled new avenues of research in computational political science. In this paper, we study the problem of quantifying and inferring the political leaning of Twitter users. We formulate political leaning inference as a convex optimization problem that incorporates two ideas: (a) users are consistent in their actions of tweeting and retweeting about political issues, and (b) similar users tend to be retweeted by similar audience. We then apply our inference technique to 119 million election-related tweets collected in seven months during the 2012 U.S. presidential election campaign. On a set of frequently retweeted sources, our technique achieves 94 percent accuracy and high rank correlation as compared with manually created labels. By studying the political leaning of 1,000 frequently retweeted sources, 232,000 ordinary users who retweeted them, and the hashtags used by these sources, our quantitative study sheds light on the political demographics of the Twitter population, and the temporal dynamics of political polarization as events unfold.",
"title": ""
},
{
"docid": "0c1672cb538bfbc50136c5365f04282b",
"text": "We propose DeepMiner, a framework to discover interpretable representations in deep neural networks and to build explanations for medical predictions. By probing convolutional neural networks (CNNs) trained to classify cancer in mammograms, we show that many individual units in the final convolutional layer of a CNN respond strongly to diseased tissue concepts specified by the BI-RADS lexicon. After expert annotation of the interpretable units, our proposed method is able to generate explanations for CNN mammogram classification that are correlated with ground truth radiology reports on the DDSM dataset. We show that DeepMiner not only enables better understanding of the nuances of CNN classification decisions, but also possibly discovers new visual knowledge relevant to medical diagnosis.",
"title": ""
},
{
"docid": "627e4d3c2dfb8233f0e345410064f6d0",
"text": "Data clustering is an important task in many disciplines. A large number of studies have attempted to improve clustering by using the side information that is often encoded as pairwise constraints. However, these studies focus on designing special clustering algorithms that can effectively exploit the pairwise constraints. We present a boosting framework for data clustering,termed as BoostCluster, that is able to iteratively improve the accuracy of any given clustering algorithm by exploiting the pairwise constraints. The key challenge in designing a boosting framework for data clustering is how to influence an arbitrary clustering algorithm with the side information since clustering algorithms by definition are unsupervised. The proposed framework addresses this problem by dynamically generating new data representations at each iteration that are, on the one hand, adapted to the clustering results at previous iterations by the given algorithm, and on the other hand consistent with the given side information. Our empirical study shows that the proposed boosting framework is effective in improving the performance of a number of popular clustering algorithms (K-means, partitional SingleLink, spectral clustering), and its performance is comparable to the state-of-the-art algorithms for data clustering with side information.",
"title": ""
}
] |
scidocsrr
|
ad80cdfd6e069d0a1370b7e6a7e6c5b9
|
Foreground–Background Separation From Video Clips via Motion-Assisted Matrix Restoration
|
[
{
"docid": "a3f06bfcc2034483cac3ee200803878c",
"text": "This paper presents a technique for motion detection that incorporates several innovative mechanisms. For example, our proposed technique stores, for each pixel, a set of values taken in the past at the same location or in the neighborhood. It then compares this set to the current pixel value in order to determine whether that pixel belongs to the background, and adapts the model by choosing randomly which values to substitute from the background model. This approach differs from those based upon the classical belief that the oldest values should be replaced first. Finally, when the pixel is found to be part of the background, its value is propagated into the background model of a neighboring pixel. We describe our method in full details (including pseudo-code and the parameter values used) and compare it to other background subtraction techniques. Efficiency figures show that our method outperforms recent and proven state-of-the-art methods in terms of both computation speed and detection rate. We also analyze the performance of a downscaled version of our algorithm to the absolute minimum of one comparison and one byte of memory per pixel. It appears that even such a simplified version of our algorithm performs better than mainstream techniques.",
"title": ""
}
] |
[
{
"docid": "c8edb6b8ed8176368faf591161718b95",
"text": "A new 4-group model of attachment styles in adulthood is proposed. Four prototypic attachment patterns are defined using combinations of a person's self-image (positive or negative) and image of others (positive or negative). In Study 1, an interview was developed to yield continuous and categorical ratings of the 4 attachment styles. Intercorrelations of the attachment ratings were consistent with the proposed model. Attachment ratings were validated by self-report measures of self-concept and interpersonal functioning. Each style was associated with a distinct profile of interpersonal problems, according to both self- and friend-reports. In Study 2, attachment styles within the family of origin and with peers were assessed independently. Results of Study 1 were replicated. The proposed model was shown to be applicable to representations of family relations; Ss' attachment styles with peers were correlated with family attachment ratings.",
"title": ""
},
{
"docid": "835b74c546ba60dfbb62e804daec8521",
"text": "The goal of Open Information Extraction (OIE) is to extract surface relations and their arguments from naturallanguage text in an unsupervised, domainindependent manner. In this paper, we propose MinIE, an OIE system that aims to provide useful, compact extractions with high precision and recall. MinIE approaches these goals by (1) representing information about polarity, modality, attribution, and quantities with semantic annotations instead of in the actual extraction, and (2) identifying and removing parts that are considered overly specific. We conducted an experimental study with several real-world datasets and found that MinIE achieves competitive or higher precision and recall than most prior systems, while at the same time producing shorter, semantically enriched extractions.",
"title": ""
},
{
"docid": "14fa72af2a1a4264b2e84e6c810df326",
"text": "This paper presents a clustering approach that simultaneously identifies product features and groups them into aspect categories from online reviews. Unlike prior approaches that first extract features and then group them into categories, the proposed approach combines feature and aspect discovery instead of chaining them. In addition, prior work on feature extraction tends to require seed terms and focus on identifying explicit features, while the proposed approach extracts both explicit and implicit features, and does not require seed terms. We evaluate this approach on reviews from three domains. The results show that it outperforms several state-of-the-art methods on both tasks across all three domains.",
"title": ""
},
{
"docid": "4661b378eda6cd44c95c40ebf06b066b",
"text": "Speech signal degradation in real environments mainly results from room reverberation and concurrent noise. While human listening is robust in complex auditory scenes, current speech segregation algorithms do not perform well in noisy and reverberant environments. We treat the binaural segregation problem as binary classification, and employ deep neural networks (DNNs) for the classification task. The binaural features of the interaural time difference and interaural level difference are used as the main auditory features for classification. The monaural feature of gammatone frequency cepstral coefficients is also used to improve classification performance, especially when interference and target speech are collocated or very close to one another. We systematically examine DNN generalization to untrained spatial configurations. Evaluations and comparisons show that DNN-based binaural classification produces superior segregation performance in a variety of multisource and reverberant conditions.",
"title": ""
},
{
"docid": "a6e35b743c2cfd2cd764e5ad83decaa7",
"text": "An e-vendor’s website inseparably embodies an interaction with the vendor and an interaction with the IT website interface. Accordingly, research has shown two sets of unrelated usage antecedents by customers: 1) customer trust in the e-vendor and 2) customer assessments of the IT itself, specifically the perceived usefulness and perceived ease-of-use of the website as depicted in the technology acceptance model (TAM). Research suggests, however, that the degree and impact of trust, perceived usefulness, and perceived ease of use change with experience. Using existing, validated scales, this study describes a free-simulation experiment that compares the degree and relative importance of customer trust in an e-vendor vis-à-vis TAM constructs of the website, between potential (i.e., new) customers and repeat (i.e., experienced) ones. The study found that repeat customers trusted the e-vendor more, perceived the website to be more useful and easier to use, and were more inclined to purchase from it. The data also show that while repeat customers’ purchase intentions were influenced by both their trust in the e-vendor and their perception that the website was useful, potential customers were not influenced by perceived usefulness, but only by their trust in the e-vendor. Implications of this apparent trust-barrier and guidelines for practice are discussed.",
"title": ""
},
{
"docid": "1c2f873f3fb57de69f5783cc1f9699ed",
"text": "Deep artificial neural networks (DNNs) are typically trained via gradient-based learning algorithms, namely backpropagation. Evolution strategies (ES) can rival backprop-based algorithms such as Q-learning and policy gradients on challenging deep reinforcement learning (RL) problems. However, ES can be considered a gradient-based algorithm because it performs stochastic gradient descent via an operation similar to a finite-difference approximation of the gradient. That raises the question of whether non-gradient-based evolutionary algorithms can work at DNN scales. Here we demonstrate they can: we evolve the weights of a DNN with a simple, gradient-free, population-based genetic algorithm (GA) and it performs well on hard deep RL problems, including Atari and humanoid locomotion. The Deep GA successfully evolves networks with over four million free parameters, the largest neural networks ever evolved with a traditional evolutionary algorithm. These results (1) expand our sense of the scale at which GAs can operate, (2) suggest intriguingly that in some cases following the gradient is not the best choice for optimizing performance, and (3) make immediately available the multitude of techniques that have been developed in the neuroevolution community to improve performance on RL problems. To demonstrate the latter, we show that combining DNNs with novelty search, which was designed to encourage exploration on tasks with deceptive or sparse reward functions, can solve a high-dimensional problem on which reward-maximizing algorithms (e.g. DQN, A3C, ES, and the GA) fail. Additionally, the Deep GA parallelizes better than ES, A3C, and DQN, and enables a state-of-the-art compact encoding technique that can represent million-parameter DNNs in thousands of bytes.",
"title": ""
},
{
"docid": "9098d40a9e16a1bd1ed0a9edd96f3258",
"text": "The filter bank multicarrier with offset quadrature amplitude modulation (FBMC/OQAM) is being studied by many researchers as a key enabler for the fifth-generation air interface. In this paper, a hybrid peak-to-average power ratio (PAPR) reduction scheme is proposed for FBMC/OQAM signals by utilizing multi data block partial transmit sequence (PTS) and tone reservation (TR). In the hybrid PTS-TR scheme, the data blocks signal is divided into several segments, and the number of data blocks in each segment is determined by the overlapping factor. In each segment, we select the optimal data block to transmit and jointly consider the adjacent overlapped data block to achieve minimum signal power. Then, the peak reduction tones are utilized to cancel the peaks of the segment FBMC/OQAM signals. Simulation results and analysis show that the proposed hybrid PTS-TR scheme could provide better PAPR reduction than conventional PTS and TR schemes in FBMC/OQAM systems. Furthermore, we propose another multi data block hybrid PTS-TR scheme by exploiting the adjacent multi overlapped data blocks, called as the multi hybrid (M-hybrid) scheme. Simulation results show that the M-hybrid scheme can achieve about 0.2-dB PAPR performance better than the hybrid PTS-TR scheme.",
"title": ""
},
{
"docid": "fe146c8db1a01bb043a3a2619118734c",
"text": "This paper presents a novel discriminative method for estimating 3D shape from a single image with 3D Morphable Model (3DMM). Until now, most traditional 3DMM fitting methods depend on the analysis-by-synthesis framework which searches for the best parameters by minimizing the difference between the input image and the model appearance. They are highly sensitive to initialization and have to rely on the stochastic optimization to handle local minimum problem, which is usually a time-consuming process. To solve the problem, we find a different direction to estimate the shape parameters through learning a regressor instead of minimizing the appearance difference. Compared with the traditional analysis-by-synthesis framework, the new discriminative approach makes it possible to utilize large databases to train a robust fitting model which can reconstruct shape from image features accurately and efficiently. We compare our method with two popular 3DMM fitting algorithms on FRGC database. Experimental results show that our approach significantly outperforms the state-of-the-art in terms of efficiency, robustness and accuracy.",
"title": ""
},
{
"docid": "b38adfeec4e495fdb0fd4cf98b7259a6",
"text": "Task switch cost (the deficit of performing a new task vs. a repeated task) has been partly attributed to priming of the repeated task, as well as to inappropriate preparation for the switched task. In the present study, we examined the nature of the priming effect by repeating stimulus-related processes, such as stimulus encoding or stimulus identification. We adopted a partial-overlap task-switching paradigm, in which only stimulus-related processes should be repeated or switched. The switch cost in this partial-overlap condition was smaller than the cost in the full-overlap condition, in which the task overlap involved more than stimulus processing, indicating that priming of a stimulus is a component of a switch cost. The switch cost in the partial-overlap condition, however, disappeared eventually with a long interval between two tasks, whereas the cost in the full-overlap condition remained significant. Moreover, the switch cost, in general, did not interact with foreknowledge, suggesting that preparation on the basis of foreknowledge may be related to processes beyond stimulus encoding. These results suggest that stimulus-related priming is automatic and short-lived and, therefore, is not a part of the persisting portion of switch cost.",
"title": ""
},
{
"docid": "d93d8a7c61b4cbe21b551d08458844c5",
"text": "Presently there is a rapid development of the internet and the telecommunication techniques. Importance of information security is increasing. Cryptography and steganography are the major areas which work on information hiding and security. In this paper a method is used to embed a color secret image inside a color cover image. A 2-3-3 LSB insertion method has been used for image steganography. The important quality of a steganographic system is to be less distortive while increasing the size of the secret image. Use of cryptography along with steganography increases the security. Arnold CATMAP encryption technique is used for encrypting the secret image. Color space plays an important role in increasing network bandwidth efficiency. YUV color space provides reduced bandwidth for chrominance components. This paper demonstrates that YUV color space can also be used for security purposes. Keywords— Watermarking, Haar Wavelet, DWT, PSNR",
"title": ""
},
{
"docid": "8037baf544198ca8d5d9bfae60505681",
"text": "Bounding volume hierarchies (BVH) are a commonly used method for speeding up ray tracing. Even though the memory footprint of a BVH is relatively low compared to other acceleration data structures, they still can consume a large amount of memory for complex scenes and exceed the memory bounds of the host system. This can lead to a tremendous performance decrease on the order of several magnitudes. In this paper we present a novel scheme for construction and storage of BVHs that can reduce the memory consumption to less than 1% of a standard BVH. We show that our representation, which uses only 2 bits per node, is the smallest possible representation on a per node basis that does not produce empty space deadlocks. Our data structure, called the Minimal Bounding Volume Hierarchy (MVH) reduces the memory requirements in two important ways: using implicit indexing and preset surface reduction factors. Obviously, this scheme has a non-negligible computational overhead, but this overhead can be compensated to a large degree by shooting larger ray bundles instead of single rays, using a simpler intersection scheme and a two-level representation of the hierarchy. These measure enable interactive ray tracing performance without the necessity to rely on out-of-core techniques that would be inevitable for a standard BVH. This is the author version of the paper. The definitive version is available at diglib.eg.org.",
"title": ""
},
{
"docid": "f5a012a451afbda47ad3b21e7d601b25",
"text": "In recent years Quantum Cryptography gets more attention as well as becomes most promising cryptographic field for faster, effective and more secure communications. Quantum Cryptography is an art of science which overlap quantum properties of light under quantum mechanics on cryptographic task instead of current state of algorithms based on mathematical computing technology. Major algorithms for public key encryption and some digital signature scheme such as RSA, El Gamal cryptosystem, hash function are vulnerable at quantum adversaries. Most of the factoring problem can be broken by Shore's algorithm and quantum computer threatens other hand discrete logarithm problem. Our paper describes why modern cryptography contributes quantum cryptography, security issues and future goals of modern cryptography.",
"title": ""
},
{
"docid": "b236003ad282e973b3ebf270894c2c07",
"text": "Darier's disease is characterized by dense keratotic lesions in the seborrheic areas of the body such as scalp, forehead, nasolabial folds, trunk and inguinal region. It is a rare genodermatosis, an autosomal dominant inherited disease that may be associated with neuropsichiatric disorders. It is caused by ATPA2 gene mutation, presenting cutaneous and dermatologic expressions. Psychiatric symptoms are depression, suicidal attempts, and bipolar affective disorder. We report a case of Darier's disease in a 48-year-old female patient presenting severe cutaneous and psychiatric manifestations.",
"title": ""
},
{
"docid": "06834aae2618b3f1e5b7a28bd8c10760",
"text": "Being external, the fungal cell wall plays a crucial role in the fungal life. By covering the underneath cell, it offers mechanical strength and acts as a barrier, thus protecting the fungus from the hostile environment. Chemically, this cell wall is composed of different polysaccharides. Because of their specific composition, the fungal cell wall and its underlying plasma membrane are unique targets for the development of drugs against pathogenic fungal species. The objective of this review is to consolidate the current knowledge on the antifungal drugs targeting the cell wall and plasma membrane, mainly of Aspergillus and Candida species - the most prevalent fungal pathogens, and also to present challenges and questions conditioning the development of new antifungal drugs targeting the cell wall.",
"title": ""
},
{
"docid": "950fce647311598a4fcef31643b15293",
"text": "Vernalization at 5°C for 5 days, either singly or in combination with a foliar application of atonik at 250, 500 and 1000 mg/l or 6-benzyl adenine (BA) at 25, 50 and 100 mg/l was studied on the growth parameters and flowering response, photosynthetic pigments, different carbohydrate and nitrogen fractions, ion contents and endogenous level of different phytohormones of Pisum sativum (cv. ‘Master Bean’). All determined growth parameters (root and shoot length, fresh weight and dry weight; number of nodes/plant; number of leaves/plant; total leaf area/plant; relative water content; number of flowers/plant) decreased in response to vernalization treatment. In contrast, vernalization in combination with 1000 mg/l atonik or 50 mg/l BA led to a significant increase in these parameters. Vernalization, alone or in combination with atonik or BA, significantly increased all photosynthetic pigments and generally led to a significant increase in different carbohydrate and nitrogen fractions and ion content. On the other hand, vernalization led to a significant decrease in total auxins, gibberellic acid (GA3) and different CK fractions (zeatin, kinetin and BA) in pea plant shoots; ABA increased significantly. In contrast, vernalization combined with atonik or BA at any concentration led to a progressive increase in total auxins, GA3 and different CK fractions while ABA decreased significantly compared with control values. _____________________________________________________________________________________________________________",
"title": ""
},
{
"docid": "16635864d498d3ae01b42dd45085f2a1",
"text": "The previously developed radially bounded nearest neighbor (RBNN) algorithm have been shown a good performance for 3D point cloud segmentation in indoor scenarios. In outdoor scenarios however it is hard to adapt the original RBNN to an intelligent vehicle directly due to several drawbacks. In this paper, drawbacks of RBNN are addressed and we propose an enhanced RBNN for an intelligent vehicle operating in urban environments by proposing the ground elimination and the distance-varying radius. After the ground removal, objects can be remained to segment without merging the ground and objects, whereas the original RBNN with the fixed radius induced over-segmentation or under-segmentation. We design the distance-varying radius which is varied properly from the distance between a laser scanner and scanning objects. The proposed distance-varying radius is successfully induced to segment objects without over or under segmentation. In the experimental results, we have shown that the enhance RBNN is preferable to segment urban structures in terms of time consumption, and even segmentation rates.",
"title": ""
},
{
"docid": "39e2e40a5af5b56ddad0014e23f49558",
"text": "Utilization of long-lasting insecticidal nets (LLINs) is regarded as key malaria prevention and control strategy. However, studies have reported a large gap in terms of both ownership and utilization particularly in the sub-Saharan Africa (SSA). With continual efforts to improve the use of LLIN and to progress malaria elimination, examining the factors influencing the ownership and usage of LLIN is of high importance. Therefore, the current study was conducted to examine the level of ownership and use of LLIN along with identification of associated factors at household level. A cross-sectional study was conducted in Mirab Abaya District, Southern Ethiopia in June and July 2014. A total of 540 households, with an estimated 2690 members, were selected in four kebeles of the district known to have high incidence of malaria. Trained data collectors interviewed household heads to collect information on the knowledge, ownership and utilization of LLINs, which was complemented by direct observation on the conditions and use of the nets through house-to-house visit. Bivariate and multivariable logistic regression analyses were used to determine factors associated to LLIN use. Of 540 households intended to be included in the survey, 507 responded to the study (94.24% response rate), covering the homes of 2759 people. More than 58% of the households had family size >5 (the regional average), and 60.2% of them had at least one child below the age of 5 years. The ownership of at least one LLIN among households surveyed was 89.9%, and using at least one LLIN during the night prior to the survey among net owners was 85.1% (n = 456). Only 36.7% (186) mentioned at least as the mean of correct scores of all participants for 14 possible malaria symptoms and 32.7% (166) knew at least as the mean of correct scores of all participants for possible preventive methods. Over 30% of nets owned by the households were out of use. After controlling for confounding factors, having two or more sleeping places (adjusted odds ratio [aOR] = 2.58, 95% CI 1.17, 5.73), knowledge that LLIN prevents malaria (aOR = 2.51, 95% CI 1.17, 5.37), the presence of hanging bed nets (aOR = 19.24, 95% CI 9.24, 40.07) and walls of the house plastered or painted >12 months ago (aOR = 0.09, 95% CI 0.01, 0.71) were important predictors of LLIN utilization. This study found a higher proportion of LLIN ownership and utilization by households than had previously been found in similar studies in Ethiopia, and in many studies in SSA. However, poor knowledge of the transmission mechanisms and the symptoms of malaria, and vector control measures to prevent malaria were evident. Moderate proportions of nets were found to be out of use or in poor repair. Efforts should be in place to maintain the current rate of utilization of LLIN in the district and improve on the identified gaps in order to support the elimination of malaria.",
"title": ""
},
{
"docid": "7ceffe2b8345566f72027780681f2a43",
"text": "This paper presents a transistor optimization methodology for low-power analog integrated CMOS circuits, relying on the physics-based gm/ID characteristics as a design optimization guide. Our custom layout tool LIT implements and uses the ACM MOS compact model in the optimization loop. The methodology is implemented for automation within LIT and exploits all design space through the simulated annealing optimization process, providing solutions close to optimum with a single technology-dependent curve and accurate expressions for transconductance and current valid in all operation regions. The compact model itself contributes to convergence and to optimized implementations, since it has analytic expressions which are continuous in all current regimes, including weak and moderate inversion. The advantage of constraining the optimization within a power budget is of great importance for low-power CMOS. As examples we show the optimization results obtained with LIT, resulting in significant power savings, for the design of a two-stage Miller operational amplifier.",
"title": ""
},
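The abstract above pairs a single technology-dependent gm/ID curve with simulated annealing to size transistors under a power budget. As a rough, self-contained illustration of that general idea (not the authors' LIT tool or the ACM compact model), the sketch below anneals the inversion level and bias current of one transistor against a made-up gm/ID curve; the curve shape, gm target, power budget and cost weights are all assumptions.

```python
import math
import random

# Synthetic gm/ID curve: high in weak inversion, low in strong inversion.
# A real flow would read this single technology-dependent curve from measurements.
def gm_over_id(inv_level):
    return 25.0 / (1.0 + math.sqrt(inv_level))     # S/A, illustrative only

def cost(inv_level, id_ua, gm_target_us=200.0, power_budget_uw=50.0, vdd=1.2):
    gm_us = gm_over_id(inv_level) * id_ua           # achieved transconductance (uS)
    power_uw = vdd * id_ua                          # drain current sets static power
    penalty = max(0.0, gm_target_us - gm_us) ** 2   # missing the gm spec -> large cost
    penalty += max(0.0, power_uw - power_budget_uw) ** 2
    return power_uw + penalty

def anneal(steps=20000, t0=100.0):
    state = (1.0, 20.0)                             # (inversion level, drain current in uA)
    best, best_cost = state, cost(*state)
    for k in range(steps):
        t = t0 * (1.0 - k / steps) + 1e-6
        cand = (max(0.01, state[0] + random.gauss(0, 0.2)),
                max(0.1, state[1] + random.gauss(0, 1.0)))
        d = cost(*cand) - cost(*state)
        if d < 0 or random.random() < math.exp(-d / t):
            state = cand
        if cost(*state) < best_cost:
            best, best_cost = state, cost(*state)
    return best, best_cost

print(anneal())
```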
{
"docid": "0ec7538bef6a3ad982b8935f6124127d",
"text": "New technology has been seen as a way for many businesses in the tourism industry to stay competitive and enhance their marketing campaign in various ways. AR has evolved as the buzzword of modern information technology and is gaining increasing attention in the media as well as through a variety of use cases. This trend is highly fostered across mobile applications as well as the hype of wearable computing triggered by Google’s Glass project to be launched in 2014. However, although research on AR has been conducted in various fields including the Urban Tourism industry, the majority of studies focus on technical aspects of AR, while others are tailored to specific applications. Therefore, this paper aims to examine the current implementation of AR in the Urban Tourism context and identifies areas of research and development that is required to guide the early stages of AR implementation in a purposeful way to enhance the tourist experience. The paper provides an overview of AR and examines the impacts AR has made on the economy. Hence, AR applications in Urban Tourism are identified and benefits of AR are discussed. Please cite this article as: Jung, T. and Han, D. (2014). Augmented Reality (AR) in Urban Heritage Tourism. e-Review of Tourism Research. (ISSN: 1941-5842) Augmented Reality (AR) in Urban Heritage Tourism Timothy Jung and Dai-In Han Department of Food and Tourism Management Manchester\t\r Metropolitan\t\r University,\t\r United\t\r Kingdom t.jung@mmu.ac.uk,\t\r d.han@mmu.ac.uk",
"title": ""
},
{
"docid": "11d418decc0d06a3af74be77d4c71e5e",
"text": "Automatic generation control (AGC) regulates mechanical power generation in response to load changes through local measurements. Its main objective is to maintain system frequency and keep energy balanced within each control area in order to maintain the scheduled net interchanges between control areas. The scheduled interchanges as well as some other factors of AGC are determined at a slower time scale by considering a centralized economic dispatch (ED) problem among different generators. However, how to make AGC more economically efficient is less studied. In this paper, we study the connections between AGC and ED by reverse engineering AGC from an optimization view, and then we propose a distributed approach to slightly modify the conventional AGC to improve its economic efficiency by incorporating ED into the AGC automatically and dynamically.",
"title": ""
}
] |
scidocsrr
|
17c3a59dccb132b10e8ed771f93c7661
|
Concept-based Short Text Classification and Ranking
|
[
{
"docid": "57457909ea5fbee78eccc36c02464942",
"text": "Knowledge is indispensable to understanding. The ongoing information explosion highlights the need to enable machines to better understand electronic text in human language. Much work has been devoted to creating universal ontologies or taxonomies for this purpose. However, none of the existing ontologies has the needed depth and breadth for universal understanding. In this paper, we present a universal, probabilistic taxonomy that is more comprehensive than any existing ones. It contains 2.7 million concepts harnessed automatically from a corpus of 1.68 billion web pages. Unlike traditional taxonomies that treat knowledge as black and white, it uses probabilities to model inconsistent, ambiguous and uncertain information it contains. We present details of how the taxonomy is constructed, its probabilistic modeling, and its potential applications in text understanding.",
"title": ""
},
{
"docid": "cbac071c932c73813630fd7384e4f98c",
"text": "In this paper we propose a method that, given a query submitte d to a search engine, suggests a list of related queries. The rela t d queries are based in previously issued queries, and can be issued by the user to the search engine to tune or redirect the search process. The method proposed i s based on a query clustering process in which groups of semantically similar queries are identified. The clustering process uses the content of historical prefe renc s of users registered in the query log of the search engine. The method not onl y discovers the related queries, but also ranks them according to a relevanc criterion. Finally, we show with experiments over the query log of a search engine the ffectiveness of the method.",
"title": ""
},
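A minimal sketch of the pipeline described above (cluster historical queries, then rank members of the matching cluster as suggestions) is given below; the toy query log, the TF-IDF representation and the similarity-times-popularity relevance score are assumptions, not the paper's exact formulation.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity
from collections import Counter

# Toy query log (in practice, taken from the search engine's historical log).
log = ["cheap flights paris", "paris flight deals", "hotels in paris",
       "python list sort", "sort a list python", "paris cheap hotels"]
freq = Counter(log)

vec = TfidfVectorizer()
X = vec.fit_transform(log)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

def suggest(query, top_k=3):
    q = vec.transform([query])
    sims = cosine_similarity(q, X).ravel()
    cluster = labels[sims.argmax()]                  # cluster of the closest past query
    candidates = [(sims[i] * freq[log[i]], log[i])   # relevance: similarity x popularity
                  for i in range(len(log))
                  if labels[i] == cluster and log[i] != query]
    return [past_q for _, past_q in sorted(candidates, reverse=True)[:top_k]]

print(suggest("flights to paris"))
```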
{
"docid": "1ec9b98f0f7509088e7af987af2f51a2",
"text": "In this paper, we describe an automated learning approach to text categorization based on perception learning and a new feature selection metric, called correlation coefficient. Our approach has been teated on the standard Reuters text categorization collection. Empirical results indicate that our approach outperforms the best published results on this % uters collection. In particular, our new feature selection method yields comiderable improvement. We also investigate the usability of our automated hxu-n~ approach by actually developing a system that categorizes texts into a treeof categories. We compare tbe accuracy of our learning approach to a rrddmsed, expert system ap preach that uses a text categorization shell built by Cams gie Group. Although our automated learning approach still gives a lower accuracy, by appropriately inmrporating a set of manually chosen worda to use as f~ures, the combined, semi-automated approach yields accuracy close to the * baaed approach.",
"title": ""
},
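As a rough illustration of correlation-coefficient feature selection followed by perceptron learning as described above, the sketch below scores each term with a signed chi-square-style statistic computed from per-class document counts, keeps the top-scoring terms, and trains a plain perceptron on them; the tiny corpus is invented and the exact metric in the paper may differ.

```python
import numpy as np

docs = ["wheat corn harvest", "corn export wheat", "stock market falls",
        "market trading stock", "wheat prices rise", "stock prices fall"]
labels = np.array([1, 1, 0, 0, 1, 0])          # 1 = grain, 0 = finance
vocab = sorted({w for d in docs for w in d.split()})
X = np.array([[1 if w in d.split() else 0 for w in vocab] for d in docs])
N = len(docs)

def correlation_coefficient(col, y):
    a = np.sum((col == 1) & (y == 1))          # term present, document in class
    b = np.sum((col == 1) & (y == 0))          # term present, document not in class
    c = np.sum((col == 0) & (y == 1))
    d = np.sum((col == 0) & (y == 0))
    denom = np.sqrt((a + b) * (c + d) * (a + c) * (b + d)) or 1.0
    return np.sqrt(N) * (a * d - b * c) / denom

scores = np.array([correlation_coefficient(X[:, j], labels) for j in range(len(vocab))])
keep = np.argsort(-np.abs(scores))[:4]         # keep the top-k features by |CC|
Xk = X[:, keep]

# Plain perceptron trained on the selected features only.
w, bias = np.zeros(Xk.shape[1]), 0.0
for _ in range(20):
    for x, y in zip(Xk, 2 * labels - 1):       # labels mapped to {-1, +1}
        if y * (x @ w + bias) <= 0:
            w, bias = w + y * x, bias + y

print([vocab[j] for j in keep], w, bias)
```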
{
"docid": "b1a08b10ea79a250a62030a2987b67a6",
"text": "Most text mining tasks, including clustering and topic detection, are based on statistical methods that treat text as bags of words. Semantics in the text is largely ignored in the mining process, and mining results often have low interpretability. One particular challenge faced by such approaches lies in short text understanding, as short texts lack enough content from which statistical conclusions can be drawn easily. In this paper, we improve text understanding by using a probabilistic knowledgebase that is as rich as our mental world in terms of the concepts (of worldly facts) it contains. We then develop a Bayesian inference mechanism to conceptualize words and short text. We conducted comprehensive experiments on conceptualizing textual terms, and clustering short pieces of text such as Twitter messages. Compared to purely statistical methods such as latent semantic topic modeling or methods that use existing knowledgebases (e.g., WordNet, Freebase and Wikipedia), our approach brings significant improvements in short text understanding as reflected by the clustering accuracy.",
"title": ""
},
{
"docid": "ef08ef786fd759b33a7d323c69be19db",
"text": "Language modeling approaches to information retrieval are attractive and promising because they connect the problem of retrieval with that of language model estimation, which has been studied extensively in other application areas such as speech recognition. The basic idea of these approaches is to estimate a language model for each document, and then rank documents by the likelihood of the query according to the estimated language model. A core problem in language model estimation is smoothing, which adjusts the maximum likelihood estimator so as to correct the inaccuracy due to data sparseness. In this paper, we study the problem of language model smoothing and its influence on retrieval performance. We examine the sensitivity of retrieval performance to the smoothing parameters and compare several popular smoothing methods on different test collection.",
"title": ""
}
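One smoothing method commonly compared in this line of work is Dirichlet prior smoothing, where p(w|d) = (c(w,d) + mu * p(w|C)) / (|d| + mu). The sketch below ranks documents by smoothed query log-likelihood; the toy collection and the value of mu are assumptions, and the paper compares several such methods rather than prescribing this one.

```python
import math
from collections import Counter

docs = {"d1": "the quick brown fox", "d2": "the lazy dog sleeps", "d3": "quick dog runs fast"}
tf = {d: Counter(text.split()) for d, text in docs.items()}
coll = Counter(w for c in tf.values() for w in c.elements())   # collection language model counts
coll_len = sum(coll.values())

def score(query, doc, mu=2000.0):
    """Log query likelihood with Dirichlet prior smoothing."""
    c, dlen = tf[doc], sum(tf[doc].values())
    s = 0.0
    for w in query.split():
        p_c = coll[w] / coll_len
        if p_c == 0.0:
            continue                      # word unseen in the whole collection: skip it
        s += math.log((c[w] + mu * p_c) / (dlen + mu))
    return s

ranked = sorted(docs, key=lambda d: score("quick dog", d), reverse=True)
print(ranked)
```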
] |
[
{
"docid": "c848f8194856335a19bc195a79942d48",
"text": "Managerial myopia in identifying competitive threats is a well-recognized phenomenon (Levitt, 1960; Zajac and Bazerman, 1991). Identifying such threats is particularly problematic, since they may arise from substitutability on the supply side as well as on the demand side. Managers who focus only on the product market arena in scanning their competitive environment may fail to notice threats that are developing due to the resources and latent capabilities of indirect or potential competitors. This paper brings together insights from the fields of strategic management and marketing to develop a simple but powerful set of tools for helping managers overcome this common problem. We present a two-stage framework for competitor identification and analysis that brings into consideration a broad range of competitors, including potential competitors, substitutors, and indirect competitors. Specifically we draw from Peteraf and Bergen’s (2001) framework for competitor identification to develop a hierarchy of competitor awareness. That is used, in combination with resource equivalence, to generate hypotheses on competitive analysis. This framework not only extends the ken of managers, but also facilitates an assessment of the strategic opportunities and threats that various competitors represent and allows managers to assess their significance in relative terms. Copyright # 2002 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "299deaffdd1a494fc754b9e940ad7f81",
"text": "In this work, we study an important problem: learning programs from input-output examples. We propose a novel method to learn a neural program operating a domain-specific non-differentiable machine, and demonstrate that this method can be applied to learn programs that are significantly more complex than the ones synthesized before: programming language parsers from input-output pairs without knowing the underlying grammar. The main challenge is to train the neural program without supervision on execution traces. To tackle it, we propose: (1) LL machines and neural programs operating them to effectively regularize the space of the learned programs; and (2) a two-phase reinforcement learning-based search technique to train the model. Our evaluation demonstrates that our approach can successfully learn to parse programs in both an imperative language and a functional language, and achieve 100% test accuracy, while existing approaches’ accuracies are almost 0%. This is the first successful demonstration of applying reinforcement learning to train a neural program operating a non-differentiable machine that can fully generalize to test sets on a non-trivial task.",
"title": ""
},
{
"docid": "2d0b0511f8f2ce41b7d2d60d57bc7236",
"text": "There is broad consensus that good outcome measures are needed to distinguish interventions that are effective from those that are not. This task requires standardized, patient-centered measures that can be administered at a low cost. We developed a questionnaire to assess short- and long-term patient-relevant outcomes following knee injury, based on the WOMAC Osteoarthritis Index, a literature review, an expert panel, and a pilot study. The Knee injury and Osteoarthritis Outcome Score (KOOS) is self-administered and assesses five outcomes: pain, symptoms, activities of daily living, sport and recreation function, and knee-related quality of life. In this clinical study, the KOOS proved reliable, responsive to surgery and physical therapy, and valid for patients undergoing anterior cruciate ligament reconstruction. The KOOS meets basic criteria of outcome measures and can be used to evaluate the course of knee injury and treatment outcome.",
"title": ""
},
{
"docid": "28ccab4b6b7c9c70bc07e4b3219d99d4",
"text": "The Wireless Networking After Next (WNaN) radio is a handheld-sized radio that delivers unicast, multicast, and disruption-tolerant traffic in networks of hundreds of radios. This paper addresses scalability of the network from the routing control traffic point of view. Starting from a basic version of an existing mobile ad-hoc network (MANET) proactive link-state routing protocol, we describe the enhancements that were necessary to provide good performance in these conditions. We focus on techniques to reduce control traffic while maintaining route integrity. We present simulation results from 250-node mobile networks demonstrating the effectiveness of the routing mechanisms. Any MANET with design parameters and constraints similar to the WNaN radio will benefit from these improvements.",
"title": ""
},
{
"docid": "9775092feda3a71c1563475bae464541",
"text": "Open Shortest Path First (OSPF) is the most commonly used intra-domain internet routing protocol. Traffic flow is routed along shortest paths, sptitting flow at nodes where several outgoing tinks are on shortest paths to the destination. The weights of the tinks, and thereby the shortest path routes, can be changed by the network operator. The weights could be set proportional to their physical distances, but often the main goal is to avoid congestion, i.e. overloading of links, and the standard heuristic rec. ommended by Cisco is to make the weight of a link inversely proportional to its capacity. Our starting point was a proposed AT&T WorldNet backbone with demands projected from previous measurements. The desire was to optimize the weight setting based on the projected demands. We showed that optimiz@ the weight settings for a given set of demands is NP-hard, so we resorted to a local search heuristic. Surprisingly it turned out that for the proposed AT&T WorldNet backbone, we found weight settiis that performed within a few percent from that of the optimal general routing where the flow for each demand is optimalty distributed over all paths between source and destination. This contrasts the common belief that OSPF routing leads to congestion and it shows that for the network and demand matrix studied we cannot get a substantially better load balancing by switching to the proposed more flexible Multi-protocol Label Switching (MPLS) technologies. Our techniques were atso tested on synthetic internetworks, based on a model of Zegura et al. (INFOCOM’96), for which we dld not always get quite as close to the optimal general routing. However, we compared witIs standard heuristics, such as weights inversely proportional to the capac.. ity or proportioml to the physical distances, and found that, for the same network and capacities, we could support a 50 Yo-1 10% increase in the demands. Our assumed demand matrix can also be seen as modeling service level agreements (SLAS) with customers, with demands representing guarantees of throughput for virtnal leased lines. Keywords— OSPF, MPLS, traffic engineering, local search, hashing ta. bles, dynamic shortest paths, mntti-cosnmodity network flows.",
"title": ""
},
{
"docid": "6eb7bb6f623475f7ca92025fd00dbc27",
"text": "Support vector machines (SVMs) have been recognized as one o f th most successful classification methods for many applications including text classific ation. Even though the learning ability and computational complexity of training in support vector machines may be independent of the dimension of the feature space, reducing computational com plexity is an essential issue to efficiently handle a large number of terms in practical applicat ions of text classification. In this paper, we adopt novel dimension reduction methods to reduce the dim nsion of the document vectors dramatically. We also introduce decision functions for the centroid-based classification algorithm and support vector classifiers to handle the classification p r blem where a document may belong to multiple classes. Our substantial experimental results sh ow t at with several dimension reduction methods that are designed particularly for clustered data, higher efficiency for both training and testing can be achieved without sacrificing prediction accu ra y of text classification even when the dimension of the input space is significantly reduced.",
"title": ""
},
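To make the centroid-based decision function mentioned above concrete, the sketch below computes one TF-IDF centroid per class and accepts every class whose cosine similarity clears a threshold, loosely mimicking a multi-label decision; the corpus, the threshold and the absence of any dimension reduction step are simplifications of the paper's setup.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.preprocessing import normalize

train = ["wheat corn harvest", "corn export wheat", "stock market falls", "market trading stock"]
y = np.array([0, 0, 1, 1])                    # 0 = grain, 1 = finance

vec = TfidfVectorizer()
X = normalize(vec.fit_transform(train).toarray())

# One centroid per class, re-normalised to unit length.
centroids = normalize(np.vstack([X[y == c].mean(axis=0) for c in np.unique(y)]))

def classify(text, threshold=0.2):
    v = normalize(vec.transform([text]).toarray())[0]
    sims = centroids @ v                      # cosine similarity to each class centroid
    # Multi-label style decision: accept every class whose similarity clears the threshold,
    # falling back to the single best class if none does.
    return [int(c) for c in np.flatnonzero(sims >= threshold)] or [int(sims.argmax())]

print(classify("wheat prices and stock market"))
```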
{
"docid": "69d296d1302d9e0acd7fb576f551118d",
"text": "Event detection is a research area that attracted attention during the last years due to the widespread availability of social media data. The problem of event detection has been examined in multiple social media sources like Twitter, Flickr, YouTube and Facebook. The task comprises many challenges including the processing of large volumes of data and high levels of noise. In this article, we present a wide range of event detection algorithms, architectures and evaluation methodologies. In addition, we extensively discuss on available datasets, potential applications and open research issues. The main objective is to provide a compact representation of the recent developments in the field and aid the reader in understanding the main challenges tackled so far as well as identifying interesting future research directions.",
"title": ""
},
{
"docid": "d253029f47fe3afb6465a71e966fdbd5",
"text": "With the development of the social economy, more and more appliances have been presented in a house. It comes out a problem that how to manage and control these increasing various appliances efficiently and conveniently so as to achieve more comfortable, security and healthy space at home. In this paper, a smart control system base on the technologies of internet of things has been proposed to solve the above problem. The smart home control system uses a smart central controller to set up a radio frequency 433 MHz wireless sensor and actuator network (WSAN). A series of control modules, such as switch modules, radio frequency control modules, have been developed in the WSAN to control directly all kinds of home appliances. Application servers, client computers, tablets or smart phones can communicate with the smart central controller through a wireless router via a Wi-Fi interface. Since it has WSAN as the lower control layer, a appliance can be added into or withdrawn from the control system very easily. The smart control system embraces the functions of appliance monitor, control and management, home security, energy statistics and analysis.",
"title": ""
},
{
"docid": "58a9ef3dea7788c66942d7cb11dcd8fd",
"text": "Frontalis suspension is a commonly used surgery that is indicated in patients with blepharoptosis and poor levator muscle function. The surgery is based on connecting the tarsal plate to the eyebrow with various sling materials. Although fascia lata is most commonly used due to its long-lasting effect and low rate of complications, it has several limitations such as difficulty of harvesting, insufficient amounts in small children, and postoperative donor-site complications. Other sling materials have overcome these limitations, but on the other hand, have been reported to be associated with other complications. In this review we focus on the different techniques and materials which are used in frontalis suspension surgeries, as well as the advantage and disadvantage of these techniques.",
"title": ""
},
{
"docid": "cce477dd5efd3ecbabc57dfb237b72c9",
"text": "In this paper we present BabelDomains, a unified resource which provides lexical items with information about domains of knowledge. We propose an automatic method that uses knowledge from various lexical resources, exploiting both distributional and graph-based clues, to accurately propagate domain information. We evaluate our methodology intrinsically on two lexical resources (WordNet and BabelNet), achieving a precision over 80% in both cases. Finally, we show the potential of BabelDomains in a supervised learning setting, clustering training data by domain for hypernym discovery.",
"title": ""
},
{
"docid": "427b3cae516025381086021bc66f834e",
"text": "PhishGuru is an embedded training system that teaches users to avoid falling for phishing attacks by delivering a training message when the user clicks on the URL in a simulated phishing email. In previous lab and real-world experiments, we validated the effectiveness of this approach. Here, we extend our previous work with a 515-participant, real-world study in which we focus on long-term retention and the effect of two training messages. We also investigate demographic factors that influence training and general phishing susceptibility. Results of this study show that (1) users trained with PhishGuru retain knowledge even after 28 days; (2) adding a second training message to reinforce the original training decreases the likelihood of people giving information to phishing websites; and (3) training does not decrease users' willingness to click on links in legitimate messages. We found no significant difference between males and females in the tendency to fall for phishing emails both before and after the training. We found that participants in the 18--25 age group were consistently more vulnerable to phishing attacks on all days of the study than older participants. Finally, our exit survey results indicate that most participants enjoyed receiving training during their normal use of email.",
"title": ""
},
{
"docid": "5bf172cfc7d7de0c82707889cf722ab2",
"text": "The concept of a decentralized ledger usually implies that each node of a blockchain network stores the entire blockchain. However, in the case of popular blockchains, which each weigh several hundreds of GB, the large amount of data to be stored can incite new or low-capacity nodes to run lightweight clients. Such nodes do not participate to the global storage effort and can result in a centralization of the blockchain by very few nodes, which is contrary to the basic concepts of a blockchain. To avoid this problem, we propose new low storage nodes that store a reduced amount of data generated from the blockchain by using erasure codes. The properties of this technique ensure that any block of the chain can be easily rebuilt from a small number of such nodes. This system should encourage low storage nodes to contribute to the storage of the blockchain and to maintain decentralization despite of a globally increasing size of the blockchain. This system paves the way to new types of blockchains which would only be managed by low capacity nodes.",
"title": ""
},
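To illustrate how low storage nodes could keep coded fragments instead of full blocks, the sketch below uses the simplest possible erasure code, a single XOR parity over k fragments, which allows any one lost fragment to be rebuilt; practical schemes of the kind envisioned above would use stronger codes (e.g. Reed-Solomon), and the block bytes here are just a stand-in.

```python
def split_and_encode(block: bytes, k: int = 4):
    """Split a block into k equal-size fragments plus one XOR parity fragment."""
    size = -(-len(block) // k)                      # ceiling division
    frags = [block[i * size:(i + 1) * size].ljust(size, b"\0") for i in range(k)]
    parity = bytearray(size)
    for f in frags:
        parity = bytearray(p ^ b for p, b in zip(parity, f))
    return frags, bytes(parity)

def recover(frags, parity, lost_index):
    """Rebuild one lost fragment by XOR-ing the parity with the surviving fragments."""
    out = bytearray(parity)
    for i, f in enumerate(frags):
        if i != lost_index:
            out = bytearray(o ^ b for o, b in zip(out, f))
    return bytes(out)

block = b"block #1234: serialized transactions ..."   # stand-in for real block data
frags, parity = split_and_encode(block, k=4)
assert recover(frags, parity, lost_index=2) == frags[2]
```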
{
"docid": "4f5f128195592fe881269f54fd3424e7",
"text": "In this research, a new method is proposed for the optimization of warship spare parts stock with genetic algorithm. Warships should fulfill her duties in all circumstances. Considering the warships have more than a hundred thousand unique parts, it is a very hard problem to decide which spare parts should be stocked at warehouse aiming to use in case of failure. In this study, genetic algorithm that is a heuristic optimization method is used to solve this problem. The demand quantity, the criticality and the cost of parts is used for optimization. A genetic algorithm with very long chromosome is used, i.e. over 1000 genes in one chromosome. The outputs of the method is analyzed and compared with the Price Sensitive 0.5 FLSIP+ model, which is widely used over navies, and came to a conclusion that the proposed method is better.",
"title": ""
},
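A toy version of the genetic algorithm described above is sketched below: a binary chromosome marks which parts to stock, and the fitness rewards covering high-demand, high-criticality parts while penalizing budget overruns. The part data, budget, penalty weight and GA operators are invented for illustration and are not the study's actual inputs; the paper's operator choices may differ.

```python
import random

random.seed(0)
N_PARTS, BUDGET = 1000, 50_000.0
parts = [{"demand": random.randint(0, 20),
          "criticality": random.choice([1, 2, 3]),
          "cost": random.uniform(10, 500)} for _ in range(N_PARTS)]

def fitness(chrom):
    value = sum(p["demand"] * p["criticality"] for g, p in zip(chrom, parts) if g)
    cost = sum(p["cost"] for g, p in zip(chrom, parts) if g)
    return value - 0.1 * max(0.0, cost - BUDGET)     # soft penalty for exceeding the budget

def crossover(a, b):
    cut = random.randrange(1, N_PARTS)               # single-point crossover
    return a[:cut] + b[cut:]

def mutate(chrom, rate=0.002):
    return [1 - g if random.random() < rate else g for g in chrom]

pop = [[random.randint(0, 1) for _ in range(N_PARTS)] for _ in range(40)]
for _ in range(100):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                               # simple truncation selection
    pop = parents + [mutate(crossover(random.choice(parents), random.choice(parents)))
                     for _ in range(30)]

best = max(pop, key=fitness)
print("stocked parts:", sum(best), "fitness:", round(fitness(best), 1))
```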
{
"docid": "00f106ff157e515ed8fde53fdaf1491e",
"text": "In this paper we present a technique to train neural network models on small amounts of data. Current methods for training neural networks on small amounts of rich data typically rely on strategies such as fine-tuning a pre-trained neural network or the use of domain-specific hand-engineered features. Here we take the approach of treating network layers, or entire networks, as modules and combine pre-trained modules with untrained modules, to learn the shift in distributions between data sets. The central impact of using a modular approach comes from adding new representations to a network, as opposed to replacing representations via fine-tuning. Using this technique, we are able surpass results using standard fine-tuning transfer learning approaches, and we are also able to significantly increase performance over such approaches when using smaller amounts of data.",
"title": ""
},
{
"docid": "b519ac8572520bfcc38b8974119d9eec",
"text": "Nastaliq is a calligraphic, beautiful and more aesthetic style of writing Urdu, the national language of Pakistan, also used to read and write in India and other countries of the region.\n OCRs developed for many world languages are already under efficient use but none exist for Nastaliq -- a calligraphic adaptation of the Arabic scrip which is inherently cursive in nature.\n In Nastaliq, word and character overlapping makes optical recognition more complex.\n This paper presents the ongoing research on Nastaliq Optical Character Recognition (NOCR). In this research, we have proposed a novel segmentation-free technique for the design and implementation of a Nastaliq OCR based on cross-correlation.",
"title": ""
},
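Cross-correlation matching of the sort mentioned above can be illustrated with plain zero-mean normalized cross-correlation between a ligature template and a line image; the random arrays below merely stand in for real Nastaliq glyph images, and this is not the NOCR system itself.

```python
import numpy as np

def ncc(patch, template):
    """Zero-mean normalized cross-correlation between two equally sized images."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p ** 2).sum() * (t ** 2).sum())
    return float((p * t).sum() / denom) if denom > 0 else 0.0

def best_match(line_img, template):
    """Slide the template over the line image and return the best NCC score and position."""
    H, W = template.shape
    h, w = line_img.shape
    best = (-1.0, (0, 0))
    for y in range(h - H + 1):
        for x in range(w - W + 1):
            score = ncc(line_img[y:y + H, x:x + W], template)
            best = max(best, (score, (y, x)))
    return best

rng = np.random.default_rng(0)
line = rng.random((40, 200))            # stand-in for a scanned text line
tmpl = line[10:30, 50:90].copy()        # pretend this is a stored ligature template
print(best_match(line, tmpl))           # should locate the template near (10, 50)
```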
{
"docid": "20ecae219ecf21429fb7c2697339fe50",
"text": "Massively multiplayer game holds a huge market in the digital entertainment industry. Companies invest heavily in the game and graphics development since a successful online game can attract million of users, and this translates to a huge investment payoff. However, multiplayer online game is also subjected to various forms of hacks and cheats. Hackers can alter the graphic rendering to reveal information otherwise be hidden in a normal game, or cheaters can use software robot to play the game automatically and gain an unfair advantage. Currently, some popular online games release software patches or incorporate anti-cheating software to detect known cheats. This not only creates deployment difficulty but new cheats will still be able to breach the normal game logic until software patches are available. Moreover, the anti-cheating software themselves are also vulnerable to hacks. In this paper, we propose a scalable and efficient method to detect whether a player is cheating or not. The methodology is based on the dynamic Bayesian network approach. The detection framework relies solely on the game states and runs in the game server only. Therefore it is invulnerable to hacks and it is a much more deployable solution. To demonstrate the effectiveness of the propose method, we implement a prototype multiplayer game system and to detect whether a player is using the “aiming robot” for cheating or not. Experiments show that not only we can effectively detect cheaters, but the false positive rate is extremely low. We believe the proposed methodology and the prototype system provide a first step toward a systematic study of cheating detection and security research in the area of online multiplayer games.",
"title": ""
},
{
"docid": "5b15a833cb6b4d9dd56dea59edb02cf8",
"text": "BACKGROUND\nQuantification of the biomechanical properties of each individual medial patellar ligament will facilitate an understanding of injury patterns and enhance anatomic reconstruction techniques by improving the selection of grafts possessing appropriate biomechanical properties for each ligament.\n\n\nPURPOSE\nTo determine the ultimate failure load, stiffness, and mechanism of failure of the medial patellofemoral ligament (MPFL), medial patellotibial ligament (MPTL), and medial patellomeniscal ligament (MPML) to assist with selection of graft tissue for anatomic reconstructions.\n\n\nSTUDY DESIGN\nDescriptive laboratory study.\n\n\nMETHODS\nTwenty-two nonpaired, fresh-frozen cadaveric knees were dissected free of all soft tissue structures except for the MPFL, MPTL, and MPML. Two specimens were ultimately excluded because their medial structure fibers were lacerated during dissection. The patella was obliquely cut to test the MPFL and the MPTL-MPML complex separately. To ensure that the common patellar insertion of the MPTL and MPML was not compromised during testing, only one each of the MPML and MPTL were tested per specimen (n = 10 each). Specimens were secured in a dynamic tensile testing machine, and the ultimate load, stiffness, and mechanism of failure of each ligament (MPFL = 20, MPML = 10, and MPTL = 10) were recorded.\n\n\nRESULTS\nThe mean ± SD ultimate load of the MPFL (178 ± 46 N) was not significantly greater than that of the MPTL (147 ± 80 N; P = .706) but was significantly greater than that of the MPML (105 ± 62 N; P = .001). The mean ultimate load of the MPTL was not significantly different from that of the MPML ( P = .210). Of the 20 MPFLs tested, 16 failed by midsubstance rupture and 4 by bony avulsion on the femur. Of the 10 MPTLs tested, 9 failed by midsubstance rupture and 1 by bony avulsion on the patella. Finally, of the 10 MPMLs tested, all 10 failed by midsubstance rupture. No significant difference was found in mean stiffness between the MPFL (23 ± 6 N/mm2) and the MPTL (31 ± 21 N/mm2; P = .169), but a significant difference was found between the MPFL and the MPML (14 ± 8 N/mm2; P = .003) and between the MPTL and MPML ( P = .028).\n\n\nCONCLUSION\nThe MPFL and MPTL had comparable ultimate loads and stiffness, while the MPML had lower failure loads and stiffness. Midsubstance failure was the most common type of failure; therefore, reconstruction grafts should meet or exceed the values reported herein.\n\n\nCLINICAL RELEVANCE\nFor an anatomic medial-sided knee reconstruction, the individual biomechanical contributions of the medial patellar ligamentous structures (MPFL, MPTL, and MPML) need to be characterized to facilitate an optimal reconstruction design.",
"title": ""
},
{
"docid": "ca599d7b637d25835d881c6803a9e064",
"text": "Accumulating research shows that prenatal exposure to maternal stress increases the risk for behavioral and mental health problems later in life. This review systematically analyzes the available human studies to identify harmful stressors, vulnerable periods during pregnancy, specificities in the outcome and biological correlates of the relation between maternal stress and offspring outcome. Effects of maternal stress on offspring neurodevelopment, cognitive development, negative affectivity, difficult temperament and psychiatric disorders are shown in numerous epidemiological and case-control studies. Offspring of both sexes are susceptible to prenatal stress but effects differ. There is not any specific vulnerable period of gestation; prenatal stress effects vary for different gestational ages possibly depending on the developmental stage of specific brain areas and circuits, stress system and immune system. Biological correlates in the prenatally stressed offspring are: aberrations in neurodevelopment, neurocognitive function, cerebral processing, functional and structural brain connectivity involving amygdalae and (pre)frontal cortex, changes in hypothalamo-pituitary-adrenal (HPA)-axis and autonomous nervous system.",
"title": ""
},
{
"docid": "c7d11801e1c3a6bd7e32b3ab7ea9767a",
"text": "With the increasing threat of sophisticated attacks on critical infrastructures, it is vital that forensic investigations take place immediately following a security incident. This paper presents an existing SCADA forensic process model and proposes a structured SCADA forensic process model to carry out a forensic investigations. A discussion on the limitations of using traditional forensic investigative processes and the challenges facing forensic investigators. Furthermore, flaws of existing research into providing forensic capability for SCADA systems are examined in detail. The study concludes with an experimentation of a proposed SCADA forensic capability architecture on the Siemens S7 PLC. Modifications to the memory addresses are monitored and recorded for forensic evidence. The collected forensic evidence will be used to aid the reconstruction of a timeline of events, in addition to other collected forensic evidence such as network packet captures.",
"title": ""
},
{
"docid": "16bfea9d5a3f736fe39fdd1f6725b642",
"text": "Tilting and motion are widely used as interaction modalities in smart objects such as wearables and smart phones (e.g., to detect posture or shaking). They are often sensed with accelerometers. In this paper, we propose to embed liquids into 3D printed objects while printing to sense various tilting and motion interactions via capacitive sensing. This method reduces the assembly effort after printing and is a low-cost and easy-to-apply way of extending the input capabilities of 3D printed objects. We contribute two liquid sensing patterns and a practical printing process using a standard dual-extrusion 3D printer and commercially available materials. We validate the method by a series of evaluations and provide a set of interactive example applications.",
"title": ""
}
] |
scidocsrr
|
54dc149c672835610e1c76acf600c99a
|
Scope of Artificial Intelligence in Law
|
[
{
"docid": "f7a42937973a45ed4fb5d23e3be316a9",
"text": "Domain specific information retrieval process has been a prominent and ongoing research in the field of natural language processing. Many researchers have incorporated different techniques to overcome the technical and domain specificity and provide a mature model for various domains of interest. The main bottleneck in these studies is the heavy coupling of domain experts, that makes the entire process to be time consuming and cumbersome. In this study, we have developed three novel models which are compared against a golden standard generated via the on line repositories provided, specifically for the legal domain. The three different models incorporated vector space representations of the legal domain, where document vector generation was done in two different mechanisms and as an ensemble of the above two. This study contains the research being carried out in the process of representing legal case documents into different vector spaces, whilst incorporating semantic word measures and natural language processing techniques. The ensemble model built in this study, shows a significantly higher accuracy level, which indeed proves the need for incorporation of domain specific semantic similarity measures into the information retrieval process. This study also shows, the impact of varying distribution of the word similarity measures, against varying document vector dimensions, which can lead to improvements in the process of legal information retrieval. keywords: Document Embedding, Deep Learning, Information Retrieval",
"title": ""
}
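As an illustrative sketch (not the authors' three models), the snippet below combines two document-vector mechanisms, raw TF-IDF and a dense representation from truncated SVD, into a weighted ensemble similarity for retrieving related legal cases; the corpus, the SVD dimensionality and the mixing weight alpha are assumptions.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

cases = ["negligence claim regarding road maintenance",
         "contract breach over delayed goods delivery",
         "road accident liability and negligence",
         "intellectual property dispute over trademark use"]

tfidf = TfidfVectorizer().fit(cases)
X_sparse = tfidf.transform(cases)
svd = TruncatedSVD(n_components=3, random_state=0).fit(X_sparse)
X_dense = svd.transform(X_sparse)                 # low-dimensional "semantic" vectors

def retrieve(query, alpha=0.5):
    q_sparse = tfidf.transform([query])
    q_dense = svd.transform(q_sparse)
    s1 = cosine_similarity(q_sparse, X_sparse).ravel()
    s2 = cosine_similarity(q_dense, X_dense).ravel()
    scores = alpha * s1 + (1 - alpha) * s2        # simple ensemble of the two vector spaces
    return sorted(zip(scores, cases), reverse=True)

for score, case in retrieve("negligence on a public road"):
    print(round(float(score), 3), case)
```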
] |
[
{
"docid": "b2f0b5ef76d9e98e93e6c5ed64642584",
"text": "The yeast and fungal prions determine heritable and infectious traits, and are thus genes composed of protein. Most prions are inactive forms of a normal protein as it forms a self-propagating filamentous β-sheet-rich polymer structure called amyloid. Remarkably, a single prion protein sequence can form two or more faithfully inherited prion variants, in effect alleles of these genes. What protein structure explains this protein-based inheritance? Using solid-state nuclear magnetic resonance, we showed that the infectious amyloids of the prion domains of Ure2p, Sup35p and Rnq1p have an in-register parallel architecture. This structure explains how the amyloid filament ends can template the structure of a new protein as it joins the filament. The yeast prions [PSI(+)] and [URE3] are not found in wild strains, indicating that they are a disadvantage to the cell. Moreover, the prion domains of Ure2p and Sup35p have functions unrelated to prion formation, indicating that these domains are not present for the purpose of forming prions. Indeed, prion-forming ability is not conserved, even within Saccharomyces cerevisiae, suggesting that the rare formation of prions is a disease. The prion domain sequences generally vary more rapidly in evolution than does the remainder of the molecule, producing a barrier to prion transmission, perhaps selected in evolution by this protection.",
"title": ""
},
{
"docid": "35034e20bc7925286fcbb7b250681717",
"text": "Today, as in the past, within a country at a given time those with higher incomes are, on average, happier. However, raising the incomes of all does not increase the happiness of all. This is because the material norms on which judgments of well-being are based increase in the same proportion as the actual income of the society. These conclusions are suggested by data on reported happiness, material norms, and income collected in surveys in a number of countries over the past half century.",
"title": ""
},
{
"docid": "8df74743ab51e92c43ecf272485470c6",
"text": "We propose a new detection method to predict a vehicle's trajectory and use it for detecting lane changes of surrounding vehicles. According to the previous research, more than 90% of the car crashes are caused by human errors, and lane changes are the main factor. Therefore, if a lane change can be detected before a vehicle crosses the centerline, accident rates will decrease. Previously reported detection methods have the problem of frequent false alarms caused by zigzag driving that can result in user distrust in driving safety support systems. Most cases of zigzag driving are caused by the abortion of a lane change due to the presence of adjacent vehicles on the next lane. Our approach reduces false alarms by considering the possibility of a crash with adjacent vehicles by applying trajectory prediction when the target vehicle attempts to change a lane, and it reflects the result of lane-change detection. We used a traffic dataset with more than 500 lane changes and confirmed that the proposed method can considerably improve the detection performance.",
"title": ""
},
{
"docid": "d06d09c38988dffce44068986f912c6d",
"text": "Depression, the most prevalent mental illness, is underdiagnosed and undertreated, highlighting the need to extend the scope of current screening methods. Here, we use language from Facebook posts of consenting individuals to predict depression recorded in electronic medical records. We accessed the history of Facebook statuses posted by 683 patients visiting a large urban academic emergency department, 114 of whom had a diagnosis of depression in their medical records. Using only the language preceding their first documentation of a diagnosis of depression, we could identify depressed patients with fair accuracy [area under the curve (AUC) = 0.69], approximately matching the accuracy of screening surveys benchmarked against medical records. Restricting Facebook data to only the 6 months immediately preceding the first documented diagnosis of depression yielded a higher prediction accuracy (AUC = 0.72) for those users who had sufficient Facebook data. Significant prediction of future depression status was possible as far as 3 months before its first documentation. We found that language predictors of depression include emotional (sadness), interpersonal (loneliness, hostility), and cognitive (preoccupation with the self, rumination) processes. Unobtrusive depression assessment through social media of consenting individuals may become feasible as a scalable complement to existing screening and monitoring procedures.",
"title": ""
},
{
"docid": "4545a74d04769f6b251da9da7b357d09",
"text": "Despite a long history of research and debate, there is still no standard definition of intelligence. This has lead some to believe that intelligence may be approximately described, but cannot be fully defined. We believe that this degree of pessimism is too strong. Although there is no single standard definition, if one surveys the many definitions that have been proposed, strong similarities between many of the definitions quickly become obvious. In many cases different definitions, suitably interpreted, actually say the same thing but in different words. This observation lead us to believe that a single general and encompassing definition for arbitrary systems was possible. Indeed we have constructed a formal definition of intelligence, called universal intelligence [21], which has strong connections to the theory of optimal learning agents [19]. Rather than exploring very general formal definitions of intelligence, here we will instead take the opportunity to present the many informal definitions that we have collected over the years. Naturally, compiling a complete list would be impossible as many definitions of intelligence are buried deep inside articles and books. Nevertheless, the 70 odd definitions presented below are, to the best of our knowledge, the largest and most well referenced collection there is. We continue to add to this collect as we discover further definitions, and keep the most up to date version of the collection available online [22]. If you know of additional definitions that we could add, please send us an email.",
"title": ""
},
{
"docid": "b08f67bc9b84088f8298b35e50d0b9c5",
"text": "This review examines different nutritional guidelines, some case studies, and provides insights and discrepancies, in the regulatory framework of Food Safety Management of some of the world's economies. There are thousands of fermented foods and beverages, although the intention was not to review them but check their traditional and cultural value, and if they are still lacking to be classed as a category on different national food guides. For understanding the inconsistencies in claims of concerning fermented foods among various regulatory systems, each legal system should be considered unique. Fermented foods and beverages have long been a part of the human diet, and with further supplementation of probiotic microbes, in some cases, they offer nutritional and health attributes worthy of recommendation of regular consumption. Despite the impact of fermented foods and beverages on gastro-intestinal wellbeing and diseases, their many health benefits or recommended consumption has not been widely translated to global inclusion in world food guidelines. In general, the approach of the legal systems is broadly consistent and their structures may be presented under different formats. African traditional fermented products are briefly mentioned enhancing some recorded adverse effects. Knowing the general benefits of traditional and supplemented fermented foods, they should be a daily item on most national food guides.",
"title": ""
},
{
"docid": "407574abdcba82be2e9aea5a9b38c0a3",
"text": "In this paper, we investigate resource block (RB) assignment and modulation-and-coding scheme (MCS) selection to maximize downlink throughput of long-term evolution (LTE) systems, where all RB's assigned to the same user in any given transmission time interval (TTI) must use the same MCS. We develop several effective MCS selection schemes by using the effective packet-level SINR based on exponential effective SINR mapping (EESM), arithmetic mean, geometric mean, and harmonic mean. From both analysis and simulation results, we show that the system throughput of all the proposed schemes are better than that of the scheme in [7]. Furthermore, the MCS selection scheme using harmonic mean based effective packet-level SINR almost reaches the optimal performance and significantly outperforms the other proposed schemes.",
"title": ""
},
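The effective SINR mappings compared above can be written compactly; EESM, for example, is SINR_eff = -beta * ln((1/N) * sum_n exp(-SINR_n / beta)). The sketch below implements the four packet-level aggregations and a threshold-based MCS pick; the beta value and the MCS table are illustrative and not taken from the paper.

```python
import numpy as np

def effective_sinr(sinr_linear, method="eesm", beta=2.0):
    s = np.asarray(sinr_linear, dtype=float)     # per-RB SINR in linear scale
    if method == "eesm":
        return -beta * np.log(np.mean(np.exp(-s / beta)))
    if method == "arithmetic":
        return s.mean()
    if method == "geometric":
        return np.exp(np.log(s).mean())
    if method == "harmonic":
        return len(s) / np.sum(1.0 / s)
    raise ValueError(method)

# Illustrative MCS table: (required effective SINR in dB, spectral efficiency in bits/s/Hz).
MCS = [(1.0, 0.5), (4.0, 1.0), (7.0, 1.5), (10.0, 2.0), (13.0, 3.0)]

def pick_mcs(sinr_db_per_rb, method="harmonic"):
    eff = effective_sinr(10 ** (np.asarray(sinr_db_per_rb) / 10.0), method)
    eff_db = 10 * np.log10(eff)
    feasible = [se for thr, se in MCS if eff_db >= thr]
    return max(feasible) if feasible else None   # highest rate the effective SINR supports

print(pick_mcs([6.0, 9.0, 12.0, 3.0]))
```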
{
"docid": "189d370fc5c12157b1fffa6196195798",
"text": "In this report a number of algorithms for optimal control of a double inverted pendulum on a cart (DIPC) are investigated and compared. Modeling is based on Euler-Lagrange equations derived by specifying a Lagrangian, difference between kinetic and potential energy of the DIPC system. This results in a system of nonlinear differential equations consisting of three 2-nd order equations. This system of equations is then transformed into a usual form of six 1-st order ordinary differential equations (ODE) for control design purposes. Control of a DIPC poses a certain challenge, since unlike a robot, the system is underactuated: one controlling force per three degrees of freedom (DOF). In this report, problem of optimal control minimizing a quadratic cost functional is addressed. Several approaches are tested: linear quadratic regulator (LQR), state-dependent Riccati equation (SDRE), optimal neural network (NN) control, and combinations of the NN with the LQR and the SDRE. Simulations reveal superior performance of the SDRE over the LQR and improvements provided by the NN, which compensates for model inadequacies in the LQR. Limited capabilities of the NN to approximate functions over the wide range of arguments prevent it from significantly improving the SDRE performance, providing only marginal benefits at larger pendulum deflections.",
"title": ""
},
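For the LQR baseline mentioned above, the feedback gain follows from the continuous-time algebraic Riccati equation, K = R^{-1} B^T P. The sketch below computes K for a generic linearized (A, B, Q, R) set; the matrices are placeholders rather than the actual DIPC linearization, which would come from the Euler-Lagrange model in the report.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(A, B, Q, R):
    """State-feedback gain K (u = -K x) minimizing the quadratic cost integral."""
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)

# Placeholder 4-state linearization (NOT the real double-pendulum-on-a-cart matrices).
A = np.array([[0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [-2, -3, -1, -4]], dtype=float)
B = np.array([[0], [0], [0], [1]], dtype=float)
Q = np.diag([10.0, 1.0, 10.0, 1.0])     # penalize position/angle errors more heavily
R = np.array([[0.1]])

K = lqr_gain(A, B, Q, R)
print("LQR gain:", K)
# Closed-loop eigenvalues should all have negative real parts:
print(np.linalg.eigvals(A - B @ K))
```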
{
"docid": "09211bc28dea118cc114b261d13f098e",
"text": "IEEE 802.11 Wireless LAN (WLAN) has gained popularity. WLANs use different security protocols like WEP, WPA and WPA2. The newly ratified WPA2 provides the highest level of security for data frames. However WPA2 does not really mention about protection of management frames. In other words IEEE 802.11 management frames are always sent in an unsecured manner. In fact the only security mechanism for management frames is CRC-32 bit algorithm. While useful for unintentional error detection, CRC-32 bit is not safe to completely verify data integrity in the face of intentional modifications. Therefore an unsecured management frame allows an attacker to start different kinds of attack. This paper proposes a new model to address these security problems in management frames. First we summarize security threats on management frames and their influences in WLANs. Then based on these security threats, we propose a new per frames security model to provide efficient security for these frames. Finally simulation methodology is presented and results are provided. Mathematical probabilities are discussed to demonstrate that the proposed security model is robust and efficient to secure management frames.",
"title": ""
},
{
"docid": "c86c10428bfca028611a5e989ca31d3f",
"text": "In the study, we discussed the ARCH/GARCH family models and enhanced them with artificial neural networks to evaluate the volatility of daily returns for 23.10.1987–22.02.2008 period in Istanbul Stock Exchange. We proposed ANN-APGARCH model to increase the forecasting performance of APGARCH model. The ANN-extended versions of the obtained GARCH models improved forecast results. It is noteworthy that daily returns in the ISE show strong volatility clustering, asymmetry and nonlinearity characteristics. 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
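The GARCH family referenced above is built on the GARCH(1,1) recursion sigma2_t = omega + alpha * eps2_{t-1} + beta * sigma2_{t-1}; hybrid ANN-GARCH variants then feed such conditional variances (among other inputs) into a network. The sketch below shows only the plain recursion on simulated returns with assumed parameters, not the paper's ANN-APGARCH model.

```python
import numpy as np

def garch11_variance(returns, omega=1e-6, alpha=0.08, beta=0.90):
    """Conditional variance path of a GARCH(1,1) model for a return series."""
    eps = returns - returns.mean()
    sigma2 = np.empty_like(eps)
    sigma2[0] = eps.var()                         # initialize at the sample variance
    for t in range(1, len(eps)):
        sigma2[t] = omega + alpha * eps[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

rng = np.random.default_rng(1)
returns = 0.01 * rng.standard_normal(1000)        # stand-in for daily ISE returns
sigma2 = garch11_variance(returns)
print("annualized vol estimate (last day): %.2f%%" % (100 * np.sqrt(252 * sigma2[-1])))
```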
{
"docid": "de90ba5b4c869f425233933629236cec",
"text": "Generating a description of an image is called image captioning. Image captioning requires recognizing the important objects, their attributes, and their relationships in an image. It also needs to generate syntactically and semantically correct sentences. Deep-learning-based techniques are capable of handling the complexities and challenges of image captioning. In this survey article, we aim to present a comprehensive review of existing deep-learning-based image captioning techniques. We discuss the foundation of the techniques to analyze their performances, strengths, and limitations. We also discuss the datasets and the evaluation metrics popularly used in deep-learning-based automatic image captioning.",
"title": ""
},
{
"docid": "2e3dcd4ba0dbcabb86c8716d73760028",
"text": "Power transformers are one of the most critical devices in power systems. It is responsible for voltage conversion, power distribution and transmission, and provides power services. Therefore, the normal operation of the transformer is an important guarantee for the safe, reliable, high quality and economical operation of the power system. It is necessary to minimize and reduce the occurrence of transformer failure and accident. The on-line monitoring and fault diagnosis of power equipment is not only the prerequisite for realizing the predictive maintenance of equipment, but also the key to ensure the safe operation of equipment. Although the analysis of dissolved gas in transformer oil is an important means of transformer insulation monitoring, the coexistence of two kinds of faults, such as discharge and overheat, can lead to a lower positive rate of diagnosis. In this paper, we use the basic particle swarm optimization algorithm to optimize the BP neural network DGA method, select the typical oil in the oil as a neural network input, and then use the trained particle swarm algorithm to optimize the neural network for transformer fault type diagnosis. The results show that the method has a good classification effect, which can solve the problem of difficult to distinguish the faults of the transformer when the discharge and overheat coexist. The positive rate of fault diagnosis is high.",
"title": ""
},
{
"docid": "48d2f38037b0cab83ca4d57bf19ba903",
"text": "The term sentiment analysis can be used to refer to many different, but related, problems. Most commonly, it is used to refer to the task of automatically determining the valence or polarity of a piece of text, whether it is positive, negative, or neutral. However, more generally, it refers to determining one’s attitude towards a particular target or topic. Here, attitude can mean an evaluative judgment, such as positive or negative, or an emotional or affectual attitude such as frustration, joy, anger, sadness, excitement, and so on. Note that some authors consider feelings to be the general category that includes attitude, emotions, moods, and other affectual states. In this chapter, we use ‘sentiment analysis’ to refer to the task of automatically determining feelings from text, in other words, automatically determining valence, emotions, and other affectual states from text. Osgood, Suci, and Tannenbaum (1957) showed that the three most prominent dimensions of meaning are evaluation (good–bad), potency (strong–weak), and activity (active– passive). Evaluativeness is roughly the same dimension as valence (positive–negative). Russell (1980) developed a circumplex model of affect characterized by two primary dimensions: valence and arousal (degree of reactivity to stimulus). Thus, it is not surprising that large amounts of work in sentiment analysis are focused on determining valence. (See survey articles by Pang and Lee (2008), Liu and Zhang (2012), and Liu (2015).) However, there is some work on automatically detecting arousal (Thelwall, Buckley, Paltoglou, Cai, & Kappas, 2010; Kiritchenko, Zhu, & Mohammad, 2014b; Mohammad, Kiritchenko, & Zhu, 2013a) and growing interest in detecting emotions such as anger, frustration, sadness, and optimism in text (Mohammad, 2012; Bellegarda, 2010; Tokuhisa, Inui, & Matsumoto, 2008; Strapparava & Mihalcea, 2007; John, Boucouvalas, & Xu, 2006; Mihalcea & Liu, 2006; Genereux & Evans, 2006; Ma, Prendinger, & Ishizuka, 2005; Holzman & Pottenger, 2003; Boucouvalas, 2002; Zhe & Boucouvalas, 2002). Further, massive amounts of data emanating from social media have led to significant interest in analyzing blog posts, tweets, instant messages, customer reviews, and Facebook posts for both valence (Kiritchenko et al., 2014b; Kiritchenko, Zhu, Cherry, & Mohammad, 2014a; Mohammad et al., 2013a; Aisopos, Papadakis, Tserpes, & Varvarigou, 2012; Bakliwal, Arora, Madhappan, Kapre, Singh, & Varma, 2012; Agarwal, Xie, Vovsha, Rambow, & Passonneau, 2011; Thelwall, Buckley, & Paltoglou, 2011; Brody & Diakopoulos, 2011; Pak & Paroubek, 2010) and emotions (Hasan, Rundensteiner, & Agu, 2014; Mohammad & Kiritchenko, 2014; Mohammad, Zhu, Kiritchenko, & Martin, 2014; Choudhury, Counts, & Gamon, 2012; Mohammad, 2012a; Wang, Chen, Thirunarayan, & Sheth, 2012; Tumasjan, Sprenger, Sandner, & Welpe, 2010b; Kim, Gilbert, Edwards, &",
"title": ""
},
{
"docid": "0720bc0c6c3c0f902b303739cf4c3afb",
"text": "Bergfelt, A. 2018. Block Copolymer Electrolytes. Polymers for Solid-State Lithium Batteries. Digital Comprehensive Summaries of Uppsala Dissertations from the Faculty of Science and Technology 1630. 68 pp. Uppsala: Acta Universitatis Upsaliensis. ISBN 978-91-513-0233-1. The use of solid polymer electrolytes (SPEs) for lithium battery devices is a rapidly growing research area. The liquid electrolytes that are used today are inflammable and harmful towards the battery components. The adoption of SPEs could drastically improve this situation, but they still suffer from a too low performance at ambient temperatures for most practical applications. However, by increasing the operating temperature to between 60 °C and 90 °C, the electrolyte performance can be drastically increased. The drawback of this approach, partly, is that parasitic side reactions become noticeable at these elevated temperatures, thus affecting battery lifetime and performance. Furthermore, the ionically conductive polymer loses its mechanical integrity, thus triggering a need for an external separator in the battery device. One way of combining both mechanical properties and electrochemical performance is to design block copolymer (BCP) electrolytes, that is, polymers that are tailored to combine one ionic conductive block with a mechanical block, into one polymer. The hypothesis is that the BCP electrolytes should self-assemble into well-defined microphase separated regions in order to maximize the block properties. By varying monomer composition and structure of the BCP, it is possible to design electrolytes with different battery device performance. In Paper I and Paper II two types of methacrylate-based triblock copolymers with different mechanical blocks were synthesized, in order to evaluate morphology, electrochemical performance, and battery performance. In Paper III and Paper IV a different strategy was adopted, with a focus on diblock copolymers. In this strategy, the ethylene oxide was replaced by poly(e-caprolactone) and poly(trimethylene carbonate) as the lithium-ion dissolving group. The investigated mechanical blocks in these studies were poly(benzyl methacrylate) and polystyrene. The battery performance for these electrolytes was superior to the methacrylatebased battery devices, thus resulting in stable battery cycling at 40 °C and 30 °C. Andreas Bergfelt, Department of Chemistry Ångström, Polymer Chemistry, Box 538, Uppsala University, SE-751 21 Uppsala, Sweden. © Andreas Bergfelt 2018 ISSN 1651-6214 ISBN 978-91-513-0233-1 urn:nbn:se:uu:diva-340856 (http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-340856)",
"title": ""
},
{
"docid": "c006fbd6887c7d080addcf814009bd40",
"text": "Aiming at diagnosing and preventing the cardiovascular disease, a portable ECG monitoring system based on Bluetooth mobile phones is presented. The system consists of some novel dry skin electrodes, an ECG monitoring circuit and a smart phone. The weak ECG signals extracted from the dry electrode can be amplified, band-pass filtered, analog-digital converted and so on. Finally it is sent to the mobile phone by Bluetooth technology for real-time display on screen. The core ECG monitoring circuit is composed of a CMOS preamplifier ASIC designed by ourselves, a band-pass filter, a microcontroller and a Bluetooth module. The volume is 5.5 cm × 3.4 cm × 1.6 cm, weight is only 20.76 g (without batteries), and power consumption is 115 mW. The tests show that the system can operate steadily, precisely and display the ECG in real time.",
"title": ""
},
{
"docid": "976aee37c264dbf53b7b1fbbf0d583c4",
"text": "This paper applies Halliday's (1994) theory of the interpersonal, ideational and textual meta-functions of language to conceptual metaphor. Starting from the observation that metaphoric expressions tend to be organized in chains across texts, the question is raised what functions those expressions serve in different parts of a text as well as in relation to each other. The empirical part of the article consists of the sample analysis of a business magazine text on marketing. This analysis is two-fold, integrating computer-assisted quantitative investigation with qualitative research into the organization and multifunctionality of metaphoric chains as well as the cognitive scenarios evolving from those chains. The paper closes by summarizing the main insights along the lines of the three Hallidayan meta-functions of conceptual metaphor and suggesting functional analysis of metaphor at levels beyond that of text. Im vorliegenden Artikel wird Hallidays (1994) Theorie der interpersonellen, ideellen und textuellen Metafunktion von Sprache auf das Gebiet der konzeptuellen Metapher angewandt. Ausgehend von der Beobachtung, dass metaphorische Ausdrücke oft in textumspannenden Ketten angeordnet sind, wird der Frage nachgegangen, welche Funktionen diese Ausdrücke in verschiedenen Teilen eines Textes und in Bezug aufeinander erfüllen. Der empirische Teil der Arbeit besteht aus der exemplarischen Analyse eines Artikels aus einem Wirtschaftsmagazin zum Thema Marketing. Diese Analysis gliedert sich in zwei Teile und verbindet computergestütze quantitative Forschung mit einer qualitativen Untersuchung der Anordnung und Multifunktionalität von Metaphernketten sowie der kognitiven Szenarien, die aus diesen Ketten entstehen. Der Aufsatz schließt mit einer Zusammenfassung der wesentlichen Ergebnisse im Licht der Hallidayschen Metafunktionen konzeptueller Metaphern und gibt einen Ausblick auf eine funktionale Metaphernanalyse, die über die rein textuelle Ebene hinausgeht.",
"title": ""
},
{
"docid": "ef55f11664a16933166e55548598b939",
"text": "In the paper, we present a new method for classifying documents with rigid geometry. Our approach is based on the fast and robust Viola-Jones object detection algorithm. The advantages of our proposed method are high speed, the possibility of automatic model construction using a training set, and processing of raw source images without any pre-processing steps such as draft recognition, layout analysis or binarisation. Furthermore, our algorithm allows not only to classify documents, but also to detect the placement and orientation of documents within an image.",
"title": ""
},
{
"docid": "e5175084f08ad8efc3244f52cbb8ef7b",
"text": "We consider a multi-agent framework for distributed optimization where each agent in the network has access to a local convex function and the collective goal is to achieve consensus on the parameters that minimize the sum of the agents’ local functions. We propose an algorithm wherein each agent operates asynchronously and independently of the other agents in the network. When the local functions are strongly-convex with Lipschitz-continuous gradients, we show that a subsequence of the iterates at each agent converges to a neighbourhood of the global minimum, where the size of the neighbourhood depends on the degree of asynchrony in the multi-agent network. When the agents work at the same rate, convergence to the global minimizer is achieved. Numerical experiments demonstrate that Asynchronous Subgradient-Push can minimize the global objective faster than state-of-the-art synchronous first-order methods, is more robust to failing or stalling agents, and scales better with the network size.",
"title": ""
},
{
"docid": "26c61f3f3cffa1ae68e4891e94fd8941",
"text": "Likert-type scales are used extensively during usability evaluations, and more generally evaluations of interactive experiences, to obtain quantified data regarding attitudes, behaviors, and judgments of participants. Very often this data is analyzed using parametric statistics like the Student t-test or ANOVAs. These methods are chosen to ensure higher statistical power of the test (which is necessary in this field of research and practice where sample sizes are often small), or because of the lack of software to handle multi-factorial designs nonparametrically. With this paper we present to the HCI audience new developments from the field of medical statistics that enable analyzing multiple factor designs nonparametrically. We demonstrate the necessity of this approach by showing the errors in the parametric treatment of nonparametric data in experiments of the size typically reported in HCI research. We also provide a practical resource for researchers and practitioners who wish to use these new methods.",
"title": ""
}
] |
scidocsrr
|
84346fc7e5952e73411819430795a45b
|
Dynamics of Platform Competition: Exploring the Role of Installed Base, Platform Quality and Consumer Expectations
|
[
{
"docid": "4bfb389e1ae2433f797458ff3fe89807",
"text": "Many if not most markets with network externalities are two-sided. To succeed, platforms in industries such as software, portals and media, payment systems and the Internet, must “get both sides of the market on board ”. Accordingly, platforms devote much attention to their business model, that is to how they court each side while making money overall. The paper builds a model of platform competition with two-sided markets. It unveils the determinants of price allocation and enduser surplus for different governance structures (profit-maximizing platforms and not-for-profit joint undertakings), and compares the outcomes with those under an integrated monopolist and a Ramsey planner.",
"title": ""
}
] |
[
{
"docid": "45a24b15455b98277e0ee49b31b234d0",
"text": "Breakthroughs in genetics and molecular biology in the 1970s and 1980s were heralded as a major technological revolution in medicine that would yield a wave of new drug discoveries. However, some forty years later the expected benefits have not materialized. I question the narrative of biotechnology as a Schumpeterian revolution by comparing it to the academic research paradigm that preceded it, clinical research in hospitals. I analyze these as distinct research paradigms that involve different epistemologies, practices, and institutional loci. I develop the claim that the complexity of biological systems means that clinical research was well adapted to medical innovation, and that the genetics/molecular biology paradigm imposed a predictive logic to search that was less effective at finding new drugs. The paper describes how drug discovery unfolds in each paradigm: in clinical research, discovery originates with observations of human subjects and proceeds through feedback-based learning, whereas in the genetics model, discovery originates with a precisely-defined molecular target; feedback from patients enters late in the process. The paper reviews the post-War institutional history that witnessed the relative decline of clinical research and the rise of genetics and molecular science in the United States bio-medical research landscape. The history provides a contextual narrative to illustrate that, in contrast to the framing of biotechnology as a Schumpeterian revolution, the adoption of biotechnology as a core drug discovery platform was propelled by institutional changes that were largely disconnected from processes of scientific or technological selection. Implications for current medical policy initiatives and translational science are discussed. © 2016 Published by Elsevier B.V.",
"title": ""
},
{
"docid": "eb0eec2fe000511a37e6487ff51ddb68",
"text": "We report on a laboratory study that compares reading from paper to reading on-line. Critical differences have to do with the major advantages paper offers in supporting annotation while reading, quick navigation, and flexibility of spatial layout. These, in turn, allow readers to deepen their understanding of the text, extract a sense of its structure, create a plan for writing, cross-refer to other documents, and interleave reading and writing. We discuss the design implications of these findings for the development of better reading technologies.",
"title": ""
},
{
"docid": "c943fcc6664681d832133dc8739e6317",
"text": "The explosion in online advertisement urges to better estimate the click prediction of ads. For click prediction on single ad impression, we have access to pairwise relevance among elements in an impression, but not to global interaction among key features of elements. Moreover, the existing method on sequential click prediction treats propagation unchangeable for different time intervals. In this work, we propose a novel model, Convolutional Click Prediction Model (CCPM), based on convolution neural network. CCPM can extract local-global key features from an input instance with varied elements, which can be implemented for not only single ad impression but also sequential ad impression. Experiment results on two public large-scale datasets indicate that CCPM is effective on click prediction.",
"title": ""
},
{
"docid": "bf7eb592ad9ad5e51e61749174b60d04",
"text": "Solving inverse problems continues to be a challenge in a wide array of applications ranging from deblurring, image inpainting, source separation etc. Most existing techniques solve such inverse problems by either explicitly or implicitly finding the inverse of the model. The former class of techniques require explicit knowledge of the measurement process which can be unrealistic, and rely on strong analytical regularizers to constrain the solution space, which often do not generalize well. The latter approaches have had remarkable success in part due to deep learning, but require a large collection of source-observation pairs, which can be prohibitively expensive. In this paper, we propose an unsupervised technique to solve inverse problems with generative adversarial networks (GANs). Using a pre-trained GAN in the space of source signals, we show that one can reliably recover solutions to under determined problems in a ‘blind’ fashion, i.e., without knowledge of the measurement process. We solve this by making successive estimates on the model and the solution in an iterative fashion. We show promising results in three challenging applications – blind source separation, image deblurring, and recovering an image from its edge map, and perform better than several baselines.",
"title": ""
},
{
"docid": "37a5089b7e9e427d330d4720cdcf00d9",
"text": "3D shape models are naturally parameterized using vertices and faces, i.e., composed of polygons forming a surface. However, current 3D learning paradigms for predictive and generative tasks using convolutional neural networks focus on a voxelized representation of the object. Lifting convolution operators from the traditional 2D to 3D results in high computational overhead with little additional benefit as most of the geometry information is contained on the surface boundary. Here we study the problem of directly generating the 3D shape surface of rigid and non-rigid shapes using deep convolutional neural networks. We develop a procedure to create consistent ‘geometry images’ representing the shape surface of a category of 3D objects. We then use this consistent representation for category-specific shape surface generation from a parametric representation or an image by developing novel extensions of deep residual networks for the task of geometry image generation. Our experiments indicate that our network learns a meaningful representation of shape surfaces allowing it to interpolate between shape orientations and poses, invent new shape surfaces and reconstruct 3D shape surfaces from previously unseen images. Our code is available at https://github.com/sinhayan/surfnet.",
"title": ""
},
{
"docid": "7082e7b9828c316b24f3113cb516a50d",
"text": "The analog voltage-controlled filter used in historical music synthesizers by Moog is modeled using a digital system, which is then compared in terms of audio measurements with the original analog filter. The analog model is mainly borrowed from D'Angelo's previous work. The digital implementation of the filter incorporates a recently proposed antialiasing method. This method enhances the clarity of output signals in the case of large-level input signals, which cause harmonic distortion. The combination of these two ideas leads to a novel digital model, which represents the state of the art in virtual analog musical filters. It is shown that without the antialiasing, the output signals in the nonlinear regime may be contaminated by undesirable spectral components, which are the consequence of aliasing, but that the antialiasing technique suppresses these components sufficiently. Comparison of measurements of the analog and digital filters show that the digital model is accurate within a few dB in the linear regime and has very similar behavior in the nonlinear regime in terms of distortion. The proposed digital filter model can be used as a building block in virtual analog music synthesizers.",
"title": ""
},
{
"docid": "179ea205964d4f6a13ffbfbf501a189c",
"text": "Mangroves are among the most well described and widely studied wetland communities in the world. The greatest threats to mangrove persistence are deforestation and other anthropogenic disturbances that can compromise habitat stability and resilience to sea-level rise. To persist, mangrove ecosystems must adjust to rising sea level by building vertically or become submerged. Mangroves may directly or indirectly influence soil accretion processes through the production and accumulation of organic matter, as well as the trapping and retention of mineral sediment. In this review, we provide a general overview of research on mangrove elevation dynamics, emphasizing the role of the vegetation in maintaining soil surface elevations (i.e. position of the soil surface in the vertical plane). We summarize the primary ways in which mangroves may influence sediment accretion and vertical land development, for example, through root contributions to soil volume and upward expansion of the soil surface. We also examine how hydrological, geomorphological and climatic processes may interact with plant processes to influence mangrove capacity to keep pace with rising sea level. We draw on a variety of studies to describe the important, and often under-appreciated, role that plants play in shaping the trajectory of an ecosystem undergoing change.",
"title": ""
},
{
"docid": "30b74cdc0d4825957b4125c9ecd5cffe",
"text": "Popular Internet sites are under attack all the time from phishers, fraudsters, and spammers. They aim to steal user information and expose users to unwanted spam. The attackers have vast resources at their disposal. They are well-funded, with full-time skilled labor, control over compromised and infected accounts, and access to global botnets. Protecting our users is a challenging adversarial learning problem with extreme scale and load requirements. Over the past several years we have built and deployed a coherent, scalable, and extensible realtime system to protect our users and the social graph. This Immune System performs realtime checks and classifications on every read and write action. As of March 2011, this is 25B checks per day, reaching 650K per second at peak. The system also generates signals for use as feedback in classifiers and other components. We believe this system has contributed to making Facebook the safest place on the Internet for people and their information. This paper outlines the design of the Facebook Immune System, the challenges we have faced and overcome, and the challenges we continue to face.",
"title": ""
},
{
"docid": "34d7f848427052a1fc5f565a24f628ec",
"text": "This is the solutions manual (web-edition) for the book Pattern Recognition and Machine Learning (PRML; published by Springer in 2006). It contains solutions to the www exercises. This release was created September 8, 2009. Future releases with corrections to errors will be published on the PRML web-site (see below). The authors would like to express their gratitude to the various people who have provided feedback on earlier releases of this document. In particular, the \" Bishop Reading Group \" , held in the Visual Geometry Group at the University of Oxford provided valuable comments and suggestions. The authors welcome all comments, questions and suggestions about the solutions as well as reports on (potential) errors in text or formulae in this document; please send any such feedback to",
"title": ""
},
{
"docid": "fa8c3873cf03af8d4950a0e53f877b08",
"text": "The problem of formal likelihood-based (either classical or Bayesian) inference for discretely observed multi-dimensional diffusions is particularly challenging. In principle this involves data-augmentation of the observation data to give representations of the entire diffusion trajectory. Most currently proposed methodology splits broadly into two classes: either through the discretisation of idealised approaches for the continuous-time diffusion setup; or through the use of standard finite-dimensional methodologies discretisation of the diffusion model. The connections between these approaches have not been well-studied. This paper will provide a unified framework bringing together these approaches, demonstrating connections, and in some cases surprising differences. As a result, we provide, for the first time, theoretical justification for the various methods of imputing missing data. The inference problems are particularly challenging for reducible diffusions, and our framework is correspondingly more complex in that case. Therefore we treat the reducible and irreducible cases differently within the paper. Supplementary materials for the article are avilable on line. 1 Overview of likelihood-based inference for diffusions Diffusion processes have gained much popularity as statistical models for observed and latent processes. Among others, their appeal lies in their flexibility to deal with nonlinearity, time-inhomogeneity and heteroscedasticity by specifying two interpretable functionals, their amenability to efficient computations due to their Markov property, and the rich existing mathematical theory about their properties. As a result, they are used as models throughout Science; some book references related with this approach to modeling include Section 5.3 of [1] for physical systems, Section 8.3.3 (in conjunction with Section 6.3) of [12] for systems biology and mass action stochastic kinetics, and Chapter 10 of [27] for interest rates. A mathematically precise specification of a d-dimensional diffusion process V is as the solution of a stochastic differential equation (SDE) of the type: dVs = b(s, Vs; θ1) ds+ σ(s, Vs; θ2) dBs, s ∈ [0, T ] ; (1) where B is an m-dimensional standard Brownian motion, b(·, · ; · ) : R+ ×Rd ×Θ1 → R is the drift and σ(·, · ; · ) : R+ × R × Θ2 → R is the diffusion coefficient. These ICREA and Department of Economics, Universitat Pompeu Fabra, omiros.papaspiliopoulos@upf.edu Department of Statistics, University of Warwick Department of Statistics and Actuarial Science, University of Iowa, Iowa City, Iowa",
"title": ""
},
{
"docid": "44017678b3da8c8f4271a9832280201e",
"text": "Data warehouses are users driven; that is, they allow end-users to be in control of the data. As user satisfaction is commonly acknowledged as the most useful measurement of system success, we identify the underlying factors of end-user satisfaction with data warehouses and develop an instrument to measure these factors. The study demonstrates that most of the items in classic end-user satisfaction measure are still valid in the data warehouse environment, and that end-user satisfaction with data warehouses depends heavily on the roles and performance of organizational information centers. # 2000 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "c2ade16afaf22ac6cc546134a1227d68",
"text": "In this work we present a novel method for the challenging problem of depth image up sampling. Modern depth cameras such as Kinect or Time-of-Flight cameras deliver dense, high quality depth measurements but are limited in their lateral resolution. To overcome this limitation we formulate a convex optimization problem using higher order regularization for depth image up sampling. In this optimization an an isotropic diffusion tensor, calculated from a high resolution intensity image, is used to guide the up sampling. We derive a numerical algorithm based on a primal-dual formulation that is efficiently parallelized and runs at multiple frames per second. We show that this novel up sampling clearly outperforms state of the art approaches in terms of speed and accuracy on the widely used Middlebury 2007 datasets. Furthermore, we introduce novel datasets with highly accurate ground truth, which, for the first time, enable to benchmark depth up sampling methods using real sensor data.",
"title": ""
},
{
"docid": "0bb26233aa8776c6a0db8f2e65bb207a",
"text": "This paper presents methods for suppressing the slugging phenomenon occurring in multiphase flow. The considered systems include industrial oil production facilities such as gas-lifted wells and flowline risers with low-points. Given the difficulty to maintain sensors in deep locations, a particular emphasis is put on observer-based control design. It appears that, without any upstream pressure sensor, such vailable online 23 March 2012 eywords: lugging tabilization bserver a strategy can stabilize the flow. Besides, given a measurement or estimate of the upstream pressure, we propose a control strategy alternative to the classical techniques. The efficiency of these methods is assessed through experiments on a mid-scaled multiphase flow loop. © 2012 Elsevier Ltd. All rights reserved. However, such a simple controller is not always well suited ultiphase flow",
"title": ""
},
{
"docid": "d927e3b1e9bda244bc7b2ccd56b56ff4",
"text": "The formation of healthy gametes requires pairing of homologous chromosomes (homologs) as a prerequisite for their correct segregation during meiosis. Initially, homolog alignment is promoted by meiotic chromosome movements feeding into intimate homolog pairing by homologous recombination and/or synaptonemal complex formation. Meiotic chromosome movements in the fission yeast, Schizosaccharomyces pombe, depend on astral microtubule dynamics that drag the nucleus through the zygote; known as horsetail movement. The response of microtubule-led meiotic chromosome movements to environmental stresses such as ionizing irradiation (IR) and associated reactive oxygen species (ROS) is not known. Here, we show that, in contrast to budding yeast, the horsetail movement is largely radiation-resistant, which is likely mediated by a potent antioxidant defense. IR exposure of sporulating S. pombe cells induced misrepair and irreparable DNA double strand breaks causing chromosome fragmentation, missegregation and gamete death. Comparing radiation outcome in fission and budding yeast, and studying meiosis with poisoned microtubules indicates that the increased gamete death after IR is innate to fission yeast. Inhibition of meiotic chromosome mobility in the face of IR failed to influence the course of DSB repair, indicating that paralysis of meiotic chromosome mobility in a genotoxic environment is not a universal response among species.",
"title": ""
},
{
"docid": "9ac00559a52851ffd2e33e376dd58b62",
"text": "ARM servers are becoming increasingly common, making server technologies such as virtualization for ARM of growing importance. We present the first study of ARM virtualization performance on server hardware, including multicore measurements of two popular ARM and x86 hypervisors, KVM and Xen. We show how ARM hardware support for virtualization can enable much faster transitions between VMs and the hypervisor, a key hypervisor operation. However, current hypervisor designs, including both Type 1 hypervisors such as Xen and Type 2 hypervisors such as KVM, are not able to leverage this performance benefit for real application workloads. We discuss the reasons why and show that other factors related to hypervisor software design and implementation have a larger role in overall performance. Based on our measurements, we discuss changes to ARM's hardware virtualization support that can potentially bridge the gap to bring its faster VM-to-hypervisor transition mechanism to modern Type 2 hypervisors running real applications. These changes have been incorporated into the latest ARM architecture.",
"title": ""
},
{
"docid": "8d9a55b7d730d9acbff50aef4f55808b",
"text": "Interactions between light and matter can be dramatically modified by concentrating light into a small volume for a long period of time. Gaining control over such interaction is critical for realizing many schemes for classical and quantum information processing, including optical and quantum computing, quantum cryptography, and metrology and sensing. Plasmonic structures are capable of confining light to nanometer scales far below the diffraction limit, thereby providing a promising route for strong coupling between light and matter, as well as miniaturization of photonic circuits. At the same time, however, the performance of plasmonic circuits is limited by losses and poor collection efficiency, presenting unique challenges that need to be overcome for quantum plasmonic circuits to become a reality. In this paper, we survey recent progress in controlling emission from quantum emitters using plasmonic structures, as well as efforts to engineer surface plasmon propagation and design plasmonic circuits using these elements.",
"title": ""
},
{
"docid": "1045117f9e6e204ff51ef67a1aff031f",
"text": "Application of models to data is fraught. Data-generating collaborators often only have a very basic understanding of the complications of collating, processing and curating data. Challenges include: poor data collection practices, missing values, inconvenient storage mechanisms, intellectual property, security and privacy. All these aspects obstruct the sharing and interconnection of data, and the eventual interpretation of data through machine learning or other approaches. In project reporting, a major challenge is in encapsulating these problems and enabling goals to be built around the processing of data. Project overruns can occur due to failure to account for the amount of time required to curate and collate. But to understand these failures we need to have a common language for assessing the readiness of a particular data set. This position paper proposes the use of data readiness levels: it gives a rough outline of three stages of data preparedness and speculates on how formalisation of these levels into a common language for data readiness could facilitate project management.",
"title": ""
},
{
"docid": "79be4c64b46eca3c64bdcfbec12720a9",
"text": "We present several new variations on the theme of nonnegative matrix factorization (NMF). Considering factorizations of the form X = FGT, we focus on algorithms in which G is restricted to containing nonnegative entries, but allowing the data matrix X to have mixed signs, thus extending the applicable range of NMF methods. We also consider algorithms in which the basis vectors of F are constrained to be convex combinations of the data points. This is used for a kernel extension of NMF. We provide algorithms for computing these new factorizations and we provide supporting theoretical analysis. We also analyze the relationships between our algorithms and clustering algorithms, and consider the implications for sparseness of solutions. Finally, we present experimental results that explore the properties of these new methods.",
"title": ""
},
{
"docid": "1dc615b299a8a63caa36cd8e36459323",
"text": "Domain adaptation manages to build an effective target classifier or regression model for unlabeled target data by utilizing the well-labeled source data but lying different distributions. Intuitively, to address domain shift problem, it is crucial to learn domain invariant features across domains, and most existing approaches have concentrated on it. However, they often do not directly constrain the learned features to be class discriminative for both source and target data, which is of vital importance for the final classification. Therefore, in this paper, we put forward a novel feature learning method for domain adaptation to construct both domain invariant and class discriminative representations, referred to as DICD. Specifically, DICD is to learn a latent feature space with important data properties preserved, which reduces the domain difference by jointly matching the marginal and class-conditional distributions of both domains, and simultaneously maximizes the inter-class dispersion and minimizes the intra-class scatter as much as possible. Experiments in this paper have demonstrated that the class discriminative properties will dramatically alleviate the cross-domain distribution inconsistency, which further boosts the classification performance. Moreover, we show that exploring both domain invariance and class discriminativeness of the learned representations can be integrated into one optimization framework, and the optimal solution can be derived effectively by solving a generalized eigen-decomposition problem. Comprehensive experiments on several visual cross-domain classification tasks verify that DICD can outperform the competitors significantly.",
"title": ""
},
{
"docid": "58156df07590448d89c2b8d4a46696ad",
"text": "Gene PmAF7DS confers resistance to wheat powdery mildew (isolate Bgt#211 ); it was mapped to a 14.6-cM interval ( Xgwm350 a– Xbarc184 ) on chromosome 7DS. The flanking markers could be applied in MAS breeding. Wheat powdery mildew (Pm) is caused by the biotrophic pathogen Blumeria graminis tritici (DC.) (Bgt). An ongoing threat of breakdown of race-specific resistance to Pm requires a continuous effort to discover new alleles in the wheat gene pool. Developing new cultivars with improved disease resistance is an economically and environmentally safe approach to reduce yield losses. To identify and characterize genes for resistance against Pm in bread wheat we used the (Arina × Forno) RILs population. Initially, the two parental lines were screened with a collection of 61 isolates of Bgt from Israel. Three Pm isolates Bgt#210 , Bgt#211 and Bgt#213 showed differential reactions in the parents: Arina was resistant (IT = 0), whereas Forno was moderately susceptible (IT = −3). Isolate Bgt#211 was then used to inoculate the RIL population. The segregation pattern of plant reactions among the RILs indicates that a single dominant gene controls the conferred resistance. A genetic map of the region containing this gene was assembled with DNA markers and assigned to the 7D physical bin map. The gene, temporarily designated PmAF7DS, was located in the distal region of chromosome arm 7DS. The RILs were also inoculated with Bgt#210 and Bgt#213. The plant reactions to these isolates showed high identity with the reaction to Bgt#211, indicating the involvement of the same gene or closely linked, but distinct single genes. The genomic location of PmAF7DS, in light of other Pm genes on 7DS is discussed.",
"title": ""
}
] |
scidocsrr
|
f7b8465326ff2100b4e69954f51c1d1a
|
Scalable Clustering of Time Series with U-Shapelets
|
[
{
"docid": "244a517d3a1c456a602ecc01fb99a78f",
"text": "Most literature on time series classification assumes that the beginning and ending points of the pattern of interest can be correctly identified, both during the training phase and later deployment. In this work, we argue that this assumption is unjustified, and this has in many cases led to unwarranted optimism about the performance of the proposed algorithms. As we shall show, the task of correctly extracting individual gait cycles, heartbeats, gestures, behaviors, etc., is generally much more difficult than the task of actually classifying those patterns. We propose to mitigate this problem by introducing an alignment-free time series classification framework. The framework requires only very weakly annotated data, such as “in this ten minutes of data, we see mostly normal heartbeats...,” and by generalizing the classic machine learning idea of data editing to streaming/continuous data, allows us to build robust, fast and accurate classifiers. We demonstrate on several diverse real-world problems that beyond removing unwarranted assumptions and requiring essentially no human intervention, our framework is both significantly faster and significantly more accurate than current state-of-the-art approaches.",
"title": ""
},
{
"docid": "8609f49cc78acc1ba25e83c8e68040a6",
"text": "Time series shapelets are small, local patterns in a time series that are highly predictive of a class and are thus very useful features for building classifiers and for certain visualization and summarization tasks. While shapelets were introduced only recently, they have already seen significant adoption and extension in the community. Despite their immense potential as a data mining primitive, there are two important limitations of shapelets. First, their expressiveness is limited to simple binary presence/absence questions. Second, even though shapelets are computed offline, the time taken to compute them is significant. In this work, we address the latter problem by introducing a novel algorithm that finds shapelets in less time than current methods by an order of magnitude. Our algorithm is based on intelligent caching and reuse of computations, and the admissible pruning of the search space. Because our algorithm is so fast, it creates an opportunity to consider more expressive shapelet queries. In particular, we show for the first time an augmented shapelet representation that distinguishes the data based on conjunctions or disjunctions of shapelets. We call our novel representation Logical-Shapelets. We demonstrate the efficiency of our approach on the classic benchmark datasets used for these problems, and show several case studies where logical shapelets significantly outperform the original shapelet representation and other time series classification techniques. We demonstrate the utility of our ideas in domains as diverse as gesture recognition, robotics, and biometrics.",
"title": ""
},
{
"docid": "c7d6e273065ce5ca82cd55f0ba5937cd",
"text": "Many environmental and socioeconomic time–series data can be adequately modeled using Auto-Regressive Integrated Moving Average (ARIMA) models. We call such time–series ARIMA time–series. We consider the problem of clustering ARIMA time–series. We propose the use of the Linear Predictive Coding (LPC) cepstrum of time–series for clustering ARIMA time–series, by using the Euclidean distance between the LPC cepstra of two time–series as their dissimilarity measure. We demonstrate that LPC cepstral coefficients have the desired features for accurate clustering and efficient indexing of ARIMA time–series. For example, few LPC cepstral coefficients are sufficient in order to discriminate between time–series that are modeled by different ARIMA models. In fact this approach requires fewer coefficients than traditional approaches, such as DFT and DWT. The proposed distance measure can be used for measuring the similarity between different ARIMA models as well. We cluster ARIMA time–series using the Partition Around Medoids method with various similarity measures. We present experimental results demonstrating that using the proposed measure we achieve significantly better clusterings of ARIMA time–series data as compared to clusterings obtained by using other traditional similarity measures, such as DFT, DWT, PCA, etc. Experiments were performed both on simulated as well as real data.",
"title": ""
}
] |
[
{
"docid": "7f65b9d7d07eee04405fc7102bd51f71",
"text": "Researchers tend to cite highly cited articles, but how these highly cited articles influence the citing articles has been underexplored. This paper investigates how one highly cited essay, Hirsch’s “h-index” article (H-article) published in 2005, has been cited by other articles. Content-based citation analysis is applied to trace the dynamics of the article’s impact changes from 2006 to 2014. The findings confirm that citation context captures the changing impact of the H-article over time in several ways. In the first two years, average citation mention of H-article increased, yet continued to decline with fluctuation until 2014. In contrast with citation mention, average citation count stayed the same. The distribution of citation location over time also indicates three phases of the H-article “Discussion,” “Reputation,” and “Adoption” we propose in this study. Based on their locations in the citing articles and their roles in different periods, topics of citation context shifted gradually when an increasing number of other articles were co-mentioned with the H-article in the same sentences. These outcomes show that the impact of the H-article manifests in various ways within the content of these citing articles that continued to shift in nine years, data that is not captured by traditional means of citation analysis that do not weigh citation impacts over time.",
"title": ""
},
{
"docid": "207b24c58d8417fc309a42e3bbd6dc16",
"text": "This study mainly remarks the efficiency of black-box modeling capacity of neural networks in the case of forecasting soccer match results, and opens up several debates on the nature of prediction and selection of input parameters. The selection of input parameters is a serious problem in soccer match prediction systems based on neural networks or statistical methods. Several input vector suggestions are implemented in literature which is mostly based on direct data from weekly charts. Here in this paper, two different input vector parameters have been tested via learning vector quantization networks in order to emphasize the importance of input parameter selection. The input vector parameters introduced in this study are plain and also meaningful when compared to other studies. The results of different approaches presented in this study are compared to each other, and also compared with the results of other neural network approaches and statistical methods in order to give an idea about the successful prediction performance. The paper is concluded with discussions about the nature of soccer match forecasting concept that may draw the interests of researchers willing to work in this area.",
"title": ""
},
{
"docid": "28cfe864acc8c40eb8759261273cf3bb",
"text": "Mobile-edge computing (MEC) has recently emerged as a promising paradigm to liberate mobile devices from increasingly intensive computation workloads, as well as to improve the quality of computation experience. In this paper, we investigate the tradeoff between two critical but conflicting objectives in multi-user MEC systems, namely, the power consumption of mobile devices and the execution delay of computation tasks. A power consumption minimization problem with task buffer stability constraints is formulated to investigate the tradeoff, and an online algorithm that decides the local execution and computation offloading policy is developed based on Lyapunov optimization. Specifically, at each time slot, the optimal frequencies of the local CPUs are obtained in closed forms, while the optimal transmit power and bandwidth allocation for computation offloading are determined with the Gauss-Seidel method. Performance analysis is conducted for the proposed algorithm, which indicates that the power consumption and execution delay obeys an $\\left[O\\left(1\\slash V\\right),O\\left(V\\right)\\right]$ tradeoff with $V$ as a control parameter. Simulation results are provided to validate the theoretical analysis and demonstrate the impacts of various parameters to the system performance.",
"title": ""
},
{
"docid": "5b942618f753465fc595e707d5cd8ad9",
"text": "There has been significant progress on pose estimation and increasing interests on pose tracking in recent years. At the same time, the overall algorithm and system complexity increases as well, making the algorithm analysis and comparison more difficult. This work provides simple and effective baseline methods. They are helpful for inspiring and evaluating new ideas for the field. State-of-the-art results are achieved on challenging benchmarks. The code will be available at https://github. com/leoxiaobin/pose.pytorch.",
"title": ""
},
{
"docid": "6042abbb698a8d8be6ea87690db9fbd2",
"text": "Machine learning is used in a number of security related applications such as biometric user authentication, speaker identification etc. A type of causative integrity attack against machine le arning called Poisoning attack works by injecting specially crafted data points in the training data so as to increase the false positive rate of the classifier. In the context of the biometric authentication, this means that more intruders will be classified as valid user, and in case of speaker identification system, user A will be classified user B. In this paper, we examine poisoning attack against SVM and introduce Curie a method to protect the SVM classifier from the poisoning attack. The basic idea of our method is to identify the poisoned data points injected by the adversary and filter them out. Our method is light weight and can be easily integrated into existing systems. Experimental results show that it works very well in filtering out the poisoned data.",
"title": ""
},
{
"docid": "c93902158d062938a1677d67b00f4e01",
"text": "Is Porter postmodern? The project originated in my need to ‘make sense’ of the strategic management literature, and specifically the place of Michael E Porter within it. The question, what is strategic management?, often leads to the work of Porter. Strategic management texts inevitably contain his models, theories and frameworks which imply that they are ‘fundamental’ to the field. An historical journey through six prominent management\\organization journals, Strategic Management Journal, Academy of Management Journal, Academy of Management Review, Journal of Management Studies, Organization Studies, Advances in Strategic Management, shows that Michael E Porter was not a constant contributor, in fact he is almost absent from the journals, but his work is often the study of empirical testing or theoretical debate (Foss 1996; Hill & Deeds 1996; Sharp & Dawson 1994; Miller & Dess 1993; Bowman 1992). This article does not attempt to account for his popularity, as others have offered substantial and convincing accounts (Barry & Elmes 1997; Whipp 1996; Knights 1992).",
"title": ""
},
{
"docid": "d704917077795fbe16e52ea2385e19ef",
"text": "The objectives of this review were to summarize the evidence from randomized controlled trials (RCTs) on the effects of animal-assisted therapy (AAT). Studies were eligible if they were RCTs. Studies included one treatment group in which AAT was applied. We searched the following databases from 1990 up to October 31, 2012: MEDLINE via PubMed, CINAHL, Web of Science, Ichushi Web, GHL, WPRIM, and PsycINFO. We also searched all Cochrane Database up to October 31, 2012. Eleven RCTs were identified, and seven studies were about \"Mental and behavioral disorders\". Types of animal intervention were dog, cat, dolphin, bird, cow, rabbit, ferret, and guinea pig. The RCTs conducted have been of relatively low quality. We could not perform meta-analysis because of heterogeneity. In a study environment limited to the people who like animals, AAT may be an effective treatment for mental and behavioral disorders such as depression, schizophrenia, and alcohol/drug addictions, and is based on a holistic approach through interaction with animals in nature. To most effectively assess the potential benefits for AAT, it will be important for further research to utilize and describe (1) RCT methodology when appropriate, (2) reasons for non-participation, (3) intervention dose, (4) adverse effects and withdrawals, and (5) cost.",
"title": ""
},
{
"docid": "8069999c95b31e8c847091f72b694af7",
"text": "Software defined radio (SDR) is a rapidly evolving technology which implements some functional modules of a radio system in software executing on a programmable processor. SDR provides a flexible mechanism to reconfigure the radio, enabling networked devices to easily adapt to user preferences and the operating environment. However, the very mechanisms that provide the ability to reconfigure the radio through software also give rise to serious security concerns such as unauthorized modification of the software, leading to radio malfunction and interference with other users' communications. Both the SDR device and the network need to be protected from such malicious radio reconfiguration.\n In this paper, we propose a new architecture to protect SDR devices from malicious reconfiguration. The proposed architecture is based on robust separation of the radio operation environment and user application environment through the use of virtualization. A secure radio middleware layer is used to intercept all attempts to reconfigure the radio, and a security policy monitor checks the target configuration against security policies that represent the interests of various parties. Therefore, secure reconfiguration can be ensured in the radio operation environment even if the operating system in the user application environment is compromised. We have prototyped the proposed secure SDR architecture using VMware and the GNU Radio toolkit, and demonstrate that the overheads incurred by the architecture are small and tolerable. Therefore, we believe that the proposed solution could be applied to address SDR security concerns in a wide range of both general-purpose and embedded computing systems.",
"title": ""
},
{
"docid": "438a9e517a98c6f98f7c86209e601f1b",
"text": "One of the most challenging tasks in large-scale multi-label image retrieval is to map images into binary codes while preserving multilevel semantic similarity. Recently, several deep supervised hashing methods have been proposed to learn hash functions that preserve multilevel semantic similarity with deep convolutional neural networks. However, these triplet label based methods try to preserve the ranking order of images according to their similarity degrees to the queries while not putting direct constraints on the distance between the codes of very similar images. Besides, the current evaluation criteria are not able to measure the performance of existing hashing methods on preserving fine-grained multilevel semantic similarity. To tackle these issues, we propose a novel Deep Multilevel Semantic Similarity Preserving Hashing (DMSSPH) method to learn compact similarity-preserving binary codes for the huge body of multi-label image data with deep convolutional neural networks. In our approach, we make the best of the supervised information in the form of pairwise labels to maximize the discriminability of output binary codes. Extensive evaluations conducted on several benchmark datasets demonstrate that the proposed method significantly outperforms the state-of-the-art supervised and unsupervised hashing methods at the accuracies of top returned images, especially for shorter binary codes. Meanwhile, the proposed method shows better performance on preserving fine-grained multilevel semantic similarity according to the results under the Jaccard coefficient based evaluation criteria we propose.",
"title": ""
},
{
"docid": "b0bd9a0b3e1af93a9ede23674dd74847",
"text": "This paper introduces WaveNet, a deep neural network for generating raw audio waveforms. The model is fully probabilistic and autoregressive, with the predictive distribution for each audio sample conditioned on all previous ones; nonetheless we show that it can be efficiently trained on data with tens of thousands of samples per second of audio. When applied to text-to-speech, it yields state-ofthe-art performance, with human listeners rating it as significantly more natural sounding than the best parametric and concatenative systems for both English and Mandarin. A single WaveNet can capture the characteristics of many different speakers with equal fidelity, and can switch between them by conditioning on the speaker identity. When trained to model music, we find that it generates novel and often highly realistic musical fragments. We also show that it can be employed as a discriminative model, returning promising results for phoneme recognition.",
"title": ""
},
{
"docid": "3a651ab1f8c05cfae51da6a14f6afef8",
"text": "The taxonomical relationship of Cylindrospermopsis raciborskii and Raphidiopsis mediterranea was studied by morphological and 16S rRNA gene diversity analyses of natural populations from Lake Kastoria, Greece. Samples were obtained during a bloom (23,830 trichomes mL ) in August 2003. A high diversity of apical cell, trichome, heterocyte and akinete morphology, trichome fragmentation and reproduction was observed. Trichomes were grouped into three dominant morphotypes: the typical and the non-heterocytous morphotype of C. raciborskii and the typical morphotype of R. mediterranea. A morphometric comparison of the dominant morphotypes showed significant differences in mean values of cell and trichome sizes despite the high overlap in the range of the respective size values. Additionally, two new morphotypes representing developmental stages of the species are described while a new mode of reproduction involving a structurally distinct reproductive cell is described for the first time in planktic Nostocales. A putative life-cycle, common for C. raciborskii and R. mediterranea is proposed revealing that trichome reproduction of R. mediterranea gives rise both to R. mediterranea and C. raciborskii non-heterocytous morphotypes. The phylogenetic analysis of partial 16S rRNA gene (ca. 920 bp) of the co-existing Cylindrospermopsis and Raphidiopsis morphotypes revealed only one phylotype which showed 99.54% similarity to R. mediterranea HB2 (China) and 99.19% similarity to C. raciborskii form 1 (Australia). We propose that all morphotypes comprised stages of the life cycle of C. raciborkii whereas R. mediterranea from Lake Kastoria (its type locality) represents non-heterocytous stages of Cylindrospermopsis complex life cycle. 2009 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "eb52b00d6aec954e3c64f7043427709c",
"text": "The paper presents a ball on plate balancing system useful for various educational purposes. A touch-screen placed on the plate is used for ball's position sensing and two servomotors are employed for balancing the plate in order to control ball's Cartesian coordinates. The design of control embedded systems is demonstrated for different control algorithms in compliance with FreeRTOS real time operating system and dsPIC33 microcontroller. On-line visualizations useful for system monitoring are provided by a PC host application connected with the embedded application. The measurements acquired during real-time execution and the parameters of the system are stored in specific data files, as support for any desired additional analysis. Taking into account the properties of this controlled system (instability, fast dynamics) and the capabilities of the embedded architecture (diversity of the involved communication protocols, diversity of employed hardware components, usage of an open source real time operating system), this educational setup allows a good illustration of numerous theoretical and practical aspects related to system engineering and applied informatics.",
"title": ""
},
{
"docid": "15ad5044900511277e0cd602b0c07c5e",
"text": "Intentional facial expression of emotion is critical to healthy social interactions. Patients with neurodegenerative disease, particularly those with right temporal or prefrontal atrophy, show dramatic socioemotional impairment. This was an exploratory study examining the neural and behavioral correlates of intentional facial expression of emotion in neurodegenerative disease patients and healthy controls. One hundred and thirty three participants (45 Alzheimer's disease, 16 behavioral variant frontotemporal dementia, 8 non-fluent primary progressive aphasia, 10 progressive supranuclear palsy, 11 right-temporal frontotemporal dementia, 9 semantic variant primary progressive aphasia patients and 34 healthy controls) were video recorded while imitating static images of emotional faces and producing emotional expressions based on verbal command; the accuracy of their expression was rated by blinded raters. Participants also underwent face-to-face socioemotional testing and informants described participants' typical socioemotional behavior. Patients' performance on emotion expression tasks was correlated with gray matter volume using voxel-based morphometry (VBM) across the entire sample. We found that intentional emotional imitation scores were related to fundamental socioemotional deficits; patients with known socioemotional deficits performed worse than controls on intentional emotion imitation; and intentional emotional expression predicted caregiver ratings of empathy and interpersonal warmth. Whole brain VBMs revealed a rightward cortical atrophy pattern homologous to the left lateralized speech production network was associated with intentional emotional imitation deficits. Results point to a possible neural mechanisms underlying complex socioemotional communication deficits in neurodegenerative disease patients.",
"title": ""
},
{
"docid": "9b11423260c2d3d175892f846cecced3",
"text": "Disturbances in fluid and electrolytes are among the most common clinical problems encountered in the intensive care unit (ICU). Recent studies have reported that fluid and electrolyte imbalances are associated with increased morbidity and mortality among critically ill patients. To provide optimal care, health care providers should be familiar with the principles and practice of fluid and electrolyte physiology and pathophysiology. Fluid resuscitation should be aimed at restoration of normal hemodynamics and tissue perfusion. Early goal-directed therapy has been shown to be effective in patients with severe sepsis or septic shock. On the other hand, liberal fluid administration is associated with adverse outcomes such as prolonged stay in the ICU, higher cost of care, and increased mortality. Development of hyponatremia in critically ill patients is associated with disturbances in the renal mechanism of urinary dilution. Removal of nonosmotic stimuli for vasopressin secretion, judicious use of hypertonic saline, and close monitoring of plasma and urine electrolytes are essential components of therapy. Hypernatremia is associated with cellular dehydration and central nervous system damage. Water deficit should be corrected with hypotonic fluid, and ongoing water loss should be taken into account. Cardiac manifestations should be identified and treated before initiating stepwise diagnostic evaluation of dyskalemias. Divalent ion deficiencies such as hypocalcemia, hypomagnesemia and hypophosphatemia should be identified and corrected, since they are associated with increased adverse events among critically ill patients.",
"title": ""
},
{
"docid": "57d3505a655e9c0efdc32101fd09b192",
"text": "POX is a Python based open source OpenFlow/Software Defined Networking (SDN) Controller. POX is used for faster development and prototyping of new network applications. POX controller comes pre installed with the mininet virtual machine. Using POX controller you can turn dumb openflow devices into hub, switch, load balancer, firewall devices. The POX controller allows easy way to run OpenFlow/SDN experiments. POX can be passed different parameters according to real or experimental topologies, thus allowing you to run experiments on real hardware, testbeds or in mininet emulator. In this paper, first section will contain introduction about POX, OpenFlow and SDN, then discussion about relationship between POX and Mininet. Final Sections will be regarding creating and verifying behavior of network applications in POX.",
"title": ""
},
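As a concrete illustration of the POX programming model described in the passage above, the following is a minimal hub-style component written in the canonical POX idiom. It is a sketch rather than code from the paper: the module name hub_example is hypothetical, and the listener registration and ofp_packet_out usage are assumed to follow the standard pox.openflow interfaces.

```python
# Minimal POX component that turns every connected OpenFlow switch into a hub.
# Hypothetical usage: save as ext/hub_example.py and run  ./pox.py hub_example
from pox.core import core
import pox.openflow.libopenflow_01 as of

log = core.getLogger()

def _handle_PacketIn(event):
    # Flood each packet that the switch punts to the controller.
    msg = of.ofp_packet_out()
    msg.data = event.ofp
    msg.actions.append(of.ofp_action_output(port=of.OFPP_FLOOD))
    event.connection.send(msg)

def launch():
    # Called by POX when the component is loaded on the command line.
    core.openflow.addListenerByName("PacketIn", _handle_PacketIn)
    log.info("hub_example component running")
```

Swapping the flood action for a learned port-to-address mapping is what turns the same skeleton into a learning switch, which is the kind of exercise typically carried out against a Mininet topology.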
{
"docid": "9ac00559a52851ffd2e33e376dd58b62",
"text": "ARM servers are becoming increasingly common, making server technologies such as virtualization for ARM of growing importance. We present the first study of ARM virtualization performance on server hardware, including multicore measurements of two popular ARM and x86 hypervisors, KVM and Xen. We show how ARM hardware support for virtualization can enable much faster transitions between VMs and the hypervisor, a key hypervisor operation. However, current hypervisor designs, including both Type 1 hypervisors such as Xen and Type 2 hypervisors such as KVM, are not able to leverage this performance benefit for real application workloads. We discuss the reasons why and show that other factors related to hypervisor software design and implementation have a larger role in overall performance. Based on our measurements, we discuss changes to ARM's hardware virtualization support that can potentially bridge the gap to bring its faster VM-to-hypervisor transition mechanism to modern Type 2 hypervisors running real applications. These changes have been incorporated into the latest ARM architecture.",
"title": ""
},
{
"docid": "4b5716be34ebf25bb6e713024e3b73fb",
"text": "The contents generated from different data sources are usually non-uniform, such as long texts produced by news websites and short texts produced by social media. Uncovering topics over large-scale non-uniform texts becomes an important task for analyzing network data. However, the existing methods may fail to recognize the difference between long texts and short texts. To address this problem, we propose a novel topic modeling method for non-uniform text topic modeling referred to as self-adaptive sliding window based topic model (SSWTM). Specifically, in all kinds of texts, relevant words have a closer distance to each other than irrelevant words. Based on this assumption, SSWTM extracts relevant words by using a selfadaptive sliding window and models on the whole corpus. The self-adaptive sliding window can filter noisy information and change the size of a window according to different text contents. Experimental results on short texts from Twitter and long texts from Chinese news articles demonstrate that our method can discover more coherent topics for non-uniform texts compared with state-of-the-art methods.",
"title": ""
},
{
"docid": "ee3d837390e1f53181cfb393a0af3cc6",
"text": "The telecommunications industry is highly competitive, which means that the mobile providers need a business intelligence model that can be used to achieve an optimal level of churners, as well as a minimal level of cost in marketing activities. Machine learning applications can be used to provide guidance on marketing strategies. Furthermore, data mining techniques can be used in the process of customer segmentation. The purpose of this paper is to provide a detailed analysis of the C.5 algorithm, within naive Bayesian modelling for the task of segmenting telecommunication customers behavioural profiling according to their billing and socio-demographic aspects. Results have been experimentally implemented.",
"title": ""
},
{
"docid": "eee9f9e1e8177b68a278eab025dae84b",
"text": "Herzberg et al. (1959) developed “Two Factors theory” to focus on working conditions necessary for employees to be motivated. Since Herzberg examined only white collars in his research, this article reviews later studies on motivation factors of blue collar workers verses white collars and suggests some hypothesis for further researches.",
"title": ""
},
{
"docid": "87eb54a981fca96475b73b3dfa99b224",
"text": "Cost-Sensitive Learning is a type of learning in data mining that takes the misclassification costs (and possibly other types of cost) into consideration. The goal of this type of learning is to minimize the total cost. The key difference between cost-sensitive learning and cost-insensitive learning is that cost-sensitive learning treats the different misclassifications differently. Costinsensitive learning does not take the misclassification costs into consideration. The goal of this type of learning is to pursue a high accuracy of classifying examples into a set of known classes.",
"title": ""
}
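The distinction drawn above between minimizing total cost and maximizing accuracy can be made concrete with a small sketch. The cost matrix and posterior below are made-up numbers, not taken from the passage; the point is only that the minimum-expected-cost decision can differ from the most-probable-class decision.

```python
import numpy as np

# Hypothetical cost matrix: cost[i][j] = cost of predicting class j when the true class is i.
# Here a false negative (missing class 1) is ten times as costly as a false positive.
cost = np.array([[0.0, 1.0],
                 [10.0, 0.0]])

def min_expected_cost_class(posterior):
    """Pick the class whose expected misclassification cost is lowest."""
    # expected_cost[j] = sum_i P(i|x) * cost[i][j]
    expected_cost = posterior @ cost
    return int(np.argmin(expected_cost))

p = np.array([0.8, 0.2])           # classifier says class 0 is more likely
print(np.argmax(p))                # cost-insensitive decision: 0
print(min_expected_cost_class(p))  # cost-sensitive decision: 1, since 0.2*10 > 0.8*1
```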
] |
scidocsrr
|
993e46995cd68116e6a198cfda636f35
|
Certified Defenses for Data Poisoning Attacks
|
[
{
"docid": "f226dccc4a7d83f2869fb3bd37b522e2",
"text": "Poisoning attack is identified as a severe security threat to machine learning algorithms. In many applications, for example, deep neural network (DNN) models collect public data as the inputs to perform re-training, where the input data can be poisoned. Although poisoning attack against support vector machines (SVM) has been extensively studied before, there is still very limited knowledge about how such attack can be implemented on neural networks (NN), especially DNNs. In this work, we first examine the possibility of applying traditional gradient-based method (named as the direct gradient method) to generate poisoned data against NNs by leveraging the gradient of the target model w.r.t. the normal data. We then propose a generative method to accelerate the generation rate of the poisoned data: an auto-encoder (generator) used to generate poisoned data is updated by a reward function of the loss, and the target NN model (discriminator) receives the poisoned data to calculate the loss w.r.t. the normal data. Our experiment results show that the generative method can speed up the poisoned data generation rate by up to 239.38× compared with the direct gradient method, with slightly lower model accuracy degradation. A countermeasure is also designed to detect such poisoning attack methods by checking the loss of the target model.",
"title": ""
},
{
"docid": "53a55e8aa8b3108cdc8d015eabb3476d",
"text": "We investigate a family of poisoning attacks against Support Vector Machines (SVM). Such attacks inject specially crafted training data that increases the SVM’s test error. Central to the motivation for these attacks is the fact that most learning algorithms assume that their training data comes from a natural or well-behaved distribution. However, this assumption does not generally hold in security-sensitive settings. As we demonstrate, an intelligent adversary can, to some extent, predict the change of the SVM’s decision function due to malicious input and use this ability to construct malicious data. The proposed attack uses a gradient ascent strategy in which the gradient is computed based on properties of the SVM’s optimal solution. This method can be kernelized and enables the attack to be constructed in the input space even for non-linear kernels. We experimentally demonstrate that our gradient ascent procedure reliably identifies good local maxima of the non-convex validation error surface, which significantly increases the classifier’s test error.",
"title": ""
},
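The gradient-ascent poisoning strategy described above can be illustrated with a simplified sketch. Note the assumptions: the original attack computes an analytic gradient from the SVM's optimality conditions, whereas this toy version retrains scikit-learn's SVC and estimates the gradient of a validation hinge loss by finite differences; the data, step size and iteration count are arbitrary.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_tr  = rng.normal(size=(80, 2));  y_tr  = (X_tr[:, 0] > 0).astype(int)
X_val = rng.normal(size=(200, 2)); y_val = (X_val[:, 0] > 0).astype(int)
y_val_pm = 2 * y_val - 1                      # labels in {-1, +1} for the hinge loss

def val_hinge(X, y):
    """Hinge loss on the validation set after retraining on (X, y)."""
    clf = SVC(kernel="linear", C=1.0).fit(X, y)
    margins = y_val_pm * clf.decision_function(X_val)
    return np.maximum(0.0, 1.0 - margins).mean()

# One poison point: start from a benign sample, flip its label, then climb the
# validation-loss surface using a finite-difference gradient estimate.
xp, yp = X_tr[0].copy(), 1 - y_tr[0]
eps, step = 1e-2, 1.0
for _ in range(15):
    base = val_hinge(np.vstack([X_tr, xp]), np.append(y_tr, yp))
    grad = np.array([
        (val_hinge(np.vstack([X_tr, xp + eps * np.eye(2)[d]]),
                   np.append(y_tr, yp)) - base) / eps
        for d in range(2)
    ])
    xp += step * grad                         # move so the validation loss increases

print("validation hinge loss, clean   :", val_hinge(X_tr, y_tr))
print("validation hinge loss, poisoned:",
      val_hinge(np.vstack([X_tr, xp]), np.append(y_tr, yp)))
```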
{
"docid": "6042abbb698a8d8be6ea87690db9fbd2",
"text": "Machine learning is used in a number of security related applications such as biometric user authentication, speaker identification etc. A type of causative integrity attack against machine le arning called Poisoning attack works by injecting specially crafted data points in the training data so as to increase the false positive rate of the classifier. In the context of the biometric authentication, this means that more intruders will be classified as valid user, and in case of speaker identification system, user A will be classified user B. In this paper, we examine poisoning attack against SVM and introduce Curie a method to protect the SVM classifier from the poisoning attack. The basic idea of our method is to identify the poisoned data points injected by the adversary and filter them out. Our method is light weight and can be easily integrated into existing systems. Experimental results show that it works very well in filtering out the poisoned data.",
"title": ""
}
] |
[
{
"docid": "d5e573802d6519a8da402f2e66064372",
"text": "Targeted cyberattacks play an increasingly significant role in disrupting the online social and economic model, not to mention the threat they pose to nation-states. A variety of components and techniques come together to bring about such attacks.",
"title": ""
},
{
"docid": "1aa01ca2f1b7f5ea8ed783219fe83091",
"text": "This paper presents NetKit, a modular toolkit for classifica tion in networked data, and a case-study of its application to a collection of networked data sets use d in prior machine learning research. Networked data are relational data where entities are inter connected, and this paper considers the common case where entities whose labels are to be estimated a re linked to entities for which the label is known. NetKit is based on a three-component framewo rk, comprising a local classifier, a relational classifier, and a collective inference procedur . Various existing relational learning algorithms can be instantiated with appropriate choices for the se three components and new relational learning algorithms can be composed by new combinations of c omponents. The case study demonstrates how the toolkit facilitates comparison of differen t learning methods (which so far has been lacking in machine learning research). It also shows how the modular framework allows analysis of subcomponents, to assess which, whether, and when partic ul components contribute to superior performance. The case study focuses on the simple but im portant special case of univariate network classification, for which the only information avai lable is the structure of class linkage in the network (i.e., only links and some class labels are avail ble). To our knowledge, no work previously has evaluated systematically the power of class-li nkage alone for classification in machine learning benchmark data sets. The results demonstrate clea rly th t simple network-classification models perform remarkably well—well enough that they shoul d be used regularly as baseline classifiers for studies of relational learning for networked dat a. The results also show that there are a small number of component combinations that excel, and that different components are preferable in different situations, for example when few versus many la be s are known.",
"title": ""
},
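To make the three-component framework above more tangible, here is a small sketch of the univariate special case: the "local classifier" is just the class prior, the relational classifier is a weighted vote over neighbours, and collective inference is a relaxation-labelling-style iteration. This illustration was written for this summary, not taken from NetKit, and the toy graph and labels are invented.

```python
import numpy as np

# Toy undirected graph as an adjacency list, two classes {0, 1}.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
known = {0: 0, 1: 0, 5: 1}                  # nodes with known labels
unknown = [n for n in adj if n not in known]

# "Local classifier": with only link structure available, fall back to the prior.
prior = np.mean(list(known.values()))
p = {n: (known[n] if n in known else prior) for n in adj}   # P(class=1) per node

# Weighted-vote relational neighbour + iterative (relaxation-labelling style) inference.
for _ in range(50):
    new_p = dict(p)
    for n in unknown:
        new_p[n] = np.mean([p[m] for m in adj[n]])
    p = new_p

print({n: round(p[n], 2) for n in unknown})  # estimated P(class=1) for unlabeled nodes
```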
{
"docid": "6d31096c16817f13641b23ae808b0dce",
"text": "In the competitive environment of the internet, retaining and growing one's user base is of major concern to most web services. Furthermore, the economic model of many web services is allowing free access to most content, and generating revenue through advertising. This unique model requires securing user time on a site rather than the purchase of good which makes it crucially important to create new kinds of metrics and solutions for growth and retention efforts for web services. In this work, we address this problem by proposing a new retention metric for web services by concentrating on the rate of user return. We further apply predictive analysis to the proposed retention metric on a service, as a means for characterizing lost customers. Finally, we set up a simple yet effective framework to evaluate a multitude of factors that contribute to user return. Specifically, we define the problem of return time prediction for free web services. Our solution is based on the Cox's proportional hazard model from survival analysis. The hazard based approach offers several benefits including the ability to work with censored data, to model the dynamics in user return rates, and to easily incorporate different types of covariates in the model. We compare the performance of our hazard based model in predicting the user return time and in categorizing users into buckets based on their predicted return time, against several baseline regression and classification methods and find the hazard based approach to be superior.",
"title": ""
},
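A hedged sketch of the hazard-based approach described above, using the lifelines library's Cox proportional-hazards implementation. The data frame, its column names and the small penalizer are all illustrative assumptions; the paper's actual covariates and data are not reproduced here.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical per-user records: days until the user returned to the service
# ("duration"), whether the return was observed within the study window
# ("returned", 0 = censored), and two illustrative covariates.
df = pd.DataFrame({
    "duration":         [2, 7, 30, 14, 3, 21, 9, 30, 5, 25],
    "returned":         [1, 1, 0, 1, 1, 0, 1, 0, 1, 1],
    "visits_last_week": [9, 4, 0, 2, 7, 1, 5, 0, 3, 2],
    "used_mobile_app":  [1, 0, 0, 1, 1, 1, 0, 0, 1, 0],
})

# A small penalizer keeps this tiny toy example numerically stable.
cph = CoxPHFitter(penalizer=0.1)
cph.fit(df, duration_col="duration", event_col="returned")
cph.print_summary()                      # hazard ratios per covariate
# Rank users by their predicted risk of (early) return:
print(cph.predict_partial_hazard(df))
```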
{
"docid": "8cc3af1b9bb2ed98130871c7d5bae23a",
"text": "BACKGROUND\nAnimal experiments have convincingly demonstrated that prenatal maternal stress affects pregnancy outcome and results in early programming of brain functions with permanent changes in neuroendocrine regulation and behaviour in offspring.\n\n\nAIM\nTo evaluate the existing evidence of comparable effects of prenatal stress on human pregnancy and child development.\n\n\nSTUDY DESIGN\nData sources used included a computerized literature search of PUBMED (1966-2001); Psychlit (1987-2001); and manual search of bibliographies of pertinent articles.\n\n\nRESULTS\nRecent well-controlled human studies indicate that pregnant women with high stress and anxiety levels are at increased risk for spontaneous abortion and preterm labour and for having a malformed or growth-retarded baby (reduced head circumference in particular). Evidence of long-term functional disorders after prenatal exposure to stress is limited, but retrospective studies and two prospective studies support the possibility of such effects. A comprehensive model of putative interrelationships between maternal, placental, and fetal factors is presented.\n\n\nCONCLUSIONS\nApart from the well-known negative effects of biomedical risks, maternal psychological factors may significantly contribute to pregnancy complications and unfavourable development of the (unborn) child. These problems might be reduced by specific stress reduction in high anxious pregnant women, although much more research is needed.",
"title": ""
},
{
"docid": "4b78f107ee628cefaeb80296e4f9ae27",
"text": "On shared-memory systems, Cilk-style work-stealing has been used to effectively parallelize irregular task-graph based applications such as Unbalanced Tree Search (UTS). There are two main difficulties in extending this approach to distributed memory. In the shared memory approach, thieves (nodes without work) constantly attempt to asynchronously steal work from randomly chosen victims until they find work. In distributed memory, thieves cannot autonomously steal work from a victim without disrupting its execution. When work is sparse, this results in performance degradation. In essence, a direct extension of traditional work-stealing to distributed memory violates the work-first principle underlying work-stealing. Further, thieves spend useless CPU cycles attacking victims that have no work, resulting in system inefficiencies in multi-programmed contexts. Second, it is non-trivial to detect active distributed termination (detect that programs at all nodes are looking for work, hence there is no work). This problem is well-studied and requires careful design for good performance. Unfortunately, in most existing languages/frameworks, application developers are forced to implement their own distributed termination detection.\n In this paper, we develop a simple set of ideas that allow work-stealing to be efficiently extended to distributed memory. First, we introduce lifeline graphs: low-degree, low-diameter, fully connected directed graphs. Such graphs can be constructed from k-dimensional hypercubes. When a node is unable to find work after w unsuccessful steals, it quiesces after informing the outgoing edges in its lifeline graph. Quiescent nodes do not disturb other nodes. A quiesced node is reactivated when work arrives from a lifeline and itself shares this work with those of its incoming lifelines that are activated. Termination occurs precisely when computation at all nodes has quiesced. In a language such as X10, such passive distributed termination can be detected automatically using the finish construct -- no application code is necessary.\n Our design is implemented in a few hundred lines of X10. On the binomial tree described in olivier:08}, the program achieve 87% efficiency on an Infiniband cluster of 1024 Power7 cores, with a peak throughput of 2.37 GNodes/sec. It achieves 87% efficiency on a Blue Gene/P with 2048 processors, and a peak throughput of 0.966 GNodes/s. All numbers are relative to single core sequential performance. This implementation has been refactored into a reusable global load balancing framework. Applications can use this framework to obtain global load balance with minimal code changes.\n In summary, we claim: (a) the first formulation of UTS that does not involve application level global termination detection, (b) the introduction of lifeline graphs to reduce failed steals (c) the demonstration of simple lifeline graphs based on k-hypercubes, (d) performance with superior efficiency (or the same efficiency but over a wider range) than published results on UTS. In particular, our framework can deliver the same or better performance as an unrestricted random work-stealing implementation, while reducing the number of attempted steals.",
"title": ""
},
{
"docid": "821cefef9933d6a02ec4b9098f157062",
"text": "Scientists debate whether people grow closer to their friends through social networking sites like Facebook, whether those sites displace more meaningful interaction, or whether they simply reflect existing ties. Combining server log analysis and longitudinal surveys of 3,649 Facebook users reporting on relationships with 26,134 friends, we find that communication on the site is associated with changes in reported relationship closeness, over and above effects attributable to their face-to-face, phone, and email contact. Tie strength increases with both one-on-one communication, such as posts, comments, and messages, and through reading friends' broadcasted content, such as status updates and photos. The effect is greater for composed pieces, such as comments, posts, and messages than for 'one-click' actions such as 'likes.' Facebook has a greater impact on non-family relationships and ties who do not frequently communicate via other channels.",
"title": ""
},
{
"docid": "b73526f1fb0abb4373421994dbd07822",
"text": "in our country around 2.78% of peoples are not able to speak (dumb). Their communications with others are only using the motion of their hands and expressions. We proposed a new technique called artificial speaking mouth for dumb people. It will be very helpful to them for conveying their thoughts to others. Some peoples are easily able to get the information from their motions. The remaining is not able to understand their way of conveying the message. In order to overcome the complexity the artificial mouth is introduced for the dumb peoples. This system is based on the motion sensor. According to dumb people, for every motion they have a meaning. That message is kept in a database. Likewise all templates are kept in the database. In the real time the template database is fed into a microcontroller and the motion sensor is fixed in their hand. For every action the motion sensors get accelerated and give the signal to the microcontroller. The microcontroller matches the motion with the database and produces the speech signal. The output of the system is using the speaker. By properly updating the database the dumb will speak like a normal person using the artificial mouth. The system also includes a text to speech conversion (TTS) block that interprets the matched gestures.",
"title": ""
},
{
"docid": "2abd75766d4875921edd4d6d63d5d617",
"text": "Wireless sensor networks typically consist of a large number of sensor nodes embedded in a physical space. Such sensors are low-power devices that are primarily used for monitoring several physical phenomena, potentially in remote harsh environments. Spatial and temporal dependencies between the readings at these nodes highly exist in such scenarios. Statistical contextual information encodes these spatio-temporal dependencies. It enables the sensors to locally predict their current readings based on their own past readings and the current readings of their neighbors. In this paper, we introduce context-aware sensors. Specifically, we propose a technique for modeling and learning statistical contextual information in sensor networks. Our approach is based on Bayesian classifiers; we map the problem of learning and utilizing contextual information to the problem of learning the parameters of a Bayes classifier, and then making inferences, respectively. We propose a scalable and energy-efficient procedure for online learning of these parameters in-network, in a distributed fashion. We discuss applications of our approach in discovering outliers and detection of faulty sensors, approximation of missing values, and in-network sampling. We experimentally analyze our approach in two applications, tracking and monitoring.",
"title": ""
},
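The mapping from contextual prediction to naive Bayes parameter learning and inference, as described above, can be sketched in a few lines. The discretized readings, the tiny "history" and the centralized training loop are all simplifications; in the paper the parameters are learned in-network, in a distributed and energy-efficient fashion.

```python
import numpy as np
from collections import defaultdict

# Toy history: each row = (own past reading, neighbour A, neighbour B), values
# discretized to {0: low, 1: high}.  The history itself is hypothetical.
history = np.array([
    [0, 0, 0], [0, 0, 1], [0, 0, 0], [1, 1, 1],
    [1, 1, 0], [1, 1, 1], [0, 0, 0], [1, 1, 1],
])

def train_naive_bayes(data):
    classes = np.unique(data[:, 0])
    prior = {c: np.mean(data[:, 0] == c) for c in classes}
    cond = defaultdict(dict)      # cond[(feature, value)][class] = P(value | class)
    for c in classes:
        rows = data[data[:, 0] == c]
        for f in range(1, data.shape[1]):
            for v in (0, 1):
                cond[(f, v)][c] = (np.sum(rows[:, f] == v) + 1) / (len(rows) + 2)  # Laplace
    return prior, cond, classes

def posterior(prior, cond, classes, neighbours):
    scores = {c: prior[c] * np.prod([cond[(f + 1, v)][c] for f, v in enumerate(neighbours)])
              for c in classes}
    z = sum(scores.values())
    return {c: s / z for c, s in scores.items()}

prior, cond, classes = train_naive_bayes(history)
post = posterior(prior, cond, classes, neighbours=[1, 1])
print(post)   # own reading is most likely "high"; a reported "low" would look faulty
```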
{
"docid": "fdc16a2774921124576c8399de2701d4",
"text": "This paper discusses a method of frequency-shift keying (FSK) demodulation and Manchester-bit decoding using a digital signal processing (DSP) approach. The demodulator is implemented on a single-channel high-speed digital radio board. The board architecture contains a high-speed A/D converter, a digital receiver chip, a host DSP processing chip, and a back-end D/A converter [2]. The demodulator software is booted off an on-board EPROM and run on the DSP chip [3]. The algorithm accepts complex digital baseband data available from the front-end digital receiver chip [2]. The target FSK modulation is assumed to be in the RF range (VHF or UHF signals). A block diagram of the single-channel digital radio is shown in Figure 1 [2].",
"title": ""
},
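The receive chain described above (complex baseband in, FSK demodulation, Manchester-bit decoding) can be prototyped offline in a few lines of NumPy. This is an illustrative sketch, not the board's DSP code: the sample rate, bit rate and deviation are made-up values, and the Manchester polarity convention (high-to-low half encodes a '1') is one of the two common choices.

```python
import numpy as np

fs, bit_rate, dev = 48_000, 1_200, 3_000        # illustrative parameters
spb = fs // bit_rate                            # samples per data bit

# --- Transmitter (test data only): Manchester-encode bits, then FSK-modulate ---
bits = np.array([1, 0, 1, 1, 0, 0, 1, 0])
chips = np.repeat(np.stack([bits, 1 - bits], axis=1).ravel(), spb // 2)  # 10 / 01 halves
freq = dev * (2.0 * chips - 1.0)                # +dev for '1' chips, -dev for '0' chips
phase = 2 * np.pi * np.cumsum(freq) / fs
iq = np.exp(1j * phase)                         # complex-baseband FSK signal

# --- Receiver: frequency discriminator, then Manchester decoding ---
disc = np.angle(iq[1:] * np.conj(iq[:-1]))      # instantaneous frequency estimate
disc = np.concatenate([[disc[0]], disc])
decoded = []
for k in range(len(bits)):
    first  = disc[k * spb            : k * spb + spb // 2].mean()
    second = disc[k * spb + spb // 2 : (k + 1) * spb].mean()
    decoded.append(1 if first > second else 0)  # high-to-low half-pair encodes a '1'

print(bits.tolist(), decoded, sep="\n")
```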
{
"docid": "99c088268633c19a8c4789c58c4c9aca",
"text": "Executing agile quadrotor maneuvers with cablesuspended payloads is a challenging problem and complications induced by the dynamics typically require trajectory optimization. State-of-the-art approaches often need significant computation time and complex parameter tuning. We present a novel dynamical model and a fast trajectory optimization algorithm for quadrotors with a cable-suspended payload. Our first contribution is a new formulation of the suspended payload behavior, modeled as a link attached to the quadrotor with a combination of two revolute joints and a prismatic joint, all being passive. Differently from state of the art, we do not require the use of hybrid modes depending on the cable tension. Our second contribution is a fast trajectory optimization technique for the aforementioned system. Our model enables us to pose the trajectory optimization problem as a Mathematical Program with Complementarity Constraints (MPCC). Desired behaviors of the system (e.g., obstacle avoidance) can easily be formulated within this framework. We show that our approach outperforms the state of the art in terms of computation speed and guarantees feasibility of the trajectory with respect to both the system dynamics and control input saturation, while utilizing far fewer tuning parameters. We experimentally validate our approach on a real quadrotor showing that our method generalizes to a variety of tasks, such as flying through desired waypoints while avoiding obstacles, or throwing the payload toward a desired target. To the best of our knowledge, this is the first time that three-dimensional, agile maneuvers exploiting the system dynamics have been achieved on quadrotors with a cable-suspended payload. SUPPLEMENTARY MATERIAL This paper is accompanied by a video showcasing the experiments: https://youtu.be/s9zb5MRXiHA",
"title": ""
},
{
"docid": "2ca0c604b449e1495bd57d96381e0e1f",
"text": "The data ̄ow program graph execution model, or data ̄ow for short, is an alternative to the stored-program (von Neumann) execution model. Because it relies on a graph representation of programs, the strengths of the data ̄ow model are very much the complements of those of the stored-program one. In the last thirty or so years since it was proposed, the data ̄ow model of computation has been used and developed in very many areas of computing research: from programming languages to processor design, and from signal processing to recon®gurable computing. This paper is a review of the current state-of-the-art in the applications of the data ̄ow model of computation. It focuses on three areas: multithreaded computing, signal processing and recon®gurable computing. Ó 1999 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "24041042e1216a3bbf6aab89fa6f0b93",
"text": "With the increasing demand for renewable energy, distributed power included in fuel cells have been studied and developed as a future energy source. For this system, a power conversion circuit is necessary to interface the generated power to the utility. In many cases, a high step-up DC/DC converter is needed to boost low input voltage to high voltage output. Conventional methods using cascade DC/DC converters cause extra complexity and higher cost. The conventional topologies to get high output voltage use flyback DC/DC converters. They have the leakage components that cause stress and loss of energy that results in low efficiency. This paper presents a high boost converter with a voltage multiplier and a coupled inductor. The secondary voltage of the coupled inductor is rectified using a voltage multiplier. High boost voltage is obtained with low duty cycle. Theoretical analysis and experimental results verify the proposed solutions using a 300 W prototype.",
"title": ""
},
{
"docid": "751644f811112a4ac7f1ead5f456056b",
"text": "Camera-based text processing has attracted considerable attention and numerous methods have been proposed. However, most of these methods have focused on the scene text detection problem and relatively little work has been performed on camera-captured document images. In this paper, we present a text-line detection algorithm for camera-captured document images, which is an essential step toward document understanding. In particular, our method is developed by incorporating state estimation (an extension of scale selection) into a connected component (CC)-based framework. To be precise, we extract CCs with the maximally stable extremal region algorithm and estimate the scales and orientations of CCs from their projection profiles. Since this state estimation facilitates a merging process (bottom-up clustering) and provides a stopping criterion, our method is able to handle arbitrarily oriented text-lines and works robustly for a range of scales. Finally, a text-line/non-text-line classifier is trained and non-text candidates (e.g., background clutters) are filtered out with the classifier. Experimental results show that the proposed method outperforms conventional methods on a standard dataset and works well for a new challenging dataset.",
"title": ""
},
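The first stage of the pipeline described above (connected component extraction with MSER) is available directly in OpenCV; the rest of this sketch replaces the paper's state estimation and bottom-up merging with a much cruder baseline-proximity grouping, purely to show the shape of the computation. The input filename is a placeholder.

```python
import cv2

img = cv2.imread("document_page.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical input
assert img is not None, "replace document_page.jpg with a real camera-captured page"

# Step 1: extract connected components with the maximally stable extremal region detector.
mser = cv2.MSER_create()
regions, bboxes = mser.detectRegions(img)

# Step 2 (very roughly): group components whose bounding boxes share a similar
# baseline into candidate text lines; the paper uses projection-profile-based
# scale/orientation estimation instead of this crude heuristic.
boxes = sorted(bboxes.tolist(), key=lambda b: (b[1], b[0]))    # (x, y, w, h)
lines, current = [], [boxes[0]]
for b in boxes[1:]:
    prev = current[-1]
    if abs(b[1] - prev[1]) < 0.6 * max(b[3], prev[3]):         # similar vertical position
        current.append(b)
    else:
        lines.append(current)
        current = [b]
lines.append(current)
print(f"{len(regions)} components grouped into {len(lines)} candidate text lines")
```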
{
"docid": "efba71635ca38b4588d3e4200d655fee",
"text": "BACKGROUND\nCircumcisions and cesarian sections are common procedures. Although complications to the newborn child fortunately are rare, it is important to emphasize the potential significance of this problem and its frequent iatrogenic etiology. The authors present 7 cases of genitourinary trauma in newborns, including surgical management and follow-up.\n\n\nMETHODS\nThe authors relate 7 recent cases of genitourinary trauma in newborns from a children's hospital in a major metropolitan area.\n\n\nRESULTS\nCase 1 and 2: Two infants suffered degloving injuries to both the prepuce and penile shaft from a Gomco clamp. Successful full-thickness skin grafting using the previously excised foreskin was used in 1 child. Case 3, 4, and 5: A Mogen clamp caused glans injuries in 3 infants. In 2, hemorrhage from the severed glans was controlled with topical epinephrine; the glans healed with a flattened appearance. Another infant sustained a laceration ventrally, requiring a delayed modified meatal advancement glanoplasty to correct the injury. Case 6: A male infant suffered a ventral slit and division of the ventral urethra before placement of a Gomco clamp. Formal hypospadias repair was required. Case 7: An emergent cesarean section resulted in a grade 4-perineal laceration in a female infant. The vaginal tear caused by the surgeon's finger, extended up to the posterior insertion of the cervix and into the rectum. The infant successfully underwent an emergent multilayered repair.\n\n\nCONCLUSIONS\nGenitourinary trauma in the newborn is rare but often necessitates significant surgical intervention. Circumcision often is the causative event. There has been only 1 prior report of a perineal injury similar to case 7, with a fatal outcome.",
"title": ""
},
{
"docid": "d0f14357e0d675c99d4eaa1150b9c55e",
"text": "Purpose – The purpose of this research is to investigate if, and in that case, how and what the egovernment field can learn from user participation concepts and theories in general IS research. We aim to contribute with further understanding of the importance of citizen participation and involvement within the e-government research body of knowledge and when developing public eservices in practice. Design/Methodology/Approach – The analysis in the article is made from a comparative, qualitative case study of two e-government projects. Three analysis themes are induced from the literature review; practice of participation, incentives for participation, and organization of participation. These themes are guiding the comparative analysis of our data with a concurrent openness to interpretations from the field. Findings – The main results in this article are that the e-government field can get inspiration and learn from methods and approaches in traditional IS projects concerning user participation, but in egovernment we also need methods to handle the challenges that arise when designing public e-services for large, heterogeneous user groups. Citizen engagement cannot be seen as a separate challenge in egovernment, but rather as an integrated part of the process of organizing, managing, and performing egovernment projects. Our analysis themes of participation generated from literature; practice, incentives and organization can be used in order to highlight, analyze, and discuss main issues regarding the challenges of citizen participation within e-government. This is an important implication based on our study that contributes both to theory on and practice of e-government. Practical implications – Lessons to learn from this study concern that many e-government projects have a public e-service as one outcome and an internal e-administration system as another outcome. A dominating internal, agency perspective in such projects might imply that citizens as the user group of the e-service are only seen as passive receivers of the outcome – not as active participants in the development. By applying the analysis themes, proposed in this article, citizens as active participants can be thoroughly discussed when initiating (or evaluating) an e-government project. Originality/value – This article addresses challenges regarding citizen participation in e-government development projects. User participation is well-researched within the IS discipline, but the egovernment setting implies new challenges, that are not explored enough.",
"title": ""
},
{
"docid": "56a4a9b20391f13e7ced38586af9743b",
"text": "The most common type of nasopharyngeal tumor is nasopharyngeal carcinoma. The etiology is multifactorial with race, genetics, environment and Epstein-Barr virus (EBV) all playing a role. While rare in Caucasian populations, it is one of the most frequent nasopharyngeal cancers in Chinese, and has endemic clusters in Alaskan Eskimos, Indians, and Aleuts. Interestingly, as native-born Chinese migrate, the incidence diminishes in successive generations, although still higher than the native population. EBV is nearly always present in NPC, indicating an oncogenic role. There are raised antibodies, higher titers of IgA in patients with bulky (large) tumors, EBERs (EBV encoded early RNAs) in nearly all tumor cells, and episomal clonal expansion (meaning the virus entered the tumor cell before clonal expansion). Consequently, the viral titer can be used to monitor therapy or possibly as a diagnostic tool in the evaluation of patients who present with a metastasis from an unknown primary. The effect of environmental carcinogens, especially those which contain a high levels of volatile nitrosamines are also important in the etiology of NPC. Chinese eat salted fish, specifically Cantonese-style salted fish, and especially during early life. Perhaps early life (weaning period) exposure is important in the ‘‘two-hit’’ hypothesis of cancer development. Smoking, cooking, and working under poor ventilation, the use of nasal oils and balms for nose and throat problems, and the use of herbal medicines have also been implicated but are in need of further verification. Likewise, chemical fumes, dusts, formaldehyde exposure, and radiation have all been implicated in this complicated disorder. Various human leukocyte antigens (HLA) are also important etiologic or prognostic indicators in NPC. While histocompatibility profiles of HLA-A2, HLA-B17 and HLA-Bw46 show increased risk for developing NPC, there is variable expression depending on whether they occur alone or jointly, further conferring a variable prognosis (B17 is associated with a poor and A2B13 with a good prognosis, respectively).",
"title": ""
},
{
"docid": "320925a50d9fe1e4f76180b7d141dd27",
"text": "extraction from documents J. Fan A. Kalyanpur D. C. Gondek D. A. Ferrucci Access to a large amount of knowledge is critical for success at answering open-domain questions for DeepQA systems such as IBM Watsoni. Formal representation of knowledge has the advantage of being easy to reason with, but acquisition of structured knowledge in open domains from unstructured data is often difficult and expensive. Our central hypothesis is that shallow syntactic knowledge and its implied semantics can be easily acquired and can be used in many areas of a question-answering system. We take a two-stage approach to extract the syntactic knowledge and implied semantics. First, shallow knowledge from large collections of documents is automatically extracted. Second, additional semantics are inferred from aggregate statistics of the automatically extracted shallow knowledge. In this paper, we describe in detail what kind of shallow knowledge is extracted, how it is automatically done from a large corpus, and how additional semantics are inferred from aggregate statistics. We also briefly discuss the various ways extracted knowledge is used throughout the IBM DeepQA system.",
"title": ""
},
{
"docid": "8bb5a38908446ca4e6acb4d65c4c817c",
"text": "Column-oriented database systems have been a real game changer for the industry in recent years. Highly tuned and performant systems have evolved that provide users with the possibility of answering ad hoc queries over large datasets in an interactive manner. In this paper we present the column-oriented datastore developed as one of the central components of PowerDrill. It combines the advantages of columnar data layout with other known techniques (such as using composite range partitions) and extensive algorithmic engineering on key data structures. The main goal of the latter being to reduce the main memory footprint and to increase the efficiency in processing typical user queries. In this combination we achieve large speed-ups. These enable a highly interactive Web UI where it is common that a single mouse click leads to processing a trillion values in the underlying dataset.",
"title": ""
},
{
"docid": "0793d82c1246c777dce673d8f3146534",
"text": "CONTEXT\nMedical schools are known to be stressful environments for students and hence medical students have been believed to experience greater incidences of depression than others. We evaluated the global prevalence of depression amongst medical students, as well as epidemiological, psychological, educational and social factors in order to identify high-risk groups that may require targeted interventions.\n\n\nMETHODS\nA systematic search was conducted in online databases for cross-sectional studies examining prevalences of depression among medical students. Studies were included only if they had used standardised and validated questionnaires to evaluate the prevalence of depression in a group of medical students. Random-effects models were used to calculate the aggregate prevalence and pooled odds ratios (ORs). Meta-regression was carried out when heterogeneity was high.\n\n\nRESULTS\nFindings for a total of 62 728 medical students and 1845 non-medical students were pooled across 77 studies and examined. Our analyses demonstrated a global prevalence of depression amongst medical students of 28.0% (95% confidence interval [CI] 24.2-32.1%). Female, Year 1, postgraduate and Middle Eastern medical students were more likely to be depressed, but the differences were not statistically significant. By year of study, Year 1 students had the highest rates of depression at 33.5% (95% CI 25.2-43.1%); rates of depression then gradually decreased to reach 20.5% (95% CI 13.2-30.5%) at Year 5. This trend represented a significant decline (B = - 0.324, p = 0.005). There was no significant difference in prevalences of depression between medical and non-medical students. The overall mean frequency of suicide ideation was 5.8% (95% CI 4.0-8.3%), but the mean proportion of depressed medical students who sought treatment was only 12.9% (95% CI 8.1-19.8%).\n\n\nCONCLUSIONS\nDepression affects almost one-third of medical students globally but treatment rates are relatively low. The current findings suggest that medical schools and health authorities should offer early detection and prevention programmes, and interventions for depression amongst medical students before graduation.",
"title": ""
}
] |
scidocsrr
|
fe406b504e817770fd8d592d6827b95f
|
Bitcoin Covenants
|
[
{
"docid": "112b9294f4d606a0112fe80742698184",
"text": "Peer-to-peer systems are typically designed around the assumption that all peers will willingly contribute resources to a global pool. They thus suffer from freeloaders, that is, participants who consume many more resources than they contribute. In this paper, we propose a general economic framework for avoiding freeloaders in peer-to-peer systems. Our system works by keeping track of the resource consumption and resource contribution of each participant. The overall standing of each participant in the system is represented by a single scalar value, called their ka ma. A set of nodes, called a bank-set , keeps track of each node’s karma, increasing it as resources are contributed, and decreasing it as they are consumed. Our framework is resistant to malicious attempts by the resource provider, consumer, and a fraction of the members of the bank set. We illustrate the application of this framework to a peer-to-peer filesharing application.",
"title": ""
}
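The accounting rule at the heart of the karma framework described above can be shown with a toy ledger. The real system replicates each account across a bank-set of nodes and secures updates cryptographically; this single-object sketch, with invented numbers, only demonstrates how contribution raises and consumption lowers a peer's standing.

```python
class KarmaBank:
    """Toy single-node stand-in for the replicated bank-set described above."""

    def __init__(self, initial_karma=100):
        self.initial = initial_karma
        self.balance = {}

    def _account(self, peer):
        return self.balance.setdefault(peer, self.initial)

    def transfer(self, consumer, provider, amount):
        """Charge the consumer and credit the provider for a completed download."""
        if self._account(consumer) < amount:
            return False            # not enough karma: the request is refused
        self.balance[consumer] -= amount
        self.balance[provider] = self._account(provider) + amount
        return True

bank = KarmaBank()
print(bank.transfer("alice", "bob", 30))   # True  -> alice 70, bob 130
print(bank.transfer("alice", "bob", 90))   # False -> freeloading request refused
print(bank.balance)
```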
] |
[
{
"docid": "dbc7e759ce30307475194adb4ca37f1f",
"text": "Pharyngeal arches appear in the 4th and 5th weeks of development of the human embryo. The 1st pharyngeal arch develops into the incus and malleus, premaxilla, maxilla, zygomatic bone; part of the temporal bone, the mandible and it contributes to the formation of bones of the middle ear. The musculature of the 1st pharyngeal arch includes muscles of mastication, anterior belly of the digastric mylohyoid, tensor tympani and tensor palatini. The second pharyngeal arch gives rise to the stapes, styloid process of the temporal bone, stylohyoid ligament, the lesser horn and upper part of the body of the hyoid bone. The stapedius muscle, stylohyoid, posterior belly of the digastric, auricular and muscles of facial expressional all derive from the 2nd pharyngeal arch. Otocephaly has been classified as a defect of blastogenesis, with structural defects primarily involving the first and second branchial arch derivatives. It may also result in dysmorphogenesis of other midline craniofacial field structures, such as the forebrain and axial body structures.",
"title": ""
},
{
"docid": "ddca576f0ceea86dab6b85281e359f3a",
"text": "Fingerprint recognition plays an important role in many commercial applications and is used by millions of people every day, e.g. for unlocking mobile phones. Fingerprint image segmentation is typically the first processing step of most fingerprint algorithms and it divides an image into foreground, the region of interest, and background. Two types of error can occur during this step which both have a negative impact on the recognition performance: 'true' foreground can be labeled as background and features like minutiae can be lost, or conversely 'true' background can be misclassified as foreground and spurious features can be introduced. The contribution of this paper is threefold: firstly, we propose a novel factorized directional bandpass (FDB) segmentation method for texture extraction based on the directional Hilbert transform of a Butterworth bandpass (DHBB) filter interwoven with soft-thresholding. Secondly, we provide a manually marked ground truth segmentation for 10560 images as an evaluation benchmark. Thirdly, we conduct a systematic performance comparison between the FDB method and four of the most often cited fingerprint segmentation algorithms showing that the FDB segmentation method clearly outperforms these four widely used methods. The benchmark and the implementation of the FDB method are made publicly available.",
"title": ""
},
{
"docid": "c9ecb6ac5417b5fea04e5371e4250361",
"text": "Deep learning has proven itself as a successful set of models for learning useful semantic representations of data. These, however, are mostly implicitly learned as part of a classification task. In this paper we propose the triplet network model, which aims to learn useful representations by distance comparisons. A similar model was defined by Wang et al. (2014), tailor made for learning a ranking for image information retrieval. Here we demonstrate using various datasets that our model learns a better representation than that of its immediate competitor, the Siamese network. We also discuss future possible usage as a framework for unsupervised learning.",
"title": ""
},
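A sketch of the distance-comparison objective that a triplet network trains against. Note the hedge: the paper itself passes the two embedding distances through a softmax and regresses on the result, while the snippet below shows the more common hinge-with-margin formulation, evaluated on random stand-in embeddings.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge-style triplet loss on embedding vectors of shape (batch, dim)."""
    d_pos = np.sum((anchor - positive) ** 2, axis=1)   # anchor-to-positive distance
    d_neg = np.sum((anchor - negative) ** 2, axis=1)   # anchor-to-negative distance
    return np.maximum(0.0, d_pos - d_neg + margin).mean()

rng = np.random.default_rng(0)
a = rng.normal(size=(4, 8))
p = a + 0.1 * rng.normal(size=(4, 8))      # same-class samples should end up close
n = rng.normal(size=(4, 8))                # different-class samples should be far
print(triplet_loss(a, p, n))
```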
{
"docid": "78d7c61f7ca169a05e9ae1393712cd69",
"text": "Designing an automatic solver for math word problems has been considered as a crucial step towards general AI, with the ability of natural language understanding and logical inference. The state-of-the-art performance was achieved by enumerating all the possible expressions from the quantities in the text and customizing a scoring function to identify the one with the maximum probability. However, it incurs exponential search space with the number of quantities and beam search has to be applied to trade accuracy for efficiency. In this paper, we make the first attempt of applying deep reinforcement learning to solve arithmetic word problems. The motivation is that deep Q-network has witnessed success in solving various problems with big search space and achieves promising performance in terms of both accuracy and running time. To fit the math problem scenario, we propose our MathDQN that is customized from the general deep reinforcement learning framework. Technically, we design the states, actions, reward function, together with a feed-forward neural network as the deep Q-network. Extensive experimental results validate our superiority over state-ofthe-art methods. Our MathDQN yields remarkable improvement on most of datasets and boosts the average precision among all the benchmark datasets by 15%.",
"title": ""
},
{
"docid": "6179fabc2e7d0cb2fe065c2f6a580872",
"text": "High-throughput bioinformatic analyses increasingly rely on pipeline frameworks to process sequence and metadata. Modern implementations of these frameworks differ on three key dimensions: using an implicit or explicit syntax, using a configuration, convention or class-based design paradigm and offering a command line or workbench interface. Here I survey and compare the design philosophies of several current pipeline frameworks. I provide practical recommendations based on analysis requirements and the user base.",
"title": ""
},
{
"docid": "521b77c549c0fdb7edb609fbde7f6abc",
"text": "User recommender systems are a key component in any on-line social networking platform: they help the users growing their network faster, thus driving engagement and loyalty.\n In this paper we study link prediction with explanations for user recommendation in social networks. For this problem we propose WTFW (\"Who to Follow and Why\"), a stochastic topic model for link prediction over directed and nodes-attributed graphs. Our model not only predicts links, but for each predicted link it decides whether it is a \"topical\" or a \"social\" link, and depending on this decision it produces a different type of explanation.\n A topical link is recommended between a user interested in a topic and a user authoritative in that topic: the explanation in this case is a set of binary features describing the topic responsible of the link creation. A social link is recommended between users which share a large social neighborhood: in this case the explanation is the set of neighbors which are more likely to be responsible for the link creation.\n Our experimental assessment on real-world data confirms the accuracy of WTFW in the link prediction and the quality of the associated explanations.",
"title": ""
},
{
"docid": "6d0d8d1bc15674df114c47d9b8e06718",
"text": "For a wireless sensor network (WSN) with a large number of sensors, a decision fusion rule using the total number of detections reported by local sensors for hypothesis testing, is proposed and studied. Based on a signal attenuation model where the received signal power decays as the distance from the target increases, the system level detection performance, namely probabilities of detection and false alarms, are derived and calculated. Without the knowledge of local sensors’ performances and at low signal to noise ratio (SNR), this fusion rule can still achieve very good system level detection performance if the number of sensors is sufficiently large. The problem of designing an optimum local sensor level threshold is investigated. For various system parameters, the optimal thresholds are found numerically. Guidelines on selecting the optimal local threshold have been presented.",
"title": ""
},
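Under the simplifying assumption that all local sensors are identical (the paper's model actually makes the local detection probability depend on the sensor-to-target distance), the counting rule described above has closed-form system-level performance given by binomial tails. The numbers below are made up for illustration.

```python
from scipy.stats import binom

# Hypothetical homogeneous case: N sensors, each with local detection probability
# pd and false-alarm probability pf; the fusion centre counts local detections and
# declares "target present" when the count reaches a threshold T.
N, pd, pf, T = 100, 0.35, 0.05, 15

system_pf = binom.sf(T - 1, N, pf)     # P(count >= T | no target)
system_pd = binom.sf(T - 1, N, pd)     # P(count >= T | target present)
print(f"system Pfa = {system_pf:.3e}, system Pd = {system_pd:.3f}")

# Sweep the threshold to pick the smallest T meeting a false-alarm budget.
for t in range(1, N + 1):
    if binom.sf(t - 1, N, pf) <= 1e-3:
        print("smallest T meeting Pfa <= 1e-3:", t)
        break
```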
{
"docid": "d009bc8940764cec4ebaac79aad99424",
"text": "Effective parallel programming for GPUs requires careful attention to several factors, including ensuring coalesced access of data from global memory. There is a need for tools that can provide feedback to users about statements in a GPU kernel where non-coalesced data access occurs, and assistance in fixing the problem. In this paper, we address both these needs. We develop a two-stage framework where dynamic analysis is first used to detect and characterize uncoalesced accesses in arbitrary PTX programs. Transformations to optimize global memory access by introducing coalesced access are then implemented, using feedback from the dynamic analysis or using a model-driven approach. Experimental results demonstrate the use of the tools on a number of benchmarks from the Rodinia and Polybench suites.",
"title": ""
},
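The core idea of the dynamic analysis described above, detecting non-coalesced global memory accesses, can be mimicked with a toy model: collect the byte addresses touched by one warp for one access and count how many memory segments they span. The 128-byte segment size and the two access patterns are illustrative; real coalescing rules vary by GPU generation.

```python
# Toy version of the dynamic-analysis idea: given the byte addresses touched by
# the 32 threads of one warp for one load, count how many 128-byte memory
# segments the hardware would have to fetch.  1 segment = fully coalesced.
SEGMENT = 128

def segments_touched(addresses):
    return len({addr // SEGMENT for addr in addresses})

# float32 A accessed as A[threadIdx.x]      -> consecutive 4-byte elements
coalesced = [4 * t for t in range(32)]
# float32 A accessed as A[32 * threadIdx.x] -> stride-32 access pattern
strided = [4 * 32 * t for t in range(32)]

print("coalesced:", segments_touched(coalesced), "segment(s)")   # 1
print("strided:  ", segments_touched(strided), "segment(s)")     # 32
```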
{
"docid": "77ad72b404658a0f2aa506a108ab30fd",
"text": "BACKGROUND AND OBJECTIVE\nCapital market and consumer market interest in wearable devices has surged in recent years, however, their actual acceptance in the field of health monitoring is somewhat not as expected. This study aims to understand the perceptions of wearable devices of general consumers, analyze the review of the devices by users, and find existing problems associated with current wearable devices.\n\n\nMETHODS\nSojump.com, an on-line questionnaire tool, was used to generate the questionnaire, which focused on four aspects. The snowball sampling method was employed to collect questionnaires by making use of the author's social network.\n\n\nRESULTS\n(1) A total of 2058 valid questionnaires were received from the respondents from every province in China; of the respondents, 52.4% have used a wearable device. (2) The respondents had a low level of knowledge about wearable devices (2.79/5) but were optimistic with regard to the devices' future (3.86/5), and 84% recognized an acceptable price of less than 2000 RMB. Nearly half of the respondents were unwilling to continuously wear the device (47.1%) and share their health data (44.7%). (3) The functions of wearable devices that the respondents expected were mainly health management (63.5%), mobile phone accessories (61.9%), and location tracking (61.2%), and the promising hot future functions were mainly data analysis (74.2%), exercise coaching (60.5%), and child tracking (58.8%). Regarding the health monitoring functions, the respondents were most interested in heart health monitoring. (4) The respondents had different levels of emphasis regarding the existing problems of wearable devices at different use stages. Being easily damaged or lost (49.7%), being incapable of providing health recommendations based on data analysis (46.7%), and being uncomfortable to wear (45.8%) likely lead consumers to abandon the use of wearable devices.\n\n\nCONCLUSIONS\nConsumers are optimistic about the prospects of wearable devices; however, there is a large gap between the reliability of the measurement data, the ease of use, and the interpretation of measurement data of current wearable products and consumer expectations. Consumer demand for health management functions is higher than that for daily auxiliary-type functions, which is an issue that should be properly addressed and resolved by manufacturers.",
"title": ""
},
{
"docid": "888694da6ca23267f5a93128a63bf2b4",
"text": "Lower bounds on the cardinality of the maximum matchings of planar graphs, with a constraint on thr minimum degree, are established in terms of a linear polynomial of the number of vertices. The bounds depend upon the minimum degree and the connectivity of graphs. Some examples are given which show that all the lower bounds are best possible in the sense that neither the coefficients nor the constant terms can be improved.",
"title": ""
},
{
"docid": "9959a096db2fecd7c970fea21658b80b",
"text": "Driven by the progress in the field of single-trial analysis of EEG, there is a growing interest in brain computer interfaces (BCIs), i.e., systems that enable human subjects to control a computer only by means of their brain signals. In a pseudo-online simulation our BCI detects upcoming finger movements in a natural keyboard typing condition and predicts their laterality. This can be done on average 100–230ms before the respective key is actually pressed, i.e., long before the onset of EMG. Our approach is appealing for its short response time and high classification accuracy (>96%) in a binary decision where no human training is involved. We compare discriminative classifiers like Support Vector Machines (SVMs) and different variants of Fisher Discriminant that possess favorable regularization properties for dealing with high noise cases (inter-trial variablity).",
"title": ""
},
{
"docid": "9d3ca4966c26c6691398157a22531a1d",
"text": "Bipedal locomotion skills are challenging to develop. Control strategies often use local linearization of the dynamics in conjunction with reduced-order abstractions to yield tractable solutions. In these model-based control strategies, the controller is often not fully aware of many details, including torque limits, joint limits, and other non-linearities that are necessarily excluded from the control computations for simplicity. Deep reinforcement learning (DRL) offers a promising model-free approach for controlling bipedal locomotion which can more fully exploit the dynamics. However, current results in the machine learning literature are often based on ad-hoc simulation models that are not based on corresponding hardware. Thus it remains unclear how well DRL will succeed on realizable bipedal robots. In this paper, we demonstrate the effectiveness of DRL using a realistic model of Cassie, a bipedal robot. By formulating a feedback control problem as finding the optimal policy for a Markov Decision Process, we are able to learn robust walking controllers that imitate a reference motion with DRL. Controllers for different walking speeds are learned by imitating simple time-scaled versions of the original reference motion. Controller robustness is demonstrated through several challenging tests, including sensory delay, walking blindly on irregular terrain and unexpected pushes at the pelvis. We also show we can interpolate between individual policies and that robustness can be improved with an interpolated policy.",
"title": ""
},
{
"docid": "39fe1618fad28ec6ad72d326a1d00f24",
"text": "Popular real-time public events often cause upsurge of traffic in Twitter while the event is taking place. These posts range from real-time update of the event's occurrences highlights of important moments thus far, personal comments and so on. A large user group has evolved who seeks these live updates to get a brief summary of the important moments of the event so far. However, major social search engines including Twitter still present the tweets satisfying the Boolean query in reverse chronological order, resulting in thousands of low quality matches agglomerated in a prosaic manner. To get an overview of the happenings of the event, a user is forced to read scores of uninformative tweets causing frustration. In this paper, we propose a method for multi-tweet summarization of an event. It allows the search users to quickly get an overview about the important moments of the event. We have proposed a graph-based retrieval algorithm that identifies tweets with popular discussion points among the set of tweets returned by Twitter search engine in response to a query comprising the event related keywords. To ensure maximum coverage of topical diversity, we perform topical clustering of the tweets before applying the retrieval algorithm. Evaluation performed by summarizing the important moments of a real-world event revealed that the proposed method could summarize the proceeding of different segments of the event with up to 81.6% precision and up to 80% recall.",
"title": ""
},
{
"docid": "05f941acd4b2bd1188c7396d7edbd684",
"text": "A blockchain is a distributed ledger for recording transactions, maintained by many nodes without central authority through a distributed cryptographic protocol. All nodes validate the information to be appended to the blockchain, and a consensus protocol ensures that the nodes agree on a unique order in which entries are appended. Consensus protocols for tolerating Byzantine faults have received renewed attention because they also address blockchain systems. This work discusses the process of assessing and gaining confidence in the resilience of a consensus protocols exposed to faults and adversarial nodes. We advocate to follow the established practice in cryptography and computer security, relying on public reviews, detailed models, and formal proofs; the designers of several practical systems appear to be unaware of this. Moreover, we review the consensus protocols in some prominent permissioned blockchain platforms with respect to their fault models and resilience against attacks. 1998 ACM Subject Classification C.2.4 Distributed Systems, D.1.3 Concurrent Programming",
"title": ""
},
{
"docid": "cd735ea51ec77944f0ab26a5cc0e6105",
"text": "Advances in smartphone technology have promoted the rapid development of mobile apps. However, the availability of a huge number of mobile apps in application stores has imposed the challenge of finding the right apps to meet the user needs. Indeed, there is a critical demand for personalized app recommendations. Along this line, there are opportunities and challenges posed by two unique characteristics of mobile apps. First, app markets have organized apps in a hierarchical taxonomy. Second, apps with similar functionalities are competing with each other. Although there are a variety of approaches for mobile app recommendations, these approaches do not have a focus on dealing with these opportunities and challenges. To this end, in this article, we provide a systematic study for addressing these challenges. Specifically, we develop a structural user choice model (SUCM) to learn fine-grained user preferences by exploiting the hierarchical taxonomy of apps as well as the competitive relationships among apps. Moreover, we design an efficient learning algorithm to estimate the parameters for the SUCM model. Finally, we perform extensive experiments on a large app adoption dataset collected from Google Play. The results show that SUCM consistently outperforms state-of-the-art Top-N recommendation methods by a significant margin.",
"title": ""
},
{
"docid": "3bb48e5bf7cc87d635ab4958553ef153",
"text": "This paper presents an in-depth study of young Swedish consumers and their impulsive online buying behaviour for clothing. The aim of the study is to develop the understanding of what factors affect impulse buying of clothing online and what feelings emerge when buying online. The study carried out was exploratory in nature, aiming to develop an understanding of impulse buying behaviour online before, under and after the actual purchase. The empirical data was collected through personal interviews. In the study, a pattern of the consumers recurrent feelings are identified through the impulse buying process; escapism, pleasure, reward, scarcity, security and anticipation. The escapism is particularly occurring since the study revealed that the consumers often carried out impulse purchases when they initially were bored, as opposed to previous studies. 1 University of Borås, Swedish Institute for Innovative Retailing, School of Business and IT, Allégatan 1, S-501 90 Borås, Sweden. Phone: +46732305934 Mail: malin.sundstrom@hb.se",
"title": ""
},
{
"docid": "3d0103c34fcc6a65ad56c85a9fe10bad",
"text": "This paper approaches the problem of finding correspondences between images in which there are large changes in viewpoint, scale and illumination. Recent work has shown that scale-space ‘interest points’ may be found with good repeatability in spite of such changes. Furthermore, the high entropy of the surrounding image regions means that local descriptors are highly discriminative for matching. For descriptors at interest points to be robustly matched between images, they must be as far as possible invariant to the imaging process. In this work we introduce a family of features which use groups of interest points to form geometrically invariant descriptors of image regions. Feature descriptors are formed by resampling the image relative to canonical frames defined by the points. In addition to robust matching, a key advantage of this approach is that each match implies a hypothesis of the local 2D (projective) transformation. This allows us to immediately reject most of the false matches using a Hough transform. We reject remaining outliers using RANSAC and the epipolar constraint. Results show that dense feature matching can be achieved in a few seconds of computation on 1GHz Pentium III machines.",
"title": ""
},
{
"docid": "d13b4d08c29049a89d98c410bd834421",
"text": "Sodium-ion batteries offer an attractive option for potential low cost and large scale energy storage due to the earth abundance of sodium. Red phosphorus is considered as a high capacity anode for sodium-ion batteries with a theoretical capacity of 2596 mAh/g. However, similar to silicon in lithium-ion batteries, several limitations, such as large volume expansion upon sodiation/desodiation and low electronic conductance, have severely limited the performance of red phosphorus anodes. In order to address the above challenges, we have developed a method to deposit red phosphorus nanodots densely and uniformly onto reduced graphene oxide sheets (P@RGO) to minimize the sodium ion diffusion length and the sodiation/desodiation stresses, and the RGO network also serves as electron pathway and creates free space to accommodate the volume variation of phosphorus particles. The resulted P@RGO flexible anode achieved 1165.4, 510.6, and 135.3 mAh/g specific charge capacity at 159.4, 31878.9, and 47818.3 mA/g charge/discharge current density in rate capability test, and a 914 mAh/g capacity after 300 deep cycles in cycling stability test at 1593.9 mA/g current density, which marks a significant performance improvement for red phosphorus anodes for sodium-ion chemistry and flexible power sources for wearable electronics.",
"title": ""
},
{
"docid": "1d3441ce9065ab004d04946528d92935",
"text": "General purpose object-oriented programs typically aren't embarrassingly parallel. For these applications, finding enough concurrency remains a challenge in program design. To address this challenge, in the Panini project we are looking at reconciling concurrent program design goals with modular program design goals. The main idea is that if programmers improve the modularity of their programs they should get concurrency for free. In this work we describe one of our directions to reconcile these two goals by enhancing Gang-of-Four (GOF) object-oriented design patterns. GOF patterns are commonly used to improve the modularity of object-oriented software. These patterns describe strategies to decouple components in design space and specify how these components should interact. Our hypothesis is that if these patterns are enhanced to also decouple components in execution space applying them will concomitantly improve the design and potentially available concurrency in software systems. To evaluate our hypothesis we have studied all 23 GOF patterns. For 18 patterns out of 23, our hypothesis has held true. Another interesting preliminary result reported here is that for 17 out of these 18 studied patterns, concurrency and synchronization concerns were completely encapsulated in our concurrent design pattern framework.",
"title": ""
},
{
"docid": "5900299f078030bbad5872750b1e5eeb",
"text": "Penile Mondor’s Disease (Superficial thrombophlebitis of the dorsal vein of the penis) is a rare and important disease that every clinician should be able to diagnose, which present with pain and in duration of the dorsal part of the penis. The various possible causes are trauma, excessive sexual activity neoplasms,, or abstinence. Diagnosis is mainly based on history and physical examination. Though diagnosis is mainly based on history and physical examination, Doppler ultrasound is considered as the imaging modality of choice. Sclerotizing lymphangitis and Peyronies disease must be considered in differential diagnosis. Accurate diagnosis and Propercounseling can help to relieve the anxiety experienced by the patients regarding this benign disease. We are describing the symptoms, diagnosis, and treatment of the superficial thrombophlebitis of the dorsal vein of the penis.",
"title": ""
}
] |
scidocsrr
|
6abfab59734fc4e64ec8a2e2c1c4b29b
|
Performance Prediction and Optimization of Solar Water Heater via a Knowledge-Based Machine Learning Method
|
[
{
"docid": "1cc4048067cc93c2f1e836c77c2e06dc",
"text": "Recent advances in microscope automation provide new opportunities for high-throughput cell biology, such as image-based screening. High-complex image analysis tasks often make the implementation of static and predefined processing rules a cumbersome effort. Machine-learning methods, instead, seek to use intrinsic data structure, as well as the expert annotations of biologists to infer models that can be used to solve versatile data analysis tasks. Here, we explain how machine-learning methods work and what needs to be considered for their successful application in cell biology. We outline how microscopy images can be converted into a data representation suitable for machine learning, and then introduce various state-of-the-art machine-learning algorithms, highlighting recent applications in image-based screening. Our Commentary aims to provide the biologist with a guide to the application of machine learning to microscopy assays and we therefore include extensive discussion on how to optimize experimental workflow as well as the data analysis pipeline.",
"title": ""
}
] |
[
{
"docid": "5f3cd175951abdef01a8f914f3868f6d",
"text": "This paper presents a novel bi-objective multi-product capacitated vehicle routing problem with uncertainty in demand of retailers and volume of products (UCVRP) and heterogeneous vehicle fleets. The first of two conflict fuzzy objective functions is to minimize the cost of the used vehicles, fuel consumption for full loaded vehicles and shortage of products. The second objective is to minimize the shortage of products for all retailers. In order to get closer to a real-world situation, the uncertainty in the demand of retailers is applied using fuzzy numbers. Additionally, the volume of products is applied using robust parameters, because the possible value of this parameter is not distinct and belongs to a bounded uncertainty set. The fuzzy-robust counterpart model may be larger than the deterministic form or the uncertain model with one approach and it has with further complexity; however, it provides a better efficient solution for this problem. The proposed fuzzy approach is used to solve the bi-objective mixed-integer linear problem to find the most preferred solution. Moreover, it is impossible to improve one of the objective functions without considering deterioration in the other objective functions. In order to show the conflict between two objective functions in an excellent fashion, a Pareto-optimal solution with the ε-constraint method is obtained. Some numerical test problems are used to demonstrate the efficiency and validity of the presented model.",
"title": ""
},
{
"docid": "8f4d228d03efcf161346a2a1c010ee7b",
"text": "This paper develops power control algorithms for energy efficiency (EE) maximization (measured in bit/Joule) in wireless networks. Unlike previous related works, minimum-rate constraints are imposed and the signal-to-interference-plus-noise ratio takes a more general expression, which allows one to encompass some of the most promising 5G candidate technologies. Both network-centric and user-centric EE maximizations are considered. In the network-centric scenario, the maximization of the global EE and the minimum EE of the network is performed. Unlike previous contributions, we develop centralized algorithms that are guaranteed to converge, with affordable computational complexity, to a Karush-Kuhn-Tucker point of the considered non-convex optimization problems. Moreover, closed-form feasibility conditions are derived. In the user-centric scenario, game theory is used to study the equilibria of the network and to derive convergent power control algorithms, which can be implemented in a fully decentralized fashion. Both scenarios above are studied under the assumption that single or multiple resource blocks are employed for data transmission. Numerical results assess the performance of the proposed solutions, analyzing the impact of minimum-rate constraints, and comparing the network-centric and user-centric approaches.",
"title": ""
},
{
"docid": "b59eb9d32ac4da7238d31da7985691cb",
"text": "This paper will describe CPE virtualization solution for home and enterprise environments by unifying Network Function Virtualization (NFV), Software Defined Networking (SDN) and Cloud technologies in Internet providers' networks and data centers. The goal of this solution approach is to reduce operational and capital costs, increase network and service flexibility for providers and to broaden service offering and experience for the end-users. This is achieved by leveraging state of the art technologies that provide automation, flexibility and simplify operations. Possible applications, implementation and potentials will be studied throughout this paper.",
"title": ""
},
{
"docid": "d51d916e4529a2dc92aa2f2809270f17",
"text": "In this paper, we propose to learn word embeddings based on the recent fixedsize ordinally forgetting encoding (FOFE) method, which can almost uniquely encode any variable-length sequence into a fixed-size representation. We use FOFE to fully encode the left and right context of each word in a corpus to construct a novel word-context matrix, which is further weighted and factorized using truncated SVD to generate low-dimension word embedding vectors. We have evaluated this alternative method in encoding word-context statistics and show the new FOFE method has a notable effect on the resulting word embeddings. Experimental results on several popular word similarity tasks have demonstrated that the proposed method outperforms many recently popular neural prediction methods as well as the conventional SVD models that use canonical count based techniques to generate word context matrices.",
"title": ""
},
{
"docid": "1ce0d44502fd53c708b8ccab21151e79",
"text": "Exploration in multi-task reinforcement learning is critical in training agents to deduce the underlying MDP. Many of the existing exploration frameworks such as E, Rmax, Thompson sampling assume a single stationary MDP and are not suitable for system identification in the multi-task setting. We present a novel method to facilitate exploration in multi-task reinforcement learning using deep generative models. We supplement our method with a low dimensional energy model to learn the underlying MDP distribution and provide a resilient and adaptive exploration signal to the agent. We evaluate our method on a new set of environments and provide intuitive interpretation of our results.",
"title": ""
},
{
"docid": "fc289c7a9f08ff3f5dd41ae683ab77b3",
"text": "Approximate Newton methods are standard optimization tools which aim to maintain the benefits of Newton’s method, such as a fast rate of convergence, while alleviating its drawbacks, such as computationally expensive calculation or estimation of the inverse Hessian. In this work we investigate approximate Newton methods for policy optimization in Markov decision processes (MDPs). We first analyse the structure of the Hessian of the total expected reward, which is a standard objective function for MDPs. We show that, like the gradient, the Hessian exhibits useful structure in the context of MDPs and we use this analysis to motivate two Gauss-Newton methods for MDPs. Like the Gauss-Newton method for non-linear least squares, these methods drop certain terms in the Hessian. The approximate Hessians possess desirable properties, such as negative definiteness, and we demonstrate several important performance guarantees including guaranteed ascent directions, invariance to affine transformation of the parameter space and convergence guarantees. We finally provide a unifying perspective of key policy search algorithms, demonstrating that our second Gauss-Newton algorithm is closely related to both the EMalgorithm and natural gradient ascent applied to MDPs, but performs significantly better in practice on a range of challenging domains.",
"title": ""
},
{
"docid": "6d657b6445bbd60f779624104b2dc0b0",
"text": "High-quality urban reconstruction requires more than multi-view reconstruction and local optimization. The structure of facades depends on the general layout, which has to be optimized globally. Shape grammars are an established method to express hierarchical spatial relationships, and are therefore suited as representing constraints for semantic facade interpretation. Usually inference uses numerical approximations, or hard-coded grammar schemes. Existing methods inspired by classical grammar parsing are not applicable on real-world images due to their prohibitively high complexity. This work provides feasible generic facade reconstruction by combining low-level classifiers with mid-level object detectors to infer an irregular lattice. The irregular lattice preserves the logical structure of the facade while reducing the search space to a manageable size. We introduce a novel method for handling symmetry and repetition within the generic grammar. We show competitive results on two datasets, namely the Paris 2010 and the Graz 50. The former includes only Hausmannian, while the latter includes Classicism, Biedermeier, Historicism, Art Nouveau and post-modern architectural styles.",
"title": ""
},
{
"docid": "fa7d7672301fdb3cdf3a6f7624165df1",
"text": "We present a 2-mm diameter, 35-μm-thick disk resonator gyro (DRG) fabricated in <;111> silicon with integrated 0.35-μm CMOS analog front-end circuits. The device is fabricated in the commercial InvenSense Fabrication MEMSCMOS integrated platform, which incorporates a wafer-level vacuum seal, yielding a quality factor (Q) of 2800 at the DRGs 78-kHz resonant frequency. After performing electrostatic tuning to enable mode-matched operation, this DRG achieves a 55 μV/°/s sensitivity. Resonator vibration in the sense and drive axes is sensed using capacitive transduction, and amplified using a lownoise, on-chip integrated circuit. This allows the DRG to achieve Brownian noise-limited performance. The angle random walk is measured to be 0.008°/s/√(Hz) and the bias instability is 20°/h.",
"title": ""
},
{
"docid": "50a7afb889657c646ecbc2620b77065d",
"text": "Exploiting on-the-fly computation, Data Stream Processing (DSP) applications are widely used to process unbounded streams of data and extract valuable information in a near real-time fashion. As such, they enable the development of new intelligent and pervasive services that can improve our everyday life. To keep up with the high volume of daily produced data, the operators that compose a DSP application can be replicated and placed on multiple, possibly distributed, computing nodes, so to process the incoming data flow in parallel. Moreover, to better exploit the abundance of diffused computational resources (e.g., Fog computing), recent trends investigate the possibility of decentralizing the DSP application placement.\n In this paper, we present and evaluate a general formulation of the optimal DSP replication and placement (ODRP) as an integer linear programming problem, which takes into account the heterogeneity of application requirements and infrastructural resources. We integrate ODRP as prototype scheduler in the Apache Storm DSP framework. By leveraging on the DEBS 2015 Grand Challenge as benchmark application, we show the benefits of a joint optimization of operator replication and placement and how ODRP can optimize different QoS metrics, namely response time, internode traffic, cost, availability, and a combination thereof.",
"title": ""
},
{
"docid": "33296736553ceaab2e113b62c05a803c",
"text": "In cases of child abuse, usually, the parents are initial suspects. A common explanation of the parents is that the injuries were caused by a sibling. Child-on-child violence is reported to be very rare in children less than 5 years of age, and thorough investigation by the police, child protective services, and medicolegal examinations are needed to proof or disproof the parents' statement. We report two cases of physical abuse of infants by small children.",
"title": ""
},
{
"docid": "6f0f6bf051ff36907b3184501cecbf19",
"text": "American divorce rates rose from the 1950s to the 1970s, peaked around 1980, and have fallen ever since. The mean age at marriage also substantially increased after 1970. Using data from the Survey of Income and Program Participation, 1979 National Longitudinal Survey of Youth, and National Survey of Family Growth, I explore the extent to which the rise in age at marriage can explain the rapid decrease in divorce rates for cohorts marrying after 1980. Three different empirical approaches all suggest that the increase in women’s age at marriage was the main proximate cause of the fall in divorce. ∗Email: drotz@mathematica-mpr.com. I would like to thank Roland Fryer, Claudia Goldin, and Larry Katz for continued guidance and support on this project, as well as Timothy Bond, Richard Freeman, Stephanie Hurder, Jeff Liebman, Claudia Olivetti, Amanda Pallais, Laszlo Sandor, Emily Glassberg Sands, Alessandra Voena, Justin Wolfers, and seminar participants at Case Western Reserve University, Harvard University, Mathematica Policy Research, UCLA, University of Arizona, University of Illinois-Chicago, University of Iowa, University of Texas-Austin, and the US Census Bureau for helpful comments and discussions. I am also grateful to Larry Katz and Phillip Levine for providing data on oral contraceptive pill access and abortion rates respectively. All remaining errors are my own. This research has been supported by the NSF-IGERT program, \"Multidisciplinary Program in Inequality and Social Policy\" at Harvard University (Grant No. 0333403). The views expressed herein are those of the author and not necessarily those of Mathematica Policy Research.",
"title": ""
},
{
"docid": "389b67dd4a63f5052c5c6320bb691ab8",
"text": "The purpose of this article is to provide a tutorial overview of information consensus in multivehicle cooperative control. Theoretical results regarding consensus-seeking under both time invariant and dynamically changing communication topologies are summarized. Several specific applications of consensus algorithms to multivehicle coordination are described",
"title": ""
},
{
"docid": "3ec26d404b5aaa5636c995e188ae6b52",
"text": "This paper presents a study of using ellipsoidal decision regions for motif-based patterned fabric defect detection, the result of which is found to improve the original detection success using max–min decision region of the energy-variance values. In our previous research, max–min decision region was found to be effective in distinct cases but ill detect the ambiguous false-positive and false-negative cases. To alleviate this problem, we first assume that the energy-variance values can be described by a Gaussian mixture model. Second, we apply k-means clustering to roughly identify the various clusters that make up the entire data population. Third, convex hull of each cluster is employed as a basis for fitting an ellipsoidal decision region over it. Defect detection is then based on these ellipsoidal regions. To validate the method, three wallpaper groups are evaluated using the new ellipsoidal regions, and compared with those results obtained using the max–min decision region. For the p2 group, success rate improves from 93.43% to 100%. For the pmm group, success rate improves from 95.9% to 96.72%, while the p4 m group records the same success rate at 90.77%. This demonstrates the superiority of using ellipsoidal decision regions in motif-based defect detection. & 2009 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "418a5ef9f06f8ba38e63536671d605c1",
"text": "Learning visual models of object categories notoriously requires hundreds or thousands of training examples. We show that it is possible to learn much information about a category from just one, or a handful, of images. The key insight is that, rather than learning from scratch, one can take advantage of knowledge coming from previously learned categories, no matter how different these categories might be. We explore a Bayesian implementation of this idea. Object categories are represented by probabilistic models. Prior knowledge is represented as a probability density function on the parameters of these models. The posterior model for an object category is obtained by updating the prior in the light of one or more observations. We test a simple implementation of our algorithm on a database of 101 diverse object categories. We compare category models learned by an implementation of our Bayesian approach to models learned from by maximum likelihood (ML) and maximum a posteriori (MAP) methods. We find that on a database of more than 100 categories, the Bayesian approach produces informative models when the number of training examples is too small for other methods to operate successfully.",
"title": ""
},
{
"docid": "a267fadc2875fc16b69635d4592b03ae",
"text": "We investigated neural correlates of human visual orienting using event-related functional magnetic resonance imaging (fMRI). When subjects voluntarily directed attention to a peripheral location, we recorded robust and sustained signals uniquely from the intraparietal sulcus (IPs) and superior frontal cortex (near the frontal eye field, FEF). In the ventral IPs and FEF only, the blood oxygen level dependent signal was modulated by the direction of attention. The IPs and FEF also maintained the most sustained level of activation during a 7-sec delay, when subjects maintained attention at the peripheral cued location (working memory). Therefore, the IPs and FEF form a dorsal network that controls the endogenous allocation and maintenance of visuospatial attention. A separate right hemisphere network was activated by the detection of targets at unattended locations. Activation was largely independent of the target's location (visual field). This network included among other regions the right temporo-parietal junction and the inferior frontal gyrus. We propose that this cortical network is important for reorienting to sensory events.",
"title": ""
},
{
"docid": "671952f18fb9041e7335f205666bf1f5",
"text": "This new handbook is an efficient way to keep up with the continuing advances in antenna technology and applications. The handbook is uniformly well written, up-to-date, and filled with a wealth of practical information. This makes it a useful reference for most antenna engineers and graduate students.",
"title": ""
},
{
"docid": "a2aa3c023f2cf2363bac0b97b3e1e65c",
"text": "Digital data collected for forensics analysis often contain valuable information about the suspects’ social networks. However, most collected records are in the form of unstructured textual data, such as e-mails, chat messages, and text documents. An investigator often has to manually extract the useful information from the text and then enter the important pieces into a structured database for further investigation by using various criminal network analysis tools. Obviously, this information extraction process is tedious and errorprone. Moreover, the quality of the analysis varies by the experience and expertise of the investigator. In this paper, we propose a systematic method to discover criminal networks from a collection of text documents obtained from a suspect’s machine, extract useful information for investigation, and then visualize the suspect’s criminal network. Furthermore, we present a hypothesis generation approach to identify potential indirect relationships among the members in the identified networks. We evaluated the effectiveness and performance of the method on a real-life cybercrimine case and some other datasets. The proposed method, together with the implemented software tool, has received positive feedback from the digital forensics team of a law enforcement unit in Canada. a 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "244dd6e8f6c4d8d9180ee0509e14ce5b",
"text": "The adoption of hashtags in major social networks including Twitter, Facebook, and Google+ is a strong evidence of its importance in facilitating information diffusion and social chatting. To understand the factors (e.g., user interest, posting time and tweet content) that may affect hashtag annotation in Twitter and to capture the implicit relations between latent topics in tweets and their corresponding hashtags, we propose two PLSA-style topic models to model the hashtag annotation behavior in Twitter. Content-Pivoted Model (CPM) assumes that tweet content guides the generation of hashtags while Hashtag-Pivoted Model (HPM) assumes that hashtags guide the generation of tweet content. Both models jointly incorporate user, time, hashtag and tweet content in a probabilistic framework. The PLSA-style models also enable us to verify the impact of social factor on hashtag annotation by introducing social network regularization in the two models. We evaluate the proposed models using perplexity and demonstrate their effectiveness in two applications: retrospective hashtag annotation and related hashtag discovery. Our results show that HPM outperforms CPM by perplexity and both user and time are important factors that affect model performance. In addition, incorporating social network regularization does not improve model performance. Our experimental results also demonstrate the effectiveness of our models in both applications compared with baseline methods.",
"title": ""
},
{
"docid": "34d024643d687d092c0859497ab0001c",
"text": "BACKGROUND\nHealth IT is expected to have a positive impact on the quality and efficiency of health care. But reports on negative impact and patient harm continue to emerge. The obligation of health informatics is to make sure that health IT solutions provide as much benefit with as few negative side effects as possible. To achieve this, health informatics as a discipline must be able to learn, both from its successes as well as from its failures.\n\n\nOBJECTIVES\nTo present motivation, vision, and history of evidence-based health informatics, and to discuss achievements, challenges, and needs for action.\n\n\nMETHODS\nReflections on scientific literature and on own experiences.\n\n\nRESULTS\nEight challenges on the way towards evidence-based health informatics are identified and discussed: quality of studies; publication bias; reporting quality; availability of publications; systematic reviews and meta-analysis; training of health IT evaluation experts; translation of evidence into health practice; and post-market surveillance. Identified needs for action comprise: establish health IT study registers; increase the quality of publications; develop a taxonomy for health IT systems; improve indexing of published health IT evaluation papers; move from meta-analysis to meta-summaries; include health IT evaluation competencies in curricula; develop evidence-based implementation frameworks; and establish post-marketing surveillance for health IT.\n\n\nCONCLUSIONS\nThere has been some progress, but evidence-based health informatics is still in its infancy. Building evidence in health informatics is our obligation if we consider medical informatics a scientific discipline.",
"title": ""
},
{
"docid": "424f871e0e2eabf8b1e636f73d0b1c7d",
"text": "Simultaneous localization and mapping (SLAM) methods provide real-time estimation of 3-D models from the sole input of a handheld camera, routinely in mobile robotics scenarios. Medical endoscopic sequences mimic a robotic scenario in which a handheld camera (monocular endoscope) moves along an unknown trajectory while observing an unknown cavity. However, the feasibility and accuracy of SLAM methods have not been extensively validated with human in vivo image sequences. In this work, we propose a monocular visual SLAM algorithm tailored to deal with medical image sequences in order to provide an up-to-scale 3-D map of the observed cavity and the endoscope trajectory at frame rate. The algorithm is validated over synthetic data and human in vivo sequences corresponding to 15 laparoscopic hernioplasties where accurate ground-truth distances are available. It can be concluded that the proposed procedure is: 1) noninvasive, because only a standard monocular endoscope and a surgical tool are used; 2) convenient, because only a hand-controlled exploratory motion is needed; 3) fast, because the algorithm provides the 3-D map and the trajectory in real time; 4) accurate, because it has been validated with respect to ground-truth; and 5) robust to inter-patient variability, because it has performed successfully over the validation sequences.",
"title": ""
}
] |
scidocsrr
|
14baf2024d0243d0703e5809e1ec8e52
|
Discovering Stylistic Variations in Distributional Vector Space Models via Lexical Paraphrases
|
[
{
"docid": "9ae655deaf9d12f1c17b28c657ad5fd5",
"text": "Recent work has shown that neuralembedded word representations capture many relational similarities, which can be recovered by means of vector arithmetic in the embedded space. We show that Mikolov et al.’s method of first adding and subtracting word vectors, and then searching for a word similar to the result, is equivalent to searching for a word that maximizes a linear combination of three pairwise word similarities. Based on this observation, we suggest an improved method of recovering relational similarities, improving the state-of-the-art results on two recent word-analogy datasets. Moreover, we demonstrate that analogy recovery is not restricted to neural word embeddings, and that a similar amount of relational similarities can be recovered from traditional distributional word representations.",
"title": ""
},
{
"docid": "b37a2f3acae914632d6990df427be2c2",
"text": "Word embeddings are ubiquitous in NLP and information retrieval, but it is unclear what they represent when the word is polysemous. Here it is shown that multiple word senses reside in linear superposition within the word embedding and simple sparse coding can recover vectors that approximately capture the senses. The success of our approach, which applies to several embedding methods, is mathematically explained using a variant of the random walk on discourses model (Arora et al., 2016). A novel aspect of our technique is that each extracted word sense is accompanied by one of about 2000 “discourse atoms” that gives a succinct description of which other words co-occur with that word sense. Discourse atoms can be of independent interest, and make the method potentially more useful. Empirical tests are used to verify and support the theory.",
"title": ""
},
{
"docid": "c90cefcf9de5560fca431c70b763df15",
"text": "There has been relatively little work focused on determining the formality level of individual lexical items. This study applies information from large mixedgenre corpora, demonstrating that significant improvement is possible over simple word-length metrics, particularly when multiple sources of information, i.e. word length, word counts, and word association, are integrated. Our best hybrid system reaches 86% accuracy on an English near-synonym formality identification task, and near perfect accuracy when comparing words with extreme formality differences. We also test our word association method in Chinese, a language where word length is not an appropriate metric for formality.",
"title": ""
}
] |
[
{
"docid": "ebf6caae5106f0328ac39fb5e0052d9c",
"text": "William James’ theory of emotion has been controversial since its inception, and a basic analysis of Cannon’s (1927) critique is provided. Research on the impact of facial expressions, expressive behaviors, and visceral responses on emotional feelings are each reviewed. A good deal of evidence supports James’ theory that these types of bodily feedback, along with perceptions of situational cues, are each important parts of emotional feelings. Extensions to James’ theory are also reviewed, including evidence of individual differences in the effect of bodily responses on emotional experience.",
"title": ""
},
{
"docid": "821d68aef4b665a2ae754759748f6657",
"text": "In recent years, consumer-centric cloud computing paradigm has emerged as the development of smart electronic devices combined with the emerging cloud computing technologies. A variety of cloud services are delivered to the consumers with the premise that an effective and efficient cloud search service is achieved. For consumers, they want to find the most relevant products or data, which is highly desirable in the \"pay-as-you use\" cloud computing paradigm. As sensitive data (such as photo albums, emails, personal health records, financial records, etc.) are encrypted before outsourcing to cloud, traditional keyword search techniques are useless. Meanwhile, existing search approaches over encrypted cloud data support only exact or fuzzy keyword search, but not semantics-based multi-keyword ranked search. Therefore, how to enable an effective searchable system with support of ranked search remains a very challenging problem. This paper proposes an effective approach to solve the problem of multi-keyword ranked search over encrypted cloud data supporting synonym queries. The main contribution of this paper is summarized in two aspects: multi-keyword ranked search to achieve more accurate search results and synonym-based search to support synonym queries. Extensive experiments on real-world dataset were performed to validate the approach, showing that the proposed solution is very effective and efficient for multikeyword ranked searching in a cloud environment.",
"title": ""
},
{
"docid": "189cc09c72686ae7282eef04c1b365f1",
"text": "With the rapid growth of the internet as well as increasingly more accessible mobile devices, the amount of information being generated each day is enormous. We have many popular websites such as Yelp, TripAdvisor, Grubhub etc. that offer user ratings and reviews for different restaurants in the world. In most cases, though, the user is just interested in a small subset of the available information, enough to get a general overview of the restaurant and its popular dishes. In this paper, we present a way to mine user reviews to suggest popular dishes for each restaurant. Specifically, we propose a method that extracts and categorize dishes from Yelp restaurant reviews, and then ranks them to recommend the most popular dishes.",
"title": ""
},
{
"docid": "eb34879a227b5e3e2374bbb5a85a2c08",
"text": "According to the Taiwan Ministry of Education statistics, about one million graduates each year, some of them will go to countries, high schools or tertiary institutions to continue to attend, and some will be ready to enter the workplace employment. During the course of study, the students' all kinds of excellent performance certificates, score transcripts, diplomas, etc., will become an important reference for admitting new schools or new works. As schools make various awards or diplomas, only the names of the schools and the students are input. Due to the lack of effective anti-forge mechanism, events that cause the graduation certificate to be forged often get noticed. In order to solve the problem of counterfeiting certificates, the digital certificate system based on blockchain technology would be proposed. By the unmodifiable property of blockchain, the digital certificate with anti-counterfeit and verifiability could be made. The procedure of issuing the digital certificate in this system is as follows. First, generate the electronic file of a paper certificate accompanying other related data into the database, meanwhile calculate the electronic file for its hash value. Finally, store the hash value into the block in the chain system. The system will create a related QR-code and inquiry string code to affix to the paper certificate. It will provide the demand unit to verify the authenticity of the paper certificate through mobile phone scanning or website inquiries. Through the unmodifiable properties of the blockchain, the system not only enhances the credibility of various paper-based certificates, but also electronically reduces the loss risks of various types of certificates.",
"title": ""
},
{
"docid": "40525527409abf3702690ed2eb51b200",
"text": "Remote storage delivers a cost effective solution for data storage. If data is of a sensitive nature, it should be encrypted prior to outsourcing to ensure confidentiality; however, searching then becomes challenging. Searchable encryption is a well-studied solution to this problem. Many schemes only consider the scenario where users can search over the entirety of the encrypted data. In practice, sensitive data is likely to be classified according to an access control policy and different users should have different access rights. It is unlikely that all users have unrestricted access to the entire data set. Current schemes that consider multi-level access to searchable encryption are predominantly based on asymmetric primitives. We investigate symmetric solutions to multi-level access in searchable encryption where users have different access privileges to portions of the encrypted data and are not permitted to search over, or learn information about, data for which they are not authorised.",
"title": ""
},
{
"docid": "ddd704cb92e6f563a19b6928cdf41c4d",
"text": "Convolutional neural networks have achieved astonishing results in different application areas. Various methods which allow us to use these models on mobile and embedded devices have been proposed. Especially binary neural networks seem to be a promising approach for these devices with low computational power. However, understanding binary neural networks and training accurate models for practical applications remains a challenge. In our work, we focus on increasing our understanding of the training process and making it accessible to everyone. We publish our code and models based on BMXNet for everyone to use. Within this framework, we systematically evaluated different network architectures and hyperparameters to provide useful insights on how to train a binary neural network. Further, we present how we improved accuracy by increasing the number of connections in the network.",
"title": ""
},
{
"docid": "13950622dd901145f566359cc5c00703",
"text": "The Indian buffet process is a stochastic process defining a probability distribution over equivalence classes of sparse binary matrices with a finite number of rows and an unbounded number of columns. This distribution is suitable for use as a prior in probabilistic models that represent objects using a potentially infinite array of features, or that involve bipartite graphs in which the size of at least one class of nodes is unknown. We give a detailed derivation of this distribution, and illustrate its use as a prior in an infinite latent feature model. We then review recent applications of the Indian buffet process in machine learning, discuss its extensions, and summarize its connections to other stochastic processes.",
"title": ""
},
{
"docid": "a5447f6bf7dbbab55d93794b47d46d12",
"text": "The proposed multilevel framework of discourse comprehension includes the surface code, the textbase, the situation model, the genre and rhetorical structure, and the pragmatic communication level. We describe these five levels when comprehension succeeds and also when there are communication misalignments and comprehension breakdowns. A computer tool has been developed, called Coh-Metrix, that scales discourse (oral or print) on dozens of measures associated with the first four discourse levels. The measurement of these levels with an automated tool helps researchers track and better understand multilevel discourse comprehension. Two sets of analyses illustrate the utility of Coh-Metrix in discourse theory and educational practice. First, Coh-Metrix was used to measure the cohesion of the text base and situation model, as well as potential extraneous variables, in a sample of published studies that manipulated text cohesion. This analysis helped us better understand what was precisely manipulated in these studies and the implications for discourse comprehension mechanisms. Second, Coh-Metrix analyses are reported for samples of narrative and science texts in order to advance the argument that traditional text difficulty measures are limited because they fail to accommodate most of the levels of the multilevel discourse comprehension framework.",
"title": ""
},
{
"docid": "e6f905747586d37246a996eb0addf2f2",
"text": "DNA vaccination is a disruptive technology that offers the promise of a new rapidly deployed vaccination platform to treat human and animal disease with gene-based materials. Innovations such as electroporation, needle free jet delivery and lipid-based carriers increase transgene expression and immunogenicity through more effective gene delivery. This review summarizes complementary vector design innovations that, when combined with leading delivery platforms, further enhance DNA vaccine performance. These next generation vectors also address potential safety issues such as antibiotic selection, and increase plasmid manufacturing quality and yield in exemplary fermentation production processes. Application of optimized constructs in combination with improved delivery platforms tangibly improves the prospect of successful application of DNA vaccination as prophylactic vaccines for diverse human infectious disease targets or as therapeutic vaccines for cancer and allergy.",
"title": ""
},
{
"docid": "289942ca889ccea58d5b01dab5c82719",
"text": "Concepts of basal ganglia organization have changed markedly over the past decade, due to significant advances in our understanding of the anatomy, physiology and pharmacology of these structures. Independent evidence from each of these fields has reinforced a growing perception that the functional architecture of the basal ganglia is essentially parallel in nature, regardless of the perspective from which these structures are viewed. This represents a significant departure from earlier concepts of basal ganglia organization, which generally emphasized the serial aspects of their connectivity. Current evidence suggests that the basal ganglia are organized into several structurally and functionally distinct 'circuits' that link cortex, basal ganglia and thalamus, with each circuit focused on a different portion of the frontal lobe. In this review, Garrett Alexander and Michael Crutcher, using the basal ganglia 'motor' circuit as the principal example, discuss recent evidence indicating that a parallel functional architecture may also be characteristic of the organization within each individual circuit.",
"title": ""
},
{
"docid": "7251323ff16deac24d6154d8c3eda9f5",
"text": "Modern search engines have to be fast to satisfy users, so there are hard back-end latency requirements. The set of features useful for search ranking functions, though, continues to grow, making feature computation a latency bottleneck. As a result, not all available features can be used for ranking, and in fact, much of the time, only a small percentage of these features can be used. Thus, it is crucial to have a feature selection mechanism that can find a subset of features that both meets latency requirements and achieves high relevance. To this end, we explore different feature selection methods using boosted regression trees, including both greedy approaches (selecting the features with highest relative importance as computed by boosted trees; discounting importance by feature similarity and a randomized approach. We evaluate and compare these approaches using data from a commercial search engine. The experimental results show that the proposed randomized feature selection with feature-importance-based backward elimination outperforms greedy approaches and achieves a comparable relevance with 30 features to a full-feature model trained with 419 features and the same modeling parameters.",
"title": ""
},
{
"docid": "2c166ea3eb548135f44cc6afead34d61",
"text": "Yelp has been one of the most popular sites for users to rate and review local businesses. Businesses organize their own listings while users rate the business from 1− 5 stars and write text reviews. Users can also vote on other helpful or funny reviews written by other users. Using this enormous amount of data that Yelp has collected over the years, it would be meaningful if we could learn to predict ratings based on review‘s text alone, because free-text reviews are difficult for computer systems to understand, analyze and aggregate [1]. The idea can be extended to many other applications where assessment has traditionally been in the format of text and assigning a quick numerical rating is difficult. Examples include predicting movie or book ratings based on news articles or blogs [2], assigning ratings to YouTube videos based on viewers‘comments, and even more general sentiment analysis, sometimes also referred to as opinion mining.",
"title": ""
},
{
"docid": "b82adc75ccdf7bd437f969d226bc29a1",
"text": "Snakes, or active contours, are used extensively in computer vision and image processing applications, particularly to locate object boundaries. Problems associated with initialization and poor convergence to concave boundaries, however, have limited their utility. This paper develops a new external force for active contours, largely solving both problems. This external force, which we call gradient vector flow (GVF), is computed as a diffusion of the gradient vectors of a gray-level or binary edge map derived from the image. The resultant field has a large capture range and forces active contours into concave regions. Examples on simulated images and one real image are presented.",
"title": ""
},
{
"docid": "fa0eebbf9c97942a5992ed80fd66cf10",
"text": "The increasing popularity of Facebook among adolescents has stimulated research to investigate the relationship between Facebook use and loneliness, which is particularly prevalent in adolescence. The aim of the present study was to improve our understanding of the relationship between Facebook use and loneliness. Specifically, we examined how Facebook motives and two relationship-specific forms of adolescent loneliness are associated longitudinally. Cross-lagged analysis based on data from 256 adolescents (64% girls, M(age) = 15.88 years) revealed that peer-related loneliness was related over time to using Facebook for social skills compensation, reducing feelings of loneliness, and having interpersonal contact. Facebook use for making new friends reduced peer-related loneliness over time, whereas Facebook use for social skills compensation increased peer-related loneliness over time. Hence, depending on adolescents' Facebook motives, either the displacement or the stimulation hypothesis is supported. Implications and suggestions for future research are discussed.",
"title": ""
},
{
"docid": "f50b8631ce2e4023c145bf09eae9974b",
"text": "We present a completely revised generation of a modular micro-NMR detector, featuring an active sample volume of ∼ 100 nL, and an improvement of 87% in probe efficiency. The detector is capable of rapidly screening different samples using exchangeable, application-specific, MEMS-fabricated, microfluidic sample containers. In contrast to our previous design, the sample holder chips can be simply sealed with adhesive tape, with excellent adhesion due to the smooth surfaces surrounding the fluidic ports, and so withstand pressures of ∼2.5 bar, while simultaneously enabling high spectral resolution up to 0.62 Hz for H2O, due to its optimised geometry. We have additionally reworked the coil design and fabrication processes, replacing liquid photoresists by dry film stock, whose final thickness does not depend on accurate volume dispensing or precise levelling during curing. We further introduced mechanical alignment structures to avoid time-intensive optical alignment of the chip stacks during assembly, while we exchanged the laser-cut, PMMA spacers by diced glass spacers, which are not susceptible to melting during cutting. Doing so led to an overall simplification of the entire fabrication chain, while simultaneously increasing the yield, due to an improved uniformity of thickness of the individual layers, and in addition, due to more accurate vertical positioning of the wirebonded coils, now delimited by a post base plateau. We demonstrate the capability of the design by acquiring a 1H spectrum of ∼ 11 nmol sucrose dissolved in D2O, where we achieved a linewidth of 1.25 Hz for the TSP reference peak. Chemical shift imaging experiments were further recorded from voxel volumes of only ∼ 1.5 nL, which corresponded to amounts of just 1.5 nmol per voxel for a 1 M concentration. To extend the micro-detector to other nuclei of interest, we have implemented a trap circuit, enabling heteronuclear spectroscopy, demonstrated by two 1H/13C 2D HSQC experiments.",
"title": ""
},
{
"docid": "19a697a6c02d0519c3ed619763db5c73",
"text": "Consider a communication network in which certain source nodes multicast information to other nodes on the network in the multihop fashion where every node can pass on any of its received data to others. We are interested in how fast eachnode can receive the complete information, or equivalently, what the information rate arriving at eachnode is. Allowing a node to encode its received data before passing it on, the question involves optimization of the multicast mechanisms at the nodes. Among the simplest coding schemes is linear coding, which regards a block of data as a vector over a certain base field and allows a node to apply a linear transformation to a vector before passing it on. We formulate this multicast problem and prove that linear coding suffices to achieve the optimum, which is the max-flow from the source to each receiving node.",
"title": ""
},
{
"docid": "e4944af5f589107d1b42a661458fcab5",
"text": "This document summarizes the major milestones in mobile Augmented Reality between 1968 and 2014. Mobile Augmented Reality has largely evolved over the last decade, as well as the interpretation itself of what is Mobile Augmented Reality. The first instance of Mobile AR can certainly be associated with the development of wearable AR, in a sense of experiencing AR during locomotion (mobile as a motion). With the transformation and miniaturization of physical devices and displays, the concept of mobile AR evolved towards the notion of ”mobile device”, aka AR on a mobile device. In this history of mobile AR we considered both definitions and the evolution of the term over time. Major parts of the list were initially compiled by the member of the Christian Doppler Laboratory for Handheld Augmented Reality in 2009 (author list in alphabetical order) for the ISMAR society. More recent work was added in 2013 and during preparation of this report. Permission is granted to copy and modify. Please email the first author if you find any errors.",
"title": ""
},
{
"docid": "71c39b7a45a7bef11c642441191a12e1",
"text": "Scoliosis is a medical condition in which a person's spine is curved from side to side. Current methodology of diagnosis of scoliosis: The doctors analyze an X-ray image and determine the cobb angle and vertebral twist. These two parameters are critical in the treatment of scoliosis. Bottlenecks associated with current methodology are inherent errors associated with manual measurement of cobb angle and vertebral twist from X-rays by the concerned doctors and the treatment that is meted out to a particular case of 'cobb angle' and vertebral twist by different doctors may differ with varying results. Hence it becomes imperative to select the best treatment procedure for attaining the best results. Highlights of the new methodology proposed: An X-ray image is accepted as input, Cobb angle is measured by the computer which is programmed to do so, thus eliminating the errors associated with the doctors interpretation.",
"title": ""
},
{
"docid": "f4503626420d2f17e0716312a7c325ad",
"text": "Segmentation of left ventricular (LV) endocardium from 3D echocardiography is important for clinical diagnosis because it not only can provide some clinical indices (e.g. ventricular volume and ejection fraction) but also can be used for the analysis of anatomic structure of ventricle. In this work, we proposed a new full-automatic method, combining the deep learning and deformable model, for the segmentation of LV endocardium. We trained convolutional neural networks to generate a binary cuboid to locate the region of interest (ROI). And then, using ROI as the input, we trained stacked autoencoder to infer the LV initial shape. At last, we adopted snake model initiated by inferred shape to segment the LV endocardium. In the experiments, we used 3DE data, from CETUS challenge 2014 for training and testing by segmentation accuracy and clinical indices. The results demonstrated the proposed method is accuracy and efficiency respect to expert's measurements.",
"title": ""
},
{
"docid": "aab538d0b2872297f9e4b566d3f6554a",
"text": "Does free access to journal articles result in greater diffusion of scientific knowledge? Using a randomized controlled trial of open access publishing, involving 36 participating journals in the sciences, social sciences, and humanities, we report on the effects of free access on article downloads and citations. Articles placed in the open access condition (n=712) received significantly more downloads and reached a broader audience within the first year, yet were cited no more frequently, nor earlier, than subscription-access control articles (n=2533) within 3 yr. These results may be explained by social stratification, a process that concentrates scientific authors at a small number of elite research universities with excellent access to the scientific literature. The real beneficiaries of open access publishing may not be the research community but communities of practice that consume, but rarely contribute to, the corpus of literature.",
"title": ""
}
] |
scidocsrr
|
4029d9058e110d989aeaeb21affcf233
|
Multilevel image thresholding using elephant herding optimization algorithm
|
[
{
"docid": "15ef258e08dcc0fe0298c089fbf5ae1c",
"text": "In this paper we report the set-up and results of the Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) organized in conjunction with the MICCAI 2012 and 2013 conferences. Twenty state-of-the-art tumor segmentation algorithms were applied to a set of 65 multi-contrast MR scans of low- and high-grade glioma patients - manually annotated by up to four raters - and to 65 comparable scans generated using tumor image simulation software. Quantitative evaluations revealed considerable disagreement between the human raters in segmenting various tumor sub-regions (Dice scores in the range 74%-85%), illustrating the difficulty of this task. We found that different algorithms worked best for different sub-regions (reaching performance comparable to human inter-rater variability), but that no single algorithm ranked in the top for all sub-regions simultaneously. Fusing several good algorithms using a hierarchical majority vote yielded segmentations that consistently ranked above all individual algorithms, indicating remaining opportunities for further methodological improvements. The BRATS image data and manual annotations continue to be publicly available through an online evaluation system as an ongoing benchmarking resource.",
"title": ""
}
] |
[
{
"docid": "6981389a38bf3b0f97911d225562322f",
"text": "While the importance of microRNAs (miRNAs) in cancer treatment or manipulation of genetic expression has been increasingly recognized for developing miRNA-based therapies, the controlled delivery of miRNAs into specific cells constitutes a challenging task. This report describes preliminary findings from an investigation of the conjugation of gold nanoparticles with miRNAs (miRNA-AuNPs) and their cell transfection. The immobilization of miRNAs on the AuNPs was detected, and the surface stability was substantiated by gel electrophoresis assessment of the highly charged characteristics of miRNA-AuNPs and their surface-exchange inactivity with a highly charged surfactant. The miRNA-AuNPs were tested in cell transfection using multiple myeloma cells, demonstrating efficient knockdown in the functional luciferase assay. The findings have important implications for understanding the mechanistic details of cell transfection involving miRNA-conjugated nanoparticles as biosensing or targeting probes.",
"title": ""
},
{
"docid": "3bbc633650b9010ef5c76ea1d634a495",
"text": "It is well known that significant metabolic change take place as cells are transformed from normal to malignant. This review focuses on the use of different bioinformatics tools in cancer metabolomics studies. The article begins by describing different metabolomics technologies and data generation techniques. Overview of the data pre-processing techniques is provided and multivariate data analysis techniques are discussed and illustrated with case studies, including principal component analysis, clustering techniques, self-organizing maps, partial least squares, and discriminant function analysis. Also included is a discussion of available software packages.",
"title": ""
},
{
"docid": "7464cc07f32de5b9ed2465e4f89c019e",
"text": "It is completely amazing! Fake news and click-baits have totally invaded the cyber space. Let us face it: everybody hates them for three simple reasons. Reason #2 will absolutely amaze you. What these can achieve at the time of election will completely blow your mind! Now, we all agree, this cannot go on, you know, somebody has to stop it. So, we did this research on fake news/click-bait detection and trust us, it is totally great research, it really is! Make no mistake. This is the best research ever! Seriously, come have a look, we have it all: neural networks, attention mechanism, sentiment lexicons, author profiling, you name it. Lexical features, semantic features, we absolutely have it all. And we have totally tested it, trust us! We have results, and numbers, really big numbers. The best numbers ever! Oh, and analysis, absolutely top notch analysis. Interested? Come read the shocking truth about fake news and click-bait in the Bulgarian cyber space. You won’t believe what we have found!",
"title": ""
},
{
"docid": "d5d452fe209bc69f3f1064a2871e992c",
"text": "Three years ago, we released the Omniglot dataset for developing more human-like learning algorithms. Omniglot is a one-shot learning challenge, inspired by how people can learn a new concept from just one or a few examples. Along with the dataset, we proposed a suite of five challenge tasks and a computational model based on probabilistic program induction that addresses them. The computational model, although powerful, was not meant to be the final word on Omniglot; we hoped that the machine learning community would both build on our work and develop novel approaches to tackling the challenge. In the time since, we have been pleased to see the wide adoption of Omniglot and notable technical progress. There has been genuine progress on one-shot classification, but it has been difficult to measure since researchers have adopted different splits and training procedures that make the task easier. The other four tasks, while essential components of human conceptual understanding, have received considerably less attention. We review the progress so far and conclude that neural networks are still far from human-like concept learning on Omniglot, a challenge that requires performing all of the tasks with a single model. We also discuss new tasks to stimulate further progress.",
"title": ""
},
{
"docid": "4688caf6a80463579f293b2b762da5b5",
"text": "To accelerate the implementation of network functions/middle boxes and reduce the deployment cost, recently, the concept of network function virtualization (NFV) has emerged and become a topic of much interest attracting the attention of researchers from both industry and academia. Unlike the traditional implementation of network functions, a software-oriented approach for virtual network functions (VNFs) creates more flexible and dynamic network services to meet a more diversified demand. Software-oriented network functions bring along a series of research challenges, such as VNF management and orchestration, service chaining, VNF scheduling for low latency and efficient virtual network resource allocation with NFV infrastructure, among others. In this paper, we study the VNF scheduling problem and the corresponding resource optimization solutions. Here, the VNF scheduling problem is defined as a series of scheduling decisions for network services on network functions and activating the various VNFs to process the arriving traffic. We consider VNF transmission and processing delays and formulate the joint problem of VNF scheduling and traffic steering as a mixed integer linear program. Our objective is to minimize the makespan/latency of the overall VNFs' schedule. Reducing the scheduling latency enables cloud operators to service (and admit) more customers, and cater to services with stringent delay requirements, thereby increasing operators' revenues. Owing to the complexity of the problem, we develop a genetic algorithm-based method for solving the problem efficiently. Finally, the effectiveness of our heuristic algorithm is verified through numerical evaluation. We show that dynamically adjusting the bandwidths on virtual links connecting virtual machines, hosting the network functions, reduces the schedule makespan by 15%-20% in the simulated scenarios.",
"title": ""
},
{
"docid": "e10dbbc6b3381f535ff84a954fcc7c94",
"text": "Recently introduced cost-effective depth sensors coupled with the real-time skeleton estimation algorithm of Shotton et al. [16] have generated a renewed interest in skeleton-based human action recognition. Most of the existing skeleton-based approaches use either the joint locations or the joint angles to represent a human skeleton. In this paper, we propose a new skeletal representation that explicitly models the 3D geometric relationships between various body parts using rotations and translations in 3D space. Since 3D rigid body motions are members of the special Euclidean group SE(3), the proposed skeletal representation lies in the Lie group SE(3)×.. .×SE(3), which is a curved manifold. Using the proposed representation, human actions can be modeled as curves in this Lie group. Since classification of curves in this Lie group is not an easy task, we map the action curves from the Lie group to its Lie algebra, which is a vector space. We then perform classification using a combination of dynamic time warping, Fourier temporal pyramid representation and linear SVM. Experimental results on three action datasets show that the proposed representation performs better than many existing skeletal representations. The proposed approach also outperforms various state-of-the-art skeleton-based human action recognition approaches.",
"title": ""
},
{
"docid": "da7eae0fc41a9f956a2666a42a30691e",
"text": "Selected findings from the study: – Generally high correlations (.30 – 1.00). – Most correlations very intutive, very few unintuitive. – Some theoretical dimensions hard to separate. – More convincing correlates most with overall quality (.64). – Thought through shows the highest ratings (overall quality 1.8). – Off-topic shows the lowest ratings (overall quality 1.1). www.webis.de Bauhaus-Universität Weimar * www.cs.toronto.edu/compling University of Toronto ** www.ukp.tu-darmstadt.de Technische Universität Darmstadt *** ie.ibm.com IBM Research Ireland ****",
"title": ""
},
{
"docid": "359c5322961b43cec07c8a172ad043bb",
"text": "A deadlock-free routing algorithm can be generated for arbitrary interconnection networks using the concept of virtual channels. A necessary and sufficient condition for deadlock-free routing is the absence of cycles in a channel dependency graph. Given an arbitrary network and a routing function, the cycles of the channel dependency graph can be removed by splitting physical channels into groups of virtual channels. This method is used to develop deadlock-free routing algorithms for k-ary n-cubes, for cube-connected cycles, and for shuffle-exchange networks.",
"title": ""
},
{
"docid": "a3f6781adeca64763156ac41dff32c82",
"text": "A multilayer bandpass filter (BPF) with harmonic suppression using meander line inductor and interdigital capacitor (MLI-IDC) resonant structure is presented in this letter. The BPF is fabricated with three unit cells and its measured passband center frequency is 2.56 GHz with a bandwidth of 0.38 GHz and an insertion loss of 1.5 dB. The harmonics are suppressed up to 11 GHz. A diplexer using the proposed BPF is also presented. The proposed diplexer consists of 4.32 mm sized unit cells to couple 2.5 GHz signal into port 2, and 3.65 mm sized unit cells to couple 3.7 GHz signal into port 3. The notch circuit is placed on the output lines of the diplexer to improve isolation. The proposed diplexer has demonstrated insertion loss of 1.35 dB with 0.45 GHz bandwidth in port 2 and 1.73 dB insertion loss with 0.44 GHz bandwidth in port 3. The isolation is better than 18 dB in the first passband with 38 dB maximum isolation at 2.5 GHz. The isolation in the second passband is better than 26 dB with 45 dB maximum isolation at 3.7 GHz.",
"title": ""
},
{
"docid": "2b1002037b717f65e97defbf802d5fcd",
"text": "BACKGROUND\nDeletions of chromosome 19 have rarely been reported, with the exception of some patients with deletion 19q13.2 and Blackfan-Diamond syndrome due to haploinsufficiency of the RPS19 gene. Such a paucity of patients might be due to the difficulty in detecting a small rearrangement on this chromosome that lacks a distinct banding pattern. Array comparative genomic hybridisation (CGH) has become a powerful tool for the detection of microdeletions and microduplications at high resolution in patients with syndromic mental retardation.\n\n\nMETHODS AND RESULTS\nUsing array CGH, this study identified three interstitial overlapping 19q13.11 deletions, defining a minimal critical region of 2.87 Mb, associated with a clinically recognisable syndrome. The three patients share several major features including: pre- and postnatal growth retardation with slender habitus, severe postnatal feeding difficulties, microcephaly, hypospadias, signs of ectodermal dysplasia, and cutis aplasia over the posterior occiput. Interestingly, these clinical features have also been described in a previously reported patient with a 19q12q13.1 deletion. No recurrent breakpoints were identified in our patients, suggesting that no-allelic homologous recombination mechanism is not involved in these rearrangements.\n\n\nCONCLUSIONS\nBased on these results, the authors suggest that this chromosomal abnormality may represent a novel clinically recognisable microdeletion syndrome caused by haploinsufficiency of dosage sensitive genes in the 19q13.11 region.",
"title": ""
},
{
"docid": "a13ff1e2192c9a7e4bcfdf5e1ac39538",
"text": "Before graduating from X as Waymo, Google's self-driving car project had been using custom lidars for several years. In their latest revision, the lidars are designed to meet the challenging requirements we discovered in autonomously driving 2 million highly-telemetered miles on public roads. Our goal is to approach price points required for advanced driver assistance systems (ADAS) while meeting the performance needed for safe self-driving. This talk will review some history of the project and describe a few use-cases for lidars on Waymo cars. Out of that will emerge key differences between lidars for self-driving and traditional applications (e.g. mapping) which may provide opportunities for semiconductor lasers.",
"title": ""
},
{
"docid": "19672ead8c41fa723099b30d152fb466",
"text": "-Fractal dimension is an interesting parameter to characterize roughness in an image. It can be used in texture segmentation, estimation of three-dimensional (3D) shape and other information. A new method is proposed to estimate fractal dimension in a two-dimensional (2D) image which can readily be extended to a 3D image as well. The method has been compared with other existing methods to show that our method is both efficient and accurate. Fractal dimension Texture analysis Image roughness measure Image segmentation Computer vision",
"title": ""
},
{
"docid": "263b09d1a593212ea9a52214bbc899c1",
"text": "In this paper, we propose an interactive evolutionary programming based recommendation system for online shopping that estimates the human preference based on eye movement analysis. Given a set of images of different clothes, the eye movement patterns of the human subjects while looking at the clothes they like differ from clothes they do not like. Therefore, in the proposed system, human preference is measured from the way the human subjects look at the images of different clothes. In other words, the human preference can be measured by using the fixation count and the fixation length using an eye tracking system. Based on the level of human preference, the evolutionary programming suggests new clothes that close the human preference by operations such as selection and mutation. The proposed recommendation is tested with several human subjects and the experimental results are demonstrated.",
"title": ""
},
{
"docid": "b723616272d078bdbaaae1cf650ace20",
"text": "Most of industrial robots are still programmed using the typical teaching process, through the use of the robot teach pendant. In this paper is proposed an accelerometer-based system to control an industrial robot using two low-cost and small 3-axis wireless accelerometers. These accelerometers are attached to the human arms, capturing its behavior (gestures and postures). An Artificial Neural Network (ANN) trained with a back-propagation algorithm was used to recognize arm gestures and postures, which then will be used as input in the control of the robot. The aim is that the robot starts the movement almost at the same time as the user starts to perform a gesture or posture (low response time). The results show that the system allows the control of an industrial robot in an intuitive way. However, the achieved recognition rate of gestures and postures (92%) should be improved in future, keeping the compromise with the system response time (160 milliseconds). Finally, the results of some tests performed with an industrial robot are presented and discussed.",
"title": ""
},
{
"docid": "46410be2730753051c4cb919032fad6f",
"text": "categories. That is, since cue validity is the probability of being in some category given some property, this probability will increase (or at worst not decrease) as the size of the category increases (e.g. the probability of being an animal given the property of flying is greater than the probability of bird given flying, since there must be more animals that fly than birds that fly).6 The idea that cohesive categories maximize the probability of particular properties given the category fares no better. In this case, the most specific categories will always be picked out. Medin (1982) has analyzed a variety of formal measures of category cohe siveness and pointed out problems with all of them. For example, one possible principle is to have concepts such that they minimize the similarity between contrasting categories; but minimizing between-category similarity will always lead one to sort a set of n objects into exactly two categories. Similarly, functions based on maximizing within-category similarity while minimizing between-category similarity lead to a variety of problems and counterintuitive expectations about when to accept new members into existent categories versus when to set up new categories. At a less formal but still abstract level, Sternberg (1982) has tried to translate some of Goodman's (e.g. 1983) ideas about induction into possible constraints on natural concepts. Sternberg suggests that the apparent naturalness of a concept increases with the familiarity of the concept (where familiarity is related to Goodman's notion of entrenchment), and decreases with the number of transformations specified in the concept (e.g. aging specifies certain trans",
"title": ""
},
{
"docid": "c4ab0af91f664aa6d7674f986608ab06",
"text": "Recent works showed that Generative Adversarial Networks (GANs) can be successfully applied in unsupervised domain adaptation, where, given a labeled source dataset and an unlabeled target dataset, the goal is to train powerful classifiers for the target samples. In particular, it was shown that a GAN objective function can be used to learn target features indistinguishable from the source ones. In this work, we extend this framework by (i) forcing the learned feature extractor to be domain-invariant, and (ii) training it through data augmentation in the feature space, namely performing feature augmentation. While data augmentation in the image space is a well established technique in deep learning, feature augmentation has not yet received the same level of attention. We accomplish it by means of a feature generator trained by playing the GAN minimax game against source features. Results show that both enforcing domain-invariance and performing feature augmentation lead to superior or comparable performance to state-of-the-art results in several unsupervised domain adaptation benchmarks.",
"title": ""
},
{
"docid": "ba27fff04cd942ae5e1126ed6c18cd61",
"text": "OBJECTIVE\nTo determine, using cone-beam computed tomography (CBCT), the residual ridge height (RRH), sinus floor membrane thickness (MT), and ostium patency (OP) in patients being evaluated for implant placement in the posterior maxilla.\n\n\nMATERIALS AND METHODS\nCBCT scans of 128 patients (199 sinuses) with ≥1 missing teeth in the posterior maxilla were examined. RRH and MT corresponding to each edentulous site were measured. MT >2 mm was considered pathological and categorized by degree of thickening (2-5, 5-10 mm, and >10 mm). Mucosal appearance was classified as \"normal\", \"flat thickening\", or \"polypoid thickening\", and OP was classified as \"patent\" or \"obstructed\". Descriptive and bivariate statistical analyses were performed.\n\n\nRESULTS\nMT >2 mm was observed in 60.6% patients and 53.6% sinuses. Flat and polypoid mucosal thickening had a prevalence of 38.1% and 15.5%, respectively. RRH ≤4 mm was observed in 46.9% and 48.9% of edentulous first and second molar sites, respectively. Ostium obstruction was observed in 13.1% sinuses and was associated with MT of 2-5 mm (6.7%), 5-10 mm (24%), and >10 mm (35.3%, P < 0.001). Polypoid mucosal lesions were more frequently associated with ostium obstruction than flat thickenings (26.7% vs. 17.6%, P < 0.001).\n\n\nCONCLUSION\nThickened sinus membranes (>2 mm) and reduced residual ridge heights (≤4 mm) were highly prevalent in this sample of patients with missing posterior maxillary teeth. Membrane thickening >5 mm, especially of a polypoid type, is associated with an increased risk for ostium obstruction. In the presence of these findings, an ENT referral may be beneficial prior to implant-related sinus floor elevation.",
"title": ""
},
{
"docid": "418e29af01be9655c06df63918f41092",
"text": "A major goal of unsupervised learning is to discover data representations that are useful for subsequent tasks, without access to supervised labels during training. Typically, this goal is approached by minimizing a surrogate objective, such as the negative log likelihood of a generative model, with the hope that representations useful for subsequent tasks will arise as a side effect. In this work, we propose instead to directly target a later desired task by meta-learning an unsupervised learning rule, which leads to representations useful for that task. Here, our desired task (meta-objective) is the performance of the representation on semi-supervised classification, and we meta-learn an algorithm – an unsupervised weight update rule – that produces representations that perform well under this meta-objective. Additionally, we constrain our unsupervised update rule to a be a biologically-motivated, neuron-local function, which enables it to generalize to novel neural network architectures. We show that the meta-learned update rule produces useful features and sometimes outperforms existing unsupervised learning techniques. We show that the metalearned unsupervised update rule generalizes to train networks with different widths, depths, and nonlinearities. It also generalizes to train on data with randomly permuted input dimensions and even generalizes from image datasets to a text task.",
"title": ""
},
{
"docid": "50e386a5b8d3fc086419e730ed91b068",
"text": "Mobile apps are notorious for collecting a wealth of private information from users. Despite significant effort from the research community in developing privacy leak detection tools based on data flow tracking inside the app or through network traffic analysis, it is still unclear whether apps and ad libraries can hide the fact that they are leaking private information. In fact, all existing analysis tools have limitations: data flow tracking suffers from imprecisions that cause false positives, as well as false negatives when the data flow from a source of private information to a network sink is interrupted; on the other hand, network traffic analysis cannot handle encryption or custom encoding. We propose a new approach to privacy leak detection that is not affected by such limitations, and it is also resilient to obfuscation techniques, such as encoding, formatting, encryption, or any other kind of transformation performed on private information before it is leaked. Our work is based on blackbox differential analysis, and it works in two steps: first, it establishes a baseline of the network behavior of an app; then, it modifies sources of private information, such as the device ID and location, and detects leaks by observing deviations in the resulting network traffic. The basic concept of black-box differential analysis is not novel, but, unfortunately, it is not practical enough to precisely analyze modern mobile apps. In fact, their network traffic contains many sources of non-determinism, such as random identifiers, timestamps, and server-assigned session identifiers, which, when not handled properly, cause too much noise to correlate output changes with input changes. The main contribution of this work is to make black-box differential analysis practical when applied to modern Android apps. In particular, we show that the network-based non-determinism can often be explained and eliminated, and it is thus possible to reliably use variations in the network traffic as a strong signal to detect privacy leaks. We implemented this approach in a tool, called AGRIGENTO, and we evaluated it on more than one thousand Android apps. Our evaluation shows that our approach works well in practice and outperforms current state-of-the-art techniques. We conclude our study by discussing several case studies that show how popular apps and ad libraries currently exfiltrate data by using complex combinations of encoding and encryption mechanisms that other approaches fail to detect. Our results show that these apps and libraries seem to deliberately hide their data leaks from current approaches and clearly demonstrate the need for an obfuscation-resilient approach such as ours.",
"title": ""
}
] |
scidocsrr
|
f0fcb0413b7c4a3f438143b8385d6f4f
|
A Handful of Heuristics and Some Propositions for Understanding Resilience in Social-Ecological Systems
|
[
{
"docid": "64817e403b2d80b96bc7ad4a4e456e41",
"text": "The concept of resilience has evolved considerably since Holling’s (1973) seminal paper. Different interpretations of what is meant by resilience, however, cause confusion. Resilience of a system needs to be considered in terms of the attributes that govern the system’s dynamics. Three related attributes of social– ecological systems (SESs) determine their future trajectories: resilience, adaptability, and transformability. Resilience (the capacity of a system to absorb disturbance and reorganize while undergoing change so as to still retain essentially the same function, structure, identity, and feedbacks) has four components—latitude, resistance, precariousness, and panarchy—most readily portrayed using the metaphor of a stability landscape. Adaptability is the capacity of actors in the system to influence resilience (in a SES, essentially to manage it). There are four general ways in which this can be done, corresponding to the four aspects of resilience. Transformability is the capacity to create a fundamentally new system when ecological, economic, or social structures make the existing system untenable. The implications of this interpretation of SES dynamics for sustainability science include changing the focus from seeking optimal states and the determinants of maximum sustainable yield (the MSY paradigm), to resilience analysis, adaptive resource management, and adaptive governance. INTRODUCTION An inherent difficulty in the application of these concepts is that, by their nature, they are rather imprecise. They fall into the same sort of category as “justice” or “wellbeing,” and it can be counterproductive to seek definitions that are too narrow. Because different groups adopt different interpretations to fit their understanding and purpose, however, there is confusion in their use. The confusion then extends to how a resilience approach (Holling 1973, Gunderson and Holling 2002) can contribute to the goals of sustainable development. In what follows, we provide an interpretation and an explanation of how these concepts are reflected in the adaptive cycles of complex, multi-scalar SESs. We need a better scientific basis for sustainable development than is generally applied (e.g., a new “sustainability science”). The “Consortium for Sustainable Development” (of the International Council for Science, the Initiative on Science and Technology for Sustainability, and the Third World Academy of Science), the US National Research Council (1999, 2002), and the Millennium Ecosystem Assessment (2003), have all focused increasing attention on such notions as robustness, vulnerability, and risk. There is good reason for this, as it is these characteristics of social–ecological systems (SESs) that will determine their ability to adapt to and benefit from change. In particular, the stability dynamics of all linked systems of humans and nature emerge from three complementary attributes: resilience, adaptability, and transformability. The purpose of this paper is to examine these three attributes; what they mean, how they interact, and their implications for our future well-being. There is little fundamentally new theory in this paper. What is new is that it uses established theory of nonlinear stability (Levin 1999, Scheffer et al. 2001, Gunderson and Holling 2002, Berkes et al. 
2003) to clarify, explain, and diagnose known examples of regional development, regional poverty, and regional sustainability. These include, among others, the Everglades and the Wisconsin Northern Highlands Lake District in the USA, rangelands and an agricultural catchment in southeastern Australia, the semi-arid savanna in southeastern Zimbabwe, the Kristianstad “Water Kingdom” in southern Sweden, and the Mae Ping valley in northern Thailand. These regions provide examples of both successes and failures of development. Some from rich countries have generated several pulses of solutions over a span of a hundred years and have generated huge costs of recovery (the Everglades). Some from poor countries have emerged in a transformed way but then, in some cases, have been dragged back by higher-level autocratic regimes (Zimbabwe). Some began as local-scale solutions and then developed as transformations across scales from local to regional (Kristianstad and northern Wisconsin). In all of them, the outcomes were determined by the interplay of their resilience, adaptability, and transformability. There is a major distinction between resilience and adaptability, on the one hand, and transformability on the other. Resilience and adaptability have to do with the dynamics of a particular system, or a closely related set of systems. Transformability refers to fundamentally altering the nature of a system. As with many terms under the resilience rubric, the dividing line between “closely related” and “fundamentally altered” can be fuzzy, and subject to interpretation. So we begin by first offering the most general, qualitative set of definitions, without reference to conceptual frameworks, that can be used to describe these terms. We then use some examples and the literature on “basins of attraction” and “stability landscapes” to further refine our definitions. Before giving the definitions, however, we need to briefly introduce the concept of adaptive cycles. Adaptive Cycles and Cross-scale Effects: The dynamics of SESs can be usefully described and analyzed in terms of a cycle, known as an adaptive cycle, that passes through four phases. Two of them—a growth and exploitation phase (r) merging into a conservation phase (K)—comprise a slow, cumulative forward loop of the cycle, during which the dynamics of the system are reasonably predictable. As the K phase continues, resources become increasingly locked up and the system becomes progressively less flexible and responsive to external shocks. It is eventually, inevitably, followed by a chaotic collapse and release phase (Ω) that rapidly gives way to a phase of reorganization (α), which may be rapid or slow, and during which innovation and new opportunities are possible. The Ω and α phases together comprise an unpredictable backloop. The α phase leads into a subsequent r phase, which may resemble the previous r phase or be significantly different. This metaphor of the adaptive cycle is based on observed system changes, and does not imply fixed, regular cycling. Systems can move back from K toward r, or from r directly into Ω, or back from α to Ω. Finally (and importantly), the cycles occur at a number of scales and SESs exist as “panarchies”—adaptive cycles interacting across multiple scales. These cross-scale effects are of great significance in the dynamics of SESs.",
"title": ""
}
] |
[
{
"docid": "487b003ca1b0484df194ba8f3dbc50eb",
"text": "Recent years have seen an explosion in the rate of discovery of genetic defects linked to Parkinson's disease. These breakthroughs have not provided a direct explanation for the disease process. Nevertheless, they have helped transform Parkinson's disease research by providing tangible clues to the neurobiology of the disorder.",
"title": ""
},
{
"docid": "071b34508ab6aa0eefbc9f5966a127ee",
"text": "Existing single view, 3D face reconstruction methods can produce beautifully detailed 3D results, but typically only for near frontal, unobstructed viewpoints. We describe a system designed to provide detailed 3D reconstructions of faces viewed under extreme conditions, out of plane rotations, and occlusions. Motivated by the concept of bump mapping, we propose a layered approach which decouples estimation of a global shape from its mid-level details (e.g., wrinkles). We estimate a coarse 3D face shape which acts as a foundation and then separately layer this foundation with details represented by a bump map. We show how a deep convolutional encoder-decoder can be used to estimate such bump maps. We further show how this approach naturally extends to generate plausible details for occluded facial regions. We test our approach and its components extensively, quantitatively demonstrating the invariance of our estimated facial details. We further provide numerous qualitative examples showing that our method produces detailed 3D face shapes in viewing conditions where existing state of the art often break down.",
"title": ""
},
{
"docid": "d7d0fa6279b356d37c2f64197b3d721d",
"text": "Estimating the pose of a human in 3D given an image or a video has recently received significant attention from the scientific community. The main reasons for this trend are the ever increasing new range of applications (e.g., humanrobot interaction, gaming, sports performance analysis) which are driven by current technological advances. Although recent approaches have dealt with several challenges and have reported remarkable results, 3D pose estimation remains a largely unsolved problem because real-life applications impose several challenges which are not fully addressed by existing methods. For example, estimating the 3D pose of multiple people in an outdoor environment remains a largely unsolved problem. In this paper, we review the recent advances in 3D human pose estimation from RGB images or image sequences. We propose a taxonomy of the approaches based on the input (e.g., single image or video, monocular or multi-view) and in each case we categorize the methods according to their key characteristics. To provide an overview of the current capabilities, we conducted an extensive experimental evaluation of state-of-the-art approaches in a synthetic dataset created specifically for this task, which along with its ground truth is made publicly available for research purposes. Finally, we provide an in-depth discussion of the insights obtained from reviewing the literature and the results of our experiments. Future directions and challenges are identified.",
"title": ""
},
{
"docid": "c1f43e4ad1f72e56327a2afdc740c8b9",
"text": "An increasing number of developers of virtual classrooms offer keyboard support and additional features for improving accessibility. Especially blind users encounter barriers when participating in visually dominated synchronous learning sessions . The existent accessibility features facilitate their participation, but cannot guarantee an equal use in comparison to non-disabled users. This paper summarizes a requirements analysis including an evaluation of virtual classrooms concerning their conformance to common accessibility guidelines and support of non-visual work techniques. It concludes with a presentation of a functional requirements catalogue for accessible virtual classrooms for blind users derived from a user survey, the requirements analysis described and additional findings from literature reviews.",
"title": ""
},
{
"docid": "dc867072ef34de6bb6aafe34fa310d97",
"text": "This paper discusses the use of voice coil actuators to enhance the performance of shift-by-wire systems. This innovative purely electric actuation approach was implemented and applied to a Formula SAE race car. The result was more compact and faster than conventional solutions, which usually employ pneumatic actuators. The designed shift-by-wire system incorporates a control unit based on a digital signal processor, which runs the control algorithms developed for both gear shifting and launching the car. The system was successfully validated through laboratory and on-track tests. In addition, a comparative test with an equivalent pneumatic counterpart was carried out. This showed that an effective use of voice coil actuators enabled the upshift time to be almost halved, thus proving that these actuators are a viable solution to improving shift-by-wire system performance.",
"title": ""
},
{
"docid": "1317212ff37a2ec3f0085fa2f8df9a6e",
"text": "This paper presents an approach for real-time car parking occupancy detection that uses a Convolutional Neural Network (CNN) classifier running on-board of a smart camera with limited resources. Experiments show that our technique is very effective and robust to light condition changes, presence of shadows, and partial occlusions. The detection is reliable, even when tests are performed using images captured from a viewpoint different than the viewpoint used for training. In addition, it also demonstrates its robustness when training and tests are executed on different parking lots. We have tested and compared our solution against state of the art techniques, using a reference benchmark for parking occupancy detection. We have also produced and made publicly available an additional dataset that contains images of the parking lot taken from different viewpoints and in different days with different light conditions. The dataset captures occlusion and shadows that might disturb the classification of the parking spaces status.",
"title": ""
},
{
"docid": "88478e315049f2c155bb611d797e8eb1",
"text": "In this paper we analyze aspects of the intellectual property strategies of firms in the global cosmetics and toilet preparations industry. Using detailed data on all 4,205 EPO patent grants in the relevant IPC class between 1980 and 2001, we find that about 15 percent of all patents are challenged in EPO opposition proceedings, a rate about twice as high as in the overall population of EPO patents. Moreover, opposition in this sector is more frequent than in chemicals-based high technology industries such as biotechnology and pharmaceuticals. About one third of the opposition cases involve multiple opponents. We search for rationales that could explain this surprisingly strong “IP litigation” activity. In a first step, we use simple probability models to analyze the likelihood of opposition as a function of characteristics of the attacked patent. We then introduce owner firm variables and find that major differences across firms in the likelihood of having their patents opposed prevail even after accounting for other influences. Aggressive opposition in the past appears to be associated with a reduction of attacks on own patents. In future work we will look at the determinants of outcomes and duration of these oppositions, in an attempt to understand the firms’ strategies more fully. Acknowledgements This version of the paper was prepared for presentation at the Productivity Program meetingsof the NBER Summer Institute. An earlier version of the paper was presented in February 2002 at the University of Maastricht Workshop on Strategic Management, Innovation and Econometrics, held at Chateau St. Gerlach, Valkenburg. We would like to thank the participants and in particular Franz Palm and John Hagedoorn for their helpful comments.",
"title": ""
},
{
"docid": "05237a9da2d94be2b85011ec2af972ba",
"text": "BACKGROUND\nStrong evidence shows that physical inactivity increases the risk of many adverse health conditions, including major non-communicable diseases such as coronary heart disease, type 2 diabetes, and breast and colon cancers, and shortens life expectancy. Because much of the world's population is inactive, this link presents a major public health issue. We aimed to quantify the eff ect of physical inactivity on these major non-communicable diseases by estimating how much disease could be averted if inactive people were to become active and to estimate gain in life expectancy at the population level.\n\n\nMETHODS\nFor our analysis of burden of disease, we calculated population attributable fractions (PAFs) associated with physical inactivity using conservative assumptions for each of the major non-communicable diseases, by country, to estimate how much disease could be averted if physical inactivity were eliminated. We used life-table analysis to estimate gains in life expectancy of the population.\n\n\nFINDINGS\nWorldwide, we estimate that physical inactivity causes 6% (ranging from 3·2% in southeast Asia to 7·8% in the eastern Mediterranean region) of the burden of disease from coronary heart disease, 7% (3·9-9·6) of type 2 diabetes, 10% (5·6-14·1) of breast cancer, and 10% (5·7-13·8) of colon cancer. Inactivity causes 9% (range 5·1-12·5) of premature mortality, or more than 5·3 million of the 57 million deaths that occurred worldwide in 2008. If inactivity were not eliminated, but decreased instead by 10% or 25%, more than 533 000 and more than 1·3 million deaths, respectively, could be averted every year. We estimated that elimination of physical inactivity would increase the life expectancy of the world's population by 0·68 (range 0·41-0·95) years.\n\n\nINTERPRETATION\nPhysical inactivity has a major health eff ect worldwide. Decrease in or removal of this unhealthy behaviour could improve health substantially.\n\n\nFUNDING\nNone.",
"title": ""
},
{
"docid": "e4460054208dff5bcc8bfc5a56a2011c",
"text": "A biobank may be defined as the long-term storage of biological samples for research or clinical purposes. In addition to storage facilities, a biobank may comprise a complete organization with biological samples, data, personnel, policies, and procedures for handling specimens and performing other services, such as the management of the database and the planning of scientific studies. This combination of facilities, policies, and processes may also be called a biological resource center (BRC) ( www.iarc.fr ). Research using specimens from biobanks is regulated by European Union (EU) recommendations (Recommendations on Research on Human Biological Materials. The draft recommendation on research on human biological materials was approved by CDBI at its plenary meeting on 20 October 2005) and by voluntary best practices from the U.S. National Cancer Institute (NCI) ( http://biospecimens.cancer.gov ) and other organizations. Best practices for the management of research biobanks vary according to the institution and differing international regulations and standards. However, there are many areas of agreement that have resulted in best practices that should be followed in order to establish a biobank for the custodianship of high-quality specimens and data.",
"title": ""
},
{
"docid": "136deaa8656bdb1c2491de4effd09838",
"text": "The fabrication technology advancements lead to place more logic on a silicon die which makes verification more challenging task than ever. The large number of resources is required because more than 70% of the design cycle is used for verification. Universal Verification Methodology was developed to provide a well structured and reusable verification environment which does not interfere with the device under test (DUT). This paper contrasts the reusability of I2C using UVM and introduces how the verification environment is constructed and test cases are implemented for this protocol.",
"title": ""
},
{
"docid": "013325b5f83e73efdbaa2d0b9ac14afb",
"text": "Electricity prices are known to be very volatile and subject to frequent jumps due to system breakdown, demand shocks, and inelastic supply. Appropriate pricing, portfolio, and risk management models should incorporate these spikes. We develop a framework to price European-style options that are consistent with the possibility of market spikes. The pricing framework is based on a regime jump model that disentangles mean-reversion from the spikes. In the model the spikes are truly time-specific events and therefore independent from the meanreverting price process. This closely resembles the characteristics of electricity prices, as we show with Dutch APX spot price data in the period January 2001 till June 2002. Thanks to the independence of the two price processes in the model, we break derivative prices down in a mean-reverting value and a spike value. We use this result to show how the model can be made consistent with forward prices in the market and present closed-form formulas for European-style options. 5001-6182 Business 5601-5689 4001-4280.7 Accountancy, Bookkeeping Finance Management, Business Finance, Corporation Finance Library of Congress Classification (LCC) HG 6024+ Options M Business Administration and Business Economics M 41 G 3 Accounting Corporate Finance and Governance Journal of Economic Literature (JEL) G 19 General Financial Markets: Other 85 A Business General 225 A 220 A Accounting General Financial Management European Business Schools Library Group (EBSLG) 220 R Options market Gemeenschappelijke Onderwerpsontsluiting (GOO) 85.00 Bedrijfskunde, Organisatiekunde: algemeen 85.25 85.30 Accounting Financieel management, financiering Classification GOO 85.30 Financieel management, financiering Bedrijfskunde / Bedrijfseconomie Accountancy, financieel management, bedrijfsfinanciering, besliskunde",
"title": ""
},
{
"docid": "e135bc086a8e8c5e4abbfe4a5b77feb1",
"text": "http://rer.sagepub.com/content/82/1/61 The online version of this article can be found at: DOI: 10.3102/0034654312436980 February 2012 2012 82: 61 originally published online 1 REVIEW OF EDUCATIONAL RESEARCH Benedict Lai, Zeus Simeoni, Matthew Tran and Mariya Yukhymenko Michael F. Young, Stephen Slota, Andrew B. Cutter, Gerard Jalette, Greg Mullin, for Education Our Princess Is in Another Castle : A Review of Trends in Serious Gaming",
"title": ""
},
{
"docid": "602e15657994b4330926ce4822cba71c",
"text": "In many decision support applications, it is important to guarantee the expressive power, easy formalization and interpretability of Mamdani-type fuzzy inference systems (FIS), while ensuring the computational efficiency and accuracy of Sugeno-type FIS. Hence, in this paper we present an approach to transform a Mamdani-type FIS into a Sugeno-type FIS. We consider the problem of mapping Mamdani FIS to Sugeno FIS as an optimization problem and by determining the first order Sugeno parameters, the transformation is achieved. To solve this optimization problem we compare three methods: least squares, genetic algorithms and an adaptive neuro-fuzzy inference system. An illustrative example is presented to discuss the approaches.",
"title": ""
},
{
"docid": "f923b4d061ca3e33805e90208822fd1e",
"text": "Networks provide a powerful way to study complex systems of interacting objects. Detecting network communities-groups of objects that often correspond to functional modules-is crucial to understanding social, technological, and biological systems. Revealing communities allows for analysis of system properties that are invisible when considering only individual objects or the entire system, such as the identification of module boundaries and relationships or the classification of objects according to their functional roles. However, in networks where objects can simultaneously belong to multiple modules at once, the decomposition of a network into overlapping communities remains a challenge. Here we present a new paradigm for uncovering the modular structure of complex networks, based on a decomposition of a network into any combination of overlapping, nonoverlapping, and hierarchically organized communities. We demonstrate on a diverse set of networks coming from a wide range of domains that our approach leads to more accurate communities and improved identification of community boundaries. We also unify two fundamental organizing principles of complex networks: the modularity of communities and the commonly observed core-periphery structure. We show that dense network cores form as an intersection of many overlapping communities. We discover that communities in social, information, and food web networks have a single central dominant core while communities in protein-protein interaction (PPI) as well as product copurchasing networks have small overlaps and form many local cores.",
"title": ""
},
{
"docid": "945902f8d3dabb4e12143783a65457bd",
"text": "Authentication is a mechanism to verify identity of users. Those who can present valid credential are considered as authenticated identities. In this paper, we introduce an adaptive authentication system called Unified Authentication Platform (UAP) which incorporates adaptive control to identify high-risk and suspicious illegitimate login attempts. The system evaluates comprehensive set of known information about the users from the past login history to define their normal behavior profile. The system leverages this information that has been previously stored to determine the security risk and level of assurance of current login attempt.",
"title": ""
},
{
"docid": "e2649203ae3e8648c8ec1eafb7a19d6e",
"text": "This paper describes an algorithm to extract adaptive and quality quadrilateral/hexahedral meshes directly from volumetric data. First, a bottom-up surface topology preserving octree-based algorithm is applied to select a starting octree level. Then the dual contouring method is used to extract a preliminary uniform quad/hex mesh, which is decomposed into finer quads/hexes adaptively without introducing any hanging nodes. The positions of all boundary vertices are recalculated to approximate the boundary surface more accurately. Mesh adaptivity can be controlled by a feature sensitive error function, the regions that users are interested in, or finite element calculation results. Finally, a relaxation based technique is deployed to improve mesh quality. Several demonstration examples are provided from a wide variety of application domains. Some extracted meshes have been extensively used in finite element simulations.",
"title": ""
},
{
"docid": "67ca2df3c7d660600298e517020fe974",
"text": "The recent trend to design more efficient and versatile ships has increased the variety in hybrid propulsion and power supply architectures. In order to improve performance with these architectures, intelligent control strategies are required, while mostly conventional control strategies are applied currently. First, this paper classifies ship propulsion topologies into mechanical, electrical and hybrid propulsion, and power supply topologies into combustion, electrochemical, stored and hybrid power supply. Then, we review developments in propulsion and power supply systems and their control strategies, to subsequently discuss opportunities and challenges for these systems and the associated control. We conclude that hybrid architectures with advanced control strategies can reduce fuel consumption and emissions up to 10–35%, while improving noise, maintainability, manoeuvrability and comfort. Subsequently, the paper summarises the benefits and drawbacks, and trends in application of propulsion and power supply technologies, and it reviews the applicability and benefits of promising advanced control strategies. Finally, the paper analyses which control strategies can improve performance of hybrid systems for future smart and autonomous ships and concludes that a combination of torque, angle of attack, and Model Predictive Control with dynamic settings could improve performance of future smart and more",
"title": ""
},
{
"docid": "428069c804c035e028e9047d6c1f70f7",
"text": "We present a co-designed scheduling framework and platform architecture that together support compositional scheduling of real-time systems. The architecture is built on the Xen virtualization platform, and relies on compositional scheduling theory that uses periodic resource models as component interfaces. We implement resource models as periodic servers and consider enhancements to periodic server design that significantly improve response times of tasks and resource utilization in the system while preserving theoretical schedulability results. We present an extensive evaluation of our implementation using workloads from an avionics case study as well as synthetic ones.",
"title": ""
},
{
"docid": "8e770bdbddbf28c1a04da0f9aad4cf16",
"text": "This paper presents a novel switch-mode power amplifier based on a multicell multilevel circuit topology. The total output voltage of the system is formed by series connection of several switching cells having a low dc-link voltage. Therefore, the cells can be realized using modern low-voltage high-current power MOSFET devices and the dc link can easily be buffered by rechargeable batteries or “super” capacitors to achieve very high amplifier peak output power levels (“flying-battery” concept). The cells are operated in a phase-shifted interleaved pulsewidth-modulation mode, which, in connection with the low partial voltage of each cell, reduces the filtering effort at the output of the total amplifier to a large extent and, consequently, improves the dynamic system behavior. The paper describes the operating principle of the system, analyzes the fundamental relationships being relevant for the circuit design, and gives guidelines for the dimensioning of the control circuit. Furthermore, simulation results as well as results of measurements taken from a laboratory setup are presented.",
"title": ""
},
{
"docid": "3e064a2a984998fe07dde451325505bb",
"text": "Whereas some educational designers believe that students should learn new concepts through explorative problem solving within dedicated environments that constrain key parameters of their search and then support their progressive appropriation of empowering disciplinary forms, others are critical of the ultimate efficacy of this discovery-based pedagogical philosophy, citing an inherent structural challenge of students constructing historically achieved conceptual structures from their ingenuous notions. This special issue presents six educational research projects that, while adhering to principles of discovery-based learning, are motivated by complementary philosophical stances and theoretical constructs. The editorial introduction frames the set of projects as collectively exemplifying the viability and breadth of discovery-based learning, even as these projects: (a) put to work a span of design heuristics, such as productive failure, surfacing implicit know-how, playing epistemic games, problem posing, or participatory simulation activities; (b) vary in their target content and skills, including building electric circuits, solving algebra problems, driving safely in traffic jams, and performing martial-arts maneuvers; and (c) employ different media, such as interactive computer-based modules for constructing models of scientific phenomena or mathematical problem situations, networked classroom collective ‘‘video games,’’ and intercorporeal master–student training practices. The authors of these papers consider the potential generativity of their design heuristics across domains and contexts.",
"title": ""
}
] |
scidocsrr
|
5a9e3b2561807f9b635a9d696959694a
|
ElSe: ellipse selection for robust pupil detection in real-world environments
|
[
{
"docid": "a92aa1ea6faf19a2257dce1dda9cd0d0",
"text": "This paper introduces a novel content-adaptive image downscaling method. The key idea is to optimize the shape and locations of the downsampling kernels to better align with local image features. Our content-adaptive kernels are formed as a bilateral combination of two Gaussian kernels defined over space and color, respectively. This yields a continuum ranging from smoothing to edge/detail preserving kernels driven by image content. We optimize these kernels to represent the input image well, by finding an output image from which the input can be well reconstructed. This is technically realized as an iterative maximum-likelihood optimization using a constrained variation of the Expectation-Maximization algorithm. In comparison to previous downscaling algorithms, our results remain crisper without suffering from ringing artifacts. Besides natural images, our algorithm is also effective for creating pixel art images from vector graphics inputs, due to its ability to keep linear features sharp and connected.",
"title": ""
},
{
"docid": "1705ba479a7ff33eef46e0102d4d4dd0",
"text": "Knowing the user’s point of gaze has significant potential to enhance current human-computer interfaces, given that eye movements can be used as an indicator of the attentional state of a user. The primary obstacle of integrating eye movements into today’s interfaces is the availability of a reliable, low-cost open-source eye-tracking system. Towards making such a system available to interface designers, we have developed a hybrid eye-tracking algorithm that integrates feature-based and model-based approaches and made it available in an open-source package. We refer to this algorithm as \"starburst\" because of the novel way in which pupil features are detected. This starburst algorithm is more accurate than pure feature-based approaches yet is signi?cantly less time consuming than pure modelbased approaches. The current implementation is tailored to tracking eye movements in infrared video obtained from an inexpensive head-mounted eye-tracking system. A validation study was conducted and showed that the technique can reliably estimate eye position with an accuracy of approximately one degree of visual angle.",
"title": ""
}
] |
[
{
"docid": "b2cd02622ec0fc29b54e567c7f10a935",
"text": "Performance and high availability have become increasingly important drivers, amongst other drivers, for user retention in the context of web services such as social networks, and web search. Exogenic and/or endogenic factors often give rise to anomalies, making it very challenging to maintain high availability, while also delivering high performance. Given that service-oriented architectures (SOA) typically have a large number of services, with each service having a large set of metrics, automatic detection of anomalies is nontrivial. Although there exists a large body of prior research in anomaly detection, existing techniques are not applicable in the context of social network data, owing to the inherent seasonal and trend components in the time series data. To this end, we developed two novel statistical techniques for automatically detecting anomalies in cloud infrastructure data. Specifically, the techniques employ statistical learning to detect anomalies in both application, and system metrics. Seasonal decomposition is employed to filter the trend and seasonal components of the time series, followed by the use of robust statistical metrics – median and median absolute deviation (MAD) – to accurately detect anomalies, even in the presence of seasonal spikes. We demonstrate the efficacy of the proposed techniques from three different perspectives, viz., capacity planning, user behavior, and supervised learning. In particular, we used production data for evaluation, and we report Precision, Recall, and F-measure in each case.",
"title": ""
},
{
"docid": "6ee55ac672b1d87d4f4947655d321fb8",
"text": "Federated identity providers, e.g., Facebook and PayPal, offer a convenient means for authenticating users to third-party applications. Unfortunately such cross-site authentications carry privacy and tracking risks. For example, federated identity providers can learn what applications users are accessing; meanwhile, the applications can know the users' identities in reality.\n This paper presents Crypto-Book, an anonymizing layer enabling federated identity authentications while preventing these risks. Crypto-Book uses a set of independently managed servers that employ a (t,n)-threshold cryptosystem to collectively assign credentials to each federated identity (in the form of either a public/private keypair or blinded signed messages). With the credentials in hand, clients can then leverage anonymous authentication techniques such as linkable ring signatures or partially blind signatures to log into third-party applications in an anonymous yet accountable way.\n We have implemented a prototype of Crypto-Book and demonstrated its use with three applications: a Wiki system, an anonymous group communication system, and a whistleblower submission system. Crypto-Book is practical and has low overhead: in a deployment within our research group, Crypto-Book group authentication took 1.607s end-to-end, an overhead of 1.2s compared to traditional non-privacy-preserving federated authentication.",
"title": ""
},
{
"docid": "3f8f09645f4a5a922b8a82e3a54c613d",
"text": "Computers are ubiquitous and have been shown to be contaminated with potentially pathogenic bacteria in some communities. There is no economical way to test all the keyboards and mouse out there, but there are common-sense ways to prevent bacterial contamination or eliminate it if it exists. In this study, swabs specimens were collected from surfaces of 250 computer keyboards and mouse and plated on different bacteriological media. Organisms growing on the media were purified and identified using microbiological standards. It was found that all the tested computer keyboards and mouse devices, were positive for microbial contamination. The percentages of isolated bacteria (Staphylococcus spp., Escherichia spp., Pseudomonas spp. and Bacillus spp.) were 43.3, 40.9, 30.7, 34.1, 18.3, 18.2, 7.7 and 6.8% for computer keyboards and mouse respectively. The isolated bacteria were tested against the 6 different disinfectants (Dettol, Isol, Izal, JIK, Purit and Septol ® ). Antibacterial effects of the disinfectants were also concentration dependent. The agar well diffusion technique for determining Minimum Inhibitory Concentration (MIC) was employed. The Killing rate (K) and Decimal Reduction Time (DRT) of the disinfectants on the organism were also determined. The overall result of this study showed that Dettol ® , followed by JIK ® was highly effective against all the bacterial isolates tested while Septol and Izal ® were least effective. Isol and Purit ® showed moderate antibacterial effects. Keyboards and mouse should be disinfected daily. However, it is recommended that heightened surveillance of the microbial examination of computer keyboards should be undertaken at predetermined intervals.",
"title": ""
},
{
"docid": "64e99944158284edb4474a2d0481f67b",
"text": "Synthesizing face sketches from real photos and its inverse have many applications. However, photo/sketch synthesis remains a challenging problem due to the fact that photo and sketch have different characteristics. In this work, we consider this task as an image-to-image translation problem and explore the recently popular generative models (GANs) to generate high-quality realistic photos from sketches and sketches from photos. Recent GAN-based methods have shown promising results on image-to-image translation problems and photo-to-sketch synthesis in particular, however, they are known to have limited abilities in generating high-resolution realistic images. To this end, we propose a novel synthesis framework called Photo-Sketch Synthesis using Multi-Adversarial Networks, (PS2-MAN) that iteratively generates low resolution to high resolution images in an adversarial way. The hidden layers of the generator are supervised to first generate lower resolution images followed by implicit refinement in the network to generate higher resolution images. Furthermore, since photo-sketch synthesis is a coupled/paired translation problem, we leverage the pair information using CycleGAN framework. Both Image Quality Assessment (IQA) and Photo-Sketch Matching experiments are conducted to demonstrate the superior performance of our framework in comparison to existing state-of-the-art solutions. Code available at: https://github.com/lidan1/PhotoSketchMAN.",
"title": ""
},
{
"docid": "3882687dfa4f053d6ae128cf09bb8994",
"text": "In recent years, we have seen tremendous progress in the field of object detection. Most of the recent improvements have been achieved by targeting deeper feedforward networks. However, many hard object categories such as bottle, remote, etc. require representation of fine details and not just coarse, semantic representations. But most of these fine details are lost in the early convolutional layers. What we need is a way to incorporate finer details from lower layers into the detection architecture. Skip connections have been proposed to combine high-level and low-level features, but we argue that selecting the right features from low-level requires top-down contextual information. Inspired by the human visual pathway, in this paper we propose top-down modulations as a way to incorporate fine details into the detection framework. Our approach supplements the standard bottom-up, feedforward ConvNet with a top-down modulation (TDM) network, connected using lateral connections. These connections are responsible for the modulation of lower layer filters, and the top-down network handles the selection and integration of contextual information and lowlevel features. The proposed TDM architecture provides a significant boost on the COCO benchmark, achieving 28.6 AP for VGG16 and 35.2 AP for ResNet101 networks. Using InceptionResNetv2, our TDM model achieves 37.3 AP, which is the best single-model performance to-date on the COCO testdev benchmark, without any bells and whistles.",
"title": ""
},
{
"docid": "fbe8c71c588e0865b82dd36385ec5bc2",
"text": "OBJECTIVE\nTo evaluate the frequency and the nature of genital trauma in female children in Jordan, and to stress the role of forensics.\n\n\nMETHODS\nThis is a cross-sectional study conducted between March 2008 and December 2011 in Jordan University Hospital, Amman, Jordan. Sixty-three female children were examined for genital trauma after immediate admission. The mechanism of injury was categorized and reported by the examiners as either straddle, non-straddle blunt, or penetrating.\n\n\nRESULTS\nStraddle injury was the cause of injuries in 90.5% of patients, and contusions were the significant type of injury in 34% of patients, followed by abrasions in both labia majora and labia minora. Only one case suffered from non-intact hymen and 2 had hematuria. These 3 cases (4.7%) required surgical intervention and follow-up after 2 weeks.\n\n\nCONCLUSION\nStraddle injuries were the main cause of genital trauma and rarely affect the hymen; however, due to the sensitivity of the subject and the severity of the traumas, forensic physicians should provide consultation and cooperate with gynecologists to exclude or confirm hymenal injuries, where empathy is necessary to mitigate tension associated with such injuries for the sake of the child and the parents as well, along with good management of the injury type.",
"title": ""
},
{
"docid": "3e0a52bc1fdf84279dee74898fcd93bf",
"text": "A variety of abnormal imaging findings of the petrous apex are encountered in children. Many petrous apex lesions are identified incidentally while images of the brain or head and neck are being obtained for indications unrelated to the temporal bone. Differential considerations of petrous apex lesions in children include “leave me alone” lesions, infectious or inflammatory lesions, fibro-osseous lesions, neoplasms and neoplasm-like lesions, as well as a few rare miscellaneous conditions. Some lesions are similar to those encountered in adults, and some are unique to children. Langerhans cell histiocytosis (LCH) and primary and metastatic pediatric malignancies such as neuroblastoma, rhabomyosarcoma and Ewing sarcoma are more likely to be encountered in children. Lesions such as petrous apex cholesterol granuloma, cholesteatoma and chondrosarcoma are more common in adults and are rarely a diagnostic consideration in children. We present a comprehensive pictorial review of CT and MRI appearances of pediatric petrous apex lesions.",
"title": ""
},
{
"docid": "17bf5c037090b90c01b619d821e03839",
"text": "Telling a great story often involves a deliberate alteration of emotions. In this paper, we objectively measure and analyze the narrative trajectories of stories in public speaking and their impact on subjective ratings. We conduct the analysis using the transcripts of over 2000 TED talks and estimate potential audience response using over 5 million spontaneous annotations from the viewers. We use IBM Watson Tone Analyzer to extract sentence-wise emotion, language, and social scores. Our study indicates that it is possible to predict (with AUC as high as 0.88) the subjective ratings of the audience by analyzing the narrative trajectories. Additionally, we find that some trajectories (for example, a flat trajectory of joy) correlate well with some specific ratings (e.g. \"Longwinded') assigned by the viewers. Such an association could be useful in forecasting audience responses using objective analysis.",
"title": ""
},
{
"docid": "5109aa9328094af5e552ed1cab62f09a",
"text": "In this paper, we present a novel approach for human action recognition with histograms of 3D joint locations (HOJ3D) as a compact representation of postures. We extract the 3D skeletal joint locations from Kinect depth maps using Shotton et al.'s method [6]. The HOJ3D computed from the action depth sequences are reprojected using LDA and then clustered into k posture visual words, which represent the prototypical poses of actions. The temporal evolutions of those visual words are modeled by discrete hidden Markov models (HMMs). In addition, due to the design of our spherical coordinate system and the robust 3D skeleton estimation from Kinect, our method demonstrates significant view invariance on our 3D action dataset. Our dataset is composed of 200 3D sequences of 10 indoor activities performed by 10 individuals in varied views. Our method is real-time and achieves superior results on the challenging 3D action dataset. We also tested our algorithm on the MSR Action3D dataset and our algorithm outperforms Li et al. [25] on most of the cases.",
"title": ""
},
{
"docid": "4d42e42469fcead51969f3e642920abc",
"text": "In this paper, we present a dual-band antenna for Long Term Evolution (LTE) handsets. The proposed antenna is composed of a meandered monopole operating in the 700 MHz band and a parasitic element which radiates in the 2.5–2.7 GHz band. Two identical antennas are then closely positioned on the same 120×50 mm2 ground plane (Printed Circuit Board) which represents a modern-size PDA-mobile phone. To enhance the port-to-port isolation of the antennas, a neutralization technique is implemented between them. Scattering parameters, radiations patterns and total efficiencies are presented to illustrate the performance of the antenna-system.",
"title": ""
},
{
"docid": "98b0ce9e943ab1a22c4168ba1c79ceb6",
"text": "Along with rapid advancement of power semiconductors, voltage multipliers have introduced new series of pulsed power generators. In this paper, current topologies of capacitor-diode voltage multipliers (CDVM) are investigated. Alternative structures for voltage multiplier based on power electronics switches are presented in high voltage pulsed power supplies application. The new topology is able to generate the desired high voltage output without increasing the voltage rating of semiconductor devices as well as capacitors. Finally, a comparative analysis is carried out between different CDVM topologies. Experimental and simulation results are presented to verify the analysis.",
"title": ""
},
{
"docid": "9049805c56c9b7fc212fdb4c7f85dfe1",
"text": "Intentions (6) Do all the important errands",
"title": ""
},
{
"docid": "266f89564a34239cf419ed9e83a2c988",
"text": "The potential of high-resolution IKONOS and QuickBird satellite imagery for mapping and analysis of land and water resources at local scales in Minnesota is assessed in a series of three applications. The applications and accuracies evaluated include: (1) classification of lake water clarity (r = 0.89), (2) mapping of urban impervious surface area (r = 0.98), and (3) aquatic vegetation surveys of emergent and submergent plant groups (80% accuracy). There were several notable findings from these applications. For example, modeling and estimation approaches developed for Landsat TM data for continuous variables such as lake water clarity and impervious surface area can be applied to high-resolution satellite data. The rapid delivery of spatial data can be coupled with current GPS and field computer technologies to bring the imagery into the field for cover type validation. We also found several limitations in working with this data type. For example, shadows can influence feature classification and their effects need to be evaluated. Nevertheless, high-resolution satellite data has excellent potential to extend satellite remote sensing beyond what has been possible with aerial photography and Landsat data, and should be of interest to resource managers as a way to create timely and reliable assessments of land and water resources at a local scale. D 2003 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "3275d3ee0dbd5f8a279a802a4e83d7a1",
"text": "Past. Data curation – the process of discovering, integrating, and cleaning data – is one of the oldest data management problems. Unfortunately, it is still the most time consuming and least enjoyable work of data scientists. So far, successful data curation stories are mainly ad-hoc solutions that are either domain-specific (for example, ETL rules) or task-specific (for example, entity resolution). Present. In the era of big data, data curation plays the critical role of taking the value of big data to a new level. However, the power of current data curation solutions are not keeping up with the ever changing data ecosystem in terms of volume, velocity, variety and veracity, mainly due to the high human cost, instead of machine cost, needed for providing the ad-hoc solutions mentioned above. Meanwhile, deep learning is making strides in achieving remarkable successes in areas such as image recognition, natural language processing, and speech recognition. This is largely due to its ability of (automatically) understanding data (features) that are neither domain-specific nor task-specific. Future. Data curation solutions need to keep the pace with the fast-changing data ecosystem, where the main hope is to devise domain-agnostic and task-agnostic solutions. To this end, we start a new, five-year research project, called AutoDC, to unleash the potential of deep learning towards self-driving data curation. We will discuss how different deep learning concepts (for example, distributed representations, model pre-training, transfer learning, and neural program synthesis) can be adapted and extended to solve various data curation problems. We will also showcase some low-hanging fruits about the early encounters between deep learning and data curation happening in AutoDC. We believe that the directions pointed out by this work will not only drive AutoDC towards democratizing data curation, but also serve as a cornerstone for researchers and practitioners to move to a new realm of data curation solutions. PVLDB Reference Format: Saravanan Thirumuruganathan, Nan Tang & Mourad Ouzzani. Data Curation with Deep Learning. PVLDB, 11 (5): xxxx-yyyy, 2018. DOI: https://doi.org/TBD",
"title": ""
},
{
"docid": "49d714c778b820fca5946b9a587d1e17",
"text": "The current Web of Data is producing increasingly large RDF datasets. Massive publication efforts of RDF data driven by initiatives like the Linked Open Data movement, and the need to exchange large datasets has unveiled the drawbacks of traditional RDF representations, inspired and designed by a documentcentric and human-readable Web. Among the main problems are high levels of verbosity/redundancy and weak machine-processable capabilities in the description of these datasets. This scenario calls for efficient formats for publication and exchange. This article presents a binary RDF representation addressing these issues. Based on a set of metrics that characterizes the skewed structure of real-world RDF data, we develop a proposal of an RDF representation that modularly partitions and efficiently represents three components of RDF datasets: Header information, a Dictionary, and the actual Triples structure (thus called HDT). Our experimental evaluation shows that datasets in HDT format can be compacted by more than fifteen times as compared to current naive representations, improving both parsing and processing while keeping a consistent publication scheme. Specific compression techniques over HDT further improve these compression rates and prove to outperform existing compression solutions for efficient RDF exchange. © 2013 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "362301e0a25d8e14054b2eee20d9ba31",
"text": "Preterm birth is “a birth which takes place after at least 20, but less than 37, completed weeks of gestation. This includes both live births, and stillbirths” [15]. Preterm birth may cause problems such as perinatal mortality, serious neonatal morbidity and moderate to severe childhood disability. Between 6-10% of all births in Western countries are preterm and preterm deaths are the cause for more than two-third of all perinatal deaths [9]. While the recent advances in neonatal medicine has greatly increase the chance of survival of infants born after 20 weeks of gestation, these infants still frequently suffer from lifelong handicaps, and their care can exceed a million dollars during the first year of life [5 as cited in 6]. As a first step for preventing preterm birth, decision support tools are needed to help doctors predict preterm birth [6].",
"title": ""
},
{
"docid": "9e3a7af7b8773f43ba32d30f3610af40",
"text": "Several attempts to enhance statistical parametric speech synthesis have contemplated deep-learning-based postfil-ters, which learn to perform a mapping of the synthetic speech parameters to the natural ones, reducing the gap between them. In this paper, we introduce a new pre-training approach for neural networks, applied in LSTM-based postfilters for speech synthesis, with the objective of enhancing the quality of the synthesized speech in a more efficient manner. Our approach begins with an auto-regressive training of one LSTM network, whose is used as an initialization for postfilters based on a denoising autoencoder architecture. We show the advantages of this initialization on a set of multi-stream postfilters, which encompass a collection of denoising autoencoders for the set of MFCC and fundamental frequency parameters of the artificial voice. Results show that the initialization succeeds in lowering the training time of the LSTM networks and achieves better results in enhancing the statistical parametric speech in most cases, when compared to the common random-initialized approach of the networks.",
"title": ""
},
{
"docid": "b0989fb1775c486317b5128bc1c31c76",
"text": "Corporates are entering the brave new world of the internet and digitization without much regard for the fine print of a growing regulation regime. More traditional outsourcing arrangements are already falling foul of the regulators as rules and supervision intensifies. Furthermore, ‘shadow IT’ is proliferating as the attractions of SaaS, mobile, cloud services, social media, and endless new ‘apps’ drive usage outside corporate IT. Initial cost-benefit analyses of the Cloud make such arrangements look immediately attractive but losing control of architecture, security, applications and deployment can have far reaching and damaging regulatory consequences. From research in financial services, this paper details the increasing body of regulations, their inherent risks for businesses and how the dangers can be pre-empted and managed. We then delineate a model for managing these risks specifically focused on investigating, strategizing and governing outsourcing arrangements and related regulatory obligations.",
"title": ""
},
{
"docid": "499f3d46aff5196eff4f7550f8374b67",
"text": "The main task for any level of coach is to construct a training program that will ensure continual progression of an athlete whilst avoiding injury. This is a particularly challenging task with athletes who have had several training years behind them. According to the principles of training, to ensure adaptation, overload in the form of manipulating frequency, volume and intensity must be applied. Furthermore, training exercises must be specific to the target task to ensure a carry over effect. Biomechanics is a sports science sub-discipline which is able to quantify the potential effect of training exercises, rather than leaving it to the coaches \"gut feel\".",
"title": ""
},
{
"docid": "c9d137a71c140337b3f8345efdac17ab",
"text": "For more than 30 years, many authors have attempted to synthesize the knowledge about how an enterprise should structure its business processes, the people that execute them, the Information Systems that support both of these and the IT layer on which such systems operate, in such a way that they will be aligned with the business strategy. This is the challenge of Enterprise Architecture design, which is the theme of this paper. We will provide a brief review of the literature on this subject, with an emphasis on more recent proposals and methods that have been applied in practice. We also select approaches that propose some sort of framework that provides a general Enterprise Architecture in a given domain that can be reused as a basis for specific designs in such a domain. Then we present our proposal for Enterprise Architecture design, which is based on general domain models that we call Enterprise Architecture Patterns.",
"title": ""
}
] |
scidocsrr
|
da5a23224c4ba2beb2436f010e9e892e
|
A New CNN-Based Method for Multi-Directional Car License Plate Detection
|
[
{
"docid": "a1c917d7a685154060ddd67d631ea061",
"text": "In this paper, for finding the place of plate, a real time and fast method is expressed. In our suggested method, the image is taken to HSV color space; then, it is broken into blocks in a stable size. In frequent process, each block, in special pattern is probed. With the appearance of pattern, its neighboring blocks according to geometry of plate as a candidate are considered and increase blocks, are omitted. This operation is done for all of the uncontrolled blocks of images. First, all of the probable candidates are exploited; then, the place of plate is obtained among exploited candidates as density and geometry rate. In probing every block, only its lip pixel is studied which consists 23.44% of block area. From the features of suggestive method, we can mention the lack of use of expensive operation in image process and its low dynamic that it increases image process speed. This method is examined on the group of picture in background, distance and point of view. The rate of exploited plate reached at 99.33% and character recognition rate achieved 97%.",
"title": ""
},
{
"docid": "971a0e51042e949214fd75ab6203e36a",
"text": "This paper presents an automatic recognition method for color text characters extracted from scene images, which is robust to strong distortions, complex background, low resolution and non uniform lightning. Based on a specific architecture of convolutional neural networks, the proposed system automatically learns how to recognize characters without making any assumptions, without applying any preprocessing or post-processing and without using tunable parameters. For this purpose, we use a training set of scene text images extracted from the ICDAR 2003 public training database. The proposed method is compared to recent character recognition techniques for scene images based on the ICDAR 2003 public samples dataset in order to contribute to the state-of-the-art method comparison efforts initiated in ICDAR 2003. Experimental results show an encouraging average recognition rate of 84.53%, ranging from 93.47% for clear images to 67.86% for seriously distorted images.",
"title": ""
}
] |
[
{
"docid": "a4d2ca4e14991df8495efc51edc7168a",
"text": "In this paper, the theory of reciprocal screws is reviewed. Reciprocal screw systems associated with some frequently used kinematic pairs and chains are developed. Then, the application of reciprocal screw systems for the Jacobian analysis of parallel manipulators is described. The Jacobian and singular conditions of a six-dof parallel manipulator are analyzed.",
"title": ""
},
{
"docid": "7554941bfcde72c640419ab591f02bbc",
"text": "An always-growing number of cars are equipped with radars, mainly used for drivers and passengers' safety. In particular, according to European Telecommunications Standards Institute (ETSI) one specific frequency band is dedicated to automatic cruise control long-range radar operating around 77 GHz (W-band). After the discussion of the Mie scattering formulation applied to a weather radar working in the W-band, the proposal of a new Z-R equation to be used for correct rain estimation is given. Functional requirements to adapt an automatic cruise control long-range radar to a mini-weather radar are analyzed and the technical specifications are evaluated. Results provide the basis for the use of a 77 GHz automotive anti-collision radar for meteorological purposes.",
"title": ""
},
{
"docid": "18fb10d9ee35423fb2b637e4dc10ae47",
"text": "New emerging micro gas turbine generator sets and turbocompressor systems push the speed limits of rotating machinery. To directly connect to these applications, ultra-high-speed electrical drive systems are needed. Therefore a 1 kW, 500000 rpm machine and the according power and control electronics are designed and built. This paper includes design considerations for the mechanical and electromagnetic machine design. Furthermore, a voltage source inverter with an additional dc-dc converter is described, and sensorless rotor position detection and digital control is used to drive the machine. Finally, the hardware and experimental results are presented.",
"title": ""
},
{
"docid": "dce63433a9900b9b4e6d9d420713b38d",
"text": "Pathogenic microorganisms must cope with extremely low free-iron concentrations in the host's tissues. Some fungal pathogens rely on secreted haemophores that belong to the Common in Fungal Extracellular Membrane (CFEM) protein family, to extract haem from haemoglobin and to transfer it to the cell's interior, where it can serve as a source of iron. Here we report the first three-dimensional structure of a CFEM protein, the haemophore Csa2 secreted by Candida albicans. The CFEM domain adopts a novel helical-basket fold that consists of six α-helices, and is uniquely stabilized by four disulfide bonds formed by its eight signature cysteines. The planar haem molecule is bound between a flat hydrophobic platform located on top of the helical basket and a peripheral N-terminal ‘handle’ extension. Exceptionally, an aspartic residue serves as the CFEM axial ligand, and so confers coordination of Fe3+ haem, but not of Fe2+ haem. Histidine substitution mutants of this conserved Asp acquired Fe2+ haem binding and retained the capacity to extract haem from haemoglobin. However, His-substituted CFEM proteins were not functional in vivo and showed disturbed haem exchange in vitro, which suggests a role for the oxidation-state-specific Asp coordination in haem acquisition by CFEM proteins.",
"title": ""
},
{
"docid": "bb19e122737f08997585999575d2a394",
"text": "In this paper, shadow detection and compensation are treated as image enhancement tasks. The principal components analysis (PCA) and luminance based multi-scale Retinex (LMSR) algorithm are explored to detect and compensate shadow in high resolution satellite image. PCA provides orthogonally channels, thus allow the color to remain stable despite the modification of luminance. Firstly, the PCA transform is used to obtain the luminance channel, which enables us to detect shadow regions using histogram threshold technique. After detection, the LMSR technique is used to enhance the image only in luminance channel to compensate for shadows. Then the enhanced image is obtained by inverse transform of PCA. The final shadow compensation image is obtained by comparison of the original image, the enhanced image and the shadow detection image. Experiment results show the effectiveness of the proposed method.",
"title": ""
},
{
"docid": "511342f43f7b5f546e72e8651ae4e313",
"text": "With the introduction of the Microsoft Kinect for Windows v2 (Kinect v2), an exciting new sensor is available to robotics and computer vision researchers. Similar to the original Kinect, the sensor is capable of acquiring accurate depth images at high rates. This is useful for robot navigation as dense and robust maps of the environment can be created. Opposed to the original Kinect working with the structured light technology, the Kinect v2 is based on the time-of-flight measurement principle and might also be used outdoors in sunlight. In this paper, we evaluate the application of the Kinect v2 depth sensor for mobile robot navigation. The results of calibrating the intrinsic camera parameters are presented and the minimal range of the depth sensor is examined. We analyze the data quality of the measurements for indoors and outdoors in overcast and direct sunlight situations. To this end, we introduce empirically derived noise models for the Kinect v2 sensor in both axial and lateral directions. The noise models take the measurement distance, the angle of the observed surface, and the sunlight incidence angle into account. These models can be used in post-processing to filter the Kinect v2 depth images for a variety of applications.",
"title": ""
},
{
"docid": "63663dbc320556f7de09b5060f3815a6",
"text": "There has been a long history of applying AI technologies to address software engineering problems especially on tool automation. On the other hand, given the increasing importance and popularity of AI software, recent research efforts have been on exploring software engineering solutions to improve the productivity of developing AI software and the dependability of AI software. The emerging field of intelligent software engineering is to focus on two aspects: (1) instilling intelligence in solutions for software engineering problems; (2) providing software engineering solutions for intelligent software. This extended abstract shares perspectives on these two aspects of intelligent software engineering.",
"title": ""
},
{
"docid": "05dc82e180514733bfc1f0bf5638178e",
"text": "There is growing interest in improving the design of deep network architectures to be both accurate and low cost. This paper explores semantic specialization as a mechanism for improving the computational efficiency (accuracy-per-unit-cost) of inference in the context of image classification. Specifically, we propose a network architecture template called HydraNet, which enables state-of-the-art architectures for image classification to be transformed into dynamic architectures which exploit conditional execution for efficient inference. HydraNets are wide networks containing distinct components specialized to compute features for visually similar classes, but they retain efficiency by dynamically selecting only a small number of components to evaluate for any one input image. This design is made possible by a soft gating mechanism that encourages component specialization during training and accurately performs component selection during inference. We evaluate the HydraNet approach on both the CIFAR-100 and ImageNet classification tasks. On CIFAR, applying the HydraNet template to the ResNet and DenseNet family of models reduces inference cost by 2-4× while retaining the accuracy of the baseline architectures. On ImageNet, applying the HydraNet template improves accuracy up to 2.5% when compared to an efficient baseline architecture with similar inference cost.",
"title": ""
},
{
"docid": "a8f27679e13572d00d5eae3496cec014",
"text": "Today, we are forward to meeting an older people society in the world. The elderly people have become a high risk of dementia or depression. In recent years, with the rapid development of internet of things (IoT) techniques, it has become a feasible solution to build a system that combines IoT and cloud techniques for detecting and preventing the elderly dementia or depression. This paper proposes an IoT-based elderly behavioral difference warning system for early depression and dementia warning. The proposed system is composed of wearable smart glasses, a BLE-based indoor trilateration position, and a cloud-based service platform. As a result, the proposed system can not only reduce human and medical costs, but also improve the cure rate of depression or delay the deterioration of dementia.",
"title": ""
},
{
"docid": "25c14589a19c2d1dea78f222d4a328ab",
"text": "BACKGROUND\nParkinson's disease (PD) is the most prevalent movement disorder of the central nervous system, and affects more than 6.3 million people in the world. The characteristic motor features include tremor, bradykinesia, rigidity, and impaired postural stability. Current therapy based on augmentation or replacement of dopamine is designed to improve patients' motor performance but often leads to levodopa-induced adverse effects, such as dyskinesia and motor fluctuation. Clinicians must regularly monitor patients in order to identify these effects and other declines in motor function as soon as possible. Current clinical assessment for Parkinson's is subjective and mostly conducted by brief observations made during patient visits. Changes in patients' motor function between visits are hard to track and clinicians are not able to make the most informed decisions about the course of therapy without frequent visits. Frequent clinic visits increase the physical and economic burden on patients and their families.\n\n\nOBJECTIVE\nIn this project, we sought to design, develop, and evaluate a prototype mobile cloud-based mHealth app, \"PD Dr\", which collects quantitative and objective information about PD and would enable home-based assessment and monitoring of major PD symptoms.\n\n\nMETHODS\nWe designed and developed a mobile app on the Android platform to collect PD-related motion data using the smartphone 3D accelerometer and to send the data to a cloud service for storage, data processing, and PD symptoms severity estimation. To evaluate this system, data from the system were collected from 40 patients with PD and compared with experts' rating on standardized rating scales.\n\n\nRESULTS\nThe evaluation showed that PD Dr could effectively capture important motion features that differentiate PD severity and identify critical symptoms. For hand resting tremor detection, the sensitivity was .77 and accuracy was .82. For gait difficulty detection, the sensitivity was .89 and accuracy was .81. In PD severity estimation, the captured motion features also demonstrated strong correlation with PD severity stage, hand resting tremor severity, and gait difficulty. The system is simple to use, user friendly, and economically affordable.\n\n\nCONCLUSIONS\nThe key contribution of this study was building a mobile PD assessment and monitoring system to extend current PD assessment based in the clinic setting to the home-based environment. The results of this study proved feasibility and a promising future for utilizing mobile technology in PD management.",
"title": ""
},
{
"docid": "a9d948498c0ad0d99759636ea3ba4d1a",
"text": "Recently, Real Time Location Systems (RTLS) have been designed to provide location information of positioning target. The kernel of RTLS is localization algorithm, range-base localization algorithm is concerned as high precision. This paper introduces real-time range-based indoor localization algorithms, including Time of Arrival, Time Difference of Arrival, Received Signal Strength Indication, Time of Flight, and Symmetrical Double Sided Two Way Ranging. Evaluation criteria are proposed for assessing these algorithms, namely positioning accuracy, scale, cost, energy efficiency, and security. We also introduce the latest some solution, compare their Strengths and weaknesses. Finally, we give a recommendation about selecting algorithm from the viewpoint of the practical application need.",
"title": ""
},
{
"docid": "b8b4e582fbcc23a5a72cdaee1edade32",
"text": "In recent years, research into the mining of user check-in behavior for point-of-interest (POI) recommendations has attracted a lot of attention. Existing studies on this topic mainly treat such recommendations in a traditional manner—that is, they treat POIs as items and check-ins as ratings. However, users usually visit a place for reasons other than to simply say that they have visited. In this article, we propose an approach referred to as Urban POI-Walk (UPOI-Walk), which takes into account a user's social-triggered intentions (SI), preference-triggered intentions (PreI), and popularity-triggered intentions (PopI), to estimate the probability of a user checking-in to a POI. The core idea of UPOI-Walk involves building a HITS-based random walk on the normalized check-in network, thus supporting the prediction of POI properties related to each user's preferences. To achieve this goal, we define several user--POI graphs to capture the key properties of the check-in behavior motivated by user intentions. In our UPOI-Walk approach, we propose a new kind of random walk model—Dynamic HITS-based Random Walk—which comprehensively considers the relevance between POIs and users from different aspects. On the basis of similitude, we make an online recommendation as to the POI the user intends to visit. To the best of our knowledge, this is the first work on urban POI recommendations that considers user check-in behavior motivated by SI, PreI, and PopI in location-based social network data. Through comprehensive experimental evaluations on two real datasets, the proposed UPOI-Walk is shown to deliver excellent performance.",
"title": ""
},
{
"docid": "ca4f93646f4975239771a2f49c108569",
"text": "In this report we describe a case of the Zoon's balanitis in a boy with HIV (AIDS B2). The clinical presentation, failure of topical treatment, cure by circumcision, and the histopathology findings are presented.",
"title": ""
},
{
"docid": "316c106ae8830dcf8a3cf64775f56ebe",
"text": "Friendship is the cornerstone to build a social network. In online social networks, statistics show that the leading reason for user to create a new friendship is due to recommendation. Thus the accuracy of recommendation matters. In this paper, we propose a Bayesian Personalized Ranking Deep Neural Network (BayDNN) model for friend recommendation in social networks. With BayDNN, we achieve significant improvement on two public datasets: Epinions and Slashdot. For example, on Epinions dataset, BayDNN significantly outperforms the state-of-the-art algorithms, with a 5% improvement on NDCG over the best baseline.\n The advantages of the proposed BayDNN mainly come from its underlying convolutional neural network (CNN), which offers a mechanism to extract latent deep structural feature representations of the complicated network data, and a novel Bayesian personalized ranking idea, which precisely captures the users' personal bias based on the extracted deep features. To get good parameter estimation for the neural network, we present a fine-tuned pre-training strategy for the proposed BayDNN model based on Poisson and Bernoulli probabilistic models.",
"title": ""
},
{
"docid": "bb1081f8c28c3ebcfd37a4d7a3c09757",
"text": "There is increasing interest in using Field Programmable Gate Arrays (FPGAs) as platforms for computer architecture simulation. This paper is concerned with modeling superscalar processors with FPGAs. To be transformative, the FPGA modeling framework should meet three criteria. (1) Configurable: The framework should be able to model diverse superscalar processors, like a software model. In particular, it should be possible to vary superscalar parameters such as fetch, issue, and retire widths, depths of pipeline stages, queue sizes, etc. (2) Automatic: The framework should be able to automatically and efficiently map any one of its superscalar processor configurations to the FPGA. (3) Realistic: The framework should model a modern superscalar microarchitecture in detail, ideally with prototype quality, to enable a new era and depth of microarchitecture research. A framework that meets these three criteria will enjoy the convenience of a software model, the speed of an FPGA model, and the experience of a prototype. This paper describes FPGA-Sim, a configurable, automatically FPGA-synthesizable, and register-transfer-level (RTL) model of an out-of-order superscalar processor. FPGA-Sim enables FPGA modeling of diverse superscalar processors out-of-the-box. Moreover, its direct RTL implementation yields the fidelity of a hardware prototype.",
"title": ""
},
{
"docid": "b058b1b8a00dff42425f189693fb16b0",
"text": "Transition-metal dichalcogenides like molybdenum disulphide have attracted great interest as two-dimensional materials beyond graphene due to their unique electronic and optical properties. Solution-phase processes can be a viable method for producing printable single-layer chalcogenides. Molybdenum disulphide can be exfoliated into monolayer flakes using organolithium reduction chemistry; unfortunately, the method is hampered by low yield, submicron flake size and long lithiation time. Here we report a high-yield exfoliation process using lithium, potassium and sodium naphthalenide where an intermediate ternary Li(x)MX(n) crystalline phase (X=selenium, sulphur, and so on) is produced. Using a two-step expansion and intercalation method, we produce high-quality single-layer molybdenum disulphide sheets with unprecedentedly large flake size, that is up to 400 μm(2). Single-layer dichalcogenide inks prepared by this method may be directly inkjet-printed on a wide range of substrates.",
"title": ""
},
{
"docid": "714843ca4a3c99bfc95e89e4ff82aeb1",
"text": "The development of new technologies for mapping structural and functional brain connectivity has led to the creation of comprehensive network maps of neuronal circuits and systems. The architecture of these brain networks can be examined and analyzed with a large variety of graph theory tools. Methods for detecting modules, or network communities, are of particular interest because they uncover major building blocks or subnetworks that are particularly densely connected, often corresponding to specialized functional components. A large number of methods for community detection have become available and are now widely applied in network neuroscience. This article first surveys a number of these methods, with an emphasis on their advantages and shortcomings; then it summarizes major findings on the existence of modules in both structural and functional brain networks and briefly considers their potential functional roles in brain evolution, wiring minimization, and the emergence of functional specialization and complex dynamics.",
"title": ""
},
{
"docid": "58fffa67053a82875177f32e126c2e43",
"text": "Cracking-resistant password vaults have been recently proposed with the goal of thwarting offline attacks. This requires the generation of synthetic password vaults that are statistically indistinguishable from real ones. In this work, we establish a conceptual link between this problem and steganography, where the stego objects must be undetectable among cover objects. We compare the two frameworks and highlight parallels and differences. Moreover, we transfer results obtained in the steganography literature into the context of decoy generation. Our results include the infeasibility of perfectly secure decoy vaults and the conjecture that secure decoy vaults are at least as hard to construct as secure steganography.",
"title": ""
},
{
"docid": "d9471b93ddb5cedfeebd514f9ed6f9af",
"text": "Anomaly detection is the process of identifying unexpected items or events in datasets, which differ from the norm. In contrast to standard classification tasks, anomaly detection is often applied on unlabeled data, taking only the internal structure of the dataset into account. This challenge is known as unsupervised anomaly detection and is addressed in many practical applications, for example in network intrusion detection, fraud detection as well as in the life science and medical domain. Dozens of algorithms have been proposed in this area, but unfortunately the research community still lacks a comparative universal evaluation as well as common publicly available datasets. These shortcomings are addressed in this study, where 19 different unsupervised anomaly detection algorithms are evaluated on 10 different datasets from multiple application domains. By publishing the source code and the datasets, this paper aims to be a new well-funded basis for unsupervised anomaly detection research. Additionally, this evaluation reveals the strengths and weaknesses of the different approaches for the first time. Besides the anomaly detection performance, computational effort, the impact of parameter settings as well as the global/local anomaly detection behavior is outlined. As a conclusion, we give an advise on algorithm selection for typical real-world tasks.",
"title": ""
},
{
"docid": "138c14c92a7e9545299d96df7fc80aea",
"text": "In this chapter, supervised machine learning methods are described in the context of microarray applications. The most widely used families of machine learning methods are described, along with various approaches to learner assessment. The Bioconductor interfaces to machine learning tools are described and illustrated. Key problems of model selection and interpretation are reviewed in examples.",
"title": ""
}
] |
scidocsrr
|
301bcd4222a0c21bb75b8e4714797acf
|
Visual data mining in software archives
|
[
{
"docid": "d45e3bace2d24dd2b33b13328eacc499",
"text": "A sequential pattern in data mining is a finite series of elements such as A → B → C → D where A, B, C, and D are elements of the same domain. The mining of sequential patterns is designed to find patterns of discrete events that frequently happen in the same arrangement along a timeline. Like association and clustering, the mining of sequential patterns is among the most popular knowledge discovery techniques that apply statistical measures to extract useful information from large datasets. As our computers become more powerful, we are able to mine bigger datasets and obtain hundreds of thousands of sequential patterns in full detail. With this vast amount of data, we argue that neither data mining nor visualization by itself can manage the information and reflect the knowledge effectively. Subsequently, we apply visualization to augment data mining in a study of sequential patterns in large text corpora. The result shows that we can learn more and more quickly in an integrated visual datamining environment.",
"title": ""
}
] |
[
{
"docid": "074fd9d0c7bd9e5f31beb77c140f61d0",
"text": "In this chapter, we examine the self and identity by considering the different conditions under which these are affected by the groups to which people belong. From a social identity perspective we argue that group commitment, on the one hand, and features of the social context, on the other hand, are crucial determinants of central identity concerns. We develop a taxonomy of situations to reflect the different concerns and motives that come into play as a result of threats to personal and group identity and degree of commitment to the group. We specify for each cell in this taxonomy how these issues of self and social identity impinge upon a broad variety of responses at the perceptual, affective, and behavioral level.",
"title": ""
},
{
"docid": "20b6d457acf80a2171880ca312def57f",
"text": "Recent evidence points to a possible overlap in the neural systems underlying the distressing experience that accompanies physical pain and social rejection (Eisenberger et al., 2003). The present study tested two hypotheses that stem from this suggested overlap, namely: (1) that baseline sensitivity to physical pain will predict sensitivity to social rejection and (2) that experiences that heighten social distress will heighten sensitivity to physical pain as well. In the current study, participants' baseline cutaneous heat pain unpleasantness thresholds were assessed prior to the completion of a task that manipulated feelings of social distress. During this task, participants played a virtual ball-tossing game, allegedly with two other individuals, in which they were either continuously included (social inclusion condition) or they were left out of the game by either never being included or by being overtly excluded (social rejection conditions). At the end of the game, three pain stimuli were delivered and participants rated the unpleasantness of each. Results indicated that greater baseline sensitivity to pain (lower pain unpleasantness thresholds) was associated with greater self-reported social distress in response to the social rejection conditions. Additionally, for those in the social rejection conditions, greater reports of social distress were associated with greater reports of pain unpleasantness to the thermal stimuli delivered at the end of the game. These results provide additional support for the hypothesis that pain distress and social distress share neurocognitive substrates. Implications for clinical populations are discussed.",
"title": ""
},
{
"docid": "45252c6ffe946bf0f9f1984f60ffada6",
"text": "Reparameterization of variational auto-encoders with continuous random variables is an effective method for reducing the variance of their gradient estimates. In this work we reparameterize discrete variational auto-encoders using the Gumbel-Max perturbation model that represents the Gibbs distribution using the arg max of randomly perturbed encoder. We subsequently apply the direct loss minimization technique to propagate gradients through the reparameterized arg max. The resulting gradient is estimated by the difference of the encoder gradients that are evaluated in two arg max predictions.",
"title": ""
},
{
"docid": "076adb210e56d34225d302baa0183c1c",
"text": "It has long been recognised that sleep is more than a passive recovery state or mere inconvenience. Sleep plays an active role in maintaining a healthy body, brain and mind, and numerous studies suggest that sleep is integral to learning and memory. The importance of sleep for cognition is clear in studies of those experiencing sleep deprivation, who show consistent deficits across cognitive domains in relation to nonsleep-deprived controls, particularly in tasks that tax attention or executive functions (1). Poor sleep has been associated with poor grades, and academic performance suffers when sleep is sacrificed for extra study (2). Thus, it is perhaps unsurprising that children with developmental disorders of learning and cognition often suffer from sleep disturbances. These have been well documented in children with autism and attention-deficit/hyperactivity disorder (ADHD), where sleep problems can be particularly severe. However, a growing body of evidence suggests that sleep can be atypical across a spectrum of learning disorders. Understanding the ways in which sleep is affected in different developmental disorders can not only support the design and implementation of effective assessment and remediation programs, but can also inform theories of how sleep supports cognition in typical development. The study by Carotenuto et al. (3) in this issue makes a valuable contribution to this literature by looking at sleep disturbances in children with developmental dyslexia. Dyslexia is the most common specific learning disorder, affecting around one in 10 children in our classrooms. It is characterised by difficulties with reading and spelling and is primarily caused by a deficit in phonological processing. However, dyslexia often co-occurs with other developmental disorders, such as ADHD and specific language impairment, and there can be striking heterogeneity between children. This has led to the suggestion that dyslexia can result from complex combinations of multiple risk factors and impairments (4). Consequently, research attention is turning towards the wider constellation of subclinical difficulties often experienced by children with dyslexia, including potential sleep problems. Two preliminary studies have found differences in the sleep architecture of children with dyslexia in comparison with typical peers, using overnight sleep EEG recordings (polysomnography) (5,6). Notably, children with dyslexia showed unusually long periods of slow wave sleep and an increased number of sleep spindles. Slow wave sleep and spindles are related to language learning, most notably through promoting the consolidation of new vocabulary (7). Children with dyslexia have pronounced deficits in learning new oral vocabulary, providing a plausible theoretical link between sleep disturbances and language difficulties. If sleep problems do in fact exacerbate the learning difficulties associated with dyslexia, as well as impacting on daily cognitive function, this could have important implications for intervention and support programs. However, an important first step is to establish the nature and extent of sleep disturbances in dyslexia. Previous studies (5,6) have used small samples (N = <30) and examined a large array of sleep parameters on a small number of unusual nights (where children were wearing sleep recording equipment), as opposed to looking at global patterns over time. 
As such, how representative these findings are is questionable, and consequently these studies should be viewed as hypothesis-generating rather than hypothesis-testing. Carotenuto et al. (3) address some of these concerns, administering questionnaire measures of sleep habits to the parents of 147 children with dyslexia and 766 children without dyslexia, aged 8–12 years. A sample of this size allows for a robust analysis of sleep characteristics. Therefore, their findings that children with dyslexia showed higher rates of several markers of sleep disorders lend significant weight to suggestions that dyslexia might be associated with an increased risk for sleep problems. Importantly, the sleep questionnaire used by Carotenuto et al. (3) allows for a breakdown of sleep disturbances. It is interesting to note that they found the greatest difficulties in initiating and maintaining sleep, sleep breathing disorders and disorders of arousal. This closely mirrors the types of sleep problem documented in children with ADHD (8). While Carotenuto et al. (3) took care to exclude children with comorbid diagnoses, many children with dyslexia show subtle features of attention disorders that do not reach clinical thresholds. Future studies that can establish whether sleep disturbances are associated with subclinical attention problems or dyslexia per se will be particularly informative for understanding which cognitive skills most critically relate to sleep. This is also vital information for",
"title": ""
},
{
"docid": "bd21815804115f2c413265660a78c203",
"text": "Outsourcing, internationalization, and complexity characterize today's aerospace supply chains, making aircraft manufacturers structurally dependent on each other. Despite several complexity-related supply chain issues reported in the literature, aerospace supply chain structure has not been studied due to a lack of empirical data and suitable analytical toolsets for studying system structure. In this paper, we assemble a large-scale empirical data set on the supply network of Airbus and apply the new science of networks to analyze how the industry is structured. Our results show that the system under study is a network, formed by communities connected by hub firms. Hub firms also tend to connect to each other, providing cohesiveness, yet making the network vulnerable to disruptions in them. We also show how network science can be used to identify firms that are operationally critical and that are key to disseminating information.",
"title": ""
},
{
"docid": "3e1023f2ff554d7cb3e5e02ba4181237",
"text": "Convolutional neural network (CNN) offers significant accuracy in image detection. To implement image detection using CNN in the Internet of Things (IoT) devices, a streaming hardware accelerator is proposed. The proposed accelerator optimizes the energy efficiency by avoiding unnecessary data movement. With unique filter decomposition technique, the accelerator can support arbitrary convolution window size. In addition, max-pooling function can be computed in parallel with convolution by using separate pooling unit, thus achieving throughput improvement. A prototype accelerator was implemented in TSMC 65-nm technology with a core size of 5 mm2. The accelerator can support major CNNs and achieve 152GOPS peak throughput and 434GOPS/W energy efficiency at 350 mW, making it a promising hardware accelerator for intelligent IoT devices.",
"title": ""
},
{
"docid": "f765a0c29c6d553ae1c7937b48416e9c",
"text": "Although the topic of psychological well-being has generated considerable research, few studies have investigated how adults themselves define positive functioning. To probe their conceptions of well-being, interviews were conducted with a community sample of 171 middle-aged (M = 52.5 years, SD = 8.7) and older (M = 73.5 years, SD = 6.1) men and women. Questions pertained to general life evaluations, past life experiences, conceptions of well-being, and views of the aging process. Responses indicated that both age groups and sexes emphasized an \"others orientation\" (being a caring, compassionate person, and having good relationships) in defining well-being. Middle-aged respondents stressed self-confidence, self-acceptance, and self-knowledge, whereas older persons cited accepting change as an important quality of positive functioning. In addition to attention to positive relations with others as an index of well-being, lay views pointed to a sense of humor, enjoying life, and accepting change as criteria of successful aging.",
"title": ""
},
{
"docid": "52f4b881941ba82bd8505aca6326821c",
"text": "Labview and National Instruments hardware is used to measure, analyze and solve multiple Industry problems, mostly in small mechatronics systems or fixed manipulators. myRIO have been used worldwide in the last few years to provide a reliable data acquisition. While in Industry and in Universities myRIO is vastly used, Arduino is still the most common tool for hobby or student based projects, therefore Mobile Robotics platforms integrate Arduino more often than myRIO. In this study, an overall hardware description will be presented, together with the software designed for autonomous and remote navigation in unknown scenarios. The designed robot was used in EuroSkills 2016 competition in Sweden.",
"title": ""
},
{
"docid": "e0db3c5605ea2ea577dda7d549e837ae",
"text": "This paper presents a system based on new operators for handling sets of propositional clauses represented by means of ZBDDs. The high compression power of such data structures allows efficient encodings of structured instances. A specialized operator for the distribution of sets of clauses is introduced and used for performing multi-resolution on clause sets. Cut eliminations between sets of clauses of exponential size may then be performed using polynomial size data structures. The ZRES system, a new implementation of the Davis-Putnam procedure of 1960, solves two hard problems for resolution, that are currently out of the scope of the best SAT provers.",
"title": ""
},
{
"docid": "7709c76755f61920182653774721a47b",
"text": "Game-based learning (GBL) combines pedagogy and interactive entertainment to create a virtual learning environment in an effort to motivate and regain the interest of a new generation of ‘digital native’ learners. However, this approach is impeded by the limited availability of suitable ‘serious’ games and high-level design tools to enable domain experts to develop or customise serious games. Model Driven Engineering (MDE) goes some way to provide the techniques required to generate a wide variety of interoperable serious games software solutions whilst encapsulating and shielding the technicality of the full software development process. In this paper, we present our Game Technology Model (GTM) which models serious game software in a manner independent of any hardware or operating platform specifications for use in our Model Driven Serious Game Development Framework.",
"title": ""
},
{
"docid": "e49515145975eadccc20b251d56f0140",
"text": "High mortality of nestling cockatiels (Nymphicus hollandicus) was observed in one breeding flock in Slovakia. The nestling mortality affected 50% of all breeding pairs. In general, all the nestlings in affected nests died. Death occurred suddenly in 4to 6-day-old birds, most of which had full crops. No feather disorders were diagnosed in this flock. Two dead nestlings were tested by nested PCR for the presence of avian polyomavirus (APV) and Chlamydophila psittaci and by single-round PCR for the presence of beak and feather disease virus (BFDV). After the breeding season ended, a breeding pair of cockatiels together with their young one and a fledgling budgerigar (Melopsittacus undulatus) were examined. No clinical alterations were observed in these birds. Haemorrhages in the proventriculus and irregular foci of yellow liver discoloration were found during necropsy in the young cockatiel and the fledgling budgerigar. Microscopy revealed liver necroses and acute haemolysis in the young cockatiel and confluent liver necroses and heart and kidney haemorrhages in the budgerigar. Two dead cockatiel nestlings, the young cockatiel and the fledgling budgerigar were tested positive for APV, while the cockatiel adults were negative. The presence of BFDV or Chlamydophila psittaci DNA was detected in none of the birds. The specificity of PCR was confirmed by the sequencing of PCR products amplified from the samples from the young cockatiel and the fledgling budgerigar. The sequences showed 99.6–100% homology with the previously reported sequences. To our knowledge, this is the first report of APV infection which caused a fatal disease in parent-raised cockatiel nestlings and merely subclinical infection in budgerigar nestlings.",
"title": ""
},
{
"docid": "7f84e215df3d908249bde3be7f2b3cab",
"text": "With the emergence of ever-growing advanced vehicular applications, the challenges to meet the demands from both communication and computation are increasingly prominent. Without powerful communication and computational support, various vehicular applications and services will still stay in the concept phase and cannot be put into practice in the daily life. Thus, solving this problem is of great importance. The existing solutions, such as cellular networks, roadside units (RSUs), and mobile cloud computing, are far from perfect because they highly depend on and bear the cost of additional infrastructure deployment. Given tremendous number of vehicles in urban areas, putting these underutilized vehicular resources into use offers great opportunity and value. Therefore, we conceive the idea of utilizing vehicles as the infrastructures for communication and computation, named vehicular fog computing (VFC), which is an architecture that utilizes a collaborative multitude of end-user clients or near-user edge devices to carry out communication and computation, based on better utilization of individual communication and computational resources of each vehicle. By aggregating abundant resources of individual vehicles, the quality of services and applications can be enhanced greatly. In particular, by discussing four types of scenarios of moving and parked vehicles as the communication and computational infrastructures, we carry on a quantitative analysis of the capacities of VFC. We unveil an interesting relationship among the communication capability, connectivity, and mobility of vehicles, and we also find out the characteristics about the pattern of parking behavior, which benefits from the understanding of utilizing the vehicular resources. Finally, we discuss the challenges and open problems in implementing the proposed VFC system as the infrastructures. Our study provides insights for this novel promising paradigm, as well as research topics about vehicular information infrastructures.",
"title": ""
},
{
"docid": "77f5216ede8babf4fb3b2bcbfc9a3152",
"text": "Various aspects of the theory of random walks on graphs are surveyed. In particular, estimates on the important parameters of access time, commute time, cover time and mixing time are discussed. Connections with the eigenvalues of graphs and with electrical networks, and the use of these connections in the study of random walks is described. We also sketch recent algorithmic applications of random walks, in particular to the problem of sampling.",
"title": ""
},
{
"docid": "515e2b726f0e5e7ceb5938fa5d917694",
"text": "Text preprocessing and segmentation are critical tasks in search and text mining applications. Due to the huge amount of documents that are exclusively presented in PDF format, most of the Data Mining (DM) and Information Retrieval (IR) systems must extract content from the PDF files. In some occasions this is a difficult task: the result of the extraction process from a PDF file is plain text, and it should be returned in the same order as a human would read the original PDF file. However, current tools for PDF text extraction fail in this objective when working with complex documents with multiple columns. For instance, this is the case of official government bulletins with legal information. In this task, it is mandatory to get correct and ordered text as a result of the application of the PDF extractor. It is very usual that a legal article in a document refers to a previous article and they should be offered in the right sequential order. To overcome these difficulties we have designed a new method for extraction of text in PDFs that simulates the human reading order. We evaluated our method and compared it against other PDF extraction tools and algorithms. Evaluation of our approach shows that it significantly outperforms the results of the existing tools and algorithms.",
"title": ""
},
{
"docid": "70a7aa831b2036a50de1751ed1ace6d9",
"text": "Short stature and later maturation of youth artistic gymnasts are often attributed to the effects of intensive training from a young age. Given limitations of available data, inadequate specification of training, failure to consider other factors affecting growth and maturation, and failure to address epidemiological criteria for causality, it has not been possible thus far to establish cause-effect relationships between training and the growth and maturation of young artistic gymnasts. In response to this ongoing debate, the Scientific Commission of the International Gymnastics Federation (FIG) convened a committee to review the current literature and address four questions: (1) Is there a negative effect of training on attained adult stature? (2) Is there a negative effect of training on growth of body segments? (3) Does training attenuate pubertal growth and maturation, specifically, the rate of growth and/or the timing and tempo of maturation? (4) Does training negatively influence the endocrine system, specifically hormones related to growth and pubertal maturation? The basic information for the review was derived from the active involvement of committee members in research on normal variation and clinical aspects of growth and maturation, and on the growth and maturation of artistic gymnasts and other youth athletes. The committee was thus thoroughly familiar with the literature on growth and maturation in general and of gymnasts and young athletes. Relevant data were more available for females than males. Youth who persisted in the sport were a highly select sample, who tended to be shorter for chronological age but who had appropriate weight-for-height. Data for secondary sex characteristics, skeletal age and age at peak height velocity indicated later maturation, but the maturity status of gymnasts overlapped the normal range of variability observed in the general population. Gymnasts as a group demonstrated a pattern of growth and maturation similar to that observed among short-, normal-, late-maturing individuals who were not athletes. Evidence for endocrine changes in gymnasts was inadequate for inferences relative to potential training effects. Allowing for noted limitations, the following conclusions were deemed acceptable: (1) Adult height or near adult height of female and male artistic gymnasts is not compromised by intensive gymnastics training. (2) Gymnastics training does not appear to attenuate growth of upper (sitting height) or lower (legs) body segment lengths. (3) Gymnastics training does not appear to attenuate pubertal growth and maturation, neither rate of growth nor the timing and tempo of the growth spurt. (4) Available data are inadequate to address the issue of intensive gymnastics training and alterations within the endocrine system.",
"title": ""
},
{
"docid": "04065494023ed79211af3ba0b5bc4c7e",
"text": "The glucagon-like peptides include glucagon, GLP-1, and GLP-2, and exert diverse actions on nutrient intake, gastrointestinal motility, islet hormone secretion, cell proliferation and apoptosis, nutrient absorption, and nutrient assimilation. GIP, a related member of the glucagon peptide superfamily, also regulates nutrient disposal via stimulation of insulin secretion. The actions of these peptides are mediated by distinct members of the glucagon receptor superfamily of G protein-coupled receptors. These receptors exhibit unique patterns of tissue-specific expression, exhibit considerable amino acid sequence identity, and share similar structural and functional properties with respect to ligand binding and signal transduction. This article provides an overview of the biology of these receptors with an emphasis on understanding the unique actions of glucagon-related peptides through studies of the biology of their cognate receptors.",
"title": ""
},
{
"docid": "81667ba5e59bd04d979b2206b54b5b32",
"text": "Parallelism is an important rhetorical device. We propose a machine learning approach for automated sentence parallelism identification in student essays. We b uild an essay dataset with sentence level parallelism annotated. We derive features by combining gen eralized word alignment strategies and the alignment measures between word sequences. The experiment al r sults show that sentence parallelism can be effectively identified with a F1 score of 82% at pair-wise level and 72% at parallelism chunk l evel. Based on this approach, we automatically identify sentence parallelism in more than 2000 student essays and study the correlation between the use of sentence parall elism and the types and quality of essays.",
"title": ""
}
] |
scidocsrr
|
4d33181562123c4c953a3c9b5e852002
|
A new intuitionism: Meaning, memory, and development in Fuzzy-Trace Theory.
|
[
{
"docid": "4fa7ee44cdc4b0cd439723e9600131bd",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use. Please contact the publisher regarding any further use of this work. Publisher contact information may be obtained at http://www.jstor.org/journals/ucpress.html. Each copy of any part of a JSTOR transmission must contain the same copyright notice that appears on the screen or printed page of such transmission.",
"title": ""
},
{
"docid": "e8a9dffcb6c061fe720e7536387f5116",
"text": "The diffusion decision model allows detailed explanations of behavior in two-choice discrimination tasks. In this article, the model is reviewed to show how it translates behavioral dataaccuracy, mean response times, and response time distributionsinto components of cognitive processing. Three experiments are used to illustrate experimental manipulations of three components: stimulus difficulty affects the quality of information on which a decision is based; instructions emphasizing either speed or accuracy affect the criterial amounts of information that a subject requires before initiating a response; and the relative proportions of the two stimuli affect biases in drift rate and starting point. The experiments also illustrate the strong constraints that ensure the model is empirically testable and potentially falsifiable. The broad range of applications of the model is also reviewed, including research in the domains of aging and neurophysiology.",
"title": ""
}
] |
[
{
"docid": "1c94a04fdeb39ba00357e4dcc87d3862",
"text": "Automatic segmentation of speech is an important problem that is useful in speech recognition, synthesis and coding. We explore in this paper, the robust parameter set, weighting function and distance measure for reliable segmentation of noisy speech. It is found that the MFCC parameters, successful in speech recognition. holds the best promise for robust segmentation also. We also explored a variety of symmetric and asymmetric weighting lifters. from which it is found that a symmetric lifter of the form 1 + A sin1/2(πn/L), 0 ≤ n ≤ L − 1, for MFCC dimension L, is most effective. With regard to distance measure, the direct L2 norm is found adequate.",
"title": ""
},
{
"docid": "4ba3ac9a0ef8f46fe92401843b1eaba7",
"text": "This paper explores gender-based differences in multimodal deception detection. We introduce a new large, gender-balanced dataset, consisting of 104 subjects with 520 different responses covering multiple scenarios, and perform an extensive analysis of different feature sets extracted from the linguistic, physiological, and thermal data streams recorded from the subjects. We describe a multimodal deception detection system, and show how the two genders achieve different detection rates for different individual and combined feature sets, with accuracy figures reaching 80%. Our experiments and results allow us to make interesting observations concerning the differences in the multimodal detection of deception in males and females.",
"title": ""
},
{
"docid": "0521059457af9e8e770e1a0ea523374d",
"text": "This paper presents a novel method for incorporating a capacitive touch interface into existing passive RFID tag architectures without additional parts or changes to the manufacturing process. Our approach employs the tag's antenna as a dual function element in which the antenna simultaneously acts as both a low-frequency capacitive fringing electric field sensor and also as an RF antenna. To demonstrate the feasibility of our approach, we have prototyped a passive UHF tag with capacitive sensing capability integrated into the antenna port using the WISP tag. Finally, we describe how this technology can be used for touch interfaces as well as other applications with the addition of a LED for user feedback.",
"title": ""
},
{
"docid": "de5331af1c27428379c16d6009eaa7c8",
"text": "The problem of computing good graph colorings arises in many diverse applications , such as in the estimation of sparse Jacobians and in the development of eecient, parallel iterative methods for solving sparse linear systems. In this paper we present an asynchronous graph coloring heuristic well suited to distributed memory parallel computers. We present experimental results obtained on an Intel iPSC/860 which demonstrate that, for graphs arising from nite element applications , the heuristic exhibits scalable performance and generates colorings usually within three or four colors of the best-known linear time sequential heuristics. For bounded degree graphs, we show that the expected running time of the heuristic under the PRAM computation model is bounded by EO(log(n)= log log(n)). This bound is an improvement over the previously known best upper bound for the expected running time of a random heuristic for the graph coloring problem.",
"title": ""
},
{
"docid": "1ae3eb81ae75f6abfad4963ee0056be5",
"text": "Due to the shared responsibility model of clouds, tenants have to manage the security of their workloads and data. Developing security solutions using VMs or containers creates further problems as these resources also need to be secured. In this paper, we advocate for taking a serverless approach by proposing six serverless design patterns to build security services in the cloud. For each design pattern, we describe the key advantages and present applications and services utilizing the pattern. Using the proposed patterns as building blocks, we introduce a threat-intelligence platform that collects logs from various sources, alerts malicious activities, and takes actions against such behaviors. We also discuss the limitations of serverless design and how future implementations can overcome those limitations.",
"title": ""
},
{
"docid": "40525527409abf3702690ed2eb51b200",
"text": "Remote storage delivers a cost effective solution for data storage. If data is of a sensitive nature, it should be encrypted prior to outsourcing to ensure confidentiality; however, searching then becomes challenging. Searchable encryption is a well-studied solution to this problem. Many schemes only consider the scenario where users can search over the entirety of the encrypted data. In practice, sensitive data is likely to be classified according to an access control policy and different users should have different access rights. It is unlikely that all users have unrestricted access to the entire data set. Current schemes that consider multi-level access to searchable encryption are predominantly based on asymmetric primitives. We investigate symmetric solutions to multi-level access in searchable encryption where users have different access privileges to portions of the encrypted data and are not permitted to search over, or learn information about, data for which they are not authorised.",
"title": ""
},
{
"docid": "1638f79eff48774b65051468dc9d4167",
"text": "Past research suggests that a lower waist-to-chest ratio (WCR) in men (i.e., narrower waist and broader chest) is viewed as attractive by women. However, little work has directly examined why low WCRs are preferred. The current work merged insights from theory and past research to develop a model examining perceived dominance, fitness, and protection ability as mediators of to WCR-attractiveness relationship. These mediators and their link to both short-term (sexual) and long-term (relational) attractiveness were simultaneously tested by having 151 women rate one of 15 avatars, created from 3D body scans. Men with lower WCR were perceived as more physically dominant, physically fit, and better able to protect loved ones; these characteristics differentially mediated the effect of WCR on short-term, long-term, and general attractiveness ratings. Greater understanding of the judgments women form regarding WCR may yield insights into motivations by men to manipulate their body image.",
"title": ""
},
{
"docid": "d486fca984c9cf930a4d1b4367949016",
"text": "In this paper, we present a generative model to generate a natural language sentence describing a table region, e.g., a row. The model maps a row from a table to a continuous vector and then generates a natural language sentence by leveraging the semantics of a table. To deal with rare words appearing in a table, we develop a flexible copying mechanism that selectively replicates contents from the table in the output sequence. Extensive experiments demonstrate the accuracy of the model and the power of the copying mechanism. On two synthetic datasets, WIKIBIO and SIMPLEQUESTIONS, our model improves the current state-of-the-art BLEU-4 score from 34.70 to 40.26 and from 33.32 to 39.12, respectively. Furthermore, we introduce an open-domain dataset WIKITABLETEXT including 13,318 explanatory sentences for 4,962 tables. Our model achieves a BLEU-4 score of 38.23, which outperforms template based and language model based approaches.",
"title": ""
},
{
"docid": "d21a5cfa20b1b0cc667243f1df47229d",
"text": "The Segment Maxima Method for calculating gamut boundary descriptors of both colour reproduction media and colour images is introduced. Methods for determining the gamut boundary along a given line of mapping used by gamut mapping algorithms are then described, whereby these methods use the Gamut Boundary Descriptor obtained using the Segment Maxima Method. Throughout the article, the focus is both on colour reproduction media and colour images as well as on the suitability of the methods for use in gamut mapping. © 2000 John Wiley & Sons, Inc. Col Res Appl, 25, 394–401, 2000",
"title": ""
},
{
"docid": "e0f88ddc85cfe4cdcbe761b85d2781d8",
"text": "Intermodal Transportation Systems (ITS) are logistics networks integrating different transportation services, designed to move goods from origin to destination in a timely manner and using intermodal transportation means. This paper addresses the problem of the modeling and management of ITS at the operational level considering the impact that the new Information and Communication Technologies (ICT) tools can have on management and control of these systems. An effective ITS model at the operational level should focus on evaluating performance indices describing activities, resources and concurrency, by integrating information and financial flows. To this aim, ITS are regarded as discrete event systems and are modeled in a Petri net framework. We consider as a case study the ferry terminal of Trieste (Italy) that is described and simulated in different operative conditions characterized by different types of ICT solutions and information. The simulation results show that ICT have a huge potential for efficient real time management and operation of ITS, as well as an effective impact on the infrastructures.",
"title": ""
},
{
"docid": "8dc493568e94d94370f78e663da7df96",
"text": "Expertise in C++, C, Perl, Haskell, Linux system administration. Technical experience in compiler design and implementation, release engineering, network administration, FPGAs, hardware design, probabilistic inference, machine learning, web search engines, cryptography, datamining, databases (SQL, Oracle, PL/SQL, XML), distributed knowledge bases, machine vision, automated web content generation, 2D and 3D graphics, distributed computing, scientific and numerical computing, optimization, virtualization (Xen, VirtualBox). Also experience in risk analysis, finance, game theory, firm behavior, international economics. Familiar with Java, C++ Standard Template Library, Java Native Interface, Java Foundation Classes, Android development, MATLAB, CPLEX, NetPBM, Cascading Style Sheets (CSS), Tcl/Tk, Windows system administration, Mac OS X system administration, ElasticSearch, modifying the Ubuntu installer.",
"title": ""
},
{
"docid": "acbf633cbf612cd0d203d9c191a156da",
"text": "In this work an efficient parallel implementation of the Chirp Scaling Algorithm for Synthetic Aperture Radar processing is presented. The architecture selected for the implementation is the general purpose graphic processing unit, as it is well suited for scientific applications and real-time implementation of algorithms. The analysis of a first implementation led to several improvements which resulted in an important speed-up. Details of the issues found are explained, and the performance improvement of their correction explicitly shown.",
"title": ""
},
{
"docid": "89d91df8511c0b0f424dd5fa20fcd212",
"text": "We present a new fast algorithm for background modeling and subtraction. Sample background values at each pixel are quantized into codebooks which represent a compressed form of background model for a long image sequence. This allows us to capture structural background variation due to periodic-like motion over a long period of time under limited memory. Our method can handle scenes containing moving backgrounds or illumination variations (shadows and highlights), and it achieves robust detection for compressed videos. We compared our method with other multimode modeling techniques.",
"title": ""
},
{
"docid": "6537921976c2779d1e7d921c939ec64d",
"text": "Stencil computation sweeps over a spatial grid over multiple time steps to perform nearest-neighbor computations. The bandwidth-to-compute requirement for a large class of stencil kernels is very high, and their performance is bound by the available memory bandwidth. Since memory bandwidth grows slower than compute, the performance of stencil kernels will not scale with increasing compute density. We present a novel 3.5D-blocking algorithm that performs 2.5D-spatial and temporal blocking of the input grid into on-chip memory for both CPUs and GPUs. The resultant algorithm is amenable to both thread- level and data-level parallelism, and scales near-linearly with the SIMD width and multiple-cores. Our performance numbers are faster or comparable to state-of-the-art-stencil implementations on CPUs and GPUs. Our implementation of 7-point-stencil is 1.5X-faster on CPUs, and 1.8X faster on GPUs for single- precision floating point inputs than previously reported numbers. For Lattice Boltzmann methods, the corresponding speedup number on CPUs is 2.1X.",
"title": ""
},
{
"docid": "728cd1808e1a37d5fe6758f795f912cb",
"text": "We introduce Sapienz, an approach to Android testing that uses multi-objective search-based testing to automatically explore and optimise test sequences, minimising length, while simultaneously maximising coverage and fault revelation. Sapienz combines random fuzzing, systematic and search-based exploration, exploiting seeding and multi-level instrumentation. Sapienz significantly outperforms (with large effect size) both the state-of-the-art technique Dynodroid and the widely-used tool, Android Monkey, in 7/10 experiments for coverage, 7/10 for fault detection and 10/10 for fault-revealing sequence length. When applied to the top 1,000 Google Play apps, Sapienz found 558 unique, previously unknown crashes. So far we have managed to make contact with the developers of 27 crashing apps. Of these, 14 have confirmed that the crashes are caused by real faults. Of those 14, six already have developer-confirmed fixes.",
"title": ""
},
{
"docid": "6c153f12481f365039f47c252acbe4ee",
"text": "DNA methylation has emerged as promising epigenetic markers for disease diagnosis. Both the differential mean (DM) and differential variability (DV) in methylation have been shown to contribute to transcriptional aberration and disease pathogenesis. The presence of confounding factors in large scale EWAS may affect the methylation values and hamper accurate marker discovery. In this paper, we propose a exible framework called methylDMV which allows for confounding factors adjustment and enables simultaneous characterization and identification of CpGs exhibiting DM only, DV only and both DM and DV. The proposed framework also allows for prioritization and selection of candidate features to be included in the prediction algorithm. We illustrate the utility of methylDMV in several TCGA datasets. An R package methylDMV implementing our proposed method is available at http://www.ams.sunysb.edu/~pfkuan/softwares.html#methylDMV.",
"title": ""
},
{
"docid": "ad5b8a1bcea8265351669be4f4c49476",
"text": "Software startups are newly created companies with little operating history and oriented towards producing cutting-edge products. As their time and resources are extremely scarce, and one failed project can put them out of business, startups need effective practices to face with those unique challenges. However, only few scientific studies attempt to address characteristics of failure, especially during the earlystage. With this study we aim to raise our understanding of the failure of early-stage software startup companies. This state-of-practice investigation was performed using a literature review followed by a multiple-case study approach. The results present how inconsistency between managerial strategies and execution can lead to failure by means of a behavioral framework. Despite strategies reveal the first need to understand the problem/solution fit, actual executions prioritize the development of the product to launch on the market as quickly as possible to verify product/market fit, neglecting the necessary learning process.",
"title": ""
},
{
"docid": "c05bf2dedcb7837f877c7a3e257f4222",
"text": "In this letter, we propose a tunable patch antenna made of a slotted rectangular patch loaded by a number of posts close to the patch edge. The posts are short circuited to the ground plane via a set of PIN diode switches. Simulations and measurements verify the possibility of tuning the antenna in subbands from 620 to 1150 MHz. Good matching has been achieved over most of the bands. Other performed designs show that more than one octave can be achieved using the proposed structure.",
"title": ""
},
{
"docid": "745a7d7e606b3a26fa7f8e970ac33f84",
"text": "Countless studies have recently purported to demonstrate effects of goal priming; however, it is difficult to muster unambiguous support for the claims of these studies because of the lack of clear criteria for determining whether goals, as opposed to alternative varieties of mental representations, have indeed been activated. Therefore, the authors offer theoretical guidelines that may help distinguish between semantic, procedural, and goal priming. Seven principles that are hallmarks of self-regulatory processes are proposed: Goal-priming effects (a) involve value, (b) involve postattainment decrements in motivation, (c) involve gradients as a function of distance to the goal, (d) are proportional to the product of expectancy and value, (e) involve inhibition of conflicting goals, (f) involve self-control, and (g) are moderated by equifinality and multifinality. How these principles might help distinguish between automatic activation of goals and priming effects that do not involve goals is discussed.",
"title": ""
},
{
"docid": "7b8fc21d27c9eb7c8e1df46eec7d6b6d",
"text": "This paper examines two methods - magnet shifting and optimizing the magnet pole arc - for reducing cogging torque in permanent magnet machines. The methods were applied to existing machine designs and their performance was calculated using finite-element analysis (FEA). Prototypes of the machine designs were constructed and experimental results obtained. It is shown that the FEA predicted the cogging torque to be nearly eliminated using the two methods. However, there was some residual cogging in the prototypes due to manufacturing difficulties. In both methods, the back electromotive force was improved by reducing harmonics while preserving the magnitude.",
"title": ""
}
] |
scidocsrr
|
419add2f3cf8b9a0843cb0984cd7fd70
|
Ontology-based semantic similarity: A new feature-based approach
|
[
{
"docid": "70574bc8ad9fece3328ca77f17eec90f",
"text": "Five different proposed measures of similarity or semantic distance in WordNet were experimentally compared by examining their performance in a real-word spelling correction system. It was found that Jiang and Conrath’s measure gave the best results overall. That of Hirst and St-Onge seriously over-related, that of Resnik seriously under-related, and those of Lin and of Leacock and Chodorow fell in between.",
"title": ""
}
] |
[
{
"docid": "0e4ab7e416ec8293865d8c12b8ba34c4",
"text": "Estimation techniques in computer vision applications must estimate accurate model parameters despite small-scale noise in the data, occasional large-scale measurement errors (outliers), and measurements from multiple populations in the same data set. Increasingly, robust estimation techniques, some borrowed from the statistics literature and others described in the computer vision literature, have been used in solving these parameter estimation problems. Ideally, these techniques should effectively ignore the outliers and measurements from other populations, treating them as outliers, when estimating the parameters of a single population. Two frequently used techniques are least-median of squares (LMS) [P. J. Rousseeuw, J. Amer. Statist. Assoc., 79 (1984), pp. 871–880] and M-estimators [Robust Statistics: The Approach Based on Influence Functions, F. R. Hampel et al., John Wiley, 1986; Robust Statistics, P. J. Huber, John Wiley, 1981]. LMS handles large fractions of outliers, up to the theoretical limit of 50% for estimators invariant to affine changes to the data, but has low statistical efficiency. M-estimators have higher statistical efficiency but tolerate much lower percentages of outliers unless properly initialized. While robust estimators have been used in a variety of computer vision applications, three are considered here. In analysis of range images—images containing depth or X, Y , Z measurements at each pixel instead of intensity measurements—robust estimators have been used successfully to estimate surface model parameters in small image regions. In stereo and motion analysis, they have been used to estimate parameters of what is called the “fundamental matrix,” which characterizes the relative imaging geometry of two cameras imaging the same scene. Recently, robust estimators have been applied to estimating a quadratic image-to-image transformation model necessary to create a composite, “mosaic image” from a series of images of the human retina. In each case, a straightforward application of standard robust estimators is insufficient, and carefully developed extensions are used to solve the problem.",
"title": ""
},
{
"docid": "bb152dfa033022423a65c4712e50a490",
"text": "We study interactive oracle proofs (IOPs) [BCS16, RRR16], which combine aspects of probabilistically checkable proofs (PCPs) and interactive proofs (IPs). We present IOP constructions and techniques that let us achieve tradeoffs in proof length versus query complexity that are not known to be achievable via PCPs or IPs alone. Our main results are: 1. Circuit satisfiability has 3-round IOPs with linear proof length (counted in bits) and constant query complexity. 2. Reed–Solomon codes have 2-round IOPs of proximity with linear proof length and constant query complexity. 3. Tensor product codes have 1-round IOPs of proximity with sublinear proof length and constant query complexity. (A familiar example of a tensor product code is the Reed–Muller code with a bound on individual degrees.) For all the above, known PCP constructions give quasilinear proof length and constant query complexity [BS08, Din07]. Also, for circuit satisfiability, [BKK13] obtain PCPs with linear proof length but sublinear (and super-constant) query complexity. As in [BKK13], we rely on algebraic-geometry codes to obtain our first result; but, unlike that work, our use of such codes is much “lighter” because we do not rely on any automorphisms of the code. We obtain our results by proving and combining “IOP-analogues” of tools underlying numerous IPs and PCPs: • Interactive proof composition. Proof composition [AS98] is used to reduce the query complexity of PCP verifiers, at the cost of increasing proof length by an additive factor that is exponential in the verifier’s randomness complexity. We prove a composition theorem for IOPs where this additive factor is linear. • Sublinear sumcheck. The sumcheck protocol [LFKN92, Sha92] is an IP that enables the verifier to check the sum of values of a low-degree multi-variate polynomial on an exponentially-large hypercube, but the verifier’s running time depends linearly on the bound on individual degrees. We prove a sumcheck protocol for IOPs where this dependence is sublinear (e.g., polylogarithmic). Our work demonstrates that even constant-round IOPs are more efficient than known PCPs and IPs.",
"title": ""
},
{
"docid": "9002cefa8b062c49858439d54c460472",
"text": "In heterogeneous or shared clusters, distributed learning processes are slowed down by straggling workers. In this work, we propose LB-BSP, a new synchronization scheme that eliminates stragglers by adapting each worker's training load (batch size) to its processing capability. For training in shared production clusters, a prerequisite for deciding the workers' batch sizes is to know their processing speeds before each iteration starts. To this end, we adopt NARX, an extended recurrent neural network that accounts for both the historical speeds and the driving factors such as CPU and memory in prediction.",
"title": ""
},
{
"docid": "5b75ba17d9e66a77fed91e7aa1cd9c27",
"text": "Building a machine learning model is an iterative process. A data scientist will build many tens to hundreds of models before arriving at one that meets some acceptance criteria (e.g. AUC cutoff, accuracy threshold). However, the current style of model building is ad-hoc and there is no practical way for a data scientist to manage models that are built over time. As a result, the data scientist must attempt to \"remember\" previously constructed models and insights obtained from them. This task is challenging for more than a handful of models and can hamper the process of sensemaking. Without a means to manage models, there is no easy way for a data scientist to answer questions such as \"Which models were built using an incorrect feature?\", \"Which model performed best on American customers?\" or \"How did the two top models compare?\" In this paper, we describe our ongoing work on ModelDB, a novel end-to-end system for the management of machine learning models. ModelDB clients automatically track machine learning models in their native environments (e.g. scikit-learn, spark.ml), the ModelDB backend introduces a common layer of abstractions to represent models and pipelines, and the ModelDB frontend allows visual exploration and analyses of models via a web-based interface.",
"title": ""
},
{
"docid": "2021f6474af6233c2a919b96dc4758e4",
"text": "We introduce a new approach for finding overlapping clusters given pairwise similarities of objects. In particular, we relax the problem of correlation clustering by allowing an object to be assigned to more than one cluster. At the core of our approach is an optimization problem in which each data point is mapped to a small set of labels, representing membership in different clusters. The objective is to find a mapping so that the given similarities between objects agree as much as possible with similarities taken over their label sets. The number of labels can vary across objects. To define a similarity between label sets, we consider two measures: (i) a 0–1 function indicating whether the two label sets have non-zero intersection and (ii) the Jaccard coefficient between the two label sets. The algorithm we propose is an iterative local-search method. The definitions of label set similarity give rise to two non-trivial optimization problems, which, for the measures of set-intersection and Jaccard, we solve using a greedy strategy and non-negative least squares, respectively. We also develop a distributed version of our algorithm based on the BSP model and implement it using a Pregel framework. Our algorithm uses as input pairwise similarities of objects and can thus be applied when clustering structured objects for which feature vectors are not available. As a proof of concept, we apply our algorithms on three different and complex application domains: trajectories, amino-acid sequences, and textual documents.",
"title": ""
},
{
"docid": "950c29856f0afb6d51f94d75a76e6941",
"text": "A developmental theory of reckless behavior among adolescents is presented, in which sensation seeking and adolescent egocentrism are especially prominent factors. Findings from studies of automobile driving, sex without contraception, illegal drug use, and minor criminal activity are presented in evidence of this. The influence of peers is then discussed and reinterpreted in the light of sensation seeking and adolescent egocentrism. Socialization influences are considered in interaction with sensation seeking and adolescent egocentrism, and the terms narrow and broad socialization are introduced. Factors that may be responsible for the decline of reckless behavior with age are discussed. © 1992 Academic",
"title": ""
},
{
"docid": "2aa26d9903b5c30b2bee8ec0737b8667",
"text": "The authors conducted a comprehensive review to understand the relation between personality and aggressive behavior, under provoking and nonprovoking conditions. The qualitative review revealed that some personality variables influenced aggressive behavior under both neutral and provocation conditions, whereas others influenced aggressive behavior only under provocation. Studies that assessed personality variables and that directly measured aggressive behavior were included in the quantitative review. Analyses revealed that trait aggressiveness and trait irritability influenced aggressive behavior under both provoking and neutral conditions but that other personality variables (e.g., trait anger, Type A personality, dissipation-rumination) influenced aggressive behavior only under provoking conditions. The authors discuss possible relations between these patterns of aggressive behavior and the personality dimensions of Agreeableness and Neuroticism and consider implications for theories of aggression.",
"title": ""
},
{
"docid": "42c890832d861ad2854fd1f56b13eb45",
"text": "We apply deep learning to the problem of discovery and detection of characteristic patterns of physiology in clinical time series data. We propose two novel modifications to standard neural net training that address challenges and exploit properties that are peculiar, if not exclusive, to medical data. First, we examine a general framework for using prior knowledge to regularize parameters in the topmost layers. This framework can leverage priors of any form, ranging from formal ontologies (e.g., ICD9 codes) to data-derived similarity. Second, we describe a scalable procedure for training a collection of neural networks of different sizes but with partially shared architectures. Both of these innovations are well-suited to medical applications, where available data are not yet Internet scale and have many sparse outputs (e.g., rare diagnoses) but which have exploitable structure (e.g., temporal order and relationships between labels). However, both techniques are sufficiently general to be applied to other problems and domains. We demonstrate the empirical efficacy of both techniques on two real-world hospital data sets and show that the resulting neural nets learn interpretable and clinically relevant features.",
"title": ""
},
{
"docid": "98a65cca7217dfa720dd4ed2972c3bdd",
"text": "Intramuscular fat percentage (IMF%) has been shown to have a positive influence on the eating quality of red meat. Selection of Australian lambs for increased lean tissue and reduced carcass fatness using Australian Sheep Breeding Values has been shown to decrease IMF% of the Muscularis longissimus lumborum. The impact this selection has on the IMF% of other muscle depots is unknown. This study examined IMF% in five different muscles from 400 lambs (M. longissimus lumborum, Muscularis semimembranosus, Muscularis semitendinosus, Muscularis supraspinatus, Muscularis infraspinatus). The sires of these lambs had a broad range in carcass breeding values for post-weaning weight, eye muscle depth and fat depth over the 12th rib (c-site fat depth). Results showed IMF% to be highest in the M. supraspinatus (4.87 ± 0.1, P<0.01) and lowest in the M. semimembranosus (3.58 ± 0.1, P<0.01). Hot carcass weight was positively associated with IMF% of all muscles. Selection for decreasing c-site fat depth reduced IMF% in the M. longissimus lumborum, M. semimembranosus and M. semitendinosus. Higher breeding values for post-weaning weight and eye muscle depth increased and decreased IMF%, respectively, but only in the lambs born as multiples and raised as singles. For each per cent increase in lean meat yield percentage (LMY%), there was a reduction in IMF% of 0.16 in all five muscles examined. Given the drive within the lamb industry to improve LMY%, our results indicate the importance of continued monitoring of IMF% throughout the different carcass regions, given its importance for eating quality.",
"title": ""
},
{
"docid": "1e82e123cacca01a84a8ea2fef641d98",
"text": "We propose a new class of convex penalty functions, called variational Gram functions (VGFs), that can promote pairwise relations, such as orthogonality, among a set of vectors in a vector space. These functions can serve as regularizers in convex optimization problems arising from hierarchical classification, multitask learning, and estimating vectors with disjoint supports, among other applications. We study necessary and sufficient conditions under which a VGF is convex, and give a characterization of its subdifferential. We show how to compute its proximal operator, and discuss efficient optimization algorithms for regularized loss minimization problems where the loss admits a simple variational representation and the regularizer is a VGF. We also establish a general representer theorem for such learning problems. Lastly, numerical experiments on a hierarchical classification problem are presented to demonstrate the effectiveness of VGFs and the associated optimization algorithms.",
"title": ""
},
{
"docid": "1e12a7de843a49f429ac490939f8267c",
"text": "BACKGROUND\nThe preparation consisting of a head-fixed mouse on a spherical or cylindrical treadmill offers unique advantages in a variety of experimental contexts. Head fixation provides the mechanical stability necessary for optical and electrophysiological recordings and stimulation. Additionally, it can be combined with virtual environments such as T-mazes, enabling these types of recording during diverse behaviors.\n\n\nNEW METHOD\nIn this paper we present a low-cost, easy-to-build acquisition system, along with scalable computational methods to quantitatively measure behavior (locomotion and paws, whiskers, and tail motion patterns) in head-fixed mice locomoting on cylindrical or spherical treadmills.\n\n\nEXISTING METHODS\nSeveral custom supervised and unsupervised methods have been developed for measuring behavior in mice. However, to date there is no low-cost, turn-key, general-purpose, and scalable system for acquiring and quantifying behavior in mice.\n\n\nRESULTS\nWe benchmark our algorithms against ground truth data generated either by manual labeling or by simpler methods of feature extraction. We demonstrate that our algorithms achieve good performance, both in supervised and unsupervised settings.\n\n\nCONCLUSIONS\nWe present a low-cost suite of tools for behavioral quantification, which serve as valuable complements to recording and stimulation technologies being developed for the head-fixed mouse preparation.",
"title": ""
},
{
"docid": "54187ebfdd09b02a7a9c6864f6cca794",
"text": "Technological development, and in particular digitalisation, has major implications for labour markets. Assessing its impact will be crucial for developing policies that promote efficient labour markets for the benefit of workers, employers and societies as a whole. Rapid technological progress and innovation can threaten employment. Such a concern is not new but dates back at least to the 1930s, when John Maynard Keynes postulated his ‘technological unemployment theory’ – technological change causes loss of jobs (Keynes 1937). Technological innovations can affect employment in two main ways:",
"title": ""
},
{
"docid": "bc0def2cdcb570feaee55293cea0c97f",
"text": "Inductive Logic Programming (ILP) is a new discipline which investigates the inductive construction of rst-order clausal theories from examples and background knowledge. We survey the most important theories and methods of this new eld. Firstly, various problem speciications of ILP are formalised in semantic settings for ILP, yielding a \\model-theory\" for ILP. Secondly, a generic ILP algorithm is presented. Thirdly, the inference rules and corresponding operators used in ILP are presented, resulting in a \\proof-theory\" for ILP. Fourthly, since inductive inference does not produce statements which are assured to follow from what is given, inductive inferences require an alternative form of justiication. This can take the form of either probabilistic support or logical constraints on the hypothesis language. Information compression techniques used within ILP are presented within a unifying Bayesian approach to connrmation and corroboration of hypotheses. Also, diierent ways to constrain the hypothesis language, or specify the declarative bias are presented. Fifthly, some advanced topics in ILP are addressed. These include aspects of computational learning theory as applied to ILP, and the issue of predicate invention. Finally, we survey some applications and implementations of ILP. ILP applications fall under two diierent categories: rstly scientiic discovery and knowledge acquisition, and secondly programming assistants.",
"title": ""
},
{
"docid": "c1a9c71ef953554a32a72a8ad679db17",
"text": "The role of brands and branding in the new economy that is characterised by digitisation and globalisation are attracting considerable attention. Taking the organisational perspective the challenges for branding in online environments relate to: the message capacity of Web pages, the need to integrate branding and marketing communications across different channels, the trend towards organisational value propositions, brands as search keys, the opportunity to link and develop brand positions, globalisation, and the increased engagement of the public sector with branding. In the context of the brand experience, key themes are customer control, customisation and customer relationships, the help yourself nature of the medium, the increasing emphasis on experience, and the opportunity offered by m-commerce to revolutionise the brand experience. An online brand development strategy includes the following stages: setting the context for the brand, deciding on brand objectives and message; developing a brand specification; developing a brand design, creating the Web site and other communications using the brand, launching and promoting the brand, building the brand experience, and finally, reviewing, evolving and protecting the brand.",
"title": ""
},
{
"docid": "60ebcebd823033b1ecb9a2d23081f4e3",
"text": "The enhancements being developed by the Time-Sensitive Networking Task Group as part of IEEE 802.1 emerge as the future of real-time communication over Ethernet networks for automotive and industrial application domains. In particular IEEE 802.1Qbv is key to enabling timeliness guarantees via so-called time-aware shapers. In this paper, we address the computation of fully deterministic schedules for 802.1Qbv-compliant multi-hop switched networks. We identify and analyze key functional parameters affecting the deterministic behaviour of real-time communication under 802.1Qbv and, based on a generalized configuration of these parameters, derive the required constraints for computing offline schedules guaranteeing low and bounded jitter and deterministic end-to-end latency for critical communication flows. Furthermore, we discuss several optimization directions and concrete configurations exposing trade-offs against the required computation time. We also show the performance of our approach via synthetic network workloads on top of different network configurations.",
"title": ""
},
{
"docid": "479fe61e0b738cb0a0284da1bda7c36d",
"text": "In urban areas, congestion creates a substantial variation in travel speeds during peak morning and evening hours. This research presents a new solution approach, an iterative route construction and improvement algorithm (IRCI), for the time dependent vehicle routing problem (TDVRP) with hard or soft time windows. Improvements are obtained at a route level; hence the proposed approach does not rely on any type of local improvement procedure. Further, the solution algorithms can tackle constant speed or time-dependent speed problems without any alteration in their structure. A new formulation for the TDVRP with soft and hard time windows is presented. Leveraging on the well known Solomon instances, new test problems that capture the typical speed variations of congested urban settings are proposed. Results in terms of solution quality as well as computational time are presented and discussed. The computational complexity of the IRCI is analyzed and experimental results indicate that average computational time increases proportionally to the square of the number of customers.",
"title": ""
},
{
"docid": "67bc81066dbe06ac615df861435fdbd9",
"text": "When a three-dimensional ferromagnetic topological insulator thin film is magnetized out-of-plane, conduction ideally occurs through dissipationless, one-dimensional (1D) chiral states that are characterized by a quantized, zero-field Hall conductance. The recent realization of this phenomenon, the quantum anomalous Hall effect, provides a conceptually new platform for studies of 1D transport, distinct from the traditionally studied quantum Hall effects that arise from Landau level formation. An important question arises in this context: how do these 1D edge states evolve as the magnetization is changed from out-of-plane to in-plane? We examine this question by studying the field-tilt-driven crossover from predominantly edge-state transport to diffusive transport in Crx(Bi,Sb)(2-x)Te3 thin films. This crossover manifests itself in a giant, electrically tunable anisotropic magnetoresistance that we explain by employing a Landauer-Büttiker formalism. Our methodology provides a powerful means of quantifying dissipative effects in temperature and chemical potential regimes far from perfect quantization.",
"title": ""
},
{
"docid": "d10dc295173202332700918cab02ac2b",
"text": "Markov logic networks (MLNs) have proven to be useful tools for reasoning about uncertainty in complex knowledge bases. In this paper, we extend MLNs with numerical constraints and present an efficient implementation in terms of a cutting plane method. This extension is useful for reasoning over uncertain temporal data. To show the applicability of this extension, we enrich log-linear description logics (DLs) with concrete domains (datatypes). Thereby, allowing to reason over weighted DLs with datatypes. Moreover, we use the resulting formalism to reason about temporal assertions in DBpedia, thus illustrating its practical use.",
"title": ""
},
{
"docid": "9b17dd1fc2c7082fa8daecd850fab91c",
"text": "This paper presents all the stages of development of a solar tracker for a photovoltaic panel. The system was made with a microcontroller which was design as an embedded control. It has a data base of the angles of orientation horizontal axle, therefore it has no sensor inlet signal and it function as an open loop control system. Combined of above mention characteristics in one the tracker system is a new technique of the active type. It is also a rotational robot of 1 degree of freedom.",
"title": ""
}
] |
scidocsrr
|
bd0f2d8cf3c610f0676af4778a42f312
|
Event Digest: A Holistic View on Past Events
|
[
{
"docid": "6d5429ddf4050724432da73af60274d6",
"text": "We present an Integer Linear Program for exact inference under a maximum coverage model for automatic summarization. We compare our model, which operates at the subsentence or “concept”-level, to a sentencelevel model, previously solved with an ILP. Our model scales more efficiently to larger problems because it does not require a quadratic number of variables to address redundancy in pairs of selected sentences. We also show how to include sentence compression in the ILP formulation, which has the desirable property of performing compression and sentence selection simultaneously. The resulting system performs at least as well as the best systems participating in the recent Text Analysis Conference, as judged by a variety of automatic and manual content-based metrics.",
"title": ""
}
] |
[
{
"docid": "72e5b92632824d3633539727125763bc",
"text": "NB-IoT system focues on indoor coverage, low cost, long battery life, and enabling a large number of connected devices. The NB-IoT system in the inband mode should share the antenna with the LTE system and support mult-PRB to cover many terminals. Also, the number of used antennas should be minimized for price competitiveness. In this paper, the structure and implementation of the NB-IoT base station system will be describe.",
"title": ""
},
{
"docid": "1c104704a868e3e40583f1797b0e8439",
"text": "Mobile e-commerce or M-commerce describes online sales transaction that uses wireless or mobile electronic devices. These wireless devices interact with computer networks that have the ability to conduct online merchandise purchases. The rapid growth of mobile commerce is being driven by number of factors – increasing mobile user base, rapid adoption of online commerce and technological advances. These systems provide the potential for organizations and users to perform various commerce-related tasks without regard to time and location. Owing to wireless nature of these devices, there are many issues that affect the functioning of m-commerce. This paper identifies and discusses these issues which include technological issues and application issues pertaining to M-commerce. Keywords— mobile, e-commerce, M-commerce",
"title": ""
},
{
"docid": "246f3ccf2428951fb18dd9f9d06ab184",
"text": "Robust optimization has emerged as a tractable methodology for coping with parameter uncertainty in an optimization problem. In order to avoid conservative solutions, i.e. overly protective and expensive solutions, Ben-Tal and Nemirovski introduced the notion of affine adaptability. However, their approach significantly increases the program size and threatens its tractability, especially in the context of mixed-integer programming. In this paper, we focus on robust mixed-integer linear programs. We propose a tractable numerical strategy for solving them and demonstrate the computational efficiency of our method when applied to a real energy management problem. In addition, we propose a practical data-driven methodology for designing the uncertainty set of robust programs.",
"title": ""
},
{
"docid": "41b83a85c1c633785766e3f464cbd7a6",
"text": "Distributed systems are easier to build than ever with the emergence of new, data-centric abstractions for storing and computing over massive datasets. However, similar abstractions do not exist for storing and accessing meta-data. To fill this gap, Tango provides developers with the abstraction of a replicated, in-memory data structure (such as a map or a tree) backed by a shared log. Tango objects are easy to build and use, replicating state via simple append and read operations on the shared log instead of complex distributed protocols; in the process, they obtain properties such as linearizability, persistence and high availability from the shared log. Tango also leverages the shared log to enable fast transactions across different objects, allowing applications to partition state across machines and scale to the limits of the underlying log without sacrificing consistency.",
"title": ""
},
{
"docid": "15dba7f87943a6d106f819d86a1a56c3",
"text": "The Gesture Recognition Toolkit is a cross-platform open-source C++ library designed to make real-time machine learning and gesture recognition more accessible for non-specialists. Emphasis is placed on ease of use, with a consistent, minimalist design that promotes accessibility while supporting flexibility and customization for advanced users. The toolkit features a broad range of classification and regression algorithms and has extensive support for building real-time systems. This includes algorithms for signal processing, feature extraction and automatic gesture spotting.",
"title": ""
},
{
"docid": "4cc8b430fc70931a21015c800936001d",
"text": "Nowadays, there is a significant increase in the number of Bioinformatics tools and databases. Researchers from various interdisciplinary fields need to use these tools. Usability is an important quality of software in general, and bioinformatics tools in particular. Improving the usability of bioinformatics tools allows users to use the tool to its fullest potential. In this paper, we evaluate the usability of two online bioinformatics tools Ori-Finder 1 and Ori-Finder 2 in terms of efficiency, effectiveness, and satisfaction. The evaluation focuses on investigating how easily and successfully can users use Ori-Finder1 and Ori-Finder 2 to find the origin of replication in Bacterial and Archaeal genomes. To the best of our knowledge, the usability of these two tools has not been studied before. Twelve participants were recruited from four user groups. The average tasks completion times were compared. Many usability issues were identified by users of bioinformatics tools. Based on our results, we list recommendations for better design of bioinformatics tools.",
"title": ""
},
{
"docid": "aa64bd9576044ec5e654c9f29c4f7d84",
"text": "BACKGROUND\nSocial media are dynamic and interactive computer-mediated communication tools that have high penetration rates in the general population in high-income and middle-income countries. However, in medicine and health care, a large number of stakeholders (eg, clinicians, administrators, professional colleges, academic institutions, ministries of health, among others) are unaware of social media's relevance, potential applications in their day-to-day activities, as well as the inherent risks and how these may be attenuated and mitigated.\n\n\nOBJECTIVE\nWe conducted a narrative review with the aim to present case studies that illustrate how, where, and why social media are being used in the medical and health care sectors.\n\n\nMETHODS\nUsing a critical-interpretivist framework, we used qualitative methods to synthesize the impact and illustrate, explain, and provide contextual knowledge of the applications and potential implementations of social media in medicine and health care. Both traditional (eg, peer-reviewed) and nontraditional (eg, policies, case studies, and social media content) sources were used, in addition to an environmental scan (using Google and Bing Web searches) of resources.\n\n\nRESULTS\nWe reviewed, evaluated, and synthesized 76 articles, 44 websites, and 11 policies/reports. Results and case studies are presented according to 10 different categories of social media: (1) blogs (eg, WordPress), (2) microblogs (eg, Twitter), (3) social networking sites (eg, Facebook), (4) professional networking sites (eg, LinkedIn, Sermo), (5) thematic networking sites (eg, 23andMe), (6) wikis (eg, Wikipedia), (7) mashups (eg, HealthMap), (8) collaborative filtering sites (eg, Digg), (9) media sharing sites (eg, YouTube, Slideshare), and others (eg, SecondLife). Four recommendations are provided and explained for stakeholders wishing to engage with social media while attenuating risk: (1) maintain professionalism at all times, (2) be authentic, have fun, and do not be afraid, (3) ask for help, and (4) focus, grab attention, and engage.\n\n\nCONCLUSIONS\nThe role of social media in the medical and health care sectors is far reaching, and many questions in terms of governance, ethics, professionalism, privacy, confidentiality, and information quality remain unanswered. By following the guidelines presented, professionals have a starting point to engage with social media in a safe and ethical manner. Future research will be required to understand the synergies between social media and evidence-based practice, as well as develop institutional policies that benefit patients, clinicians, public health practitioners, and industry alike.",
"title": ""
},
{
"docid": "25f67dfd1e4de7aacd2f4e29cd6456c4",
"text": "Information Technology has permeated many facets of work life in industrialized nations. With the expansion of Internet access we are now witnessing an expansion of the use of information technology in the form of electronic commerce. This current study tests the applicability of one prominent information technology uptake model, the Technology Acceptance Model (Int. J. Man Mach. Stud. 38 (1993) 475), within an electronic commerce setting. Specifically, the relationship between the perceived ease of use, usefulness and three electronically recorded indicators of use were assessed within the context of an electronic supermarket. A total of 247 participants completed the attitudinal measures. Electronically recorded indicators of use in the form of deliveries, purchase value and number of log-ons to the system were also recorded for the month the participants completed the questionnaire and 6 further months. Results indicated that the Technology Acceptance Model could be successfully applied to an electronic supermarket setting, providing empirical support for the ability of the Technology Acceptance Model to predict actual behaviour. The Technology Acceptance Model explained up to 15% of the variance in the behavioural indicators through perceived ease of use and usefulness of the system. However, the perceived ease of use of the system did not uniquely contribute to the prediction of behaviour when usefulness was considered, indicating a mediation effect. Future research should now focus on product and service attributes to more fully explain the use of electronic commerce services. r 2003 Elsevier Science Ltd. All rights reserved.",
"title": ""
},
{
"docid": "7474f6843240605234cdd2d6d15c2ee6",
"text": "Nowadays the trend of power supply market is more inclined to high switching frequency, high efficiency and high power density. To meet this trend, resonant power supply holds more attraction, because it can be operated in high switching frequency with high efficiency. There are many resonant power supplies such as Series-Resonant Converter (SRC), Parallel-Resonant Converter (PRC) and Series-Parallel Resonant Converter (SPRC). Among them, LLC Resonant Converter has a lot of advantages over the conventional SRC and PRC considering relatively narrow switching frequency variation over wide input and load variation and Zero-Voltage-Switching for entire load range. Therefore, the LLC Resonant Converter has been widely used and discussed. However, the conventional analysis of LLC Resonant Converter with Fundamental Harmonic Approximation (FHA) can not explain the practical operation of LLC Resonant Converter. To overcome this limitation, in this paper, analysis and design of the LLC Resonant Converter including parasitic components which are affecting converter operation are proposed using a traditional analysis based on FHA. The effect of each parasitic component is analyzed with simulation results and the design guideline standing on this analysis will be described. Moreover, the experimental results of prototype designing on the basis of the analysis are shown to demonstrate the proposed analysis and design guideline.",
"title": ""
},
{
"docid": "e74e3e9fe94e24a18a025d38fb2f0e57",
"text": "OBJECTIVE\nThis paper aims to locate the ethnographic tradition in a socio-historical context.\n\n\nMETHOD\nIn this paper we chart the history of the ethnographic tradition, explaining its roots and highlighting its value in enabling the ethnographic researcher to explore and make sense of the otherwise invisible aspects of cultural norms and practices. We discuss a number of studies that have provided detailed and context-sensitive accounts of the everyday life of medical schools, medical practitioners and medical students. We demonstrate how the methods of ethnographic fieldwork offer \"other ways of knowing\" that can have a significant impact on medical education.\n\n\nCONCLUSIONS\nThe ethnographic research tradition in sociological and anthropological studies of educational settings is a significant one. Ethnographic research in higher education institutions is less common, but is itself a growing research strategy.",
"title": ""
},
{
"docid": "e9b89400c6bed90ac8c9465e047538e7",
"text": "Myriad of graph-based algorithms in machine learning and data mining require parsing relational data iteratively. These algorithms are implemented in a large-scale distributed environment to scale to massive data sets. To accelerate these large-scale graph-based iterative computations, we propose delta-based accumulative iterative computation (DAIC). Different from traditional iterative computations, which iteratively update the result based on the result from the previous iteration, DAIC updates the result by accumulating the “changes” between iterations. By DAIC, we can process only the “changes” to avoid the negligible updates. Furthermore, we can perform DAIC asynchronously to bypass the high-cost synchronous barriers in heterogeneous distributed environments. Based on the DAIC model, we design and implement an asynchronous graph processing framework, Maiter. We evaluate Maiter on local cluster as well as on Amazon EC2 Cloud. The results show that Maiter achieves as much as 60 × speedup over Hadoop and outperforms other state-of-the-art frameworks.",
"title": ""
},
{
"docid": "eb7b55c89ddbada0e186b3ff49769b5d",
"text": "By comparing the existing types of transformer bushings, this paper reviews distinctive features of RIF™ (Resin Impregnated Fiberglass) paperless condenser bushings; and, in more detail, it introduces principles, construction, characteristics and applications of this type of bushing when used with a new, safer and reliable built-in insulation monitoring function. As the construction of RIF™ insulation would delay the propagation of a core insulation breakdown after the onset of an initial insulation defect, this type of real time monitoring of core insulation condition provides a novel tool to manage bushing defects without any sense of urgency. It offers, for the first time, a very early field detection tool for transformer bushing insulation faults and by way of consequence, a much improved protection of power transformers over their operating life.",
"title": ""
},
{
"docid": "cae5b108b2dedc8852726dc02c77ed2f",
"text": "The purpose of this paper is to develop an enhanced radio frequency identification (RFID)-enabled graphical deduction model (rfid-GDM) for tracking the time-sensitive state, position, and other attributes of RFID-tagged objects in process flow. Concepts and definitions related to processes and RFID applications are first clarified, and enhanced state blocks are proposed to depict four kinds of RFID application scenarios. The implementation framework of rfid-GDM and its five steps are further addressed. Both mathematical formalization and graphical description of each step are involved. Finally, a case is studied to verify the feasibility of rfid-GDM. It is expected that rfid-GDM will provide instructions for modeling and tracking RFID-enabled process flows in diverse fields.",
"title": ""
},
{
"docid": "a622845933ba90cd76a44e46b2dabe99",
"text": "The present study provides the first evidence that personality can be reliably predicted from standard mobile phone logs. Using a set of novel psychology-informed indicators that can be computed from data available to all carriers, we were able to predict users’ personality with a mean accuracy across traits of 42% better than random, reaching up to 61% accuracy on a three-class problem. Given the fast growing number of mobile phone subscription and availability of phone logs to researchers, our new personality indicators open the door to exciting avenues for future research in social sciences. They potentially enable costeffective, questionnaire-free investigation of personality-related questions at a scale never seen before.",
"title": ""
},
{
"docid": "3bf99a4fdd8db8d56edb2a5840f296a7",
"text": "BACKGROUND AND OBJECTIVES\nCalcium deposits of the shoulder may persist for many years with resulting pain and impairment of mechanical function. The effects of different treatments vary significantly and do not show consistent and reliable long-term results. Cimetidine decreases calcium levels and improves symptoms in patients with hyperparathyroidism. We evaluated cimetidine as a treatment for chronic calcifying tendinitis of the shoulder in patients who did not respond to conservative treatment.\n\n\nMETHODS\nCimetidine, 200 mg twice daily, was given orally for 3 months in 16 patients who did not respond to more than 6 months of conservative treatment. We recorded subjective, functional, and radiologic findings at 1 day before, at 2 weeks after, and at 2 and 3 months after the start of cimetidine. We also performed a follow-up study (4 to 24 months).\n\n\nRESULTS\nAfter treatment, peak pain score (visual analogue scale: 0 - 100) decreased significantly from 63 +/- 13 to 14 +/- 19 (mean +/- SD, P <.01) and 10 patients (63%) became pain free. Physical impairment was also significantly improved. Calcium deposits disappeared in 9 patients (56%), decreased in 4 patients (25%), and did not change in 3 patients (19%). Follow-up data showed that improvement of symptoms was sustained. No recurrence or enlargement of calcium deposits was observed. Plasma concentrations of calcium and parathyroid hormone did not change significantly.\n\n\nCONCLUSIONS\nOur results indicate that cimetidine is effective in treating chronic calcifying tendinitis of the shoulder; however, the mechanism by which cimetidine improves the symptoms is unknown.",
"title": ""
},
{
"docid": "ec7c9fa71dcf32a3258ee8712ccb95c1",
"text": "Fuzzy graph is now a very important research area due to its wide application. Fuzzy multigraph and fuzzy planar graphs are two subclasses of fuzzy graph theory. In this paper, we define both of these graphs and studied a lot of properties. A very close association of fuzzy planar graph is fuzzy dual graph. This is also defined and studied several properties. The relation between fuzzy planar graph and fuzzy dual graph is also established.",
"title": ""
},
{
"docid": "ce29e17a4fb9c67676fb534e58e2e20d",
"text": "OBJECTIVE\nTo examine the association between frequency of assisting with home meal preparation and fruit and vegetable preference and self-efficacy for making healthier food choices among grade 5 children in Alberta, Canada.\n\n\nDESIGN\nA cross-sectional survey design was used. Children were asked how often they helped prepare food at home and rated their preference for twelve fruits and vegetables on a 3-point Likert-type scale. Self-efficacy was measured with six items on a 4-point Likert-type scale asking children their level of confidence in selecting and eating healthy foods at home and at school.\n\n\nSETTING\nSchools (n =151) located in Alberta, Canada.\n\n\nSUBJECTS\nGrade 5 students (n = 3398).\n\n\nRESULTS\nA large majority (83-93 %) of the study children reported helping in home meal preparation at least once monthly. Higher frequency of helping prepare and cook food at home was associated with higher fruit and vegetable preference and with higher self-efficacy for selecting and eating healthy foods.\n\n\nCONCLUSIONS\nEncouraging children to be more involved in home meal preparation could be an effective health promotion strategy. These findings suggest that the incorporation of activities teaching children how to prepare simple and healthy meals in health promotion programmes could potentially lead to improvement in dietary habits.",
"title": ""
},
{
"docid": "e86ce86403f13b441f29f4408eab4c1b",
"text": "This paper collects some ideas targeted at advancing our understanding of the feature spaces associated with support vector (SV) kernel functions. We first discuss the geometry of feature space. In particular, we review what is known about the shape of the image of input space under the feature space map, and how this influences the capacity of SV methods. Following this, we describe how the metric governing the intrinsic geometry of the mapped surface can be computed in terms of the kernel, using the example of the class of inhomogeneous polynomial kernels, which are often used in SV pattern recognition. We then discuss the connection between feature space and input space by dealing with the question of how one can, given some vector in feature space, find a preimage (exact or approximate) in input space. We describe algorithms to tackle this issue, and show their utility in two applications of kernel methods. First, we use it to reduce the computational complexity of SV decision functions; second, we combine it with the Kernel PCA algorithm, thereby constructing a nonlinear statistical denoising technique which is shown to perform well on real-world data.",
"title": ""
},
{
"docid": "1af40b48f5ecccdbf375a4783656f637",
"text": "A novel pulsewidth modulation buck–boost ac chopper using regenerative dc snubbers is proposed and analyzed. Compared to the previous buck–boost ac choppers, ac snubbers causing power loss are eliminated using regenerative dc snubbers. Experimental results show that the proposed scheme gives good steady-state performance of the ac chopper, which coincides with the theoretical results.",
"title": ""
},
{
"docid": "3b64e99ea608819fc4bf06a6850a5aff",
"text": "Cloud computing is one of the most useful technology that is been widely used all over the world. It generally provides on demand IT services and products. Virtualization plays a major role in cloud computing as it provides a virtual storage and computing services to the cloud clients which is only possible through virtualization. Cloud computing is a new business computing paradigm that is based on the concepts of virtualization, multi-tenancy, and shared infrastructure. This paper discusses about cloud computing, how virtualization is done in cloud computing, virtualization basic architecture, its advantages and effects [1].",
"title": ""
}
] |
scidocsrr
|
ea8dae621378be5d8a82cc34b66bc753
|
Qos architectural patterns for self-architecting software systems
|
[
{
"docid": "816d0a0315ce32b1c4000c585d0f9a63",
"text": "Self-management is put forward as one of the means by which we could provide systems that are scalable, support dynamic composition and rigorous analysis, and are flexible and robust in the presence of change. In this paper, we focus on architectural approaches to self-management, not because the language-level or network-level approaches are uninteresting or less promising, but because we believe that the architectural level seems to provide the required level of abstraction and generality to deal with the challenges posed. A self-managed software architecture is one in which components automatically configure their interaction in a way that is compatible with an overall architectural specification and achieves the goals of the system. The objective is to minimise the degree of explicit management necessary for construction and subsequent evolution whilst preserving the architectural properties implied by its specification. This paper discusses some of the current promising work and presents an outline three-layer reference model as a context in which to articulate some of the main outstanding research challenges.",
"title": ""
},
{
"docid": "da6b8e2a985c20a4659f2436f7701c0e",
"text": "The goal of this roadmap paper is to summarize the state-ofthe-art and to identify critical challenges for the systematic software engineering of self-adaptive systems. The paper is partitioned into four parts, one for each of the identified essential views of self-adaptation: modelling dimensions, requirements, engineering, and assurances. For each view, we present the state-of-the-art and the challenges that our community must address. This roadmap paper is a result of the Dagstuhl Seminar 08031 on “Software Engineering for Self-Adaptive Systems, ” which took place",
"title": ""
}
] |
[
{
"docid": "4dda701b0bf796f044abf136af7b0a9c",
"text": "Legacy substation automation protocols and architectures typically provided basic functionality for power system automation and were designed to accommodate the technical limitations of the networking technology available for implementation. There has recently been a vast improvement in networking technology that has changed dramatically what is now feasible for power system automation in the substation. Technologies such as switched Ethernet, TCP/IP, high-speed wide area networks, and high-performance low-cost computers are providing capabilities that could barely be imagined when most legacy substation automation protocols were designed. In order to take advantage of modern technology to deliver additional new benefits to users of substation automation, the International Electrotechnical Commission (IEC) has developed and released a new global standard for substation automation: IEC 61850. The paper provides a basic technical overview of IEC 61850 and discusses the benefits of each major aspect of the standard. The concept of a virtual model comprising both physical and logical device models that includes a set of standardized communications services are described along with explanations of how these standardized models, object naming conventions, and communication services bring significant benefits to the substation automation user. New services to support self-describing devices and object-orient peer-to-peer data exchange are explained with an emphasis on how these services can be applied to reduce costs for substation automation. The substation configuration language (SCL) of IEC 61850 is presented with information on how the standardization of substation configuration will impact the future of substation automation. The paper concludes with a brief introduction to the UCA International Users Group as a forum where users and suppliers cooperate in improving substation automation with testing, education, and demonstrations of IEC 61850 and other IEC standards technology",
"title": ""
},
{
"docid": "156639f4656088016e2b867d2d7b71af",
"text": "In this article we use Adomian decomposition method, which is a well-known method for solving functional equations now-a-days, to solve systems of differential equations of the first order and an ordinary differential equation of any order by converting it into a system of differential of the order one. Theoretical considerations are being discussed, and convergence of the method for theses systems is addressed. Some examples are presented to show the ability of the method for linear and non-linear systems of differential equations. 2002 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "452285eb334f8b4ecc17592e53d7080e",
"text": "Fathers are taking on more childcare and household responsibilities than they used to and many non-profit and government organizations have pushed for changes in policies to support fathers. Despite this effort, little research has explored how fathers go online related to their roles as fathers. Drawing on an interview study with 37 fathers, we find that they use social media to document and archive fatherhood, learn how to be a father, and access social support. They also go online to support diverse family needs, such as single fathers' use of Reddit instead of Facebook, fathers raised by single mothers' search for role models online, and stay-at-home fathers' use of father blogs. However, fathers are constrained by privacy concerns and perceptions of judgment relating to sharing content online about their children. Drawing on theories of fatherhood, we present theoretical and design ideas for designing online spaces to better support fathers and fatherhood. We conclude with a call for a research agenda to support fathers online.",
"title": ""
},
{
"docid": "305a6b7cfcc560e1356fa7a44fee8de2",
"text": "Power MOSFET designs have been moving to higher performance particularly in the medium voltage area. (60V to 300V) New designs require lower specific on-resistance (RSP) thus forcing designers to push the envelope of increasing the electric field stress on the shielding oxide, reducing the cell pitch, and increasing the epitaxial (epi) drift doping to reduce on resistance. In doing so, time dependant avalanche instabilities have become a concern for oxide charge balanced power MOSFETs. Avalanche instabilities can initiate in the active cell and/or the termination structures. These instabilities cause the avalanche breakdown to increase and/or decrease with increasing time in avalanche. They become a reliability risk when the drain to source breakdown voltage (BVdss) degrades below the operating voltage of the application circuit. This paper will explain a mechanism for these avalanche instabilities and propose an optimum design for the charge balance region. TCAD simulation was employed to give insight to the mechanism. Finally, measured data will be presented to substantiate the theory.",
"title": ""
},
{
"docid": "450f13659ece54bee1b4fe61cc335eb2",
"text": "Though considerable effort has recently been devoted to hardware realization of one-dimensional chaotic systems, the influence of implementation inaccuracies is often underestimated and limited to non-idealities in the non-linear map. Here we investigate the consequences of sample-and-hold errors. Two degrees of freedom in the design space are considered: the choice of the map and the sample-and-hold architecture. Current-mode systems based on Bernoulli Shift, on Tent Map and on Tailed Tent Map are taken into account and coupled with an order-one model of sample-and-hold to ascertain error causes and suggest implementation improvements. key words: chaotic systems, analog circuits, sample-and-hold errors",
"title": ""
},
{
"docid": "5eea47089f84c915005c40547712c617",
"text": "Current views on the neurobiological underpinnings of language are discussed that deviate in a number of ways from the classical Wernicke-Lichtheim-Geschwind model. More areas than Broca's and Wernicke's region are involved in language. Moreover, a division along the axis of language production and language comprehension does not seem to be warranted. Instead, for central aspects of language processing neural infrastructure is shared between production and comprehension. Three different accounts of the role of Broca's area in language are discussed. Arguments are presented in favor of a dynamic network view, in which the functionality of a region is co-determined by the network of regions in which it is embedded at particular moments in time. Finally, core regions of language processing need to interact with other networks (e.g. the attentional networks and the ToM network) to establish full functionality of language and communication.",
"title": ""
},
{
"docid": "4a27c9c13896eb50806371e179ccbf33",
"text": "A geographical information system (CIS) is proposed as a suitable tool for mapping the spatial distribution of forest fire danger. Using a region severely affected by forest fires in Central Spain as the study area, topography, meteorological data, fuel models and human-caused risk were mapped and incorporated within a GIS. Three danger maps were generated: probability of ignition, fuel hazard and human risk, and all of them were overlaid in an integrated fire danger map, based upon the criteria established by the Spanish Forest Service. CIS make it possible to improve our knowledge of the geographical distribution of fire danger, which is crucial for suppression planning (particularly when hotshot crews are involved) and for elaborating regional fire defence plans.",
"title": ""
},
{
"docid": "de887adb8d3383ffa1ed4aa033e0bd4a",
"text": "An offline recognition system for Arabic handwritten words is presented. The recognition system is based on a semi-continuous 1-dimensional HMM. From each binary word image normalization parameters were estimated. First height, length, and baseline skew are normalized, then features are collected using a sliding window approach. This paper presents these methods in more detail. Some parameters were modified and the consequent effect on the recognition results are discussed. Significant tests were performed using the new IFN/ENIT database of handwritten Arabic words. The comprehensive database consists of 26459 Arabic words (Tunisian town/village names) handwritten by 411 different writers and is free for non-commercial research. In the performed tests we achieved maximal recognition rates of about 89% on a word level.",
"title": ""
},
{
"docid": "5e21663d7b39af780157e67a828a2017",
"text": "The development of high performance dense linear algebra (DLA) critically depends on highly optimized BLAS, and especially on the matrix multiplication routine (GEMM). This is especially true for Graphics Processing Units (GPUs), as evidenced by recently published results on DLA for GPUs that rely on highly optimized GEMM [13, 11]. However, the current best GEMM performance, e.g. of up to 375 GFlop/s in single precision and of up to 75 GFlop/s in double precision arithmetic on NVIDIA’s GTX 280, is difficult to achieve. The development involves extensive GPU knowledge and even backward engineering to understand some undocumented insides about the architecture that have been of key importance in the development [12]. In this paper, we describe some GPU GEMM auto-tuning optimization techniques that allow us to keep up with changing hardware by rapidly reusing, rather than reinventing, the existing ideas. Auto-tuning, as we show in this paper, is a very practical solution where in addition to getting an easy portability, we can often get substantial speedups even on current GPUs (e.g. up to 27% in certain cases for both single and double precision GEMMs on the GTX 280).",
"title": ""
},
{
"docid": "c578e84e6d3c2c35d42013040eb193ab",
"text": "Roads are important objects for many applications of topogr aphic data. They are often acquired manually and as this enta ils significant effort, automation is highly desirable. Deficits in the auto matic extraction hindering a wide-scale practical use have led to the idea of setting-up a EuroSDR test comparing different approache s for automatic road extraction. The goal is to show the poten tial of the state-of-the-art approaches as well as to identify promisi ng directions for research and development. After describi ng the data and the evaluation criteria used, we present the approaches of a num ber of groups which have submitted results and give a detaile d d scussion of the outcome of the evaluation of the submitted results. We finally present a summary and conclusions. 1. MOTIVATION AND BACKGROUND The need for accurate, up-to-date, and detailed informatio n for roads is rapidly increasing. They are used in a variety of app lications ranging from the provision of basic topographic inf rastructure, over transportation planning, traffic and fleet ma nagement, car navigation systems, location based services (LBS ), and tourism, to web-based applications. While road extraction has been performed by digitizing maps, the update and refinement of the road geometry is often based on aerial imagery or high resolution satellite imagery such as Ikonos or Quickbird. A dditionally, terrestrial methods, particularly mobile mappi ng are of significant importance for determining attributes for navi gat onal purposes. Because road extraction from imagery, on which we focus for t he remainder of this paper, entails large efforts in terms of ti me and money, automation of the extraction is of high potential int erest. Full automation of the extraction of topographic objects is currently practically impossible for almost all applications and thus a combination with human interaction is necessary. An impor tant factor hindering the practical use of automated proced ures is the lack of reliable measures indicating the quality and acc uracy of the results, making manual editing lengthy and cumbersom e. Manufacturers of commercial systems have developed very fe w tools for semi-automated extraction and their cooperation with academia has been minimal. Thus, users and producers of such data, including national mapping agencies (NMAs) and large p ivate photogrammetric firms, have been left with many wishes t o be fulfilled. NMAs increasingly plan to update their data in shorter cycle s. Their customers have increasing demands regarding the leve l of accuracy and object modeling detailedness, and often reque st additional attributes for the objects, e.g., the number of lan es for roads. The insufficient research output and the increasing u ser needs, necessitate appropriate actions. Practically orie nted research, e.g., the ATOMI project at the ETH Zurich (Zhang, 2004), has shown that an automation of road extraction and up date is feasible to an extent that is practically very releva nt. Companies that have developed semi-automated tools for buildi ng extraction and other firms too, could very well offer similar to ols for roads. These considerations led to the idea of setting-up a road ext raction test under the umbrella of EuroSDR (European Spatial Da ta Research – www.eurosdr.net). An important inspiration for it was the highly successful 3D reconstruction test of (Scharstei n and Szeliski, 2002) which has become a standard in the field. 
The e mphasis of our test is put on the thorough evaluation of the cur rent status of research (including models, strategies, methods an data used). Through testing and comparing existing semior full y automated methods based on various datasets and high quality r eference data extracted manually by an experienced operator f om the image data used for the test, weak points as well as promis ing directions should be identified and, strategies and methods that lead to a fast implementation of operational procedures for road extraction, update, and refinement should be proposed. Howe ver, since most of the participating groups focus on road extract ion rather than on refinement or update, the scope of this test has been limited purely on road extraction for the time being. 2. DATA AND TEST SET-UP Initially, eight test images were prepared from different a eri l and satellite sensors. All images have a size of at least 4,000 4,000 pixels. Unfortunately, this was found to be insurmountable by nearly all approaches and, therefore, the limiting factor o f the test. Reasons for an inability to process the larger scenes w ere apparently twofold: First, because of missing functionali ty for processing the whole image in patches which are then combine d into one solution, intermediate results just exceeded the a vailable memory. Second, even if this had not been the case, the time it takes to process the images together with the need to adapt th e parameters to all variations in the larger scenes, meant the se images required too much effort for most people. Hence we decid ed eventually to cut out three smaller parts with 1,600 1,600 pixels of Ikonos images where we found the largest interest. In the following, only those images are listed, for which at l east three extraction results were submitted: 3 scanned aerial images from the Federal Office of Topography, Bern, Switzerland (image scale 1 : 16 000, focal length 0.3 m, RGB, 0.5 m ground resolution, 4 000 4 000 pixels – see Fig. 1) ) – Aerial1: suburban area in hilly terrain – Aerial2: hilly rural scene with medium complexity – Aerial3: hilly rural scene with low complexity 3 IKONOS images (Geo) from Kosovo, provided by Bundeswehr Geoinformation Office (AGeoBw), Euskirchen, Germany, given as pan-sharpened images in red, green, blue, and infrared (1 600 1 600 pixels – see Fig. 1 and 2) – Ikonos1-Sub1: urban/suburban area in hilly terrain – Ikonos3-Sub1 and -Sub2: rural hilly scenes with medium complexity For evaluation we use criteria put forward by (Wiedemann et a l., 1998). The basic assumption is that reference data is availa ble n the form of the center lines of the roads. Additionally, it is a sumed that only roads within a buffer of a certain width, usua lly the average width of the roads, around the road, here 5 pixels on both sides, i.e., 10 m for the Ikonos data, are correct. The ex tracted roads which are inside the buffer of the given refere nce roads and vice versa are determined via matching of the respe ctive vector data. The most important criteria defined by (Wie demann et al., 1998) based on these matching results to which we have restricted the analysis are: Completeness: This is the percentage of the reference data which is explained by the extracted data, i.e., the part of the refe rence network which lies within the buffer around the extracted da ta. The optimum value for completeness is 1. 
Correctness: It represents the percentage of correctly extracted road data, i.e., the part of the extracted data which lie with in t e buffer around the reference network. The optimum value for c orrectness is 1. RMS (root mean square): The RMS error expresses the geometrical accuracy of the extracted road data around the reference network. In the given evaluation framework its value depends on the buffer widthw. If an equal distribution of the extracted road data within the buffer around the reference network is assum ed, it can be shown that RMS = w=p3. The optimum value is RMS = 0. As RMS mainly depends on the resolution of the image, it is given in pixels in this paper. The reference data has an estimated precision of half a pixel . It comprises major and secondary roads, but no paths or short dr iveways. The reference data has not been made available to the pa rticipants. The participants usually asked only once or twic e for an evaluation, i.e., no optimization in terms of the reference data was pursued. Opposed to (Scharstein and Szeliski, 2002) we allo wed people to optimize their parameters for each and every image , as constant parameters were seen as too challenging. 3. ROAD EXTRACTION APPROACHES We will shortly introduce the approaches of the participati ng groups (alphabetical ordering according to corresponding author): Uwe Bacher and Helmut Mayer, Institute for Photogrammetry and Cartography, Bundeswehr University Munich, German y: The approach is only suitable for the Ikonos images and is focusing on rural areas where roads are mostly homogeneous and are not disturbed by shadows or occlusions. It is based on ear lier work from TU München of (Wiedemann and Hinz, 1999) and partially (Baumgartner et al., 1999). The approach of (Wied emann and Hinz, 1999) starts with line extraction in all spect ral bands using the sub-pixel precise Steger line extractor (St eger, 1998) based on differential geometry and scale-space inclu ding a thorough analysis and linking of the topology at intersect ions. The lines are smoothed and split at high-curvature points. T he resulting line segments are evaluated according to their wi dth, length, curvature, etc. Lines from different channels or ex t acted at different scales, i.e., line widths, are then fused on a be st first basis. From the resulting lines a graph is constructed, supp lemented by hypotheses bridging gaps. After defining seed line s in the form of the best evaluated lines, optimal paths are compu ted in the graph and from it gaps to be closed are derived. Bacher has extended this by several means (Bacher and Mayer, 2005). The central idea is to take into account the spectral informa tion by means of a (fuzzy) classification approach based on fully a tomatically created training areas. For the latter parallel e dges are extracted in the spirit of (Baumgartner et al., 1999) in a buf fer around the lines and checked if the area in-between them is ho mogeneous. The information from the classification approac h is used to evaluate the lines. Additionally, it is the image inf ormation when ",
"title": ""
},
{
"docid": "804b540f59fbc1ab84c8d4f76ebfb865",
"text": "In order to resolve the trust problem due to the opening, the dynamics, the anonymity and uncertainty of p2p system, this paper proposes a new p2p trust model by analyzing and improving the subjective logic, a new trust quantification formula with the negative events effect and the time effect is given, and a definition of risk is also presented, By doing this, we can evaluate the trust relationships between peers more precisely and prevent the hidden security dangers of cooperative cheat and slander more effectively, thus can resolve security problems exist in p2p environment more effectively.",
"title": ""
},
{
"docid": "4219836dc38e96a142e3b73cdf87e234",
"text": "BACKGROUND\nNIATx200, a quality improvement collaborative, involved 201 substance abuse clinics. Each clinic was randomized to one of four implementation strategies: (a) interest circle calls, (b) learning sessions, (c) coach only or (d) a combination of all three. Each strategy was led by NIATx200 coaches who provided direct coaching or facilitated the interest circle and learning session interventions.\n\n\nMETHODS\nEligibility was limited to NIATx200 coaches (N = 18), and the executive sponsor/change leader of participating clinics (N = 389). Participants were invited to complete a modified Grasha Riechmann Student Learning Style Survey and Teaching Style Inventory. Principal components analysis determined participants' preferred learning and teaching styles.\n\n\nRESULTS\nResponses were received from 17 (94.4 %) of the coaches. Seventy-two individuals were excluded from the initial sample of change leaders and executive sponsors (N = 389). Responses were received from 80 persons (25.2 %) of the contactable individuals. Six learning profiles for the executive sponsors and change leaders were identified: Collaborative/Competitive (N = 28, 36.4 %); Collaborative/Participatory (N = 19, 24.7 %); Collaborative only (N = 17, 22.1 %); Collaborative/Dependent (N = 6, 7.8 %); Independent (N = 3, 5.2 %); and Avoidant/Dependent (N = 3, 3.9 %). NIATx200 coaches relied primarily on one of four coaching profiles: Facilitator (N = 7, 41.2 %), Facilitator/Delegator (N = 6, 35.3 %), Facilitator/Personal Model (N = 3, 17.6 %) and Delegator (N = 1, 5.9 %). Coaches also supported their primary coaching profiles with one of eight different secondary coaching profiles.\n\n\nCONCLUSIONS\nThe study is one of the first to assess teaching and learning styles within a QIC. Results indicate that individual learners (change leaders and executive sponsors) and coaches utilize multiple approaches in the teaching and practice-based learning of quality improvement (QI) processes. Identification teaching profiles could be used to tailor the collaborative structure and content delivery. Efforts to accommodate learning styles would facilitate knowledge acquisition enhancing the effectiveness of a QI collaborative to improve organizational processes and outcomes.\n\n\nTRIAL REGISTRATION\nClinicalTrials.gov Identifier: NCT00934141 Registered July 6, 2009. Retrospectively registered.",
"title": ""
},
{
"docid": "2c5eb3fb74c6379dfd38c1594ebe85f4",
"text": "Accurately recognizing speaker emotion and age/gender from speech can provide better user experience for many spoken dialogue systems. In this study, we propose to use deep neural networks (DNNs) to encode each utterance into a fixed-length vector by pooling the activations of the last hidden layer over time. The feature encoding process is designed to be jointly trained with the utterance-level classifier for better classification. A kernel extreme learning machine (ELM) is further trained on the encoded vectors for better utterance-level classification. Experiments on a Mandarin dataset demonstrate the effectiveness of our proposed methods on speech emotion and age/gender recognition tasks.",
"title": ""
},
{
"docid": "e5aa3c20ccd4b473142093e225fd314e",
"text": "BACKGROUND\nLong-term engagement in exercise and physical activity mitigates the progression of disability and increases quality of life in people with Parkinson disease (PD). Despite this, the vast majority of individuals with PD are sedentary. There is a critical need for a feasible, safe, acceptable, and effective method to assist those with PD to engage in active lifestyles. Peer coaching through mobile health (mHealth) may be a viable approach.\n\n\nOBJECTIVE\nThe purpose of this study was to develop a PD-specific peer coach training program and a remote peer-mentored walking program using mHealth technology with the goal of increasing physical activity in persons with PD. We set out to examine the feasibility, safety, and acceptability of the programs along with preliminary evidence of individual-level changes in walking activity, self-efficacy, and disability in the peer mentees.\n\n\nMETHODS\nA peer coach training program and a remote peer-mentored walking program using mHealth was developed and tested in 10 individuals with PD. We matched physically active persons with PD (peer coaches) with sedentary persons with PD (peer mentees), resulting in 5 dyads. Using both Web-based and in-person delivery methods, we trained the peer coaches in basic knowledge of PD, exercise, active listening, and motivational interviewing. Peer coaches and mentees wore FitBit Zip activity trackers and participated in daily walking over 8 weeks. Peer dyads interacted daily via the FitBit friends mobile app and weekly via telephone calls. Feasibility was determined by examining recruitment, participation, and retention rates. Safety was assessed by monitoring adverse events during the study period. Acceptability was assessed via satisfaction surveys. Individual-level changes in physical activity were examined relative to clinically important differences.\n\n\nRESULTS\nFour out of the 5 peer pairs used the FitBit activity tracker and friends function without difficulty. A total of 4 of the 5 pairs completed the 8 weekly phone conversations. There were no adverse events over the course of the study. All peer coaches were \"satisfied\" or \"very satisfied\" with the training program, and all participants were \"satisfied\" or \"very satisfied\" with the peer-mentored walking program. All participants would recommend this program to others with PD. Increases in average steps per day exceeding the clinically important difference occurred in 4 out of the 5 mentees.\n\n\nCONCLUSIONS\nRemote peer coaching using mHealth is feasible, safe, and acceptable for persons with PD. Peer coaching using mHealth technology may be a viable method to increase physical activity in individuals with PD. Larger controlled trials are necessary to examine the effectiveness of this approach.",
"title": ""
},
{
"docid": "86aaee95a4d878b53fd9ee8b0735e208",
"text": "The tensegrity concept has long been considered as a basis for lightweight and compact packaging deployable structures, but very few studies are available. This paper presents a complete design study of a deployable tensegrity mast with all the steps involved: initial formfinding, structural analysis, manufacturing and deployment. Closed-form solutions are used for the formfinding. A manufacturing procedure in which the cables forming the outer envelope of the mast are constructed by two-dimensional weaving is used. The deployment of the mast is achieved through the use of self-locking hinges. A stiffness comparison between the tensegrity mast and an articulated truss mast shows that the tensegrity mast is weak in bending.",
"title": ""
},
{
"docid": "6e2b6edf7f97126272ff2dfa1ce7d0ae",
"text": "This paper presents a wearable lower extremity exoskeleton (LEE) developed as a platform for research works on enhancement and assistive the ability of human's walking and load carrying. The whole process of the first prototype design is introduced together with the sub-systems, inner/outer exoskeleton, the attached flexible waist and footpad with sensors. Simulation model and feedback control with the ZMP method were established using Adams and Matlab. The ultimate goal of the current research work is to design and control a power assisted system, which integrates human's intellect as the control system for manipulating the wearable power-aided device. The feasibility and initial performance of the designed system are also discussed.",
"title": ""
},
{
"docid": "87eed35ce26bf0194573f3ed2e6be7ca",
"text": "Embedding and visualizing large-scale high-dimensional data in a two-dimensional space is an important problem, because such visualization can reveal deep insights of complex data. However, most of the existing embedding approaches run on an excessively high precision, even when users want to obtain a brief insight from a visualization of large-scale datasets, ignoring the fact that in the end, the outputs are embedded onto a fixed-range pixel-based screen space. Motivated by this observation and directly considering the properties of screen space in an embedding algorithm, we propose Pixel-Aligned Stochastic Neighbor Embedding (PixelSNE), a highly efficient screen resolution-driven 2D embedding method which accelerates Barnes-Hut treebased t-distributed stochastic neighbor embedding (BH-SNE), which is known to be a state-of-the-art 2D embedding method. Our experimental results show a significantly faster running time for PixelSNE compared to BH-SNE for various datasets while maintaining comparable embedding quality.",
"title": ""
},
{
"docid": "c74bbe9cbf34e841c04830f34e12e141",
"text": "Feature extraction and encoding represent two of the most crucial steps in an action recognition system. For building a powerful action recognition pipeline it is important that both steps are efficient and in the same time provide reliable performance. This work proposes a new approach for feature extraction and encoding that allows us to obtain real-time frame rate processing for an action recognition system. The motion information represents an important source of information within the video. The common approach to extract the motion information is to compute the optical flow. However, the estimation of optical flow is very demanding in terms of computational cost, in many cases being the most significant processing step within the overall pipeline of the target video analysis application. In this work we propose an efficient approach to capture the motion information within the video. Our proposed descriptor, Histograms of Motion Gradients (HMG), is based on a simple temporal and spatial derivation, which captures the changes between two consecutive frames. For the encoding step a widely adopted method is the Vector of Locally Aggregated Descriptors (VLAD), which is an efficient encoding method, however, it considers only the difference between local descriptors and their centroids. In this work we propose Shape Difference VLAD (SD-VLAD), an encoding method which brings complementary information by using the shape information within the encoding process. We validated our proposed pipeline for action recognition on three challenging datasets UCF50, UCF101 and HMDB51, and we propose also a real-time framework for action recognition.",
"title": ""
},
{
"docid": "a0605a35164bba33c4e74c5a1bf997fa",
"text": "Most of the research on text categorization has focused on classifying text documents into a set of categories with no structural relationships among them (flat classification). However, in many information repositories documents are organized in a hierarchy of categories to support a thematic search by browsing topics of interests. The consideration of the hierarchical relationship among categories opens several additional issues in the development of methods for automated document classification. Questions concern the representation of documents, the learning process, the classification process and the evaluation criteria of experimental results. They are systematically investigated in this paper, whose main contribution is a general hierarchical text categorization framework where the hierarchy of categories is involved in all phases of automated document classification, namely feature selection, learning and classification of a new document. An automated threshold determination method for classification scores is embedded in the proposed framework. It can be applied to any classifier that returns a degree of membership of a document to a category. In this work three learning methods are considered for the construction of document classifiers, namely centroid-based, naïve Bayes and SVM. The proposed framework has been implemented in the system WebClassIII and has been tested on three datasets (Yahoo, DMOZ, RCV1) which present a variety of situations in terms of hierarchical structure. Experimental results are reported and several conclusions are drawn on the comparison of the flat vs. the hierarchical approach as well as on the comparison of different hierarchical classifiers. The paper concludes with a review of related work and a discussion of previous findings vs. our findings.",
"title": ""
},
{
"docid": "be9b40cc2e2340249584f7324e26c4d3",
"text": "This paper provides a unified account of two schools of thinking in information retrieval modelling: the generative retrieval focusing on predicting relevant documents given a query, and the discriminative retrieval focusing on predicting relevancy given a query-document pair. We propose a game theoretical minimax game to iteratively optimise both models. On one hand, the discriminative model, aiming to mine signals from labelled and unlabelled data, provides guidance to train the generative model towards fitting the underlying relevance distribution over documents given the query. On the other hand, the generative model, acting as an attacker to the current discriminative model, generates difficult examples for the discriminative model in an adversarial way by minimising its discrimination objective. With the competition between these two models, we show that the unified framework takes advantage of both schools of thinking: (i) the generative model learns to fit the relevance distribution over documents via the signals from the discriminative model, and (ii) the discriminative model is able to exploit the unlabelled data selected by the generative model to achieve a better estimation for document ranking. Our experimental results have demonstrated significant performance gains as much as 23.96% on Precision@5 and 15.50% on MAP over strong baselines in a variety of applications including web search, item recommendation, and question answering.",
"title": ""
}
] |
scidocsrr
|
923000b84814d5ad0d40cca7b0499e9c
|
Bioinformatics and Computational Biology Solutions Using R and Bioconductor (Statistics for Biology and Health)
|
[
{
"docid": "d258a14fc9e64ba612f2c8ea77f85d08",
"text": "In this paper we report exploratory analyses of high-density oligonucleotide array data from the Affymetrix GeneChip system with the objective of improving upon currently used measures of gene expression. Our analyses make use of three data sets: a small experimental study consisting of five MGU74A mouse GeneChip arrays, part of the data from an extensive spike-in study conducted by Gene Logic and Wyeth's Genetics Institute involving 95 HG-U95A human GeneChip arrays; and part of a dilution study conducted by Gene Logic involving 75 HG-U95A GeneChip arrays. We display some familiar features of the perfect match and mismatch probe (PM and MM) values of these data, and examine the variance-mean relationship with probe-level data from probes believed to be defective, and so delivering noise only. We explain why we need to normalize the arrays to one another using probe level intensities. We then examine the behavior of the PM and MM using spike-in data and assess three commonly used summary measures: Affymetrix's (i) average difference (AvDiff) and (ii) MAS 5.0 signal, and (iii) the Li and Wong multiplicative model-based expression index (MBEI). The exploratory data analyses of the probe level data motivate a new summary measure that is a robust multi-array average (RMA) of background-adjusted, normalized, and log-transformed PM values. We evaluate the four expression summary measures using the dilution study data, assessing their behavior in terms of bias, variance and (for MBEI and RMA) model fit. Finally, we evaluate the algorithms in terms of their ability to detect known levels of differential expression using the spike-in data. We conclude that there is no obvious downside to using RMA and attaching a standard error (SE) to this quantity using a linear model which removes probe-specific affinities.",
"title": ""
},
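The RMA pipeline described in the abstract above lends itself to a compact illustration. The following Python sketch is not the Bioconductor implementation; the background constant, array sizes, and simulated intensities are invented for illustration. It shows the three RMA-style steps on the PM values: a crude background adjustment, quantile normalization across arrays, and a median-polish fit of the log2 intensities that yields one expression value per array.

import numpy as np

def quantile_normalize(pm):
    # Give every array (column) the same empirical intensity distribution.
    order = np.argsort(pm, axis=0)
    reference = np.sort(pm, axis=0).mean(axis=1)
    out = np.empty_like(pm, dtype=float)
    for j in range(pm.shape[1]):
        out[order[:, j], j] = reference
    return out

def median_polish(x, n_iter=10):
    # Fit log2(PM) ~ probe effect + array effect; return per-array expression values.
    x = x.copy()
    probe = np.zeros(x.shape[0])
    array = np.zeros(x.shape[1])
    for _ in range(n_iter):
        row_med = np.median(x, axis=1); probe += row_med; x -= row_med[:, None]
        col_med = np.median(x, axis=0); array += col_med; x -= col_med[None, :]
    return np.median(probe) + array

def rma_like(pm, background=50.0):
    pm = np.maximum(pm - background, 1.0)   # crude stand-in for RMA's model-based background step
    return median_polish(np.log2(quantile_normalize(pm)))

# toy probe set: 11 probes (rows) measured on 4 arrays (columns)
pm = np.random.default_rng(0).lognormal(mean=7.0, sigma=0.3, size=(11, 4))
print(rma_like(pm))                         # one expression value per array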
{
"docid": "88c5a6fca072ae849d300e6f30d15c40",
"text": "Models such as feed-forward neural networks and certain other structures investigated in the computer science literature are not amenable to closed-form Bayesian analysis. The paper reviews the various approaches taken to overcome this difficulty, involving the use of Gaussian approximations, Markov chain Monte Carlo simulation routines and a class of non-Gaussian but “deterministic” approximations called variational approximations.",
"title": ""
}
] |
[
{
"docid": "517cb20e47d3b92d12c6fb86b22c3a19",
"text": "The symmetric traveling salesman problem (TSP) is one of the best-known problems of combinatorial optimisation, very easy to explain and visualise, yet with a semblance of real-world applicability. Given a set of points, the cost of moving between each two in either direction, and a constant k, the task in the TSP is to decide, whether it is possible to visit all points, each one exactly once, and to return back to the point of departure, at a total cost of no more than k. The latest book by Applegate, Bixby, Chvátal, and Cook provides an excellent survey of methods that kick-started this \" engine of discovery in applied mathematics \" (invoked on pp. In more than 600 pages, the authors present a survey of methods used in their present-best TSP solver Concorde, almost to the exclusion of any other content. Chapters 1–4 describe the TSP and Chapters 5–6 provide a brief introduction to solving the TSP by using the branch and cut method. At the heart of the book are then Chapters 7–11, which survey various classes of cuts, in some cases first proposed by the authors themselves. Chapter 7 surveys cuts from blossoms and blocks, Chapter 8 presents cuts from combs and consecutive ones, and Chapter 9 introduces cuts from dominoes. Chapters 11 and 12 then describe in yet more detail separation and metamorphoses of strong valid inequalities. Other variants of the problem, such as the asymmetric TSP, and other solution approaches, including metaheuristics and approximation algorithms, are mentioned only in the passing. They are, however, well-covered elsewhere (Gutin & Punnen, 2002), and the seemingly narrow focus consequently enables the authors to provide an outstandingly in-depth treatment. The treatment especially benefits from authors' extensive experience with implementation of solvers for problems of combinatorial optimisation. In many textbooks on combinatorial optimisation, primal heuristics are mentioned only in passing and cuts are presented in the very mathematical style of definition – proof of validity – proof of dimensionality. Not here. Chapter 6-11 suggest separation routines, exact or heuristic, alongside the description of strong valid inequalities, Chapter 12 is devoted to management of cuts and instances of linear programming, Chapter 13 describes pricing routines for column generation, and last but not least, Chapter 15 is devoted to primal (tour-finding) heuristics. \" Implementation details \" , such as the choice of suitable data structures and trade-offs between heuristic and exact separation, are …",
"title": ""
},
{
"docid": "e04cccfd59c056678e39fc4aed0eaa2b",
"text": "BACKGROUND\nBreast cancer is by far the most frequent cancer of women. However the preventive measures for such problem are probably less than expected. The objectives of this study are to assess breast cancer knowledge and attitudes and factors associated with the practice of breast self examination (BSE) among female teachers of Saudi Arabia.\n\n\nPATIENTS AND METHODS\nWe conducted a cross-sectional survey of teachers working in female schools in Buraidah, Saudi Arabia using a self-administered questionnaire to investigate participants' knowledge about the risk factors of breast cancer, their attitudes and screening behaviors. A sample of 376 female teachers was randomly selected. Participants lived in urban areas, and had an average age of 34.7 ±5.4 years.\n\n\nRESULTS\nMore than half of the women showed a limited knowledge level. Among participants, the most frequently reported risk factors were non-breast feeding and the use of female sex hormones. The printed media was the most common source of knowledge. Logistic regression analysis revealed that high income was the most significant predictor of better knowledge level. Knowing a non-relative case with breast cancer and having a high knowledge level were identified as the significant predictors for practicing BSE.\n\n\nCONCLUSIONS\nThe study points to the insufficient knowledge of female teachers about breast cancer and identified the negative influence of low knowledge on the practice of BSE. Accordingly, relevant educational programs to improve the knowledge level of women regarding breast cancer are needed.",
"title": ""
},
{
"docid": "98e9d8fb4a04ad141b3a196fe0a9c08b",
"text": "ÐGraphs are a powerful and universal data structure useful in various subfields of science and engineering. In this paper, we propose a new algorithm for subgraph isomorphism detection from a set of a priori known model graphs to an input graph that is given online. The new approach is based on a compact representation of the model graphs that is computed offline. Subgraphs that appear multiple times within the same or within different model graphs are represented only once, thus reducing the computational effort to detect them in an input graph. In the extreme case where all model graphs are highly similar, the run-time of the new algorithm becomes independent of the number of model graphs. Both a theoretical complexity analysis and practical experiments characterizing the performance of the new approach will be given. Index TermsÐGraph matching, graph isomorphism, subgraph isomorphism, preprocessing.",
"title": ""
},
{
"docid": "d8828a6cafcd918cd55b1782629b80e0",
"text": "For deep-neural-network (DNN) processors [1-4], the product-sum (PS) operation predominates the computational workload for both convolution (CNVL) and fully-connect (FCNL) neural-network (NN) layers. This hinders the adoption of DNN processors to on the edge artificial-intelligence (AI) devices, which require low-power, low-cost and fast inference. Binary DNNs [5-6] are used to reduce computation and hardware costs for AI edge devices; however, a memory bottleneck still remains. In Fig. 31.5.1 conventional PE arrays exploit parallelized computation, but suffer from inefficient single-row SRAM access to weights and intermediate data. Computing-in-memory (CIM) improves efficiency by enabling parallel computing, reducing memory accesses, and suppressing intermediate data. Nonetheless, three critical challenges remain (Fig. 31.5.2), particularly for FCNL. We overcome these problems by co-optimizing the circuits and the system. Recently, researches have been focusing on XNOR based binary-DNN structures [6]. Although they achieve a slightly higher accuracy, than other binary structures, they require a significant hardware cost (i.e. 8T-12T SRAM) to implement a CIM system. To further reduce the hardware cost, by using 6T SRAM to implement a CIM system, we employ binary DNN with 0/1-neuron and ±1-weight that was proposed in [7]. We implemented a 65nm 4Kb algorithm-dependent CIM-SRAM unit-macro and in-house binary DNN structure (focusing on FCNL with a simplified PE array), for cost-aware DNN AI edge processors. This resulted in the first binary-based CIM-SRAM macro with the fastest (2.3ns) PS operation, and the highest energy-efficiency (55.8TOPS/W) among reported CIM macros [3-4].",
"title": ""
},
{
"docid": "a1594c7a0ed8990dc1322eb74275c126",
"text": "The main objective of the project was to examine a proposed theoretical model of mindfulness mechanisms in sports. We conducted two studies (the first study using a cross-sectional design and the second a longitudinal design) to investigate if rumination and emotion regulation mediate the relation between dispositional mindfulness and sport-specific coping. Two hundred and forty-two young elite athletes, drawn from various sports, were recruited for the cross-sectional study. For the longitudinal study, 65 elite athletes were recruited. All analyses were performed using Bayesian statistics. The path analyses showed credible indirect effects of dispositional mindfulness on coping via rumination and emotion regulation in both the cross-sectional study and the longitudinal study. Additionally, the results in both studies showed credible direct effects of dispositional mindfulness on rumination and emotion regulation. Further, credible direct effects of emotion regulation as well as rumination on coping were also found in both studies. Our findings support the theoretical model, indicating that rumination and emotion regulation function as essential mechanisms in the relation between dispositional mindfulness and sport-specific coping skills. Increased dispositional mindfulness in competitive athletes (i.e. by practicing mindfulness) may lead to reductions in rumination, as well as an improved capacity to regulate negative emotions. By doing so, athletes may improve their sport-related coping skills, and thereby enhance athletic performance.",
"title": ""
},
{
"docid": "d5b9dd51c831473fa66e87baac352994",
"text": "Essential oils are secondary metabolites with a key-role in plants protection, consisting primarily of terpenes with a volatile nature and a diverse array of chemical structures. Essential oils exhibit a wide range of bioactivities, especially antimicrobial activity, and have long been utilized for treating various human ailments and diseases. Cancer cell prevention and cytotoxicity are exhibited through a wide range of mechanisms of action, with more recent research focusing on synergistic and antagonistic activity between specific essential oils major and minor components. Essential oils have been shown to possess cancer cell targeting activity and are able to increase the efficacy of commonly used chemotherapy drugs including paclitaxel and docetaxel, having also shown proimmune functions when administered to the cancer patient. The present review represents a state-of-the-art review of the research behind the application of EOs as anticancer agents both in vitro and in vivo. Cancer cell target specificity and the use of EOs in combination with conventional chemotherapeutic strategies are also explored.",
"title": ""
},
{
"docid": "77da7651b0e924d363c859d926e8c9da",
"text": "Manual feedback in basic robot-assisted minimally invasive surgery (RMIS) training can consume a significant amount of time from expert surgeons’ schedule and is prone to subjectivity. In this paper, we explore the usage of different holistic features for automated skill assessment using only robot kinematic data and propose a weighted feature fusion technique for improving score prediction performance. Moreover, we also propose a method for generating ‘task highlights’ which can give surgeons a more directed feedback regarding which segments had the most effect on the final skill score. We perform our experiments on the publicly available JHU-ISI Gesture and Skill Assessment Working Set (JIGSAWS) and evaluate four different types of holistic features from robot kinematic data—sequential motion texture (SMT), discrete Fourier transform (DFT), discrete cosine transform (DCT) and approximate entropy (ApEn). The features are then used for skill classification and exact skill score prediction. Along with using these features individually, we also evaluate the performance using our proposed weighted combination technique. The task highlights are produced using DCT features. Our results demonstrate that these holistic features outperform all previous Hidden Markov Model (HMM)-based state-of-the-art methods for skill classification on the JIGSAWS dataset. Also, our proposed feature fusion strategy significantly improves performance for skill score predictions achieving up to 0.61 average spearman correlation coefficient. Moreover, we provide an analysis on how the proposed task highlights can relate to different surgical gestures within a task. Holistic features capturing global information from robot kinematic data can successfully be used for evaluating surgeon skill in basic surgical tasks on the da Vinci robot. Using the framework presented can potentially allow for real-time score feedback in RMIS training and help surgical trainees have more focused training.",
"title": ""
},
{
"docid": "35bcfb8837d6bd11d91e5d91f039b9b5",
"text": "Distributed word representation plays a pivotal role in various natural language processing tasks. In spite of its success, most existing methods only consider contextual information, which is suboptimal when used in various tasks due to a lack of task-specific features. The rational word embeddings should have the ability to capture both the semantic features and task-specific features of words. In this paper, we propose a task-oriented word embedding method and apply it to the text classification task. With the function-aware component, our method regularizes the distribution of words to enable the embedding space to have a clear classification boundary. We evaluate our method using five text classification datasets. The experiment results show that our method significantly outperforms the state-of-the-art methods.",
"title": ""
},
{
"docid": "5c38ad54e43b71ea5588418620bcf086",
"text": "Chondrosarcomas are indolent but invasive chondroid malignancies that can form in the skull base. Standard management of chondrosarcoma involves surgical resection and adjuvant radiation therapy. This review evaluates evidence from the literature to assess the importance of the surgical approach and extent of resection on outcomes for patients with skull base chondrosarcoma. Also evaluated is the ability of the multiple modalities of radiation therapy, such as conventional fractionated radiotherapy, proton beam, and stereotactic radiosurgery, to control tumor growth. Finally, emerging therapies for the treatment of skull-base chondrosarcoma are discussed.",
"title": ""
},
{
"docid": "0bfebc28492f27539104c0c2a46dbc8c",
"text": "This paper presents a reinforcement learning (RL)–based energy management strategy for a hybrid electric tracked vehicle. A control-oriented model of the powertrain and vehicle dynamics is first established. According to the sample information of the experimental driving schedule, statistical characteristics at various velocities are determined by extracting the transition probability matrix of the power request. Two RL-based algorithms, namely Q-learning and Dyna algorithms, are applied to generate optimal control solutions. The two algorithms are simulated on the same driving schedule, and the simulation results are compared to clarify the merits and demerits of these algorithms. Although the Q-learning algorithm is faster (3 h) than the Dyna algorithm (7 h), its fuel consumption is 1.7% higher than that of the Dyna algorithm. Furthermore, the Dyna algorithm registers approximately the same fuel consumption as the dynamic programming–based global optimal solution. The computational cost of the Dyna algorithm is substantially lower than that of the stochastic dynamic programming.",
"title": ""
},
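For readers unfamiliar with the two learners compared in the abstract above, the core of tabular Q-learning is a one-line temporal-difference update; the Dyna variant additionally replays simulated transitions drawn from a learned model. The sketch below shows only the Q-learning update with made-up states, actions, and rewards; it is not the vehicle model or reward function used in the paper.

import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1
ACTIONS = list(range(5))                 # e.g. discretized engine power commands
Q = defaultdict(float)                   # Q[(state, action)] -> estimated return

def choose_action(state):
    if random.random() < EPSILON:        # epsilon-greedy exploration
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def q_update(state, action, reward, next_state):
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    td_error = reward + GAMMA * best_next - Q[(state, action)]
    Q[(state, action)] += ALPHA * td_error

# one illustrative transition; the reward stands in for negative fuel consumption
s = ("speed=20kph", "power_request=4kW")
a = choose_action(s)
q_update(s, a, reward=-0.8, next_state=("speed=22kph", "power_request=5kW"))
print(Q[(s, a)])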
{
"docid": "415423f706491c5ec3df6a3b3bf48743",
"text": "The realm of human uniqueness steadily shrinks; reflecting this, other primates suffer from states closer to depression or anxiety than 'depressive-like' or 'anxiety-like behavior'. Nonetheless, there remain psychiatric domains unique to humans. Appreciating these continuities and discontinuities must inform the choice of neurobiological approach used in studying any animal model of psychiatric disorders. More fundamentally, the continuities reveal how aspects of psychiatric malaise run deeper than our species' history.",
"title": ""
},
{
"docid": "ece151547ff80622a5f026e631c626d3",
"text": "Over the last few years there has been substantial research on text summarization, but comparatively little research has been carried out on adaptable components that allow rapid development and evaluation of summarization solutions. This paper presents a set of adaptable summarization components together with well-established evaluation tools, all within the GATE paradigm. The toolkit includes resources for the computation of summarization features which are combined in order to provide functionalities for single-document, multi-document, querybased, and multi/cross-lingual summarization. The summarization tools have been successfully used in a number of applications including a fully-fledged information access system. RÉSUMÉ. Au cours des dernières années il y a eu un nombre important de recherches au sujet du résumé automatique. Toutefois, il y a eu comparativement peu de recherche au sujet des ressources computationnelles et composantes qui peuvent être adaptées facilement pour le développement et l’évaluation des systèmes de résumé automatique. Ici on présente un ensemble de ressources spécifiquement développées pour le résumé automatique qui se basent sur la plateforme GATE. Les composantes sont utilisées pour calculer des traits indiquant la pertinence des phrases. Ces composantes sont combinées pour produire différents types de systèmes de résumé tels que résumé de document simple, résumé de document multiple, et résumé basé sur des topiques. Les ressources et algorithmes implémentés ont été utilisés pour développer plusieurs applications pour l’accès à l’information dans des systèmes d’information.",
"title": ""
},
{
"docid": "7b205b171481afeb46d7347428b223cf",
"text": "The power–voltage characteristic of photovoltaic (PV) arrays displays multiple local maximum power points when all the modules do not receive uniform solar irradiance, i.e., under partial shading conditions (PSCs). Conventional maximum power point tracking (MPPT) methods are shown to be effective under uniform solar irradiance conditions. However, they may fail to track the global peak under PSCs. This paper proposes a new method for MPPT of PV arrays under both PSCs and uniform conditions. By analyzing the solar irradiance pattern and using the popular Hill Climbing method, the proposed method tracks all local maximum power points. The performance of the proposed method is evaluated through simulations in MATLAB/SIMULINK environment. Besides, the accuracy of the proposed method is proved using experimental results.",
"title": ""
},
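As a rough illustration of the idea above, the sketch below runs plain hill climbing (perturb and observe) from several starting voltages and keeps the best result, which is one simple way to avoid being trapped on a local peak under partial shading. The two-peak power-voltage curve is a toy stand-in for a real PV array, not the irradiance model used in the paper.

def pv_power(v):
    # Toy P-V curve with two local maxima, mimicking a partially shaded array.
    return max(0.0, 40 * v * (1 - v / 18)) + max(0.0, 25 * (v - 20) * (1 - (v - 20) / 14))

def hill_climb(v, step=0.2, iters=200):
    p = pv_power(v)
    direction = 1.0
    for _ in range(iters):
        v_next = v + direction * step
        p_next = pv_power(v_next)
        if p_next < p:
            direction = -direction       # reverse the perturbation when power drops
        v, p = v_next, p_next
    return v, p

# coarse scan of starting points, then local hill climbing from each
candidates = [hill_climb(float(v0)) for v0 in range(2, 34, 4)]
v_mpp, p_mpp = max(candidates, key=lambda c: c[1])
print(f"estimated global MPP near {v_mpp:.1f} V with {p_mpp:.1f} W")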
{
"docid": "9d64f045d9976bfde0c3ae4b22ff816e",
"text": "In this paper, a novel capacitive coupled patch antenna array capable of providing 360° coverage is proposed. The design and performance of the antenna element is discussed. The proposed antenna design is positioned in the mobile phone chassis as a set of 4 sub-arrays each with 12 antenna elements to provide high gain around 27 dBi with each sub-array providing 90° coverage. The antenna array covers the frequency range of 24–28 GHz which is a promising band for future 5G based smartphone services. The antenna array performance is evaluated. The proposed antenna exhibits a uniform radiation pattern with a stable gain when integrated with the mobile phone chassis.",
"title": ""
},
{
"docid": "5b3a2eab238c1bad29df4c3c8608abee",
"text": "Existing attention mechanisms are trained to attend to individual items in a collection (the memory) with a predefined, fixed granularity, e.g., a word token or an image grid. We propose area attention: a way to attend to areas in the memory, where each area contains a group of items that are structurally adjacent, e.g., spatially for a 2D memory such as images, or temporally for a 1D memory such as natural language sentences. Importantly, the shape and the size of an area are dynamically determined via learning, which enables a model to attend to information with varying granularity. Area attention can easily work with existing model architectures such as multi-head attention for simultaneously attending to multiple areas in the memory. We evaluate area attention on two tasks: neural machine translation (both character and token-level) and image captioning, and improve upon strong (state-of-the-art) baselines in all the cases. These improvements are obtainable with a basic form of area attention that is parameter free.",
"title": ""
},
{
"docid": "a2514f994292481d0fe6b37afe619cb5",
"text": "The purpose of this tutorial is to present an overview of various information hiding techniques. A brief history of steganography is provided along with techniques that were used to hide information. Text, image and audio based information hiding techniques are discussed. This paper also provides a basic introduction to digital watermarking. 1. History of Information Hiding The idea of communicating secretly is as old as communication itself. In this section, we briefly discuss the historical development of information hiding techniques such as steganography/ watermarking. Early steganography was messy. Before phones, before mail, before horses, messages were sent on foot. If you wanted to hide a message, you had two choices: have the messenger memorize it, or hide it on the messenger. While information hiding techniques have received a tremendous attention recently, its application goes back to Greek times. According to Greek historian Herodotus, the famous Greek tyrant Histiaeus, while in prison, used unusual method to send message to his son-in-law. He shaved the head of a slave to tattoo a message on his scalp. Histiaeus then waited until the hair grew back on slave’s head prior to sending him off to his son-inlaw. The second story also came from Herodotus, which claims that a soldier named Demeratus needed to send a message to Sparta that Xerxes intended to invade Greece. Back then, the writing medium was written on wax-covered tablet. Demeratus removed the wax from the tablet, wrote the secret message on the underlying wood, recovered the tablet with wax to make it appear as a blank tablet and finally sent the document without being detected. Invisible inks have always been a popular method of steganography. Ancient Romans used to write between lines using invisible inks based on readily available substances such as fruit juices, urine and milk. When heated, the invisible inks would darken, and become legible. Ovid in his “Art of Love” suggests using milk to write invisibly. Later chemically affected sympathetic inks were developed. Invisible inks were used as recently as World War II. Modern invisible inks fluoresce under ultraviolet light and are used as anti-counterfeit devices. For example, \"VOID\" is printed on checks and other official documents in an ink that appears under the strong ultraviolet light used for photocopies. The monk Johannes Trithemius, considered one of the founders of modern cryptography, had ingenuity in spades. His three volume work Steganographia, written around 1500, describes an extensive system for concealing secret messages within innocuous texts. On its surface, the book seems to be a magical text, and the initial reaction in the 16th century was so strong that Steganographia was only circulated privately until publication in 1606. But less than five years ago, Jim Reeds of AT&T Labs deciphered mysterious codes in the third volume, showing that Trithemius' work is more a treatise on cryptology than demonology. Reeds' fascinating account of the code breaking process is quite readable. One of Trithemius' schemes was to conceal messages in long invocations of the names of angels, with the secret message appearing as a pattern of letters within the words. For example, as every other letter in every other word: padiel aporsy mesarpon omeuas peludyn malpreaxo which reveals \"prymus apex.\" Another clever invention in Steganographia was the \"Ave Maria\" cipher. 
The book contains a series of tables, each of which has a list of words, one per letter. To code a message, the message letters are replaced by the corresponding words. If the tables are used in order, one table per letter, then the coded message will appear to be an innocent prayer. The earliest actual book on steganography was a four hundred page work written by Gaspari Schott in 1665 and called Steganographica. Although most of the ideas came from Trithemius, it was a start. Further development in the field occurred in 1883, with the publication of Auguste Kerchoffs’ Cryptographie militaire. Although this work was mostly about cryptography, it describes some principles that are worth keeping in mind when designing a new steganographic system.",
"title": ""
},
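The "every other letter in every other word" example quoted in the passage above can be checked mechanically. The short sketch below assumes the counting starts at the second word and the second letter; the passage does not state the offsets, but this choice reproduces the stated plaintext.

def decode_every_other(invocation):
    # Take every second letter of every second word (Trithemius-style null cipher).
    words = invocation.split()
    return "".join(word[1::2] for word in words[1::2])

cover_text = "padiel aporsy mesarpon omeuas peludyn malpreaxo"
print(decode_every_other(cover_text))   # -> "prymusapex", i.e. "prymus apex"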
{
"docid": "d658b95cc9dc81d0dbb3918795ccab50",
"text": "A brain–computer interface (BCI) is a communication channel which does not depend on the brain’s normal output pathways of peripheral nerves and muscles [1–3]. It supplies paralyzed patients with a new approach to communicate with the environment. Among various brain monitoring methods employed in current BCI research, electroencephalogram (EEG) is the main interest due to its advantages of low cost, convenient operation and non-invasiveness. In present-day EEG-based BCIs, the following signals have been paid much attention: visual evoked potential (VEP), sensorimotor mu/beta rhythms, P300 evoked potential, slow cortical potential (SCP), and movement-related cortical potential (MRCP). Details about these signals can be found in chapter “Brain Signals for Brain–Computer Interfaces”. These systems offer some practical solutions (e.g., cursor movement and word processing) for patients with motor disabilities. In this chapter, practical designs of several BCIs developed in Tsinghua University will be introduced. First of all, we will propose the paradigm of BCIs based on the modulation of EEG rhythms and challenges confronting practical system designs. In Sect. 2, modulation and demodulation methods of EEG rhythms will be further explained. Furthermore, practical designs of a VEP-based BCI and a motor imagery based BCI will be described in Sect. 3. Finally, Sect. 4 will present some real-life application demos using these practical BCI systems.",
"title": ""
},
{
"docid": "3668a5a14ea32471bd34a55ff87b45b5",
"text": "This paper proposes a method to separate polyphonic music signal into signals of each musical instrument by NMF: Non-negative Matrix Factorization based on preservation of spectrum envelope. Sound source separation is taken as a fundamental issue in music signal processing and NMF is becoming common to solve it because of its versatility and compatibility with music signal processing. Our method bases on a common feature of harmonic signal: spectrum envelopes of musical signal in close pitches played by the harmonic music instrument would be similar. We estimate power spectrums of each instrument by NMF with restriction to synchronize spectrum envelope of bases which are allocated to all possible center frequencies of each instrument. This manipulation means separation of components which refers to tones of each instrument and realizes both of separation without pre-training and separation of signal including harmonic and non-harmonic sound. We had an experiment to decompose mixture sound signal of MIDI instruments into each instrument and evaluated the result by SNR of single MIDI instrument sound signals and separated signals. As a result, SNR of lead guitar and drums approximately marked 3.6 and 6.0 dB and showed significance of our method.",
"title": ""
},
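The envelope-sharing restriction is specific to the paper above, but the machinery it builds on is standard NMF of a magnitude spectrogram. Below is a minimal sketch of the underlying multiplicative-update NMF (Frobenius-norm version); the random matrix stands in for a real spectrogram, and the envelope-synchronization constraint is not implemented.

import numpy as np

def nmf(V, rank, n_iter=200, eps=1e-9):
    # Factor V (freq x time) into W (freq x rank) @ H (rank x time), all non-negative.
    rng = np.random.default_rng(0)
    W = rng.random((V.shape[0], rank)) + eps
    H = rng.random((rank, V.shape[1])) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # multiplicative updates (Lee and Seung)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# toy magnitude spectrogram: 257 frequency bins x 100 frames
V = np.abs(np.random.default_rng(1).normal(size=(257, 100)))
W, H = nmf(V, rank=8)
print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))   # relative reconstruction error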
{
"docid": "86820c43e63066930120fa5725b5b56d",
"text": "We introduce Wiktionary as an emerging lexical semantic resource that can be used as a substitute for expert-made resources in AI applications. We evaluate Wiktionary on the pervasive task of computing semantic relatedness for English and German by means of correlation with human rankings and solving word choice problems. For the first time, we apply a concept vector based measure to a set of different concept representations like Wiktionary pseudo glosses, the first paragraph of Wikipedia articles, English WordNet glosses, and GermaNet pseudo glosses. We show that: (i) Wiktionary is the best lexical semantic resource in the ranking task and performs comparably to other resources in the word choice task, and (ii) the concept vector based approach yields the best results on all datasets in both evaluations.",
"title": ""
}
] |
scidocsrr
|
8e0d8bbaef344bace0d90c0be3bfb4dd
|
Improved Garbled Circuit: Free XOR Gates and Applications
|
[
{
"docid": "5ed4c23e1fcfb3f18c18bb1eb6f408ab",
"text": "In this paper we introduce the concept of privacy preserving data mining. In our model, two parties owning confidential databases wish to run a data mining algorithm on the union of their databases, without revealing any unnecessary information. This problem has many practical and important applications, such as in medical research with confidential patient records. Data mining algorithms are usually complex, especially as the size of the input is measured in megabytes, if not gigabytes. A generic secure multi-party computation solution, based on evaluation of a circuit computing the algorithm on the entire input, is therefore of no practical use. We focus on the problem of decision tree learning and use ID3, a popular and widely used algorithm for this problem. We present a solution that is considerably more efficient than generic solutions. It demands very few rounds of communication and reasonable bandwidth. In our solution, each party performs by itself a computation of the same order as computing the ID3 algorithm for its own database. The results are then combined using efficient cryptographic protocols, whose overhead is only logarithmic in the number of transactions in the databases. We feel that our result is a substantial contribution, demonstrating that secure multi-party computation can be made practical, even for complex problems and large inputs.",
"title": ""
}
] |
[
{
"docid": "c0202481cd5e2e1a32a54a959eb1cbc4",
"text": "Sentiments are expressions of one’s words in a sentence. Hence understanding the meaning of text in the sentence is of utmost importance to people of various fields like customer reviews in companies, movie reviews in movies, etc. It may involve huge text data to analyze and it becomes totally unviable for manually understanding the meaning of sentences. Classifier algorithms should be used to classify the various meaning of the sentences. By using pre-defined data to train our classifier and three different algorithms namely Naive Bayes, Support Vector Machines, Decision Trees, we can simplify the task of text classification. Using relevant results and examples we will prove that SVM is one of the better algorithms in providing higher accuracy over the other two algorithms i.e. Naive Bayes and Decision Tree.",
"title": ""
},
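A minimal scikit-learn sketch of the comparison described above follows; the toy reviews and the TF-IDF bag-of-words feature choice are illustrative stand-ins, not the paper's data or setup.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC
from sklearn.tree import DecisionTreeClassifier

train_texts = [
    "the movie was wonderful and moving",
    "a dull, boring waste of two hours",
    "great acting and a touching story",
    "terrible plot and awful dialogue",
]
train_labels = ["pos", "neg", "pos", "neg"]

for name, clf in [("naive_bayes", MultinomialNB()),
                  ("linear_svm", LinearSVC()),
                  ("decision_tree", DecisionTreeClassifier())]:
    model = make_pipeline(TfidfVectorizer(), clf)   # bag-of-words features -> classifier
    model.fit(train_texts, train_labels)
    print(name, model.predict(["what a wonderful, touching film"]))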
{
"docid": "673fea40e5cb12b54cc296b1a2c98ddb",
"text": "Matrix completion is a rank minimization problem to recover a low-rank data matrix from a small subset of its entries. Since the matrix rank is nonconvex and discrete, many existing approaches approximate the matrix rank as the nuclear norm. However, the truncated nuclear norm is known to be a better approximation to the matrix rank than the nuclear norm, exploiting a priori target rank information about the problem in rank minimization. In this paper, we propose a computationally efficient truncated nuclear norm minimization algorithm for matrix completion, which we call TNNM-ALM. We reformulate the original optimization problem by introducing slack variables and considering noise in the observation. The central contribution of this paper is to solve it efficiently via the augmented Lagrange multiplier (ALM) method, where the optimization variables are updated by closed-form solutions. We apply the proposed TNNM-ALM algorithm to ghost-free high dynamic range imaging by exploiting the low-rank structure of irradiance maps from low dynamic range images. Experimental results on both synthetic and real visual data show that the proposed algorithm achieves significantly lower reconstruction errors and superior robustness against noise than the conventional approaches, while providing substantial improvement in speed, thereby applicable to a wide range of imaging applications.",
"title": ""
},
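The paper's ALM algorithm has closed-form updates that are not reproduced here; the sketch below only illustrates the central ingredient of truncated-nuclear-norm methods, a partial singular-value shrinkage that leaves the top-r singular values untouched, inside a naive fixed-point completion loop on synthetic data. The rank, threshold, and iteration count are arbitrary illustrative choices.

import numpy as np

def partial_svt(Y, r, tau):
    # Keep the top-r singular values, soft-threshold the remaining ones.
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s[r:] = np.maximum(s[r:] - tau, 0.0)
    return (U * s) @ Vt

def complete(M_obs, mask, r=2, tau=1.0, n_iter=200):
    # Naive iteration: shrink, then re-impose the observed entries.
    X = np.where(mask, M_obs, 0.0)
    for _ in range(n_iter):
        X = partial_svt(X, r, tau)
        X[mask] = M_obs[mask]
    return X

rng = np.random.default_rng(0)
M = rng.normal(size=(30, 2)) @ rng.normal(size=(2, 30))   # ground-truth rank-2 matrix
mask = rng.random(M.shape) > 0.4                          # roughly 60% of entries observed
print(np.linalg.norm(complete(M, mask) - M) / np.linalg.norm(M))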
{
"docid": "bdfa9a484a2bca304c0a8bbd6dcd7f1a",
"text": "We present a multilingual Named Entity Recognition approach based on a robust and general set of features across languages and datasets. Our system combines shallow local information with clustering semi-supervised features induced on large amounts of unlabeled text. Understanding via empirical experimentation how to effectively combine various types of clustering features allows us to seamlessly export our system to other datasets and languages. The result is a simple but highly competitive system which obtains state of the art results across five languages and twelve datasets. The results are reported on standard shared task evaluation data such as CoNLL for English, Spanish and Dutch. Furthermore, and despite the lack of linguistically motivated features, we also report best results for languages such as Basque and German. In addition, we demonstrate that our method also obtains very competitive results even when the amount of supervised data is cut by half, alleviating the dependency on manually annotated data. Finally, the results show that our emphasis on clustering features is crucial to develop robust out-of-domain models. The system and models are freely available to facilitate its use and guarantee the reproducibility of results.",
"title": ""
},
{
"docid": "4d894156dd1ad6864eb6b47ed6bee085",
"text": "Preference learning is a fundamental problem in various smart computing applications such as personalized recommendation. Collaborative filtering as a major learning technique aims to make use of users’ feedback, for which some recent works have switched from exploiting explicit feedback to implicit feedback. One fundamental challenge of leveraging implicit feedback is the lack of negative feedback, because there is only some observed relatively “positive” feedback available, making it difficult to learn a prediction model. In this paper, we propose a new and relaxed assumption of pairwise preferences over item-sets, which defines a user’s preference on a set of items (item-set) instead of on a single item only. The relaxed assumption can give us more accurate pairwise preference relationships. With this assumption, we further develop a general algorithm called CoFiSet (collaborative filtering via learning pairwise preferences over item-sets), which contains four variants, CoFiSet(SS), CoFiSet(MOO), CoFiSet(MOS) and CoFiSet(MSO), representing “Set vs. Set,” “Many ‘One vs. One’,” “Many ‘One vs. Set”’ and “Many ‘Set vs. One”’ pairwise comparisons, respectively. Experimental results show that our CoFiSet(MSO) performs better than several state-of-the-art methods on five ranking-oriented evaluation metrics on three real-world data sets.",
"title": ""
},
{
"docid": "76105ede3908516cebd3bb84ad965be0",
"text": "897 don't know the coin used for each set of tosses. However, if we had some way of completing the data (in our case, guessing correctly which coin was used in each of the five sets), then we could reduce parameter estimation for this problem with incomplete data to maximum likelihood estimation with complete data. One iterative scheme for obtaining completions could work as follows: starting from some initial parameters, θ ˆ ˆ ˆ = θ Α ,θ Β (t) (t) (t) (), determine for each of the five sets whether coin A or coin B was more likely to have generated the observed flips (using the current parameter estimates). Then, assume these completions (that is, guessed coin assignments) to be correct, and apply the regular maximum likelihood estimation procedure to get θ ˆ(t+1). Finally, repeat these two steps until convergence. As the estimated model improves, so too will the quality of the resulting completions. The expectation maximization algorithm is a refinement on this basic idea. Rather than picking the single most likely completion of the missing coin assignments on each iteration, the expectation maximization algorithm computes probabilities for each possible completion of the missing data, using the current parameters θ ˆ(t). These probabilities are used to create a weighted training set consisting of all possible completions of the data. Finally, a modified version of maximum likelihood estimation that deals with weighted training examples provides new parameter estimates, θ ˆ(t+1). By using weighted training examples rather than choosing the single best completion, the expectation maximization algorithm accounts for the confidence of the model in each completion of the data (Fig. 1b). In summary, the expectation maximiza-tion algorithm alternates between the steps z = (z 1 , z 2 ,…, z 5), where x i ∈ {0,1,…,10} is the number of heads observed during the ith set of tosses, and z i ∈ {A,B} is the identity of the coin used during the ith set of tosses. Parameter estimation in this setting is known as the complete data case in that the values of all relevant random variables in our model (that is, the result of each coin flip and the type of coin used for each flip) are known. Here, a simple way to estimate θ A and θ B is to return the observed proportions of heads for each coin: (1) θ Α ˆ = # of heads using …",
"title": ""
},
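The two-coin procedure described above fits in a few lines of Python. The head counts, the initial guesses, and the iteration count below are illustrative (they follow the style of the example rather than any stated data), but the E-step/M-step structure is exactly the weighted-completion scheme the passage describes.

from math import comb

heads = [5, 9, 8, 4, 7]          # heads observed in five sets of 10 tosses (illustrative)
n = 10
theta_A, theta_B = 0.6, 0.5      # initial parameter guesses

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p) ** (n - k)

for _ in range(20):
    # E-step: probability that coin A generated each set, under the current parameters
    w_A = []
    for k in heads:
        like_A, like_B = binom_pmf(k, n, theta_A), binom_pmf(k, n, theta_B)
        w_A.append(like_A / (like_A + like_B))
    # M-step: weighted maximum-likelihood estimates (weighted proportions of heads)
    theta_A = sum(w * k for w, k in zip(w_A, heads)) / sum(w * n for w in w_A)
    theta_B = sum((1 - w) * k for w, k in zip(w_A, heads)) / sum((1 - w) * n for w in w_A)

print(round(theta_A, 2), round(theta_B, 2))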
{
"docid": "88c5a383133b28b186bd493b82c895e0",
"text": "Stock forecasting involves complex interactions between market-influencing factors and unknown random processes. In this study, an integrated system, CBDWNN by combining dynamic time windows, case based reasoning (CBR), and neural network for stock trading prediction is developed and it includes three different stages: (1) screening out potential stocks and the important influential factors; (2) using back propagation network (BPN) to predict the buy/sell points (wave peak and wave trough) of stock price and (3) adopting case based dynamic window (CBDW) to further improve the forecasting results from BPN. The system developed in this research is a first attempt in the literature to predict the sell/buy decision points instead of stock price itself. The empirical results show that the CBDW can assist the BPN to reduce the false alarm of buying or selling decisions. Nine different stocks with different trends, i.e., upward, downward and steady, are studied and one individual stock (AUO) will be studied as case example. The rates of return for upward, steady, and downward trend stocks are higher than 93.57%, 37.75%, and 46.62%, respectively. These results are all very promising and better than using CBR or BPN alone. 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "f128c1903831e9310d0ed179838d11d1",
"text": "A partially corporate feeding waveguide located below the radiating waveguide is introduced to a waveguide slot array to enhance the bandwidth of gain. A PMC termination associated with the symmetry of the feeding waveguide as well as uniform excitation is newly proposed for realizing dense and uniform slot arrangement free of high sidelobes. To exploit the bandwidth of the feeding circuit, the 4 × 4-element subarray is also developed for wider bandwidth by using standing-wave excitation. A 16 × 16-element array with uniform excitation is fabricated in the E-band by diffusion bonding of laminated thin copper plates which has the advantages of high precision and high mass-productivity. The antenna gain of 32.4 dBi and the antenna efficiency of 83.0% are measured at the center frequency. The 1 dB-down gain bandwidth is no less than 9.0% and a wideband characteristic is achieved.",
"title": ""
},
{
"docid": "2a057079c544b97dded598b6f0d750ed",
"text": "Introduction Sometimes it is not enough for a DNN to produce an outcome. For example, in applications such as healthcare, users need to understand the rationale of the decisions. Therefore, it is imperative to develop algorithms to learn models with good interpretability (Doshi-Velez 2017). An important factor that leads to the lack of interpretability of DNNs is the ambiguity of neurons, where a neuron may fire for various unrelated concepts. This work aims to increase the interpretability of DNNs on the whole image space by reducing the ambiguity of neurons. In this paper, we make the following contributions:",
"title": ""
},
{
"docid": "0df2ca944dcdf79369ef5a7424bf3ffe",
"text": "This article first presents two theories representing distinct approaches to the field of stress research: Selye's theory of `systemic stress' based in physiology and psychobiology, and the `psychological stress' model developed by Lazarus. In the second part, the concept of coping is described. Coping theories may be classified according to two independent parameters: traitoriented versus state-oriented, and microanalytic versus macroanalytic approaches. The multitude of theoretical conceptions is based on the macroanalytic, trait-oriented approach. Examples of this approach that are presented in this article are `repression–sensitization,' `monitoringblunting,' and the `model of coping modes.' The article closes with a brief outline of future perspectives in stress and coping research.",
"title": ""
},
{
"docid": "cac3a510f876ed255ff87f2c0db2ed8e",
"text": "The resurgence of cancer immunotherapy stems from an improved understanding of the tumor microenvironment. The PD-1/PD-L1 axis is of particular interest, in light of promising data demonstrating a restoration of host immunity against tumors, with the prospect of durable remissions. Indeed, remarkable clinical responses have been seen in several different malignancies including, but not limited to, melanoma, lung, kidney, and bladder cancers. Even so, determining which patients derive benefit from PD-1/PD-L1-directed immunotherapy remains an important clinical question, particularly in light of the autoimmune toxicity of these agents. The use of PD-L1 (B7-H1) immunohistochemistry (IHC) as a predictive biomarker is confounded by multiple unresolved issues: variable detection antibodies, differing IHC cutoffs, tissue preparation, processing variability, primary versus metastatic biopsies, oncogenic versus induced PD-L1 expression, and staining of tumor versus immune cells. Emerging data suggest that patients whose tumors overexpress PD-L1 by IHC have improved clinical outcomes with anti-PD-1-directed therapy, but the presence of robust responses in some patients with low levels of expression of these markers complicates the issue of PD-L1 as an exclusionary predictive biomarker. An improved understanding of the host immune system and tumor microenvironment will better elucidate which patients derive benefit from these promising agents.",
"title": ""
},
{
"docid": "626c274978a575cd06831370a6590722",
"text": "The honeypot has emerged as an effective tool to provide insights into new attacks and exploitation trends. However, a single honeypot or multiple independently operated honeypots only provide limited local views of network attacks. Coordinated deployment of honeypots in different network domains not only provides broader views, but also create opportunities of early network anomaly detection, attack correlation, and global network status inference. Unfortunately, coordinated honeypot operation require close collaboration and uniform security expertise across participating network domains. The conflict between distributed presence and uniform management poses a major challenge in honeypot deployment and operation. To address this challenge, we present Collapsar, a virtual machine-based architecture for network attack capture and detention. A Collapsar center hosts and manages a large number of high-interaction virtual honeypots in a local dedicated network. To attackers, these honeypots appear as real systems in their respective production networks. Decentralized logical presence of honeypots provides a wide diverse view of network attacks, while the centralized operation enables dedicated administration and convenient event correlation, eliminating the need for honeypot expertise in every production network domain. Collapsar realizes the traditional honeyfarm vision as well as our new reverse honeyfarm vision, where honeypots act as vulnerable clients exploited by real-world malicious servers. We present the design, implementation, and evaluation of a Collapsar prototype. Our experiments with a number of real-world attacks demonstrate the effectiveness and practicality of Collapsar. © 2006 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "ed14f03b87e7b1fa2b7d08f586631c45",
"text": "Patents are widely regarded as a proxy for inventive output which is valuable and can be commercialized by various means. Individual patent information such as technology field, classification, claims, application jurisdictions are increasingly available as released by different venues. This work has relied on a long-standing hypothesis that the citation received by a patent is a proxy for knowledge flows or impacts of the patent thus is directly related to patent value. This paper does not fall into the line of intensive existing work that test or apply this hypothesis, rather we aim to address the limitation of using so-far received citations for patent valuation. By devising a point process based patent citation type aware (self-citation and non-self-citation) prediction model which incorporates the various information of a patent, we open up the possibility for performing predictive patent valuation which can be especially useful for newly granted patents with emerging technology. Study on real-world data corroborates the efficacy of our approach. Our initiative may also have policy implications for technology markets, patent systems and all other stakeholders. The code and curated data will be available to the research community.",
"title": ""
},
{
"docid": "b4958ecd58d42437cddda89623f55c1f",
"text": "The assumption that acquired characteristics are not inherited is often taken to imply that the adaptations that an organism learns during its lifetime cannot guide the course of evolution. This inference is incorrect (Baldwin, 1896). Learning alters the shape of the search space in which evolution operates and thereby provides good evolutionary paths towards sets of co-adapted alleles. We demonstrate that this effect allows learning organisms to evolve much faster than their nonlearning equivalents, even though the characteristics acquired by the phenotype are not communicated to the genotype.",
"title": ""
},
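A rough sketch in the spirit of the simulation behind this argument (the Hinton and Nowlan setup) is given below: each gene is hard-wired to 0 or 1 or left plastic ('?'), learning is random guessing of the plastic genes, and fitness rewards genotypes that can find the single all-ones target quickly. The population size, gene count, and trial budget here are illustrative choices, not necessarily the original paper's exact settings.

import random

GENES, POP, TRIALS = 20, 200, 1000
# the allele None plays the role of the plastic '?' gene that learning can set

def fitness(genome):
    if any(g == 0 for g in genome):            # a wrong hard-wired gene can never be fixed by learning
        return 1.0
    p_hit = 0.5 ** genome.count(None)          # chance that one random learning trial guesses all '?'s right
    for t in range(TRIALS):
        if random.random() < p_hit:
            return 1.0 + 19.0 * (TRIALS - t) / TRIALS   # earlier success -> higher fitness
    return 1.0

def evolve(generations=50):
    pop = [[random.choice([0, 1, None, None]) for _ in range(GENES)] for _ in range(POP)]
    for _ in range(generations):
        weights = [fitness(ind) for ind in pop]
        new_pop = []
        for _ in range(POP):
            a, b = random.choices(pop, weights=weights, k=2)   # fitness-proportional parents
            cut = random.randrange(1, GENES)
            new_pop.append(a[:cut] + b[cut:])                  # single-point crossover
        pop = new_pop
    return pop

pop = evolve()
hard_wired = sum(ind.count(1) for ind in pop) / (POP * GENES)
plastic = sum(ind.count(None) for ind in pop) / (POP * GENES)
print(f"correct hard-wired alleles: {hard_wired:.2f}, still-plastic alleles: {plastic:.2f}")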
{
"docid": "4f296caa2ee4621a8e0858bfba701a3b",
"text": "This paper considers the problem of assessing visual aesthetic quality with semantic information. We cast the assessment problem as the main task among a multi-task deep model, and argue that semantic recognition offers the key to addressing this problem. Based on convolutional neural networks, we propose a general multi-task framework with four different structures. In each structure, aesthetic quality assessment task and semantic recognition task are leveraged, and different features are explored to improve the quality assessment. Moreover, an effective strategy of keeping a balanced effect between the semantic task and aesthetic task is developed to optimize the parameters of our framework. The correlation analysis among the tasks validates the importance of the semantic recognition in aesthetic quality assessment. Extensive experiments verify the effectiveness of the proposed multi-task framework, and further corroborate the",
"title": ""
},
{
"docid": "f9bd24894ed3eace01f51966c61f2a5d",
"text": "Ethanolic extract from the fruits of Pimpinella anisoides, an aromatic plant and a spice, exhibited activity against AChE and BChE, with IC(50) values of 227.5 and 362.1 microg/ml, respectively. The most abundant constituents of the extract were trans-anethole, (+)-limonene and (+)-sabinene. trans-Anethole exhibited the highest activity against AChE and BChE with IC(50) values of 134.7 and 209.6 microg/ml, respectively. The bicyclic monoterpene (+)-sabinene exhibited a promising activity against AChE (IC(50) of 176.5 microg/ml) and BChE (IC(50) of 218.6 microg/ml).",
"title": ""
},
{
"docid": "c27b61685ae43c7cd1b60ca33ab209df",
"text": "The establishment of damper settings that provide an optimal compromise between wobble- and weave-mode damping is discussed. The conventional steering damper is replaced with a network of interconnected mechanical components comprised of springs, dampers and inerters - that retain the virtue of the damper, while improving the weave-mode performance. The improved performance is due to the fact that the network introduces phase compensation between the relative angular velocity of the steering system and the resulting steering technique",
"title": ""
},
{
"docid": "2572c12521f0cf6834b5ca64f427fc7a",
"text": "Although the tarsometatarsal joints are separated into three distinct synovial compartments, communications between adjacent compartments are often noted during image-guided injections. This study aims to determine whether abnormal inter-compartment tarsometatarsal joint communication is associated with patient age or degree of tarsometatarsal osteoarthritis. One hundred forty tarsometatarsal injections were retrospectively reviewed by two radiologists. Extent of inter-compartment communication and degree of osteoarthritis were independently scored. Univariate and multivariable analyses were performed to assess whether the presence of and number of abnormal joint communications were related to age and degree of osteoarthritis. Forty out of 140 tarsometatarsal joints showed abnormal communication with a separate synovial compartment, and 3 of the 40 showed abnormal communication with two separate compartments. On univariate analysis, higher grade osteoarthritis (p < 0.001) and older age (p = 0.014) were associated with an increased likelihood of abnormal inter-compartment tarsometatarsal communication and a greater number of these abnormal communications. On multivariate analysis, the degree of osteoarthritis remained a significant predictor of the presence of (p < 0.001) and number of (p < 0.001) abnormal communications, while the association of age was not statistically significant. There was significant correlation between age and degree of osteoarthritis (p < 0.001). Higher grade osteoarthritis increases the likelihood of abnormal inter-compartment tarsometatarsal joint communication and is associated with a greater number of abnormal communications. Diagnostic injection to localize a symptomatic tarsometatarsal joint may be less reliable in the setting of advanced osteoarthritis.",
"title": ""
},
{
"docid": "da906b692787c40778edc44d310ef527",
"text": "From the beginning, a primary goal of the Cyc project has been to build a large knowledge base containing a store of formalized background knowledge suitable for supporting reasoning in a variety of domains. In this paper, we will discuss the portion of Cyc technology that has been released in open source form as OpenCyc, provide examples of the content available in ResearchCyc, and discuss their utility for the future development of fully formalized knowledge bases.",
"title": ""
},
{
"docid": "7f067f869481f06e865880e1d529adc8",
"text": "Distributed Denial of Service (DDoS) is defined as an attack in which mutiple compromised systems are made to attack a single target to make the services unavailable foe legitimate users.It is an attack designed to render a computer or network incapable of providing normal services. DDoS attack uses many compromised intermediate systems, known as botnets which are remotely controlled by an attacker to launch these attacks. DDOS attack basically results in the situation where an entity cannot perform an action for which it is authenticated. This usually means that a legitimate node on the network is unable to reach another node or their performance is degraded. The high interruption and severance caused by DDoS is really posing an immense threat to entire internet world today. Any compromiseto computing, communication and server resources such as sockets, CPU, memory, disk/database bandwidth, I/O bandwidth, router processing etc. for collaborative environment would surely endanger the entire application. It becomes necessary for researchers and developers to understand behaviour of DDoSattack because it affects the target network with little or no advance warning. Hence developing advanced intrusion detection and prevention systems for preventing, detecting, and responding to DDOS attack is a critical need for cyber space. Our rigorous survey study presented in this paper describes a platform for the study of evolution of DDoS attacks and their defense mechanisms.",
"title": ""
},
{
"docid": "7c75c77802045cfd8d89c73ca8a68ce6",
"text": "The results of the 2016 Brexit referendum in the U.K. and presidential election in the U.S. surprised pollsters and traditional media alike, and social media is now being blamed in part for creating echo chambers that encouraged the spread of fake news that influenced voters.",
"title": ""
}
] |
scidocsrr
|
339b23c1bfbda1eef4e5bbee349fcf4c
|
Miniaturized 3-bit Phase Shifter for 60 GHz Phased-Array in 65 nm CMOS Technology
|
[
{
"docid": "dd1feb262901990e8fe2af1fe5149b04",
"text": "The design and measurement of a compact, wide-band reflective-type phase shifter in 90 nm CMOS technology in V-band frequency is presented. This phase shifter has a fractional bandwidth of 26% and an average insertion loss of 6 dB over all phase states. The chip area is only 0.08 mm 2. Measurement results show that the developed phase shifter provides 90° continuous phase shift over the frequency range of 50-65 GHz. The measured return loss is greater than 12 dB at 50 GHz. The output power is linear up to at least 4 dBm input power.",
"title": ""
},
{
"docid": "dd732081865bb209276acd3bb76ee08f",
"text": "A 57-64-GHz low phase-error 5-bit switch-type phase shifter integrated with a low phase-variation variable gain amplifier (VGA) is implemented through TSMC 90-nm CMOS low-power technology. Using the phase compensation technique, the proposed VGA can provide appropriate gain tuning with almost constant phase characteristics, thus greatly reducing the phase-tuning complexity in a phased-array system. The measured root mean square (rms) phase error of the 5-bit phase shifter is 2° at 62 GHz. The phase shifter has a low group-delay deviation (phase distortion) of +/- 8.5 ps and an excellent insertion loss flatness of ±0.8 dB for a specific phase-shifting state, across 57-64 GHz. For all 32 states, the insertion loss is 14.6 ± 3 dB, including pad loss at 60 GHz. For the integrated phase shifter and VGA, the VGA can provide 6.2-dB gain tuning range, which is wide enough to cover the loss variation of the phase shifter, with only 1.86° phase variation. The measured rms phase error of the 5-bit phase shifter and VGA is 3.8° at 63 GHz. The insertion loss of all 32 states is 5.4 dB, including pad loss at 60 GHz, and the loss flatness is ±0.8 dB over 57-64 GHz. To the best of our knowledge, the 5-bit phase shifter presents the best rms phase error at center frequency among the V-band switch-type phase shifter.",
"title": ""
}
] |
[
{
"docid": "881d38d8f7ca47ca2f478c1dc1870c7f",
"text": "What are the limits of automated Twitter sentiment classification? We analyze a large set of manually labeled tweets in different languages, use them as training data, and construct automated classification models. It turns out that the quality of classification models depends much more on the quality and size of training data than on the type of the model trained. Experimental results indicate that there is no statistically significant difference between the performance of the top classification models. We quantify the quality of training data by applying various annotator agreement measures, and identify the weakest points of different datasets. We show that the model performance approaches the inter-annotator agreement when the size of the training set is sufficiently large. However, it is crucial to regularly monitor the self- and inter-annotator agreements since this improves the training datasets and consequently the model performance. Finally, we show that there is strong evidence that humans perceive the sentiment classes (negative, neutral, and positive) as ordered.",
"title": ""
},
{
"docid": "0f285bcef022d0c260b97b14be2a4af3",
"text": "These are expended notes of my talk at the summer institute in algebraic geometry (Seattle, July-August 2005), whose main purpose is to present a global overview on the theory of higher and derived stacks. This text is far from being exhaustive but is intended to cover a rather large part of the subject, starting from the motivations and the foundational material, passing through some examples and basic notions, and ending with some more recent developments and open questions.",
"title": ""
},
{
"docid": "47063493a3ae85f68b19314a1eed7388",
"text": "Several computational approaches have been proposed for inferring the affective state of the user, motivated for example by the goal of building improved interfaces that can adapt to the user’s needs and internal state. While fairly good results have been obtained for inferring the user state under highly controlled conditions, a considerable amount of work remains to be done for learning high-quality estimates of subjective evaluations of the state in more natural conditions. In this work, we discuss how two recent machine learning concepts, multi-view learning and multi-task learning, can be adapted for user state recognition, and demonstrate them on two data collections of varying quality. Multi-view learning enables combining multiple measurement sensors in a justified way while automatically learning the importance of each sensor. Multi-task learning, in turn, tells how multiple learning tasks can be learned together to improve the accuracy. We demonstrate the use of two types of multi-task learning: learning both multiple state indicators and models for multiple users together. We also illustrate how the benefits of multi-task learning and multi-view learning can be effectively combined in a unified model by introducing a novel algorithm.",
"title": ""
},
{
"docid": "e4890b63e9a51029484354535765801c",
"text": "Many different machine learning algorithms exist; taking into account each algorithm's hyperparameters, there is a staggeringly large number of possible alternatives overall. We consider the problem of simultaneously selecting a learning algorithm and setting its hyperparameters, going beyond previous work that attacks these issues separately. We show that this problem can be addressed by a fully automated approach, leveraging recent innovations in Bayesian optimization. Specifically, we consider a wide range of feature selection techniques (combining 3 search and 8 evaluator methods) and all classification approaches implemented in WEKA's standard distribution, spanning 2 ensemble methods, 10 meta-methods, 27 base classifiers, and hyperparameter settings for each classifier. On each of 21 popular datasets from the UCI repository, the KDD Cup 09, variants of the MNIST dataset and CIFAR-10, we show classification performance often much better than using standard selection and hyperparameter optimization methods. We hope that our approach will help non-expert users to more effectively identify machine learning algorithms and hyperparameter settings appropriate to their applications, and hence to achieve improved performance.",
"title": ""
},
{
"docid": "da377c870079c7a956a52c1cdc375555",
"text": "We introduce a model of coherence which captures the intentional discourse structure in text. Our work is based on the hypothesis that syntax provides a proxy for the communicative goal of a sentence and therefore the sequence of sentences in a coherent discourse should exhibit detectable structural patterns. Results show that our method has high discriminating power for separating out coherent and incoherent news articles reaching accuracies of up to 90%. We also show that our syntactic patterns are correlated with manual annotations of intentional structure for academic conference articles and can successfully predict the coherence of abstract, introduction and related work sections of these articles.",
"title": ""
},
{
"docid": "57897a9c927743037dab98a1538a1563",
"text": "Affective lexicons are a useful tool for emotion studies as well as for opinion mining and sentiment analysis. Such lexicons contain lists of words annotated with their emotional assessments. There exist a number of affective lexicons for English, Spanish, German and other languages. However, only a few of such resources are available for French. A lot of human efforts are needed to build and extend an affective lexicon. In our research, we propose to use Twitter, the most popular microblogging platform nowadays, to collect a dataset of emotional texts in French. Using the collected dataset, we estimated affective norms of words to construct an affective lexicon, which we use for polarity classification of video game reviews. Experimental results show that our method performs comparably to classic supervised learning methods.",
"title": ""
},
{
"docid": "42db53797dc57cfdb7f963c55bb7f039",
"text": "Vast amounts of artistic data is scattered on-line from both museums and art applications. Collecting, processing and studying it with respect to all accompanying attributes is an expensive process. With a motivation to speed up and improve the quality of categorical analysis in the artistic domain, in this paper we propose an efficient and accurate method for multi-task learning with a shared representation applied in the artistic domain. We continue to show how different multi-task configurations of our method behave on artistic data and outperform handcrafted feature approaches as well as convolutional neural networks. In addition to the method and analysis, we propose a challenge like nature to the new aggregated data set with almost half a million samples and structuredmeta-data to encourage further research and societal engagement. ACM Reference format: Gjorgji Strezoski and Marcel Worring. 2017. OmniArt: Multi-task Deep Learning for Artistic Data Analysis.",
"title": ""
},
{
"docid": "97dfc2b23b527a05f7de443f10a89543",
"text": "Over-the-top mobile video streaming is invariably influenced by volatile network conditions which cause playback interruptions (stalling events), thereby impairing users’ quality of experience (QoE). Developing models that can accurately predict users’ QoE could enable the more efficient design of quality-control protocols for video streaming networks that reduce network operational costs while still delivering high-quality video content to the customers. Existing objective models that predict QoE are based on global video features, such as the number of stall events and their lengths, and are trained and validated on a small pool of ad hoc video datasets, most of which are not publicly available. The model we propose in this work goes beyond previous models as it also accounts for the fundamental effect that a viewer’s recent level of satisfaction or dissatisfaction has on their overall viewing experience. In other words, the proposed model accounts for and adapts to the recency, or hysteresis effect caused by a stall event in addition to accounting for the lengths, frequency of occurrence, and the positions of stall events factors that interact in a complex way to affect a user’s QoE. On the recently introduced LIVE-Avvasi Mobile Video Database, which consists of 180 distorted videos of varied content that are afflicted solely with over 25 unique realistic stalling events, we trained and validated our model to accurately predict the QoE, attaining standout QoE prediction performance.",
"title": ""
},
{
"docid": "fe16f2d946b3ea7bc1169d5667365dbe",
"text": "This study assessed embodied simulation via electromyography (EMG) as participants first encoded emotionally ambiguous faces with emotion concepts (i.e., \"angry,\"\"happy\") and later passively viewed the faces without the concepts. Memory for the faces was also measured. At initial encoding, participants displayed more smiling-related EMG activity in response to faces paired with \"happy\" than in response to faces paired with \"angry.\" Later, in the absence of concepts, participants remembered happiness-encoded faces as happier than anger-encoded faces. Further, during passive reexposure to the ambiguous faces, participants' EMG indicated spontaneous emotion-specific mimicry, which in turn predicted memory bias. No specific EMG activity was observed when participants encoded or viewed faces with non-emotion-related valenced concepts, or when participants encoded or viewed Chinese ideographs. From an embodiment perspective, emotion simulation is a measure of what is currently perceived. Thus, these findings provide evidence of genuine concept-driven changes in emotion perception. More generally, the findings highlight embodiment's role in the representation and processing of emotional information.",
"title": ""
},
{
"docid": "1ba834bbb2e5b8251d1637711bccecf3",
"text": "Measurements of underwater electric potential (UEP) have several possible applications, including geophysical research, gas and oil exploration, and finding underwater cables and unexploded ordnance. In the present investigation, use of an autonomous underwater vehicle (AUV) to perform measurements of underwater electric potential was explored. An AUV was equipped with a spherical electric field sensor containing three pairs of Ag/AgCl electrodes. A controlled electric field source was mounted on a surface boat for testing. In a shallow saltwater test facility, tests were performed in which the UEP sensor-equipped AUV and surface boat passed one another from opposite directions. UEP measurements acquired by the AUV showed that an ac electric field source at frequencies of 5, 10, and 20 Hz on the surface boat were clearly detected.",
"title": ""
},
{
"docid": "b374975ae9690f96ed750a888713dbc9",
"text": "We present a method for densely computing local spherical histograms of oriented gradients (SHOG) in volumetric images. The descriptors are based on the continuous representation of the orientation histograms in the harmonic domain, which we compute very efficiently via spherical tensor products and the fast Fourier transformation. Building upon these local spherical histogram representations, we utilize the Harmonic Filter to create a generic rotation invariant object detection system that benefits from both the highly discriminative representation of local image patches in terms of histograms of oriented gradients and an adaptable trainable voting scheme that forms the filter. We exemplarily demonstrate the effectiveness of such dense spherical 3D descriptors in a detection task on biological 3D images. In a direct comparison to existing approaches, our new filter reveals superior performance.",
"title": ""
},
{
"docid": "af8fbdfbc4c4958f69b3936ff2590767",
"text": "Analysis of sedimentary diatom assemblages (10 to 144 ka) form the basis for a detailed reconstruction of the paleohydrography and diatom paleoecology of Lake Malawi. Lake-level fluctuations on the order of hundreds of meters were inferred from dramatic changes in the fossil and sedimentary archives. Many of the fossil diatom assemblages we observed have no analog in modern Lake Malawi. Cyclotelloid diatom species are a major component of fossil assemblages prior to 35 ka, but are not found in significant abundances in the modern diatom communities in Lake Malawi. Salinityand alkalinity-tolerant plankton has not been reported in the modern lake system, but frequently dominant fossil diatom assemblages prior to 85 ka. Large stephanodiscoid species that often dominate the plankton today are rarely present in the fossil record prior to 31 ka. Similarly, prior to 31 ka, common central-basin aulacoseiroid species are replaced by species found in the shallow, well-mixed southern basin. Surprisingly, tychoplankton and periphyton were not common throughout prolonged lowstands, but tended to increase in relative abundance during periods of inferred deeper-lake environments. A high-resolution lake level reconstruction was generated by a principle component analysis of fossil diatom and wetsieved fossil and mineralogical residue records. Prior to 70 ka, fossil assemblages suggest that the central basin was periodically a much shallower, more saline and/or alkaline, well-mixed environment. The most significant reconstructed lowstands are ~ 600 m below the modern lake level and span thousands of years. These conditions contrast starkly with the deep, dilute, dysaerobic environments of the modern central basin. After 70 ka, our reconstruction indicates sustained deeper-water environments were common, marked by a few brief, but significant, lowstands. High amplitude lake-level fluctuations appear related to changes in insolation. Seismic reflection data and additional sediment cores recovered from the northern basin of Lake Malawi provide evidence that supports our reconstruction.",
"title": ""
},
{
"docid": "5adaee6e03fdd73ebed40804b9cad326",
"text": "Quantum circuits exhibit several features of large-scale distributed systems. They have a concise design formalism but behavior that is challenging to represent let alone predict. Issues of scalability—both in the yet-to-be-engineered quantum hardware and in classical simulators—are paramount. They require sparse representations for efficient modeling. Whereas simulators represent both the system’s current state and its operations directly, emulators manipulate the images of system states under a mapping to a different formalism. We describe three such formalisms for quantum circuits. The first two extend the polynomial construction of Dawson et al. [1] to (i) work for any set of quantum gates obeying a certain “balance” condition and (ii) produce a single polynomial over any sufficiently structured field or ring. The third appears novel and employs only simple Boolean formulas, optionally limited to a form we call “parity-of-AND” equations. Especially the third can combine with off-the-shelf state-of-the-art third-party software, namely model counters and #SAT solvers, that we show capable of vast improvements in the emulation time in natural instances. We have programmed all three constructions to proof-of-concept level and report some preliminary tests and applications. These include algebraic analysis of special quantum circuits and the possibility of a new classical attack on the factoring problem. Preliminary comparisons are made with the libquantum simulator[2–4]. 1 A Brief But Full QC Introduction A quantum circuit is a compact representation of a computational system. It consists of some number m of qubits represented by lines resembling a musical staff, and some number s of gates arrayed like musical notes and chords. Here is an example created using the popular visual simulator [5]: Fig. 1. A five-qubit quantum circuit that computes a Fourier transform on the first four qubits. The circuit C operates on m = 5 qubits. The input is the binary string x = 10010. The first n = 4 qubits see most of the action and hold the nominal input x0 = 1001 of length n = 4, while the fifth qubit is an ancilla initialized to 0 whose purpose here is to hold the nominal output bit. The circuit has thirteen gates. Six of them have a single control represented by a black dot; they activate if and only if the control receives a 1 signal. The last gate has two controls and a target represented by the parity symbol ⊕ rather than a labeled box. Called a Toffoli gate, it will set the output bit if and only if both controls receive a 1 signal. The two gates before it merely swap the qubits 2 and 3 and 1 and 4, respectively. They have no effect on the output and are included here only to say that the first twelve gates combine to compute the quantum Fourier transform QFT4. This is just the ordinary discrete Fourier transform F16 on 2 4 = 16 coordinates. The actual output C(x) of the circuit is a quantum state Z that belongs to the complex vector space C. Nine of its entries in the standard basis are shown in Figure 1; seven more were cropped from the screenshot. Sixteen of the components are absent, meaning Z has 0 in the corresponding coordinates. Despite the diversity of the nine complex entries ZL shown, each has magnitude |ZL| = 0.0625. In general, |ZL| represents the probability that a measurement—of all qubits—will yield the binary string z ∈ { 0, 1 } corresponding to the coordinate L under the standard ordered enumeration of { 0, 1 }. 
Here we are interested in those z whose final entry z5 is a 1. Two of them are shown; two others (11101 and 11111) are possible and also have probability 1 16 each, making a total of 1 4 probability for getting z5 = 1. Owing to the “cylindrical” nature of the set B of strings ending in 1, a measurement of just the fifth qubit yields 1 with probability 1 4 . Where does the probability come from? The physical answer is that it is an indelible aspect of nature as expressed by quantum mechanics. For our purposes the computational answer is that it comes from the four gates labeled H, for Hadamard gate. Each supplies one bit of nondeterminism, giving four bits in all, which govern the sixteen possible outcomes of this particular example. It is a mistake to think that the probabilities must be equally spread out and must be multiples of 1/2 where h is the number of Hadamard gates. Appending just one more Hadamard gate at the right end of the third qubit line creates nonzero probabilities as low as 0.0183058 . . . and as high as 0.106694 . . . , each appearing for four outcomes of 24 nonzero possibilities. This happens because the component values follow wave equations that can amplify some values while reducing or zeroing the amplitude of others via interference. Indeed, the goal of quantum computing is to marshal most of the amplitude onto a small set of desired outcomes, so that measurements— that is to say, quantum sampling—will reveal one of them with high probability. All of this indicates the burgeoning complexity of quantum systems. Our original circuit has 5 qubits, 4 nondeterministic gates, and 9 other gates, yet there are 2 = 32 components of the vectors representing states, 32 basic inputs and outputs, and 2 = 16 branchings to consider. Adding the fifth Hadamard gate creates a new fork in every path through the system, giving 32 branchings. The whole circuit C defines a 32× 32 matrix UC in which the I-th row encodes the quantum state ΦI resulting from computation on the standard basis vector x = eI . The matrix is unitary, meaning that UC multiplied by its conjugate transpose U∗ C gives the 32× 32 identity matrix. Indeed, UC is the product of thirteen simpler matrices U` representing the respective gates (` = 1, . . . , s with s = 13). Here each gate engages only a subset of the qubits of arity r < m, so that U` decomposes into its 2 r × 2 unitary gate matrix and the identity action (represented by the 2× 2 identity matrix I) on the other m− r lines. Here are some single-qubit gate matrices: H = 1 √ 2 [ 1 1 1 −1 ]",
"title": ""
},
{
"docid": "6e975b66aacef1c6b3952295ef0bf880",
"text": "In the past few years, a wide variety of highly capable and inexpensive wearable health sensors have emerged. One of the interesting aspects of such sensors is the capability for researchers to longitudinally and automatically quantify important health behaviors, such as physical activity and sleep, with little intervention required by the participant. While the accuracy of these devices has been evaluated in laboratory settings, there exists little public data with respect to user compliance and the consistency of the resulting measurements at a large scale. The focus of this paper is to share our experience in distributing five hundred Fitbit Charge HR devices across a group of college freshmen and to introduce the resulting dataset from our study, the NetHealth Study. We find that when users are compliant, they tend to be exceptionally so, having an average compliance of 86%. User non-compliance does play a role, however, reducing the overall average compliance rate to 67%. We discuss various reasons for non-compliance and also briefly highlight preliminary monitored characteristics of physical activity and sleep in our student population.",
"title": ""
},
{
"docid": "f6d82255e8eb4390719440819b8ad50a",
"text": "Anchor-based deep methods are the most widely used methods for face detection and have reached the state-of-the-art result. Compared with anchor-based methods that estimates the bounding-box rely on some pre-defined anchor boxes, anchor-free methods perform the localization by predicting the offsets of a pixel inside a face to its outside boundaries whose accuracies are much more precise. However, anchor-free methods suffer the drawback of low recall-rate mainly because 1) only using single scale features lead to miss detection of small faces, 2) the highly intra-class imbalance problem among different size faces. In this paper, to address these problems, we propose a unified anchor-free network for detecting multi-scale faces by leveraging the local and global contextual information of multi-layer features. We also utilize a scale aware sampling strategy to mitigate the intra-class imbalance issue which can adaptivity select the positive samples. Furthermore, a revised focal loss function is adopted to deal with the foreground/background imbalance issue. Experimental results on two benchmark datasets demonstrate the effective of our proposed method.",
"title": ""
},
{
"docid": "0fb45311d5e6a7348917eaa12ffeab46",
"text": "Question Answering is a task which requires building models capable of providing answers to questions expressed in human language. Full question answering involves some form of reasoning ability. We introduce a neural network architecture for this task, which is a form of Memory Network, that recognizes entities and their relations to answers through a focus attention mechanism. Our model is named Question Dependent Recurrent Entity Network and extends Recurrent Entity Network by exploiting aspects of the question during the memorization process. We validate the model on both synthetic and real datasets: the bAbI question answering dataset and the CNN & Daily News reading comprehension dataset. In our experiments, the models achieved a State-ofThe-Art in the former and competitive results in the latter.",
"title": ""
},
{
"docid": "9b942a1342eb3c4fd2b528601fa42522",
"text": "Peer and self-assessment offer an opportunity to scale both assessment and learning to global classrooms. This article reports our experiences with two iterations of the first large online class to use peer and self-assessment. In this class, peer grades correlated highly with staff-assigned grades. The second iteration had 42.9% of students’ grades within 5% of the staff grade, and 65.5% within 10%. On average, students assessed their work 7% higher than staff did. Students also rated peers’ work from their own country 3.6% higher than those from elsewhere. We performed three experiments to improve grading accuracy. We found that giving students feedback about their grading bias increased subsequent accuracy. We introduce short, customizable feedback snippets that cover common issues with assignments, providing students more qualitative peer feedback. Finally, we introduce a data-driven approach that highlights high-variance items for improvement. We find that rubrics that use a parallel sentence structure, unambiguous wording, and well-specified dimensions have lower variance. After revising rubrics, median grading error decreased from 12.4% to 9.9%.",
"title": ""
},
{
"docid": "c79c4bdf28ca638161cb82ac9991d5e9",
"text": "This letter proposes a novel wideband circularly polarized magnetoelectric dipole antenna. In the proposed antenna, a pair of rotationally symmetric horizontal patches functions as an electric dipole, and two vertical patches with the ground act as an equivalent magnetic dipole. A Γ-shaped probe is used to excite the antenna, and a metallic cavity with two gaps is designed for wideband and good performance in radiation. A prototype was fabricated and measured. The experimental results show that the proposed antenna has an impedance bandwidth of 65% for SWR≤2 from 1.76 to 3.46 GHz, a 3-dB axial-ratio bandwidth of 71.5% from 1.68 to 3.55 GHz, and a stable gain of 8 ± 1 dBi. Good unidirectional radiation characteristic and low back-lobe level are achieved over the whole operating frequency band.",
"title": ""
},
{
"docid": "b6818031020a04a5b9385603f38da147",
"text": "LINGUISTIC HEDGING IN FINANCIAL DOCUMENTS by CAITLIN CASSIDY (Under the Direction of Frederick W. Maier) ABSTRACT Each year, publicly incorporated companies are required to file a Form 10-K with the United States Securities and Exchange Commission. These documents contain an enormous amount of natural language data and may offer insight into financial performance prediction. This thesis attempts to analyze two dimensions of language held within this data: sentiment and linguistic hedging. An experiment was conducted with 325 human annotators to manually score a subset of the sentiment words contained in a corpus of 106 10-K filings, and an inference engine identified instances of hedges having governance over these words in a dependency tree. Finally, this work proposes an algorithm for the automatic classification of sentences in the financial domain as speculative or non-speculative using the previously defined hedge cues.",
"title": ""
},
{
"docid": "4c0427bd87ef200484f0a510e8acb0de",
"text": "Recent deep learning (DL) models are moving more and more to dynamic neural network (NN) architectures, where the NN structure changes for every data sample. However, existing DL programming models are inefficient in handling dynamic network architectures because of: (1) substantial overhead caused by repeating dataflow graph construction and processing every example; (2) difficulties in batched execution of multiple samples; (3) inability to incorporate graph optimization techniques such as those used in static graphs. In this paper, we present “Cavs”, a runtime system that overcomes these bottlenecks and achieves efficient training and inference of dynamic NNs. Cavs represents a dynamic NN as a static vertex function F and a dynamic instance-specific graph G. It avoids the overhead of repeated graph construction by only declaring and constructing F once, and allows for the use of static graph optimization techniques on pre-defined operations in F . Cavs performs training and inference by scheduling the execution of F following the dependencies in G, hence naturally exposing batched execution opportunities over different samples. Experiments comparing Cavs to state-of-the-art frameworks for dynamic NNs (TensorFlow Fold, PyTorch and DyNet) demonstrate the efficacy of our approach: Cavs achieves a near one order of magnitude speedup on training of dynamic NN architectures, and ablations verify the effectiveness of our proposed design and optimizations.",
"title": ""
}
] |
scidocsrr
|
8a919d443345e198dd9c43fcac05a358
|
A lightweight anomaly detection framework for medical wireless sensor networks
|
[
{
"docid": "4b54527aa8554eae373e4b19e6774467",
"text": "In this paper, we proposed an integrated biometric-based security framework for wireless body area networks, which takes advantage of biometric features shared by body sensors deployed at different positions of a person's body. The data communications among these sensors are secured via the proposed authentication and selective encryption schemes that only require low computational power and less resources (e.g., battery and bandwidth). Specifically, a wavelet-domain Hidden Markov Model (HMM) classification is utilized by considering the non-Gaussian statistics of ECG signals for accurate authentication. In addition, the biometric information such as ECG parameters is selected as the biometric key for the encryption in the framework. Our experimental results demonstrated that the proposed approach can achieve more accurate authentication performance without extra requirements of key distribution and strict time synchronization.",
"title": ""
}
] |
[
{
"docid": "5090070d6d928b83bd22d380f162b0a6",
"text": "The Federal Aviation Administration (FAA) has been increasing the National Airspace System (NAS) capacity to accommodate the predicted rapid growth of air traffic. One method to increase the capacity is reducing air traffic controller workload so that they can handle more air traffic. It is crucial to measure the impact of the increasing future air traffic on controller workload. Our experimental data show a linear relationship between the number of aircraft in the en route center sector and controllers’ perceived workload. Based on the extensive range of aircraft count from 14 to 38 in the experiment, we can predict en route center controllers working as a team of Radar and Data controllers with the automation tools available in the our experiment could handle up to about 28 aircraft. This is 33% more than the 21 aircraft that en route center controllers typically handle in a busy sector.",
"title": ""
},
{
"docid": "481931c78a24020a02245075418a26c3",
"text": "Bayesian optimization has been successful at global optimization of expensiveto-evaluate multimodal objective functions. However, unlike most optimization methods, Bayesian optimization typically does not use derivative information. In this paper we show how Bayesian optimization can exploit derivative information to find good solutions with fewer objective function evaluations. In particular, we develop a novel Bayesian optimization algorithm, the derivative-enabled knowledgegradient (d-KG), which is one-step Bayes-optimal, asymptotically consistent, and provides greater one-step value of information than in the derivative-free setting. d-KG accommodates noisy and incomplete derivative information, comes in both sequential and batch forms, and can optionally reduce the computational cost of inference through automatically selected retention of a single directional derivative. We also compute the d-KG acquisition function and its gradient using a novel fast discretization-free technique. We show d-KG provides state-of-the-art performance compared to a wide range of optimization procedures with and without gradients, on benchmarks including logistic regression, deep learning, kernel learning, and k-nearest neighbors.",
"title": ""
},
{
"docid": "59084b05271efe4b22dd490958622c1e",
"text": "Millimeter-wave (mmWave) massive multiple-input multiple-output (MIMO) seamlessly integrates two wireless technologies, mmWave communications and massive MIMO, which provides spectrums with tens of GHz of total bandwidth and supports aggressive space division multiple access using large-scale arrays. Though it is a promising solution for next-generation systems, the realization of mmWave massive MIMO faces several practical challenges. In particular, implementing massive MIMO in the digital domain requires hundreds to thousands of radio frequency chains and analog-to-digital converters matching the number of antennas. Furthermore, designing these components to operate at the mmWave frequencies is challenging and costly. These motivated the recent development of the hybrid-beamforming architecture, where MIMO signal processing is divided for separate implementation in the analog and digital domains, called the analog and digital beamforming, respectively. Analog beamforming using a phase array introduces uni-modulus constraints on the beamforming coefficients. They render the conventional MIMO techniques unsuitable and call for new designs. In this paper, we present a systematic design framework for hybrid beamforming for multi-cell multiuser massive MIMO systems over mmWave channels characterized by sparse propagation paths. The framework relies on the decomposition of analog beamforming vectors and path observation vectors into Kronecker products of factors being uni-modulus vectors. Exploiting properties of Kronecker mixed products, different factors of the analog beamformer are designed for either nulling interference paths or coherently combining data paths. Furthermore, a channel estimation scheme is designed for enabling the proposed hybrid beamforming. The scheme estimates the angles-of-arrival (AoA) of data and interference paths by analog beam scanning and data-path gains by analog beam steering. The performance of the channel estimation scheme is analyzed. In particular, the AoA spectrum resulting from beam scanning, which displays the magnitude distribution of paths over the AoA range, is derived in closed form. It is shown that the inter-cell interference level diminishes inversely with the array size, the square root of pilot sequence length, and the spatial separation between paths, suggesting different ways of tackling pilot contamination.",
"title": ""
},
{
"docid": "4ce8934f295235acc2bbf03c7530842b",
"text": "— Speech recognition has found its application on various aspects of our daily lives from automatic phone answering service to dictating text and issuing voice commands to computers. In this paper, we present the historical background and technological advances in speech recognition technology over the past few decades. More importantly, we present the steps involved in the design of a speaker-independent speech recognition system. We focus mainly on the pre-processing stage that extracts salient features of a speech signal and a technique called Dynamic Time Warping commonly used to compare the feature vectors of speech signals. These techniques are applied for recognition of isolated as well as connected words spoken. We conduct experiments on MATLAB to verify these techniques. Finally, we design a simple 'Voice-to-Text' converter application using MATLAB.",
"title": ""
},
{
"docid": "c7c40106a804061b96b6243cff85d317",
"text": "In this paper, we describe a system for detecting duplicate images and videos in a large collection of multimedia data. Our system consists of three major elements: Local-Difference-Pattern (LDP) as the unified feature to describe both images and videos, Locality-Sensitive-Hashing (LSH) as the core indexing structure to assure the most frequent data access occurred in the main memory, and multi-steps verification for queries to best exclude false positives and to increase the precision. The experimental results, validated on two public datasets, demonstrate that the proposed method is robust against the common image-processing tricks used to produce duplicates. In addition, the memory requirement has been addressed in our system to handle large-scale database.",
"title": ""
},
{
"docid": "a54f912c14b44fc458ed8de9e19a5e82",
"text": "Musical training has recently gained additional interest in education as increasing neuroscientific research demonstrates its positive effects on brain development. Neuroimaging revealed plastic changes in the brains of adult musicians but it is still unclear to what extent they are the product of intensive music training rather than of other factors, such as preexisting biological markers of musicality. In this review, we synthesize a large body of studies demonstrating that benefits of musical training extend beyond the skills it directly aims to train and last well into adulthood. For example, children who undergo musical training have better verbal memory, second language pronunciation accuracy, reading ability and executive functions. Learning to play an instrument as a child may even predict academic performance and IQ in young adulthood. The degree of observed structural and functional adaptation in the brain correlates with intensity and duration of practice. Importantly, the effects on cognitive development depend on the timing of musical initiation due to sensitive periods during development, as well as on several other modulating variables. Notably, we point to motivation, reward and social context of musical education, which are important yet neglected factors affecting the long-term benefits of musical training. Further, we introduce the notion of rhythmic entrainment and suggest that it may represent a mechanism supporting learning and development of executive functions. It also hones temporal processing and orienting of attention in time that may underlie enhancements observed in reading and verbal memory. We conclude that musical training uniquely engenders near and far transfer effects, preparing a foundation for a range of skills, and thus fostering cognitive development.",
"title": ""
},
{
"docid": "db87ed8ad4e1ffa4f049945de80f957d",
"text": "The anterior cerebral artery (ACA) varies considerably and this complicates the description of the normal anatomy. The segmentation of the ACA is mostly agreed on by different authors, although the relationship of the pericallosal and callosomarginal arteries (CmA) is not agreed upon. The two basic configurations of the ACA are determined by the presence or absence of the CmA. The diameter, length and origin of the cortical branches have been measured and described by various authors and display great variability. Common anomalies of the ACA include the azygos, bihemispheric, and median anterior cerebral arteries. A pilot study was done on 19 hemispheres to assess the variation of the branches of the ACA. The most common variations included absence and duplication. The inferior internal parietal artery and the CmA were most commonly absent and the paracentral lobule artery was the most frequently duplicated (36.8%). The inferior internal parietal artery originated from the posterior cerebral artery in 40.0% and this was the most unusual origin observed. It is important to be aware of the possibility of variations since these variations can have serious clinical implications. The knowledge of these variations can be helpful to clinicians and neurosurgeons. The aim of this article is to review the anatomy and variations of the anterior cerebral artery, as described in the literature. This was also compared to the results from a pilot study.",
"title": ""
},
{
"docid": "26b77bf67e242ff3e88a6f6bf7137d3e",
"text": "In the recent years there has been growing interest in exploiting multibaseline (MB) SAR interferometry in a tomographic framework, to produce full 3D imaging e.g. of forest layers. However, Fourier-based MB SAR tomography is generally affected by unsatisfactory imaging quality due to a typically low number of baselines and their irregular distribution. In this work, we apply the more modern adaptive Capon spectral estimator to the vertical image reconstruction problem, using real airborne MB data. A first demonstration of possible imaging enhancement in real-world conditions is given. Keywordssynthetic aperture radar interferometry, electromagnetic tomography, forestry, spectral analysis.",
"title": ""
},
{
"docid": "6973231128048ac2ca5bce0121bf6d95",
"text": "PURPOSE\nThe aim of this study is to analyse the grip force distribution for different prosthetic hand designs and the human hand fulfilling a functional task.\n\n\nMETHOD\nA cylindrical object is held with a power grasp and the contact forces are measured at 20 defined positions. The distributions of contact forces in standard electric prostheses, in a experimental prosthesis with an adaptive grasp, and in human hands as a reference are analysed and compared. Additionally, the joint torques are calculated and compared.\n\n\nRESULTS\nContact forces of up to 24.7 N are applied by the middle and distal phalanges of the index finger, middle finger, and thumb of standard prosthetic hands, whereas forces of up to 3.8 N are measured for human hands. The maximum contact forces measured in a prosthetic hand with an adaptive grasp are 4.7 N. The joint torques of human hands and the adaptive prosthesis are comparable.\n\n\nCONCLUSIONS\nThe analysis of grip force distribution is proposed as an additional parameter to rate the performance of different prosthetic hand designs.",
"title": ""
},
{
"docid": "f03c4718a0d85917ea870a90c9bb05c5",
"text": "Conventional time-delay estimators exhibit dramatic performance degradations in the presence of multipath signals. This limits their application in reverberant enclosures, particularly when the signal of interest is speech and it may not possible to estimate and compensate for channel effects prior to time-delay estimation. This paper details an alternative approach which reformulates the problem as a linear regression of phase data and then estimates the time-delay through minimization of a robust statistical error measure. The technique is shown to be less susceptible to room reverberation effects. Simulations are performed across a range of source placements and room conditions to illustrate the utility of the proposed time-delay estimation method relative to conventional methods.",
"title": ""
},
{
"docid": "07457116fbecf8e5182459961b8a87d0",
"text": "Modeling temporal sequences plays a fundamental role in various modern applications and has drawn more and more attentions in the machine learning community. Among those efforts on improving the capability to represent temporal data, the Long Short-Term Memory (LSTM) has achieved great success in many areas. Although the LSTM can capture long-range dependency in the time domain, it does not explicitly model the pattern occurrences in the frequency domain that plays an important role in tracking and predicting data points over various time cycles. We propose the State-Frequency Memory (SFM), a novel recurrent architecture that allows to separate dynamic patterns across different frequency components and their impacts on modeling the temporal contexts of input sequences. By jointly decomposing memorized dynamics into statefrequency components, the SFM is able to offer a fine-grained analysis of temporal sequences by capturing the dependency of uncovered patterns in both time and frequency domains. Evaluations on several temporal modeling tasks demonstrate the SFM can yield competitive performances, in particular as compared with the state-of-the-art LSTM models.",
"title": ""
},
{
"docid": "f11ff738aaf7a528302e6ec5ed99c43c",
"text": "Vehicles equipped with GPS localizers are an important sensory device for examining people’s movements and activities. Taxis equipped with GPS localizers serve the transportation needs of a large number of people driven by diverse needs; their traces can tell us where passengers were picked up and dropped off, which route was taken, and what steps the driver took to find a new passenger. In this article, we provide an exhaustive survey of the work on mining these traces. We first provide a formalization of the data sets, along with an overview of different mechanisms for preprocessing the data. We then classify the existing work into three main categories: social dynamics, traffic dynamics and operational dynamics. Social dynamics refers to the study of the collective behaviour of a city’s population, based on their observed movements; Traffic dynamics studies the resulting flow of the movement through the road network; Operational dynamics refers to the study and analysis of taxi driver’s modus operandi. We discuss the different problems currently being researched, the various approaches proposed, and suggest new avenues of research. Finally, we present a historical overview of the research work in this field and discuss which areas hold most promise for future research.",
"title": ""
},
{
"docid": "0d13be9f5e2082af96c370d3c316204f",
"text": "We present a combined hardware and software solution for markerless reconstruction of non-rigidly deforming physical objects with arbitrary shape in real-time. Our system uses a single self-contained stereo camera unit built from off-the-shelf components and consumer graphics hardware to generate spatio-temporally coherent 3D models at 30 Hz. A new stereo matching algorithm estimates real-time RGB-D data. We start by scanning a smooth template model of the subject as they move rigidly. This geometric surface prior avoids strong scene assumptions, such as a kinematic human skeleton or a parametric shape model. Next, a novel GPU pipeline performs non-rigid registration of live RGB-D data to the smooth template using an extended non-linear as-rigid-as-possible (ARAP) framework. High-frequency details are fused onto the final mesh using a linear deformation model. The system is an order of magnitude faster than state-of-the-art methods, while matching the quality and robustness of many offline algorithms. We show precise real-time reconstructions of diverse scenes, including: large deformations of users' heads, hands, and upper bodies; fine-scale wrinkles and folds of skin and clothing; and non-rigid interactions performed by users on flexible objects such as toys. We demonstrate how acquired models can be used for many interactive scenarios, including re-texturing, online performance capture and preview, and real-time shape and motion re-targeting.",
"title": ""
},
{
"docid": "a11d7186eb2c04477d4355cf8f91b4f2",
"text": "This study reports the results of a meta-analysis of empirical studies on Internet addiction published in academic journals for the period 1996-2006. The analysis showed that previous studies have utilized inconsistent criteria to define Internet addicts, applied recruiting methods that may cause serious sampling bias, and examined data using primarily exploratory rather than confirmatory data analysis techniques to investigate the degree of association rather than causal relationships among variables. Recommendations are provided on how researchers can strengthen this growing field of research.",
"title": ""
},
{
"docid": "b9d25bdbb337a9d16a24fa731b6b479d",
"text": "The implementation of effective strategies to manage leaks represents an essential goal for all utilities involved with drinking water supply in order to reduce water losses affecting urban distribution networks. This study concerns the early detection of leaks occurring in small-diameter customers’ connections to water supply networks. An experimental campaign was carried out in a test bed to investigate the sensitivity of Acoustic Emission (AE) monitoring to water leaks. Damages were artificially induced on a polyethylene pipe (length 28 m, outer diameter 32 mm) at different distances from an AE transducer. Measurements were performed in both unburied and buried pipe conditions. The analysis permitted the identification of a clear correlation between three monitored parameters (namely total Hits, Cumulative Counts and Cumulative Amplitude) and the characteristics of the examined leaks.",
"title": ""
},
{
"docid": "17ee960777b02a910cf8fcc80f74d5cc",
"text": "The periosteum is a thin layer of connective tissue that covers the outer surface of a bone in all places except at joints (which are protected by articular cartilage). As opposed to bone itself, it has nociceptive nerve endings, making it very sensitive to manipulation. It also provides nourishment in the form of blood supply to the bone. The periosteum is connected to the bone by strong collagenous fibres called Sharpey's fibres, which extend to the outer circumferential and interstitial lamellae of bone. The periosteum consists of an outer \"fibrous layer\" and inner \"cambium layer\". The fibrous layer contains fibroblasts while the cambium layer contains progenitor cells which develop into osteoblasts that are responsible for increasing bone width. After a bone fracture the progenitor cells develop into osteoblasts and chondroblasts which are essential to the healing process. This review discusses the anatomy, histology and molecular biology of the periosteum in detail.",
"title": ""
},
{
"docid": "1b2fcf85bc73f3249d8685e0063aaa3a",
"text": "In our present society, the cinema has become one of the major forms of entertainment providing unlimited contexts of emotion elicitation for the emotional needs of human beings. Since emotions are universal and shape all aspects of our interpersonal and intellectual experience, they have proved to be a highly multidisciplinary research field, ranging from psychology, sociology, neuroscience, etc., to computer science. However, affective multimedia content analysis work from the computer science community benefits but little from the progress achieved in other research fields. In this paper, a multidisciplinary state-of-the-art for affective movie content analysis is given, in order to promote and encourage exchanges between researchers from a very wide range of fields. In contrast to other state-of-the-art papers on affective video content analysis, this work confronts the ideas and models of psychology, sociology, neuroscience, and computer science. The concepts of aesthetic emotions and emotion induction, as well as the different representations of emotions are introduced, based on psychological and sociological theories. Previous global and continuous affective video content analysis work, including video emotion recognition and violence detection, are also presented in order to point out the limitations of affective video content analysis work.",
"title": ""
},
{
"docid": "d88e4d9bba66581be16c9bd59d852a66",
"text": "After five decades characterized by empiricism and several pitfalls, some of the basic mechanisms of action of ozone in pulmonary toxicology and in medicine have been clarified. The present knowledge allows to understand the prolonged inhalation of ozone can be very deleterious first for the lungs and successively for the whole organism. On the other hand, a small ozone dose well calibrated against the potent antioxidant capacity of blood can trigger several useful biochemical mechanisms and reactivate the antioxidant system. In detail, firstly ex vivo and second during the infusion of ozonated blood into the donor, the ozone therapy approach involves blood cells and the endothelium, which by transferring the ozone messengers to billions of cells will generate a therapeutic effect. Thus, in spite of a common prejudice, single ozone doses can be therapeutically used in selected human diseases without any toxicity or side effects. Moreover, the versatility and amplitude of beneficial effect of ozone applications have become evident in orthopedics, cutaneous, and mucosal infections as well as in dentistry.",
"title": ""
},
{
"docid": "90b1d5b2269f742f9028199c34501043",
"text": "Motivated by the desire to construct compact (in terms of expected length to be traversed to reach a decision) decision trees, we propose a new node splitting measure for decision tree construction. We show that the proposed measure is convex and cumulative and utilize this in the construction of decision trees for classification. Results obtained from several datasets from the UCI repository show that the proposed measure results in decision trees that are more compact with classification accuracy that is comparable to that obtained using popular node splitting measures such as Gain Ratio and the Gini Index. 2008 Published by Elsevier Inc.",
"title": ""
}
] |
scidocsrr
|
2f0c8d57e62543aec6d42b7d089f1ac4
|
A Simple and Global Optimization Algorithm for Engineering Problems: Differential Evolution Algorithm
|
[
{
"docid": "3293e4e0d7dd2e29505db0af6fbb13d1",
"text": "A new heuristic approach for minimizing possibly nonlinear and non-differentiable continuous space functions is presented. By means of an extensive testbed it is demonstrated that the new method converges faster and with more certainty than many other acclaimed global optimization methods. The new method requires few control variables, is robust, easy to use, and lends itself very well to parallel computation.",
"title": ""
}
] |
[
{
"docid": "20a90ed3aa2b428b19e85aceddadce90",
"text": "Deep learning has been a groundbreaking technology in various fields as well as in communications systems. In spite of the notable advancements of deep neural network (DNN) based technologies in recent years, the high computational complexity has been a major obstacle to apply DNN in practical communications systems which require real-time operation. In this sense, challenges regarding practical implementation must be addressed before the proliferation of DNN-based intelligent communications becomes a reality. To the best of the authors’ knowledge, for the first time, this article presents an efficient learning architecture and design strategies including link level verification through digital circuit implementations using hardware description language (HDL) to mitigate this challenge and to deduce feasibility and potential of DNN for communications systems. In particular, DNN is applied for an encoder and a decoder to enable flexible adaptation with respect to the system environments without needing any domain specific information. Extensive investigations and interdisciplinary design considerations including the DNN-based autoencoder structure, learning framework, and low-complexity digital circuit implementations for real-time operation are taken into account by the authors which ascertains the use of DNN-based communications in practice.",
"title": ""
},
{
"docid": "7aeb10faf8590ed9f4054bafcd4dee0c",
"text": "Concept, design, and measurement results of a frequency-modulated continuous-wave radar sensor in low-temperature co-fired ceramics (LTCC) technology is presented in this paper. The sensor operates in the frequency band between 77–81 GHz. As a key component of the system, wideband microstrip grid array antennas with a broadside beam are presented and discussed. The combination with a highly integrated feeding network and a four-channel transceiver chip based on SiGe technology results in a very compact LTCC RF frontend (23 mm <formula formulatype=\"inline\"><tex Notation=\"TeX\">$\\times$</tex></formula> 23 mm). To verify the feasibility of the concept, first radar measurement results are presented.",
"title": ""
},
{
"docid": "9a85994a8668a6cbb5646570fc20177c",
"text": "This paper investigates the application of linear learning techniques to the place recognition problem. We present two learning methods, a supervised change prediction technique based on linear regression and an unsupervised change removal technique based on principal component analysis, and investigate how the performance of each is affected by the choice of training data. We show that the change prediction technique presented here succeeds only if it is provided with appropriate and adequate training data, which can be challenging for a mobile robotic system operating in an uncontrolled environment. In contrast, change removal can improve place recognition performance even when trained with as few as 100 samples. This paper shows that change removal can be combined with a number of different image descriptors and can improve performance across a range of different appearance conditions.",
"title": ""
},
{
"docid": "3f1d69e8a2fdfc69e451679255782d70",
"text": "This tutorial gives a broad view of modern approaches for scaling up machine learning and data mining methods on parallel/distributed platforms. Demand for scaling up machine learning is task-specific: for some tasks it is driven by the enormous dataset sizes, for others by model complexity or by the requirement for real-time prediction. Selecting a task-appropriate parallelization platform and algorithm requires understanding their benefits, trade-offs and constraints. This tutorial focuses on providing an integrated overview of state-of-the-art platforms and algorithm choices. These span a range of hardware options (from FPGAs and GPUs to multi-core systems and commodity clusters), programming frameworks (including CUDA, MPI, MapReduce, and DryadLINQ), and learning settings (e.g., semi-supervised and online learning). The tutorial is example-driven, covering a number of popular algorithms (e.g., boosted trees, spectral clustering, belief propagation) and diverse applications (e.g., recommender systems and object recognition in vision).\n The tutorial is based on (but not limited to) the material from our upcoming Cambridge U. Press edited book which is currently in production.\n Visit the tutorial website at http://hunch.net/~large_scale_survey/",
"title": ""
},
{
"docid": "64cefd949f61afe81fbbb9ca1159dd4a",
"text": "Single carrier frequency division multiple access (SC-FDMA), which utilizes single carrier modulation and frequency domain equalization is a technique that has similar performance and essentially the same overall complexity as those of OFDM, in which high peak-to-average power ratio (PAPR) is a major drawback. An outstanding advantage of SC-FDMA is its lower PAPR due to its single carrier structure. In this paper, we analyze the PAPR of SC-FDMA signals with pulse shaping. We analytically derive the time domain SC-FDMA signals and numerically compare PAPR characteristics using the complementary cumulative distribution function (CCDF) of PAPR. The results show that SC-FDMA signals indeed have lower PAPR compared to those of OFDMA. Comparing the two forms of SC-FDMA, we find that localized FDMA (LFDMA) has higher PAPR than interleaved FDMA (IFDMA) but somewhat lower PAPR than OFDMA. Also noticeable is the fact that pulse shaping increases PAPR",
"title": ""
},
{
"docid": "4b1bb1a79d755ea8ccd6f80a8e827b40",
"text": "This paper analyzes the problem of Gaussian process (GP) bandits with deterministic observations. The analysis uses a branch and bound algorithm that is related to the UCB algorithm of (Srinivas et al., 2010). For GPs with Gaussian observation noise, with variance strictly greater than zero, (Srinivas et al., 2010) proved that the regret vanishes at the approximate rate of O ( 1 √ t ) , where t is the number of observations. To complement their result, we attack the deterministic case and attain a much faster exponential convergence rate. Under some regularity assumptions, we show that the regret decreases asymptotically according to O ( e − τt (ln t)d/4 ) with high probability. Here, d is the dimension of the search space and τ is a constant that depends on the behaviour of the objective function near its global maximum.",
"title": ""
},
{
"docid": "82234158dc94216222efa5f80eee0360",
"text": "We investigate the possibility to prove security of the well-known blind signature schemes by Chaum, and by Pointcheval and Stern in the standard model, i.e., without random oracles. We subsume these schemes under a more general class of blind signature schemes and show that finding security proofs for these schemes via black-box reductions in the standard model is hard. Technically, our result deploys meta-reduction techniques showing that black-box reductions for such schemes could be turned into efficient solvers for hard non-interactive cryptographic problems like RSA or discrete-log. Our technique yields significantly stronger impossibility results than previous meta-reductions in other settings by playing off the two security requirements of the blind signatures (unforgeability and blindness).",
"title": ""
},
{
"docid": "ae7347af720ab76ab098a62b3236c17c",
"text": "We propose discriminative adversarial networks (DAN) for semi-supervised learning and loss function learning. Our DAN approach builds upon generative adversarial networks (GANs) and conditional GANs but includes the key differentiator of using two discriminators instead of a generator and a discriminator. DAN can be seen as a framework to learn loss functions for predictors that also implements semi-supervised learning in a straightforward manner. We propose instantiations of DAN for two different prediction tasks: classification and ranking. Our experimental results on three datasets of different tasks demonstrate that DAN is a promising framework for both semi-supervised learning and learning loss functions for predictors. For all tasks, the semi-supervised capability of DAN can significantly boost the predictor performance for small labeled sets with minor architecture changes across tasks. Moreover, the loss functions automatically learned by DANs are very competitive and usually outperform the standard pairwise and negative log-likelihood loss functions for semi-supervised learning.",
"title": ""
},
{
"docid": "2eff2b22b7ed1a23613399ee39535ccf",
"text": "Despite wishing to return to productive activity, many individuals with schizophrenia enter rehabilitation with severe doubts about their abilities. Negative beliefs in schizophrenia have been linked with poorer employment outcome. Accordingly, in this paper, we describe efforts to synthesize vocational and cognitive behavior therapy interventions into a 6-month manualized program to assist persons with schizophrenia spectrum disorders overcome negative beliefs and meet vocational goals. This program, the Indianapolis Vocational Intervention Program (IVIP), includes weekly group and individual interventions and is intended as an adjunct to work therapy programs. The IVIP was initially developed over a year of working with 20 participants with Structured Clinical Interview for the Diagnostic and Statistical Manual-I (SCID-I) confirmed diagnoses of schizophrenia or schizoaffective disorder who were actively engaged in 20 hours per week of work activity. For this paper, we explain the development of the treatment manual and the group and individual interventions and present case examples that illustrate how persons with severe mental illness might utilize the manualized intervention.",
"title": ""
},
{
"docid": "37642371bbcc3167f96548d02ccd832e",
"text": "The manipulation of light-matter interactions in two-dimensional atomically thin crystals is critical for obtaining new optoelectronic functionalities in these strongly confined materials. Here, by integrating chemically grown monolayers of MoS2 with a silver-bowtie nanoantenna array supporting narrow surface-lattice plasmonic resonances, a unique two-dimensional optical system has been achieved. The enhanced exciton-plasmon coupling enables profound changes in the emission and excitation processes leading to spectrally tunable, large photoluminescence enhancement as well as surface-enhanced Raman scattering at room temperature. Furthermore, due to the decreased damping of MoS2 excitons interacting with the plasmonic resonances of the bowtie array at low temperatures stronger exciton-plasmon coupling is achieved resulting in a Fano line shape in the reflection spectrum. The Fano line shape, which is due to the interference between the pathways involving the excitation of the exciton and plasmon, can be tuned by altering the coupling strengths between the two systems via changing the design of the bowties lattice. The ability to manipulate the optical properties of two-dimensional systems with tunable plasmonic resonators offers a new platform for the design of novel optical devices with precisely tailored responses.",
"title": ""
},
{
"docid": "c7f38e2284ad6f1258fdfda3417a6e14",
"text": "Millimeter wave (mmWave) systems must overcome heavy signal attenuation to support high-throughput wireless communication links. The small wavelength in mmWave systems enables beamforming using large antenna arrays to combat path loss with directional transmission. Beamforming with multiple data streams, known as precoding, can be used to achieve even higher performance. Both beamforming and precoding are done at baseband in traditional microwave systems. In mmWave systems, however, the high cost of mixed-signal and radio frequency chains (RF) makes operating in the passband and analog domains attractive. This hardware limitation places additional constraints on precoder design. In this paper, we consider single user beamforming and precoding in mmWave systems with large arrays. We exploit the structure of mmWave channels to formulate the precoder design problem as a sparsity constrained least squares problem. Using the principle of basis pursuit, we develop a precoding algorithm that approximates the optimal unconstrained precoder using a low dimensional basis representation that can be efficiently implemented in RF hardware. We present numerical results on the performance of the proposed algorithm and show that it allows mmWave systems to approach waterfilling capacity.",
"title": ""
},
{
"docid": "740c75400509dd66ca05cdad8e562920",
"text": "Arabic optical character recognition (OCR) is the process of converting images that contain Arabic text to a format that can be edited. In this work, a simple approach for Arabic OCR is presented, the proposed method deployed correlation and dynamic-size windowing to segment and to recognize Arabic characters. The proposed coherent template recognition process is characterized by the ability of recognizing Arabic characters with different sizes. Recognition results reveal the robustness of the proposed method.",
"title": ""
},
{
"docid": "1f77513377899aa0d235bd3d92914168",
"text": "Concept maps are graphical tools for organizing and representing knowledge. They include concepts, usually enclosed in circles or boxes of some type, and relationships between concepts indicated by a connecting line linking two concepts. Words on the line, referred to as linking words or linking phrases, specify the relationship between the two concepts. We define concept as a perceived regularity in events or objects, or records of events or objects, designated by a label. The label for most concepts is a word, although sometimes we use symbols such as + or %, and sometimes more than one word is used. Propositions are statements about some object or event in the universe, either naturally occurring or constructed. Propositions contain two or more concepts connected using linking words or phrases to form a meaningful statement. Sometimes these are called semantic units, or units of meaning. Figure 1 shows an example of a concept map that describes the structure of concept maps and illustrates the above characteristics.",
"title": ""
},
{
"docid": "214231e8bb6ccd31a0ea42ffe73c0ee6",
"text": "Language is increasingly being used to define rich visual recognition problems with supporting image collections sourced from the web. Structured prediction models are used in these tasks to take advantage of correlations between co-occurring labels and visual input but risk inadvertently encoding social biases found in web corpora. In this work, we study data and models associated with multilabel object classification and visual semantic role labeling. We find that (a) datasets for these tasks contain significant gender bias and (b) models trained on these datasets further amplify existing bias. For example, the activity cooking is over 33% more likely to involve females than males in a training set, and a trained model further amplifies the disparity to 68% at test time. We propose to inject corpus-level constraints for calibrating existing structured prediction models and design an algorithm based on Lagrangian relaxation for collective inference. Our method results in almost no performance loss for the underlying recognition task but decreases the magnitude of bias amplification by 47.5% and 40.5% for multilabel classification and visual semantic role labeling, respectively.",
"title": ""
},
{
"docid": "2d5b476642b65c881558821fe6dc9e03",
"text": "In this paper we propose a real solution for gathering information throughout the entire pig meat supply chain. The architecture consists of a a complex identification system based on RFID tags that transmits data to a distributed database during all phases of the production process. The specific work environment required identifying a suitable technology for implementation in the supply chain and the best possible organization. The aim of this work is to keep track of all the information generated during meat processing, not only for traceability purposes but chiefly for enhancing and optimizing production. All information generated by the traceability system will be collected in a central database accessible by end users thtough a public dedicated web interface.",
"title": ""
},
{
"docid": "c4b0d93105e434d4d407575157a005a4",
"text": "Online Judge is widespread for the undergraduate to study programming. The users usually feel confused while locating the problems they prefer from the massive ones. This paper proposes a specialized recommendation model for the online judge systems in order to present the alternative problems to the users which they may be interested in potentially. In this model, a three-level collaborative filtering recommendation method is referred to and redesigned catering for the specific interaction mode of Online Judge. This method is described in detail in this paper and implemented in our demo system which demonstrates its availability.",
"title": ""
},
{
"docid": "b8bb4d195738e815430d146ac110df49",
"text": "Software testing is an effective way to find software errors. Generating a good test suite is the key. A program invariant is a property that is true at a particular program point or points. The property could reflect the program’s execution over a test suite. Based on this point, we integrate the random test case generation technique and the invariant extraction technique, achieving automatic test case generation and selection. With the same invariants, compared with the traditional random test case generation technique, the experimental results show that the approach this paper describes can generate a smaller test suite. Keywords-software testing; random testing; test case; program invariant",
"title": ""
},
{
"docid": "5b5345a894d726186ba7f6baf76cb65e",
"text": "In many applications of classifier learning, training data suffers from label noise. Deep networks are learned using huge training data where the problem of noisy labels is particularly relevant. The current techniques proposed for learning deep networks under label noise focus on modifying the network architecture and on algorithms for estimating true labels from noisy labels. An alternate approach would be to look for loss functions that are inherently noise-tolerant. For binary classification there exist theoretical results on loss functions that are robust to label noise. In this paper, we provide some sufficient conditions on a loss function so that risk minimization under that loss function would be inherently tolerant to label noise for multiclass classification problems. These results generalize the existing results on noise-tolerant loss functions for binary classification. We study some of the widely used loss functions in deep networks and show that the loss function based on mean absolute value of error is inherently robust to label noise. Thus standard back propagation is enough to learn the true classifier even under label noise. Through experiments, we illustrate the robustness of risk minimization with such loss functions for learning neural networks.",
"title": ""
},
{
"docid": "df5ef1235844aa1593203f96cd2130bd",
"text": "It is generally well acknowledged that humans are capable of having a theory of mind (ToM) of others. We present here a model which borrows mechanisms from three dissenting explanations of how ToM develops and functions, and show that our model behaves in accordance with various ToM experiments (Wellman, Cross, & Watson, 2001; Leslie, German, & Polizzi, 2005).",
"title": ""
},
{
"docid": "9dc52cd5a58077f74868f48021b390af",
"text": "Background: Motor development allows infants to gain knowledge of the world but its vital role in social development is often ignored. Method: A systematic search for papers investigating the relationship between motor and social skills was conducted , including research in typical development and in Developmental Coordination Disorder, Autism Spectrum Disorders and Specific Language Impairment. R sults: The search identified 42 studies, many of which highlighted a significant relationship between motor skills and the development of social cognition, language and social interactions. Conclusions: This complex relationship requires more attention from researchers and practitioners, allowing the development of more tailored intervention techniques for those at risk of motor, social and language difficulties. Key Practitioner Message Significant relationships exist between the development of motor skills, social cognition, language and social interactions in typical and atypical development Practitioners should be aware of the relationships between these aspects of development and understand the impact that early motor difficulties may have on later social skills Complex relationships between motor and social skills are evident in children with ASD, DCD and SLI Early screening and more targeted interventions may be appropriate",
"title": ""
}
] |
scidocsrr
|
6edbba61ffeecac5dc0297747f798e5a
|
Making CNC machine tools more open, interoperable and intelligent - a review of the technologies
|
[
{
"docid": "100b4df0a86534cba7078f4afc247206",
"text": "Presented in this article is a review of manufacturing techniques and introduction of reconfigurable manufacturing systems; a new paradigm in manufacturing which is designed for rapid adjustment of production capacity and functionality, in response to new market conditions. A definition of reconfigurable manufacturing systems is outlined and an overview of available manufacturing techniques, their key drivers and enablers, and their impacts, achievements and limitations is presented. A historical review of manufacturing from the point-of-view of the major developments in the market, technology and sciences issues affecting manufacturing is provided. The new requirements for manufacturing are discussed and characteristics of reconfigurable manufacturing systems and their key role in future manufacturing are explained. The paper is concluded with a brief review of specific technologies and research issues related to RMSs.",
"title": ""
}
] |
[
{
"docid": "b1f5ac697d3d0f015df5f426989619ce",
"text": "We review some of the most recent approaches to colorize gray-scale images using deep learning methods. Inspired by these, we propose a model which combines a deep Convolutional Neural Network trained from scratch with high-level features extracted from the InceptionResNet-v2 pre-trained model. Thanks to its fully convolutional architecture, our encoder-decoder model can process images of any size and aspect ratio. Other than presenting the training results, we assess the “public acceptance” of the generated images by means of a user study. Finally, we present a carousel of applications on different types of images, such as historical photographs.",
"title": ""
},
{
"docid": "2875373b63642ee842834a5360262f41",
"text": "Video stabilization techniques are essential for most hand-held captured videos due to high-frequency shakes. Several 2D-, 2.5D-, and 3D-based stabilization techniques have been presented previously, but to the best of our knowledge, no solutions based on deep neural networks had been proposed to date. The main reason for this omission is shortage in training data as well as the challenge of modeling the problem using neural networks. In this paper, we present a video stabilization technique using a convolutional neural network. Previous works usually propose an off-line algorithm that smoothes a holistic camera path based on feature matching. Instead, we focus on low-latency, real-time camera path smoothing that does not explicitly represent the camera path and does not use future frames. Our neural network model, called StabNet, learns a set of mesh-grid transformations progressively for each input frame from the previous set of stabilized camera frames and creates stable corresponding latent camera paths implicitly. To train the network, we collect a dataset of synchronized steady and unsteady video pairs via a specially designed hand-held hardware. Experimental results show that our proposed online method performs comparatively to the traditional off-line video stabilization methods without using future frames while running about 10 times faster. More importantly, our proposed StabNet is able to handle low-quality videos, such as night-scene videos, watermarked videos, blurry videos, and noisy videos, where the existing methods fail in feature extraction or matching.",
"title": ""
},
{
"docid": "e2f300ad1450ac93c75ad1fd4b4cc02e",
"text": "Understanding how appliances in a house consume power is important when making intelligent and informed decisions about conserving energy. Appliances can turn ON and OFF either by the actions of occupants or by automatic sensing and actuation (e.g., thermostat). It is also difficult to understand how much a load consumes at any given operational state. Occupants could buy sensors that would help, but this comes at a high financial cost. Power utility companies around the world are now replacing old electro-mechanical meters with digital meters (smart meters) that have enhanced communication capabilities. These smart meters are essentially free sensors that offer an opportunity to use computation to infer what loads are running and how much each load is consuming (i.e., load disaggregation). We present a new load disaggregation algorithm that uses a super-state hidden Markov model and a new Viterbi algorithm variant which preserves dependencies between loads and can disaggregate multi-state loads, all while performing computationally efficient exact inference. Our sparse Viterbi algorithm can efficiently compute sparse matrices with a large number of super-states. Additionally, our disaggregator can run in real-time on an inexpensive embedded processor using low sampling rates.",
"title": ""
},
{
"docid": "099bd9e751b8c1e3a07ee06f1ba4b55b",
"text": "This paper presents a robust stereo-vision-based drivable road detection and tracking system that was designed to navigate an intelligent vehicle through challenging traffic scenarios and increment road safety in such scenarios with advanced driver-assistance systems (ADAS). This system is based on a formulation of stereo with homography as a maximum a posteriori (MAP) problem in a Markov random held (MRF). Under this formulation, we develop an alternating optimization algorithm that alternates between computing the binary labeling for road/nonroad classification and learning the optimal parameters from the current input stereo pair itself. Furthermore, online extrinsic camera parameter reestimation and automatic MRF parameter tuning are performed to enhance the robustness and accuracy of the proposed system. In the experiments, the system was tested on our experimental intelligent vehicles under various real challenging scenarios. The results have substantiated the effectiveness and the robustness of the proposed system with respect to various challenging road scenarios such as heterogeneous road materials/textures, heavy shadows, changing illumination and weather conditions, and dynamic vehicle movements.",
"title": ""
},
{
"docid": "76d27ae5220bdd692448797e8115d658",
"text": "Abstinence following daily marijuana use can produce a withdrawal syndrome characterized by negative mood (eg irritability, anxiety, misery), muscle pain, chills, and decreased food intake. Two placebo-controlled, within-subject studies investigated the effects of a cannabinoid agonist, delta-9-tetrahydrocannabinol (THC: Study 1), and a mood stabilizer, divalproex (Study 2), on symptoms of marijuana withdrawal. Participants (n=7/study), who were not seeking treatment for their marijuana use, reported smoking 6–10 marijuana cigarettes/day, 6–7 days/week. Study 1 was a 15-day in-patient, 5-day outpatient, 15-day in-patient design. During the in-patient phases, participants took oral THC capsules (0, 10 mg) five times/day, 1 h prior to smoking marijuana (0.00, 3.04% THC). Active and placebo marijuana were smoked on in-patient days 1–8, while only placebo marijuana was smoked on days 9–14, that is, marijuana abstinence. Placebo THC was administered each day, except during one of the abstinence phases (days 9–14), when active THC was given. Mood, psychomotor task performance, food intake, and sleep were measured. Oral THC administered during marijuana abstinence decreased ratings of ‘anxious’, ‘miserable’, ‘trouble sleeping’, ‘chills’, and marijuana craving, and reversed large decreases in food intake as compared to placebo, while producing no intoxication. Study 2 was a 58-day, outpatient/in-patient design. Participants were maintained on each divalproex dose (0, 1500 mg/day) for 29 days each. Each maintenance condition began with a 14-day outpatient phase for medication induction or clearance and continued with a 15-day in-patient phase. Divalproex decreased marijuana craving during abstinence, yet increased ratings of ‘anxious’, ‘irritable’, ‘bad effect’, and ‘tired.’ Divalproex worsened performance on psychomotor tasks, and increased food intake regardless of marijuana condition. Thus, oral THC decreased marijuana craving and withdrawal symptoms at a dose that was subjectively indistinguishable from placebo. Divalproex worsened mood and cognitive performance during marijuana abstinence. These data suggest that oral THC, but not divalproex, may be useful in the treatment of marijuana dependence.",
"title": ""
},
{
"docid": "d5dce73957da864062d45799471b06a4",
"text": "T elusive dream of replacing missing teeth with artificial analogs has been part of dentistry for a thousand years. The coincidental discovery by Dr P-I Brånemark and his coworkers of the tenacious affinity between living bone and titanium oxides, termed osseointegration, propelled dentistry into a new age of reconstructive dentistry. Initially, the essential tenets for obtaining osseointegration dictated the atraumatic placement of a titanium screw into viable bone and a prolonged undisturbed, submerged healing period. By definition, this required a 2-stage surgical procedure. To comply, a coupling mechanism for implant placement and the eventual attachment of a transmucosal extension for restoration was explored. The initial coronal design selected was a 0.7-mm-tall external hexagon. At its inception, the design made perfect sense, because it permitted engagement of a torque transfer coupling device (fixture mount) during the surgical placement of the implant into threaded bone and the subsequent second-stage connection of the transmucosal extension that, when used in series, could effectively restore an edentulous arch. As 20 years of osseointegration in clinical practice in North America have transpired, much has changed. The efficacy and predictability of osseointegrated implants are no longer issues.1–7 During the initial years, research focused on refinements in surgical techniques and grafting procedures. Eventually, the emphasis shifted to a variety of mechanical and esthetic challenges that remained problematic and unresolved.8–10 During this period, the envelope of implant utilization dramatically expanded from the original complete edentulous application to fixed partial dentures, single-tooth replacement, maxillofacial and a myriad of other applications, limited only by the ingenuity and skill of the clinician.11–13 The external hexagonal design, ad modum Brånemark, originally intended as a coupling and rotational torque transfer mechanism, consequently evolved by necessity into a prosthetic indexing and antirotational mechanism.14,15 The expanded utilization of the hexagonal resulted in a number of significant clinical complications.8–11,16–22 To mitigate these problems, the external hexagonal, its transmucosal connections, and their retaining screws have undergone a number of modifications.23 In 1992, English published an overview of the thenavailable external hexagonal implants, numbering 25 different implants, all having the standard Brånemark hex configuration.14 The external hex has since been modified and is now available in heights of 0.7, 0.9, 1.0, and 1.2 mm and with flat-to-flat widths of 2.0, 2.4, 2.7, 3.0, 3.3, and 3.4 mm, depending on the implant platform. The available number of hexagonal implants has more than doubled. The abutment-retaining screw has also been modified with respect to material, shank length, number of threads, diameter, length, thread design, and torque application (unpublished data, 1998).23 Entirely new secondand third-generation interface coupling geometries have also been introduced into the implant milieu to overcome intrinsic hexagonal deficiencies.24–28 Concurrent with the evolution of the coupling geometry was the introduction of a variety of new implant body shapes, diameters, thread patterns, and surface topography.26,27,29–36 Today, the clinician is overwhelmed with more than 90 root-form implants to select from in a variety of diameters, lengths, surfaces, platforms, interfaces, and body designs. 
Virtually every implant company manufactures a hex top, a proprietary interface, or both; “narrow,” “standard,” and “wide” diameter implant bodies; machined, textured, and hydroxyapatite (HA) and titanium plasma-spray (TPS) surface implants; and a variety of lengths and body shapes (Table 1). In the wide-diameter arena alone, there are 25 different offerings, 15 external hexagonal, and 10 other interfaces available in a number of configurations. 1Adjunct Professor of Prosthodontics, Graduate Prosthodontics, Indiana University, Indianapolis, Indiana; Assistant Research Scientist, Department of Restorative Dentistry, University of California at San Francisco; and Private Practice, Roseville, California.",
"title": ""
},
{
"docid": "c6dfe01e87a7ec648f0857bf1a74a3ba",
"text": "Received: 12 June 2006 Revised: 10 May 2007 Accepted: 22 July 2007 Abstract Although there is widespread agreement that leadership has important effects on information technology (IT) acceptance and use, relatively little empirical research to date has explored this phenomenon in detail. This paper integrates the unified theory of acceptance and use of technology (UTAUT) with charismatic leadership theory, and examines the role of project champions influencing user adoption. PLS analysis of survey data collected from 209 employees in seven organizations that had engaged in a large-scale IT implementation revealed that project champion charisma was positively associated with increased performance expectancy, effort expectancy, social influence and facilitating condition perceptions of users. Theoretical and managerial implications are discussed, and suggestions for future research in this area are provided. European Journal of Information Systems (2007) 16, 494–510. doi:10.1057/palgrave.ejis.3000682",
"title": ""
},
{
"docid": "be8b65d39ee74dbee0835052092040da",
"text": "We examine the problem of question answering over knowledge graphs, focusing on simple questions that can be answered by the lookup of a single fact. Adopting a straightforward decomposition of the problem into entity detection, entity linking, relation prediction, and evidence combination, we explore simple yet strong baselines. On the popular SIMPLEQUESTIONS dataset, we find that basic LSTMs and GRUs plus a few heuristics yield accuracies that approach the state of the art, and techniques that do not use neural networks also perform reasonably well. These results show that gains from sophisticated deep learning techniques proposed in the literature are quite modest and that some previous models exhibit unnecessary complexity.",
"title": ""
},
{
"docid": "1b8550cdbe9a01742fdb34b7516cfb83",
"text": "Blood pressure (BP) is one of the important vital signs that need to be monitored for personal healthcare. Arterial blood pressure (BP) was estimated from pulse transit time (PTT) and PPG waveform. PTT is a time interval between an R-wave of electrocardiography (ECG) and a photoplethysmography (PPG) signal. This method does not require an aircuff and only a minimal inconvenience of attaching electrodes and LED/photo detector sensors on a subject. PTT computed between the ECG R-wave and the maximum first derivative PPG was strongly correlated with systolic blood pressure (SBP) (R = −0.712) compared with other PTT values, and the diastolic time proved to be appropriate for estimation diastolic blood pressure (DBP) (R = −0.764). The percent errors of SBP using the individual regression line (4–11%) were lower than those using the regression line obtained from all five subjects (9–14%). On the other hand, the DBP estimation did not show much difference between the individual regression (4–10%) and total regression line (6–10%). Our developed device had a total size of 7 × 13.5 cm and was operated by single 3-V battery. Biosignals can be measured for 72 h continuously without external interruptions. Through a serial network communication, an external personal computer can monitor measured waveforms in real time. Our proposed method can be used for non-constrained, thus continuous BP monitoring for the purpose of personal healthcare.",
"title": ""
},
{
"docid": "1a42f2e7dd43a4103298c64c8bca9d7b",
"text": "We consider the problem of efficiently exploring the arms of a stochastic bandit to identify the best subset of a specified size. Under the PAC and the fixed-budget formulations, we derive improved bounds by using KL-divergence-based confidence intervals. Whereas the application of a similar idea in the regret setting has yielded bounds in terms of the KL-divergence between the arms, our bounds in the pure-exploration setting involve the “Chernoff information” between the arms. In addition to introducing this novel quantity to the bandits literature, we contribute a comparison between strategies based on uniform and adaptive sampling for pure-exploration problems, finding evidence in favor of the latter.",
"title": ""
},
{
"docid": "46df7f6c826d1e95fb818de1fde3dc77",
"text": "Named entity recognition (NER) for English typically involves one of three gold standards: MUC, CoNLL, or BBN, all created by costly manual annotation. Recent work has used Wikipedia to automatically create a massive corpus of named entity annotated text. We present the first comprehensive crosscorpus evaluation of NER. We identify the causes of poor cross-corpus performance and demonstrate ways of making them more compatible. Using our process, we develop a Wikipedia corpus which outperforms gold standard corpora on crosscorpus evaluation by up to 11%.",
"title": ""
},
{
"docid": "476e612f4124fc5e9f391e2fa4a49a3b",
"text": "Debugging data processing logic in Data-Intensive Scalable Computing (DISC) systems is a difficult and time consuming effort. Today's DISC systems offer very little tooling for debugging programs, and as a result programmers spend countless hours collecting evidence (e.g., from log files) and performing trial and error debugging. To aid this effort, we built Titian, a library that enables data provenance-tracking data through transformations-in Apache Spark. Data scientists using the Titian Spark extension will be able to quickly identify the input data at the root cause of a potential bug or outlier result. Titian is built directly into the Spark platform and offers data provenance support at interactive speeds-orders-of-magnitude faster than alternative solutions-while minimally impacting Spark job performance; observed overheads for capturing data lineage rarely exceed 30% above the baseline job execution time.",
"title": ""
},
{
"docid": "f48f55963cf3beb43170df96a463feba",
"text": "This article proposes and implements a class of chaotic motors for electric compaction. The key is to develop a design approach for the permanent magnets PMs of doubly salient PM DSPM motors in such a way that chaotic motion can be naturally produced. The bifurcation diagram is employed to derive the threshold of chaoization in terms of PM flux, while the corresponding phase-plane trajectories are used to characterize the chaotic motion. A practical three-phase 12/8-pole DSPM motor is used for exemplification. The proposed chaotic motor is critically assessed for application to a vibratory soil compactor, which is proven to offer better compaction performance than its counterparts. Both computer simulation and experimental results are given to illustrate the proposed chaotic motor. © 2006 American Institute of Physics. DOI: 10.1063/1.2165783",
"title": ""
},
{
"docid": "065151713758d05a602b350d31e88dc6",
"text": "Previous works have shown that the ear is a promising candida te for biometric identification. However, in prior work, the pre-processing of ear images has had manua l steps, and algorithms have not necessarily handled problems caused by hair and earrings. We pres ent a complete system for ear biometrics, including automated segmentation of the ear in a profile view image and 3D shape matching for recognition. We evaluated this system with the largest experimen tal study to date in ear biometrics, achieving a rank-one recognition rate of 97.8% for an identification sc enario, and equal error rate of 1.2% for a verification scenario on a database of 415 subjects and 1,386 total probes. Keyword: biometrics, ear biometrics, 3-D shape, skin detection, cur vat e estimation, active contour, iterative closest point.",
"title": ""
},
{
"docid": "dc1053623155e38f00bf70d7da145d5b",
"text": "Genetic programming is combined with program analysis methods to repair bugs in off-the-shelf legacy C programs. Fitness is defined using negative test cases that exercise the bug to be repaired and positive test cases that encode program requirements. Once a successful repair is discovered, structural differencing algorithms and delta debugging methods are used to minimize its size. Several modifications to the GP technique contribute to its success: (1) genetic operations are localized to the nodes along the execution path of the negative test case; (2) high-level statements are represented as single nodes in the program tree; (3) genetic operators use existing code in other parts of the program, so new code does not need to be invented. The paper describes the method, reviews earlier experiments that repaired 11 bugs in over 60,000 lines of code, reports results on new bug repairs, and describes experiments that analyze the performance and efficacy of the evolutionary components of the algorithm.",
"title": ""
},
{
"docid": "4ec6229ae75b13bbcc429f07eda0fb4a",
"text": "Face detection is a well-explored problem. Many challenges on face detectors like extreme pose, illumination, low resolution and small scales are studied in the previous work. However, previous proposed models are mostly trained and tested on good-quality images which are not always the case for practical applications like surveillance systems. In this paper, we first review the current state-of-the-art face detectors and their performance on benchmark dataset FDDB, and compare the design protocols of the algorithms. Secondly, we investigate their performance degradation while testing on low-quality images with different levels of blur, noise, and contrast. Our results demonstrate that both hand-crafted and deep-learning based face detectors are not robust enough for low-quality images. It inspires researchers to produce more robust design for face detection in the wild.",
"title": ""
},
{
"docid": "79b8588f7c9b6dc87d90ddbd2e75a7d5",
"text": "BACKGROUND\nDespite the progress in reducing malaria infections and related deaths, the disease remains a major global public health problem. The problem is among the top five leading causes of outpatient visits in Dembia district of the northwest Ethiopia. Therefore, this study aimed to assess the determinants of malaria infections in the district.\n\n\nMETHODS\nAn institution-based case-control study was conducted in Dembia district from October to November 2016. Out of the ten health centers in the district, four were randomly selected for the study in which 370 participants (185 cases and 185 controls) were enrolled. Data were collected using a pretested structured questionnaire. Factors associated with malaria infections were determined using logistic regression analysis. Odds ratio with 95% CI was used as a measure of association, and variables with a p-value of ≤0.05 were considered as statistically significant.\n\n\nRESULTS\nThe median age of all participants was 26 years, while that of cases and controls was 22 and 30 with a range of 1 to 80 and 2 to 71, respectively. In the multivariable logistic regression, over 15 years of age adjusted odds ratio(AOR) and confidence interval (CI) of (AOR = 18; 95% CI: 2.1, 161.5), being male (AOR = 2.2; 95% CI: 1.2, 3.9), outdoor activities at night (AOR = 5.7; 95% CI: 2.5, 12.7), bed net sharing (AOR = 3.9; 95% CI: 2.0, 7.7), and proximity to stagnant water sources (AOR = 2.7; 95% CI: 1.3, 5.4) were independent predictors.\n\n\nCONCLUSION\nBeing in over 15 years of age group, male gender, night time activity, bed net sharing and proximity to stagnant water sources were determinant factors of malaria infection in Dembia district. Additional interventions and strategies which focus on men, outdoor work at night, household net utilization, and nearby stagnant water sources are essential to reduce malaria infections in the area.",
"title": ""
},
{
"docid": "7635d39eda6ac2b3969216b39a1aa1f7",
"text": "We introduce tailored displays that enhance visual acuity by decomposing virtual objects and placing the resulting anisotropic pieces into the subject's focal range. The goal is to free the viewer from needing wearable optical corrections when looking at displays. Our tailoring process uses aberration and scattering maps to account for refractive errors and cataracts. It splits an object's light field into multiple instances that are each in-focus for a given eye sub-aperture. Their integration onto the retina leads to a quality improvement of perceived images when observing the display with naked eyes. The use of multiple depths to render each point of focus on the retina creates multi-focus, multi-depth displays. User evaluations and validation with modified camera optics are performed. We propose tailored displays for daily tasks where using eyeglasses are unfeasible or inconvenient (e.g., on head-mounted displays, e-readers, as well as for games); when a multi-focus function is required but undoable (e.g., driving for farsighted individuals, checking a portable device while doing physical activities); or for correcting the visual distortions produced by high-order aberrations that eyeglasses are not able to.",
"title": ""
},
{
"docid": "d10ec03d91d58dd678c995ec1877c710",
"text": "Major depressive disorders, long considered to be of neurochemical origin, have recently been associated with impairments in signaling pathways that regulate neuroplasticity and cell survival. Agents designed to directly target molecules in these pathways may hold promise as new therapeutics for depression.",
"title": ""
},
{
"docid": "1e5ebd122bee855d7e8113d5fe71202d",
"text": "We derive the general expression of the anisotropic magnetoresistance (AMR) ratio of ferromagnets for a relative angle between the magnetization direction and the current direction. We here use the two-current model for a system consisting of a spin-polarized conduction state (s) and localized d states (d) with spin-orbit interaction. Using the expression, we analyze the AMR ratios of Ni and a half-metallic ferromagnet. These results correspond well to the respective experimental results. In addition, we give an intuitive explanation about a relation between the sign of the AMR ratio and the s-d scattering process. Introduction The anisotropic magnetoresistance (AMR) effect, in which the electrical resistivity depends on a relative angle θ between the magnetization (Mex) direction and the electric current (I) direction, has been studied extensively both experimentally [1-5] and theoretically [1,6]. The AMR ratio is often defined by ( ) ( ) ρ θ ρ θ ρ ρ ρ ⊥",
"title": ""
}
] |
scidocsrr
|
f2bdeedc65c0a90b07861a483791028d
|
pix2code: Generating Code from a Graphical User Interface Screenshot
|
[
{
"docid": "a622ad23ba4c6db4194539cf9547af61",
"text": "Despite the substantial progress in recent years, the image captioning techniques are still far from being perfect. Sentences produced by existing methods, e.g. those based on RNNs, are often overly rigid and lacking in variability. This issue is related to a learning principle widely used in practice, that is, to maximize the likelihood of training samples. This principle encourages high resemblance to the “ground-truth” captions, while suppressing other reasonable descriptions. Conventional evaluation metrics, e.g. BLEU and METEOR, also favor such restrictive methods. In this paper, we explore an alternative approach, with the aim to improve the naturalness and diversity – two essential properties of human expression. Specifically, we propose a new framework based on Conditional Generative Adversarial Networks (CGAN), which jointly learns a generator to produce descriptions conditioned on images and an evaluator to assess how well a description fits the visual content. It is noteworthy that training a sequence generator is nontrivial. We overcome the difficulty by Policy Gradient, a strategy stemming from Reinforcement Learning, which allows the generator to receive early feedback along the way. We tested our method on two large datasets, where it performed competitively against real people in our user study and outperformed other methods on various tasks.",
"title": ""
}
] |
[
{
"docid": "14e5e95ae4422120f5f1bb8cccb2b186",
"text": "We describe an approach to understand the peculiar and counterintuitive generalization properties of deep neural networks. The approach involves going beyond worst-case theoretical capacity control frameworks that have been popular in machine learning in recent years to revisit old ideas in the statistical mechanics of neural networks. Within this approach, we present a prototypical Very Simple Deep Learning (VSDL) model, whose behavior is controlled by two control parameters, one describing an effective amount of data, or load, on the network (that decreases when noise is added to the input), and one with an effective temperature interpretation (that increases when algorithms are early stopped). Using this model, we describe how a very simple application of ideas from the statistical mechanics theory of generalization provides a strong qualitative description of recently-observed empirical results regarding the inability of deep neural networks not to overfit training data, discontinuous learning and sharp transitions in the generalization properties of learning algorithms, etc.",
"title": ""
},
{
"docid": "bf1f9f28d7077909851c41eaed31e0db",
"text": "Often the best performing supervised learning models are ensembles of hundreds or thousands of base-level classifiers. Unfortunately, the space required to store this many classifiers, and the time required to execute them at run-time, prohibits their use in applications where test sets are large (e.g. Google), where storage space is at a premium (e.g. PDAs), and where computational power is limited (e.g. hea-ring aids). We present a method for \"compressing\" large, complex ensembles into smaller, faster models, usually without significant loss in performance.",
"title": ""
},
{
"docid": "52af79f30a792ca75d9c3998593c8b07",
"text": "Research on emotion understanding in ADHD shows inconsistent results. This study uses control methods to investigate two questions about recognition and understanding of emotional expressions in 36 five- to eleven-year-old boys with ADHD: [1] Do they find this task more difficult than judging non-emotional information from faces, thus suggesting a specific social-cognitive impairment? [2] Are their judgements about faces impaired by general limitations on task performance, such as impulsive responding? In Part 1, 19 boys with ADHD and 19 age-matched typically developing boys matched facial expressions of emotion to situations, and did a control non-emotional face-processing task. Boys with ADHD performed more poorly than age-matches on both tasks, but found the emotion task harder than the non-emotion task. In Part 2, 17 boys with ADHD and 13 five-to six-year-old typically developing boys performed the same tasks, but with an ‘inhibitory scaffolding’ procedure to prevent impulsive responding. Boys with ADHD performed as well as the younger controls on the non-emotional task, but still showed impairments in the emotion task. Boys with ADHD may show poorer task performance because of general cognitive factors, but also showed selective problems in matching facial emotions to situations.",
"title": ""
},
{
"docid": "65c4d3f99a066c235bb5d946934bee05",
"text": "This paper describes a new Augmented Reality (AR) system called HoloLens developed by Microsoft, and the interaction model for supporting collaboration in this space with other users. Whereas traditional AR collaboration is between two or more head-mounted displays (HMD) users, we describe collaboration between a single HMD user and others who join the space by hitching on the view of the HMD user. The remote companions participate remotely through Skype-enabled devices such as tablets or PC's. The interaction is novel in the use of a 3D space with digital objects where the interaction by remote parties can be achieved asynchronously and reflected back to the primary user. We describe additional collaboration scenarios possible with this arrangement.",
"title": ""
},
{
"docid": "33b09a4689b3e948fc8a072c0d9672c2",
"text": "This review article identifies and discusses some of main issues and potential problems – paradoxes and pathologies – around the communication of recorded information, and points to some possible solutions. The article considers the changing contexts of information communication, with some caveats about the identification of ‘pathologies of information’, and analyses the changes over time in the way in which issues of the quantity and quality of information available have been regarded. Two main classes of problems and issues are discussed. The first comprises issues relating to the quantity and diversity of information available: information overload, information anxiety, etc. The second comprises issues relating to the changing information environment with the advent of Web 2.0: loss of identity and authority, emphasis on micro-chunking and shallow novelty, and the impermanence of information. A final section proposes some means of solution of problems and of improvements to the situation.",
"title": ""
},
{
"docid": "dd6b50a56b740d07f3d02139d16eeec4",
"text": "Mitochondria play a central role in the aging process. Studies in model organisms have started to integrate mitochondrial effects on aging with the maintenance of protein homeostasis. These findings center on the mitochondrial unfolded protein response (UPR(mt)), which has been implicated in lifespan extension in worms, flies, and mice, suggesting a conserved role in the long-term maintenance of cellular homeostasis. Here, we review current knowledge of the UPR(mt) and discuss its integration with cellular pathways known to regulate lifespan. We highlight how insight into the UPR(mt) is revolutionizing our understanding of mitochondrial lifespan extension and of the aging process.",
"title": ""
},
{
"docid": "e5a1f6546de9683e7dc90af147d73d40",
"text": "Progress in both speech and language processing has spurred efforts to support applications that rely on spoken rather than written language input. A key challenge in moving from text-based documents to such spoken documents is that spoken language lacks explicit punctuation and formatting, which can be crucial for good performance. This article describes different levels of speech segmentation, approaches to automatically recovering segment boundary locations, and experimental results demonstrating impact on several language processing tasks. The results also show a need for optimizing segmentation for the end task rather than independently.",
"title": ""
},
{
"docid": "b3066a9cde7f63ec048b4cfbee6e46a0",
"text": "As the deployment of network-centric systems increases, network attacks are proportionally increasing in intensity as well as complexity. Attack detection techniques can be broadly classified as being signature-based, classification-based, or anomaly-based. In this paper we present a multi level intrusion detection system (ML-IDS) that uses autonomic computing to automate the control and management of ML-IDS. This automation allows ML-IDS to detect network attacks and proactively protect against them. ML-IDS inspects and analyzes network traffic using three levels of granularities (traffic flow, packet header, and payload), and employs an efficient fusion decision algorithm to improve the overall detection rate and minimize the occurrence of false alarms. We have individually evaluated each of our approaches against a wide range of network attacks, and then compared the results of these approaches with the results of the combined decision fusion algorithm.",
"title": ""
},
{
"docid": "249a09e24ce502efb4669603b54b433d",
"text": "Deep Neural Networks (DNNs) are universal function approximators providing state-ofthe-art solutions on wide range of applications. Common perceptual tasks such as speech recognition, image classification, and object tracking are now commonly tackled via DNNs. Some fundamental problems remain: (1) the lack of a mathematical framework providing an explicit and interpretable input-output formula for any topology, (2) quantification of DNNs stability regarding adversarial examples (i.e. modified inputs fooling DNN predictions whilst undetectable to humans), (3) absence of generalization guarantees and controllable behaviors for ambiguous patterns, (4) leverage unlabeled data to apply DNNs to domains where expert labeling is scarce as in the medical field. Answering those points would provide theoretical perspectives for further developments based on a common ground. Furthermore, DNNs are now deployed in tremendous societal applications, pushing the need to fill this theoretical gap to ensure control, reliability, and interpretability. 1 ar X iv :1 71 0. 09 30 2v 3 [ st at .M L ] 6 N ov 2 01 7",
"title": ""
},
{
"docid": "6c3f80b453d51e364eca52656ed54e62",
"text": "Despite substantial recent research activity related to continuous delivery and deployment (CD), there has not yet been a systematic, empirical study on how the practices often associated with continuous deployment have found their way into the broader software industry. This raises the question to what extent our knowledge of the area is dominated by the peculiarities of a small number of industrial leaders, such as Facebook. To address this issue, we conducted a mixed-method empirical study, consisting of a pre-study on literature, qualitative interviews with 20 software developers or release engineers with heterogeneous backgrounds, and a Web-based quantitative survey that attracted 187 complete responses. A major trend in the results of our study is that architectural issues are currently one of the main barriers for CD adoption. Further, feature toggles as an implementation technique for partial rollouts lead to unwanted complexity, and require research on better abstractions and modelling techniques for runtime variability. Finally, we conclude that practitioners are in need for more principled approaches to release decision making, e.g., which features to conduct A/B tests on, or which metrics to evaluate.",
"title": ""
},
{
"docid": "b35849046b0f660453637bd237c4a39b",
"text": "A new type of transmission-line resonator is proposed. It is composed of a finite-long straight nonreciprocal phase-shift composite right/left handed transmission line, and both terminals are open or shorted. On the contrary to conventional transmission-line resonators or traveling-wave resonators, the resonant frequency does not depend on the total size of the resonators, but on the configuration of the unit cells. In addition, field profiles on the resonator are analogous to those of traveling-wave resonators, i.e., uniform magnitude distribution and linearly space-varying phase distribution along the resonator. The spatial gradient of the phase distribution is determined by the nonreciprocal phase constants of the transmission lines. The proposed resonator is specifically designed and fabricated by employing a normally magnetized ferrite microstrip line. The fundamental operations of the proposed resonator are demonstrated.",
"title": ""
},
{
"docid": "d5d03cdfd3a6d6c2b670794d76e91c8e",
"text": "We present RACE, a new dataset for benchmark evaluation of methods in the reading comprehension task. Collected from the English exams for middle and high school Chinese students in the age range between 12 to 18, RACE consists of near 28,000 passages and near 100,000 questions generated by human experts (English instructors), and covers a variety of topics which are carefully designed for evaluating the students’ ability in understanding and reasoning. In particular, the proportion of questions that requires reasoning is much larger in RACE than that in other benchmark datasets for reading comprehension, and there is a significant gap between the performance of the state-of-the-art models (43%) and the ceiling human performance (95%). We hope this new dataset can serve as a valuable resource for research and evaluation in machine comprehension. The dataset is freely available at http://www.cs.cmu.edu/ ̃glai1/data/race/ and the code is available at https://github.com/ cheezer/RACE_AR_baselines.",
"title": ""
},
{
"docid": "1af19c8ede0ee6ab2a9cce15c3f5af5a",
"text": "Topical crawlers are increasingly seen as a way to address the scalability limitations of universal search engines, by distributing the crawling process across users, queries, or even client computers. The context available to such crawlers can guide the navigation of links with the goal of efficiently locating highly relevant target pages. We developed a framework to fairly evaluate topical crawling algorithms under a number of performance metrics. Such a framework is employed here to evaluate different algorithms that have proven highly competitive among those proposed in the literature and in our own previous research. In particular we focus on the tradeoff between exploration and exploitation of the cues available to a crawler, and on adaptive crawlers that use machine learning techniques to guide their search. We find that the best performance is achieved by a novel combination of explorative and exploitative bias, and introduce an evolutionary crawler that surpasses the performance of the best nonadaptive crawler after sufficiently long crawls. We also analyze the computational complexity of the various crawlers and discuss how performance and complexity scale with available resources. Evolutionary crawlers achieve high efficiency and scalability by distributing the work across concurrent agents, resulting in the best performance/cost ratio.",
"title": ""
},
{
"docid": "8a31d15193b5feaa425e74eba1bbba3c",
"text": "A whole body physiologically based pharmacokinetic (PBPK) model was applied to investigate absorption, distribution, and physiologic variations on pharmacokinetics of imatinib in human body. Previously published pharmacokinetic data of the drug after intravenous (i.v.) infusion and oral administration were simulated by the PBPK model. Oral dose absorption kinetics were analyzed by adopting a compartmental absorption and transit model in gut section. Tissue/plasma partition coefficients of drug after i.v. infusion were also used for oral administration. Sensitivity analysis of the PBPK model was carried out by taking parameters that were commonly subject to variation in human. Drug concentration in adipose tissue was found to be higher than those in other tissues, suggesting that adipose tissue plays a role as a storage tissue for the drug. Variations of metabolism in liver, body weight, and blood/plasma partition coefficient were found to be important factors affecting the plasma concentration profile of drug in human body.",
"title": ""
},
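The preceding abstract describes a whole-body PBPK model of imatinib with a compartmental absorption and transit gut model. The sketch below is only a minimal one-compartment illustration of the same style of simulation (first-order oral absorption, linear clearance) using SciPy; the parameter values ka, CL, and V are arbitrary placeholders, not the paper's fitted values.

```python
# Minimal sketch of an oral-dose pharmacokinetic simulation.
# One compartment only -- NOT the whole-body PBPK model of the abstract.
# ka, CL and V are hypothetical placeholder values.
import numpy as np
from scipy.integrate import solve_ivp

ka, CL, V = 0.6, 10.0, 250.0      # 1/h, L/h, L (assumed)
dose_mg = 400.0

def pk(t, y):
    a_gut, a_central = y                          # drug amounts in mg
    return [-ka * a_gut,                          # first-order absorption from the gut
            ka * a_gut - (CL / V) * a_central]    # absorption in, linear clearance out

sol = solve_ivp(pk, (0.0, 48.0), [dose_mg, 0.0], t_eval=np.linspace(0, 48, 97))
conc = sol.y[1] / V                               # plasma concentration, mg/L
print(f"Cmax = {conc.max():.2f} mg/L at t = {sol.t[conc.argmax()]:.1f} h")
```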
{
"docid": "f4b271c7ee8bfd9f8aa4d4cf84c4efd4",
"text": "Today, and possibly for a long time to come, the full driving task is too complex an activity to be fully formalized as a sensing-acting robotics system that can be explicitly solved through model-based and learning-based approaches in order to achieve full unconstrained vehicle autonomy. Localization, mapping, scene perception, vehicle control, trajectory optimization, and higher-level planning decisions associated with autonomous vehicle development remain full of open challenges. This is especially true for unconstrained, real-world operation where the margin of allowable error is extremely small and the number of edge-cases is extremely large. Until these problems are solved, human beings will remain an integral part of the driving task, monitoring the AI system as it performs anywhere from just over 0% to just under 100% of the driving. The governing objectives of the MIT Autonomous Vehicle Technology (MIT-AVT) study are to (1) undertake large-scale real-world driving data collection that includes high-definition video to fuel the development of deep learning based internal and external perception systems, (2) gain a holistic understanding of how human beings interact with vehicle automation technology by integrating video data with vehicle state data, driver characteristics, mental models, and self-reported experiences with technology, and (3) identify how technology and other factors related to automation adoption and use can be improved in ways that save lives. In pursuing these objectives, we have instrumented 21 Tesla Model S and Model X vehicles, 2 Volvo S90 vehicles, 2 Range Rover Evoque, and 2 Cadillac CT6 vehicles for both long-term (over a year per driver) and medium term (one month per driver) naturalistic driving data collection. Furthermore, we are continually developing new methods for analysis of the massive-scale dataset collected from the instrumented vehicle fleet. The recorded data streams include IMU, GPS, CAN messages, and high-definition video streams of the driver face, the driver cabin, the forward roadway, and the instrument cluster (on select vehicles). The study is on-going and growing. To date, we have 99 participants, 11,846 days of participation, 405,807 miles, and 5.5 billion video frames. This paper presents the design of the study, the data collection hardware, the processing of the data, and the computer vision algorithms currently being used to extract actionable knowledge from the data. MIT Autonomous Vehicle",
"title": ""
},
{
"docid": "b66e878b1d907c684637bf308ee9fd3f",
"text": "The search for free parking places is a promising application for vehicular ad hoc networks (VANETs). In order to guide drivers to a free parking place at their destination, it is necessary to estimate the occupancy state of the parking lots within the destination area at time of arrival. In this paper, we present a model to predict parking lot occupancy based on information exchanged among vehicles. In particular, our model takes the age of received parking lot information and the time needed to arrive at a certain parking lot into account and estimates the future parking situation at time of arrival. It is based on queueing theory and uses a continuous-time homogeneous Markov model. We have evaluated the model in a simulation study based on a detailed model of the city of Brunswick, Germany.",
"title": ""
},
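The preceding abstract predicts parking-lot occupancy with a continuous-time homogeneous Markov model from queueing theory. The sketch below is a simplified steady-state view of the same idea, treating the lot as an M/M/N/N (Erlang loss) system; the arrival rate, mean stay, and number of spaces are hypothetical, and the paper's time-of-arrival prediction is not reproduced.

```python
# Sketch: steady-state probability that a parking lot is full, treating it as an
# M/M/N/N (Erlang loss) system. This is a simplification of the continuous-time
# Markov model in the abstract; all numbers below are hypothetical.
def erlang_b(offered_load: float, spaces: int) -> float:
    """Blocking probability via the numerically stable Erlang B recursion."""
    b = 1.0
    for n in range(1, spaces + 1):
        b = offered_load * b / (n + offered_load * b)
    return b

arrival_rate = 20.0   # cars per hour (assumed)
mean_stay_h = 1.5     # average parking duration in hours (assumed)
spaces = 40
load = arrival_rate * mean_stay_h   # offered load in Erlangs
print(f"P(lot full) = {erlang_b(load, spaces):.3f}")
```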
{
"docid": "5626f7864fa20a964adacb33d6a875e2",
"text": "This paper uses a new face detection method based on Haar-Like feature. New Haar-Like feature is an extension of the Haar-Like feature basis. This article use four new Haar-Like feature, and these features with existing Haar-Like feature are input Adaboost classifier together to select feature, finally constructed classification performance and powerful cascade classifier for face detection. After detection experiments we can see, the algorithm can get better results compared with other traditional face detection classifiers like Haar-Like.",
"title": ""
},
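The preceding abstract extends the classic Haar-like feature set used with AdaBoost cascades. As background, the sketch below shows how a standard two-rectangle Haar-like feature is evaluated in constant time from an integral image; the four new features proposed in the abstract are not reproduced, and the window coordinates are arbitrary.

```python
# Evaluating a classic two-rectangle Haar-like feature with an integral image.
# Only the general mechanism is shown; the abstract's four new features are not.
import numpy as np

def integral_image(img):
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, r0, c0, r1, c1):
    """Sum of pixels in the inclusive window [r0..r1] x [c0..c1]."""
    total = ii[r1, c1]
    if r0 > 0: total -= ii[r0 - 1, c1]
    if c0 > 0: total -= ii[r1, c0 - 1]
    if r0 > 0 and c0 > 0: total += ii[r0 - 1, c0 - 1]
    return total

def haar_two_rect(ii, r, c, h, w):
    """Left half minus right half of a vertical two-rectangle feature."""
    half = w // 2
    left = rect_sum(ii, r, c, r + h - 1, c + half - 1)
    right = rect_sum(ii, r, c + half, r + h - 1, c + w - 1)
    return left - right

img = np.random.rand(24, 24)       # stand-in for a grayscale detection window
ii = integral_image(img)
print(haar_two_rect(ii, 4, 4, 12, 12))
```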
{
"docid": "c3c1d2ec9e60300043070ea93a3c3e1b",
"text": "chology Today, March. Sherif, C. W., Sherif, W., and Nebergall, R. (1965). Attitude and Altitude Change. Philadelphia: W. B. Saunders. Stewart, E. C., and Bennett, M. J. (1991). American Cultural Patterns. Yarmouth, Maine: Intercultural Press. Tai, E. (1986). Modification of the Western Approach to Intercultural Communication for the Japanese Context. Unpublished master's thesis, Portland State University, Portland, Oregon. Thaler, A. (1970). Future Shock. New York: Bantam. Ursin, H. (1978). \"Activation, Coping and Psychosomatics.\" In E. Baade, S. Levine, and H. Ursin (Eds ) Psychobiology of Stress: A Study of Coping Men. New York: Academic Press. A Model of Intercultural Communication Competence",
"title": ""
},
{
"docid": "81537ba56a8f0b3beb29a03ed3c74425",
"text": "About ten years ago, soon after the Web’s birth, Web “search engines” were first by word of mouth. Soon, however, automated search engines became a world wide phenomenon, especially AltaVista at the beginning. I was pleasantly surprised by the amount and diversity of information made accessible by the Web search engines even in the mid 1990’s. The growth of the available Web pages is beyond most, if not all, people’s imagination. The search engines enabled people to find information, facts, and references among these Web pages.",
"title": ""
},
{
"docid": "d54fe1eb5cee89fdb4f58f45b1b3350c",
"text": "Recent advances in recombinant DNA technology have made possible the molecular analysis and prenatal diagnosis of several human genetic diseases. Fetal DNA obtained by aminocentesis or chorionic villus sampling can be analyzed by restriction enzyme digestion, with subsequent electrophoresis, Southern transfer, and specific hybridization to cloned gene or oligonucleotide probes. With This disease results from homozygosity of the sickle-cell allele (rS) at the 3globin gene locus. The S allele differs from the wild-type allele (3A) by substitution of an A in the wild-type to a T at the second position of the sixth codon of the p chain gene, resulting in the replacement of a glutamic acid by a valine in the expressed protein. For the prenatal diagnosis of sickle cell anemia, DNA ob-",
"title": ""
}
] |
scidocsrr
|
69f2b0e413994e2600631189b0239532
|
A 1-kW Contactless Energy Transfer System Based on a Rotary Transformer for Sealing Rollers
|
[
{
"docid": "dedef832d8b54cac137277afe9cd27eb",
"text": "The number of strands to minimize loss in a litz-wire transformer winding is determined. With fine stranding, the ac resistance factor decreases, but dc resistance increases because insulation occupies more of the window area. A power law to model insulation thickness is combined with standard analysis of proximity-effect losses.",
"title": ""
}
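The preceding abstract balances AC proximity-effect loss against DC resistance when choosing the number of litz-wire strands. As related background (not the paper's optimization itself), the sketch below computes the skin depth in copper, the quantity strand diameters are usually compared against; the material constants are standard and the example frequencies are arbitrary.

```python
# Helper: skin depth in copper versus frequency -- background for litz-wire
# stranding choices, not the stranding optimization from the abstract.
import math

MU0 = 4e-7 * math.pi      # H/m
RHO_CU = 1.68e-8          # ohm*m, copper resistivity near room temperature

def skin_depth(freq_hz, rho=RHO_CU, mu=MU0):
    # delta = sqrt(rho / (pi * f * mu))
    return math.sqrt(rho / (math.pi * freq_hz * mu))

for f in (50e3, 100e3, 500e3):
    print(f"{f/1e3:6.0f} kHz -> skin depth {skin_depth(f)*1e3:.3f} mm")
```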
] |
[
{
"docid": "b31ebdbd7edc0b30b0529a85fab0b612",
"text": "In this paper, we present RFMS, the real-time flood monitoring system with wireless sensor networks, which is deployed in two volcanic islands Ulleung-do and Dok-do located in the East Sea near to the Korean peninsula and developed for flood monitoring. RFMS measures river and weather conditions through wireless sensor nodes equipped with different sensors. Measured information is employed for early-warning via diverse types of services such as SMS (short message service) and a Web service.",
"title": ""
},
{
"docid": "77fa88ab8c06672d181555bf37801b40",
"text": "In coreference resolution, a fair amount of research treats mention detection as a preprocessed step and focuses on developing algorithms for clustering coreferred mentions. However, there are significant gaps between the performance on gold mentions and the performance on the real problem, when mentions are predicted from raw text via an imperfect Mention Detection (MD) module. Motivated by the goal of reducing such gaps, we develop an ILP-based joint coreference resolution and mention head formulation that is shown to yield significant improvements on coreference from raw text, outperforming existing state-ofart systems on both the ACE-2004 and the CoNLL-2012 datasets. At the same time, our joint approach is shown to improve mention detection by close to 15% F1. One key insight underlying our approach is that identifying and co-referring mention heads is not only sufficient but is more robust than working with complete mentions.",
"title": ""
},
{
"docid": "b6cd09d268aa8e140bef9fc7890538c3",
"text": "XML is quickly becoming the de facto standard for data exchange over the Internet. This is creating a new set of data management requirements involving XML, such as the need to store and query XML documents. Researchers have proposed using relational database systems to satisfy these requirements by devising ways to \"shred\" XML documents into relations, and translate XML queries into SQL queries over these relations. However, a key issue with such an approach, which has largely been ignored in the research literature, is how (and whether) the ordered XML data model can be efficiently supported by the unordered relational data model. This paper shows that XML's ordered data model can indeed be efficiently supported by a relational database system. This is accomplished by encoding order as a data value. We propose three order encoding methods that can be used to represent XML order in the relational data model, and also propose algorithms for translating ordered XPath expressions into SQL using these encoding methods. Finally, we report the results of an experimental study that investigates the performance of the proposed order encoding methods on a workload of ordered XML queries and updates.",
"title": ""
},
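The preceding abstract encodes XML document order as a data value so that ordered queries can run over a relational store. The sketch below illustrates one simple scheme in that spirit, a global document-order value stored per node during shredding; the schema, document, and query are illustrative, not the paper's three encoding methods or its XPath-to-SQL translation.

```python
# Sketch of a "global order" encoding: each element's absolute position in
# document order is stored as a column, so ordered retrieval becomes ORDER BY.
# The schema and tiny document are illustrative only.
import sqlite3
import xml.etree.ElementTree as ET

xml_doc = "<book><title>T</title><chapter>c1</chapter><chapter>c2</chapter></book>"
root = ET.fromstring(xml_doc)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE node(id INTEGER, parent INTEGER, tag TEXT, "
             "text TEXT, doc_order INTEGER)")

counter = 0
def shred(elem, parent_id):
    global counter
    counter += 1
    node_id = counter
    conn.execute("INSERT INTO node VALUES (?,?,?,?,?)",
                 (node_id, parent_id, elem.tag, (elem.text or "").strip(), node_id))
    for child in elem:
        shred(child, node_id)

shred(root, None)
# Chapters come back in document order thanks to the stored order value.
for row in conn.execute("SELECT text FROM node WHERE tag='chapter' ORDER BY doc_order"):
    print(row[0])
```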
{
"docid": "f3b76c5ad1841a56e6950f254eda8b17",
"text": "Due to the complexity of human languages, most of sentiment classification algorithms are suffered from a huge-scale dimension of vocabularies which are mostly noisy and redundant. Deep Belief Networks (DBN) tackle this problem by learning useful information in input corpus with their several hidden layers. Unfortunately, DBN is a time-consuming and computationally expensive process for large-scale applications. In this paper, a semi-supervised learning algorithm, called Deep Belief Networks with Feature Selection (DBNFS) is developed. Using our chi-squared based feature selection, the complexity of the vocabulary input is decreased since some irrelevant features are filtered which makes the learning phase of DBN more efficient. The experimental results of our proposed DBNFS shows that the proposed DBNFS can achieve higher classification accuracy and can speed up training time compared with others well-known semi-supervised learning algorithms.",
"title": ""
},
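The preceding abstract filters the vocabulary with a chi-squared test before training the DBN. A hedged scikit-learn sketch of that filtering step alone is shown below (the DBN itself is not shown); the four-document corpus and labels are toy placeholders.

```python
# Sketch of the chi-squared vocabulary filtering step only; the DBN training
# that follows in the paper is not shown. The tiny corpus is a placeholder.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2

docs = ["great movie, loved it", "terrible plot, boring film",
        "loved the acting", "boring and terrible"]
labels = [1, 0, 1, 0]                      # 1 = positive, 0 = negative

X = CountVectorizer().fit_transform(docs)  # sparse term-count matrix
X_reduced = SelectKBest(chi2, k=4).fit_transform(X, labels)
print(X_reduced.shape)                     # only the 4 highest-scoring terms remain
```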
{
"docid": "6492522f3db9c42b05d5e56efa02a7ae",
"text": "Web services promise to become a key enabling technology for B2B e-commerce. One of the most-touted features of Web services is their capability to recursively construct a Web service as a workflow of other existing Web services. The quality of service (QoS) of Web-services-based workflows may be an essential determinant when selecting constituent Web services and determining the service-level agreement with users. To make such a selection possible, it is essential to estimate the QoS of a WS workflow based on the QoSs of its constituent WSs. In the context of WS workflow, this estimation can be made by a method called QoS aggregation. While most of the existing work on QoS aggregation treats the QoS as a deterministic value, we argue that due to some uncertainty related to a WS, it is more realistic to model its QoS as a random variable, and estimate the QoS of a WS workflow probabilistically. In this paper, we identify a set of QoS metrics in the context of WS workflows, and propose a unified probabilistic model for describing QoS values of a broader spectrum of atomic and composite Web services. Emulation data are used to demonstrate the efficiency and accuracy of the proposed approach. 2007 Elsevier Inc. All rights reserved.",
"title": ""
},
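The preceding abstract models each constituent service's QoS as a random variable and aggregates it over a workflow. The sketch below is a small Monte Carlo illustration of that idea for a toy workflow, two services in sequence followed by a parallel pair, where latencies add along a sequence and the slower branch dominates a parallel split; the lognormal parameters are invented, and the paper's unified analytical model is not reproduced.

```python
# Monte Carlo sketch of probabilistic QoS aggregation for a tiny workflow:
# (A -> B) followed by (C || D). Distributions and parameters are invented.
import numpy as np

rng = np.random.default_rng(0)
N = 100_000
lat_a = rng.lognormal(mean=3.0, sigma=0.3, size=N)   # latencies in ms
lat_b = rng.lognormal(mean=3.2, sigma=0.4, size=N)
lat_c = rng.lognormal(mean=2.8, sigma=0.5, size=N)
lat_d = rng.lognormal(mean=3.1, sigma=0.2, size=N)

# Sequence: sum; parallel split: wait for the slowest branch.
workflow = lat_a + lat_b + np.maximum(lat_c, lat_d)
print(f"mean latency    {workflow.mean():.1f} ms")
print(f"95th percentile {np.percentile(workflow, 95):.1f} ms")
```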
{
"docid": "45c515da4f8e9c383f6d4e0fa6e09192",
"text": "In this paper, we demonstrate our Img2UML system tool. This system tool eliminates the gap between pixel-based diagram and engineering model, that it supports the extraction of the UML class model from images and produces an XMI file of the UML model. In addition to this, Img2UML offers a repository of UML class models of images that have been collected from the Internet. This project has both industrial and academic aims: for industry, this tool proposals a method that enables the updating of software design documentation (that typically contains UML images). For academia, this system unlocks a corpus of UML models that are publicly available, but not easily analyzable for scientific studies.",
"title": ""
},
{
"docid": "7e8b5b1b1c7720cb4d81922dc7099a99",
"text": "The anthrone reagent of Dreywood (1) has been applied to the determination of blood sugar by Durham, Bloom, Lewis, and Mandel (2), Fetz and Petrie (3), and Zipf and Waldo (4). In the procedures developed by these authors, the heat resulting from mixing sulfuric acid with water causes the reaction to take place. Greater precision is obtained by heating the mixture of anthrone, sulfuric acid, and carbohydrate for a definite time in a constant temperature bath. Scott and Melvin (5) reported that the “heat of mixing” procedure is satisfactory if accuracy no better than ~5 per cent is required. They obtained data showing a coefficient of variation of kO.48 per cent in their method, which involves heating in an ethylene glycol bath at 90” for 16 minutes. In our laboratory a method was developed for the determination of dextran in blood and urine in which a mixture of anthrone reagent and dextran solution is heated in a boiling water bath for a definite time (6). In twelve determinations by this method with dextran solution there was observed a coefficient of variation of kO.36 per cent, and in twelve determinations in which the dextran was precipitated from solution by alcohol the coefficient of variation was f0.56 per cent. Our observations with respect to the precision of the “heat of mixing” procedure, compared with heating for a definite time in a constant temperature bath, are in agreement with the work of Scott and Melvin (5). We have adapted our procedure for the determination of dextran to the estimation of the sugar in blood and spinal fluid. A stabilized anthrone reagent has been developed, and certain findings of interest are reported.",
"title": ""
},
{
"docid": "7604fdb727d378f9a63e6c5f43772236",
"text": "In this paper, we propose a novel graph kernel specifically to address a challenging problem in the field of cyber-security, namely, malware detection. Previous research has revealed the following: (1) Graph representations of programs are ideally suited for malware detection as they are robust against several attacks, (2) Besides capturing topological neighbourhoods (i.e., structural information) from these graphs it is important to capture the context under which the neighbourhoods are reachable to accurately detect malicious neighbourhoods. We observe that state-of-the-art graph kernels, such as Weisfeiler-Lehman kernel (WLK) capture the structural information well but fail to capture contextual information. To address this, we develop the Contextual Weisfeiler-Lehman kernel (CWLK) which is capable of capturing both these types of information. We show that for the malware detection problem, CWLK is more expressive and hence more accurate than WLK while maintaining comparable efficiency. Through our largescale experiments with more than 50,000 real-world Android apps, we demonstrate that CWLK outperforms two state-of-the-art graph kernels (including WLK) and three malware detection techniques by more than 5.27% and 4.87% F-measure, respectively, while maintaining high efficiency. This high accuracy and efficiency make CWLK suitable for large-scale real-world malware detection.",
"title": ""
},
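The preceding abstract builds its contextual kernel (CWLK) on top of the Weisfeiler-Lehman relabeling procedure. The sketch below shows one structural WL iteration on a tiny labelled graph in plain Python; the contextual extension itself (prefixing each neighbourhood label with the context under which it is reachable) is not reproduced, and the graph and labels are made up.

```python
# One Weisfeiler-Lehman relabeling iteration on a tiny labelled graph.
# The contextual extension (CWLK) from the abstract is not reproduced here;
# this only illustrates the structural WL step it builds on.
from collections import Counter

adjacency = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
labels = {0: "api", 1: "api", 2: "loop", 3: "write"}

def wl_iteration(adj, labels):
    new_labels = {}
    for node, neighbours in adj.items():
        neighbourhood = sorted(labels[n] for n in neighbours)
        # New label = own label concatenated with sorted neighbour labels.
        new_labels[node] = labels[node] + "|" + ",".join(neighbourhood)
    return new_labels

labels1 = wl_iteration(adjacency, labels)
feature_vector = Counter(labels1.values())   # bag of compressed labels
print(feature_vector)
```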
{
"docid": "9c8daaa2770a109604988700e4eaca27",
"text": "In this paper, the neural-network-based robust optimal control design for a class of uncertain nonlinear systems via adaptive dynamic programming approach is investigated. First, the robust controller of the original uncertain system is derived by adding a feedback gain to the optimal controller of the nominal system. It is also shown that this robust controller can achieve optimality under a specified cost function, which serves as the basic idea of the robust optimal control design. Then, a critic network is constructed to solve the Hamilton– Jacobi–Bellman equation corresponding to the nominal system, where an additional stabilizing term is introduced to verify the stability. The uniform ultimate boundedness of the closed-loop system is also proved by using the Lyapunov approach. Moreover, the obtained results are extended to solve decentralized optimal control problem of continuous-time nonlinear interconnected large-scale systems. Finally, two simulation examples are presented to illustrate the effectiveness of the established control scheme. 2014 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "bd125ed6f7d0c8759533343acbeb0da6",
"text": "A new compact omnidirectional circularly polarized (CP) cylindrical dielectric resonator antenna (DRA) with a top-loaded modified Alford loop is investigated. Fed by an axial probe, the DRA is excited in its TM01δ-mode, which radiates like a vertically polarized electric monopole. The modified Alford loop comprises a central circular patch and four curved branches. It is placed on the top of the DRA and provides an equivalent horizontally polarized magnetic dipole mode. Omnidirectional CP fields can be obtained when the two orthogonally polarized fields are equal in amplitude but different in phase by 90°. This CP DRA is applied to the design of a two-port CP diversity DRA which provides omnidirectional and broadside radiation patterns. The broadside radiation pattern is obtained by making use of the broadside HEM12δ+ 1-mode of the DRA, which is excited by a balanced slot serially fed by a microstrip line. For demonstration, both the omnidirectional CP DRA and the diversity CP DRA were designed at ~ 2.4 GHz for WLAN applications. Their S-parameters, axial ratios, radiation patterns, antenna gains, and antenna efficiencies are studied. The envelope correlation is also found for the diversity design. Reasonable agreement between the simulated and measured results is observed.",
"title": ""
},
{
"docid": "f02587ac75edc7a7880131a4db077bb2",
"text": "Single-unit recordings in monkeys have revealed neurons in the lateral prefrontal cortex that increase their firing during a delay between the presentation of information and its later use in behavior. Based on monkey lesion and neurophysiology studies, it has been proposed that a dorsal region of lateral prefrontal cortex is necessary for temporary storage of spatial information whereas a more ventral region is necessary for the maintenance of nonspatial information. Functional neuroimaging studies, however, have not clearly demonstrated such a division in humans. We present here an analysis of all reported human functional neuroimaging studies plotted onto a standardized brain. This analysis did not find evidence for a dorsal/ventral subdivision of prefrontal cortex depending on the type of material held in working memory, but a hemispheric organization was suggested (i.e., left-nonspatial; right-spatial). We also performed functional MRI studies in 16 normal subjects during two tasks designed to probe either nonspatial or spatial working memory, respectively. A group and subgroup analysis revealed similarly located activation in right middle frontal gyrus (Brodmann's area 46) in both spatial and nonspatial [working memory-control] subtractions. Based on another model of prefrontal organization [M. Petrides, Frontal lobes and behavior, Cur. Opin. Neurobiol., 4 (1994) 207-211], a reconsideration of the previous imaging literature data suggested that a dorsal/ventral subdivision of prefrontal cortex may depend upon the type of processing performed upon the information held in working memory.",
"title": ""
},
{
"docid": "d6959f0cd5ad7a534e99e3df5fa86135",
"text": "In the course of the project Virtual Try-On new VR technologies have been developed, which form the basis for a realistic, three dimensional, (real-time) simulation and visualization of individualized garments put on by virtual counterparts of real customers. To provide this cloning and dressing of people in VR, a complete process chain is being build up starting with the touchless 3-dimensional scanning of the human body up to a photo-realistic 3-dimensional presentation of the virtual customer dressed in the chosen pieces of clothing. The emerging platform for interactive selection and configuration of virtual garments, the „virtual shop“, will be accessible in real fashion boutiques as well as over the internet, thereby supplementing the conventional distribution channels.",
"title": ""
},
{
"docid": "1401f4b5f401a62c92a681c9f7e1da91",
"text": "We analyze URL and tag propagation on Twitter social network with 54 million nodes and 1.5 billion edges, one of the largest social networks studied in academia. We specifically focus on the interplay of external and network influences. We attribute propagation to the network whenever a user mentions a tag or URL after one of its neighbors has previously mentioned it, and to external influence otherwise. We develop a new metric to measure external influence and an efficient algorithm to calculate it. The insight we obtain from the external influence metric paired with the analysis of cascade dynamics of the network influence, not only validates some of the previously observed phenomena in other social networks but also provides new insight into the interplay of network and external influences over the lifetime of memes in the network.",
"title": ""
},
{
"docid": "92cbf1e02f94a1aa2e8331601a0bdbba",
"text": "The binary classification task of Paraphrase Identification (PI) is vital in the field of Natural Language Processing. The objective of this study is to propose an optimized Deep Learning architecture in combination with usage of word embedding technique for the classification of sentence pairs as paraphrases or not. For Paraphrase Identification task, this paper proposes a hybrid Deep Learning architecture aiming to capture as many features from the inputted sentences in natural language. The aim is to accurately classify whether the pair of sentences are paraphrases of each other or not. The importance of using an optimized word-embedding approach in combination with the proposed hybrid Deep Learning architecture is explained. This study also deals with the lack of the training data required to generate a robust Deep Learning model. The intention is to harness the memorizing power of Long Short Term Memory (LSTM) neural network and the feature extracting capability of Convolutional Neural Network (CNN) in combination with the optimized word-embedding approach which aims to capture wide-sentential contexts and word-order. The proposed model is compared with existing systems and it surpasses all the existing systems in the performance in terms of accuracy.",
"title": ""
},
{
"docid": "fdc875181fe37e6b469d07e0e580fadb",
"text": "Attention mechanism has recently attracted increasing attentions in the area of facial action unit (AU) detection. By finding the region of interest (ROI) of each AU with the attention mechanism, AU related local features can be captured. Most existing attention based AU detection works use prior knowledge to generate fixed attentions or refine the predefined attentions within a small range, which limits their capacity to model various AUs. In this paper, we propose a novel end-to-end weakly-supervised attention and relation learning framework for AU detection with only AU labels, which has not been explored before. In particular, multi-scale features shared by each AU are learned firstly, and then both channel-wise attentions and spatial attentions are learned to select and extract AU related local features. Moreover, pixellevel relations for AUs are further captured to refine spatial attentions so as to extract more relevant local features. Extensive experiments on BP4D and DISFA benchmarks demonstrate that our framework (i) outperforms the state-of-the-art methods for AU detection, and (ii) can find the ROI of each AU and capture the relations among AUs adaptively.",
"title": ""
},
{
"docid": "bca81a5b34376e5a6090e528a583b4f4",
"text": "There has been considerable debate in the literature about the relative merits of information processing versus dynamical approaches to understanding cognitive processes. In this article, we explore the relationship between these two styles of explanation using a model agent evolved to solve a relational categorization task. Specifically, we separately analyze the operation of this agent using the mathematical tools of information theory and dynamical systems theory. Information-theoretic analysis reveals how task-relevant information flows through the system to be combined into a categorization decision. Dynamical analysis reveals the key geometrical and temporal interrelationships underlying the categorization decision. Finally, we propose a framework for directly relating these two different styles of explanation and discuss the possible implications of our analysis for some of the ongoing debates in cognitive science.",
"title": ""
},
{
"docid": "4746d9ecd4773fa35d516bd40dbfb64b",
"text": "Deep learning has been successfully applied to image super resolution (SR). In this paper, we propose a deep joint super resolution (DJSR) model to exploit both external and self similarities for SR. A Stacked Denoising Convolutional Auto Encoder (SDCAE) is first pre-trained on external examples with proper data augmentations. It is then fine-tuned with multi-scale self examples from each input, where the reliability of self examples is explicitly taken into account. We also enhance the model performance by sub-model training and selection. The DJSR model is extensively evaluated and compared with state-of-the-arts, and show noticeable performance improvements both quantitatively and perceptually on a wide range of images.",
"title": ""
},
{
"docid": "032f444d4844c4fa9a3e948cbbc0818a",
"text": "This paper presents a microstrip dual-band bandpass filter (BPF) based on cross-shaped resonator and spurline. It is shown that spurlines added into input/output ports of a cross-shaped resonator generate an additional notch band. Using even and odd-mode analysis the proposed structure is realized and designed. The proposed bandpass filter has dual passband from 1.9 GHz to 2.4 GHz and 9.5 GHz to 11.5 GHz.",
"title": ""
}
] |
scidocsrr
|
5c9a32aa0bb666342235cca64b5b0a9e
|
Simulating Affective Touch: Using a Vibrotactile Array to Generate Pleasant Stroking Sensations
|
[
{
"docid": "d647fc2b5635a3dfcebf7843fef3434c",
"text": "Touch is our primary non-verbal communication channel for conveying intimate emotions and as such essential for our physical and emotional wellbeing. In our digital age, human social interaction is often mediated. However, even though there is increasing evidence that mediated touch affords affective communication, current communication systems (such as videoconferencing) still do not support communication through the sense of touch. As a result, mediated communication does not provide the intense affective experience of co-located communication. The need for ICT mediated or generated touch as an intuitive way of social communication is even further emphasized by the growing interest in the use of touch-enabled agents and robots for healthcare, teaching, and telepresence applications. Here, we review the important role of social touch in our daily life and the available evidence that affective touch can be mediated reliably between humans and between humans and digital agents. We base our observations on evidence from psychology, computer science, sociology, and neuroscience with focus on the first two. Our review shows that mediated affective touch can modulate physiological responses, increase trust and affection, help to establish bonds between humans and avatars or robots, and initiate pro-social behavior. We argue that ICT mediated or generated social touch can (a) intensify the perceived social presence of remote communication partners and (b) enable computer systems to more effectively convey affective information. However, this research field on the crossroads of ICT and psychology is still embryonic and we identify several topics that can help to mature the field in the following areas: establishing an overarching theoretical framework, employing better researchmethodologies, developing basic social touch building blocks, and solving specific ICT challenges.",
"title": ""
}
] |
[
{
"docid": "643d75042a38c24b0e4130cb246fc543",
"text": "Silicon carbide (SiC) switching power devices (MOSFETs, JFETs) of 1200 V rating are now commercially available, and in conjunction with SiC diodes, they offer substantially reduced switching losses relative to silicon (Si) insulated gate bipolar transistors (IGBTs) paired with fast-recovery diodes. Low-voltage industrial variable-speed drives are a key application for 1200 V devices, and there is great interest in the replacement of the Si IGBTs and diodes that presently dominate in this application with SiC-based devices. However, much of the performance benefit of SiC-based devices is due to their increased switching speeds ( di/dt, dv/ dt), which raises the issues of increased electromagnetic interference (EMI) generation and detrimental effects on the reliability of inverter-fed electrical machines. In this paper, the tradeoff between switching losses and the high-frequency spectral amplitude of the device switching waveforms is quantified experimentally for all-Si, Si-SiC, and all-SiC device combinations. While exploiting the full switching-speed capability of SiC-based devices results in significantly increased EMI generation, the all-SiC combination provides a 70% reduction in switching losses relative to all-Si when operated at comparable dv/dt. It is also shown that the loss-EMI tradeoff obtained with the Si-SiC device combination can be significantly improved by driving the IGBT with a modified gate voltage profile.",
"title": ""
},
{
"docid": "920b3c1264ad303bbb1a263ecf7c1162",
"text": "Nowadays, operational quality and robustness of cellular networks are among the hottest topics wireless communications research. As a response to a growing need in reduction of expenses for mobile operators, 3rd Generation Partnership Project (3GPP) initiated work on Minimization of Drive Tests (MDT). There are several major areas of standardization related to MDT, such as coverage, capacity, mobility optimization and verification of end user quality [1]. This paper presents results of the research devoted to Quality of Service (QoS) verification for MDT. The main idea is to jointly observe the user experienced QoS in terms of throughput, and corresponding radio conditions. Also the necessity to supplement the existing MDT metrics with the new reporting types is elaborated.",
"title": ""
},
{
"docid": "0f4d88770c1ce97a3f562d389116cf25",
"text": "BACKGROUND\nWhen a change of opioid is considered, equianalgesic dose tables are used. These tables generally propose a dose ratio of 5:1 between morphine and hydromorphone. In the case of a change from subcutaneous hydromorphone to methadone, dose ratios ranging from 1:6 to 1:10 are proposed. The purpose of this study was to review the analgesic dose ratios for methadone compared with hydromorphone.\n\n\nMETHODS\nIn a retrospective study, 48 cases of medication changes from morphine to hydromorphone, and 65 changes between hydromorphone and methadone were identified. the reason for the change, the analgesic dose, and pain intensity were obtained.\n\n\nRESULTS\nThe dose ratios between morphine and hydromorphone and vice versa were found to be 5.33 and 0.28, respectively (similar to expected results). However, the hydromorphone/methadone ratio was found to be 1.14:1 (5 to 10 times higher than expected). Although the dose ratios of hydromorphone/morphine and vice versa did not change according to a previous opioid dose, the hydromorphone/methadone ratio correlated with total opioid dose (correlation coefficient = 0.41 P < 0.001) and was 1.6 (range, 0.3-14.4) in patients receiving more than 330 mg of hydromorphone per day prior to the change, versus 0.95 (range, 0.2-12.3) in patients receiving ae330 mg of hydromorphone per day (P = 0.023).\n\n\nCONCLUSIONS\nThese results suggest that only partial tolerance develops between methadone and hydromorphone. Methadone is much more potent than previously described and any change should start at a lower equivalent dose.",
"title": ""
},
{
"docid": "b207f2efab5abaf254ec34a8c1559d49",
"text": "Image processing algorithms used in surveillance systems are designed to work under good weather conditions. For example, in a rainy day, raindrops are adhered to camera lenses and windshields, resulting in partial occlusions in acquired images, and making performance of image processing algorithms significantly degraded. To improve performance of surveillance systems in a rainy day, raindrops have to be automatically detected and removed from images. Addressing this problem, this paper proposes an adherent raindrop detection method from a single image which does not need training data and special devices. The proposed method employs image segmentation using Maximally Stable Extremal Regions (MSER) and qualitative metrics to detect adherent raindrops from the result of MSER-based image segmentation. Through a set of experiments, we demonstrate that the proposed method exhibits efficient performance of adherent raindrop detection compared with conventional methods.",
"title": ""
},
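The preceding abstract segments the image with MSER and then filters candidate regions with qualitative metrics. The sketch below covers only the MSER step with OpenCV plus a crude size/aspect filter as a stand-in; the paper's raindrop-specific metrics are not reproduced, "frame.png" is a placeholder path, and the thresholds are arbitrary.

```python
# Sketch of the MSER segmentation step only; the qualitative raindrop metrics
# from the abstract are not reproduced. "frame.png" is a placeholder path and
# the size/aspect thresholds below are arbitrary.
import cv2

gray = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
mser = cv2.MSER_create()                 # default parameters; tune for real use
regions, bboxes = mser.detectRegions(gray)

# Keep only small, roughly square regions as crude raindrop candidates.
candidates = [(x, y, w, h) for (x, y, w, h) in bboxes
              if 5 < w < 60 and 5 < h < 60 and 0.5 < w / h < 2.0]
print(f"{len(candidates)} candidate regions")
```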
{
"docid": "840d4b26eec402038b9b3462fc0a98ac",
"text": "A bench model of the new generation intelligent universal transformer (IUT) has been recently developed for distribution applications. The distribution IUT employs high-voltage semiconductor device technologies along with multilevel converter circuits for medium-voltage grid connection. This paper briefly describes the basic operation of the IUT and its experimental setup. Performances under source and load disturbances are characterized with extensive tests using a voltage sag generator and various linear and nonlinear loads. Experimental results demonstrate that IUT input and output can avoid direct impact from its opposite side disturbances. The output voltage is well regulated when the voltage sag is applied to the input. The input voltage and current maintains clean sinusoidal and unity power factor when output is nonlinear load. Under load transients, the input and output voltages remain well regulated. These key features prove that the power quality performance of IUT is far superior to that of conventional copper-and-iron based transformers",
"title": ""
},
{
"docid": "8ab791e9db930fd27f6459e72a1687e5",
"text": "The problem of indexing time series has attracted much interest. Most algorithms used to index time series utilize the Euclidean distance or some variation thereof. However, it has been forcefully shown that the Euclidean distance is a very brittle distance measure. Dynamic time warping (DTW) is a much more robust distance measure for time series, allowing similar shapes to match even if they are out of phase in the time axis. Because of this flexibility, DTW is widely used in science, medicine, industry and finance. Unfortunately, however, DTW does not obey the triangular inequality and thus has resisted attempts at exact indexing. Instead, many researchers have introduced approximate indexing techniques or abandoned the idea of indexing and concentrated on speeding up sequential searches. In this work, we introduce a novel technique for the exact indexing of DTW. We prove that our method guarantees no false dismissals and we demonstrate its vast superiority over all competing approaches in the largest and most comprehensive set of time series indexing experiments ever undertaken.",
"title": ""
},
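The preceding abstract achieves exact DTW indexing through a lower-bounding function that prunes candidates without false dismissals. The sketch below is an envelope-based lower bound in that spirit (LB_Keogh style): build the query's upper/lower envelope within a warping window r and sum the squared amounts by which the candidate escapes it; the series and window size are arbitrary examples.

```python
# Envelope-based lower bound on DTW distance (LB_Keogh-style sketch). The bound
# never exceeds the true DTW distance, so it can prune candidates safely.
import numpy as np

def lb_keogh(query, candidate, r):
    n = len(query)
    total = 0.0
    for i in range(n):
        lo, hi = max(0, i - r), min(n, i + r + 1)
        upper = query[lo:hi].max()       # upper envelope at position i
        lower = query[lo:hi].min()       # lower envelope at position i
        if candidate[i] > upper:
            total += (candidate[i] - upper) ** 2
        elif candidate[i] < lower:
            total += (candidate[i] - lower) ** 2
    return total

q = np.sin(np.linspace(0, 6, 100))
c = np.sin(np.linspace(0.3, 6.3, 100)) + 0.05
print(lb_keogh(q, c, r=5))
```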
{
"docid": "244c8bea041869e5f03b93dca71dad7c",
"text": "The widespread deployment of Automated Fingerprint Identification Systems (AFIS) in law enforcement and border control applications has heightened the need for ensuring that these systems are not compromised. While several issues related to fingerprint system security have been investigated, including the use of fake fingerprints for masquerading identity, the problem of fingerprint alteration or obfuscation has received very little attention. Fingerprint obfuscation refers to the deliberate alteration of the fingerprint pattern by an individual for the purpose of masking his identity. Several cases of fingerprint obfuscation have been reported in the press. Fingerprint image quality assessment software (e.g., NFIQ) cannot always detect altered fingerprints since the implicit image quality due to alteration may not change significantly. The main contributions of this paper are: 1) compiling case studies of incidents where individuals were found to have altered their fingerprints for circumventing AFIS, 2) investigating the impact of fingerprint alteration on the accuracy of a commercial fingerprint matcher, 3) classifying the alterations into three major categories and suggesting possible countermeasures, 4) developing a technique to automatically detect altered fingerprints based on analyzing orientation field and minutiae distribution, and 5) evaluating the proposed technique and the NFIQ algorithm on a large database of altered fingerprints provided by a law enforcement agency. Experimental results show the feasibility of the proposed approach in detecting altered fingerprints and highlight the need to further pursue this problem.",
"title": ""
},
{
"docid": "eb1313075f4870dd0c123233ea297fd1",
"text": "This work summarizes our research on the topic of the application of unsupervised learning algorithms to the problem of intrusion detection, and in particular our main research results in network intrusion detection. We proposed a novel, two tier architecture for network intrusion detection, capable of clustering packet payloads and correlating anomalies in the packet stream. We show the experiments we conducted on such architecture, we give performance results, and we compare our achievements with other comparable existing systems.",
"title": ""
},
{
"docid": "e4a3065209c9dde50267358cbe6829b7",
"text": "OBJECTIVES\nWith the exponential increase in the number of articles published every year in the biomedical domain, there is a need to build automated systems to extract unknown information from the articles published. Text mining techniques enable the extraction of unknown knowledge from unstructured documents.\n\n\nMETHODS\nThis paper reviews text mining processes in detail and the software tools available to carry out text mining. It also reviews the roles and applications of text mining in the biomedical domain.\n\n\nRESULTS\nText mining processes, such as search and retrieval of documents, pre-processing of documents, natural language processing, methods for text clustering, and methods for text classification are described in detail.\n\n\nCONCLUSIONS\nText mining techniques can facilitate the mining of vast amounts of knowledge on a given topic from published biomedical research articles and draw meaningful conclusions that are not possible otherwise.",
"title": ""
},
{
"docid": "ee785105669d58052ad3b3a3954ba9fb",
"text": "Data visualization is an essential component of genomic data analysis. However, the size and diversity of the data sets produced by today's sequencing and array-based profiling methods present major challenges to visualization tools. The Integrative Genomics Viewer (IGV) is a high-performance viewer that efficiently handles large heterogeneous data sets, while providing a smooth and intuitive user experience at all levels of genome resolution. A key characteristic of IGV is its focus on the integrative nature of genomic studies, with support for both array-based and next-generation sequencing data, and the integration of clinical and phenotypic data. Although IGV is often used to view genomic data from public sources, its primary emphasis is to support researchers who wish to visualize and explore their own data sets or those from colleagues. To that end, IGV supports flexible loading of local and remote data sets, and is optimized to provide high-performance data visualization and exploration on standard desktop systems. IGV is freely available for download from http://www.broadinstitute.org/igv, under a GNU LGPL open-source license.",
"title": ""
},
{
"docid": "725f9c045b5618fe0feb39a5f4cb4d8c",
"text": "This paper discusses the experiments carried out by us at Jadavpur University as part of the participation in ICON 2015 task: POS Tagging for Code-mixed Indian Social Media Text. The tool that we have developed for the task is based on Trigram Hidden Markov Model that utilizes information from dictionary as well as some other word level features to enhance the observation probabilities of the known tokens as well as unknown tokens. We submitted runs for Bengali-English, Hindi-English and Tamil-English Language pairs. Our system has been trained and tested on the datasets released for ICON 2015 shared task: POS Tagging For Code-mixed Indian Social Media Text. In constrained mode, our system obtains average overall accuracy (averaged over all three language pairs) of 75.60% which is very close to other participating two systems (76.79% for IIITH and 75.79% for AMRITA_CEN) ranked higher than our system. In unconstrained mode, our system obtains average overall accuracy of 70.65% which is also close to the system (72.85% for AMRITA_CEN) which obtains the highest average overall accuracy.",
"title": ""
},
{
"docid": "6e6c39c8511abf532197879adbf1f4df",
"text": "Indian cities face a transport crisis characterized by levels of congestion, noise, pollution, traffic fatalities and injuries, and inequity far exceeding those in most European and North American cities. India's transport crisis has been exacerbated by the extremely rapid growth of India's largest cities in a context of low incomes, limited and outdated transport infrastructure, rampant suburban sprawl, sharply rising motor vehicle ownership and use, deteriorating bus services, a wide range of motorized and non-motorized transport modes sharing roadways, and inadequate as well as uncoordinated land use and transport planning. This article summarizes key trends in India's transport system and travel behavior, analyzes the extent and causes of the most severe problems, and recommends nine policy improvements that would help mitigate India's urban transport crisis. a b Purchase Export",
"title": ""
},
{
"docid": "921062a73e2b4a5ab1d994ac22b04918",
"text": "This study describes a new corpus of over 60,000 hand-annotated metadiscourse acts from 106 OpenCourseWare lectures, from two different disciplines: Physics and Economics. Metadiscourse is a set of linguistic expressions that signal different functions in the discourse. This type of language is hypothesised to be helpful in finding a structure in unstructured text, such as lectures discourse. A brief summary is provided about the annotation scheme and labelling procedures, inter-annotator reliability statistics, overall distributional statistics, a description of auxiliary data that will be distributed with the corpus, and information relating to how to obtain the data. The results provide a deeper understanding of lecture structure and confirm the reliable coding of metadiscursive acts in academic lectures across different disciplines. The next stage of our research will be to build a classification model to automate the tagging process, instead of manual annotation, which take time and efforts. This is in addition to the use of these tags as indicators of the higher level structure of lecture discourse.",
"title": ""
},
{
"docid": "4e8fc4ca98ca0498885c45a683cea282",
"text": "Recent architectures for the advanced metering infrastructure (AMI) have incorporated several back-end systems that handle billing and other smart grid control operations. The non-availability of metering data when needed or the untimely delivery of data needed for control operations will undermine the activities of these back-end systems. Unfortunately, there are concerns that cyber attacks such as distributed denial of service (DDoS) will manifest in magnitude and complexity in a smart grid AMI network. Such attacks will range from a delay in the availability of end user's metering data to complete denial in the case of a grounded network. This paper proposes a cloud-based (IaaS) firewall for the mitigation of DDoS attacks in a smart grid AMI network. The proposed firewall has the ability of not only mitigating the effects of DDoS attack but can prevent the attack before they are launched. Our proposed firewall system leverages on cloud computing technology which has an added advantage of reducing the burden of data computations and storage for smart grid AMI back-end systems. The openflow firewall proposed in this study is a better security solution with regards to the traditional on-premises DoS solutions which cannot cope with the wide range of new attacks targeting the smart grid AMI network infrastructure. Simulation results generated from the study show that our model can guarantee the availability of metering/control data and could be used to improve the QoS of the smart grid AMI network under a DDoS attack scenario.",
"title": ""
},
{
"docid": "a81c5da3fc32903dd70e90b020c9394a",
"text": "We build a grammatical error correction (GEC) system primarily based on the state-of-the-art statistical machine translation (SMT) approach, using task-specific features and tuning, and further enhance it with the modeling power of neural network joint models. The SMT-based system is weak in generalizing beyond patterns seen during training and lacks granularity below the word level. To address this issue, we incorporate a character-level SMT component targeting the misspelled words that the original SMT-based system fails to correct. Our final system achieves 53.14% F0.5 score on the benchmark CoNLL-2014 test set, an improvement of 3.62% F0.5 over the best previous published score.",
"title": ""
},
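The preceding abstract reports F0.5, the standard metric in grammatical error correction, which weights precision more heavily than recall. The small helper below computes F_beta from precision and recall; the example numbers are illustrative only, not results from the paper.

```python
# F_beta from precision and recall; beta = 0.5 weights precision more heavily
# than recall, the convention in grammatical error correction evaluation.
def f_beta(precision: float, recall: float, beta: float = 0.5) -> float:
    if precision == 0.0 and recall == 0.0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Illustrative numbers only (not from the paper):
print(f"{f_beta(0.60, 0.35):.4f}")
```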
{
"docid": "aa1ce09a8ad407ce413d9e56e13e79d4",
"text": "A boost-flyback converter was investigated for its ability to charge separate battery stacks from a low-voltage high-current renewable energy source. A low voltage (12V) battery was connected in the boost configuration, and a high voltage (330V) battery stack was connected in the flyback configuration. This converter works extremely well for this application because it gives charging priority to the low voltage battery and dumps the reserve energy to the high voltage stack. As the low-voltage battery approaches full charge, more power is adaptively directed to the high-voltage stack, until finally the charging of the low voltage battery stops. A two-secondary flyback is also capable of this adaptive charging, but the boost-flyback does it with much higher conversion efficiency, and with a simpler (less expensive) transformer design.",
"title": ""
},
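The preceding abstract charges a 12 V battery from the boost output and a 330 V stack from the flyback output of a combined boost-flyback stage. The sketch below evaluates the ideal continuous-conduction-mode transfer ratios commonly used for such a stage, Vin/(1-D) for the boost output and n·D/(1-D)·Vin for the flyback output; the input voltage, duty cycle, and turns ratio are illustrative assumptions, not values from the paper, and the real converter's behaviour (losses, adaptive charging) is not modelled.

```python
# Ideal CCM transfer ratios for a typical boost-flyback stage. All numbers are
# illustrative assumptions; the paper's adaptive charging behaviour is not shown.
def boost_out(v_in: float, duty: float) -> float:
    return v_in / (1.0 - duty)

def flyback_out(v_in: float, duty: float, turns_ratio: float) -> float:
    return turns_ratio * duty / (1.0 - duty) * v_in

v_in, duty, n = 5.0, 0.6, 44.0
print(f"boost output   = {boost_out(v_in, duty):.1f} V")
print(f"flyback output = {flyback_out(v_in, duty, n):.1f} V")
```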
{
"docid": "3f24f730953fb9719087cad6ffb3e494",
"text": "It is very difficult for human beings to manually summarize large documents of text. Text summarization solves this problem. Nowadays, Text summarization systems are among the most attractive research areas. Text summarization (TS) is used to provide a shorter version of the original text and keeping the overall meaning. There are various methods that aim to find out well-formed summaries. One of the most commonly used methods is the Latent Semantic Analysis (LSA). In this review, we present a comparative study among almost algorithms based on Latent Semantic Analysis (LSA) approach.",
"title": ""
},
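The preceding abstract surveys LSA-based summarization algorithms. The sketch below implements one classic variant of the family (selecting the best-scoring sentence per latent topic from the SVD of a term-by-sentence matrix); it is only one of the several variants such surveys compare, and the four example sentences are placeholders.

```python
# Sketch of one classic LSA summarization variant: build a term-by-sentence
# matrix, take its SVD, and pick the best-scoring sentence for each of the top
# latent topics. Other LSA variants exist; this shows just one of them.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

sentences = [
    "The cat sat on the mat.",
    "Dogs and cats are common pets.",
    "Stock prices fell sharply on Monday.",
    "Investors worried about falling markets.",
]
A = TfidfVectorizer().fit_transform(sentences).T.toarray()  # terms x sentences
U, s, Vt = np.linalg.svd(A, full_matrices=False)

k = 2                                        # number of topics = summary length
chosen = {int(np.argmax(np.abs(Vt[i]))) for i in range(k)}
for idx in sorted(chosen):
    print(sentences[idx])
```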
{
"docid": "5af5936ec0d889ab19bd8c6c8e8ebc35",
"text": "Development in the wireless communication systems is the evolving field of research in today’s world. The demand of high data rate, low latency at the minimum cost by the user requires many changes in the hardware organization. The use of digital modulation techniques like OFDM assures the reliability of communication in addition to providing flexibility and robustness. Modifications in the hardware structure can be replaced by the change in software only which gives birth to Software Define Radio (SDR): a radio which is more flexible as compared to conventional radio and can perform signal processing at the minimum cost. GNU Radio with the help of Universal Software Peripheral Radio (USRP) provides flexible and the cost effective SDR platform for the purpose of real time video transmission. The results given in this paper are taken from the experiment performed on USRP-1 along with the GNU Radio version 3.2.2.",
"title": ""
},
{
"docid": "788c9479bc5eb1a7bb36bfd774280f45",
"text": "The low-density parity-check (LDPC) codes are used to achieve excellent performance with low encoding and decoding complexity. One major criticism concerning LDPC codes has been their apparent high encoding complexity and memory inefficient nature due to large parity check matrix. More generally, we consider the encoding problem for codes specified by sparse parity-check matrices. We show how to exploit the sparseness of the parity-check matrix to obtain efficient encoders. A new technique for efficient encoding of LDP Codes based on the known concept of approximate lower triangulation (ALT) is introduced. The algorithm computes parity check symbols by solving a set of sparse equations, and the triangular factorization is employed to solve the equations efficiently. The key of the encoding method is to get the systematic approximate lower triangular (SALT) form of the Parity Check Matrix with minimum gap g, because the smaller the gap is, the more efficient encoding will be obtained. The functions are to be coded in MATLAB.",
"title": ""
},
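As a rough illustration of the triangular-solve idea behind ALT-style encoding described above, the sketch below handles the simplified gap-zero case: the parity-check matrix is assumed to already be in the form H = [A | T] over GF(2), with T lower triangular and a unit diagonal, so the parity bits follow from forward substitution on T p = A s. The small matrix is invented for the example and is not from the paper.

```python
# Simplified GF(2) encoder sketch for H = [A | T] with T lower triangular (gap g = 0).
import numpy as np

def encode(H, s, k):
    """Return codeword [s | p] such that H @ [s | p] = 0 (mod 2)."""
    A, T = H[:, :k], H[:, k:]
    rhs = A @ s % 2                       # right-hand side of T p = A s (mod 2)
    m = T.shape[0]
    p = np.zeros(m, dtype=int)
    for i in range(m):                    # forward substitution over GF(2)
        p[i] = (rhs[i] + T[i, :i] @ p[:i]) % 2
    return np.concatenate([s, p])

# Toy parity-check matrix: 3 checks, 3 message bits, 3 parity bits.
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 1, 1, 0],
              [1, 0, 1, 0, 1, 1]])
s = np.array([1, 0, 1])
c = encode(H, s, k=3)
print(c, (H @ c) % 2)                     # syndrome should be all zeros
```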
{
"docid": "b40bbfc19072efc645e5f1d6fb1d89e7",
"text": "With the development of information technologies, a great amount of semantic data is being generated on the web. Consequently, finding efficient ways of accessing this data becomes more and more important. Question answering is a good compromise between intuitiveness and expressivity, which has attracted the attention of researchers from different communities. In this paper, we propose an intelligent questing answering system for answering questions about concepts. It is based on ConceptRDF, which is an RDF presentation of the ConceptNet knowledge base. We use it as a knowledge base for answering questions. Our experimental results show that our approach is promising: it can answer questions about concepts at a satisfactory level of accuracy (reaches 94.5%).",
"title": ""
}
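To give a feel for how a concept question might be answered over an RDF rendering of ConceptNet, here is a hedged sketch using the rdflib library. The file name conceptrdf_subset.ttl and the cn:IsA property are placeholders invented for illustration, not the schema used by ConceptRDF.

```python
# Hedged sketch: answering a "what is X?" style concept question over an RDF graph.
# The data file and the cn:IsA property are hypothetical placeholders.
import rdflib

g = rdflib.Graph()
g.parse("conceptrdf_subset.ttl", format="turtle")   # assumed local ConceptRDF excerpt

query = """
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX cn:   <http://example.org/conceptnet/>
SELECT ?answerLabel WHERE {
    ?concept rdfs:label "dog" .
    ?concept cn:IsA ?answer .
    ?answer  rdfs:label ?answerLabel .
}
"""

for row in g.query(query):
    print("A dog is a(n):", row.answerLabel)
```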
] |
scidocsrr
|
2d27417e796895bade46ed2ab2c9f5d6
|
Lip Reading in Profile
|
[
{
"docid": "c2daec5b85a4e8eea614d855c6549ef0",
"text": "An audio-visual corpus has been collected to support the use of common material in speech perception and automatic speech recognition studies. The corpus consists of high-quality audio and video recordings of 1000 sentences spoken by each of 34 talkers. Sentences are simple, syntactically identical phrases such as \"place green at B 4 now\". Intelligibility tests using the audio signals suggest that the material is easily identifiable in quiet and low levels of stationary noise. The annotated corpus is available on the web for research use.",
"title": ""
}
] |
[
{
"docid": "5475df204bca627e73b077594af29d47",
"text": "Multilayered artificial neural networks are becoming a pervasive tool in a host of application fields. At the heart of this deep learning revolution are familiar concepts from applied and computational mathematics; notably, in calculus, approximation theory, optimization and linear algebra. This article provides a very brief introduction to the basic ideas that underlie deep learning from an applied mathematics perspective. Our target audience includes postgraduate and final year undergraduate students in mathematics who are keen to learn about the area. The article may also be useful for instructors in mathematics who wish to enliven their classes with references to the application of deep learning techniques. We focus on three fundamental questions: what is a deep neural network? how is a network trained? what is the stochastic gradient method? We illustrate the ideas with a short MATLAB code that sets up and trains a network. We also show the use of state-of-the art software on a large scale image classification problem. We finish with references to the current literature.",
"title": ""
},
{
"docid": "e9f7bf5eb9bf3c2c3ff7820ffb34cb93",
"text": "BACKGROUND\nThe transconjunctival lower eyelid blepharoplasty is advantageous for its quick recovery and low complication rates. Conventional techniques rely on fat removal to contour the lower eyelid. This article describes the authors' extended transconjunctival lower eyelid blepharoplasty technique that takes dissection beyond the orbital rim to address aging changes on the midcheek.\n\n\nMETHODS\nFrom December of 2012 to December of 2015, 54 patients underwent this procedure. Through a transconjunctival incision, the preseptal space was entered and excess orbital fat pads were excised. Medially, the origins of the palpebral part of the orbicularis oculi, the tear trough ligament, and orbital part of the orbicularis oculi were sequentially released, connecting the dissection with the premaxillary space. More laterally, the orbicularis retaining ligament was released, connecting the dissection with the prezygomatic space. Excised orbital fat was then grafted under the released tear trough ligament to correct the tear trough deformity. When the patients had significant maxillary retrusion, structural fat grafting was performed at the same time.\n\n\nRESULTS\nThe mean follow-up was 10 months. High satisfaction was noted among the patients treated with this technique. The revision rate was 2 percent. Complication rates were low. No chemosis, prolonged swelling, lower eyelid retraction, or ectropion was seen in any patients.\n\n\nCONCLUSION\nThe extended transconjunctival lower blepharoplasty using the midcheek soft-tissue spaces is a safe and effective approach for treating patients presenting with eye bags and tear trough deformity.\n\n\nCLINICAL QUESTION/LEVEL OF EVIDENCE\nTherapeutic, V.",
"title": ""
},
{
"docid": "21384ea8d80efbf2440fb09a61b03be2",
"text": "We present a new class of statistical de- anonymization attacks against high-dimensional micro-data, such as individual preferences, recommendations, transaction records and so on. Our techniques are robust to perturbation in the data and tolerate some mistakes in the adversary's background knowledge. We apply our de-anonymization methodology to the Netflix Prize dataset, which contains anonymous movie ratings of 500,000 subscribers of Netflix, the world's largest online movie rental service. We demonstrate that an adversary who knows only a little bit about an individual subscriber can easily identify this subscriber's record in the dataset. Using the Internet Movie Database as the source of background knowledge, we successfully identified the Netflix records of known users, uncovering their apparent political preferences and other potentially sensitive information.",
"title": ""
},
{
"docid": "de94c8531839326cc549b97855f8348a",
"text": "In this paper, we investigate the prediction of daily stock prices of the top five companies in the Thai SET50 index. A Recurrent Neural Network (RNN) with Long Short-Term Memory (LSTM) is applied to forecast the next daily stock price (High, Low, Open, Close). Deep Belief Network (DBN) is applied to compare the result with LSTM. The test data are CPALL, SCB, SCC, KBANK, and PTT from the SET50 index. The purpose of selecting these five stocks is to compare how the model performs in different stocks with various volatility. There are two experiments of five stocks from the SET50 index. The first experiment compared the MAPE with different length of training data. The experiment is conducted by using training data for one, three, and five-year. PTT and SCC stock give the lowest median value of MAPE error for five-year training data. KBANK, SCB, and CPALL stock give the lowest median value of MAPE error for one-year training data. In the second experiment, the number of looks back and input are varied. The result with one look back and four inputs gives the best performance for stock price prediction. By comparing different technique, the result show that LSTM give the best performance with CPALL, SCB, and KTB with less than 2% error. DBN give the best performance with PTT and SCC with less than 2% error.",
"title": ""
},
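A minimal version of the LSTM setup described above (one look-back step, four inputs High/Low/Open/Close, predicting the next day's four values) could look like the following Keras sketch. The synthetic random data, layer sizes, and training settings are illustrative assumptions, not the configuration used in the paper.

```python
# Minimal LSTM sketch for next-day OHLC prediction (assumes TensorFlow/Keras).
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
prices = rng.random((500, 4)).astype("float32")   # placeholder for scaled H/L/O/C series

look_back = 1
X = prices[:-1].reshape(-1, look_back, 4)         # one look-back step, four inputs
y = prices[1:]                                    # next day's four values

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(look_back, 4)),
    tf.keras.layers.Dense(4),
])
model.compile(optimizer="adam", loss="mae")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

pred = model.predict(prices[-1:].reshape(1, look_back, 4), verbose=0)
print("next-day OHLC estimate:", pred[0])
```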
{
"docid": "93c928adef35a409acaa9b371a1498f3",
"text": "The acquisition of a new motor skill is characterized first by a short-term, fast learning stage in which performance improves rapidly, and subsequently by a long-term, slower learning stage in which additional performance gains are incremental. Previous functional imaging studies have suggested that distinct brain networks mediate these two stages of learning, but direct comparisons using the same task have not been performed. Here we used a task in which subjects learn to track a continuous 8-s sequence demanding variable isometric force development between the fingers and thumb of the dominant, right hand. Learning-associated changes in brain activation were characterized using functional MRI (fMRI) during short-term learning of a novel sequence, during short-term learning after prior, brief exposure to the sequence, and over long-term (3 wk) training in the task. Short-term learning was associated with decreases in activity in the dorsolateral prefrontal, anterior cingulate, posterior parietal, primary motor, and cerebellar cortex, and with increased activation in the right cerebellar dentate nucleus, the left putamen, and left thalamus. Prefrontal, parietal, and cerebellar cortical changes were not apparent with short-term learning after prior exposure to the sequence. With long-term learning, increases in activity were found in the left primary somatosensory and motor cortex and in the right putamen. Our observations extend previous work suggesting that distinguishable networks are recruited during the different phases of motor learning. While short-term motor skill learning seems associated primarily with activation in a cortical network specific for the learned movements, long-term learning involves increased activation of a bihemispheric cortical-subcortical network in a pattern suggesting \"plastic\" development of new representations for both motor output and somatosensory afferent information.",
"title": ""
},
{
"docid": "c3eb752554416512ed0b8b3641116fac",
"text": "Keyphrase generation (KG) aims to generate a set of keyphrases given a document, which is a fundamental task in natural language processing (NLP). Most previous methods solve this problem in an extractive manner, while recently, several attempts are made under the generative setting using deep neural networks. However, the state-of-the-art generative methods simply treat the document title and the document main body equally, ignoring the leading role of the title to the overall document. To solve this problem, we introduce a new model called Title-Guided Network (TG-Net) for automatic keyphrase generation task based on the encoderdecoder architecture with two new features: (i) the title is additionally employed as a query-like input, and (ii) a titleguided encoder gathers the relevant information from the title to each word in the document. Experiments on a range of KG datasets demonstrate that our model outperforms the state-ofthe-art models with a large margin, especially for documents with either very low or very high title length ratios.",
"title": ""
},
{
"docid": "89f02dab224baf8cd41bf3b5c03e3b2f",
"text": "In this paper the design for the 2004 Prius electric drive machine is investigated in detail to assess its characteristics. It takes the form a fluid-cooled interior permanent magnet machine. This information is used to generate a complete operating profile. This information is then utilized to produce a design for an alternative induction motor. The specification is found to be demanding however a design is produced for the application which is found to be operationally close to that of the current permanent magnet design and should, in theory, be cheaper to manufacture due to the absence of permanent magnets.",
"title": ""
},
{
"docid": "513030b6f4a8a8eaa1b1d972867db403",
"text": "In placental mammals, viviparity—the production of living young within the mother’s body—evolved under the auspices of the immune system. Elements of immunity were incorporated, giving pregnancy a mildly inflammatory character. Formation of the placenta, the organ that feeds the fetus, involves a cooperation between maternal natural killer (NK) cells and fetal trophoblast cells that remodels the blood supply. Recent research reveals that this process and human reproductive success are influenced by polymorphic HLA-C ligands and their killer cell immunoglobulin-like receptors (KIR).",
"title": ""
},
{
"docid": "7b19a4e0f756a25bd468798bb9711422",
"text": "Object perception in 3-D is a highly challenging problem in computer vision. The major concern in these tasks involves object occlusion, different object poses, appearance and limited perception of the environment by individual sensors in terms of range measurements. In this particular project, our goal is improving 3D perception of the environment by using fusion from lidars and cameras with focus to autonomous driving. The main reason for using lidars and cameras are to combine the complementary information from each of the modalities for efficient feature set extraction that leads to improved perception.",
"title": ""
},
{
"docid": "01d441a277e9f9cbf6af40d0d526d44f",
"text": "On-orbit fabrication of spacecraft components can enable space programs to escape the volumetric limitations of launch shrouds and create systems with extremely large apertures and very long baselines in order to deliver higher resolution, higher bandwidth, and higher SNR data. This paper will present results of efforts to investigated the value proposition and technical feasibility of adapting several of the many rapidly-evolving additive manufacturing and robotics technologies to the purpose of enabling space systems to fabricate and integrate significant parts of themselves on-orbit. We will first discuss several case studies for the value proposition for on-orbit fabrication of space structures, including one for a starshade designed to enhance the capabilities for optical imaging of exoplanets by the proposed New World Observer mission, and a second for a long-baseline phased array radar system. We will then summarize recent work adapting and evolving additive manufacturing techniques and robotic assembly technologies to enable automated on-orbit fabrication of large, complex, three-dimensional structures such as trusses, antenna reflectors, and shrouds.",
"title": ""
},
{
"docid": "e43814f288e1c5a84fb9d26b46fc7e37",
"text": "Achieving good performance in bytecoded language interpreters is difficult without sacrificing both simplicity and portability. This is due to the complexity of dynamic translation (\"just-in-time compilation\") of bytecodes into native code, which is the mechanism employed universally by high-performance interpreters.We demonstrate that a few simple techniques make it possible to create highly-portable dynamic translators that can attain as much as 70% the performance of optimized C for certain numerical computations. Translators based on such techniques can offer respectable performance without sacrificing either the simplicity or portability of much slower \"pure\" bytecode interpreters.",
"title": ""
},
{
"docid": "a6fe5ebfc0a58b246005603854be07a0",
"text": "Social networking sites (SNS) are quickly becoming one of the most popular tools for social interaction and information exchange. Previous research has shown a relationship between users’ personality and SNS use. Using a general population sample (N=300), this study furthers such investigations by examining the personality correlates (Neuroticism, Extraversion, Openness-to-Experience, Agreeableness, Conscientiousness, Sociability and Need-for-Cognition) of social and informational use of the two largest SNS: Facebook and Twitter. Age and Gender were also examined. Results showed that personality was related to online socialising and information seeking/exchange, though not as influential as some previous research has suggested. In addition, a preference for Facebbok or Twitter was associated with differences in personality. The results reveal differential relationships between personality and Facebook and Twitter usage.",
"title": ""
},
{
"docid": "c972f38d2fec97a5750052ec916bb25c",
"text": "Clustering is one of the most commonly used tools in the analysis of gene expression data(1, 2). The usage in grouping genes is based on the premise that co-expression is a result of co-regulation. It is thus a preliminary step in extracting gene networks and inference of gene function (3, 4). Clustering of experiments can be used to discover novel phenotypic aspects of cells and tissues (3, 5, 6), including sensitivity to drugs(7), and can also detect artifacts of experimental conditions (8). Clustering and its applications in biology are presented in greater detail in the chapter by Zhao and Karypis (see also (9)). While we focus on gene expression data in this chapter, the methodology presented here is applicable for other types of data as well. Clustering is a form of unsupervised learning, i.e. no information on the class variable is assumed, and the objective is to find the “natural” groups in the data. However, most clustering algorithms generate a clustering even if the data has no inherent cluster structure, so external validation tools are required. Given a set of partitions of the data into an increasing number of clusters (e.g. by a hierarchical clustering algorithm, or k-means), such a validation tool will tell the user the number of clusters in the data (if any). Many methods have been proposed in the literature to address this problem (10–15). Recent studies have shown the advantages of sampling-based methods (12, 14). These methods are based on the idea that when a partition has captured the structure in the data, this partition should be stable with respect to perturbation of the data. Bittner et al. (16) used a similar approach to validate clusters representing gene expression of melanoma patients. The emergence of cluster structure depends on several choices: data representation and normalization, the choice of a similarity measure and clustering algorithm. In this chapter we extend the stability-based validation of cluster structure, and propose stability as a figure of merit that is useful for comparing clustering solutions, thus helping in making these choices. We use this framework to demonstrate the ability of Principal Component Analysis (PCA) to extract features relevant to the cluster structure. We use stability as a tool for simultaneously choosing the number of principal components and the number of clusters; we compare the performance of different similarity measures and normalization schemes. The approach is demonstrated through a case study of yeast gene expression data from Eisenet al. (1). For yeast, a functional classification of a large number of genes is known, and we use this classification for validating the results produced by clustering. A method for comparing clustering solutions specifically applicable to gene expression data was introduced in(17). However, it cannot be used to choose the number of clusters, and is not directly applicable in choosing the number of principal components. The results of clustering are easily corrupted by the addition of noise: even a few",
"title": ""
},
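The stability idea sketched in the passage above, that a good number of clusters should yield partitions that agree across perturbed subsamples, can be prototyped in a few lines with scikit-learn. The Gaussian toy data, the subsampling ratio and the use of the adjusted Rand index as the agreement score are illustrative choices, not the exact protocol described in the chapter.

```python
# Stability-based choice of k: cluster two overlapping subsamples and compare labels.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(100, 2)) for c in (0, 4, 8)])

def stability(X, k, n_pairs=10, frac=0.8):
    scores = []
    n = len(X)
    for _ in range(n_pairs):
        a = rng.choice(n, int(frac * n), replace=False)
        b = rng.choice(n, int(frac * n), replace=False)
        shared = np.intersect1d(a, b)
        km_a = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X[a])
        km_b = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X[b])
        # Compare the two clusterings on the points both subsamples contain.
        scores.append(adjusted_rand_score(km_a.predict(X[shared]), km_b.predict(X[shared])))
    return np.mean(scores)

for k in (2, 3, 4, 5):
    print(k, round(stability(X, k), 3))   # the true k = 3 should score near 1
```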
{
"docid": "6adf612b6a80494f9c9559170ab66670",
"text": "In recent years, Steganography and Steganalysis are two important areas of research that involve a number of applications. These two areas of research are important especially when reliable and secure information exchange is required. Steganography is an art of embedding information in a cover image without causing statistically significant variations to the cover image. Steganalysis is the technology that attempts to defeat Steganography by detecting the hidden information and extracting. In this paper a comparative analysis is made to demonstrate the effectiveness of the proposed methods. The effectiveness of the proposed methods has been estimated by computing Mean square error (MSE) and Peak Signal to Noise Ratio (PSNR), Processing time, security.The analysis shows that the BER and PSNR is improved in the LSB Method but security sake DCT is the best method.",
"title": ""
},
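Since the comparison above rests on MSE and PSNR after LSB embedding, a small self-contained sketch of LSB substitution and the two metrics may help. The random "cover image" is synthetic and the payload is a short ASCII string, both chosen only for illustration.

```python
# LSB steganography sketch: embed a bit string in the least significant bits,
# then measure MSE and PSNR between cover and stego images.
import numpy as np

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)

message = "hi"
bits = [int(b) for byte in message.encode() for b in format(byte, "08b")]

stego = cover.copy().ravel()
stego[:len(bits)] = (stego[:len(bits)] & 0xFE) | np.array(bits, dtype=np.uint8)
stego = stego.reshape(cover.shape)

mse = np.mean((cover.astype(float) - stego.astype(float)) ** 2)
psnr = 10 * np.log10(255 ** 2 / mse) if mse > 0 else float("inf")
print(f"MSE = {mse:.5f}, PSNR = {psnr:.2f} dB")

# Extraction: read back the least significant bits and regroup them into bytes.
recovered_bits = stego.ravel()[:len(bits)] & 1
recovered = bytes(int("".join(map(str, recovered_bits[i:i+8])), 2)
                  for i in range(0, len(recovered_bits), 8))
print(recovered.decode())
```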
{
"docid": "9e32c4fed9c9aecfba909fd82287336b",
"text": "StructuredQueryLanguage injection (SQLi) attack is a code injection techniquewherehackers injectSQLcommandsintoadatabaseviaavulnerablewebapplication.InjectedSQLcommandscan modifytheback-endSQLdatabaseandthuscompromisethesecurityofawebapplication.Inthe previouspublications,theauthorhasproposedaNeuralNetwork(NN)-basedmodelfordetections andclassificationsof theSQLiattacks.Theproposedmodelwasbuiltfromthreeelements:1)a UniformResourceLocator(URL)generator,2)aURLclassifier,and3)aNNmodel.Theproposed modelwas successful to:1)detect eachgeneratedURLaseitherabenignURLoramalicious, and2)identifythetypeofSQLiattackforeachmaliciousURL.Thepublishedresultsprovedthe effectivenessoftheproposal.Inthispaper,theauthorre-evaluatestheperformanceoftheproposal throughtwoscenariosusingcontroversialdatasets.Theresultsoftheexperimentsarepresentedin ordertodemonstratetheeffectivenessoftheproposedmodelintermsofaccuracy,true-positiverate aswellasfalse-positiverate. KeyWoRDS Artificial Intelligence, Databases, Intrusion Detection, Machine Learning, Neural Networks, SQL Injection Attacks, Web Attacks",
"title": ""
},
{
"docid": "a5c054899abf8aa553da4a576577678e",
"text": "Developmental programming resulting from maternal malnutrition can lead to an increased risk of metabolic disorders such as obesity, insulin resistance, type 2 diabetes and cardiovascular disorders in the offspring in later life. Furthermore, many conditions linked with developmental programming are also known to be associated with the aging process. This review summarizes the available evidence about the molecular mechanisms underlying these effects, with the potential to identify novel areas of therapeutic intervention. This could also lead to the discovery of new treatment options for improved patient outcomes.",
"title": ""
},
{
"docid": "c955e63d5c5a30e18c008dcc51d1194b",
"text": "We report, for the first time, the identification of fatty acid particles in formulations containing the surfactant polysorbate 20. These fatty acid particles were observed in multiple mAb formulations during their expected shelf life under recommended storage conditions. The fatty acid particles were granular or sand-like in morphology and were several microns in size. They could be identified by distinct IR bands, with additional confirmation from energy-dispersive X-ray spectroscopy analysis. The particles were readily distinguishable from protein particles by these methods. In addition, particles containing a mixture of protein and fatty acids were also identified, suggesting that the particulation pathways for the two particle types may not be distinct. The techniques and observations described will be useful for the correct identification of proteinaceous versus nonproteinaceous particles in pharmaceutical products.",
"title": ""
},
{
"docid": "fa373f09456dbdb1222aaa9df10fd117",
"text": "A scalable extension to the H.264/AVC video coding standard has been developed within the joint video team (JVT), a joint organization of the ITU-T video coding group (VCEG) and the ISO/IEC moving picture experts group (MPEG). The extension allows multiple resolutions of an image sequence to be contained in a single bit stream. In this paper, we introduce the spatially scalable extension within the resulting scalable video coding standard. The high-level design is described and individual coding tools are explained. Additionally, encoder issues are identified. Finally, the performance of the design is reported.",
"title": ""
},
{
"docid": "f46eefb5c9a7bd6ec12ae518b5e8f614",
"text": "Software defined networking (SDN) and OpenFlow as the outcome of recent research and development efforts provided unprecedented access into the forwarding plane of networking elements. This is achieved by decoupling the network control out of the forwarding devices. This separation paves the way for a more flexible and innovative networking. While SDN concept and OpenFlow find their ways into commercial deployments, performance evaluation of the SDN concept and its scalability, delay bounds, buffer sizing and similar performance metrics are not investigated in recent researches. In spite of usage of benchmark tools (like OFlops and Cbench), simulation studies and very few analytical models, there is a lack of analytical models to express the boundary condition of SDN deployment. In this work we present a model based on network calculus theory to describe the functionality of an SDN switch and controller. To the best of our knowledge, this is for the first time that network calculus framework is utilized to model the behavior of an SDN switch in terms of delay and queue length boundaries and the analysis of the buffer length of SDN controller and SDN switch. The presented model can be used for network designers and architects to get a quick view of the overall SDN network deployment performance and buffer sizing of SDN switches and controllers.",
"title": ""
}
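The flavour of the network-calculus bounds used in such models can be conveyed with the textbook case of a token-bucket arrival curve alpha(t) = b + r*t crossing a rate-latency service curve beta(t) = R*(t - T)+: the delay bound is T + b/R and the backlog bound is b + r*T (for r <= R). The numbers below are arbitrary and are not taken from the paper's switch model.

```python
# Classic network-calculus bounds for a token-bucket flow through a rate-latency server.
def delay_bound(b, r, R, T):
    """Horizontal deviation between alpha(t)=b+r*t and beta(t)=R*(t-T)+, for r <= R."""
    assert r <= R, "bounds only hold when the service rate covers the arrival rate"
    return T + b / R

def backlog_bound(b, r, R, T):
    """Vertical deviation: maximum queue length in the same setting."""
    assert r <= R
    return b + r * T

# Example: 10 kbit burst, 1 Mbit/s sustained rate, 10 Mbit/s server, 0.5 ms latency.
b, r, R, T = 10e3, 1e6, 10e6, 0.5e-3
print(f"delay bound  : {delay_bound(b, r, R, T) * 1e3:.3f} ms")
print(f"backlog bound: {backlog_bound(b, r, R, T) / 1e3:.1f} kbit")
```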
] |
scidocsrr
|
38e5d5c554182a1ac56a453fea078aac
|
New perspectives and methods in link prediction
|
[
{
"docid": "282d956beea5b0d121d1ac0c13861be4",
"text": "We study empirically the time evolution of scientific collaboration networks in physics and biology. In these networks, two scientists are considered connected if they have coauthored one or more papers together. We show that the probability of a pair of scientists collaborating increases with the number of other collaborators they have in common, and that the probability of a particular scientist acquiring new collaborators increases with the number of his or her past collaborators. These results provide experimental evidence in favor of previously conjectured mechanisms for clustering and power-law degree distributions in networks.",
"title": ""
}
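The mechanism described in the passage above, that two scientists with many common collaborators are more likely to collaborate in the future, is exactly the intuition behind neighbourhood-based link prediction scores. A minimal sketch on a toy co-authorship graph, using common-neighbour and Jaccard scores, is given below; the graph itself is invented for illustration.

```python
# Neighbourhood-based link prediction scores on a toy co-authorship graph.
from itertools import combinations

# Undirected adjacency as sets of neighbours (made-up collaboration data).
graph = {
    "A": {"B", "C", "D"},
    "B": {"A", "C"},
    "C": {"A", "B", "E"},
    "D": {"A"},
    "E": {"C"},
}

def common_neighbors(u, v):
    return len(graph[u] & graph[v])

def jaccard(u, v):
    union = graph[u] | graph[v]
    return len(graph[u] & graph[v]) / len(union) if union else 0.0

# Score every currently unconnected pair and rank candidate future links.
candidates = [(u, v) for u, v in combinations(graph, 2) if v not in graph[u]]
ranked = sorted(candidates, key=lambda pair: common_neighbors(*pair), reverse=True)
for u, v in ranked:
    print(f"{u}-{v}: common={common_neighbors(u, v)}, jaccard={jaccard(u, v):.2f}")
```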
] |
[
{
"docid": "26cfea93f837197e3244f771526d2fe7",
"text": "The payof matrix of the numberdistance game is as folow. We know that each player is invariant to the diferent actions in her support. First we guessed that al of the actions are in supports for both players. Let x,y,z be the probability that the first players plays 1,0,2 respectively and Let p,q,r be the probability that the second players plays 1,0,2 respectively. For the first player we have: 0*p+1*q+3*r = 1*p+0*q+2*r = 3*p+2*q+0*r p+q+r=1",
"title": ""
},
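The indifference conditions stated in the passage above form a small linear system that can be solved directly. The sketch below rearranges them into matrix form and solves for (p, q, r); the rewriting of the equations is the only assumption made here.

```python
# Solve for the second player's mixed strategy (p, q, r) from the first player's
# indifference conditions:  q + 3r = p + 2r,  p + 2r = 3p + 2q,  p + q + r = 1.
import numpy as np

# Rewrite each condition as a row of A @ [p, q, r] = rhs.
A = np.array([
    [-1.0,  1.0,  1.0],   # (q + 3r) - (p + 2r) = 0
    [-2.0, -2.0,  2.0],   # (p + 2r) - (3p + 2q) = 0
    [ 1.0,  1.0,  1.0],   # probabilities sum to one
])
rhs = np.array([0.0, 0.0, 1.0])

p, q, r = np.linalg.solve(A, rhs)
print(f"p = {p:.3f}, q = {q:.3f}, r = {r:.3f}")
```

The solve yields p = r = 0.5 and q = 0, i.e. the middle action drops out of the second player's support, which suggests that the initial full-support guess needs to be revisited.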
{
"docid": "cb5d3d06c46266c9038aea9d18d4ae69",
"text": "Signal distortion of photoplethysmographs (PPGs) due to motion artifacts has been a limitation for developing real-time, wearable health monitoring devices. The artifacts in PPG signals are analyzed by comparing the frequency of the PPG with a reference pulse and daily life motions, including typing, writing, tapping, gesturing, walking, and running. Periodical motions in the range of pulse frequency, such as walking and running, cause motion artifacts. To reduce these artifacts in real-time devices, a least mean square based active noise cancellation method is applied to the accelerometer data. Experiments show that the proposed method recovers pulse from PPGs efficiently.",
"title": ""
},
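The LMS-based active noise cancellation step mentioned above, using the accelerometer channel as a noise reference to clean the PPG, follows the standard adaptive-filter update w <- w + mu * e * x. The synthetic pulse and motion signals below are stand-ins for real sensor data, and the filter length and step size are arbitrary illustrative choices.

```python
# LMS adaptive noise cancellation sketch: accelerometer as reference, PPG as primary.
import numpy as np

fs, n = 100, 2000                           # 100 Hz sampling, 20 s of synthetic data
t = np.arange(n) / fs
pulse = np.sin(2 * np.pi * 1.2 * t)         # ~72 bpm "clean" pulse component
motion = 0.8 * np.sin(2 * np.pi * 2.0 * t)  # periodic motion artifact (e.g. running)
accel = motion + 0.05 * np.random.default_rng(0).normal(size=n)  # reference channel
ppg = pulse + motion                        # contaminated PPG measurement

def lms_cancel(primary, reference, taps=8, mu=0.01):
    w = np.zeros(taps)
    cleaned = np.zeros_like(primary)
    for i in range(taps, len(primary)):
        x = reference[i - taps:i][::-1]     # most recent reference samples
        est_noise = w @ x                   # filter's estimate of the artifact
        e = primary[i] - est_noise          # error = cleaned PPG sample
        w += mu * e * x                     # LMS weight update
        cleaned[i] = e
    return cleaned

cleaned = lms_cancel(ppg, accel)
err = np.mean((cleaned[500:] - pulse[500:]) ** 2)
print(f"mean squared error vs. clean pulse after convergence: {err:.4f}")
```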
{
"docid": "e69acc779b3bd736c0e5bd6962c8d459",
"text": "The genome-wide transcriptome profiling of cancerous and normal tissue samples can provide insights into the molecular mechanisms of cancer initiation and progression. RNA Sequencing (RNA-Seq) is a revolutionary tool that has been used extensively in cancer research. However, no existing RNA-Seq database provides all of the following features: (i) large-scale and comprehensive data archives and analyses, including coding-transcript profiling, long non-coding RNA (lncRNA) profiling and coexpression networks; (ii) phenotype-oriented data organization and searching and (iii) the visualization of expression profiles, differential expression and regulatory networks. We have constructed the first public database that meets these criteria, the Cancer RNA-Seq Nexus (CRN, http://syslab4.nchu.edu.tw/CRN). CRN has a user-friendly web interface designed to facilitate cancer research and personalized medicine. It is an open resource for intuitive data exploration, providing coding-transcript/lncRNA expression profiles to support researchers generating new hypotheses in cancer research and personalized medicine.",
"title": ""
},
{
"docid": "bdb3ea54ed125826939f4f571eb777dd",
"text": "Charts are an excellent way to convey patterns and trends in data, but they do not facilitate further modeling of the data or close inspection of individual data points. We present a fully automated system for extracting the numerical values of data points from images of scatter plots. We use deep learning techniques to identify the key components of the chart, and optical character recognition together with robust regression to map from pixels to the coordinate system of the chart. We focus on scatter plots with linear scales, which already have several interesting challenges. Previous work has done fully automatic extraction for other types of charts, but to our knowledge this is the first approach that is fully automatic for scatter plots. Our method performs well, achieving successful data extraction on 89% of the plots in our test set.",
"title": ""
},
{
"docid": "6a9738cbe28b53b3a9ef179091f05a4a",
"text": "The study examined the impact of advertising on building brand equity in Zimbabwe’s Tobacco Auction floors. In this study, 100 farmers were selected from 88 244 farmers registered in the four tobacco growing regions of country. A structured questionnaire was used as a tool to collect primary data. A pilot survey with 20 participants was initially conducted to test the reliability of the questionnaire. Results of the pilot study were analysed to test for reliability using SPSS.Results of the study found that advertising affects brand awareness, brand loyalty, brand association and perceived quality. 55% of the respondents agreed that advertising changed their perceived quality on auction floors. A linear regression analysis was performed to predict brand quality as a function of the type of farmer, source of information, competitive average pricing, loyalty, input assistance, service delivery, number of floors, advert mode, customer service, floor reputation and attitude. There was a strong relationship between brand quality and the independent variables as depicted by the regression coefficient of 0.885 and the model fit is perfect at 78.3%. From the ANOVA tables, a good fit was established between advertising and brand equity with p=0.001 which is less than the significance level of 0.05. While previous researches concentrated on the elements of brand equity as suggested by Keller’s brand equity model, this research has managed to extend the body of knowledge on brand equity by exploring the role of advertising. Future research should assess the relationship between advertising and a brand association.",
"title": ""
},
{
"docid": "6bfc3d00fe6e9fcdb09ad8993b733dfd",
"text": "This article presents the upper-torso design issue of Affeto who can physically interact with humans, which biases the perception of affinity beyond the uncanny valley effect. First, we review the effect and hypothesize that the experience of physical interaction with Affetto decreases the effect. Then, the reality of physical existence is argued with existing platforms. Next, the design concept and a very preliminary experiment are shown. Finally, future issues are given. I. THE UNCANNY VALLEY REVISITED The term “Uncanny” is a translation of Freud’s term “Der Unheimliche” and applied to a phenomenon noted by Masahiro Mori who mentioned that the presence of movement steepens the slopes of the uncanny valley (Figure 2 in [1]). Several studies on this effect can be summarised as follows1. 1) Multimodal impressions such as visual appearance, body motion, sounds (speech and others), and tactile sensation should be congruent to decrease the valley steepness. 2) Antipathetic expressions may exaggerate the valley effect. The current technologies enable us to minimize the gap caused by mismatch among cross-modal factors. Therefore, the valley effect is expected to be reduced gradually. For example, facial expressions and tactile sensations of Affetto [2] are realistic and congruent due to baby-like face skin mask of urethane elastomer gel (See Figure 1). Generated facial expressions almost conquered the uncanny valley. Further, baby-like facial expressions may contribute to the reduction of the valley effect due to 2). In addition to these, we suppose that the motor experience of physical interactions with robots biases the perception of affinity as motor experiences biases the perception of movements [3]. To verify this hypothesis, Affetto needs its body which realizes physical interactions naturally. The rest of this article is organized as follows. The next section argues about the reality of physical existence with existing platforms. Then, the design concept and a very preliminary experiment are shown, and the future issues are given.",
"title": ""
},
{
"docid": "eb32c8c731ba9a28023bc8806d83b80e",
"text": "Generative adversarial networks have been proposed as a way of efficiently training deep generative neural networks. We propose a generative adversarial model that works on continuous sequential data, and apply it by training it on a collection of classical music. We conclude that it generates music that sounds better and better as the model is trained, report statistics on generated music, and let the reader judge the quality by downloading the generated songs.",
"title": ""
},
{
"docid": "b568dae2d11ca8c28c0b7268368ce53d",
"text": "The Box and Block Test, a test of manual dexterity, has been used by occupational therapists and others to evaluate physically handicapped individuals. Because the test lacked normative data for adults, the results of the test have been interpreted subjectively. The purpose of this study was to develop normative data for adults. Test subjects were 628 Normal adults (310 males and 318 females)from the seven-county Milwaukee area. Data on males and females 20 to 94 years old were divided into 12 age groups. Means, standard deviations, standard error, and low and high scores are reported for each five-year age group. These data will enable clinicians to objectively compare a patient's score to a normal population parameter. Occupational therapists are frequently involved with increasing the manual dexterity of their patients. Often, these patients are unable to perform tests offine manual or finger dexterity, such as the Purdue Pegboard Test or the Crawford Small Parts Dexterity Test. Tests of manual dexterity, such as the Minnesota Rate of Manipulation Test, have limited clinical application because a) they require lengthy administration time, b) a standardized standing position must be used for testing, and c) the tests use normative samples that poorly represent the wide range of clinical patients. Because of the limitations of such standardized tests, therapists often evaluate dexterity subjectively. The Box and Block Test has been suggested as a measure of gross manual dexterity (1) and as a prevocational test for handicapped people (2). Norms have been collected on adults with neuromuscular involvement (2) and on normal children (7, 8, and 9 years old) (3). Standardized instructions along with reliability and validity data, are reported in the literature (2,3), but there are no norms for the normal adult population. Therefore, the purpose of this study was to collect normative data for adults. Methods",
"title": ""
},
{
"docid": "cfe2143743887d1899deb957898374c8",
"text": "Coordinated multi-point (CoMP) communication is attractive for heterogeneous cellular networks (HCNs) for interference reduction. However, previous approaches to CoMP face two major hurdles in HCNs. First, they usually ignore the inter-cell overhead messaging delay, although it results in an irreducible performance bound. Second, they consider the grid or Wyner model for base station locations, which is not appropriate for HCN BS locations which are numerous and haphazard. Even for conventional macrocell networks without overlaid small cells, SINR results are not tractable in the grid model nor accurate in the Wyner model. To overcome these hurdles, we develop a novel analytical framework which includes the impact of overhead delay for CoMP evaluation in HCNs. This framework can be used for a class of CoMP schemes without user data sharing. As an example, we apply it to downlink CoMP zero-forcing beamforming (ZFBF), and see significant divergence from previous work. For example, we show that CoMP ZFBF does not increase throughput when the overhead channel delay is larger than 60% of the channel coherence time. We also find that, in most cases, coordinating with only one other cell is nearly optimum for downlink CoMP ZFBF.",
"title": ""
},
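For readers unfamiliar with the downlink CoMP zero-forcing beamforming (ZFBF) referred to above, the core operation is a channel inversion: with perfectly shared channel state, the precoder W = H^H (H H^H)^(-1) nulls inter-user interference. The tiny NumPy sketch below shows only that step on a random channel; modelling the overhead delay analysed in the paper is outside its scope, and the antenna/user counts are arbitrary.

```python
# Zero-forcing beamforming sketch: the effective channel H @ W becomes (near) diagonal.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_tx = 3, 4                       # 3 single-antenna users, 4 coordinated antennas

# Rayleigh-fading channel matrix (rows = users, columns = transmit antennas).
H = (rng.normal(size=(n_users, n_tx)) + 1j * rng.normal(size=(n_users, n_tx))) / np.sqrt(2)

# Zero-forcing precoder (right pseudo-inverse of H), columns normalised to unit power.
W = H.conj().T @ np.linalg.inv(H @ H.conj().T)
W = W / np.linalg.norm(W, axis=0, keepdims=True)

effective = H @ W                          # user k only "sees" beam k
print(np.round(np.abs(effective), 3))      # off-diagonal entries are ~0
```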
{
"docid": "71428f1d968a25eb7df33f55557eb424",
"text": "BACKGROUND\nThe 'Choose and Book' system provides an online booking service which primary care professionals can book in real time or soon after a patient's consultation. It aims to offer patients choice and improve outpatient clinic attendance rates.\n\n\nOBJECTIVE\nAn audit comparing attendance rates of new patients booked into the Audiological Medicine Clinic using the 'Choose and Book' system with that of those whose bookings were made through the traditional booking system.\n\n\nMETHODS\nData accrued between 1 April 2008 and 31 October 2008 were retrospectively analysed for new patient attendance at the department, and the age and sex of the patients, method of appointment booking used and attendance record were collected. Patients were grouped according to booking system used - 'Choose and Book' or the traditional system. The mean ages of the groups were compared by a t test. The standard error of the difference between proportions was used to compare the data from the two groups. A P value of < or = 0.05 was considered to be significant.\n\n\nRESULTS\n'Choose and Book' patients had a significantly better rate of attendance than traditional appointment patients, P < 0.01 (95% CI 4.3, 20.5%). There was no significant difference between the two groups in terms of sex, P > 0.1 (95% CI-3.0, 16.2%). The 'Choose and Book' patients, however, were significantly older than the traditional appointment patients, P < 0.001 (95% CI 4.35, 12.95%).\n\n\nCONCLUSION\nThis audit suggests that when primary care agents book outpatient clinic appointments online it improves outpatient attendance.",
"title": ""
},
{
"docid": "1edd6cb3c6ed4657021b6916efbc23d9",
"text": "Siamese-like networks, Streetscore-CNN (SS-CNN) and Ranking SS-CNN, to predict pairwise comparisons Figure 1: User Interface for Crowdsourced Online Game Performance Analysis • SS-CNN: We calculate the % of pairwise comparisons in test set predicted correctly by (1) Softmax of output neurons in final layer (2) comparing TrueSkill scores [2] obtained from synthetic pairwise comparisons from the CNN (3) extracting features from penultimate layer of CNN and feeding pairwise feature representations to a RankSVM [3] • RSS-CNN: We compare the ranking function outputs for both images in a test pair to decide which image wins, and calculate the binary prediction accuracy.",
"title": ""
},
{
"docid": "a98d158c4621ee83c537dd7449db4251",
"text": "This paper presents a design of simultaneous localization and mapping (SLAM) for an omni-directional mobile robot using an omni-directional camera. A method is proposed to realize visual SLAM of the omni-directional mobile robot based on extended Kalman filter (EKF). Taking advantage of the 360° view of omni-directional images, visual reference scan approach is adopted in the SLAM design. Features of previously visited places can be used repeatedly to reduce the complexity of EKF calculation. Practical experiments of the proposed self-localization and control algorithms have been carried out by using a self-constructed omni-directional mobile robot. The localization error between the start point and target point is less than 0.15m and 1° after traveling more than 40 meters in an indoor environment.",
"title": ""
},
{
"docid": "0d22b9e723d95c1f86a5c2795f3dbd42",
"text": "Cancer is a class of diseases characterized by out-of-control cell growth. There are over 200 different types of cancer, and each is classified by the type of cell that is initially affected. This paper discusses the technical aspects of some of the ontology-based medical systems for cancer diseases. It also proposes an ontology based system for cancer diseases knowledge management. The system can be used to help patients, students and physicians to decide what cancer type the patient has, what is the stage of the cancer and how it can be treated. The system performance and accuracy are acceptable, with a cancer diseases classification accuracy of 92%.",
"title": ""
},
{
"docid": "4958f0fbdf29085cabef3591a1c05c51",
"text": "Network Function Virtualization (NFV) is a new networking paradigm where network functions are executed on commodity servers located in small cloud nodes distributed across the network, and where software defined mechanisms are used to control the network flows. This paradigm is a major turning point in the evolution of networking, as it introduces high expectations for enhanced economical network services, as well as major technical challenges. In this paper, we address one of the main technical challenges in this domain: the actual placement of the virtual functions within the physical network. This placement has a critical impact on the performance of the network, as well as on its reliability and operation cost. We perform a thorough study of the NFV location problem, show that it introduces a new type of optimization problems, and provide near optimal approximation algorithms guaranteeing a placement with theoretically proven performance. The performance of the solution is evaluated with respect to two measures: the distance cost between the clients and the virtual functions by which they are served, as well as the setup costs of these functions. We provide bi-criteria solutions reaching constant approximation factors with respect to the overall performance, and adhering to the capacity constraints of the networking infrastructure by a constant factor as well. Finally, using extensive simulations, we show that the proposed algorithms perform well in many realistic scenarios.",
"title": ""
},
{
"docid": "16a18f742d67e4dfb660b4ce3b660811",
"text": "Container-based virtualization has become the de-facto standard for deploying applications in data centers. However, deployed containers frequently include a wide-range of tools (e.g., debuggers) that are not required for applications in the common use-case, but they are included for rare occasions such as in-production debugging. As a consequence, containers are significantly larger than necessary for the common case, thus increasing the build and deployment time. CNTR1 provides the performance benefits of lightweight containers and the functionality of large containers by splitting the traditional container image into two parts: the “fat” image — containing the tools, and the “slim” image — containing the main application. At run-time, CNTR allows the user to efficiently deploy the “slim” image and then expand it with additional tools, when and if necessary, by dynamically attaching the “fat” image. To achieve this, CNTR transparently combines the two container images using a new nested namespace, without any modification to the application, the container manager, or the operating system. We have implemented CNTR in Rust, using FUSE, and incorporated a range of optimizations. CNTR supports the full Linux filesystem API, and it is compatible with all container implementations (i.e., Docker, rkt, LXC, systemd-nspawn). Through extensive evaluation, we show that CNTR incurs reasonable performance overhead while reducing, on average, by 66.6% the image size of the Top-50 images available on Docker Hub.",
"title": ""
},
{
"docid": "7f02090e896afacd6b70537c03956078",
"text": "Although the literature on Asian Americans and racism has been emerging, few studies have examined how coping influences one's encounters with racism. To advance the literature, the present study focused on the psychological impact of Filipino Americans' experiences with racism and the role of coping as a mediator using a community-based sample of adults (N = 199). Two multiple mediation models were used to examine the mediating effects of active, avoidance, support-seeking, and forbearance coping on the relationship between perceived racism and psychological distress and self-esteem, respectively. Separate analyses were also conducted for men and women given differences in coping utilization. For men, a bootstrap procedure indicated that active, support-seeking, and avoidance coping were mediators of the relationship between perceived racism and psychological distress. Active coping was negatively associated with psychological distress, whereas both support seeking and avoidance were positively associated with psychological distress. A second bootstrap procedure for men indicated that active and avoidance coping mediated the relationship between perceived racism and self-esteem such that active coping was positively associated with self-esteem, and avoidance was negatively associated with self-esteem. For women, only avoidance coping had a significant mediating effect that was associated with elevations in psychological distress and decreases in self-esteem. The results highlight the importance of examining the efficacy of specific coping responses to racism and the need to differentiate between the experiences of men and women.",
"title": ""
},
{
"docid": "41c5dbb3e903c007ba4b8f37d40b06ef",
"text": "BACKGROUND\nMyocardial infarction (MI) can directly cause ischemic mitral regurgitation (IMR), which has been touted as an indicator of poor prognosis in acute and early phases after MI. However, in the chronic post-MI phase, prognostic implications of IMR presence and degree are poorly defined.\n\n\nMETHODS AND RESULTS\nWe analyzed 303 patients with previous (>16 days) Q-wave MI by ECG who underwent transthoracic echocardiography: 194 with IMR quantitatively assessed in routine practice and 109 without IMR matched for baseline age (71+/-11 versus 70+/-9 years, P=0.20), sex, and ejection fraction (EF, 33+/-14% versus 34+/-11%, P=0.14). In IMR patients, regurgitant volume (RVol) and effective regurgitant orifice (ERO) area were 36+/-24 mL/beat and 21+/-12 mm(2), respectively. After 5 years, total mortality and cardiac mortality for patients with IMR (62+/-5% and 50+/-6%, respectively) were higher than for those without IMR (39+/-6% and 30+/-5%, respectively) (both P<0.001). In multivariate analysis, independently of all baseline characteristics, particularly age and EF, the adjusted relative risks of total and cardiac mortality associated with the presence of IMR (1.88, P=0.003 and 1.83, P=0.014, respectively) and quantified degree of IMR defined by RVol >/=30 mL (2.05, P=0.002 and 2.01, P=0.009) and by ERO >/=20 mm(2) (2.23, P=0.003 and 2.38, P=0.004) were high.\n\n\nCONCLUSIONS\nIn the chronic phase after MI, IMR presence is associated with excess mortality independently of baseline characteristics and degree of ventricular dysfunction. The mortality risk is related directly to the degree of IMR as defined by ERO and RVol. Therefore, IMR detection and quantification provide major information for risk stratification and clinical decision making in the chronic post-MI phase.",
"title": ""
},
{
"docid": "9493b44f845bb7d37bf68a96a8ff96f6",
"text": "This paper focuses on services and applications provided to mobile users using airborne computing infrastructure. We present concepts such as drones-as-a-service and fl yin,fly-out infrastructure, and note data management and sys tem design issues that arise in these scenarios. Issues of Big Da ta arising from such applications, optimising the configuration of airborne and ground infrastructure to provide the best QoS and QoE, situation-awareness, scalability, reliability,scheduling for efficiency, interaction with users and drones using phys ical annotations are outlined.",
"title": ""
}
] |
scidocsrr
|
af2b7f450e116395700b6414dd427aff
|
DroidBot: A Lightweight UI-Guided Test Input Generator for Android
|
[
{
"docid": "fce11219cdd4d85dde1d3d893f252e14",
"text": "Smartphones and tablets with rich graphical user interfaces (GUI) are becoming increasingly popular. Hundreds of thousands of specialized applications, called apps, are available for such mobile platforms. Manual testing is the most popular technique for testing graphical user interfaces of such apps. Manual testing is often tedious and error-prone. In this paper, we propose an automated technique, called Swift-Hand, for generating sequences of test inputs for Android apps. The technique uses machine learning to learn a model of the app during testing, uses the learned model to generate user inputs that visit unexplored states of the app, and uses the execution of the app on the generated inputs to refine the model. A key feature of the testing algorithm is that it avoids restarting the app, which is a significantly more expensive operation than executing the app on a sequence of inputs. An important insight behind our testing algorithm is that we do not need to learn a precise model of an app, which is often computationally intensive, if our goal is to simply guide test execution into unexplored parts of the state space. We have implemented our testing algorithm in a publicly available tool for Android apps written in Java. Our experimental results show that we can achieve significantly better coverage than traditional random testing and L*-based testing in a given time budget. Our algorithm also reaches peak coverage faster than both random and L*-based testing.",
"title": ""
}
] |
[
{
"docid": "266114ecdd54ce1c5d5d0ec42c04ed4d",
"text": "A multiscale image registration technique is presented for the registration of medical images that contain significant levels of noise. An overview of the medical image registration problem is presented, and various registration techniques are discussed. Experiments using mean squares, normalized correlation, and mutual information optimal linear registration are presented that determine the noise levels at which registration using these techniques fails. Further experiments in which classical denoising algorithms are applied prior to registration are presented, and it is shown that registration fails in this case for significantly high levels of noise, as well. The hierarchical multiscale image decomposition of E. Tadmor, S. Nezzar, and L. Vese [20] is presented, and accurate registration of noisy images is achieved by obtaining a hierarchical multiscale decomposition of the images and registering the resulting components. This approach enables successful registration of images that contain noise levels well beyond the level at which ordinary optimal linear registration fails. Image registration experiments demonstrate the accuracy and efficiency of the multiscale registration technique, and for all noise levels, the multiscale technique is as accurate as or more accurate than ordinary registration techniques.",
"title": ""
},
{
"docid": "f4440f6c069854c73fbc90d1d921fd7c",
"text": "In this paper we present Geckos, a new type of tangible objects which are tracked using a Force-Sensitive Resistance sensor. Geckos are based on low-cost permanent magnets and can also be used on non-horizontal surfaces. Unique pressure footprints are used to identify each tangible Gecko. Two types of tangible object designs are presented: Using a single magnet in combination with felt pads provides new pressure-based interaction modalities. Using multiple separate magnets it is possible to change the marker footprint dynamically and create new haptic experiences. The tangible object design and interaction are illustrated with example applications. We also give details on the feasibility and benefits of our tracking approach and show compatibility with other tracking technologies.",
"title": ""
},
{
"docid": "aed8a983fc25d2c1c71401b338d8f5f3",
"text": "Heart disease is the leading cause of death in the world over the past 10 years. Researchers have been using several data mining techniques to help health care professionals in the diagnosis of heart disease. Decision Tree is one of the successful data mining techniques used. However, most research has applied J4.8 Decision Tree, based on Gain Ratio and binary discretization. Gini Index and Information Gain are two other successful types of Decision Trees that are less used in the diagnosis of heart disease. Also other discretization techniques, voting method, and reduced error pruning are known to produce more accurate Decision Trees. This research investigates applying a range of techniques to different types of Decision Trees seeking better performance in heart disease diagnosis. A widely used benchmark data set is used in this research. To evaluate the performance of the alternative Decision Trees the sensitivity, specificity, and accuracy are calculated. The research proposes a model that outperforms J4.8 Decision Tree and Bagging algorithm in the diagnosis of heart disease patients.",
"title": ""
},
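Since the comparison above hinges on the split criteria themselves, a short sketch of how the Gini index and information gain are computed for a candidate split may be useful. The tiny binary "heart disease" label vectors are fabricated for the example and are not from any benchmark data set.

```python
# Gini index and information gain for a single candidate split (toy labels).
import numpy as np

def gini(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def split_scores(parent, left, right):
    wl, wr = len(left) / len(parent), len(right) / len(parent)
    gini_gain = gini(parent) - (wl * gini(left) + wr * gini(right))
    info_gain = entropy(parent) - (wl * entropy(left) + wr * entropy(right))
    return gini_gain, info_gain

parent = np.array([1, 1, 1, 0, 0, 0, 1, 0])          # 1 = disease, 0 = healthy
left   = np.array([1, 1, 1, 0])                      # e.g. "chest pain present"
right  = np.array([0, 0, 1, 0])                      # e.g. "chest pain absent"

g, ig = split_scores(parent, left, right)
print(f"Gini gain = {g:.3f}, information gain = {ig:.3f} bits")
```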
{
"docid": "74686e9acab0a4d41c87cadd7da01889",
"text": "Automatic analysis of biomedical time series such as electroencephalogram (EEG) and electrocardiographic (ECG) signals has attracted great interest in the community of biomedical engineering due to its important applications in medicine. In this work, a simple yet effective bag-of-words representation that is able to capture both local and global structure similarity information is proposed for biomedical time series representation. In particular, similar to the bag-of-words model used in text document domain, the proposed method treats a time series as a text document and extracts local segments from the time series as words. The biomedical time series is then represented as a histogram of codewords, each entry of which is the count of a codeword appeared in the time series. Although the temporal order of the local segments is ignored, the bag-of-words representation is able to capture high-level structural information because both local and global structural information are well utilized. The performance of the bag-of-words model is validated on three datasets extracted from real EEG and ECG signals. The experimental results demonstrate that the proposed method is not only insensitive to parameters of the bag-of-words model such as local segment length and codebook size, but also robust to noise.",
"title": ""
},
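A compact way to see the bag-of-words idea for time series described above is: slide a window over the signal to get local segments (the "words"), quantise them against a learned codebook, and histogram the codeword counts. The sketch below does exactly that with k-means as the codebook learner on a synthetic signal; the window length and codebook size are arbitrary small values and not the settings studied in the paper.

```python
# Bag-of-words representation of a 1-D time series (assumes scikit-learn).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 20 * np.pi, 1000)) + 0.1 * rng.normal(size=1000)

def extract_segments(x, length=16, step=4):
    segs = np.array([x[i:i + length] for i in range(0, len(x) - length + 1, step)])
    # Normalise each local segment so the codebook captures shape, not offset.
    return (segs - segs.mean(axis=1, keepdims=True)) / (segs.std(axis=1, keepdims=True) + 1e-8)

segments = extract_segments(signal)

codebook_size = 8
km = KMeans(n_clusters=codebook_size, n_init=10, random_state=0).fit(segments)
words = km.predict(segments)                         # codeword index per segment

histogram = np.bincount(words, minlength=codebook_size)
bow = histogram / histogram.sum()                    # final bag-of-words vector
print(np.round(bow, 3))
```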
{
"docid": "a90c56a22559807463b46d1c7ab36cb3",
"text": "We have studied manual motor function in a man deafferented by a severe peripheral sensory neuropathy. Motor power was almost unaffected. Our patients could produce a very wide range of preprogrammed finger movements with remarkable accuracy, involving complex muscle synergies of the hand and forearm muscles. He could perform individual finger movements and outline figures in the air with high eyes closed. He had normal pre- and postmovement EEG potentials, and showed the normal bi/triphasic pattern of muscle activation in agonist and antagonist muscles during fast limb movements. He could also move his thumb accurately through three different distances at three different speeds, and could produce three different levels of force at his thumb pad when required. Although he could not judge the weights of objects placed in his hands without vision, he was able to match forces applied by the experimenter to the pad of each thumb if he was given a minimal indication of thumb movement. Despite his success with these laboratory tasks, his hands were relatively useless to him in daily life. He was unable to grasp a pen and write, to fasten his shirt buttons or to hold a cup in one hand. Part of hist difficulty lay in the absence of any automatic reflex correction in his voluntary movements, and also to an inability to sustain constant levels of muscle contraction without visual feedback over periods of more than one or two seconds. He was also unable to maintain long sequences of simple motor programmes without vision.",
"title": ""
},
{
"docid": "966d650d8d186715dd1ee08effedce92",
"text": "Over the past few years, various tasks involving videos such as classification, description, summarization and question answering have received a lot of attention. Current models for these tasks compute an encoding of the video by treating it as a sequence of images and going over every image in the sequence, which becomes computationally expensive for longer videos. In this paper, we focus on the task of video classification and aim to reduce the computational cost by using the idea of distillation. Specifically, we propose a Teacher-Student network wherein the teacher looks at all the frames in the video but the student looks at only a small fraction of the frames in the video. The idea is to then train the student to minimize (i) the difference between the final representation computed by the student and the teacher and/or (ii) the difference between the distributions predicted by the teacher and the student. This smaller student network which involves fewer computations but still learns to mimic the teacher can then be employed at inference time for video classification. We experiment with the YouTube-8M dataset and show that the proposed student network can reduce the inference time by upto 30% with a negligent drop in the performance.",
"title": ""
},
{
"docid": "70b410094dd718d10e6ae8cd3f93c768",
"text": "Software developers and project managers are struggling to assess the appropriateness of agile processes to their development environments. This paper identifies limitations that apply to many of the published agile processes in terms of the types of projects in which their application may be problematic. INTRODUCTION As more organizations seek to gain competitive advantage through timely deployment of Internet-based services, developers are under increasing pressure to produce new or enhanced implementations quickly [2,8]. Agile software development processes were developed primarily to address this problem, that is, the problem of developing software in \"Internet time\". Agile approaches utilize technical and managerial processes that continuously adapt and adjust to (1) changes derived from experiences gained during development, (2) changes in software requirements and (3) changes in the development environment. Agile processes are intended to support early and quick production of working code. This is accomplished by structuring the development process into iterations, where an iteration focuses on delivering working code and other artifacts that provide value to the customer and, secondarily, to the project. Agile process proponents and critics often emphasize the code focus of these processes. Proponents often argue that code is the only deliverable that matters, and marginalize the role of analysis and design models and documentation in software creation and evolution. Agile process critics point out that the emphasis on code can lead to corporate memory loss because there is little emphasis on producing good documentation and models to support software creation and evolution of large, complex systems. The claims made by agile process proponents and critics lead to questions about what practices, techniques, and infrastructures are suitable for software development in today’s rapidly changing development environments. In particular, answers to questions related to the suitability of agile processes to particular application domains and development environments are often based on anecdotal accounts of experiences. In this paper we present what we perceive as limitations of agile processes based on our analysis of published works on agile processes [14]. Processes that name themselves “agile” vary greatly in values, practices, and application domains. It is therefore difficult to assess agile processes in general and identify limitations that apply to all agile processes. Our analysis [14] is based on a study of assumptions underlying Extreme Programming (XP) [3,5,6,10], Scrum [12,13], Agile Unified Process [11], Agile Modeling [1] and the principles stated by the Agile Alliance. It is mainly an analytical study, supported by experiences on a few XP projects conducted by the authors. THE AGILE ALLIANCE In recent years a number of processes claiming to be \"agile\" have been proposed in the literature. To avoid confusion over what it means for a process to be \"agile\", seventeen agile process methodologists came to an agreement on what \"agility\" means during a 2001 meeting where they discussed future trends in software development processes. One result of the meeting was the formation of the \"Agile Alliance\" and the publication of its manifesto (see http://www.agilealliance.org/principles.html). The manifesto of the \"Agile Alliance\" is a condensed definition of the values and goals of \"Agile Software Development\". 
This manifesto is detailed through a number of common principles for agile processes. The principles are listed below. 1. \"Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.\" 2. \"Business people and developers must work together daily throughout the project.\" 3. \"Welcome changing requirements, even late in development.\" 4. \"Deliver working software frequently.\" 5. \"Working software is the primary measure of progress.\" 6. \"Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.\" 7. \"The best architectures, requirements, and designs emerge from self-organizing teams.\" 8. \"The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.\" 9. \"Agile processes promote sustainable development.\" 10. \"Continuous attention to technical excellence and good design enhances agility.\" 11. \"Simplicity is essential.\" 12. \"Project teams evaluate their effectiveness at regular intervals and adjust their behavior accordingly.\" AN ANALYSIS OF AGILE PROCESSES In this section we discuss the limitations of agile processes that we have identified, based on our analysis of the Agile Alliance principles and assumptions underlying agile processes. The next subsection lists the managerial and technical assumptions we identified in our study [14], and the following subsection discusses the limitations derived from the assumptions. Underlying Assumptions The stated benefits of agile processes over traditional prescriptive processes are predicated on the validity of these assumptions. These assumptions are discussed in more detail in another paper [14]. Assumption 1: Customers are co-located with the development team and are readily available when needed by developers. Furthermore, the reliance on face-to-face communication requires that developers be located in close proximity to each other. Assumption 2: Documentation and software models do not play central roles in software development. Assumption 3: Software requirements and the environment in which software is developed evolve as the software is being developed. Assumption 4: Development processes that are dynamically adapted to changing project and product characteristics are more likely to produce high-quality products. Assumption 5: Developers have the experience needed to define and adapt their processes appropriately. In other words, an organization can form teams consisting of bright, highly-experienced problem solvers capable of effectively evolving their processes while they are being executed. Assumption 6: Project visibility can be achieved primarily through delivery of increments and a few metrics. Assumption 7: Rigorous evaluation of software artifacts (products and processes) can be restricted to frequent informal reviews and code testing. Assumption 8: Reusability and generality should not be goals of application-specific software development. Assumption 9: Cost of change does not dramatically increase over time. Assumption 10: Software can be developed in increments.
Assumption 11: There is no need to design for change because any change can be effectively handled by refactoring the code [9]. Limitations of Agile Processes The assumptions listed above do not hold for all software development environments in general, nor for all “agile” processes in particular. This should not be surprising; none of the agile processes is a silver bullet (despite the enthusiastic claims of some of its proponents). In this part we describe some of the situations in which agile processes may generally not be applicable. It is possible that some agile processes fit these assumptions better, while others may be able to be extended to address the limitations discussed here. Such extensions can involve incorporating principles and practices often associated with more predictive development practices into agile processes. 1. Limited support for distributed development",
"title": ""
},
{
"docid": "bd1cc759e636f8bf6828e758c27a0ca5",
"text": "Although personalised nutrition is frequently considered in the context of diet-gene interactions, increasingly, personalised nutrition is seen to exist at three levels. The first is personalised dietary advice using Internet-delivered services, which ultimately will become automated and which will also draw on mobile phone technology. The second level of personalised dietary advice will include phenotypic information on anthropometry, physical activity, clinical parameters and biochemical markers of nutritional status. It remains possible that in addition to personalised dietary advice based on phenotypic data, advice at that group or metabotype level may be offered where metabotypes are defined by a common metabolic profile. The third level of personalised nutrition will involve the use of genomic data. While the genomic aspect of personalised nutrition is often considered as its main driver, there are significant challenges to translation of data on SNP and diet into personalised advice. The majority of the published data on SNP and diet emanate from observational studies and as such do not offer any cause-effect associations. To achieve this, purpose-designed dietary intervention studies will be needed with subjects recruited according to their genotype. Extensive research indicates that consumers would welcome personalised dietary advice including dietary advice based on their genotype. Unlike personalised medicine where genotype data are linked to the risk of developing a disease, in personalised nutrition the genetic data relate to the optimal diet for a given genotype to reduce disease risk factors and thus there are few ethical and legal issues in personalised nutrition.",
"title": ""
},
{
"docid": "f56f2119b3e65970db35676fe1cac9ba",
"text": "While behavioral and social sciences occupations comprise one of the largest portions of the \"STEM\" workforce, most studies of diversity in STEM overlook this population, focusing instead on fields such as biomedical or physical sciences. This study evaluates major demographic trends and productivity in the behavioral and social sciences research (BSSR) workforce in the United States during the past decade. Our analysis shows that the demographic trends for different BSSR fields vary. In terms of gender balance, there is no single trend across all BSSR fields; rather, the problems are field-specific, and disciplines such as economics and political science continue to have more men than women. We also show that all BSSR fields suffer from a lack of racial and ethnic diversity. The BSSR workforce is, in fact, less representative of racial and ethnic minorities than are biomedical sciences or engineering. Moreover, in many BSSR subfields, minorities are less likely to receive funding. We point to various funding distribution patterns across different demographic groups of BSSR scientists, and discuss several policy implications.",
"title": ""
},
{
"docid": "dd726458660c3dfe05bd775df562e188",
"text": "Maternally deprived rats were treated with tianeptine (15 mg/kg) once a day for 14 days during their adult phase. Their behavior was then assessed using the forced swimming and open field tests. The BDNF, NGF and energy metabolism were assessed in the rat brain. Deprived rats increased the immobility time, but tianeptine reversed this effect and increased the swimming time; the BDNF levels were decreased in the amygdala of the deprived rats treated with saline and the BDNF levels were decreased in the nucleus accumbens within all groups; the NGF was found to have decreased in the hippocampus, amygdala and nucleus accumbens of the deprived rats; citrate synthase was increased in the hippocampus of non-deprived rats treated with tianeptine and the creatine kinase was decreased in the hippocampus and amygdala of the deprived rats; the mitochondrial complex I and II–III were inhibited, and tianeptine increased the mitochondrial complex II and IV in the hippocampus of the non-deprived rats; the succinate dehydrogenase was increased in the hippocampus of non-deprived rats treated with tianeptine. So, tianeptine showed antidepressant effects conducted on maternally deprived rats, and this can be attributed to its action on the neurochemical pathways related to depression.",
"title": ""
},
{
"docid": "19259e0b88e1f5bfbde873886f832e43",
"text": "Molecular biologists routinely clone genetic constructs from DNA segments and formulate plans to assemble them. However, manual assembly planning is complex, error prone and not scalable. We address this problem with an algorithm-driven DNA assembly planning software tool suite called Raven (http://www.ravencad.org/) that produces optimized assembly plans and allows users to apply experimental outcomes to redesign assembly plans interactively. We used Raven to calculate assembly plans for thousands of variants of five types of genetic constructs, as well as hundreds of constructs of variable size and complexity from the literature. Finally, we experimentally validated a subset of these assembly plans by reconstructing four recombinase-based 'genetic counter' constructs and two 'repressilator' constructs. We demonstrate that Raven's solutions are significantly better than unoptimized solutions at small and large scales and that Raven's assembly instructions are experimentally valid.",
"title": ""
},
{
"docid": "66f1279585c6d1a0a388faa91bd25c62",
"text": "Our research project is to design a readout IC for an ultrasonic transducer consisting of a matrix of more than 2000 elements. The IC and the matrix transducer will be put into the tip of a transesophageal probe for 3D echocardiography. A key building block of the readout IC, a programmable analog delay line, is presented in this paper. It is based on the time-interleaved sample-and-hold (S/H) principle. Compared with conventional analog delay lines, this design is simple, accurate and flexible. A prototype has been fabricated in a standard 0.35µm CMOS technology. Measurement results showing its functionality are presented.",
"title": ""
},
{
"docid": "6a763e49cdfd41b28922eb536d9404ed",
"text": "With recent advances in computer vision and graphics, it is now possible to generate videos with extremely realistic synthetic faces, even in real time. Countless applications are possible, some of which raise a legitimate alarm, calling for reliable detectors of fake videos. In fact, distinguishing between original and manipulated video can be a challenge for humans and computers alike, especially when the videos are compressed or have low resolution, as it often happens on social networks. Research on the detection of face manipulations has been seriously hampered by the lack of adequate datasets. To this end, we introduce a novel face manipulation dataset of about half a million edited images (from over 1000 videos). The manipulations have been generated with a state-of-the-art face editing approach. It exceeds all existing video manipulation datasets by at least an order of magnitude. Using our new dataset, we introduce benchmarks for classical image forensic tasks, including classification and segmentation, considering videos compressed at various quality levels. In addition, we introduce a benchmark evaluation for creating indistinguishable forgeries with known ground truth; for instance with generative refinement models.",
"title": ""
},
{
"docid": "8aa3007d5a14c63dc035b11f2df0793b",
"text": "To detect the smallest delay faults at a fault site, the longest path(s) through it must be tested at full speed. Most existing test generation tools are either inefficient in automatically identifying the longest testable paths due to the high computational complexity or do not support at-speed test using existing practical design-for-testability structures, such as scan design. In this work a test generation methodology for scan-based synchronous sequential circuits is presented, under two at-speed test strategies used in industry. The two strategies are compared and the test generation efficiency is evaluated on the ISCAS89 benchmark circuits.",
"title": ""
},
{
"docid": "7b13637b634b11b3061f7ebe0c64b3a6",
"text": "Analytical calculation methods for all the major components of the synchronous inductance of tooth-coil permanent-magnet synchronous machines are reevaluated in this paper. The inductance estimation is different in the tooth-coil machine compared with the one in the traditional rotating field winding machine. The accuracy of the analytical torque calculation highly depends on the estimated synchronous inductance. Despite powerful finite element method (FEM) tools, an accurate and fast analytical method is required at an early design stage to find an initial machine design structure with the desired performance. The results of the analytical inductance calculation are verified and assessed in terms of accuracy with the FEM simulation results and with the prototype measurement results.",
"title": ""
},
{
"docid": "1e9c7c97256e7778dbb1ef4f09c1b28e",
"text": "A new neural paradigm called diagonal recurrent neural network (DRNN) is presented. The architecture of DRNN is a modified model of the fully connected recurrent neural network with one hidden layer, and the hidden layer comprises self-recurrent neurons. Two DRNN's are utilized in a control system, one as an identifier called diagonal recurrent neuroidentifier (DRNI) and the other as a controller called diagonal recurrent neurocontroller (DRNC). A controlled plant is identified by the DRNI, which then provides the sensitivity information of the plant to the DRNC. A generalized dynamic backpropagation algorithm (DBP) is developed and used to train both DRNC and DRNI. Due to the recurrence, the DRNN can capture the dynamic behavior of a system. To guarantee convergence and for faster learning, an approach that uses adaptive learning rates is developed by introducing a Lyapunov function. Convergence theorems for the adaptive backpropagation algorithms are developed for both DRNI and DRNC. The proposed DRNN paradigm is applied to numerical problems and the simulation results are included.",
"title": ""
},
{
"docid": "436a250dc621d58d70bee13fd3595f06",
"text": "The solid-state transformer allows add-on intelligence to enhance power quality compatibility between source and load. It is desired to demonstrate the benefits gained by the use of such a device. Recent advancement in semiconductor devices and converter topologies facilitated a newly proposed intelligent universal transformer (IUT), which can isolate a disturbance from either source or load. This paper describes the basic circuit and the operating principle for the multilevel converter based IUT and its applications for medium voltages. Various power quality enhancement features are demonstrated with computer simulation for a complete IUT circuit.",
"title": ""
},
{
"docid": "18233af1857390bff51d2e713bc766d9",
"text": "Name disambiguation is a perennial challenge for any large and growing dataset but is particularly significant for scientific publication data where documents and ideas are linked through citations and depend on highly accurate authorship. Differentiating personal names in scientific publications is a substantial problem as many names are not sufficiently distinct due to the large number of researchers active in most academic disciplines today. As more and more documents and citations are published every year, any system built on this data must be continually retrained and reclassified to remain relevant and helpful. Recently, some incremental learning solutions have been proposed, but most of these have been limited to small-scale simulations and do not exhibit the full heterogeneity of the millions of authors and papers in real world data. In our work, we propose a probabilistic model that simultaneously uses a rich set of metadata and reduces the amount of pairwise comparisons needed for new articles. We suggest an approach to disambiguation that classifies in an incremental fashion to alleviate the need for retraining the model and re-clustering all papers and uses fewer parameters than other algorithms. Using a published dataset, we obtained the highest K-measure which is a geometric mean of cluster and author-class purity. Moreover, on a difficult author block from the Clarivate Analytics Web of Science, we obtain higher precision than other algorithms.",
"title": ""
},
{
"docid": "5d92f58e929a851097eae320eb9c3ddc",
"text": "In recent years, the study of genomic alterations and protein expression involved in the pathways of breast cancer carcinogenesis has provided an increasing number of targets for drugs development in the setting of metastatic breast cancer (i.e., trastuzumab, everolimus, palbociclib, etc.) significantly improving the prognosis of this disease. These drugs target specific molecular abnormalities that confer a survival advantage to cancer cells. On these bases, emerging evidence from clinical trials provided increasing proof that the genetic landscape of any tumor may dictate its sensitivity or resistance profile to specific agents and some studies have already showed that tumors treated with therapies matched with their molecular alterations obtain higher objective response rates and longer survival. Predictive molecular biomarkers may optimize the selection of effective therapies, thus reducing treatment costs and side effects. This review offers an overview of the main molecular pathways involved in breast carcinogenesis, the targeted therapies developed to inhibit these pathways, the principal mechanisms of resistance and, finally, the molecular biomarkers that, to date, are demonstrated in clinical trials to predict response/resistance to targeted treatments in metastatic breast cancer.",
"title": ""
}
] |
scidocsrr
|
aa5c36588b37e7012b1d4eaac8b31d2a
|
Automatic urban building boundary extraction from high resolution aerial images using an innovative model of active contours
|
[
{
"docid": "e144d8c0f046ad6cd2e5c71844b2b532",
"text": "Photogrammetry is the traditional method of surface reconstruction such as the generation of DTMs. Recently, LIDAR emerged as a new technology for rapidly capturing data on physical surfaces. The high accuracy and automation potential results in a quick delivery of DEMs/DTMs derived from the raw laser data. The two methods deliver complementary surface information. Thus it makes sense to combine data from the two sensors to arrive at a more robust and complete surface reconstruction. This paper describes two aspects of merging aerial imagery and LIDAR data. The establishment of a common reference frame is an absolute prerequisite. We solve this alignment problem by utilizing sensor-invariant features. Such features correspond to the same object space phenomena, for example to breaklines and surface patches. Matched sensor invariant features lend themselves to establishing a common reference frame. Feature-level fusion is performed with sensor specific features that are related to surface characteristics. We show the synergism between these features resulting in a richer and more abstract surface description.",
"title": ""
},
{
"docid": "85a076e58f4d117a37dfe6b3d68f5933",
"text": "We propose a new model for active contours to detect objects in a given image, based on techniques of curve evolution, Mumford-Shah (1989) functional for segmentation and level sets. Our model can detect objects whose boundaries are not necessarily defined by the gradient. We minimize an energy which can be seen as a particular case of the minimal partition problem. In the level set formulation, the problem becomes a \"mean-curvature flow\"-like evolving the active contour, which will stop on the desired boundary. However, the stopping term does not depend on the gradient of the image, as in the classical active contour models, but is instead related to a particular segmentation of the image. We give a numerical algorithm using finite differences. Finally, we present various experimental results and in particular some examples for which the classical snakes methods based on the gradient are not applicable. Also, the initial curve can be anywhere in the image, and interior contours are automatically detected.",
"title": ""
}
] |
[
{
"docid": "48c28572e5eafda1598a422fa1256569",
"text": "Future power networks will be characterized by safe and reliable functionality against physical and cyber attacks. This paper proposes a unified framework and advanced monitoring procedures to detect and identify network components malfunction or measurements corruption caused by an omniscient adversary. We model a power system under cyber-physical attack as a linear time-invariant descriptor system with unknown inputs. Our attack model generalizes the prototypical stealth, (dynamic) false-data injection and replay attacks. We characterize the fundamental limitations of both static and dynamic procedures for attack detection and identification. Additionally, we design provably-correct (dynamic) detection and identification procedures based on tools from geometric control theory. Finally, we illustrate the effectiveness of our method through a comparison with existing (static) detection algorithms, and through a numerical study.",
"title": ""
},
{
"docid": "fbecc8c4a8668d403df85b4e52348f6e",
"text": "Honeypots are more and more used to collect data on malicious activities on the Internet and to better understand the strategies and techniques used by attackers to compromise target systems. Analysis and modeling methodologies are needed to support the characterization of attack processes based on the data collected from the honeypots. This paper presents some empirical analyses based on the data collected from the Leurré.com honeypot platforms deployed on the Internet and presents some preliminary modeling studies aimed at fulfilling such objectives.",
"title": ""
},
{
"docid": "bb4afc6c50df5b4d32e4ce539932b0bd",
"text": "Traumatic brain injury (TBI) is a major health and socioeconomic problem that affects all societies. In recent years, patterns of injury have been changing, with more injuries, particularly contusions, occurring in older patients. Blast injuries have been identified as a novel entity with specific characteristics. Traditional approaches to the classification of clinical severity are the subject of debate owing to the widespread policy of early sedation and ventilation in more severely injured patients, and are being supplemented with structural and functional neuroimaging. Basic science research has greatly advanced our knowledge of the mechanisms involved in secondary damage, creating opportunities for medical intervention and targeted therapies; however, translating this research into patient benefit remains a challenge. Clinical management has become much more structured and evidence based since the publication of guidelines covering many aspects of care. In this Review, we summarise new developments and current knowledge and controversies, focusing on moderate and severe TBI in adults. Suggestions are provided for the way forward, with an emphasis on epidemiological monitoring, trauma organisation, and approaches to management.",
"title": ""
},
{
"docid": "690603bd37dd8376893fc1bb1946fc03",
"text": "Recently, the use of herbal medicines has been increased all over the world due to their therapeutic effects and fewer adverse effects as compared to the modern medicines. However, many herbal drugs and herbal extracts despite of their impressive in-vitro findings demonstrates less or negligible in-vivo activity due to their poor lipid solubility or improper molecular size, resulting in poor absorption and hence poor bioavailability. Nowadays with the advancement in the technology, novel drug delivery systems open the door towards the development of enhancing bioavailability of herbal drug delivery systems. For last one decade many novel carriers such as liposomes, microspheres, nanoparticles, transferosomes, ethosomes, lipid based systems etc. have been reported for successful modified delivery of various herbal drugs. Many herbal compounds including quercetin, genistein, naringin, sinomenine, piperine, glycyrrhizin and nitrile glycoside have demonstrated capability to enhance the bioavailability. The objective of this review is to summarize various available novel drug delivery technologies which have been developed for delivery of drugs (herbal), and to achieve better therapeutic response. An attempt has also been made to compile a profile on bioavailability enhancers of herbal origin with the mechanism of action (wherever reported) and studies on improvement in drug bioavailability, exhibited particularly by natural compounds.",
"title": ""
},
{
"docid": "9a5127b10d6e47fab88027bb402172bb",
"text": "Despite all that's been written about mergers and acquisitions, even the experts know surprisingly little about them. The author recently headed up a year-long study sponsored by Harvard Business School on the subject of M&A activity. In-depth findings will emerge over the next few years, but the research has already revealed some interesting results. Most intriguing is the notion that, although academics, consultants, and businesspeople lump M&As together, they represent very different strategic activities. Acquisitions occur for the following reasons: to deal with overcapacity through consolidation in mature industries; to roll up competitors in geographically fragmented industries; to extend into new products and markets; as a substitute for R&D; and to exploit eroding industry boundaries by inventing an industry. The different strategic intents present distinct integration challenges. For instance, if you acquire a company because your industry has excess capacity, you have to determine which plants to shut down and which people to let go. If, on the other hand, you buy a company because it has developed an important technology, your challenge is to keep the acquisition's best engineers from jumping ship. These scenarios require the acquiring company to engage in nearly opposite managerial behaviors. The author explores each type of M&A--its strategic intent and the integration challenges created by that intent. He underscores the importance of the acquiring company's assessment of the acquired group's culture. Depending on the type of M&A, approaches to the culture in place must vary, as will the level to which culture interferes with integration. He draws from the experiences of such companies as Cisco, Viacom, and BancOne to exemplify the different kinds of M&As.",
"title": ""
},
{
"docid": "89d4143e7845d191433882f3fa5aaa26",
"text": "There is a large variety of objects and appliances in human environments, such as stoves, coffee dispensers, juice extractors, and so on. It is challenging for a roboticist to program a robot for each of these object types and for each of their instantiations. In this work, we present a novel approach to manipulation planning based on the idea that many household objects share similarly-operated object parts. We formulate the manipulation planning as a structured prediction problem and design a deep learning model that can handle large noise in the manipulation demonstrations and learns features from three different modalities: point-clouds, language and trajectory. In order to collect a large number of manipulation demonstrations for different objects, we developed a new crowd-sourcing platform called Robobarista. We test our model on our dataset consisting of 116 objects with 249 parts along with 250 language instructions, for which there are 1225 crowd-sourced manipulation demonstrations. We further show that our robot can even manipulate objects it has never seen before. Keywords— Robotics and Learning, Crowd-sourcing, Manipulation",
"title": ""
},
{
"docid": "158c535b44fe81ca7194d5a0b386f2b5",
"text": "Deep networks are increasingly being applied to problems involving image synthesis, e.g., generating images from textual descriptions and reconstructing an input image from a compact representation. Supervised training of image-synthesis networks typically uses a pixel-wise loss (PL) to indicate the mismatch between a generated image and its corresponding target image. We propose instead to use a loss function that is better calibrated to human perceptual judgments of image quality: the multiscale structural-similarity score (MS-SSIM) [1]. Because MS-SSIM is differentiable, it is easily incorporated into gradient-descent learning. We compare the consequences of using MS-SSIM versus PL loss on training autoencoders. Human observers reliably prefer images synthesized by MS-SSIM-optimized models over those synthesized by PL-optimized models, for two distinct PL measures (L1 and L2 distances). We also explore the effect of training objective on image encoding and analyze conditions under which perceptually-optimized representations yield better performance on image classification. Finally, we demonstrate the superiority of perceptually-optimized networks for super-resolution imaging. We argue that significant advances can be made in modeling images through the use of training objectives that are well aligned to characteristics of human perception.",
"title": ""
},
{
"docid": "c692dd35605c4af62429edef6b80c121",
"text": "As one of the most important mid-level features of music, chord contains rich information of harmonic structure that is useful for music information retrieval. In this paper, we present a chord recognition system based on the N-gram model. The system is time-efficient, and its accuracy is comparable to existing systems. We further propose a new method to construct chord features for music emotion classification and evaluate its performance on commercial song recordings. Experimental results demonstrate the advantage of using chord features for music classification and retrieval.",
"title": ""
},
{
"docid": "ab07e92f052a03aac253fabadaea4ab3",
"text": "As news is increasingly accessed on smartphones and tablets, the need for personalising news app interactions is apparent. We report a series of three studies addressing key issues in the development of adaptive news app interfaces. We first surveyed users' news reading preferences and behaviours; analysis revealed three primary types of reader. We then implemented and deployed an Android news app that logs users' interactions with the app. We used the logs to train a classifier and showed that it is able to reliably recognise a user according to their reader type. Finally we evaluated alternative, adaptive user interfaces for each reader type. The evaluation demonstrates the differential benefit of the adaptation for different users of the news app and the feasibility of adaptive interfaces for news apps.",
"title": ""
},
{
"docid": "d676598b1afe341079b4705284d6a911",
"text": "Quality of underwater image is poor due to the environment of water medium. The physical property of water medium causes attenuation of light travels through the water medium, resulting in low contrast, blur, inhomogeneous lighting, and color diminishing of the underwater images. This paper extends the methods of enhancing the quality of underwater image. The proposed method consists of two stages. At the first stage, the contrast correction technique is applied to the image, where the image is applied with the modified Von Kreis hypothesis and stretching the image into two different intensity images at the average value with respects to Rayleigh distribution. At the second stage, the color correction technique is applied to the image where the image is first converted into hue-saturation-value (HSV) color model. The modification of the color component increases the image color performance. Qualitative and quantitative analyses indicate that the proposed method outperforms other state-of-the-art methods in terms of contrast, details, and noise reduction.",
"title": ""
},
{
"docid": "04756d4dfc34215c8acb895ecfcfb406",
"text": "The author describes five separate projects he has undertaken in the intersection of computer science and Canadian income tax law. They are:A computer-assisted instruction (CAI) course for teaching income tax, programmed using conventional CAI techniques;\nA “document modeling” computer program for generating the documentation for a tax-based transaction and advising the lawyer-user as to what decisions should be made and what the tax effects will be, programmed in a conventional language;\nA prototype expert system for determining the income tax effects of transactions and tax-defined relationships, based on a PROLOG representation of the rules of the Income Tax Act;\nAn intelligent CAI (ICAI) system for generating infinite numbers of randomized quiz questions for students, computing the answers, and matching wrong answers to particular student errors, based on a PROLOG representation of the rules of the Income Tax Act; and\nA Hypercard stack for providing information about income tax, enabling both education and practical research to follow the user's needs path.\n\nThe author shows that non-AI approaches are a way to produce packages quickly and efficiently. Their primary disadvantage is the massive rewriting required when the tax law changes. AI approaches based on PROLOG, on the other hand, are harder to develop to a practical level but will be easier to audit and maintain. The relationship between expert systems and CAI is discussed.",
"title": ""
},
{
"docid": "bb0f1e1384d91412fe3f0f0a51e91b8a",
"text": "This paper reports on an integrated navigation algorithm for the visual simultaneous localization and mapping (SLAM) robotic area coverage problem. In the robotic area coverage problem, the goal is to explore and map a given target area within a reasonable amount of time. This goal necessitates the use of minimally redundant overlap trajectories for coverage efficiency; however, visual SLAM’s navigation estimate will inevitably drift over time in the absence of loop-closures. Therefore, efficient area coverage and good SLAM navigation performance represent competing objectives. To solve this decision-making problem, we introduce perception-driven navigation, an integrated navigation algorithm that automatically balances between exploration and revisitation using a reward framework. This framework accounts for SLAM localization uncertainty, area coverage performance, and the identification of good candidate regions in the environment for visual perception. Results are shown for both a hybrid simulation and real-world demonstration of a visual SLAM system for autonomous underwater ship hull inspection.",
"title": ""
},
{
"docid": "c6f17a0d5f91c3cab9183bbc5fa2dfc3",
"text": "In human beings, head is one of the most important parts. Injuries in this part can cause serious damages to overall health. In some cases, they can be fatal. The present paper analyses the deformations of a helmet mounted on a human head, using finite element method. It studies the amount of von Mises pressure and stress caused by a vertical blow from above on the skull. The extant paper aims at developing new methods for improving the design and achieving more energy absorption by applying more appropriate models. In this study, a thermoplastic damper is applied and modelled in order to reduce the amount of energy transferred to the skull and to minimize the damages inflicted on human head.",
"title": ""
},
{
"docid": "65f487474652d87022da819815e6bced",
"text": "Chinese input is one of the key challenges for Chinese PC users. This paper proposes a statistical approach to Pinyin-based Chinese input. This approach uses a trigram-based language model and a statistically based segmentation. Also, to deal with real input, it also includes a typing model which enables spelling correction in sentence-based Pinyin input, and a spelling model for English which enables modeless Pinyin input.",
"title": ""
},
{
"docid": "d9e0fd8abb80d6256bd86306b7112f20",
"text": "Visible light LEDs, due to their numerous advantages, are expected to become the dominant indoor lighting technology. These lights can also be switched ON/OFF at high frequency, enabling their additional use for wireless communication and indoor positioning. In this article, visible LED light--based indoor positioning systems are surveyed and classified into two broad categories based on the receiver structure. The basic principle and architecture of each design category, along with various position computation algorithms, are discussed and compared. Finally, several new research, implementation, commercialization, and standardization challenges are identified and highlighted for this relatively novel and interesting indoor localization technology.",
"title": ""
},
{
"docid": "90724c0dddf147d91a7562ef72666213",
"text": "Neural networks with low-precision weights and activations offer compelling efficiency advantages over their full-precision equivalents. The two most frequently discussed benefits of quantization are reduced memory consumption, and a faster forward pass when implemented with efficient bitwise operations. We propose a third benefit of very low-precision neural networks: improved robustness against some adversarial attacks, and in the worst case, performance that is on par with full-precision models. We focus on the very low-precision case where weights and activations are both quantized to ±1, and note that stochastically quantizing weights in just one layer can sharply reduce the impact of iterative attacks. We observe that non-scaled binary neural networks exhibit a similar effect to the original defensive distillation procedure that led to gradient masking, and a false notion of security. We address this by conducting both black-box and white-box experiments with binary models that do not artificially mask gradients.1",
"title": ""
},
{
"docid": "e28380d637a86b95f3c73f9cca812f6b",
"text": "Given the well-known tradeoffs between fairness, performance, and efficiency, modern cluster schedulers often prefer instantaneous fairness as their primary objective to ensure performance isolation between users and groups. However, instantaneous, short-term convergence to fairness often does not result in noticeable long-term benefits. Instead, we propose an altruistic, long-term approach, CARBYNE, where jobs yield fractions of their allocated resources without impacting their own completion times. We show that leftover resources collected via altruisms of many jobs can then be rescheduled to further secondary goals such as application-level performance and cluster efficiency without impacting performance isolation. Deployments and large-scale simulations show that CARBYNE closely approximates the stateof-the-art solutions (e.g., DRF [27]) in terms of performance isolation, while providing 1.26× better efficiency and 1.59× lower average job completion time.",
"title": ""
},
{
"docid": "08633aeea88d7938656c7ebd4a812f8a",
"text": "How important are friendships in determining success by individuals and teams in complex collaborative environments? By combining a novel data set containing the dynamics of millions of ad hoc teams from the popular multiplayer online first person shooter Halo: Reach with survey data on player demographics, play style, psychometrics and friendships derived from an anonymous online survey, we investigate the impact of friendship on collaborative and competitive performance. In addition to finding significant differences in player behavior across these variables, we find that friendships exert a strong influence, leading to both improved individual and team performance - even after controlling for the overall expertise of the team - and increased pro-social behaviors. Players also structure their in-game activities around social opportunities, and as a result hidden friendship ties can be accurately inferred directly from behavioral time series. Virtual environments that enable such friendship effects will thus likely see improved collaboration and competition.",
"title": ""
},
{
"docid": "e813eadbd5c8942f5ab01fdeda85c023",
"text": "Imagination is considered an important component of the creative process, and many psychologists agree that imagination is based on our perceptions, experiences, and conceptual knowledge, recombining them into novel ideas and impressions never before experienced. As an attempt to model this account of imagination, we introduce the Associative Conceptual Imagination (ACI) framework that uses associative memory models in conjunction with vector space models. ACI is a framework for learning conceptual knowledge and then learning associations between those concepts and artifacts, which facilitates imagining and then creating new and interesting artifacts. We discuss the implications of this framework, its creative potential, and possible ways to implement it in practice. We then demonstrate an initial prototype that can imagine and then generate simple images.",
"title": ""
},
{
"docid": "036526b572707282a50bc218b72e5862",
"text": "Linear classification is a useful tool in machine learning and data mining. For some data in a rich dimensional space, the performance (i.e., testing accuracy) of linear classifiers has shown to be close to that of nonlinear classifiers such as kernel methods, but training and testing speed is much faster. Recently, many research works have developed efficient optimization methods to construct linear classifiers and applied them to some large-scale applications. In this paper, we give a comprehensive survey on the recent development of this active research area.",
"title": ""
}
] |
scidocsrr
|
1de6cd2dddd2c198c34ed9b2cbf443da
|
SpecGuard: Spectrum misuse detection in dynamic spectrum access systems
|
[
{
"docid": "946330bdcc96711090f15dbaf772edf6",
"text": "This paper deals with the estimation of the channel impulse response (CIR) in orthogonal frequency division multiplexed (OFDM) systems. In particular, we focus on two pilot-aided schemes: the maximum likelihood estimator (MLE) and the Bayesian minimum mean square error estimator (MMSEE). The advantage of the former is that it is simpler to implement as it needs no information on the channel statistics. On the other hand, the MMSEE is expected to have better performance as it exploits prior information about the channel. Theoretical analysis and computer simulations are used in the comparisons. At SNR values of practical interest, the two schemes are found to exhibit nearly equal performance, provided that the number of pilot tones is sufficiently greater than the CIRs length. Otherwise, the MMSEE is superior. In any case, the MMSEE is more complex to implement.",
"title": ""
}
] |
[
{
"docid": "36810a398c951511d283c19a1077574b",
"text": "The laser dye rhodamine 123 is shown to be a specific probe for the localization of mitochondria in living cells. By virtue of its selectivity for mitochondria and its fluorescent properties, the detectability of mitochondria stained with rhodamine 123 is significantly improved over that provided by conventional light microscopic techniques. With the use of rhodamine 123, it is possible to detect alterations in mitochondrial distribution following transformation by Rous sarcoma virus and changes in the shape and organization of mitochondria induced by colchicine treatment.",
"title": ""
},
{
"docid": "a47b13043f033f211779379274a69e2f",
"text": "Attack techniques based on code reuse continue to enable real-world exploits bypassing all current mitigations. Code randomization defenses greatly improve resilience against code reuse. Unfortunately, sophisticated modern attacks such as JITROP can circumvent randomization by discovering the actual code layout on the target and relocating the attack payload on the fly. Hence, effective code randomization additionally requires that the code layout cannot be leaked to adversaries. Previous approaches to leakage-resilient diversity have either relied on hardware features that are not available in all processors, particularly resource-limited processors commonly found in mobile devices, or they have had high memory overheads. We introduce a code randomization technique that avoids these limitations and scales down to mobile and embedded devices: Leakage-Resilient Layout Randomization (LR2). Whereas previous solutions have relied on virtualization, x86 segmentation, or virtual memory support, LR2 merely requires the underlying processor to enforce a W⊕X policy—a feature that is virtually ubiquitous in modern processors, including mobile and embedded variants. Our evaluation shows that LR2 provides the same security as existing virtualization-based solutions while avoiding design decisions that would prevent deployment on less capable yet equally vulnerable systems. Although we enforce execute-only permissions in software, LR2 is as efficient as the best-in-class virtualization-based solution.",
"title": ""
},
{
"docid": "5bd3626d03619cf300efb70ce0664513",
"text": "A front-illuminated global-shutter CMOS image sensor has been developed with super 35-mm optical format. We have developed a chip-on-chip integration process to realize a front-illuminated image sensor stacked with 2 diced logic chips through 38K micro bump interconnections. The global-shutter pixel achieves a parasitic light sensitivity of −99.6dB. The stacked device allows highly parallel column ADCs and high-speed output interfaces to attain a frame rate of 480 fps with 8.3M-pixel resolution.",
"title": ""
},
{
"docid": "c88370dfcf79534c019fd797f055f393",
"text": "Mobile Online Social Networks (mOSNs) have recently grown in popularity. With the ubiquitous use of mobile devices and a rapid shift of technology and access to OSNs, it is important to examine the impact of mobile OSNs from a privacy standpoint. We present a taxonomy of ways to study privacy leakage and report on the current status of known leakages. We find that all mOSNs in our study exhibit some leakage of private information to third parties. Novel concerns include combination of new features unique to mobile access with the leakage in OSNs that we had examined earlier.",
"title": ""
},
{
"docid": "54cf7e25572775d52ae38a50fe345ae2",
"text": "The use of medical images has its main aim in the detection of potential abnormalities. This goal is accurately achieved with the synergy between the ability in recognizing unique image patterns and finding the relationship between them and possible diagnoses. One of the methods used to aid this process is the extrapolation of important features from the images called texture; texture is an important source of visual information and is a key component in image analysis.",
"title": ""
},
{
"docid": "a77c113c691a61101cba1136aaf4b90c",
"text": "This paper describes the acquisition, preparation and properties of a corpus extracted from the official documents of the United Nations (UN). This corpus is available in all 6 official languages of the UN, consisting of around 300 million words per language. We describe the methods we used for crawling, document formatting, and sentence alignment. This corpus also includes a common test set for machine translation. We present the results of a French-Chinese machine translation experiment performed on this corpus.",
"title": ""
},
{
"docid": "47baaddefd3476ce55d39a0f111ade5a",
"text": "We propose a novel method for classifying resume data of job applicants into 27 different job categories using convolutional neural networks. Since resume data is costly and hard to obtain due to its sensitive nature, we use domain adaptation. In particular, we train a classifier on a large number of freely available job description snippets and then use it to classify resume data. We empirically verify a reasonable classification performance of our approach despite having only a small amount of labeled resume data available.",
"title": ""
},
{
"docid": "f250e8879618f73d5e23676a96f02e81",
"text": "Brain oscillatory activity is associated with different cognitive processes and plays a critical role in meditation. In this study, we investigated the temporal dynamics of oscillatory changes during Sahaj Samadhi meditation (a concentrative form of meditation that is part of Sudarshan Kriya yoga). EEG was recorded during Sudarshan Kriya yoga meditation for meditators and relaxation for controls. Spectral and coherence analysis was performed for the whole duration as well as specific blocks extracted from the initial, middle, and end portions of Sahaj Samadhi meditation or relaxation. The generation of distinct meditative states of consciousness was marked by distinct changes in spectral powers especially enhanced theta band activity during deep meditation in the frontal areas. Meditators also exhibited increased theta coherence compared to controls. The emergence of the slow frequency waves in the attention-related frontal regions provides strong support to the existing claims of frontal theta in producing meditative states along with trait effects in attentional processing. Interestingly, increased frontal theta activity was accompanied reduced activity (deactivation) in parietal–occipital areas signifying reduction in processing associated with self, space and, time.",
"title": ""
},
{
"docid": "cd8c1b260455cdd286fbd3d80b4796f7",
"text": "As a mobile phone has various advanced functionalities or features, usability issues are increasingly challenging. Due to the particular characteristics of a mobile phone, typical usability evaluation methods and heuristics, most of which are relevant to a software system, might not effectively be applied to a mobile phone. Another point to consider is that usability evaluation activities should help designers find usability problems easily and produce better design solutions. To support usability practitioners of the mobile phone industry, we propose a framework for evaluating the usability of a mobile phone, based on a multilevel, hierarchical model of usability factors, in an analytic way. The model was developed on the basis of a set of collected usability problems and our previous study on a conceptual framework for identifying usability impact factors. It has multi-abstraction levels, each of which considers the usability of a mobile phone from a particular perspective. As there are goal-means relationships between adjacent levels, a range of usability issues can be interpreted in a holistic as well as diagnostic way. Another advantage is that it supports two different types of evaluation approaches: task-based and interface-based. To support both evaluation approaches, we developed four sets of checklists, each of which is concerned, respectively, with task-based evaluation and three different interface types: Logical User Interface (LUI), Physical User Interface (PUI) and Graphical User Interface (GUI). The proposed framework specifies an approach to quantifying usability so that several usability aspects are collectively measured to give a single score with the use of the checklists. A small case study was conducted in order to examine the applicability of the framework and to identify the aspects of the framework to be improved. It showed that it could be a useful tool for evaluating the usability of a mobile phone. Based on the case study, we improved the framework in order that usability practitioners can use it more easily and consistently. 2009 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "08ccf9eacd74773f035dfdce4c9ca250",
"text": "The postmodern organization has a design paradox in which leaders are concerned with efficiency and control as well as complex functioning. Traditional leadership theory has limited applicability to postmodern organizations as it is mainly focused on efficiency and control. As a result, a new theory of leadership that recognizes the design paradox has been proposed: complexity leadership theory. This theory conceptualizes the integration of formal leadership roles with complex functioning. Our particular focus is on leadership style and its effect as an enabler of complex functioning. We introduce dynamic network analysis, a new methodology for modeling and analyzing organizations as complex adaptive networks. Dynamic network analysis is a methodology that quantifies complexity leadership theory. Data was collected from a real-world network organization and dynamic network analysis used to explore the effects of leadership style as an enabler of complex functioning. Results and implications are discussed in relation to leadership theory and practice.",
"title": ""
},
{
"docid": "21dbc0447860c05f391809a57874d6b1",
"text": "BACKGROUND\nVegetarian and vegan diets have become more popular among adolescents and young adults. However, few studies have investigated the nutritional status of vegans, who may be at risk of nutritional deficiencies.\n\n\nOBJECTIVE\nTo compare dietary intake and nutritional status of Finnish long-term vegans and non-vegetarians.\n\n\nMETHODS\nDietary intake and supplement use were estimated using three-day dietary records. Nutritional status was assessed by measuring biomarkers in plasma, serum, and urine samples. Vegans' (n = 22) data was compared with those of sex- and age-matched non-vegetarians (n = 19).\n\n\nRESULTS\nAll vegans adhered strictly to their diet; however, individual variability was marked in food consumption and supplementation habits. Dietary intakes of key nutrients, vitamins B12 and D, were lower (P < 0.001) in vegans than in non-vegetarians. Nutritional biomarker measurements showed lower concentrations of serum 25-hydroxyvitamin D3 (25(OH)D3), iodine and selenium (corrected for multiple comparisons, P < 0.001), Vegans showed more favorable fatty acid profiles (P < 0.001) as well as much higher concentrations of polyphenols such as genistein and daidzein (P < 0.001). Eicosapentaenoic acid proportions in vegans were higher than expected. The median concentration of iodine in urine was below the recommended levels in both groups.\n\n\nCONCLUSIONS\nLong-term consumption of a vegan diet was associated with some favorable laboratory measures but also with lowered concentrations of key nutrients compared to reference values. This study highlights the need for nutritional guidance to vegans.",
"title": ""
},
{
"docid": "9b451aa93627d7b44acc7150a1b7c2d0",
"text": "BACKGROUND\nAerobic endurance exercise has been shown to improve higher cognitive functions such as executive control in healthy subjects. We tested the hypothesis that a 30-minute individually customized endurance exercise program has the potential to enhance executive functions in patients with major depressive disorder.\n\n\nMETHOD\nIn a randomized within-subject study design, 24 patients with DSM-IV major depressive disorder and 10 healthy control subjects performed 30 minutes of aerobic endurance exercise at 2 different workload levels of 40% and 60% of their predetermined individual 4-mmol/L lactic acid exercise capacity. They were then tested with 4 standardized computerized neuropsychological paradigms measuring executive control functions: the task switch paradigm, flanker task, Stroop task, and GoNogo task. Performance was measured by reaction time. Data were gathered between fall 2000 and spring 2002.\n\n\nRESULTS\nWhile there were no significant exercise-dependent alterations in reaction time in the control group, for depressive patients we observed a significant decrease in mean reaction time for the congruent Stroop task condition at the 60% energy level (p = .016), for the incongruent Stroop task condition at the 40% energy level (p = .02), and for the GoNogo task at both energy levels (40%, p = .025; 60%, p = .048). The exercise procedures had no significant effect on reaction time in the task switch paradigm or the flanker task.\n\n\nCONCLUSION\nA single 30-minute aerobic endurance exercise program performed by depressed patients has positive effects on executive control processes that appear to be specifically subserved by the anterior cingulate.",
"title": ""
},
{
"docid": "7ca8483e91485d29b58f0f98194c13a3",
"text": "Managing Network Function (NF) service chains requires careful system resource management. We propose NFVnice, a user space NF scheduling and service chain management framework to provide fair, efficient and dynamic resource scheduling capabilities on Network Function Virtualization (NFV) platforms. The NFVnice framework monitors load on a service chain at high frequency (1000Hz) and employs backpressure to shed load early in the service chain, thereby preventing wasted work. Borrowing concepts such as rate proportional scheduling from hardware packet schedulers, CPU shares are computed by accounting for heterogeneous packet processing costs of NFs, I/O, and traffic arrival characteristics. By leveraging cgroups, a user space process scheduling abstraction exposed by the operating system, NFVnice is capable of controlling when network functions should be scheduled. NFVnice improves NF performance by complementing the capabilities of the OS scheduler but without requiring changes to the OS's scheduling mechanisms. Our controlled experiments show that NFVnice provides the appropriate rate-cost proportional fair share of CPU to NFs and significantly improves NF performance (throughput and loss) by reducing wasted work across an NF chain, compared to using the default OS scheduler. NFVnice achieves this even for heterogeneous NFs with vastly different computational costs and for heterogeneous workloads.",
"title": ""
},
{
"docid": "b7ca3a123963bb2f0bfbe586b3bc63d0",
"text": "Objective In symptom-dependent diseases such as functional dyspepsia (FD), matching the pattern of epigastric symptoms, including severity, kind, and perception site, between patients and physicians is critical. Additionally, a comprehensive examination of the stomach, duodenum, and pancreas is important for evaluating the origin of such symptoms. Methods FD-specific symptoms (epigastric pain, epigastric burning, early satiety, and postprandial fullness) and other symptoms (regurgitation, nausea, belching, and abdominal bloating) as well as the perception site of the above symptoms were investigated in healthy subjects using a new questionnaire with an illustration of the human body. A total of 114 patients with treatment-resistant dyspeptic symptoms were evaluated for their pancreatic exocrine function using N-benzoyl-L-tyrosyl-p-aminobenzoic acid. Results A total of 323 subjects (men:women, 216:107; mean age, 52.1 years old) were initially enrolled. Most of the subjects felt the FD-specific symptoms at the epigastrium, while about 20% felt them at other abdominal sites. About 30% of expressed as epigastric symptoms were FD-nonspecific symptoms. At the epigastrium, epigastric pain and epigastric burning were mainly felt at the upper part, and postprandial fullness and early satiety were felt at the lower part. The prevalence of patients with pancreatic exocrine dysfunction was 71% in the postprandial fullness group, 68% in the epigastric pain group, and 82% in the diarrhea group. Conclusion We observed mismatch in the perception site and expression between the epigastric symptoms of healthy subjects and FD-specific symptoms. Postprandial symptoms were often felt at the lower part of the epigastrium, and pancreatic exocrine dysfunction may be involved in the FD symptoms, especially for treatment-resistant dyspepsia patients.",
"title": ""
},
{
"docid": "b6ced605309f023c08e746d6edbc2e85",
"text": "Mobile money, also known as branchless banking, leverages ubiquitous cellular networks to bring much-needed financial services to the unbanked in the developing world. These services are often deployed as smartphone apps, and although marketed as secure, these applications are often not regulated as strictly as traditional banks, leaving doubt about the truth of such claims. In this article, we evaluate these claims and perform the first in-depth measurement analysis of branchless banking applications. We first perform an automated analysis of all 46 known Android mobile money apps across the 246 known mobile money providers from 2015. We then perform a comprehensive manual teardown of the registration, login, and transaction procedures of a diverse 15% of these apps. We uncover pervasive vulnerabilities spanning botched certification validation, do-it-yourself cryptography, and other forms of information leakage that allow an attacker to impersonate legitimate users, modify transactions, and steal financial records. These findings show that the majority of these apps fail to provide the protections needed by financial services. In an expanded re-evaluation one year later, we find that these systems have only marginally improved their security. Additionally, we document our experiences working in this sector for future researchers and provide recommendations to improve the security of this critical ecosystem. Finally, through inspection of providers’ terms of service, we also discover that liability for these problems unfairly rests on the shoulders of the customer, threatening to erode trust in branchless banking and hinder efforts for global financial inclusion.",
"title": ""
},
{
"docid": "8f9e330f8e9be18c249964a34fccba9a",
"text": "© Bulletin de la S. M. F., 1965, tous droits réservés. L’accès aux archives de la revue « Bulletin de la S. M. F. » (http://smf. emath.fr/Publications/Bulletin/Presentation.html) implique l’accord avec les conditions générales d’utilisation (http://www.numdam.org/legal.php). Toute utilisation commerciale ou impression systématique est constitutive d’une infraction pénale. Toute copie ou impression de ce fichier doit contenir la présente mention de copyright.",
"title": ""
},
{
"docid": "f9d2ccdbbc2dd5a0ea5635c53a6b1e50",
"text": "OBJECTIVES\nThe article provides an overview of current trends in personal sensor, signal and imaging informatics, that are based on emerging mobile computing and communications technologies enclosed in a smartphone and enabling the provision of personal, pervasive health informatics services.\n\n\nMETHODS\nThe article reviews examples of these trends from the PubMed and Google scholar literature search engines, which, by no means claim to be complete, as the field is evolving and some recent advances may not be documented yet.\n\n\nRESULTS\nThere exist critical technological advances in the surveyed smartphone technologies, employed in provision and improvement of diagnosis, acute and chronic treatment and rehabilitation health services, as well as in education and training of healthcare practitioners. However, the most emerging trend relates to a routine application of these technologies in a prevention/wellness sector, helping its users in self-care to stay healthy.\n\n\nCONCLUSIONS\nSmartphone-based personal health informatics services exist, but still have a long way to go to become an everyday, personalized healthcare-provisioning tool in the medical field and in a clinical practice. Key main challenge for their widespread adoption involve lack of user acceptance striving from variable credibility and reliability of applications and solutions as they a) lack evidence- based approach; b) have low levels of medical professional involvement in their design and content; c) are provided in an unreliable way, influencing negatively its usability; and, in some cases, d) being industry-driven, hence exposing bias in information provided, for example towards particular types of treatment or intervention procedures.",
"title": ""
},
{
"docid": "f9d1fcca8fb8f83bdb2391d4fe0ba4ef",
"text": "Evidence is mounting that Convolutional Networks (ConvNets) are the most effective representation learning method for visual recognition tasks. In the common scenario, a ConvNet is trained on a large labeled dataset (source) and the feed-forward units activation of the trained network, at a certain layer of the network, is used as a generic representation of an input image for a task with relatively smaller training set (target). Recent studies have shown this form of representation transfer to be suitable for a wide range of target visual recognition tasks. This paper introduces and investigates several factors affecting the transferability of such representations. It includes parameters for training of the source ConvNet such as its architecture, distribution of the training data, etc. and also the parameters of feature extraction such as layer of the trained ConvNet, dimensionality reduction, etc. Then, by optimizing these factors, we show that significant improvements can be achieved on various (17) visual recognition tasks. We further show that these visual recognition tasks can be categorically ordered based on their similarity to the source task such that a correlation between the performance of tasks and their similarity to the source task w.r.t. the proposed factors is observed.",
"title": ""
},
{
"docid": "2e04cc1954712b10b75bc9b6d8ab56f2",
"text": "Textual information found in scene images provides high level semantic information about the image and its context and it can be leveraged for better scene understanding. In this paper we address the problem of scene text retrieval: given a text query, the system must return all images containing the queried text. The novelty of the proposed model consists in the usage of a single shot CNN architecture that predicts at the same time bounding boxes and a compact text representation of the words in them. In this way, the text based image retrieval task can be casted as a simple nearest neighbor search of the query text representation over the outputs of the CNN over the entire image database. Our experiments demonstrate that the proposed architecture outperforms previous state-of-the-art while it offers a significant increase in processing speed.",
"title": ""
}
] |
scidocsrr
|
7ead1f7436dc8394a61a983028b8dcb0
|
How language production shapes language form and comprehension
|
[
{
"docid": "d1d3607b8a5cb0158d00de9e6d366f85",
"text": "This paper investigates the role of resource allocation as a source of processing difficulty in human sentence comprehension. The paper proposes a simple information-theoretic characterization of processing difficulty as the work incurred by resource reallocation during parallel, incremental, probabilistic disambiguation in sentence comprehension, and demonstrates its equivalence to the theory of Hale [Hale, J. (2001). A probabilistic Earley parser as a psycholinguistic model. In Proceedings of NAACL (Vol. 2, pp. 159-166)], in which the difficulty of a word is proportional to its surprisal (its negative log-probability) in the context within which it appears. This proposal subsumes and clarifies findings that high-constraint contexts can facilitate lexical processing, and connects these findings to well-known models of parallel constraint-based comprehension. In addition, the theory leads to a number of specific predictions about the role of expectation in syntactic comprehension, including the reversal of locality-based difficulty patterns in syntactically constrained contexts, and conditions under which increased ambiguity facilitates processing. The paper examines a range of established results bearing on these predictions, and shows that they are largely consistent with the surprisal theory.",
"title": ""
},
{
"docid": "bd9f01cad764a03f1e6cded149b9adbd",
"text": "Psycholinguistic research has shown that the influence of abstract syntactic knowledge on performance is shaped by particular sentences that have been experienced. To explore this idea, the authors applied a connectionist model of sentence production to the development and use of abstract syntax. The model makes use of (a) error-based learning to acquire and adapt sequencing mechanisms and (b) meaning-form mappings to derive syntactic representations. The model is able to account for most of what is known about structural priming in adult speakers, as well as key findings in preferential looking and elicited production studies of language acquisition. The model suggests how abstract knowledge and concrete experience are balanced in the development and use of syntax.",
"title": ""
}
] |
[
{
"docid": "45e93443f9479744404d950d8649f1c9",
"text": "In this paper we investigate the impact of candidate terms filtering using linguistic information on the accuracy of automatic keyphrase extraction from scientific papers. According to linguistic knowledge, the noun phrases are most likely to be keyphrases. However the definition of a noun phrase can vary from a system to another. We have identified five POS tag sequence definitions of a noun phrase in keyphrase extraction literature and proposed a new definition. We estimated experimentally the accuracy of a keyphrase extraction system using different noun phrase filters in order to determine which noun phrase definition yields to the best results. Conference Topic Text mining and information extraction",
"title": ""
},
{
"docid": "63bd93cf0294d71db4aa0eb7b9a39fa2",
"text": "Sleep researchers in different disciplines disagree about how fully dreaming can be explained in terms of brain physiology. Debate has focused on whether REM sleep dreaming is qualitatively different from nonREM (NREM) sleep and waking. A review of psychophysiological studies shows clear quantitative differences between REM and NREM mentation and between REM and waking mentation. Recent neuroimaging and neurophysiological studies also differentiate REM, NREM, and waking in features with phenomenological implications. Both evidence and theory suggest that there are isomorphisms between the phenomenology and the physiology of dreams. We present a three-dimensional model with specific examples from normally and abnormally changing conscious states.",
"title": ""
},
{
"docid": "3db1505c98ecb39ad11374d1a7a13ca3",
"text": "Distributed Denial-of-Service (DDoS) attacks are usually launched through the botnet, an “army” of compromised nodes hidden in the network. Inferential tools for DDoS mitigation should accordingly enable an early and reliable discrimination of the normal users from the compromised ones. Unfortunately, the recent emergence of attacks performed at the application layer has multiplied the number of possibilities that a botnet can exploit to conceal its malicious activities. New challenges arise, which cannot be addressed by simply borrowing the tools that have been successfully applied so far to earlier DDoS paradigms. In this paper, we offer basically three contributions: 1) we introduce an abstract model for the aforementioned class of attacks, where the botnet emulates normal traffic by continually learning admissible patterns from the environment; 2) we devise an inference algorithm that is shown to provide a consistent (i.e., converging to the true solution as time elapses) estimate of the botnet possibly hidden in the network; and 3) we verify the validity of the proposed inferential strategy on a test-bed environment. Our tests show that, for several scenarios of implementation, the proposed botnet identification algorithm needs an observation time in the order of (or even less than) 1 min to identify correctly almost all bots, without affecting the normal users’ activity.",
"title": ""
},
{
"docid": "c745458a3113a28cb0c7935e83b92ea1",
"text": "Reinforcement Learning (RL) has been effectively used to solve complex problems given careful design of the problem and algorithm parameters. However standard RL approaches do not scale particularly well with the size of the problem and often require extensive engineering on the part of the designer to minimize the search space. To alleviate this problem, we present a model-free policy-based approach called Exploration from Demonstration (EfD) that uses human demonstrations to guide search space exploration. We use statistical measures of RL algorithms to provide feedback to the user about the agent’s uncertainty and use this to solicit targeted demonstrations useful from the agent’s perspective. The demonstrations are used to learn an exploration policy that actively guides the agent towards important aspects of the problem. We instantiate our approach in a gridworld and a popular arcade game and validate its performance under different experimental conditions. We show how EfD scales to large problems and provides convergence speed-ups over traditional exploration and interactive learning methods.",
"title": ""
},
{
"docid": "b82750baa5a775a00b72e19d3fd5d2a1",
"text": "We assessed the rate of detection rate of recurrent prostate cancer by PET/CT using anti-3-18F-FACBC, a new synthetic amino acid, in comparison to that using 11C-choline as part of an ongoing prospective single-centre study. Included in the study were 15 patients with biochemical relapse after initial radical treatment of prostate cancer. All the patients underwent anti-3-18F-FACBC PET/CT and 11C-choline PET/CT within a 7-day period. The detection rates using the two compounds were determined and the target–to-background ratios (TBR) of each lesion are reported. No adverse reactions to anti-3-18F-FACBC PET/CT were noted. On a patient basis, 11C-choline PET/CT was positive in 3 patients and negative in 12 (detection rate 20 %), and anti-3-18F-FACBC PET/CT was positive in 6 patients and negative in 9 (detection rate 40 %). On a lesion basis, 11C-choline detected 6 lesions (4 bone, 1 lymph node, 1 local relapse), and anti-3-18F-FACBC detected 11 lesions (5 bone, 5 lymph node, 1 local relapse). All 11C-choline-positive lesions were also identified by anti-3-18F-FACBC PET/CT. The TBR of anti-3-18F-FACBC was greater than that of 11C-choline in 8/11 lesions, as were image quality and contrast. Our preliminary results indicate that anti-3-18F-FACBC may be superior to 11C-choline for the identification of disease recurrence in the setting of biochemical failure. Further studies are required to assess efficacy of anti-3-18F-FACBC in a larger series of prostate cancer patients.",
"title": ""
},
{
"docid": "bccb8e4cf7639dbcd3896e356aceec8d",
"text": "Over 50 million people worldwide suffer from epilepsy. Traditional diagnosis of epilepsy relies on tedious visual screening by highly trained clinicians from lengthy EEG recording that contains the presence of seizure (ictal) activities. Nowadays, there are many automatic systems that can recognize seizure-related EEG signals to help the diagnosis. However, it is very costly and inconvenient to obtain long-term EEG data with seizure activities, especially in areas short of medical resources. We demonstrate in this paper that we can use the interictal scalp EEG data, which is much easier to collect than the ictal data, to automatically diagnose whether a person is epileptic. In our automated EEG recognition system, we extract three classes of features from the EEG data and build Probabilistic Neural Networks (PNNs) fed with these features. We optimize the feature extraction parameters and combine these PNNs through a voting mechanism. As a result, our system achieves an impressive 94.07% accuracy.",
"title": ""
},
{
"docid": "e2817500683f4eea7e4ed9e0484b303a",
"text": "This paper presents the Transport Disruption ontology, a formal framework for modelling travel and transport related events that have a disruptive impact on traveller’s journeys. We discuss related models, describe how transport events and their impacts are captured, and outline use of the ontology within an interlinked repository of the travel information to support intelligent transport systems.",
"title": ""
},
{
"docid": "8c51c464d9137eec4600a5df5c6b451a",
"text": "An increasing number of disasters (natural and man-made) with a large number of victims and significant social and economical losses are observed in the past few years. Although particular events can always be attributed to fate, it is improving the disaster management that have to contribute to decreasing damages and ensuring proper care for citizens in affected areas. Some of the lessons learned in the last several years give clear indications that the availability, management and presentation of geo-information play a critical role in disaster management. However, all the management techniques that are being developed are understood by, and confined to the intellectual community and hence lack mass participation. Awareness of the disasters is the only effective way in which one can bring about mass participation. Hence, any disaster management is successful only when the general public has some awareness about the disaster. In the design of such awareness program, intelligent mapping through analysis and data sharing also plays a very vital role. The analytical capabilities of GIS support all aspects of disaster management: planning, response and recovery, and records management. The proposed GIS based awareness program in this paper would improve the currently practiced disaster management programs and if implemented, would result in a proper dosage of awareness and caution to the general public, which in turn would help to cope with the dangerous activities of disasters in future.",
"title": ""
},
{
"docid": "b5cc41f689a1792b544ac66a82152993",
"text": "0020-7225/$ see front matter 2009 Elsevier Ltd doi:10.1016/j.ijengsci.2009.08.001 * Corresponding author. Tel.: +66 2 9869009x220 E-mail address: thanan@siit.tu.ac.th (T. Leephakp Nowadays, Pneumatic Artificial Muscle (PAM) has become one of the most widely-used fluid-power actuators which yields remarkable muscle-like properties such as high force to weight ratio, soft and flexible structure, minimal compressed-air consumption and low cost. To obtain optimum design and usage, it is necessary to understand mechanical behaviors of the PAM. In this study, the proposed models are experimentally derived to describe mechanical behaviors of the PAMs. The experimental results show a non-linear relationship between contraction as well as air pressure within the PAMs and a pulling force of the PAMs. Three different sizes of PAMs available in industry are studied for empirical modeling and simulation. The case studies are presented to verify close agreement on the simulated results to the experimental results when the PAMs perform under various loads. 2009 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "800aa2ecdf0a29c7fa7860c6b0618a6b",
"text": "This paper presents three topological classes of dc-to-dc converters, totaling nine converters (each class with three buck, boost, and buck-boost voltage transfer function topologies), which offer continuous input and output energy flow, applicable and mandatory for renewable energy source, maximum power point tracking and maximum source energy extraction. A current sourcing output caters for converter module output parallel connection. The first class of three topologies employs both series input and output inductance, while anomalously the other two classes of six related topologies employ only either series input (three topologies) or series output (three topologies) inductance. All nine converter topologies employ the same elements, while additional load shunting capacitance creates a voltage sourcing output. Converter time-domain simulations and experimental results for the converters support and extol the concepts and analysis presented.",
"title": ""
},
{
"docid": "043b51b50f17840508b0dfb92c895fc9",
"text": "Over the years, several security measures have been employed to combat the menace of insecurity of lives and property. This is done by preventing unauthorized entrance into buildings through entrance doors using conventional and electronic locks, discrete access code, and biometric methods such as the finger prints, thumb prints, the iris and facial recognition. In this paper, a prototyped door security system is designed to allow a privileged user to access a secure keyless door where valid smart card authentication guarantees an entry. The model consists of hardware module and software which provides a functionality to allow the door to be controlled through the authentication of smart card by the microcontroller unit. (",
"title": ""
},
{
"docid": "c200b79726ca0b441bc1311975bf0008",
"text": "This article introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90nm to 22nm and beyond. At microarchitectural level, McPAT includes models for the fundamental components of a complete chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, and integrated system components such as memory controllers and Ethernet controllers. At circuit level, McPAT supports detailed modeling of critical-path timing, area, and power. At technology level, McPAT models timing, area, and power for the device types forecast in the ITRS roadmap. McPAT has a flexible XML interface to facilitate its use with many performance simulators.\n Combined with a performance simulator, McPAT enables architects to accurately quantify the cost of new ideas and assess trade-offs of different architectures using new metrics such as Energy-Delay-Area2 Product (EDA2P) and Energy-Delay-Area Product (EDAP). This article explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting trade-offs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies from cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks for manycore designs at the 22nm technology shows that 8-core clustering gives the best energy-delay product, whereas when die area is taken into account, 4-core clustering gives the best EDA2P and EDAP.",
"title": ""
},
{
"docid": "0c0d0b6d4697b1a0fc454b995bcda79a",
"text": "Online multiplayer games, such as Gears of War and Halo, use skill-based matchmaking to give players fair and enjoyable matches. They depend on a skill rating system to infer accurate player skills from historical data. TrueSkill is a popular and effective skill rating system, working from only the winner and loser of each game. This paper presents an extension to TrueSkill that incorporates additional information that is readily available in online shooters, such as player experience, membership in a squad, the number of kills a player scored, tendency to quit, and skill in other game modes. This extension, which we call TrueSkill2, is shown to significantly improve the accuracy of skill ratings computed from Halo 5 matches. TrueSkill2 predicts historical match outcomes with 68% accuracy, compared to 52% accuracy for TrueSkill.",
"title": ""
},
{
"docid": "e16b4b93913db0f37032224e07a0c057",
"text": "Large number of weights in deep neural networks makes the models difficult to be deployed in low memory environments such as, mobile phones, IOT edge devices as well as “inferencing as a service” environments on cloud. Prior work has considered reduction in the size of the models, through compression techniques like pruning, quantization, Huffman encoding etc. However, efficient inferencing using the compressed models has received little attention, specially with the Huffman encoding in place. In this paper, we propose efficient parallel algorithms for inferencing of single image and batches, under various memory constraints. Our experimental results show that our approach of using variable batch size for inferencing achieves 15-25% performance improvement in the inference throughput for AlexNet, while maintaining memory and latency constraints.",
"title": ""
},
{
"docid": "96e10f0858818ce150dba83882557aee",
"text": "Embedding and visualizing large-scale high-dimensional data in a two-dimensional space is an important problem since such visualization can reveal deep insights out of complex data. Most of the existing embedding approaches, however, run on an excessively high precision, ignoring the fact that at the end, embedding outputs are converted into coarsegrained discrete pixel coordinates in a screen space. Motivated by such an observation and directly considering pixel coordinates in an embedding optimization process, we accelerate Barnes-Hut tree-based t-distributed stochastic neighbor embedding (BH-SNE), known as a state-of-the-art 2D embedding method, and propose a novel method called PixelSNE, a highly-efficient, screen resolution-driven 2D embedding method with a linear computational complexity in terms of the number of data items. Our experimental results show the significantly fast running time of PixelSNE by a large margin against BH-SNE, while maintaining the minimal degradation in the embedding quality. Finally, the source code of our method is publicly available at https: //github.com/awesome-davian/sasne.",
"title": ""
},
{
"docid": "cf639e8a3037d94d2e110a2a11411dc6",
"text": "Memory-based collaborative filtering (CF) has been studied extensively in the literature and has proven to be successful in various types of personalized recommender systems. In this paper, we develop a probabilistic framework for memory-based CF (PMCF). While this framework has clear links with classical memory-based CF, it allows us to find principled solutions to known problems of CF-based recommender systems. In particular, we show that a probabilistic active learning method can be used to actively query the user, thereby solving the \"new user problem.\" Furthermore, the probabilistic framework allows us to reduce the computational cost of memory-based CF by working on a carefully selected subset of user profiles, while retaining high accuracy. We report experimental results based on two real-world data sets, which demonstrate that our proposed PMCF framework allows an accurate and efficient prediction of user preferences.",
"title": ""
},
{
"docid": "15d3618efa3413456c6aebf474b18c92",
"text": "The aim of this paper is to elucidate the implications of quantum computing in present cryptography and to introduce the reader to basic post-quantum algorithms. In particular the reader can delve into the following subjects: present cryptographic schemes (symmetric and asymmetric), differences between quantum and classical computing, challenges in quantum computing, quantum algorithms (Shor’s and Grover’s), public key encryption schemes affected, symmetric schemes affected, the impact on hash functions, and post quantum cryptography. Specifically, the section of Post-Quantum Cryptography deals with different quantum key distribution methods and mathematicalbased solutions, such as the BB84 protocol, lattice-based cryptography, multivariate-based cryptography, hash-based signatures and code-based cryptography. Keywords—quantum computers; post-quantum cryptography; Shor’s algorithm; Grover’s algorithm; asymmetric cryptography; symmetric cryptography",
"title": ""
},
{
"docid": "ba0d63c3e6b8807e1a13b36bc30d5d72",
"text": "Weighted median, in the form of either solver or filter, has been employed in a wide range of computer vision solutions for its beneficial properties in sparsity representation. But it is hard to be accelerated due to the spatially varying weight and the median property. We propose a few efficient schemes to reduce computation complexity from O(r2) to O(r) where r is the kernel size. Our contribution is on a new joint-histogram representation, median tracking, and a new data structure that enables fast data access. The effectiveness of these schemes is demonstrated on optical flow estimation, stereo matching, structure-texture separation, image filtering, to name a few. The running time is largely shortened from several minutes to less than 1 second. The source code is provided in the project website.",
"title": ""
},
{
"docid": "38c25450464202b975e2ab1f54b70f3a",
"text": "A neonatal intensive care unit (NICU) provides critical services to preterm and high-risk infants. Over the years, many tools and techniques have been introduced to support the clinical decisions made by specialists in the NICU. This study systematically reviewed the different technologies used in neonatal decision support systems (DSS), including cognitive analysis, artificial neural networks, data mining techniques, multi-agent systems, and highlighted their role in patient diagnosis, prognosis, monitoring, and healthcare management. Articles on NICU DSS were surveyed, Searches were based on the PubMed, Science Direct, and IEEE databases and only English articles published after 1990 were included. The overall search strategy was to retrieve articles that included terms that were related to “NICU Decision Support Systems” or “Artificial Intelligence” and “Neonatal”. Different methods and artificial intelligence techniques used in NICU decision support systems were assessed and related outcomes, variables, methods and performance measures was reported and discussed. Because of the dynamic, heterogeneous, and real-time environment of the NICU, the processes and medical rules that are followed within a NICU are complicated, and the data records that are produced are complex and frequent. Therefore, a single tool or technology could not cover all the needs of a NICU. However, it is important to examine and deploy new temporal data mining approaches and system architectures, such as multi-agent systems, services, and sensors, to provide integrated real-time solutions for NICU.",
"title": ""
},
{
"docid": "6a3210307c98b4311271c29da142b134",
"text": "Accelerating innovation in renewable energy (RE) requires not just more finance, but finance servicing the entire innovation landscape. Given that finance is not ‘neutral’, more information is required on the quality of finance that meets technology and innovation stage-specific financing needs for the commercialization of RE technologies. We investigate the relationship between different financial actors with investment in different RE technologies. We construct a new deal-level dataset of global RE asset finance from 2004 to 2014 based on Bloomberg New Energy Finance data, that distinguishes 10 investor types (e.g. private banks, public banks, utilities) and 11 RE technologies into which they invest. We also construct a heuristic investment risk measure that varies with technology, time and country of investment. We find that particular investor types have preferences for particular risk levels, and hence particular types of RE. Some investor types invested into far riskier portfolios than others, and financing of individual high-risk technologies depended on investment by specific investor types. After the 2008 financial crisis, state-owned or controlled companies and banks emerged as the high-risk taking locomotives of RE asset finance. We use these preliminary results to formulate new questions for future RE policy, and encourage further research.",
"title": ""
}
] |
scidocsrr
|
453fd2fcd597a406d77b6fa4aca788eb
|
Skeleton-Based Action Recognition with Synchronous Local and Non-local Spatio-temporal Learning and Frequency Attention
|
[
{
"docid": "210a1dda2fc4390a5b458528b176341e",
"text": "Many classic methods have shown non-local self-similarity in natural images to be an effective prior for image restoration. However, it remains unclear and challenging to make use of this intrinsic property via deep networks. In this paper, we propose a non-local recurrent network (NLRN) as the first attempt to incorporate non-local operations into a recurrent neural network (RNN) for image restoration. The main contributions of this work are: (1) Unlike existing methods that measure self-similarity in an isolated manner, the proposed non-local module can be flexibly integrated into existing deep networks for end-to-end training to capture deep feature correlation between each location and its neighborhood. (2) We fully employ the RNN structure for its parameter efficiency and allow deep feature correlation to be propagated along adjacent recurrent states. This new design boosts robustness against inaccurate correlation estimation due to severely degraded images. (3) We show that it is essential to maintain a confined neighborhood for computing deep feature correlation given degraded images. This is in contrast to existing practice [43] that deploys the whole image. Extensive experiments on both image denoising and super-resolution tasks are conducted. Thanks to the recurrent non-local operations and correlation propagation, the proposed NLRN achieves superior results to state-of-the-art methods with many fewer parameters. The code is available at https://github.com/Ding-Liu/NLRN.",
"title": ""
},
{
"docid": "b5453d9e4385d5a5ff77997ad7e3f4f0",
"text": "We propose a new measure, the method noise, to evaluate and compare the performance of digital image denoising methods. We first compute and analyze this method noise for a wide class of denoising algorithms, namely the local smoothing filters. Second, we propose a new algorithm, the nonlocal means (NL-means), based on a nonlocal averaging of all pixels in the image. Finally, we present some experiments comparing the NL-means algorithm and the local smoothing filters.",
"title": ""
}
] |
[
{
"docid": "bef86730221684b8e9236cb44179b502",
"text": "secure software. In order to find the real-life issues, this case study was initiated to investigate whether the existing FDD can withstand requirements change and software security altogether. The case study was performed in controlled environment – in a course called Application Development—a four credit hours course at UTM. The course began by splitting up the class to seven software development groups and two groups were chosen to implement the existing process of FDD. After students were given an introduction to FDD, they started to adapt the processes to their proposed system. Then students were introduced to the basic concepts on how to make software systems secure. Though, they were still new to security and FDD, however, this study produced a lot of interest among the students. The students seemed to enjoy the challenge of creating secure system using FDD model.",
"title": ""
},
{
"docid": "f202e380dfd1022e77a04212394be7e1",
"text": "As usage of cloud computing increases, customers are mainly concerned about choosing cloud infrastructure with sufficient security. Concerns are greater in the multitenant environment on a public cloud. This paper addresses the security assessment of OpenStack open source cloud solution and virtual machine instances with different operating systems hosted in the cloud. The methodology and realized experiments target vulnerabilities from both inside and outside the cloud. We tested four different platforms and analyzed the security assessment. The main conclusions of the realized experiments show that multi-tenant environment raises new security challenges, there are more vulnerabilities from inside than outside and that Linux based Ubuntu, CentOS and Fedora are less vulnerable than Windows. We discuss details about these vulnerabilities and show how they can be solved by appropriate patches and other solutions. Keywords-Cloud Computing; Security Assessment; Virtualization.",
"title": ""
},
{
"docid": "a7607444b58f0e86000c7f2d09551fcc",
"text": "Background modeling is a critical component for various vision-based applications. Most traditional methods tend to be inefficient when solving large-scale problems. In this paper, we introduce sparse representation into the task of large-scale stable-background modeling, and reduce the video size by exploring its discriminative frames. A cyclic iteration process is then proposed to extract the background from the discriminative frame set. The two parts combine to form our sparse outlier iterative removal (SOIR) algorithm. The algorithm operates in tensor space to obey the natural data structure of videos. Experimental results show that a few discriminative frames determine the performance of the background extraction. Furthermore, SOIR can achieve high accuracy and high speed simultaneously when dealing with real video sequences. Thus, SOIR has an advantage in solving large-scale tasks.",
"title": ""
},
{
"docid": "b91f80bc17de9c4e15ec80504e24b045",
"text": "Motivated by the design of the well-known Enigma machine, we present a novel ultra-lightweight encryption scheme, referred to as Hummingbird, and its applications to a privacy-preserving identification and mutual authentication protocol for RFID applications. Hummingbird can provide the designed security with a small block size and is therefore expected to meet the stringent response time and power consumption requirements described in the ISO protocol without any modification of the current standard. We show that Hummingbird is resistant to the most common attacks such as linear and differential cryptanalysis. Furthermore, we investigate some properties for integrating the Hummingbird into a privacypreserving identification and mutual authentication protocol.",
"title": ""
},
{
"docid": "0f853c6ccf6ce4cf025050135662f725",
"text": "This paper describes a technique of applying Genetic Algorithm (GA) to network Intrusion Detection Systems (IDSs). A brief overview of the Intrusion Detection System, genetic algorithm, and related detection techniques is presented. Parameters and evolution process for GA are discussed in detail. Unlike other implementations of the same problem, this implementation considers both temporal and spatial information of network connections in encoding the network connection information into rules in IDS. This is helpful for identification of complex anomalous behaviors. This work is focused on the TCP/IP network protocols.",
"title": ""
},
{
"docid": "b7673dbe46a1118511d811241940e328",
"text": "A 100-MHz–2-GHz closed-loop analog in-phase/ quadrature correction circuit for digital clocks is presented. The proposed circuit consists of a phase-locked loop- type architecture for quadrature error correction. The circuit corrects the phase error to within a 1.5° up to 1 GHz and to within 3° at 2 GHz. It consumes 5.4 mA from a 1.2 V supply at 2 GHz. The circuit was designed in UMC 0.13-<inline-formula> <tex-math notation=\"LaTeX\">$\\mu \\text{m}$ </tex-math></inline-formula> mixed-mode CMOS with an active area of <inline-formula> <tex-math notation=\"LaTeX\">$102\\,\\,\\mu {\\mathrm{ m}} \\times 95\\,\\,\\mu {\\mathrm{ m}}$ </tex-math></inline-formula>. The impact of duty cycle distortion has been analyzed. High-frequency quadrature measurement related issues have been discussed. The proposed circuit was used in two different applications for which the functionality has been verified.",
"title": ""
},
{
"docid": "b32ec798991a2e02813ba617dc93a828",
"text": "To investigate the mechanisms of excitotoxic effects of glutamate on human neuroblastoma SH-SY5Y cells. SH-SY5Y cell viability was measured by MTT assay. Other damaged profile was detected by lactate dehydrogenase (LDH) release and by 4′, 6-diamidino-2-phenylindole (DAPI) staining. The cytosolic calcium concentration was tested by calcium influx assay. The glutamate-induced oxidative stress was analyzed by cytosolic glutathione assay, superoxide dismutase (SOD) assay and extracellular malondialdehyde (MDA) assay. Glutamate treatment caused damage in SHSY5Y cells, including the decrease of cell viability, the increase of LDH release and the alterations of morphological structures. Furthermore, the concentration of cytoplasmic calcium in SH-SY5Y cells was not changed within 20 min following glutamate treatment, while cytosolic calcium concentration significantly increased within 24 h after glutamate treatment, which could not be inhibited by MK801, an antagonist of NMDA receptors, or by LY341495, an antagonist of metabotropic glutamate receptors. On the other hand, oxidative damage was observed in SH-SY5Y cells treated with glutamate, including decreases in glutathione content and SOD activity, and elevation of MDA level, all of which could be alleviated by an antioxidant Tanshinone IIA (Tan IIA, a major active ingredient from a Chinese plant Salvia Miltiorrhiza Bge). Glutamate exerts toxicity in human neuroblastoma SH-SY5Y cells possibly through oxidative damage, not through calcium homeostasis destruction mediated by NMDA receptors. 探讨谷氨酸导致人神经母细胞瘤细胞(SH-SY5Y cells)兴奋性毒损伤的机制。 MTT法检测SH-SY5Y细胞存活率; 测定乳酸脱氢酶释放量观察细胞损伤程度; DAPI染色法观察细胞凋亡形态学特点; 钙流法检测胞浆钙离子浓度变化; 以胞内谷胱甘肽、 超氧化物歧化酶活性和胞外丙二醛含量检测谷氨酸引发SH-SY5Y细胞的氧化应激状态。 谷氨酸导致SH-SY5Y细胞受损, 包括存活率下降、 乳酸脱氢酶释放量增多及形态结构发生改变; 谷氨酸处理20 min 后, 胞浆钙离子浓度无显著改变, 而处理24 h 后, 胞浆钙离子大量增加, 且MK801 (NMDA受体拮抗剂)及LY341495 (代谢型谷氨酸受体拮抗剂)均不能抑制钙离子内流的增多; 谷氨酸可导致SH-SY5Y氧化损伤, 包括胞内谷胱甘肽含量减少、 超氧化物歧化酶活性降低、 胞外脂质过氧化产物丙二醛水平升高等, 而丹参酮IIA (一种抗氧化剂)可减轻这些氧化损伤。 谷氨酸导致SH-SY5Y细胞兴奋性毒损伤可能是通过氧化损伤产生的, 而不依赖于NMDA 受体介导的钙稳态的破坏。",
"title": ""
},
{
"docid": "3085d2de614b6816d7a66cb62823824e",
"text": "Plastic debris is known to undergo fragmentation at sea, which leads to the formation of microscopic particles of plastic; the so called 'microplastics'. Due to their buoyant and persistent properties, these microplastics have the potential to become widely dispersed in the marine environment through hydrodynamic processes and ocean currents. In this study, the occurrence and distribution of microplastics was investigated in Belgian marine sediments from different locations (coastal harbours, beaches and sublittoral areas). Particles were found in large numbers in all samples, showing the wide distribution of microplastics in Belgian coastal waters. The highest concentrations were found in the harbours where total microplastic concentrations of up to 390 particles kg(-1) dry sediment were observed, which is 15-50 times higher than reported maximum concentrations of other, similar study areas. The depth profile of sediment cores suggested that microplastic concentrations on the beaches reflect the global plastic production increase.",
"title": ""
},
{
"docid": "a16be992aa947c8c5d2a7c9899dfbcd8",
"text": "The effect of the Eureka Spring (ES) appliance was investigated on 37 consecutively treated, noncompliant patients with bilateral Class II malocclusions. Lateral cephalographs were taken at the start of orthodontic treatment (T1), at insertion of the ES (T2), and at removal of the ES (T3). The average treatment interval between T2 and T3 was four months. The Class II correction occurred almost entirely by dentoalveolar movement and was almost equally distributed between the maxillary and mandibular dentitions. The rate of molar correction was 0.7 mm/mo. There was no change in anterior face height, mandibular plane angle, palatal plane angle, or gonial angle with treatment. There was a 2 degrees change in the occlusal plane resulting from intrusion of the maxillary molar and the mandibular incisor. Based on the results in this sample, the ES appliance was very effective in correcting Class II malocclusions in noncompliant patients without increasing the vertical dimension.",
"title": ""
},
{
"docid": "04b14e2795afc0faaa376bc17ead0aaf",
"text": "In this paper, an integrated MEMS gyroscope array method composed of two levels of optimal filtering was designed to improve the accuracy of gyroscopes. In the firstlevel filtering, several identical gyroscopes were combined through Kalman filtering into a single effective device, whose performance could surpass that of any individual sensor. The key of the performance improving lies in the optimal estimation of the random noise sources such as rate random walk and angular random walk for compensating the measurement values. Especially, the cross correlation between the noises from different gyroscopes of the same type was used to establish the system noise covariance matrix and the measurement noise covariance matrix for Kalman filtering to improve the performance further. Secondly, an integrated Kalman filter with six states was designed to further improve the accuracy with the aid of external sensors such as magnetometers and accelerometers in attitude determination. Experiments showed that three gyroscopes with a bias drift of 35 degree per hour could be combined into a virtual gyroscope with a drift of 1.07 degree per hour through the first-level filter, and the bias drift was reduced to 0.53 degree per hour after the second-level filtering. It proved that the proposed integrated MEMS gyroscope array is capable of improving the accuracy of the MEMS gyroscopes, which provides the possibility of using these low cost MEMS sensors in high-accuracy application areas.",
"title": ""
},
{
"docid": "2fdf6538c561e05741baafe43ec6f145",
"text": "Models based on deep convolutional networks have dominated recent image interpretation tasks; we investigate whether models which are also recurrent are effective for tasks involving sequences, visual and otherwise. We describe a class of recurrent convolutional architectures which is end-to-end trainable and suitable for large-scale visual understanding tasks, and demonstrate the value of these models for activity recognition, image captioning, and video description. In contrast to previous models which assume a fixed visual representation or perform simple temporal averaging for sequential processing, recurrent convolutional models are “doubly deep” in that they learn compositional representations in space and time. Learning long-term dependencies is possible when nonlinearities are incorporated into the network state updates. Differentiable recurrent models are appealing in that they can directly map variable-length inputs (e.g., videos) to variable-length outputs (e.g., natural language text) and can model complex temporal dynamics; yet they can be optimized with backpropagation. Our recurrent sequence models are directly connected to modern visual convolutional network models and can be jointly trained to learn temporal dynamics and convolutional perceptual representations. Our results show that such models have distinct advantages over state-of-the-art models for recognition or generation which are separately defined or optimized.",
"title": ""
},
{
"docid": "c7059c650323a08ac7453ad4185e6c4f",
"text": "Transfer learning is aimed to make use of valuable knowledge in a source domain to help model performance in a target domain. It is particularly important to neural networks, which are very likely to be overfitting. In some fields like image processing, many studies have shown the effectiveness of neural network-based transfer learning. For neural NLP, however, existing studies have only casually applied transfer learning, and conclusions are inconsistent. In this paper, we conduct systematic case studies and provide an illuminating picture on the transferability of neural networks in NLP.1",
"title": ""
},
{
"docid": "fb53b5d48152dd0d71d1816a843628f6",
"text": "Online banking and e-commerce have been experiencing rapid growth over the past few years and show tremendous promise of growth even in the future. This has made it easier for fraudsters to indulge in new and abstruse ways of committing credit card fraud over the Internet. This paper focuses on real-time fraud detection and presents a new and innovative approach in understanding spending patterns to decipher potential fraud cases. It makes use of Self Organization Map to decipher, filter and analyze customer behavior for detection of fraud.",
"title": ""
},
{
"docid": "e6c126454c7d7e99524ff55887d9b15d",
"text": "Dense 3D reconstruction of real world objects containing textureless, reflective and specular parts is a challenging task. Using general smoothness priors such as surface area regularization can lead to defects in the form of disconnected parts or unwanted indentations. We argue that this problem can be solved by exploiting the object class specific local surface orientations, e.g. a car is always close to horizontal in the roof area. Therefore, we formulate an object class specific shape prior in the form of spatially varying anisotropic smoothness terms. The parameters of the shape prior are extracted from training data. We detail how our shape prior formulation directly fits into recently proposed volumetric multi-label reconstruction approaches. This allows a segmentation between the object and its supporting ground. In our experimental evaluation we show reconstructions using our trained shape prior on several challenging datasets.",
"title": ""
},
{
"docid": "87993df44973bd83724baace13ea1aa7",
"text": "OBJECTIVE\nThe objective of this research was to determine the relative impairment associated with conversing on a cellular telephone while driving.\n\n\nBACKGROUND\nEpidemiological evidence suggests that the relative risk of being in a traffic accident while using a cell phone is similar to the hazard associated with driving with a blood alcohol level at the legal limit. The purpose of this research was to provide a direct comparison of the driving performance of a cell phone driver and a drunk driver in a controlled laboratory setting.\n\n\nMETHOD\nWe used a high-fidelity driving simulator to compare the performance of cell phone drivers with drivers who were intoxicated from ethanol (i.e., blood alcohol concentration at 0.08% weight/volume).\n\n\nRESULTS\nWhen drivers were conversing on either a handheld or hands-free cell phone, their braking reactions were delayed and they were involved in more traffic accidents than when they were not conversing on a cell phone. By contrast, when drivers were intoxicated from ethanol they exhibited a more aggressive driving style, following closer to the vehicle immediately in front of them and applying more force while braking.\n\n\nCONCLUSION\nWhen driving conditions and time on task were controlled for, the impairments associated with using a cell phone while driving can be as profound as those associated with driving while drunk.\n\n\nAPPLICATION\nThis research may help to provide guidance for regulation addressing driver distraction caused by cell phone conversations.",
"title": ""
},
{
"docid": "af08fa19de97eed61afd28893692e7ec",
"text": "OpenACC is a new accelerator programming interface that provides a set of OpenMP-like loop directives for the programming of accelerators in an implicit and portable way. It allows the programmer to express the offloading of data and computations to accelerators, such that the porting process for legacy CPU-based applications can be significantly simplified. This paper focuses on the performance aspects of OpenACC using two micro benchmarks and one real-world computational fluid dynamics application. Both evaluations show that in general OpenACC performance is approximately 50\\% lower than CUDA. However, for some applications it can reach up to 98\\% with careful manual optimizations. The results also indicate several limitations of the OpenACC specification that hamper full use of the GPU hardware resources, resulting in a significant performance gap when compared to a fully tuned CUDA code. The lack of a programming interface for the shared memory in particular results in as much as three times lower performance.",
"title": ""
},
{
"docid": "ced0328f339248158e8414c3315330c5",
"text": "Novel inline coplanar-waveguide (CPW) bandpass filters composed of quarter-wavelength stepped-impedance resonators are proposed, using loaded air-bridge enhanced capacitors and broadside-coupled microstrip-to-CPW transition structures for both wideband spurious suppression and size miniaturization. First, by suitably designing the loaded capacitor implemented by enhancing the air bridges printed over the CPW structure and the resonator parameters, the lower order spurious passbands of the proposed filter may effectively be suppressed. Next, by adopting the broadside-coupled microstrip-to-CPW transitions as the fed structures to provide required input and output coupling capacitances and high attenuation level in the upper stopband, the filter with suppressed higher order spurious responses may be achieved. In this study, two second- and fourth-order inline bandpass filters with wide rejection band are implemented and thoughtfully examined. Specifically, the proposed second-order filter has its stopband extended up to 13.3f 0, where f0 stands for the passband center frequency, and the fourth-order filter even possesses better stopband up to 19.04f0 with a satisfactory rejection greater than 30 dB",
"title": ""
},
{
"docid": "f85b08a0e3f38c1471b3c7f05e8a17ba",
"text": "In an end-to-end dialog system, the aim of dialog state tracking is to accurately estimate a compact representation of the current dialog status from a sequence of noisy observations produced by the speech recognition and the natural language understanding modules. A state tracking module is primarily meant to act as support for a dialog policy but it can also be used as support for dialog corpus summarization and other kinds of information extraction from transcription of dialogs. From a probabilistic view, this is achieved by maintaining a posterior distribution over hidden dialog states composed, in the simplest case, of a set of context dependent variables. Once a dialog policy is defined, deterministic or learnt, it is in charge of selecting an optimal dialog act given the estimated dialog state and a defined reward function. This paper introduces a novel method of dialog state tracking based on the general paradigm of machine reading and proposes to solve it using a memory-enhanced neural network architecture. We evaluate the proposed approach on the second Dialog State Tracking Challenge (DSTC-2) dataset that has been converted for the occasion in order to fit the relaxed assumption of a machine reading formulation where the true state is only provided at the very end of each dialog instead of providing the state updates at the utterance level. We show that the proposed tracker gives encouraging results. Finally, we propose to extend the DSTC-2 dataset with specific reasoning capabilities requirement like counting, list maintenance, yes-no question answering and indefinite knowledge management.",
"title": ""
},
{
"docid": "ed39af901c58a8289229550084bc9508",
"text": "Digital elevation maps are simple yet powerful representations of complex 3-D environments. These maps can be built and updated using various sensors and sensorial data processing algorithms. This paper describes a novel approach for modeling the dynamic 3-D driving environment, the particle-based dynamic elevation map, each cell in this map having, in addition to height, a probability distribution of speed in order to correctly describe moving obstacles. The dynamic elevation map is represented by a population of particles, each particle having a position, a height, and a speed. Particles move from one cell to another based on their speed vectors, and they are created, multiplied, or destroyed using an importance resampling mechanism. The importance resampling mechanism is driven by the measurement data provided by a stereovision sensor. The proposed model is highly descriptive for the driving environment, as it can easily provide an estimation of the height, speed, and occupancy of each cell in the grid. The system was proven robust and accurate in real driving scenarios, by comparison with ground truth data.",
"title": ""
},
{
"docid": "678ef706d4cb1c35f6b3d82bf25a4aa7",
"text": "This article is an extremely rapid survey of the modern theory of partial differential equations (PDEs). Sources of PDEs are legion: mathematical physics, geometry, probability theory, continuum mechanics, optimization theory, etc. Indeed, most of the fundamental laws of the physical sciences are partial differential equations and most papers published in applied math concern PDEs. The following discussion is consequently very broad, but also very shallow, and will certainly be inadequate for any given PDE the reader may care about. The goal is rather to highlight some of the many key insights and unifying principles across the entire subject.",
"title": ""
}
] |
scidocsrr
|
1eb4b7ddd606e6b77edf769e8476d9fd
|
Basic Properties of the Blockchain: (Invited Talk)
|
[
{
"docid": "c8b9bba65b8561b48abe68a72c02f054",
"text": "The Bitcoin backbone protocol [Eurocrypt 2015] extracts basic properties of Bitcoin's underlying blockchain data structure, such as common pre x and chain quality, and shows how fundamental applications including consensus and a robust public transaction ledger can be built on top of them. The underlying assumptions are proofs of work (POWs), adversarial hashing power strictly less than 1/2 and no adversarial pre-computation or, alternatively, the existence of an unpredictable genesis block. In this paper we show how to remove the latter assumption, presenting a bootstrapped Bitcoin-like blockchain protocol relying on POWs that builds genesis blocks from scratch in the presence of adversarial pre-computation. The only known previous result in the same setting (unauthenticated parties, no trusted setup) [Crypto 2015] is indirect in the sense of creating a PKI rst and then employing conventional PKI-based authenticated communication. With our construction we establish that consensus can be solved directly by a blockchain protocol without trusted setup assuming an honest majority (in terms of computational power). We also formalize miner unlinkability, a privacy property for blockchain protocols, and demonstrate that our protocol retains the same level of miner unlinkability as Bitcoin itself.",
"title": ""
}
] |
[
{
"docid": "9747be055df9acedfdfe817eb7e1e06e",
"text": "Text summarization solves the problem of extracting important information from huge amount of text data. There are various methods in the literature that aim to find out well-formed summaries. One of the most commonly used methods is the Latent Semantic Analysis (LSA). In this paper, different LSA based summarization algorithms are explained and two new LSA based summarization algorithms are proposed. The algorithms are evaluated on Turkish documents, and their performances are compared using their ROUGE-L scores. One of our algorithms produces the best scores.",
"title": ""
},
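The abstract above applies latent semantic analysis to sentence selection. As an illustration only (not the authors' algorithms, whose exact scoring schemes are described in the paper), the sketch below shows one common LSA selection rule: pick the sentence that loads most strongly on each of the top singular vectors of the term-sentence matrix. The toy sentences, the value of `k`, and the use of scikit-learn's `TfidfVectorizer` are assumptions of this example.

```python
# Illustrative sketch of LSA-based extractive summarization (Gong & Liu-style selection),
# not the paper's exact algorithms.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def lsa_summary(sentences, k=2):
    """Pick one sentence per latent topic: the sentence loading highest on each singular vector."""
    tfidf = TfidfVectorizer().fit_transform(sentences)       # sentences x terms
    # SVD of the term-sentence matrix: rows of vt are latent topics, columns are sentences.
    _, _, vt = np.linalg.svd(tfidf.T.toarray(), full_matrices=False)
    chosen = []
    for topic in vt[:k]:                                      # top-k singular vectors
        idx = int(np.argmax(np.abs(topic)))                   # strongest sentence for this topic
        if idx not in chosen:
            chosen.append(idx)
    return [sentences[i] for i in sorted(chosen)]

sentences = [
    "Latent semantic analysis maps terms and sentences into a low-rank space.",
    "The singular value decomposition exposes the main topics of a document.",
    "Football scores were also reported yesterday.",
]
print(lsa_summary(sentences, k=2))
```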
{
"docid": "4f2e9b83a1a07cfdeb9b8330a5090152",
"text": "Periodic estimation of the incidence of global unintended pregnancy can help demonstrate the need for and impact of family planning programs. We draw upon multiple sources of data to estimate pregnancy incidence by intention status and outcome at worldwide, regional, and subregional levels in 2012 and to assess recent trends using previously published estimates for 2008 and 1995. We find that 213 million pregnancies occurred in 2012, up slightly from 211 million in 2008. The global pregnancy rate decreased only slightly from 2008 to 2012, after declining substantially between 1995 and 2008. Eighty-five million pregnancies, representing 40 percent of all pregnancies, were unintended in 2012. Of these, 50 percent ended in abortion, 13 percent ended in miscarriage, and 38 percent resulted in an unplanned birth. The unintended pregnancy rate continued to decline in Africa and in the Latin America and Caribbean region. If the aims of the London Summit on Family Planning are carried out, the incidence of unwanted and mistimed pregnancies should decline in the coming years.",
"title": ""
},
{
"docid": "0cb237a05e30a4bc419dc374f3a7b55a",
"text": "Question-and-answer (Q&A) websites, such as Yahoo! Answers, Stack Overflow and Quora, have become a popular and powerful platform for Web users to share knowledge on a wide range of subjects. This has led to a rapidly growing volume of information and the consequent challenge of readily identifying high quality objects (questions, answers and users) in Q&A sites. Exploring the interdependent relationships among different types of objects can help find high quality objects in Q&A sites more accurately. In this paper, we specifically focus on the ranking problem of co-ranking questions, answers and users in a Q&A website. By studying the tightly connected relationships between Q&A objects, we can gain useful insights toward solving the co-ranking problem. However, co-ranking multiple objects in Q&A sites is a challenging task: a) With the large volumes of data in Q&A sites, it is important to design a model that can scale well; b) The large-scale Q&A data makes extracting supervised information very expensive. In order to address these issues, we propose an unsupervised Network-based Co-Ranking framework (NCR) to rank multiple objects in Q&A sites. Empirical studies on real-world Yahoo! Answers datasets demonstrate the effectiveness and the efficiency of the proposed NCR method.",
"title": ""
},
{
"docid": "f4cb0eb6d39c57779cf9aa7b13abef14",
"text": "Algorithms that learn to generate data whose distributions match that of the training data, such as generative adversarial networks (GANs), have been a focus of much recent work in deep unsupervised learning. Unfortunately, GAN models have drawbacks, such as instable training due to the minmax optimization formulation and the issue of zero gradients. To address these problems, we explore and develop a new family of nonparametric objective functions and corresponding training algorithms to train a DNN generator that learn the probability distribution of the training data. Preliminary results presented in the paper demonstrate that the proposed approach converges faster and the trained models provide very good quality results even with a small number of iterations. Special cases of our formulation yield new algorithms for the Wasserstein and the MMD metrics. We also develop a new algorithm based on the Prokhorov metric between distributions, which we believe can provide promising results on certain kinds of data. We conjecture that the nonparametric approach for training DNNs can provide a viable alternative to the popular GAN formulations.",
"title": ""
},
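Since the abstract notes that special cases of the proposed nonparametric objectives recover the MMD metric, a minimal numpy sketch of the squared maximum mean discrepancy with an RBF kernel may help fix ideas. This is the standard biased estimator, not the authors' training algorithm; the bandwidth `sigma` and the synthetic samples are illustrative choices.

```python
# Illustrative sketch of squared Maximum Mean Discrepancy (MMD^2) with an RBF kernel,
# one of the distribution distances mentioned in the abstract. Not the authors' code.
import numpy as np

def rbf_kernel(x, y, sigma=1.0):
    # Pairwise squared Euclidean distances between rows of x and rows of y.
    d2 = np.sum(x**2, 1)[:, None] + np.sum(y**2, 1)[None, :] - 2.0 * x @ y.T
    return np.exp(-d2 / (2.0 * sigma**2))

def mmd2(x, y, sigma=1.0):
    """Biased estimator of MMD^2 between samples x (e.g. generated) and y (e.g. real)."""
    return (rbf_kernel(x, x, sigma).mean()
            - 2.0 * rbf_kernel(x, y, sigma).mean()
            + rbf_kernel(y, y, sigma).mean())

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(256, 2))
fake = rng.normal(0.5, 1.0, size=(256, 2))
print("MMD^2 real vs fake:", mmd2(fake, real))
print("MMD^2 real vs real:", mmd2(real[:128], real[128:]))   # close to zero
```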
{
"docid": "498bef8984fc69a7bbce978a9c4b9edb",
"text": "Recurrent neural networks (RNNs) have achieved the state-of-the-art performance on various sequence learning tasks due to their powerful sequence modeling capability. However, RNNs usually require a large number of parameters and high computational complexity. Hence, it is quite challenging to implement complex RNNs on embedded devices with stringent memory and latency requirement. In this paper, we first present a novel hybrid compression method for a widely used RNN variant, long–short term memory (LSTM), to tackle these implementation challenges. By properly using circulant matrices, forward nonlinear function approximation, and efficient quantization schemes with a retrain-based training strategy, the proposed compression method can reduce more than 95% of memory usage with negligible accuracy loss when verified under language modeling and speech recognition tasks. An efficient scalable parallel hardware architecture is then proposed for the compressed LSTM. With an innovative chessboard division method for matrix–vector multiplications, the parallelism of the proposed hardware architecture can be freely chosen under certain latency requirement. Specifically, for the circulant matrix–vector multiplications employed in the compressed LSTM, the circulant matrices are judiciously reorganized to fit in with the chessboard division and minimize the number of memory accesses required for the matrix multiplications. The proposed architecture is modeled using register transfer language (RTL) and synthesized under the TSMC 90-nm CMOS technology. With 518.5-kB on-chip memory, we are able to process a $512 \\times 512$ compressed LSTM in 1.71 $\\mu {{{\\text{s}}}}$ , corresponding to 2.46 TOPS on the uncompressed one, at a cost of 30.77-mm2 chip area. The implementation results demonstrate that the proposed design can achieve significantly high flexibility and area efficiency, which satisfies many real-time applications on embedded devices. It is worth mentioning that the memory-efficient approach of accelerating LSTM developed in this paper is also applicable to other RNN variants.",
"title": ""
},
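The compression scheme above relies on circulant weight matrices. A short sketch of the underlying linear algebra, independent of the paper's hardware architecture: an n x n circulant matrix is fully described by a single length-n vector, and its matrix-vector product reduces to a circular convolution that FFTs compute in O(n log n).

```python
# Illustrative sketch of why circulant weights save memory and compute: the matrix is defined
# by one length-n vector and multiplied via FFTs. Generic linear algebra, not the paper's design.
import numpy as np

def circulant_matvec(c, x):
    """Multiply the circulant matrix with first column c by vector x using the FFT."""
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))

n = 8
rng = np.random.default_rng(1)
c = rng.standard_normal(n)            # the only stored parameters: n values instead of n*n
x = rng.standard_normal(n)

# Reference check: build the dense circulant matrix explicitly and compare.
C = np.column_stack([np.roll(c, k) for k in range(n)])
assert np.allclose(C @ x, circulant_matvec(c, x))
print("FFT-based circulant product matches the dense product.")
```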
{
"docid": "59405c31da09ea58ef43a03d3fc55cf4",
"text": "The Quality of Service (QoS) management is one of the urgent problems in networking which doesn't have an acceptable solution yet. In the paper the approach to this problem based on multipath routing protocol in SDN is considered. The proposed approach is compared with other QoS management methods. A structural and operation schemes for its practical implementation is proposed.",
"title": ""
},
{
"docid": "466bb7b70fc1c5973fbea3ade7ebd845",
"text": "High-speed and heavy-load stacking robot technology is a common key technique in nonferrous metallurgy areas. Specific layer stacking robot of aluminum ingot continuous casting production line, which has four-DOF, is designed in this paper. The kinematics model is built and studied in detail by D-H method. The transformation matrix method is utilized to solve the kinematics equation of robot. Mutual motion relations between each joint variables and the executive device of robot is got. The kinematics simulation of the robot is carried out via the ADAMS-software. The results of simulation verify the theoretical analysis and lay the foundation for following static and dynamic characteristics analysis of the robot.",
"title": ""
},
{
"docid": "2da6c199c7561855fde9be6f4798a4af",
"text": "Ontogenetic development of the digestive system in golden pompano (Trachinotus ovatus, Linnaeus 1758) larvae was histologically and enzymatically studied from hatch to 32 day post-hatch (DPH). The development of digestive system in golden pompano can be divided into three phases: phase I starting from hatching and ending at the onset of exogenous feeding; phase II starting from first feeding (3 DPH) and finishing at the formation of gastric glands; and phase III starting from the appearance of gastric glands on 15 DPH and continuing onward. The specific activities of trypsin, amylase, and lipase increased sharply from the onset of first feeding to 5–7 DPH, followed by irregular fluctuations. Toward the end of this study, the specific activities of trypsin and amylase showed a declining trend, while the lipase activity remained at similar levels as it was at 5 DPH. The specific activity of pepsin was first detected on 15 DPH and increased with fish age. The dynamics of digestive enzymes corresponded to the structural development of the digestive system. The enzyme activities tend to be stable after the formation of the gastric glands in fish stomach on 15 DPH. The composition of digestive enzymes in larval pompano indicates that fish are able to digest protein, lipid and carbohydrate at early developmental stages. Weaning of larval pompano is recommended from 15 DPH onward. Results of the present study lead to a better understanding of the ontogeny of golden pompano during the larval stage and provide a guide to feeding and weaning of this economically important fish in hatcheries.",
"title": ""
},
{
"docid": "5089dff6e717807450d7f185158cc542",
"text": "Previous work has demonstrated that in the context of Massively Open Online Courses (MOOCs), doing activities is more predictive of learning than reading text or watching videos (Koedinger et al., 2015). This paper breaks down the general behaviors of reading and watching into finer behaviors, and considers how these finer behaviors may provide evidence for active learning as well. By characterizing learner strategies through patterns in their data, we can evaluate which strategies (or measures of them) are predictive of learning outcomes. We investigated strategies such as page re-reading (active reading) and video watching in response to an incorrect attempt (active watching) and found that they add predictive power beyond mere counts of the amount of doing, reading, and watching.",
"title": ""
},
{
"docid": "873e49598b513d78719ba71fe735c338",
"text": "An Italian patient with a pure dysgraphia who incorrectly spelled words and nonwords is described. The spelling errors made by the patient were not affected by lexical factors (e.g., frequency, form class) and were qualitatively the same for words and nonwords. The pattern of writing performance is discussed in relation to current models of writing and, specifically, in relation to the role of the Output Grapheme Buffer and Phoneme-Grapheme Conversion in writing.",
"title": ""
},
{
"docid": "dd289b9e7b8e1f40863d4e2097f5f29a",
"text": "Successful software development is becoming increasingly important as software basedsystems are at the core of a company`s new products. However, recent surveys show that most projects fail to meet their targets highlighting the inadequacies of traditional project management techniques to cope with the unique characteristics of this field. Despite the major breakthroughs in the discipline of software engineering, improvement of management methodologies has not occurred, and it is now recognised that the major opportunities for better results are to be found in this area. Poor strategic management and related human factors have been cited as a major cause for failures in several industries. Traditional project management techniques have proven inadequate to incorporate explicitly these higher-level and softer issues. System Dynamics emerged as a methodology for modelling the behaviour of complex socio-economic systems. There has been a number of applications to project management, and in particular in the field of software development. This new approach provides the opportunity for an alternative view in which the major project influences are considered and quantified explicitly. Grounded on a holistic perspective it avoids consideration of the detail required by the traditional tools and ensures that the key aspects of the general project behaviour are the main priority. However, if the approach is to play a core role in future of software project management it needs to embedded within the traditional decision-making framework. The authors developed a conceptual integrated model, the PMIM, which is now being tested and improved within a large on-going software project. Such a framework should specify the roles of system dynamics models, how they are to be used within the traditional management process, how they exchange information with the traditional models, and a general method to support model development. This paper identifies the distinctive contribution of System Dynamics to software management, proposes a conceptual model for an integrated management framework, and discusses its underlying principles. Research News Join our email list to receive details of when new research papers are published and the quarterly departmental newsletter. To subscribe send a blank email to managementsciencesubscribe@egroups.com. Details of our research papers can be found at www.mansci.strath.ac.uk/papers.html. Management Science, University of Strathclyde, Graham Hills Building, 40 George Street, Glasgow, Scotland. Email: mgtsci@mansci.strath.ac.uk Tel: +44 (0)141 548 3613 Fax: +44 (0)141 552 6686",
"title": ""
},
{
"docid": "3f292307824ed0b4d7fd59824ff9dd2b",
"text": "The aim of this qualitative study was to obtain a better understanding of the developmental trajectories of persistence and desistence of childhood gender dysphoria and the psychosexual outcome of gender dysphoric children. Twenty five adolescents (M age 15.88, range 14-18), diagnosed with a Gender Identity Disorder (DSM-IV or DSM-IV-TR) in childhood, participated in this study. Data were collected by means of biographical interviews. Adolescents with persisting gender dysphoria (persisters) and those in whom the gender dysphoria remitted (desisters) indicated that they considered the period between 10 and 13 years of age to be crucial. They reported that in this period they became increasingly aware of the persistence or desistence of their childhood gender dysphoria. Both persisters and desisters stated that the changes in their social environment, the anticipated and actual feminization or masculinization of their bodies, and the first experiences of falling in love and sexual attraction had influenced their gender related interests and behaviour, feelings of gender discomfort and gender identification. Although, both persisters and desisters reported a desire to be the other gender during childhood years, the underlying motives of their desire seemed to be different.",
"title": ""
},
{
"docid": "30c67c52cb258f86998263b378e0c66d",
"text": "This paper presents a robust and efficient method for license plate detection with the purpose of accurately localizing vehicle license plates from complex scenes in real time. A simple yet effective image downscaling method is first proposed to substantially accelerate license plate localization without sacrificing detection performance compared with that achieved using the original image. Furthermore, a novel line density filter approach is proposed to extract candidate regions, thereby significantly reducing the area to be analyzed for license plate localization. Moreover, a cascaded license plate classifier based on linear support vector machines using color saliency features is introduced to identify the true license plate from among the candidate regions. For performance evaluation, a data set consisting of 3977 images captured from diverse scenes under different conditions is also presented. Extensive experiments on the widely used Caltech license plate data set and our newly introduced data set demonstrate that the proposed approach substantially outperforms state-of-the-art methods in terms of both detection accuracy and run-time efficiency, increasing the detection ratio from 91.09% to 96.62% while decreasing the run time from 672 to 42 ms for processing an image with a resolution of $1082\\times 728$ . The executable code and our collected data set are publicly available.",
"title": ""
},
{
"docid": "1b5fc0a7b39bedcac9bdc52584fb8a22",
"text": "Neem (Azadirachta indica) is a medicinal plant of containing diverse chemical active substances of several biological properties. So, the aim of the current investigation was to assess the effects of water leaf extract of neem plant on the survival and healthy status of Nile tilapia (Oreochromis niloticus), African cat fish (Clarias gariepinus) and zooplankton community. The laboratory determinations of lethal concentrations (LC 100 and LC50) through a static bioassay test were performed. The 24 h LC100 of neem leaf extract was estimated as 4 and 11 g/l, for juvenile's O. niloticus and C. gariepinus, respectively, while, the 96-h LC50 was 1.8 and 4 g/l, respectively. On the other hand, the 24 h LC100 for cladocera and copepoda were 0.25 and 0.45 g/l, respectively, while, the 96-h LC50 was 0.1 and 0.2 g/l, respectively. At the highest test concentrations, adverse effects were obvious with significant reductions in several cladoceran and copepod species. Some alterations in glucose levels, total protein, albumin, globulin as well as AST and ALT in plasma of treated O. niloticus and C. gariepinus with /2 and /10 LC50 of neem leaf water extract compared with non-treated one after 2 and 7 days of exposure were recorded and discussed. It could be concluded that the application of neem leaf extract can be used to control unwanted organisms in ponds as environment friendly material instead of deleterious pesticides. Also, extensive investigations should be established for the suitable methods of application in aquatic animal production facilities to be fully explored in future.",
"title": ""
},
{
"docid": "29f1c91fccfbeaa7ec352bdbe1c300c6",
"text": "Absorption in the stellar Lyman-alpha (Lyalpha) line observed during the transit of the extrasolar planet HD 209458b in front of its host star reveals high-velocity atomic hydrogen at great distances from the planet. This has been interpreted as hydrogen atoms escaping from the planet's exosphere, possibly undergoing hydrodynamic blow-off, and being accelerated by stellar radiation pressure. Energetic neutral atoms around Solar System planets have been observed to form from charge exchange between solar wind protons and neutral hydrogen from the planetary exospheres, however, and this process also should occur around extrasolar planets. Here we show that the measured transit-associated Lyalpha absorption can be explained by the interaction between the exosphere of HD 209458b and the stellar wind, and that radiation pressure alone cannot explain the observations. As the stellar wind protons are the source of the observed energetic neutral atoms, this provides a way of probing stellar wind conditions, and our model suggests a slow and hot stellar wind near HD 209458b at the time of the observations.",
"title": ""
},
{
"docid": "d2c0e71db2957621eca42bdc221ffb8f",
"text": "Financial time sequence analysis has been a popular research topic in the field of finance, data science and machine learning. It is a highly challenging due to the extreme complexity within the sequences. Mostly existing models are failed to capture its intrinsic information, factor and tendency. To improve the previous approaches, in this paper, we propose a Hidden Markov Model (HMMs) based approach to analyze the financial time sequence. The fluctuation of financial time sequence was predicted through introducing a dual-state HMMs. Dual-state HMMs models the sequence and produces the features which will be delivered to SVMs for prediction. Note that we cast a financial time sequence prediction problem to a classification problem. To evaluate the proposed approach, we use Shanghai Composite Index as the dataset for empirically experiments. The dataset was collected from 550 consecutive trading days, and is randomly split to the training set and test set. The extensively experimental results show that: when analyzing financial time sequence, the mean-square error calculated with HMMs was obviously smaller error than the compared GARCH approach. Therefore, when using HMM to predict the fluctuation of financial time sequence, it achieves higher accuracy and exhibits several attractive advantageous over GARCH approach.",
"title": ""
},
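To make the described pipeline concrete, here is a minimal sketch of a dual-state Gaussian HMM whose posterior state probabilities feed an SVM classifier, mirroring the abstract's cast of prediction as classification. The synthetic price series, the feature choice, and the hyperparameters are assumptions of this example rather than the paper's setup; hmmlearn and scikit-learn are assumed to be installed.

```python
# Illustrative sketch: two-state ("dual-state") Gaussian HMM features fed to an SVM classifier.
import numpy as np
from hmmlearn.hmm import GaussianHMM
from sklearn.svm import SVC

rng = np.random.default_rng(0)
prices = np.cumsum(rng.normal(0.0, 1.0, 600)) + 100.0   # synthetic price series
returns = np.diff(prices).reshape(-1, 1)

hmm = GaussianHMM(n_components=2, covariance_type="full", n_iter=100, random_state=0)
hmm.fit(returns)
state_probs = hmm.predict_proba(returns)                 # posterior probability of each hidden state

X = state_probs[:-1]                                     # HMM-derived features at time t
y = (returns[1:, 0] > 0).astype(int)                     # label: does the series rise at t+1?
split = int(0.8 * len(X))

clf = SVC(kernel="rbf").fit(X[:split], y[:split])
print("held-out accuracy:", clf.score(X[split:], y[split:]))
```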
{
"docid": "6215c6ca6826001291314405ea936dda",
"text": "This paper describes a text mining tool that performs two tasks, namely document clustering and text summarization. These tasks have, of course, their corresponding counterpart in “conventional” data mining. However, the textual, unstructured nature of documents makes these two text mining tasks considerably more difficult than their data mining counterparts. In our system document clustering is performed by using the Autoclass data mining algorithm. Our text summarization algorithm is based on computing the value of a TF-ISF (term frequency – inverse sentence frequency) measure for each word, which is an adaptation of the conventional TF-IDF (term frequency – inverse document frequency) measure of information retrieval. Sentences with high values of TF-ISF are selected to produce a summary of the source text. The system has been evaluated on real-world documents, and the results are satisfactory.",
"title": ""
},
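A small sketch of the TF-ISF scoring idea described above may clarify how the adaptation of TF-IDF works when "documents" are replaced by sentences. The tokenizer, the averaging of word weights per sentence, and the toy document are deliberately simple assumptions; the original system's exact weighting and selection thresholds may differ.

```python
# Illustrative sketch of TF-ISF (term frequency x inverse sentence frequency) sentence scoring.
import math
import re
from collections import Counter

def tf_isf_summary(sentences, n_keep=2):
    tokenised = [re.findall(r"[a-z]+", s.lower()) for s in sentences]
    n_sent = len(sentences)
    # Inverse sentence frequency: words occurring in fewer sentences get higher weight.
    sent_freq = Counter(w for toks in tokenised for w in set(toks))
    isf = {w: math.log(n_sent / df) for w, df in sent_freq.items()}

    scores = []
    for toks in tokenised:
        tf = Counter(toks)
        weights = [tf[w] * isf[w] for w in toks] or [0.0]
        scores.append(sum(weights) / len(weights))        # average TF-ISF of the sentence

    top = sorted(range(n_sent), key=lambda i: scores[i], reverse=True)[:n_keep]
    return [sentences[i] for i in sorted(top)]

doc = [
    "Text summarization extracts the most important sentences from a document.",
    "TF-ISF adapts the classical TF-IDF weighting by replacing documents with sentences.",
    "The weather was pleasant and nothing else happened.",
]
print(tf_isf_summary(doc, n_keep=2))
```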
{
"docid": "f22726c7c8aef5fec8f81c76b5a3cb54",
"text": "In this paper, current silicon carbide (SiC) MOSFETs from two different manufacturers are evaluated including static and dynamic characteristics for different gate resistances, different load currents and at various temperatures. These power semiconductors are operated continuously at a high switching frequency of 1MHz comparing a hard- and a soft-switching converter. A calorimetric power loss measurement method is realized in order to achieve a good measurement accuracy, and the results are compared to the electrical measurements.",
"title": ""
},
{
"docid": "ed0444685c9a629c7d1fda7c4912fd55",
"text": "Citrus fruits have potential health-promoting properties and their essential oils have long been used in several applications. Due to biological effects described to some citrus species in this study our objectives were to analyze and compare the phytochemical composition and evaluate the anti-inflammatory effect of essential oils (EO) obtained from four different Citrus species. Mice were treated with EO obtained from C. limon, C. latifolia, C. aurantifolia or C. limonia (10 to 100 mg/kg, p.o.) and their anti-inflammatory effects were evaluated in chemical induced inflammation (formalin-induced licking response) and carrageenan-induced inflammation in the subcutaneous air pouch model. A possible antinociceptive effect was evaluated in the hot plate model. Phytochemical analyses indicated the presence of geranial, limonene, γ-terpinene and others. EOs from C. limon, C. aurantifolia and C. limonia exhibited anti-inflammatory effects by reducing cell migration, cytokine production and protein extravasation induced by carrageenan. These effects were also obtained with similar amounts of pure limonene. It was also observed that C. aurantifolia induced myelotoxicity in mice. Anti-inflammatory effect of C. limon and C. limonia is probably due to their large quantities of limonene, while the myelotoxicity observed with C. aurantifolia is most likely due to the high concentration of citral. Our results indicate that these EOs from C. limon, C. aurantifolia and C. limonia have a significant anti-inflammatory effect; however, care should be taken with C. aurantifolia.",
"title": ""
},
{
"docid": "c549fd965f95eb3a22bbc5f574b32b9e",
"text": "Branchial cleft cysts are benign lesions caused by anomalous development of the brachial cleft. This report describes a 20-year-old girl with swelling on the right lateral aspect of the neck, which expanded slowly but progressively. The clinical suspicion was that of a branchial cleft cyst. Sonography revealed a homogeneously hypo- to anechoic mass with well-defined margins and no intralesional septa. Color Doppler reviewed no internal vascularization. The ultrasound examination confirmed the clinical diagnosis of a second branchial cleft cyst, demonstrating the cystic nature of the mass and excluding the presence of complications. For superficial lesions like these, ultrasound is the first-level imaging study of choice because it is non-invasive, rapid, low-cost, and does not involve exposure to ionizing radiation.",
"title": ""
}
] |
scidocsrr
|
f44166c3b9ebb676beb5e700a0737b2b
|
Enhancing learning and retarding forgetting: choices and consequences.
|
[
{
"docid": "64a23307a03e7b542378f4b1fa0b6aa6",
"text": "Once material has been learned to a criterion of one perfect trial, further study within the same session constitutes overlearning. Although overlearning is a popular learning strategy, its effect on long-term retention is unclear. In two experiments presented here, 218 college students learned geography facts (Experiment 1) or word definitions (Experiment 2). The degree of learning was manipulated and measured via multiple test-with-feedback trials, and participants returned for a final cued recall test between one and nine weeks later. The overlearners recalled far more than the low learners at the one-week test, but this difference decreased dramatically thereafter. These data suggest that overlearning (and its concomitant demand for additional study time) is an inefficient strategy for learning material for meaningfully long periods of time.",
"title": ""
}
] |
[
{
"docid": "098b9b80d27fddd6407ada74a8fd4590",
"text": "We have developed a 1.55-μm 40 Gbps electro-absorption modulator laser (EML)-based transmitter optical subassembly (TOSA) using a novel flexible printed circuit (FPC). The return loss at the junctions of the printed circuit board and the FPC, and of the FPC and the ceramic feedthrough connection was held better than 20 dB at up to 40 GHz by a newly developed three-layer FPC. The TOSA was fabricated and demonstrated a mask margin of >16% and a path penalty of <;0.63 dB for a 43 Gbps signal after 2.4-km SMF transmission over the entire case temperature range from -5° to 80 °C, demonstrating compliance with ITU-T G.693. These results are comparable to coaxial connector type EML modules. This TOSA is expected to be a strong candidate for 40 Gbps EML modules with excellent operating characteristics, economy, and a small footprint.",
"title": ""
},
{
"docid": "5ab1d4704e0f6c03fa96b6d530fcc6f8",
"text": "The deep convolutional neural networks have achieved significant improvements in accuracy and speed for single image super-resolution. However, as the depth of network grows, the information flow is weakened and the training becomes harder and harder. On the other hand, most of the models adopt a single-stream structure with which integrating complementary contextual information under different receptive fields is difficult. To improve information flow and to capture sufficient knowledge for reconstructing the high-frequency details, we propose a cascaded multi-scale cross network (CMSC) in which a sequence of subnetworks is cascaded to infer high resolution features in a coarse-to-fine manner. In each cascaded subnetwork, we stack multiple multi-scale cross (MSC) modules to fuse complementary multi-scale information in an efficient way as well as to improve information flow across the layers. Meanwhile, by introducing residual-features learning in each stage, the relative information between high-resolution and low-resolution features is fully utilized to further boost reconstruction performance. We train the proposed network with cascaded-supervision and then assemble the intermediate predictions of the cascade to achieve high quality image reconstruction. Extensive quantitative and qualitative evaluations on benchmark datasets illustrate the superiority of our proposed method over state-of-the-art superresolution methods.",
"title": ""
},
{
"docid": "aad6101a6f537aef1d4be7da8da94e66",
"text": "We discuss how to perform symbolic execution of large programs in a manner that is both compositional (hence more scalable) and demand-driven. Compositional symbolic execution means finding feasible interprocedural program paths by composing symbolic executions of feasible intraprocedural paths. By demand-driven, we mean that as few intraprocedural paths as possible are symbolically executed in order to form an interprocedural path leading to a specific target branch or statement of interest (like an assertion). A key originality of this work is that our demand-driven compositional interprocedural symbolic execution is performed entirely using first-order logic formulas solved with an off-the-shelf SMT (Satisfiability-Modulo-Theories) solver – no procedure in-lining or custom algorithm is required for the interprocedural part. This allows a uniform and elegant way of summarizing procedures at various levels of detail and of composing those using logic formulas. We have implemented a prototype of this novel symbolic execution technique as an extension of Pex, a general automatic testing framework for .NET applications. Preliminary experimental results are encouraging. For instance, our prototype was able to generate tests triggering assertion violations in programs with large numbers of program paths that were beyond the scope of non-compositional test generation.",
"title": ""
},
{
"docid": "506a6a98e87fb5a6dc7e5cbe9cf27262",
"text": "Image-to-image translation has recently received significant attention due to advances in deep learning. Most works focus on learning either a one-to-one mapping in an unsupervised way or a many-to-many mapping in a supervised way. However, a more practical setting is many-to-many mapping in an unsupervised way, which is harder due to the lack of supervision and the complex innerand cross-domain variations. To alleviate these issues, we propose the Exemplar Guided & Semantically Consistent Image-to-image Translation (EGSC-IT) network which conditions the translation process on an exemplar image in the target domain. We assume that an image comprises of a content component which is shared across domains, and a style component specific to each domain. Under the guidance of an exemplar from the target domain we apply Adaptive Instance Normalization to the shared content component, which allows us to transfer the style information of the target domain to the source domain. To avoid semantic inconsistencies during translation that naturally appear due to the large innerand cross-domain variations, we introduce the concept of feature masks that provide coarse semantic guidance without requiring the use of any semantic labels. Experimental results on various datasets show that EGSC-IT does not only translate the source image to diverse instances in the target domain, but also preserves the semantic consistency during the process. Source (GTA5) Target (BDD) Figure 1: Exemplar guided image translation examples of GTA5→ BDD. Best viewed in color.",
"title": ""
},
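The translation network above conditions the shared content features on an exemplar via Adaptive Instance Normalization. The following PyTorch sketch shows only the generic AdaIN operation (normalize the content statistics, then re-scale with the exemplar's statistics); the feature-mask guidance and the rest of the EGSC-IT architecture are not reproduced here, and the random feature maps are placeholders.

```python
# Illustrative PyTorch sketch of Adaptive Instance Normalization (AdaIN), not the full EGSC-IT model.
import torch

def adain(content, style, eps=1e-5):
    """content, style: feature maps of shape (N, C, H, W)."""
    c_mean = content.mean(dim=(2, 3), keepdim=True)
    c_std = content.std(dim=(2, 3), keepdim=True) + eps
    s_mean = style.mean(dim=(2, 3), keepdim=True)
    s_std = style.std(dim=(2, 3), keepdim=True) + eps
    # Normalise the content features, then impose the style (exemplar) statistics.
    return s_std * (content - c_mean) / c_std + s_mean

content = torch.randn(1, 64, 32, 32)
style = torch.randn(1, 64, 32, 32) * 2.0 + 1.0
out = adain(content, style)
print(out.mean().item(), out.std().item())   # roughly matches the style statistics
```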
{
"docid": "0d56b30aef52bfdf2cb6426a834126e5",
"text": "The wide adoption of social media has increased the competition among ideas for our finite attention. We employ a parsimonious agent-based model to study whether such a competition may affect the popularity of different memes, the diversity of information we are exposed to, and the fading of our collective interests for specific topics. Agents share messages on a social network but can only pay attention to a portion of the information they receive. In the emerging dynamics of information diffusion, a few memes go viral while most do not. The predictions of our model are consistent with empirical data from Twitter, a popular microblogging platform. Surprisingly, we can explain the massive heterogeneity in the popularity and persistence of memes as deriving from a combination of the competition for our limited attention and the structure of the social network, without the need to assume different intrinsic values among ideas.",
"title": ""
},
{
"docid": "9356fdd1d8487051869602813d84579c",
"text": "If organisations are seen as complex evolving systems, co-evolving within a social ‘ecosystem’, then our thinking about strategy and management changes. With the changed perspective comes a different way of acting and relating which could lead to a different way of working. In turn, the new types of relationship and approaches to work could well provide the conditions for the emergence of new organisational forms.",
"title": ""
},
{
"docid": "339ab236c1e19a09e008b3e48a2b23e6",
"text": "Trigonella foenum-graecum (fenugreek) seeds have been documented as a traditional plant treatment for diabetes. In the present study, the antidiabetic properties of a soluble dietary fibre (SDF) fraction of T. foenum-graecum were evaluated. Administration of SDF fraction (0 x 5 g/kg body weight) to normal, type 1 or type 2 diabetic rats significantly improved oral glucose tolerance. Total remaining unabsorbed sucrose in the gastrointestinal tract of non-diabetic and type 2 diabetic rats, following oral sucrose loading (2 x 5 g/kg body weight) was significantly increased by T. foenum-graecum (0 x 5 g/kg body weight). The SDF fraction suppressed the elevation of blood glucose after oral sucrose ingestion in both non-diabetic and type 2 diabetic rats. Intestinal disaccharidase activity and glucose absorption were decreased and gastrointestinal motility increased by the SDF fraction. Daily oral administration of SDF to type 2 diabetic rats for 28 d decreased serum glucose, increased liver glycogen content and enhanced total antioxidant status. Serum insulin and insulin secretion were not affected by the SDF fraction. Glucose transport in 3T3-L1 adipocytes and insulin action were increased by T. foenum-graecum. The present findings indicate that the SDF fraction of T. foenum-graecum seeds exerts antidiabetic effects mediated through inhibition of carbohydrate digestion and absorption, and enhancement of peripheral insulin action.",
"title": ""
},
{
"docid": "08e5e3dadfe5fa766b7941ba76d24372",
"text": "In the aging face, the lateral third of the brow ages first and ages most. Aesthetically, eyebrow shape is more significant than height and eyebrow shape is highly dependent on the level of the lateral brow complex. Surgical attempts to elevate the brow complex are usually successful medially, but often fail laterally. The \"modified lateral brow lift\" is a hybrid technique, incorporating features of an endoscopic brow lift (small hidden incisions, deep tissue fixation) and features of an open coronal brow lift (full thickness scalp excision). Sensory innervation of the scalp is preserved and secure fixation of the elevated lateral brow is achieved. Side effects and complications are minimal.",
"title": ""
},
{
"docid": "18d48a685e81430cc30847b1d56037cc",
"text": "Recent work in computational structural biology focuses on modeling intrinsically dynamic proteins important to human biology and health. The energy landscapes of these proteins are rich in minima that correspond to alternative structures with which a dynamic protein binds to molecular partners in the cell. On such landscapes, evolutionary algorithms that switch their objective from classic optimization to mapping are more informative of protein structure function relationships. While techniques for mapping energy landscapes have been developed in computational chemistry and physics, protein landscapes are more difficult for mapping due to their high dimensionality and multimodality. In this paper, we describe a memetic evolutionary algorithm that is capable of efficiently mapping complex landscapes. In conjunction with a hall of fame mechanism, the algorithm makes use of a novel, lineage- and neighborhood-aware local search procedure or better exploration and mapping of complex landscapes. We evaluate the algorithm on several benchmark problems and demonstrate the superiority of the novel local search mechanism. In addition, we illustrate its effectiveness in mapping the complex multimodal landscape of an intrinsically dynamic protein important to human health.",
"title": ""
},
{
"docid": "c67fbc6e0a2a66e0855dcfc7a70cfb86",
"text": "We present an optimistic primary-backup (so-called passive replication) mechanism for highly available Internet services on intercloud platforms. Our proposed method aims at providing Internet services despite the occurrence of a large-scale disaster. To this end, each service in our method creates replicas in different data centers and coordinates them with an optimistic consensus algorithm instead of a majority-based consensus algorithm such as Paxos. Although our method allows temporary inconsistencies among replicas, it eventually converges on the desired state without an interruption in services. In particular, the method tolerates simultaneous failure of the majority of nodes and a partitioning of the network. Moreover, through interservice communications, members of the service groups are autonomously reorganized according to the type of failure and changes in system load. This enables both load balancing and power savings, as well as provisioning for the next disaster. We demonstrate the service availability provided by our approach for simulated failure patterns and its adaptation to changes in workload for load balancing and power savings by experiments with a prototype implementation.",
"title": ""
},
{
"docid": "55861f00c32e8b478428c1c83aec2450",
"text": "Synopsis The majority of people with whiplash-associated disorder do not have neurological deficit or fracture and are therefore largely managed with nonsurgical interventions such as exercise, patient education, and behavioral-based interventions. To date, clinical guidelines, systematic reviews, and the results of high-quality randomized controlled trials recommend exercise and patient education as the primary interventions for people in both acute and chronic stages after injury. However, the relatively weak evidence and small effect sizes in individual trials have led authors of some systematic reviews to reach equivocal recommendations for either exercise or patient education, and led policy makers and funders to question whether the more expensive intervention (exercise) should be funded at all. Physical therapists, one of the most commonly consulted professionals treating individuals with whiplash-associated disorder, need to look beyond the evidence for insights as to what role patient education and exercise should play in the future management of whiplash. This clinical commentary therefore will review the evidence for exercise, patient education, and behavioral-based interventions for whiplash and provide clinical insight as to the future role that exercise and patient education should play in the management of this complex condition. Possible subgroups of patients who may best respond to exercise will be explored using stratification based on impairments, treatment response, and risk/prognostic factors. J Orthop Sports Phys Ther 2017;47(7):481-491. Epub 16 Jun 2017. doi:10.2519/jospt.2017.7138.",
"title": ""
},
{
"docid": "301e061163b115126b8f0b9851ed265c",
"text": "Pressure ulcers are a common problem among older adults in all health care settings. Prevalence and incidence estimates vary by setting, ulcer stage, and length of follow-up. Risk factors associated with increased pressure ulcer incidence have been identified. Activity or mobility limitation, incontinence, abnormalities in nutritional status, and altered consciousness are the most consistently reported risk factors for pressure ulcers. Pain, infectious complications, prolonged and expensive hospitalizations, persistent open ulcers, and increased risk of death are all associated with the development of pressure ulcers. The tremendous variability in pressure ulcer prevalence and incidence in health care settings suggests that opportunities exist to improve outcomes for persons at risk for and with pressure ulcers.",
"title": ""
},
{
"docid": "567d165eb9ad5f9860f3e0602cbe3e03",
"text": "This paper presents new image sensors with multi- bucket pixels that enable time-multiplexed exposure, an alter- native imaging approach. This approach deals nicely with scene motion, and greatly improves high dynamic range imaging, structured light illumination, motion corrected photography, etc. To implement an in-pixel memory or a bucket, the new image sensors incorporate the virtual phase CCD concept into a standard 4-transistor CMOS imager pixel. This design allows us to create a multi-bucket pixel which is compact, scalable, and supports true correlated double sampling to cancel kTC noise. Two image sensors with dual and quad-bucket pixels have been designed and fabricated. The dual-bucket sensor consists of a 640H × 576V array of 5.0 μm pixel in 0.11 μm CMOS technology while the quad-bucket sensor comprises 640H × 512V array of 5.6 μm pixel in 0.13 μm CMOS technology. Some computational photography applications were implemented using the two sensors to demonstrate their values in eliminating artifacts that currently plague computational photography.",
"title": ""
},
{
"docid": "dd0a7e506c11eef00f7bbd2f6c4c18aa",
"text": "Word sense induction (WSI) seeks to automatically discover the senses of a word in a corpus via unsupervised methods. We propose a sense-topic model for WSI, which treats sense and topic as two separate latent variables to be inferred jointly. Topics are informed by the entire document, while senses are informed by the local context surrounding the ambiguous word. We also discuss unsupervised ways of enriching the original corpus in order to improve model performance, including using neural word embeddings and external corpora to expand the context of each data instance. We demonstrate significant improvements over the previous state-of-the-art, achieving the best results reported to date on the SemEval-2013 WSI task.",
"title": ""
},
{
"docid": "c70f8bd719642ed818efc5387ffb6b55",
"text": "In this work, we propose a novel framework for privacy-preserving client-distributed machine learning. It is motivated by the desire to achieve differential privacy guarantees in the local model of privacy in a way that satisfies all systems constraints using asynchronous client-server communication and provides attractive model learning properties. We call it “Draw and Discard” because it relies on random sampling of models for load distribution (scalability), which also provides additional server-side privacy protections and improved model quality through averaging. We present the mechanics of client and server components of “Draw and Discard” and demonstrate how the framework can be applied to learning Generalized Linear models. We then analyze the privacy guarantees provided by our approach against several types of adversaries and showcase experimental results that provide evidence for the framework’s viability in practical deployments. We believe our framework is the first deployed distributed machine learning approach that operates in the local privacy model.",
"title": ""
},
{
"docid": "9eea1fd3ba2b877045e88ed2d1661525",
"text": "A geometric calibration method that determines a complete description of source-detector geometry was adapted to a mobile C-arm for cone-beam computed tomography (CBCT). The non-iterative calibration algorithm calculates a unique solution for the positions of the source (X(s), Y(s), Z(s)), detector (X(d), Y(d), Z(d)), piercing point (U(o), V(o)), and detector rotation angles (phi, theta, eta) based on projections of a phantom consisting of two plane-parallel circles of ball bearings encased in a cylindrical acrylic tube. The prototype C-arm system was based on a Siemens PowerMobil modified to provide flat-panel CBCT for image-guided interventions. The magnitude of geometric nonidealities in the source-detector orbit was measured, and the short-term (approximately 4 h) and long-term (approximately 6 months) reproducibility of the calibration was evaluated. The C-arm exhibits large geometric nonidealities due to mechanical flex, with maximum departures from the average semicircular orbit of deltaU(o) = 15.8 mm and deltaV(o) = 9.8 mm (for the piercing point), deltaX and deltaY = 6-8 mm and deltaZ = 1 mm (for the source and detector), and deltaphi approximately 2.9 degrees, deltatheta approximately 1.9 degrees, and delta eta approximately 0.8 degrees (for the detector tilt/rotation). Despite such significant departures from a semicircular orbit, these system parameters were found to be reproducible, and therefore correctable by geometric calibration. Short-term reproducibility was < 0.16 mm (subpixel) for the piercing point coordinates, < 0.25 mm for the source-detector X and Y, < 0.035 mm for the source-detector Z, and < 0.02 degrees for the detector angles. Long-term reproducibility was similarly high, demonstrated by image quality and spatial resolution measurements over a period of 6 months. For example, the full-width at half-maximum (FWHM) in axial images of a thin steel wire increased slightly as a function of the time (delta) between calibration and image acquisition: FWHM=0.62, 0.63, 0.66, 0.71, and 0.72 mm at delta = 0 s, 1 h, 1 day, 1 month, and 6 months, respectively. For ongoing clinical trials in CBCT-guided surgery at our institution, geometric calibration is conducted monthly to provide sufficient three-dimensional (3D) image quality while managing time and workflow considerations of the calibration and quality assurance process. The sensitivity of 3D image quality to each of the system parameters was investigated, as was the tolerance to systematic and random errors in the geometric parameters, showing the most sensitive parameters to be the piercing point coordinates (U(o), V(o)) and in-plane positions of the source (X(s), Y(s)) and detector (X(d), Y(d)). Errors in the out-of-plane position of the source (Z(s)) and detector (Z(d)) and the detector angles (phi, theta, eta) were shown to have subtler effects on 3D image quality.",
"title": ""
},
{
"docid": "d9732d90f1da24b57f62661412f20be3",
"text": "With the ubiquity of mobile devices, spatial crowdsourcing is emerging as a new platform, enabling spatial tasks (i.e., tasks related to a location) assigned to and performed by human workers. In this paper, for the first time we introduce a taxonomy for spatial crowdsourcing. Subsequently, we focus on one class of this taxonomy, in which workers send their locations to a centralized server and thereafter the server assigns to every worker his nearby tasks with the objective of maximizing the overall number of assigned tasks. We formally define this maximum task assignment (or MTA) problem in spatial crowdsourcing, and identify its challenges. We propose alternative solutions to address these challenges by exploiting the spatial properties of the problem space. Finally, our experimental evaluations on both real-world and synthetic data verify the applicability of our proposed approaches and compare them by measuring both the number of assigned tasks and the travel cost of the workers.",
"title": ""
},
{
"docid": "70b900d196f689caf9c3051cc27792ae",
"text": "This paper describes the hardware and software design of the kidsize humanoid robot systems of the Darmstadt Dribblers in 2007. The robots are used as a vehicle for research in control of locomotion and behavior of autonomous humanoid robots and robot teams with many degrees of freedom and many actuated joints. The Humanoid League of RoboCup provides an ideal testbed for such aspects of dynamics in motion and autonomous behavior as the problem of generating and maintaining statically or dynamically stable bipedal locomotion is predominant for all types of vision guided motions during a soccer game. A modular software architecture as well as further technologies have been developed for efficient and effective implementation and test of modules for sensing, planning, behavior, and actions of humanoid robots.",
"title": ""
},
{
"docid": "9c43ce72f77582848fd7603b9c5a9319",
"text": "This article discusses the various algorithms that make up the Netflix recommender system, and describes its business purpose. We also describe the role of search and related algorithms, which for us turns into a recommendations problem as well. We explain the motivations behind and review the approach that we use to improve the recommendation algorithms, combining A/B testing focused on improving member retention and medium term engagement, as well as offline experimentation using historical member engagement data. We discuss some of the issues in designing and interpreting A/B tests. Finally, we describe some current areas of focused innovation, which include making our recommender system global and language aware.",
"title": ""
},
{
"docid": "e67986714c6bda56c03de25168c51e6b",
"text": "With the development of modern technology and Android Smartphone, Smart Living is gradually changing people’s life. Bluetooth technology, which aims to exchange data wirelessly in a short distance using short-wavelength radio transmissions, is providing a necessary technology to create convenience, intelligence and controllability. In this paper, a new Smart Living system called home lighting control system using Bluetooth-based Android Smartphone is proposed and prototyped. First Smartphone, Smart Living and Bluetooth technology are reviewed. Second the system architecture, communication protocol and hardware design aredescribed. Then the design of a Bluetooth-based Smartphone application and the prototype are presented. It is shown that Android Smartphone can provide a platform to implement Bluetooth-based application for Smart Living.",
"title": ""
}
] |
scidocsrr
|
6815e821a6586c4a59da2a3ec7d95ce2
|
Security and Privacy System Requirements for Adopting Cloud Computing in Healthcare Data Sharing Scenarios
|
[
{
"docid": "530ef3f5d2f7cb5cc93243e2feb12b8e",
"text": "Online personal health record (PHR) enables patients to manage their own medical records in a centralized way, which greatly facilitates the storage, access and sharing of personal health data. With the emergence of cloud computing, it is attractive for the PHR service providers to shift their PHR applications and storage into the cloud, in order to enjoy the elastic resources and reduce the operational cost. However, by storing PHRs in the cloud, the patients lose physical control to their personal health data, which makes it necessary for each patient to encrypt her PHR data before uploading to the cloud servers. Under encryption, it is challenging to achieve fine-grained access control to PHR data in a scalable and efficient way. For each patient, the PHR data should be encrypted so that it is scalable with the number of users having access. Also, since there are multiple owners (patients) in a PHR system and every owner would encrypt her PHR files using a different set of cryptographic keys, it is important to reduce the key distribution complexity in such multi-owner settings. Existing cryptographic enforced access control schemes are mostly designed for the single-owner scenarios. In this paper, we propose a novel framework for access control to PHRs within cloud computing environment. To enable fine-grained and scalable access control for PHRs, we leverage attribute based encryption (ABE) techniques to encrypt each patients’ PHR data. To reduce the key distribution complexity, we divide the system into multiple security domains, where each domain manages only a subset of the users. In this way, each patient has full control over her own privacy, and the key management complexity is reduced dramatically. Our proposed scheme is also flexible, in that it supports efficient and on-demand revocation of user access rights, and break-glass access under emergency scenarios.",
"title": ""
},
{
"docid": "ac2f02b46a885cf662c41a16f976819e",
"text": "This paper presents a conceptual framework for security engineering, with a strong focus on security requirements elicitation and analysis. This conceptual framework establishes a clear-cut vocabulary and makes explicit the interrelations between the different concepts and notions used in security engineering. Further, we apply our conceptual framework to compare and evaluate current security requirements engineering approaches, such as the Common Criteria, Secure Tropos, SREP, MSRA, as well as methods based on UML and problem frames. We review these methods and assess them according to different criteria, such as the general approach and scope of the method, its validation, and quality assurance capabilities. Finally, we discuss how these methods are related to the conceptual framework and to one another.",
"title": ""
}
] |
[
{
"docid": "51fec678a2e901fdf109d4836ef1bf34",
"text": "BACKGROUND\nFoot-and-mouth disease (FMD) is an acute, highly contagious disease that infects cloven-hoofed animals. Vaccination is an effective means of preventing and controlling FMD. Compared to conventional inactivated FMDV vaccines, the format of FMDV virus-like particles (VLPs) as a non-replicating particulate vaccine candidate is a promising alternative.\n\n\nRESULTS\nIn this study, we have developed a co-expression system in E. coli, which drove the expression of FMDV capsid proteins (VP0, VP1, and VP3) in tandem by a single plasmid. The co-expressed FMDV capsid proteins (VP0, VP1, and VP3) were produced in large scale by fermentation at 10 L scale and the chromatographic purified capsid proteins were auto-assembled as VLPs in vitro. Cattle vaccinated with a single dose of the subunit vaccine, comprising in vitro assembled FMDV VLP and adjuvant, developed FMDV-specific antibody response (ELISA antibodies and neutralizing antibodies) with the persistent period of 6 months. Moreover, cattle vaccinated with the subunit vaccine showed the high protection potency with the 50 % bovine protective dose (PD50) reaching 11.75 PD50 per dose.\n\n\nCONCLUSIONS\nOur data strongly suggest that in vitro assembled recombinant FMDV VLPs produced from E. coli could function as a potent FMDV vaccine candidate against FMDV Asia1 infection. Furthermore, the robust protein expression and purification approaches described here could lead to the development of industrial level large-scale production of E. coli-based VLPs against FMDV infections with different serotypes.",
"title": ""
},
{
"docid": "9b57abdb1a9d06bc380373b42a0d8805",
"text": "The earth moverpsilas distance (EMD) is an important perceptually meaningful metric for comparing histograms, but it suffers from high (O(N3 logN)) computational complexity. We present a novel linear time algorithm for approximating the EMD for low dimensional histograms using the sum of absolute values of the weighted wavelet coefficients of the difference histogram. EMD computation is a special case of the Kantorovich-Rubinstein transshipment problem, and we exploit the Holder continuity constraint in its dual form to convert it into a simple optimization problem with an explicit solution in the wavelet domain. We prove that the resulting wavelet EMD metric is equivalent to EMD, i.e. the ratio of the two is bounded. We also provide estimates for the bounds. The weighted wavelet transform can be computed in time linear in the number of histogram bins, while the comparison is about as fast as for normal Euclidean distance or chi2 statistic. We experimentally show that wavelet EMD is a good approximation to EMD, has similar performance, but requires much less computation.",
"title": ""
},
{
"docid": "7f27b01099a38a1413df06b6a250425c",
"text": "This work focuses on algorithms which learn from examples to perform multiclass text and speech categorization tasks. Our approach is based on a new and improved family of boosting algorithms. We describe in detail an implementation, called BoosTexter, of the new boosting algorithms for text categorization tasks. We present results comparing the performance of BoosTexter and a number of other text-categorization algorithms on a variety of tasks. We conclude by describing the application of our system to automatic call-type identification from unconstrained spoken customer responses.",
"title": ""
},
{
"docid": "56c97efc4788102acf3106f2a13016ae",
"text": "In this paper a new technique is introduced for automatically building recognisable moving 3D models of individual people. Realistic modelling of people is essential for advanced multimedia, augmented reality and immersive virtual reality. Current systems for whole-body model capture are based on active 3D sensing to measure the shape of the body surface. Such systems are prohibitively expensive and do not enable capture of high-quality photo-realistic colour. This results in geometrically accurate but unrealistic human models. The goal of this research is to achieve automatic low-cost modelling of people suitable for personalised avatars to populate virtual worlds. A model-based approach is presented for automatic reconstruction of recognisable avatars from a set of lowcost colour images of a person taken from four orthogonal views. A generic 3D human model represents both the human shape and kinematic joint structure. The shape of a specific person is captured by mapping 2D silhouette information from the orthogonal view colour images onto the generic 3D model. Colour texture mapping is achieved by projecting the set of images onto the deformed 3D model. This results in the capture of a recognisable 3D facsimile of an individual person suitable for articulated movement in a virtual world. The system is low-cost, requires single-shot capture, is reliable for large variations in shape and size and can cope with clothing of moderate complexity.",
"title": ""
},
{
"docid": "4aab8656b410817697071a1a58023a19",
"text": "In addition to deciding what to say, s p e a k ers must decide how t o s a y it. The central premise of studies on the relationship between syntax and discourse function is that a speaker's use of a particular structural option is constrained by s p e-ciic aspects of the context of utterance. Work in discourse has uncovered a variety of speciic discourse functions served by individual syntactic constructions. 1 More recently, in Birner & Ward 1998 we examine generalizations that apply across constructions, identifying ways in which a given functional principle is variously realized in similar but distinct constructions. 1 We use the term`construction' in the conventional sense, to refer to each of the various grammatical conngurations of constituents within a particular language. See Fillmore 1988, Prince 1994, and Goldberg 1995, inter alia, for alternative views of what constitutes a linguistic construction.",
"title": ""
},
{
"docid": "40bb8660fd02dc402d80e0f5970fa9dc",
"text": "Dengue is the second most common mosquito-borne disease affecting human beings. In 2009, WHO endorsed new guidelines that, for the first time, consider neurological manifestations in the clinical case classification for severe dengue. Dengue can manifest with a wide range of neurological features, which have been noted--depending on the clinical setting--in 0·5-21% of patients with dengue admitted to hospital. Furthermore, dengue was identified in 4-47% of admissions with encephalitis-like illness in endemic areas. Neurological complications can be categorised into dengue encephalopathy (eg, caused by hepatic failure or metabolic disorders), encephalitis (caused by direct virus invasion), neuromuscular complications (eg, Guillain-Barré syndrome or transient muscle dysfunctions), and neuro-ophthalmic involvement. However, overlap of these categories is possible. In endemic countries and after travel to these regions, dengue should be considered in patients presenting with fever and acute neurological manifestations.",
"title": ""
},
{
"docid": "3cdab5427efd08edc4f73266b7ed9176",
"text": "Unsupervised learning of probabilistic models is a central yet challenging problem in machine learning. Specifically, designing models with tractable learning, sampling, inference and evaluation is crucial in solving this task. We extend the space of such models using real-valued non-volume preserving (real NVP) transformations, a set of powerful, stably invertible, and learnable transformations, resulting in an unsupervised learning algorithm with exact log-likelihood computation, exact and efficient sampling, exact and efficient inference of latent variables, and an interpretable latent space. We demonstrate its ability to model natural images on four datasets through sampling, log-likelihood evaluation, and latent variable manipulations.",
"title": ""
},
{
"docid": "ae687136682fd78e9a92797c2c24ddb0",
"text": "Not all global health issues are truly global, but the neglected epidemic of stillbirths is one such urgent concern. The Lancet’s fi rst Series on stillbirths was published in 2011. Thanks to tenacious eff orts by the authors of that Series, led by Joy Lawn, together with the impetus of a wider maternal and child health community, stillbirths have been recognised as an essential part of the post-2015 sustainable development agenda, expressed through a new Global Strategy for Women’s, Children’s and Adolescents’ Health which was launched at the UN General Assembly in 2015. But recognising is not the same as doing. We now present a second Series on stillbirths, which is predicated on the idea of ending preventable stillbirth deaths by 2030. As this Series amply proves, such an ambitious goal is possible. The fi ve Series papers off er a roadmap for eliminating one of the most neglected tragedies in global health today. Perhaps the greatest obstacle to addressing stillbirths is stigma. The utter despair and hopelessness felt by families who suff er a stillbirth is often turned inwards to fuel feelings of shame and failure. The idea of demanding action would be anathema for many women and men who have experienced the loss of a child in this appalling way. This Series dispels any notion that such self-recrimination is justifi ed. Most stillbirths have preventable causes—maternal infections, chronic diseases, undernutrition, obesity, to name only a few. The solutions to ending preventable stillbirths are therefore practicable, feasible, and cost eff ective. They form a core part of the continuum of care—from prenatal care and antenatal care, through skilled birth attendance, to newborn care. The number of stillbirths remains alarmingly high: 2·6 million stillbirths annually, with little reduction this past decade. But the truly horrifi c fi gure is 1·3 million intrapartum stillbirths. The idea of a child being alive at the beginning of labour and dying for entirely preventable reasons during the next few hours should be a health scandal of international proportions. Yet it is not. Our Series aims to make it so. When a stillbirth does occur, the health system can fail parents further by the absence of respectful, empathetic services, including bereavement care. Yet provision of such care is not only humane and necessary, it can also mitigate a range of negative emotional and psychological symptoms that mothers and fathers experience after the death of their baby, some of which can persist long after their loss. Ten nations account for two-thirds of stillbirths: India, Nigeria, Pakistan, China, Ethiopia, Democratic Republic of the Congo, Bangladesh, Indonesia, Tanzania, and Niger. Although 98% of stillbirths take place in low-income and middle-income countries, stillbirth rates also remain unacceptably high in high-income settings. Why? Partly because stillbirths are strongly linked to adverse social and economic determinants of health. The health system alone cannot address entirely the predicament of stillbirths. Only by tackling the causes of the causes of stillbirths will rates be defl ected downwards in high-income settings. There is one action we believe off ers promising prospects for accelerating progress to end stillbirths—stronger independent accountability both within countries and globally. 
By accountability, we mean better monitoring (with investment in high-quality data collection), stronger review (including, especially, civil society organisations), and more robust action (high-level political leadership, and not merely from a Ministry of Health). The UN’s new Independent Accountability Panel has an important part to play in this process. But the really urgent need is for stronger independent accountability in countries. And here is where a virtuous alliance might lie between health professionals, clinical and public health scientists, and civil society, including bereaved parents. We believe this Series off ers the spark to ignite a new alliance of common interests to end preventable stillbirths by 2030.",
"title": ""
},
{
"docid": "c1492f5eb2fafc52da81902a9d19d480",
"text": "A compact dual-band multiple-input-multiple-output (MIMO)/diversity antenna is proposed. This antenna is designed for 2.4/5.2/5.8GHz WLAN and 2.5/3.5/5.5 GHz WiMAX applications in portable mobile devices. It consists of two back-to-back monopole antennas connected with a T-shaped stub, where two rectangular slots are cut from the ground, which significantly reduces the mutual coupling between the two ports at the lower frequency band. The volume of this antenna is 40mm ∗ 30mm ∗ 1mm including the ground plane. Measured results show the isolation is better than −20 dB at the lower frequency band from 2.39 to 3.75GHz and −25 dB at the higher frequency band from 5.03 to 7 GHz, respectively. Moreover, acceptable radiation patterns, antenna gain, and envelope correlation coefficient are obtained. These characteristics indicate that the proposed antenna is suitable for some portable MIMO/diversity equipments.",
"title": ""
},
{
"docid": "0bd4e5d64f4f9f67dee5ee98557a851f",
"text": "In this paper, we present the vision for an open, urban-scale wireless networking testbed, called CitySense, with the goal of supporting the development and evaluation of novel wireless systems that span an entire city. CitySense is currently under development and will consist of about 100 Linux-based embedded PCs outfitted with dual 802.11a/b/g radios and various sensors, mounted on buildings and streetlights across the city of Cambridge. CitySense takes its cue from citywide urban mesh networking projects, but will differ substantially in that nodes will be directly programmable by end users. The goal of CitySense is explicitly not to provide public Internet access, but rather to serve as a new kind of experimental apparatus for urban-scale distributed systems and networking research efforts. In this paper we motivate the need for CitySense and its potential to support a host of new research and application developments. We also outline the various engineering challenges of deploying such a testbed as well as the research challenges that we face when building and supporting such a system.",
"title": ""
},
{
"docid": "74d7f4b3cc7458c35120e83acbd74f08",
"text": "Machine learning (ML) has the potential to revolutionize the field of radiation oncology, but there is much work to be done. In this article, we approach the radiotherapy process from a workflow perspective, identifying specific areas where a data-centric approach using ML could improve the quality and efficiency of patient care. We highlight areas where ML has already been used, and identify areas where we should invest additional resources. We believe that this article can serve as a guide for both clinicians and researchers to start discussing issues that must be addressed in a timely manner.",
"title": ""
},
{
"docid": "63f3147a04a23867d40d6ff4f65868cd",
"text": "The chemistry of graphene oxide is discussed in this critical review. Particular emphasis is directed toward the synthesis of graphene oxide, as well as its structure. Graphene oxide as a substrate for a variety of chemical transformations, including its reduction to graphene-like materials, is also discussed. This review will be of value to synthetic chemists interested in this emerging field of materials science, as well as those investigating applications of graphene who would find a more thorough treatment of the chemistry of graphene oxide useful in understanding the scope and limitations of current approaches which utilize this material (91 references).",
"title": ""
},
{
"docid": "966205d925e2c0840fcc9064fa450462",
"text": "Three diierent algorithms for obstacle detection are presented in this paper each based on diierent assumptions. The rst two algorithms are qualitative in that they return only yes/no answers regarding the presence of obstacles in the eld of view; no 3D reconstruction is performed. They have the advantage of fast determination of the existence of obstacles in a scene based on the solvability of a linear system. The rst algorithm uses information about the ground plane, while the second only assumes that the ground is planar. The third algorithm is quantitative in that it continuously estimates the ground plane and reconstructs partial 3D structures by determining the height above the ground plane of each point in the scene. Experimental results are presented for real and simulated data, and the performance of the three algorithms under diierent noise levels is compared in simulation. We conclude that in terms of the robustness of performance, the third algorithm is superior to the other two.",
"title": ""
},
{
"docid": "ae9bc4e21d6e2524f09e5f5fbb9e4251",
"text": "Arvaniti, Ladd and Mennen (1998) reported a phenomenon of ‘segmental anchoring’: the beginning and end of a linguistically significant pitch movement are anchored to specific locations in segmental structure, which means that the slope and duration of the pitch movement vary according to the segmental material with which it is associated. This finding has since been replicated and extended in several languages. One possible analysis is that autosegmental tones corresponding to the beginning and end of the pitch movement show secondary association with points in structure; however, problems with this analysis have led some authors to cast doubt on the ‘hypothesis’ of segmental anchoring. I argue here that segmental anchoring is not a hypothesis expressed in terms of autosegmental phonology, but rather an empirical phonetic finding. The difficulty of describing segmental anchoring as secondary association does not disprove the ‘hypothesis’, but shows the error of using a symbolic phonological device (secondary association) to represent gradient differences of phonetic detail that should be expressed quantitatively. I propose that treating pitch movements as gestures (in the sense of Articulatory Phonology) goes some way to resolving some of the theoretical questions raised by segmental anchoring, but suggest that pitch gestures have a variety of ‘domains’ which are in need of empirical study before we can successfully integrate segmental anchoring into our understanding of speech production.",
"title": ""
},
{
"docid": "dd2130111524f0fcb5b2e0440b79a9b3",
"text": "Conventional moving objects detection and tracking using visible light image was often affected by the change of moving objects, change of illumination conditions, interference of complex backgrounds, shaking of camera, shadow of moving objects and moving objects of selfocclusion or mutual-occlusion phenomenon. We propose a human detection method using HOG features of head and shoulder based on depth map and detecting moving objects in particular scene in this paper. In-depth study on Kinect to get depth map with foreground objects. Through the comprehensive analysis based on distance information of the moving objects segmentation extraction removal diagram of background information, by analyzing and comprehensively applying segmentation a method based on distance information to extract pedestrian’s Histograms of Oriented Gradients (HOG) features of head and shoulder[1], then make a comparison to the SVM classifier. SVM classifier isolate regions of interest (features of head and shoulder) and judge to achieve real-time detection of objects (pedestrian). The human detection method by using features of head and shoulder based on depth map is a good solution to the problem of low efficiency and identification in traditional human detection system. The detection accuracy of our algorithm is approximate at 97.4% and the average time processing per frame is about 51.76 ms.",
"title": ""
},
{
"docid": "df78d3cc688a0223ebdd680279dd9022",
"text": "This paper studies cache-aided interference networks with arbitrary number of transmitters and receivers, whereby each transmitter has a cache memory of finite size. Each transmitter fills its cache memory from a content library of files in the placement phase. In the subsequent delivery phase, each receiver requests one of the library files, and the transmitters are responsible for delivering the requested files from their caches to the receivers. The objective is to design schemes for the placement and delivery phases to maximize the sum degrees of freedom (sum-DoF) which expresses the capacity of the interference network at the high signal-to-noise ratio regime. Our work mainly focuses on a commonly used uncoded placement strategy. We provide an information-theoretic bound on the sumDoF for this placement strategy. We demonstrate by an example that the derived bound is tighter than the bounds existing in the literature for small cache sizes. We propose a novel delivery scheme with a higher achievable sum-DoF than those previously given in the literature. The results reveal that the reciprocal of sum-DoF decreases linearly as the transmitter cache size increases. Therefore, increasing cache sizes at transmitters translates to increasing the sum-DoF and, hence, the capacity of the interference networks. Index Terms Coded caching, Interference networks, Degrees of freedom, Interference management.",
"title": ""
},
{
"docid": "4c165c15a3c6f069f702a54d0dab093c",
"text": "We propose a simple method for improving the security of hashed passwords: the maintenance of additional ``honeywords'' (false passwords) associated with each user's account. An adversary who steals a file of hashed passwords and inverts the hash function cannot tell if he has found the password or a honeyword. The attempted use of a honeyword for login sets off an alarm. An auxiliary server (the ``honeychecker'') can distinguish the user password from honeywords for the login routine, and will set off an alarm if a honeyword is submitted.",
"title": ""
},
{
"docid": "a6a364819f397a8e28ac0b19480253cc",
"text": "News agencies and other news providers or consumers are confronted with the task of extracting events from news articles. This is done i) either to monitor and, hence, to be informed about events of specific kinds over time and/or ii) to react to events immediately. In the past, several promising approaches to extracting events from text have been proposed. Besides purely statistically-based approaches there are methods to represent events in a semantically-structured form, such as graphs containing actions (predicates), participants (entities), etc. However, it turns out to be very difficult to automatically determine whether an event is real or not. In this paper, we give an overview of approaches which proposed solutions for this research problem. We show that there is no gold standard dataset where real events are annotated in text documents in a fine-grained, semantically-enriched way. We present a methodology of creating such a dataset with the help of crowdsourcing and present preliminary results.",
"title": ""
}
] |
scidocsrr
|
cc6d1e3784ad84ca66e8a06859c38db1
|
Towards Better Analysis of Deep Convolutional Neural Networks
|
[
{
"docid": "db8325925cb9fd1ebdcf7480735f5448",
"text": "A general nonparametric technique is proposed for the analysis of a complex multimodal feature space and to delineate arbitrarily shaped clusters in it. The basic computational module of the technique is an old pattern recognition procedure, the mean shift. We prove for discrete data the convergence of a recursive mean shift procedure to the nearest stationary point of the underlying density function and thus its utility in detecting the modes of the density. The equivalence of the mean shift procedure to the Nadaraya–Watson estimator from kernel regression and the robust M-estimators of location is also established. Algorithms for two low-level vision tasks, discontinuity preserving smoothing and image segmentation are described as applications. In these algorithms the only user set parameter is the resolution of the analysis, and either gray level or color images are accepted as input. Extensive experimental results illustrate their excellent performance.",
"title": ""
}
] |
[
{
"docid": "86f0fa880f2a72cd3bf189132cc2aa44",
"text": "The advent of new technical solutions has offered a vast scope to encounter the existing challenges in tablet coating technology. One such outcome is the usage of innovative aqueous coating compositions to meet the limitations of organic based coating. The present study aimed at development of delayed release pantoprazole sodium tablets by coating with aqueous acrylic system belonging to methacrylic acid copolymer and to investigate the ability of the dosage form to protect the drug from acid milieu and to release rapidly in the duodenal pH. The core tablets were produced by direct compression using different disintegrants in variable concentrations. The physicochemical properties of all the tablets were consistent and satisfactory. Crosspovidone at 7.5% proved to be a better disintegrant with rapid disintegration with a minute, owing to its wicking properties. The optimized formulations were seal coated using HPMC dispersion to act as a barrier between the acid liable drug and enteric film coatings. The subcoating process was followed by enteric coating of tablets by the application of acryl-Eze at different theoretical weight gains. Enteric coated formulations were subjected to disintegration and dissolution tests by placing them in 0.1 N HCl for 2 h and then in pH 6.8 phosphate buffer for 1 h. The coated tablets remained static without peeling or cracking in the acid media, however instantly disintegrated in the intestinal pH. In the in vitro release studies, the optimized tablets released 0.16% in the acid media and 96% in the basic media which are well within the selected criteria. Results of the stability tests were satisfactory with the dissolution rate and assays were within acceptable limits. The results ascertained the acceptability of the aqueous based enteric coating composition for the successful development of delayed release, duodenal specific dosage forms for proton pump inhibitors.",
"title": ""
},
{
"docid": "3145c3bfa5bf76c0c08a2ccae1465ba0",
"text": "The purpose of this study is to examine whether supportive interactions on social networking sites mediate the influence of SNS use and the number of SNS friends on perceived social support, affect, sense of community, and life satisfaction. Employing momentary sampling, the current study also looked at the relationship between supportive interaction and immediate affect after the interaction over a period of 5 days. An analysis of 339 adult participants revealed a positive relationship between supportive interaction and positive affect after the interaction. A path model revealed positive associations among the number of SNS friends, supportive interactions, affect, perceived social support, sense of community, and life satisfaction. Implications for the research of online social networking and social support are discussed. 2013 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "4d987e2c0f3f49609f70149460201889",
"text": "Estimating count and density maps from crowd images has a wide range of applications such as video surveillance, traffic monitoring, public safety and urban planning. In addition, techniques developed for crowd counting can be applied to related tasks in other fields of study such as cell microscopy, vehicle counting and environmental survey. The task of crowd counting and density map estimation is riddled with many challenges such as occlusions, non-uniform density, intra-scene and inter-scene variations in scale and perspective. Nevertheless, over the last few years, crowd count analysis has evolved from earlier methods that are often limited to small variations in crowd density and scales to the current state-of-the-art methods that have developed the ability to perform successfully on a wide range of scenarios. The success of crowd counting methods in the recent years can be largely attributed to deep learning and publications of challenging datasets. In this paper, we provide a comprehensive survey of recent Convolutional Neural Network (CNN) based approaches that have demonstrated significant improvements over earlier methods that rely largely on hand-crafted representations. First, we briefly review the pioneering methods that use hand-crafted representations and then we delve in detail into the deep learning-based approaches and recently published datasets. Furthermore, we discuss the merits and drawbacks of existing CNN-based approaches and identify promising avenues of research in this rapidly evolving field. c © 2017 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "e1885f9c373c355a4df9307c6d90bf83",
"text": "Ricinulei possess movable, slender pedipalps with small chelae. When ricinuleids walk, they occasionally touch the soil surface with the tips of their pedipalps. This behavior is similar to the exploration movements they perform with their elongated second legs. We studied the distal areas of the pedipalps of the cavernicolous Mexican species Pseudocellus pearsei with scanning and transmission electron microscopy. Five different surface structures are characteristic for the pedipalps: (1) slender sigmoidal setae with smooth shafts resembling gustatory terminal pore single-walled (tp-sw) sensilla; (2) conspicuous long, mechanoreceptive slit sensilla; (3) a single, short, clubbed seta inside a deep pit representing a no pore single walled (np-sw) sensillum; (4) a single pore organ containing one olfactory wall pore single-walled (wp-sw) sensillum; and (5) gustatory terminal pore sensilla in the fingers of the pedipalp chela. Additionally, the pedipalps bear sensilla which also occur on the other appendages. With this sensory equipment, the pedipalps are highly effective multimodal short range sensory organs which complement the long range sensory function of the second legs. In order to present the complete sensory equipment of all appendages of the investigated Pseudocellus a comparative overview is provided.",
"title": ""
},
{
"docid": "8075a20a706397e448ce19e4c3fa6ec2",
"text": "BACKGROUND\nA single cycle of the Canadian Community Health Survey (CCHS) may not meet researchers' analytical needs. This article presents methods of combining CCHS cycles and discusses issues to consider if these data are to be combined. An empirical example illustrates the proposed methods.\n\n\nDATA AND METHODS\nTwo methods can be used to combine CCHS cycles: the separate approach and the pooled approach. With the separate approach, estimates are calculated for each cycle separately and then combined. The pooled approach combines data at the micro-data level, and the resulting dataset is treated as if it is a sample from one population.\n\n\nRESULTS\nFor the separate approach, it is recommended that the simple average of the estimates be used. For the pooled approach, it is recommended that weights be scaled by a constant factor where a period estimate covering the time periods of the individual cycles can be created. The choice of method depends on the aim of the analysis and the availability of data.\n\n\nINTERPRETATION\nCombining cycles should be considered only if the most current period estimates do not suffice. Both methods will obscure cycle-to-cycle trends and will not reveal changing behaviours related to public health initiatives.",
"title": ""
},
{
"docid": "e4ebb6d41393f0bd672f1f5985af98b4",
"text": "We propose a new framework to rank image attractiveness using a novel pairwise deep network trained with a large set of side-by-side multi-labeled image pairs from a web image index. The judges only provide relative ranking between two images without the need to directly assign an absolute score, or rate any predefined image attribute, thus making the rating more intuitive and accurate. We investigate a deep attractiveness rank net (DARN), a combination of deep convolutional neural network and rank net, to directly learn an attractiveness score mean and variance for each image and the underlying criteria the judges use to label each pair. The extension of this model (DARN-V2) is able to adapt to individual judge's personal preference. We also show the attractiveness of search results are significantly improved by using this attractiveness information in a real commercial search engine. We evaluate our model against other state-of-the-art models on our side-by-side web test data and another public aesthetic data set. With much less judgments (1M vs 50M), our model outperforms on side-by-side labeled data, and is comparable on data labeled by absolute score.",
"title": ""
},
{
"docid": "4746703f20b8fd902c451e658e44f49b",
"text": "This paper describes the development of a Latvian speech-to-text (STT) system at LIMSI within the Quaero project. One of the aims of the speech processing activities in the Quaero project is to cover all official European languages. However, for some of the languages only very limited, if any, training resources are available via corpora agencies such as LDC and ELRA. The aim of this study was to show the way, taking Latvian as example, an STT system can be rapidly developed without any transcribed training data. Following the scheme proposed in this paper, the Latvian STT system was developed in about a month and obtained a word error rate of 20% on broadcast news and conversation data in the Quaero 2012 evaluation campaign.",
"title": ""
},
{
"docid": "eaaead74f4b458897f8bef756d84ced0",
"text": "BACKGROUND\nIn recent years the video game industry has surpassed both the music and video industries in sales. Currently violent video games are among the most popular video games played by consumers, most specifically First-Person Shooters (FPS). Technological advancements in game play experience including the ability to play online has accounted for this increase in popularity. Previous research, utilising the General Aggression Model (GAM), has identified that violent video games increase levels of aggression. Little is known, however, as to the effect of playing a violent video game online.\n\n\nMETHODS/PRINCIPAL FINDINGS\nParticipants (N = 101) were randomly assigned to one of four experimental conditions; neutral video game--offline, neutral video game--online, violent video game--offline and violent video game--online. Following this they completed questionnaires to assess their attitudes towards the game and engaged in a chilli sauce paradigm to measure behavioural aggression. The results identified that participants who played a violent video game exhibited more aggression than those who played a neutral video game. Furthermore, this main effect was not particularly pronounced when the game was played online.\n\n\nCONCLUSIONS/SIGNIFICANCE\nThese findings suggest that both playing violent video games online and offline compared to playing neutral video games increases aggression.",
"title": ""
},
{
"docid": "331df0bd161470558dd5f5061d2b1743",
"text": "The work on large-scale graph analytics to date has largely focused on the study of static properties of graph snapshots. However, a static view of interactions between entities is often an oversimplification of several complex phenomena like the spread of epidemics, information diffusion, formation of online communities, and so on. Being able to find temporal interaction patterns, visualize the evolution of graph properties, or even simply compare them across time, adds significant value in reasoning over graphs. However, because of lack of underlying data management support, an analyst today has to manually navigate the added temporal complexity of dealing with large evolving graphs. In this paper, we present a system, called Historical Graph Store, that enables users to store large volumes of historical graph data and to express and run complex temporal graph analytical tasks against that data. It consists of two key components: a Temporal Graph Index (TGI), that compactly stores large volumes of historical graph evolution data in a partitioned and distributed fashion; it provides support for retrieving snapshots of the graph as of any timepoint in the past or evolution histories of individual nodes or neighborhoods; and a Spark-based Temporal Graph Analysis Framework (TAF), for expressing complex temporal analytical tasks and for executing them in an efficient and scalable manner. Our experiments demonstrate our system’s efficient storage, retrieval and analytics across a wide variety of queries on large volumes of historical graph data.",
"title": ""
},
{
"docid": "62319a41108f8662f6237a3935ffa8c6",
"text": "This interpretive study examined how the marriage renewal ritual reflects the social construction of marriage in the United States. Two culturally prominent ideologies of marriage were interwoven in our interviews of 25 married persons who had renewed their marriage vows: (a) a dominant ideology of community and (b) a more muted ideology of individualism. The ideology of community was evidenced by a construction of marriage featuring themes of public accountability, social embeddedness, and permanence. By contrast, the ideology of individualism constructed marriage around themes of love, choice, and individual growth. Most interpersonal communication scholars approach the study of marriage in one of two ways: (a) marriage as context, or (b) marriage as outcome. In contrast, in the present study we adopt an alternative way to envision marriage: marriage as cultural performance. We frame this study using two complementary theoretical perspectives: social constructionism and ritual performance theory. In particular, we examine how the cultural performance of marriage renewal rituals reflects the social construction of marriage in the United States. In an interpretive analysis of interviews with marital partners who had recently renewed their marriage vows, we examine the extent to which the two most prominent ideological perspectives on marriage—individualism and community—organize the meaning of marriage for our participants. B AXTE R AND B RAITHWAITE , SOUTHE RN C OM M UNICA TION J OURNA L 6 7 (2 0 0 2 ) 2 The Socially Contested Construction of Marriage Communication scholars interested in face-to-face interaction tend to adopt one of two general approaches to the study of marriage, what Whitchurch and Dickson (1999) have called the interpersonal communication approach and the family communication approach. The family communication approach, with which the present study is aligned, views communication as constitutive of the family. That is, through their communicative practices, parties construct their social reality of who their family is and the meanings that organize it. From this constitutive, or social constructionist perspective, social reality is an ongoing process of producing and reproducing meanings and social patterns through the interchanges among people (Berger & Luckmann, 1966; Burr, 1995; Gergen, 1994). From a family communication perspective, marriage is thus an ongoing discursive accomplishment. It is achieved through a myriad of interaction practices, including but not limited to, private exchanges between husbands and wives, exchanges between the couple and their extended kinship and friendship networks, public and private rituals such as weddings and anniversaries, and public discourse by politicians and others surrounding family values. Whitchurch and Dickson (1999) argued that, by contrast, the interpersonal communication approach views marriage as an independent or a dependent variable whose functioning in the cause-and-effect world of human behavior can be determined. For example, interpersonal communication scholars often frame marriage as an antecedent contextual variable in examining how various communicative phenomena are enacted in married couples compared with nonmarried couples, or in the premarital compared with postmarital stages of relationship development. 
Interpersonal communication scholars often also consider marriage as a dependent variable in examining which causal variables lead courtship pairs to marry or keep married couples from breaking up, such as the extent to which such communication phenomena as conflict or disclosive openness during courtship predict whether a couple will wed. Advocates of a constitutive or social constructionist perspective argue that the discursive production and reproduction of the social order is far from the univocal, consensually based model that scholars once envisioned (Baxter & Montgomery, 1996). Instead, the social world is a cross-current of multiple, often competing, conflictual perspectives. The social order is wrought from multivocal negotiations in which different interests, ideologies, and beliefs interact on an ongoing basis. The process of “social ordering” is not a monologic conversation of seamless coherence and consensus; rather, it is a pluralistic cacophony of discursive renderings, a multiplicity of negotiations in which different lived experiences and different systems of meaning are at stake (Billig, Condor, Edwards, Gane, Middleton, & Radley, 1988; Shotter, 1993). As Bakhtin (1981) expressed: “Every concrete utterance . . . serves as a point where centrifugal as well as centripetal forces are brought to bear. The processes of centralization and decentralization, of unification and disunification, intersect in the utterance” (p. 272). Thus, interaction events are enacted dialogically, with multiple “voices,” or perspectives, competing for discursive dominance or privilege as the hegemonic, centripetal center of a given cultural conversation in the moment. Social life is a collection of dialogues between centripetal and centrifugal groups, beliefs, ideologies, and perspectives. B AXTE R AND B RAITHWAITE , SOUTHE RN C OM M UNICA TION J OURNA L 6 7 (2 0 0 2 ) 3 In modern American society, the institution of marriage is subject to endless negotiation by those who enact and discuss it. Existing research suggests that marriage is a contested terrain whose boundary is disputed by scholars and laypersons alike. One belief is that marriage is essentially the isolated domain of the two married spouses, a private haven separate from the obligations and constraints of the broader social order. The other belief is that marriage is a social institution that is embedded practically and morally in the broader society. Bellah and his colleagues (Bellah, Madsen, Sullivan, Swidler, & Tipton, 1985) have argued that this “boundary dispute” surrounding marriage reflects an omnipresent ideological tension in the American society that can be traced to precolonial times—a tension between the cultural strands of utilitarian/expressive individualism and moral/ social community. The marriage of utilitarian/expressive individualism emphasizes freedom from societal traditions and obligations, privileging instead its private existence in fulfilling the emotional and psychological needs of the two spouses. Marriage, according to this ideology, is not conceived as a binding obligation; rather, it is viewed as existing only as the expression of the choices of the free selves who constitute the union. Marriage is built on love for partner, expressive openness between partners, self-development, and self-gratification. It is a psychological contract negotiated between self-fulfilled individuals acting in their own self-interests. Should marriage cease to be gratifying to the selves in it, it should naturally end. 
Bellah et al. (1985) argue that this conception of marriage dominates the discursive landscape of modern American society, occupying, in Bakhtin’s (1981) terms, the centripetal center. By contrast, the moral/social community view of marriage emphasizes its existence as a social institution with obligations to uphold traditional values of life-long commitment and duty, and to cohere with other social institutions in maintaining the existing moral and social order. According to this second ideology, marriage is anchored by social obligation—expectations, duties, and accountabilities to others. In this way, marriage is grounded in its ties to the larger society and is not simply a private haven for emotional gratification and intimacy for the two spouses. Bellah et al. (1985) argue that this view of marriage, although clearly distinguishable in the discursive landscape of modern American society, occupies the centrifugal margin rather than the hegemonic center in modern social constructions of marriage in the United States. These two cultural ideologies of marriage also are readily identifiable in existing social scientific research on marital communication (Allan, 1993). The “private haven” ideology is the one that dominates existing research on communication in marriage (Milardo & Wellman, 1992). In this sort of research on marital communication, scholars draw a clear boundary demarcation around the spousal unit and proceed to understand how marriage works by directing their empirical gaze inward to the psychological characteristics of the two married persons and the interactions that take place within this dyad (Duck, 1993). By contrast, other more sociologically oriented scholars who study communication in marriage emphasize that the marital relationship is different from its nonmarital counterparts of romantic and cohabiting couples precisely because of its status as an institutionalized social unit (e.g., McCall, McCall, Denzin, Suttles, & Kurth, 1970). Scholars who adopt the latter view direct their empirical gaze outside marital dyads to examine how marriage is B AXTE R AND B RAITHWAITE , SOUTHE RN C OM M UNICA TION J OURNA L 6 7 (2 0 0 2 ) 4 enacted in the presence of societal influences, such as legitimization and acceptance of a pair by their kinship and friendship networks, and societal barriers to marital dissolution (e.g., Milardo, 1988). A third approach to the study of marriage is identifiable in the growing number of dialogically oriented scholars interested in communication in personal relationships who are pointing to the status of marriage as simultaneously a private culture of two as well as an institutionalized element of the broader social order (e.g., Brown, Airman, Werner, 1992; Montgomery, 1992). According to Shotter (1993) and Bellah et al. (1985), couples face this dilemma of double accountability on an ongoing basis. Although the ideology of utilitarian/expressive individualism is given dominance, “most Americans are, in fact, caught between ideals of freedom and obligation” (Bellah et al., p. 102). For example, Shotter",
"title": ""
},
{
"docid": "b4b66392aec0c4e00eb6b1cabbe22499",
"text": "ADJ: Adjectives that occur with the NP CMC: Orthographic features of the NP CPL: Phrases that occur with the NP VERB: Verbs that appear with the NP Task: Predict whether a noun phrase (NP) belongs to a category (e.g. “city”) Category # Examples animal 20,733 beverage 18,932 bird 19,263 bodypart 21,840 city 21,778 disease 21,827 drug 20,452 fish 19,162 food 19,566 fruit 18,911 muscle 21,606 person 21,700 protein 21,811 river 21,723 vegetable 18,826",
"title": ""
},
{
"docid": "a855ff1c74517b5f7148b5286a42dd5d",
"text": "Currently the treatment mainstay of sepsis is early and appropriate antibiotic therapy, accompanied by aggressive fluid administration, the use of vasopressors when needed and the prompt initiation of measures to support each failing organ. Activated protein C and hydrocortisone, when used accordingly can affect mortality. As the pathophysiologic events that take place during sepsis are being elucidated, new molecules that target each step of those pathways are being tested. However, a lot of those molecules affect various mediators of the sepsis cascade including inflammatory cytokines, cellular receptors, nuclear transcription factors, coagulation activators and apoptosis regulators. Over the last decade, a multitude of clinical trials and animal studies have investigated strategies that aimed to restore immune homeostasis either by reducing inflammation or by stimulating the innate and adaptive immune responses. Antibiotics, statins and other molecules with multipotent immunomodulatory actions have also been studied in the treatment of sepsis.",
"title": ""
},
{
"docid": "d3c3e9877695a8abb2783e685f254eef",
"text": "Software systems are constantly evolving, with new versions and patches being released on a continuous basis. Unfortunately, software updates present a high risk, with many releases introducing new bugs and security vulnerabilities. \n We tackle this problem using a simple but effective multi-version based approach. Whenever a new update becomes available, instead of upgrading the software to the new version, we run the new version in parallel with the old one; by carefully coordinating their executions and selecting the behaviour of the more reliable version when they diverge, we create a more secure and dependable multi-version application. \n We implemented this technique in Mx, a system targeting Linux applications running on multi-core processors, and show that it can be applied successfully to several real applications such as Coreutils, a set of user-level UNIX applications; Lighttpd, a popular web server used by several high-traffic websites such as Wikipedia and YouTube; and Redis, an advanced key-value data structure server used by many well-known services such as GitHub and Flickr.",
"title": ""
},
{
"docid": "361b2d1060aada23f790a64e6698909e",
"text": "Decimation filter has wide application in both the analog and digital system for data rate conversion as well as filtering. In this paper, we have discussed about efficient structure of a decimation filter. We have three class of filters FIR, IIR and CIC filters. IIR filters are simpler in structure but do not satisfy linear phase requirements which are required in time sensitive features like a video or a speech. FIR filters have a well defined frequency response but they require lot of hardware to store the filter coefficients. CIC filters don’t have this drawback they are coefficient less so hardware requirement is much reduced but as they don’t have well defined frequency response. So another structure is proposed which takes advantage of good feature of both the structures and thus have a cascade of CIC and FIR filters. They exhibit both the advantage of FIR and CIC filters and hence more efficient over all in terms of hardware and frequency response requirements.",
"title": ""
},
{
"docid": "3be38e070678e358e23cb81432033062",
"text": "W ireless integrated network sensors (WINS) provide distributed network and Internet access to sensors, controls, and processors deeply embedded in equipment, facilities, and the environment. The WINS network represents a new monitoring and control capability for applications in such industries as transportation, manufacturing, health care, environmental oversight, and safety and security. WINS combine microsensor technology and low-power signal processing, computation, and low-cost wireless networking in a compact system. Recent advances in integrated circuit technology have enabled construction of far more capable yet inexpensive sensors, radios, and processors, allowing mass production of sophisticated systems linking the physical world to digital data networks [2–5]. Scales range from local to global for applications in medicine, security, factory automation, environmental monitoring, and condition-based maintenance. Compact geometry and low cost allow WINS to be embedded and distributed at a fraction of the cost of conventional wireline sensor and actuator systems. WINS opportunities depend on development of a scalable, low-cost, sensor-network architecture. Such applications require delivery of sensor information to the user at a low bit rate through low-power transceivers. Continuous sensor signal processing enables the constant monitoring of events in an environment in which short message packets would suffice. Future applications of distributed embedded processors and sensors will require vast numbers of devices. Conventional methods of sensor networking represent an impractical demand on cable installation and network bandwidth. Processing at the source would drastically reduce the financial, computational, and management burden on communication system",
"title": ""
},
{
"docid": "95d767d1b9a2ba2aecdf26443b3dd4af",
"text": "Advanced sensing and measurement techniques are key technologies to realize a smart grid. The giant magnetoresistance (GMR) effect has revolutionized the fields of data storage and magnetic measurement. In this work, a design of a GMR current sensor based on a commercial analog GMR chip for applications in a smart grid is presented and discussed. Static, dynamic and thermal properties of the sensor were characterized. The characterizations showed that in the operation range from 0 to ±5 A, the sensor had a sensitivity of 28 mV·A(-1), linearity of 99.97%, maximum deviation of 2.717%, frequency response of −1.5 dB at 10 kHz current measurement, and maximum change of the amplitude response of 0.0335%·°C(-1) with thermal compensation. In the distributed real-time measurement and monitoring of a smart grid system, the GMR current sensor shows excellent performance and is cost effective, making it suitable for applications such as steady-state and transient-state monitoring. With the advantages of having a high sensitivity, high linearity, small volume, low cost, and simple structure, the GMR current sensor is promising for the measurement and monitoring of smart grids.",
"title": ""
},
{
"docid": "d58f60013b507b286fcfc9f19304fea6",
"text": "The outcome of patients suffering from spondyloarthritis is determined by chronic inflammation and new bone formation leading to ankylosis. The latter process manifests by new cartilage and bone formation leading to joint or spine fusion. This article discusses the main mechanisms of new bone formation in spondyloarthritis. It reviews the key molecules and concepts of new bone formation and ankylosis in animal models of disease and translates these findings to human disease. In addition, proposed biomarkers of new bone formation are evaluated and the translational current and future challenges are discussed with regards to new bone formation in spondyloarthritis.",
"title": ""
},
{
"docid": "5ccec0df167000ba28601ef7a489e95f",
"text": "Living cells represent open, nonequilibrium, self organizing, and dissipative systems maintained with the continuous supply of outside and inside material, energy, and information flows. The energy in the form of adenosine triphosphate is utilized in biochemical cycles, transport processes, protein synthesis, reproduction, and performing other biological work. The processes in molecular and cellular biological systems are stochastic in nature with varying spatial and time scales, and bounded with conservation laws, kinetic laws, and thermodynamic constraints, which should be taken into account by any approach for modeling biological systems. In component biology, this review focuses on the modeling of enzyme kinetics and fluctuation of single biomolecules acting as molecular motors, while in systems biology it focuses on modeling biochemical cycles and networks in which all the components of a biological system interact functionally over time and space. Biochemical cycles emerge from collective and functional efforts to devise a cyclic flow of optimal energy degradation rate, which can only be described by nonequilibrium thermodynamics. Therefore, this review emphasizes the role of nonequilibrium thermodynamics through the formulations of thermodynamically coupled biochemical cycles, entropy production, fluctuation theorems, bioenergetics, and reaction-diffusion systems. Fluctuation theorems relate the forward and backward dynamical randomness of the trajectories or paths, bridge the microscopic and macroscopic domains, and link the time-reversible and irreversible descriptions of biological systems. However, many of these approaches are in their early stages of their development and no single computational or experimental technique is able to span all the relevant and necessary spatial and temporal scales. Wide range of experimental and novel computational techniques with high accuracy, precision, coverage, and efficiency are necessary for understanding biochemical cycles.",
"title": ""
},
{
"docid": "436900539406faa9ff34c1af12b6348d",
"text": "The accomplishments to date on the development of automatic vehicle control (AVC) technology in the Program on Advanced Technology for the Highway (PATH) at the University of California, Berkeley, are summarized. The basic prqfiiples and assumptions underlying the PATH work are identified, ‘followed by explanations of the work on automating vehicle lateral (steering) and longitudinal (spacing and speed) control. For both lateral and longitudinal control, the modeling of plant dynamics is described first, followed by development of the additional subsystems needed (communications, reference/sensor systems) and the derivation of the control laws. Plans for testing on vehicles in both near and long term are then discussed.",
"title": ""
},
{
"docid": "20f5aa19163c0d57f3d98f6c2c7d7a42",
"text": "Feature detection and extraction are essential in computer vision applications such as image matching and object recognition. The Scale-Invariant Feature Transform (SIFT) algorithm is one of the most robust approaches to detect and extract distinctive invariant features from images. However, high computational complexity makes it difficult to apply the SIFT algorithm to mobile applications. Recent developments in mobile processors have enabled heterogeneous computing on mobile devices, such as smartphones and tablets. In this paper, we present an OpenCL-based implementation of the SIFT algorithm on a smartphone, taking advantage of the mobile GPU. We carefully analyze the SIFT workloads and identify the parallelism. We implemented major steps of the SIFT algorithm using both serial C++ code and OpenCL kernels targeting mobile processors, to compare the performance of different workflows. Based on the profiling results, we partition the SIFT algorithm between the CPU and GPU in a way that best exploits the parallelism and minimizes the buffer transferring time to achieve better performance. The experimental results show that we are able to achieve 8.5 FPS for keypoints detection and 19 FPS for descriptor generation without reducing the number and the quality of the keypoints. Moreover, the heterogeneous implementation can reduce energy consumption by 41% compared to an optimized CPU-only implementation.",
"title": ""
}
] |
scidocsrr
|
1cc8a5031c770e16fbab84d62fccd60e
|
LiDAR Scan Matching Aided Inertial Navigation System in GNSS-Denied Environments
|
[
{
"docid": "7399a8096f56c46a20715b9f223d05bf",
"text": "Recently, Rao-Blackwellized particle filters (RBPF) have been introduced as an effective means to solve the simultaneous localization and mapping problem. This approach uses a particle filter in which each particle carries an individual map of the environment. Accordingly, a key question is how to reduce the number of particles. In this paper, we present adaptive techniques for reducing this number in a RBPF for learning grid maps. We propose an approach to compute an accurate proposal distribution, taking into account not only the movement of the robot, but also the most recent observation. This drastically decreases the uncertainty about the robot's pose in the prediction step of the filter. Furthermore, we present an approach to selectively carry out resampling operations, which seriously reduces the problem of particle depletion. Experimental results carried out with real mobile robots in large-scale indoor, as well as outdoor, environments illustrate the advantages of our methods over previous approaches",
"title": ""
},
{
"docid": "55160cc3013b03704555863c710e6d21",
"text": "Localization is one of the most important capabilities for autonomous mobile agents. Markov Localization (ML), applied to dense range images, has proven to be an effective technique. But its computational and storage requirements put a large burden on robot systems, and make it difficult to update the map dynamically. In this paper we introduce a new technique, based on correlation of a sensor scan with the map, that is several orders of magnitude more efficient than M L . CBML (correlation-based ML) permits video-rate localization using dense range scans, dynamic map updates, and a more precise error model than M L . In this paper we present the basic method of CBML, and validate its efficiency and correctness in a series of experiments on an implemented mobile robot base.",
"title": ""
}
] |
[
{
"docid": "7f1c7a0887917937147c8f5b2dbe2df3",
"text": "We consider the problem of learning probabilistic models fo r c mplex relational structures between various types of objects. A model can hel p us “understand” a dataset of relational facts in at least two ways, by finding in terpretable structure in the data, and by supporting predictions, or inferences ab out whether particular unobserved relations are likely to be true. Often there is a t radeoff between these two aims: cluster-based models yield more easily interpret abl representations, while factorization-based approaches have given better pr edictive performance on large data sets. We introduce the Bayesian Clustered Tensor Factorization (BCTF) model, which embeds a factorized representation of relatio ns in a nonparametric Bayesian clustering framework. Inference is fully Bayesia n but scales well to large data sets. The model simultaneously discovers interp retable clusters and yields predictive performance that matches or beats previo us probabilistic models for relational data.",
"title": ""
},
{
"docid": "12b178a26ba5f81a02568810df24b50f",
"text": "BACKGROUND\nMost previous studies of allied health professionals' evidence based practice (EBP) attitudes, knowledge and behaviours have been conducted with profession specific questionnaires of variable psychometric strength. This study compared the self-report EBP profiles of allied health professionals/trainees in an Australian university.\n\n\nMETHODS\nThe Evidence-Based Practice Profile (EBP2) questionnaire assessed five domains (Relevance, Terminology, Practice, Confidence, Sympathy) in 918 subjects from five professional disciplines. One and 2-way factorial analysis of variance (ANOVA) and t-tests analysed differences based on prior exposure to EBP, stage of training, professional discipline, age and gender.\n\n\nRESULTS\nThere were significant differences between stages of training (p < 0.001) for all domains and between EBP exposure groups for all but one domain (Sympathy). Professional discipline groups differed for Relevance, Terminology, Practice (p < 0.001) and Confidence (p = 0.006). Males scored higher for Confidence (p = 0.002) and females for Sympathy (p = 0.04), older subjects (> 24 years) scored higher for all domains (p < 0.05). Age and exposure affected all domains (p < 0.02). Differences in stages of training largely explained age-related differences in Confidence and Practice (p ≤ 0.001) and exposure-related differences in Confidence, Practice and Sympathy (p ≤ 0.023).\n\n\nCONCLUSIONS\nAcross five allied health professions, self-report EBP characteristics varied with EBP exposure, across stages of training, with profession and with age.",
"title": ""
},
{
"docid": "ea1a56c7bcf4871d1c6f2f9806405827",
"text": "—Prior to the successful use of non-contact photoplethysmography, several engineering issues regarding this monitoring technique must be considered. These issues include ambient light and motion artefacts, the wide dynamic signal range and the effect of direct light source coupling. The latter issue was investigated and preliminary results show that direct coupling can cause attenuation of the detected PPG signal. It is shown that a physical offset can be introduced between the light source and the detector in order to reduce this effect.",
"title": ""
},
{
"docid": "027681fed6a8932935ea8ef9e49cea13",
"text": "Nowadays smartphones are ubiquitous and - to some extent - already used to support sports training, e.g. runners or bikers track their trip with a gps-enabled smartphone. But recent mobile technology has powerful processors that allow even more complex tasks like image or graphics processing. In this work we address the question on how mobile technology can be used for collaborative boulder training. More specifically, we present a mobile augmented reality application to support various parts of boulder training. The proposed approach also incorporates sharing and other social features. Thus our solution supports collaborative training by providing an intuitive way to create, share and define goals and challenges together with friends. Furthermore we propose a novel method of trackable generation for augmented reality. Synthetically generated images of climbing walls are used as trackables for real, existing walls.",
"title": ""
},
{
"docid": "02c00f1fd4efffd5f01c165b64fb3f5e",
"text": "A fundamental question in human memory is how the brain represents sensory-specific information during the process of retrieval. One hypothesis is that regions of sensory cortex are reactivated during retrieval of sensory-specific information (1). Here we report findings from a study in which subjects learned a set of picture and sound items and were then given a recall test during which they vividly remembered the items while imaged by using event-related functional MRI. Regions of visual and auditory cortex were activated differentially during retrieval of pictures and sounds, respectively. Furthermore, the regions activated during the recall test comprised a subset of those activated during a separate perception task in which subjects actually viewed pictures and heard sounds. Regions activated during the recall test were found to be represented more in late than in early visual and auditory cortex. Therefore, results indicate that retrieval of vivid visual and auditory information can be associated with a reactivation of some of the same sensory regions that were activated during perception of those items.",
"title": ""
},
{
"docid": "6a19410817766b052a2054b2cb3efe42",
"text": "Artificially intelligent agents equipped with strategic skills that can negotiate during their interactions with other natural or artificial agents are still underdeveloped. This paper describes a successful application of Deep Reinforcement Learning (DRL) for training intelligent agents with strategic conversational skills, in a situated dialogue setting. Previous studies have modelled the behaviour of strategic agents using supervised learning and traditional reinforcement learning techniques, the latter using tabular representations or learning with linear function approximation. In this study, we apply DRL with a high-dimensional state space to the strategic board game of Settlers of Catan—where players can offer resources in exchange for others and they can also reply to offers made by other players. Our experimental results report that the DRL-based learnt policies significantly outperformed several baselines including random, rule-based, and supervised-based behaviours. The DRL-based policy has a 53% win rate versus 3 automated players (‘bots’), whereas a supervised player trained on a dialogue corpus in this setting achieved only 27%, versus the same 3 bots. This result supports the claim that DRL is a promising framework for training dialogue systems, and strategic agents with negotiation abilities.",
"title": ""
},
{
"docid": "bc2dcf4e3fdb4303e9fc56f102cb1226",
"text": "The main objective of this research was to determine the effects of a long-term ketogenic diet, rich in polyunsaturated fatty acids, on aerobic performance and exercise metabolism in off-road cyclists. Additionally, the effects of this diet on body mass and body composition were evaluated, as well as those that occurred in the lipid and lipoprotein profiles due to the dietary intervention. The research material included eight male subjects, aged 28.3 ± 3.9 years, with at least five years of training experience that competed in off-road cycling. Each cyclist performed a continuous exercise protocol on a cycloergometer with varied intensity, after a mixed and ketogenic diet in a crossover design. The ketogenic diet stimulated favorable changes in body mass and body composition, as well as in the lipid and lipoprotein profiles. Important findings of the present study include a significant increase in the relative values of maximal oxygen uptake (VO2max) and oxygen uptake at lactate threshold (VO2 LT) after the ketogenic diet, which can be explained by reductions in body mass and fat mass and/or the greater oxygen uptake necessary to obtain the same energy yield as on a mixed diet, due to increased fat oxidation or by enhanced sympathetic activation. The max work load and the work load at lactate threshold were significantly higher after the mixed diet. The values of the respiratory exchange ratio (RER) were significantly lower at rest and during particular stages of the exercise protocol following the ketogenic diet. The heart rate (HR) and oxygen uptake were significantly higher at rest and during the first three stages of exercise after the ketogenic diet, while the reverse was true during the last stage of the exercise protocol conducted with maximal intensity. Creatine kinase (CK) and lactate dehydrogenase (LDH) activity were significantly lower at rest and during particular stages of the 105-min exercise protocol following the low carbohydrate ketogenic diet. The alterations in insulin and cortisol concentrations due to the dietary intervention confirm the concept that the glucostatic mechanism controls the hormonal and metabolic responses to exercise.",
"title": ""
},
{
"docid": "7f70eb577d9f76b95222377e2ad0bf4c",
"text": "Designing high-performance and scalable applications on GPU clusters requires tackling several challenges. The key challenge is the separate host memory and device memory, which requires programmers to use multiple programming models, such as CUDA and MPI, to operate on data in different memory spaces. This challenge becomes more difficult to tackle when non-contiguous data in multidimensional structures is used by real-world applications. These challenges limit the programming productivity and the application performance. We propose the GPU-Aware MPI to support data communication from GPU to GPU using standard MPI. It unifies the separate memory spaces, and avoids explicit CPU-GPU data movement and CPU/GPU buffer management. It supports all MPI datatypes on device memory with two algorithms: a GPU datatype vectorization algorithm and a vector based GPU kernel data pack and unpack algorithm. A pipeline is designed to overlap the non-contiguous data packing and unpacking on GPUs, the data movement on the PCIe, and the RDMA data transfer on the network. We incorporate our design with the open-source MPI library MVAPICH2 and optimize a production application: the multiphase 3D LBM. Besides the increase of programming productivity, we observe up to 19.9 percent improvement in application-level performance on 64 GPUs of the Oakley supercomputer.",
"title": ""
},
{
"docid": "8e7cfad4f1709101e5790343200d1e16",
"text": "Although electronic commerce experts often cite privacy concerns as barriers to consumer electronic commerce, there is a lack of understanding about how these privacy concerns impact consumers' willingness to conduct transactions online. Therefore, the goal of this study is to extend previous models of e-commerce adoption by specifically assessing the impact that consumers' concerns for information privacy (CFIP) have on their willingness to engage in online transactions. To investigate this, we conducted surveys focusing on consumers’ willingness to transact with a well-known and less well-known Web merchant. Results of the study indicate that concern for information privacy affects risk perceptions, trust, and willingness to transact for a wellknown merchant, but not for a less well-known merchant. In addition, the results indicate that merchant familiarity does not moderate the relationship between CFIP and risk perceptions or CFIP and trust. Implications for researchers and practitioners are discussed. 1 Elena Karahanna was the accepting senior editor. Kathy Stewart Schwaig and David Gefen were the reviewers. This paper was submitted on October 12, 2004, and went through 4 revisions. Information Privacy and Online Consumer Purchasing/Van Slyke et al. Journal of the Association for Information Systems Vol. 7 No. 6, pp. 415-444/June 2006 416 Introduction Although information privacy concerns have long been cited as barriers to consumer adoption of business-to-consumer (B2C) e-commerce (Hoffman et al., 1999, Sullivan, 2005), the results of studies focusing on privacy concerns have been equivocal. Some studies find that mechanisms intended to communicate information about privacy protection such as privacy seals and policies increase intentions to engage in online transactions (Miyazaki and Krishnamurthy, 2002). In contrast, others find that these mechanisms have no effect on consumer willingness to engage in online transactions (Kimery and McCord, 2002). Understanding how consumers’ concerns for information privacy (CFIP), or their concerns about how organizations use and protect personal information (Smith et al., 1996), impact consumers’ willingness to engage in online transactions is important to our knowledge of consumer-oriented e-commerce. For example, if CFIP has a strong direct impact on willingness to engage in online transactions, both researchers and practitioners may want to direct efforts at understanding how to allay some of these concerns. In contrast, if CFIP only impacts willingness to transact through other factors, then efforts may be directed at influencing these factors through both CFIP as well as through their additional antecedents. Prior research on B2C e-commerce examining consumer willingness to transact has focused primarily on the role of trust and trustworthiness either using trust theory or using acceptance, and adoption-based theories as frameworks from which to study trust. The research based on trust theories tends to focus on the structure of trust or on antecedents to trust (Bhattacherjee, 2002; Gefen, 2000; Jarvenpaa et al., 2000; McKnight et al., 2002a). Adoptionand acceptance-based research includes studies using the Technology Acceptance Model (Gefen et al., 2003) and diffusion theory (Van Slyke et al., 2004) to examine the effects of trust within well-established models. To our knowledge, studies of the effects of trust in the context of e-commerce transactions have not included CFIP as an antecedent in their models. 
The current research addresses this by examining the effect of CFIP on willingness to transact within a nomological network of additional antecedents (i.e., trust and risk) that we expect will be influenced by CFIP. In addition, familiarity with the Web merchant may moderate the relationship between CFIP and both trust and risk perceptions. As an individual becomes more familiar with the Web merchant and how it collects and protects personal information, perceptions may be driven more by knowledge of the merchant than by information concerns. This differential relationship between factors for more familiar (e.g. experienced) and less familiar merchants is similar to findings of previous research on user acceptance for potential and repeat users of technology (Karahanna et al., 1999) and e-commerce customers (Gefen et al., 2003). Thus, this research has two goals. The first goal is to better understand the role that consumers’ concerns for information privacy (CFIP) have on their willingness to engage in online transactions. The second goal is to investigate whether familiarity moderates the effects of CFIP on key constructs in our nomological network. Specifically, the following research questions are investigated: How do consumers’ concerns for information privacy affect their willingness to engage in online transactions? Does consumers' familiarity with a Web merchant moderate the impact of concern for information privacy on risk and on trust? This paper is organized as follows. First, we provide background information regarding the existing literature and the constructs of interest. Next, we present our research model and develop the hypotheses arising from the model. We then describe the method by which we investigated the hypotheses. This is followed by a discussion of the results of our analysis. We conclude the paper by discussing the implications and limitations of our work, along with suggestions for future research. Research Model and Hypotheses Figure 1 presents this study's research model. Given that concern for information privacy is the central focus of the study, we embed the construct within a nomological network of willingness to transact in prior research. Specifically, we include risk, familiarity with the merchant, and trust (Bhattacherjee, 2002; Gefen et al., 2003; Jarvenpaa and Tractinsky, 1999; Van Slyke et al., 2004) constructs that CFIP is posited to influence and that have been found to influence. We first discuss CFIP and then present the theoretical rationale that underlies the relationships presented in the research model. We begin our discussion of the research model by providing an overview of CFIP, focusing on this construct in the context of e-commerce.",
"title": ""
},
{
"docid": "d0aac3268bacebc6e0328ca13cd6eccd",
"text": "Food allergies are classified into three types, \"IgE-mediated,\" \"combined IgE- and cell-mediated\" and \"cell-mediated/non-IgE-mediated,\" depending on the involvement of IgE in their pathogenesis. Patients who develop predominantly cutaneous and/or respiratory symptoms belong to the IgE-mediated food allergy type. On the other hand, patients with gastrointestinal food allergy (GI allergy) usually develop gastrointestinal symptoms several hours after ingestion of offending foods; they belong to the cell-mediated/non-IgE-mediated or combined IgE- and cell-mediated food allergy types. GI allergies are also classified into a number of different clinical entities: food protein-induced enterocolitis syndrome (FPIES), food protein-induced proctocolitis (FPIP), food protein-induced enteropathy (Enteropathy) and eosinophilic gastrointestinal disorders (EGID). In the case of IgE-mediated food allergy, the diagnostic approaches and pathogenic mechanisms are well characterized. In contrast, the diagnostic approaches and pathogenic mechanisms of GI allergy remain mostly unclear. In this review, we summarized each type of GI allergy in regard to its historical background and updated clinical features, offending foods, etiology, diagnosis, examinations, treatment and pathogenesis. There are still many problems, especially in regard to the diagnostic approaches for GI allergy, that are closely associated with the definition of each disease. In addition, there are a number of unresolved issues regarding the pathogenic mechanisms of GI allergy that need further study and elucidation. Therefore, we discussed some of the diagnostic and research issues for GI allergy that need further investigation.",
"title": ""
},
{
"docid": "f8c654b24abbe7d0239db559513021aa",
"text": "We address the unsupervised learning of several interconnected problems in low-level vision: single view depth prediction, camera motion estimation, optical flow, and segmentation of a video into the static scene and moving regions. Our key insight is that these four fundamental vision problems are coupled through geometric constraints. Consequently, learning to solve them together simplifies the problem because the solutions can reinforce each other. We go beyond previous work by exploiting geometry more explicitly and segmenting the scene into static and moving regions. To that end, we introduce Competitive Collaboration, a framework that facilitates the coordinated training of multiple specialized neural networks to solve complex problems. Competitive Collaboration works much like expectation-maximization, but with neural networks that act as both competitors to explain pixels that correspond to static or moving regions, and as collaborators through a moderator that assigns pixels to be either static or independently moving. Our novel method integrates all these problems in a common framework and simultaneously reasons about the segmentation of the scene into moving objects and the static background, the camera motion, depth of the static scene structure, and the optical flow of moving objects. Our model is trained without any supervision and achieves state-of-the-art performance among joint unsupervised methods on all sub-problems. .",
"title": ""
},
{
"docid": "492b01d63bbe0e26522958e8d6147592",
"text": "In this paper, an original method to reduce the height of a dual-polarized unidirectional wideband antenna based on two crossed magneto-electric dipoles is proposed. The miniaturization technique consists in adding a capacitive load between vertical plates. The height of the radiating element is reduced to 0.1λ0, where λ0 is the wavelength at the lowest operation frequency for a Standing Wave Ratio (SWR) <2.5, which corresponds to a reduction factor of 37.5%. The measured input impedance bandwidth is 64% from 1.6 GHz to 3.1 GHz with a SWR <2.5.",
"title": ""
},
{
"docid": "1042329bbc635f1b39a5d15d795be8a3",
"text": "In this work we present a method to estimate a 3D face shape from a single image. Our method is based on a cascade regression framework that directly estimates face landmarks locations in 3D. We include the knowledge that a face is a 3D object into the learning pipeline and show how this information decreases localization errors while keeping the computational time low. We predict the actual positions of the landmarks even if they are occluded due to face rotation. To support the ability of our method to reliably reconstruct 3D shapes, we introduce a simple method for head pose estimation using a single image that reaches higher accuracy than the state of the art. Comparison of 3D face landmarks localization with the available state of the art further supports the feasibility of a single-step face shape estimation. The code, trained models and our 3D annotations will be made available to the research community.",
"title": ""
},
{
"docid": "0c9b46fba19b6604570ff41fcb400640",
"text": "Tangible user interfaces (TUIs) provide physical form to digital information and computation, facilitating the direct manipulation of bits. Our goal in TUI development is to empower collaboration, learning, and design by using digital technology and at the same time taking advantage of human abilities to grasp and manipulate physical objects and materials. This paper discusses a model of TUI, key properties, genres, applications, and summarizes the contributions made by the Tangible Media Group and other researchers since the publication of the first Tangible Bits paper at CHI 1997. http://tangible.media.mit.edu/",
"title": ""
},
{
"docid": "0d13be9f5e2082af96c370d3c316204f",
"text": "We present a combined hardware and software solution for markerless reconstruction of non-rigidly deforming physical objects with arbitrary shape in real-time. Our system uses a single self-contained stereo camera unit built from off-the-shelf components and consumer graphics hardware to generate spatio-temporally coherent 3D models at 30 Hz. A new stereo matching algorithm estimates real-time RGB-D data. We start by scanning a smooth template model of the subject as they move rigidly. This geometric surface prior avoids strong scene assumptions, such as a kinematic human skeleton or a parametric shape model. Next, a novel GPU pipeline performs non-rigid registration of live RGB-D data to the smooth template using an extended non-linear as-rigid-as-possible (ARAP) framework. High-frequency details are fused onto the final mesh using a linear deformation model. The system is an order of magnitude faster than state-of-the-art methods, while matching the quality and robustness of many offline algorithms. We show precise real-time reconstructions of diverse scenes, including: large deformations of users' heads, hands, and upper bodies; fine-scale wrinkles and folds of skin and clothing; and non-rigid interactions performed by users on flexible objects such as toys. We demonstrate how acquired models can be used for many interactive scenarios, including re-texturing, online performance capture and preview, and real-time shape and motion re-targeting.",
"title": ""
},
{
"docid": "cdaa99f010b20906fee87d8de08e1106",
"text": "We propose a novel hierarchical clustering algorithm for data-sets in which only pairwise distances between the points are provided. The classical Hungarian method is an efficient algorithm for solving the problem of minimal-weight cycle cover. We utilize the Hungarian method as the basic building block of our clustering algorithm. The disjoint cycles, produced by the Hungarian method, are viewed as a partition of the data-set. The clustering algorithm is formed by hierarchical merging. The proposed algorithm can handle data that is arranged in non-convex sets. The number of the clusters is automatically found as part of the clustering process. We report an improved performance of our algorithm in a variety of examples and compare it to the spectral clustering algorithm.",
"title": ""
},
{
"docid": "f794b6914cc99fcd2a13b81e6fbe12d2",
"text": "An unprecedented rise in the number of asylum seekers and refugees was seen in Europe in 2015, and it seems that numbers are not going to be reduced considerably in 2016. Several studies have tried to estimate risk of infectious diseases associated with migration but only very rarely these studies make a distinction on reason for migration. In these studies, workers, students, and refugees who have moved to a foreign country are all taken to have the same disease epidemiology. A common disease epidemiology across very different migrant groups is unlikely, so in this review of infectious diseases in asylum seekers and refugees, we describe infectious disease prevalence in various types of migrants. We identified 51 studies eligible for inclusion. The highest infectious disease prevalence in refugee and asylum seeker populations have been reported for latent tuberculosis (9-45%), active tuberculosis (up to 11%), and hepatitis B (up to 12%). The same population had low prevalence of malaria (7%) and hepatitis C (up to 5%). There have been recent case reports from European countries of cutaneous diphtheria, louse-born relapsing fever, and shigella in the asylum-seeking and refugee population. The increased risk that refugees and asylum seekers have for infection with specific diseases can largely be attributed to poor living conditions during and after migration. Even though we see high transmission in the refugee populations, there is very little risk of spread to the autochthonous population. These findings support the efforts towards creating a common European standard for the health reception and reporting of asylum seekers and refugees.",
"title": ""
},
{
"docid": "b45d1003afac487dd3d5477621a85f74",
"text": "Creating, placing, and presenting social media content is a difficult problem. In addition to the quality of the content itself, several factors such as the way the content is presented (the title), the community it is posted to, whether it has been seen before, and the time it is posted determine its success. There are also interesting interactions between these factors. For example, the language of the title should be targeted to the community where the content is submitted, yet it should also highlight the distinctive nature of the content. In this paper, we examine how these factors interact to determine the popularity of social media content. We do so by studying resubmissions, i.e., content that has been submitted multiple times, with multiple titles, to multiple different communities. Such data allows us to ‘tease apart’ the extent to which each factor influences the success of that content. The models we develop help us understand how to better target social media content: by using the right title, for the right community, at the right time.",
"title": ""
},
{
"docid": "ac9bfa64fa41d4f22fc3c45adaadb099",
"text": "Driver distraction, defined as the diversion of attention away from activities critical for safe driving toward a competing activity, is increasingly recognized as a significant source of injuries and fatalities on the roadway. Additionally, the trend towards increasing the use of in-vehicle information systems is critical because they induce visual, biomechanical and cognitive distraction and may affect driving performance in qualitatively different ways. Non-intrusive methods are strongly preferred for monitoring distraction, and vision-based systems have appeared to be attractive for both drivers and researchers. Biomechanical, visual and cognitive distractions are the most commonly detected types in video-based algorithms. Many distraction detection systems only use a single visual cue and therefore, they may be easily disturbed when occlusion or illumination changes appear. Moreover, the combination of these visual cues is a key and challenging aspect in the development of robust distraction detection systems. These visual cues can be extracted mainly by using face monitoring systems but they should be completed with more visual cues (e.g., hands or body information) or even, distraction detection from specific actions (e.g., phone usage). Additionally, these algorithms should be included in an embedded device or system inside a car. This is not a trivial task and several requirements must be taken into account: reliability, real-time performance, low cost, small size, low power consumption, flexibility and short time-to-market. The key points for the development and implementation of sensors to carry out the detection of distraction will also be reviewed. This paper shows a review of the role of computer vision technology applied to the development of monitoring systems to detect distraction. Some key points considered as both future work and challenges ahead yet to be solved will also be addressed.",
"title": ""
}
] |
scidocsrr
|
b5610388a77222900338a807b195f4fa
|
The battle against phishing: Dynamic Security Skins
|
[
{
"docid": "f6bdd7cedb9ed3ac8c66b44dba546c69",
"text": "Current security systems su er from the fact that they fail to account for human factors. This paper considers two human limitations: First, people are slow and unreliable when comparing meaningless strings; and second, people have di culties in remembering strong passwords or PINs. We identify two applications where these human factors negatively a ect security: Validation of root keys in public-key infrastructures, and user authentication. Our approach to improve the security of these systems is to use hash visualization, a technique which replaces meaningless strings with structured images. We examine the requirements of such a system and propose the prototypical solution Random Art . We also show how to apply hash visualization to improve the real-world security of root key validation and user authentication.",
"title": ""
}
] |
[
{
"docid": "bce79146a0316fd10c6ee492ff0b5686",
"text": "Recent advances in deep learning for object recognition in natural images has prompted a surge of interest in applying a similar set of techniques to medical images. Most of the initial attempts largely focused on replacing the input to such a deep convolutional neural network from a natural image to a medical image. This, however, does not take into consideration the fundamental differences between these two types of data. More specifically, detection or recognition of an anomaly in medical images depends significantly on fine details, unlike object recognition in natural images where coarser, more global structures matter more. This difference makes it inadequate to use the existing deep convolutional neural networks architectures, which were developed for natural images, because they rely on heavily downsampling an image to a much lower resolution to reduce the memory requirements. This hides details necessary to make accurate predictions for medical images. Furthermore, a single exam in medical imaging often comes with a set of different views which must be seamlessly fused in order to reach a correct conclusion. In our work, we propose to use a multi-view deep convolutional neural network that handles a set of more than one highresolution medical image. We evaluate this network on large-scale mammography-based breast cancer screening (BI-RADS prediction) using 103 thousand images. We focus on investigating the impact of training set sizes and image sizes on the prediction accuracy. Our results highlight that performance clearly increases with the size of training set, and that the best performance can only be achieved using the images in the original resolution. This suggests the future direction of medical imaging research using deep neural networks is to utilize as much data as possible with the least amount of potentially harmful preprocessing.",
"title": ""
},
{
"docid": "11399ccfc503e9f43402896e5871a4d8",
"text": "This work investigated using n-grams, parts-ofspeech and support vector machines for detecting the customer intents in the user generated contents. The work demonstrated a system of categorization of customer intents that is concise and useful for business purposes. We examined possible sources of text posts to be analyzed using three text mining algorithms. We presented the three algorithms and the results of testing them in detecting different six intents. This work established that intent detection can be performed on text posts with approximately 61% accuracy. Keywords—Intent detection; text mining; support vector machines; N-grams; parts of speech",
"title": ""
},
{
"docid": "db2160b80dd593c33661a16ed2e404d1",
"text": "Steganalysis tools play an important part in saving time and providing new angles of attack for forensic analysts. StegExpose is a solution designed for use in the real world, and is able to analyse images for LSB steganography in bulk using proven attacks in a time efficient manner. When steganalytic methods are combined intelligently, they are able generate even more accurate results. This is the prime focus of StegExpose.",
"title": ""
},
{
"docid": "65b34f78e3b8d54ad75d32cdef487dac",
"text": "Recognizing polarity requires a list of polar words and phrases. For the purpose of building such lexicon automatically, a lot of studies have investigated (semi-) unsupervised method of learning polarity of words and phrases. In this paper, we explore to use structural clues that can extract polar sentences from Japanese HTML documents, and build lexicon from the extracted polar sentences. The key idea is to develop the structural clues so that it achieves extremely high precision at the cost of recall. In order to compensate for the low recall, we used massive collection of HTML documents. Thus, we could prepare enough polar sentence corpus.",
"title": ""
},
{
"docid": "b98c34a4be7f86fb9506a6b1620b5d3e",
"text": "A portable civilian GPS spoofer is implemented on a digital signal processor and used to characterize spoofing effects and develop defenses against civilian spoofing. This work is intended to equip GNSS users and receiver manufacturers with authentication methods that are effective against unsophisticated spoofing attacks. The work also serves to refine the civilian spoofing threat assessment by demonstrating the challenges involved in mounting a spoofing attack.",
"title": ""
},
{
"docid": "eb3ce498729d7088a4acf525c6961f94",
"text": "Upon vascular injury, platelets are activated by adhesion to adhesive proteins, such as von Willebrand factor and collagen, or by soluble platelet agonists, such as ADP, thrombin, and thromboxane A(2). These adhesive proteins and soluble agonists induce signal transduction via their respective receptors. The various receptor-specific platelet activation signaling pathways converge into common signaling events that stimulate platelet shape change and granule secretion and ultimately induce the \"inside-out\" signaling process leading to activation of the ligand-binding function of integrin α(IIb)β(3). Ligand binding to integrin α(IIb)β(3) mediates platelet adhesion and aggregation and triggers \"outside-in\" signaling, resulting in platelet spreading, additional granule secretion, stabilization of platelet adhesion and aggregation, and clot retraction. It has become increasingly evident that agonist-induced platelet activation signals also cross talk with integrin outside-in signals to regulate platelet responses. Platelet activation involves a series of rapid positive feedback loops that greatly amplify initial activation signals and enable robust platelet recruitment and thrombus stabilization. Recent studies have provided novel insight into the molecular mechanisms of these processes.",
"title": ""
},
{
"docid": "87552ea79b92986de3ce5306ef0266bc",
"text": "This paper presents a novel secondary frequency and voltage control method for islanded microgrids based on distributed cooperative control. The proposed method utilizes a sparse communication network where each DG unit only requires local and its neighbors’ information to perform control actions. The frequency controller restores the system frequency to the nominal value while maintaining the equal generation cost increment value among DG units. The voltage controller simultaneously achieves the critical bus voltage restoration and accurate reactive power sharing. Subsequently, the case when the DG unit ac-side voltage reaches its limit value is discussed and a controller output limitation method is correspondingly provided to selectively realize the desired control objective. This paper also provides a small-signal dynamic model of the microgrid with the proposed controller to evaluate the system dynamic performance. Finally, simulation results on a microgrid test system are presented to validate the effectiveness of the proposed method.",
"title": ""
},
{
"docid": "91d94889e178a336975644d31affecb1",
"text": "Vehicles face growing security threats as they become increasingly connected with the external world. Hackers, researchers, and car hobbyists have compromised security keys used by the electronic control units (ECUs) on vehicles, modified ECU software, and hacked wireless transmissions from vehicle key fobs and tire monitoring sensors, using low-cost commercially available tools. However, the most damaging security threats to vehicles are only emerging. One such threat is malware, which can infect vehicles in a variety of ways and cause severe consequences. Defending vehicles against malware attacks must address many unique challenges that have not been well addressed in other types of networks. This paper identifies those vehicle-specific challenges, discusses existing solutions and their limitations, and presents a cloud-assisted vehicle malware defense framework that can address these challenges.",
"title": ""
},
{
"docid": "d348d178b17d63ae49cfe6fd4e052758",
"text": "BACKGROUND & AIMS\nChanges in gut microbiota have been reported to alter signaling mechanisms, emotional behavior, and visceral nociceptive reflexes in rodents. However, alteration of the intestinal microbiota with antibiotics or probiotics has not been shown to produce these changes in humans. We investigated whether consumption of a fermented milk product with probiotic (FMPP) for 4 weeks by healthy women altered brain intrinsic connectivity or responses to emotional attention tasks.\n\n\nMETHODS\nHealthy women with no gastrointestinal or psychiatric symptoms were randomly assigned to groups given FMPP (n = 12), a nonfermented milk product (n = 11, controls), or no intervention (n = 13) twice daily for 4 weeks. The FMPP contained Bifidobacterium animalis subsp Lactis, Streptococcus thermophiles, Lactobacillus bulgaricus, and Lactococcus lactis subsp Lactis. Participants underwent functional magnetic resonance imaging before and after the intervention to measure brain response to an emotional faces attention task and resting brain activity. Multivariate and region of interest analyses were performed.\n\n\nRESULTS\nFMPP intake was associated with reduced task-related response of a distributed functional network (49% cross-block covariance; P = .004) containing affective, viscerosensory, and somatosensory cortices. Alterations in intrinsic activity of resting brain indicated that ingestion of FMPP was associated with changes in midbrain connectivity, which could explain the observed differences in activity during the task.\n\n\nCONCLUSIONS\nFour-week intake of an FMPP by healthy women affected activity of brain regions that control central processing of emotion and sensation.",
"title": ""
},
{
"docid": "fd809ccbf0042b84147e88e4009ab894",
"text": "Professional sports is a roughly $500 billion dollar industry that is increasingly data-driven. In this paper we show how machine learning can be applied to generate a model that could lead to better on-field decisions by managers of professional baseball teams. Specifically we show how to use regularized linear regression to learn pitcher-specific predictive models that can be used to help decide when a starting pitcher should be replaced. A key step in the process is our method of converting categorical variables (e.g., the venue in which a game is played) into continuous variables suitable for the regression. Another key step is dealing with situations in which there is an insufficient amount of data to compute measures such as the effectiveness of a pitcher against specific batters. \n For each season we trained on the first 80% of the games, and tested on the rest. The results suggest that using our model could have led to better decisions than those made by major league managers. Applying our model would have led to a different decision 48% of the time. For those games in which a manager left a pitcher in that our model would have removed, the pitcher ended up performing poorly 60% of the time.",
"title": ""
},
{
"docid": "cfebf44f0d3ec7d1ffe76b832704a6d2",
"text": "In practical scenario the transmission of signal or data from source to destination is very challenging. As there is a lot of surrounding environmental changes which influence the transmitted signal. The ISI, multipath will corrupt the data and this data appears at the receiver or destination. Due to this time varying multipath fading different channel estimation filter at the receiver are used to improve the performance. The performance of LMS and RLS adaptive algorithms are analyzed over a AWGN and Rayleigh channels under different multipath fading environments for estimating the time-varying channel.",
"title": ""
},
{
"docid": "f1018166da0922b5428bd1b37e2120ee",
"text": "In many water distribution systems, a significant amount of water is lost because of leakage during transit from the water treatment plant to consumers. As a result, water leakage detection and localization have been a consistent focus of research. Typically, diagnosis or detection systems based on sensor signals incur significant computational and time costs, whereas the system performance depends on the features selected as input to the classifier. In this paper, to solve this problem, we propose a novel, fast, and accurate water leakage detection system with an adaptive design that fuses a one-dimensional convolutional neural network and a support vector machine. We also propose a graph-based localization algorithm to determine the leakage location. An actual water pipeline network is represented by a graph network and it is assumed that leakage events occur at virtual points on the graph. The leakage location at which costs are minimized is estimated by comparing the actual measured signals with the virtually generated signals. The performance was validated on a wireless sensor network based test bed, deployed on an actual WDS. Our proposed methods achieved 99.3% leakage detection accuracy and a localization error of less than 3 m.",
"title": ""
},
{
"docid": "bcb1688082db907ceb5cb51cc4df203e",
"text": "Decision-making is one of the most important functions of managers in any kind of organization. Among different manager's decisions strategic decision-making is a complex process that must be understood completely before it can be practiced effectively. Those responsible for strategic decision-making face a task of extreme complexity and ambiguity. For these reasons, over the past decades, numerous studies have been conducted to the construction of models to aid managers and executives in making better decisions concerning the complex and highly uncertain business environment. In spite of much work that has been conducted in the area of strategic decision-making especially during the 1990's, we still know little about strategic decision-making process and factors affecting it. This paper builds on previous theoretical and empirical studies to determine the extent to which contextual factors impact the strategic decision-making processes. Results showed that researches on contextual factors effecting strategic decision-making process are either limited or have produced contradictory results, especially studies relating decision’s familiarity, magnitude of impact, organizational size, firm’s performance, dynamism, hostility, heterogeneity, industry, cognitive diversity, cognitive conflict, and manager’s need for achievement to strategic decision-making processes. Thus, the study of strategic decision-making process remains very important and much more empirical research is required before any definitive conclusion can be reached.",
"title": ""
},
{
"docid": "006455a7f150b51530c4da6312e36975",
"text": "Traditional peer-to-peer (P2P) networks do not provide service differentiation and incentive for users. Consequently, users can obtain services without themselves contributing any information or service to a P2P community. This leads to the \"free-riding\" and \"tragedy of the commons\" problems, in which the majority of information requests are directed towards a small number of P2P nodes willing to share their resources. The objective of this work is to enable service differentiation in a P2P network based on the amount of services each node has provided to its community, thereby encouraging all network nodes to share resources. We first introduce a resource distribution mechanism between all information sharing nodes. The mechanism is driven by a distributed algorithm which has linear time complexity and guarantees Pareto-optimal resource allocation. Besides giving incentive, the mechanism distributes resources in a way that increases the aggregate utility of the whole network. Second, we model the whole resource request and distribution process as a competition game between the competing nodes. We show that this game has a Nash equilibrium and is collusion-proof. To realize the game, we propose a protocol in which all competing nodes interact with the information providing node to reach Nash equilibrium in a dynamic and efficient manner. Experimental results are reported to illustrate that the protocol achieves its service differentiation objective and can induce productive information sharing by rational network nodes. Finally, we show that our protocol can properly adapt to different node arrival and departure events, and to different forms of network congestion.",
"title": ""
},
{
"docid": "6ad7c4f9a76ff6d036d691923033c8c7",
"text": "Aerial imagery captured via unmanned aerial vehicles (UAVs) is playing an increasingly important role in disaster response. Unlike satellite imagery, aerial imagery can be captured and processed within hours rather than days. In addition, the spatial resolution of aerial imagery is an order of magnitude higher than the imagery produced by the most sophisticated commercial satellites today. Both the United States Federal Emergency Management Agency (FEMA) and the European Commission's Joint Research Center (JRC) have noted that aerial imagery will inevitably present a big data challenge. The purpose of this article is to get ahead of this future challenge by proposing a hybrid crowdsourcing and real-time machine learning solution to rapidly process large volumes of aerial data for disaster response in a time-sensitive manner. Crowdsourcing can be used to annotate features of interest in aerial images (such as damaged shelters and roads blocked by debris). These human-annotated features can then be used to train a supervised machine learning system to learn to recognize such features in new unseen images. In this article, we describe how this hybrid solution for image analysis can be implemented as a module (i.e., Aerial Clicker) to extend an existing platform called Artificial Intelligence for Disaster Response (AIDR), which has already been deployed to classify microblog messages during disasters using its Text Clicker module and in response to Cyclone Pam, a category 5 cyclone that devastated Vanuatu in March 2015. The hybrid solution we present can be applied to both aerial and satellite imagery and has applications beyond disaster response such as wildlife protection, human rights, and archeological exploration. As a proof of concept, we recently piloted this solution using very high-resolution aerial photographs of a wildlife reserve in Namibia to support rangers with their wildlife conservation efforts (SAVMAP project, http://lasig.epfl.ch/savmap ). The results suggest that the platform we have developed to combine crowdsourcing and machine learning to make sense of large volumes of aerial images can be used for disaster response.",
"title": ""
},
{
"docid": "ed7ce515d15c506ddcaab29fdc7eab01",
"text": "Normally, the primary purpose of an information display is to convey information. If information displays can be aesthetically interesting, that might be an added bonus. This paper considers an experiment in reversing this imperative. It describes the Kandinsky system which is designed to create displays which are first aesthetically interesting, and then as an added bonus, able to convey information. The Kandinsky system works on the basis of aesthetic properties specified by an artist (in a visual form). It then explores a space of collages composed from information bearing images, using an optimization technique to find compositions which best maintain the properties of the artist's aesthetic expression.",
"title": ""
},
{
"docid": "e3b91b1133a09d7c57947e2cd85a17c7",
"text": "Although mobile devices are gaining more and more capabilities (i.e. CPU power, memory, connectivity, ...), they still fall short to execute complex rich media and data analysis applications. Offloading to the cloud is not always a solution, because of the high WAN latencies, especially for applications with real-time constraints such as augmented reality. Therefore the cloud has to be moved closer to the mobile user in the form of cloudlets. Instead of moving a complete virtual machine from the cloud to the cloudlet, we propose a more fine grained cloudlet concept that manages applications on a component level. Cloudlets do not have to be fixed infrastructure close to the wireless access point, but can be formed in a dynamic way with any device in the LAN network with available resources. We present a cloudlet architecture together with a prototype implementation, showing the advantages and capabilities for a mobile real-time augmented reality application.",
"title": ""
},
{
"docid": "d60ea5f80654adeb4442f6aaa0c2f164",
"text": "Repetition and semantic-associative priming effects have been demonstrated for words in nonstructured contexts (i.e., word pairs or lists of words) in numerous behavioral and electrophysiological studies. The processing of a word has thus been shown to benefit from the prior presentation of an identical or associated word in the absence of a constraining context. An examination of such priming effects for words that are embedded within a meaningful discourse context provides information about the interaction of different levels of linguistic analysis. This article reviews behavioral and electrophysiological research that has examined the processing of repeated and associated words in sentence and discourse contexts. It provides examples of the ways in which eye tracking and event-related potentials might be used to further explore priming effects in discourse. The modulation of lexical priming effects by discourse factors suggests the interaction of information at different levels in online language comprehension.",
"title": ""
},
{
"docid": "fa604c528539ac5cccdbd341a9aebbf7",
"text": "BACKGROUND\nAn understanding of p-values and confidence intervals is necessary for the evaluation of scientific articles. This article will inform the reader of the meaning and interpretation of these two statistical concepts.\n\n\nMETHODS\nThe uses of these two statistical concepts and the differences between them are discussed on the basis of a selective literature search concerning the methods employed in scientific articles.\n\n\nRESULTS/CONCLUSIONS\nP-values in scientific studies are used to determine whether a null hypothesis formulated before the performance of the study is to be accepted or rejected. In exploratory studies, p-values enable the recognition of any statistically noteworthy findings. Confidence intervals provide information about a range in which the true value lies with a certain degree of probability, as well as about the direction and strength of the demonstrated effect. This enables conclusions to be drawn about the statistical plausibility and clinical relevance of the study findings. It is often useful for both statistical measures to be reported in scientific articles, because they provide complementary types of information.",
"title": ""
},
{
"docid": "dd9a09431e7816e6774aaf7b2ce33a6f",
"text": "Image based social networks are among the most popular social networking services in recent years. With tremendous images uploaded everyday, understanding users’ preferences to the user-generated images and recommending them to users have become an urgent need. However, this is a challenging task. On one hand, we have to overcome the extremely data sparsity issue in image recommendation. On the other hand, we have to model the complex aspects that influence users’ preferences to these highly subjective content from the heterogeneous data. In this paper, we develop an explainable social contextual image recommendation model to simultaneously explain and predict users’ preferences to images. Specifically, in addition to user interest modeling in the standard recommendation, we identify three key aspects that affect each user’s preference on the social platform, where each aspect summarizes a contextual representation from the complex relationships between users and images. We design a hierarchical attention model in recommendation process given the three contextual aspects. Particularly, the bottom layered attention networks learn to select informative elements of each aspect from heterogeneous data, and the top layered attention network learns to score the aspect importance of the three identified aspects for each user. In this way, we could overcome the data sparsity issue by leveraging the social contextual aspects from heterogeneous data, and explain the underlying reasons for each user’s behavior with the learned hierarchial attention scores. Extensive experimental results on realworld datasets clearly show the superiority of our proposed model.",
"title": ""
}
] |
scidocsrr
|
02fae2744068b59a24fd8deff0e950f7
|
A Deep Neural Network for Modeling Music
|
[
{
"docid": "59b928fab5d53519a0a020b7461690cf",
"text": "Musical genres are categorical descriptions that are used to describe music. They are commonly used to structure the increasing amounts of music available in digital form on the Web and are important for music information retrieval. Genre categorization for audio has traditionally been performed manually. A particular musical genre is characterized by statistical properties related to the instrumentation, rhythmic structure and form of its members. In this work, algorithms for the automatic genre categorization of audio signals are described. More specifically, we propose a set of features for representing texture and instrumentation. In addition a novel set of features for representing rhythmic structure and strength is proposed. The performance of those feature sets has been evaluated by training statistical pattern recognition classifiers using real world audio collections. Based on the automatic hierarchical genre classification two graphical user interfaces for browsing and interacting with large audio collections have been developed.",
"title": ""
},
{
"docid": "c87fa26d080442b1527fcc6a74df7ec4",
"text": "We present MIRtoolbox, an integrated set of functions written in Matlab, dedicated to the extraction of musical features from audio files. The design is based on a modular framework: the different algorithms are decomposed into stages, formalized using a minimal set of elementary mechanisms, and integrating different variants proposed by alternative approaches – including new strategies we have developed –, that users can select and parametrize. This paper offers an overview of the set of features, related, among others, to timbre, tonality, rhythm or form, that can be extracted with MIRtoolbox. Four particular analyses are provided as examples. The toolbox also includes functions for statistical analysis, segmentation and clustering. Particular attention has been paid to the design of a syntax that offers both simplicity of use and transparent adaptiveness to a multiplicity of possible input types. Each feature extraction method can accept as argument an audio file, or any preliminary result from intermediary stages of the chain of operations. Also the same syntax can be used for analyses of single audio files, batches of files, series of audio segments, multichannel signals, etc. For that purpose, the data and methods of the toolbox are organised in an object-oriented architecture. 1. MOTIVATION AND APPROACH MIRToolbox is a Matlab toolbox dedicated to the extraction of musically-related features from audio recordings. It has been designed in particular with the objective of enabling the computation of a large range of features from databases of audio files, that can be applied to statistical analyses. Few softwares have been proposed in this area. The most important one, Marsyas [1], provides a general architecture for connecting audio, soundfiles, signal processing blocks and machine learning (see section 5 for more details). One particularity of our own approach relies in the use of the Matlab computing environment, which offers good visualisation capabilities and gives access to a large variety of other toolboxes. In particular, the MIRToolbox makes use of functions available in recommended public-domain toolboxes such as the Auditory Toolbox [2], NetLab [3], or SOMtoolbox [4]. Other toolboxes, such as the Statistics toolbox or the Neural Network toolbox from MathWorks, can be directly used for further analyses of the features extracted by MIRToolbox without having to export the data from one software to another. Such computational framework, because of its general objectives, could be useful to the research community in Music Information Retrieval (MIR), but also for educational purposes. For that reason, particular attention has been paid concerning the ease of use of the toolbox. In particular, complex analytic processes can be designed using a very simple syntax, whose expressive power comes from the use of an object-oriented paradigm. The different musical features extracted from the audio files are highly interdependent: in particular, as can be seen in figure 1, some features are based on the same initial computations. In order to improve the computational efficiency, it is important to avoid redundant computations of these common components. Each of these intermediary components, and the final musical features, are therefore considered as building blocks that can been freely articulated one with each other. 
Besides, in keeping with the objective of optimal ease of use of the toolbox, each building block has been conceived in a way that it can adapt to the type of input data. For instance, the computation of the MFCCs can be based on the waveform of the initial audio signal, or on the intermediary representations such as spectrum, or mel-scale spectrum (see Fig. 1). Similarly, autocorrelation is computed for different range of delays depending on the type of input data (audio waveform, envelope, spectrum). This decomposition of all the set of feature extraction algorithms into a common set of building blocks has the advantage of offering a synthetic overview of the different approaches studied in this domain of research. 2. FEATURE EXTRACTION 2.1. Feature overview Figure 1 shows an overview of the main features implemented in the toolbox. All the different processes start from the audio signal (on the left) and form a chain of operations proceeding to right. The vertical disposition of the processes indicates an increasing order of complexity of the operations, from simplest computation (top) to more detailed auditory modelling (bottom). Each musical feature is related to one of the musical dimensions traditionally defined in music theory. Boldface characters highlight features related to pitch, to tonality (chromagram, key strength and key Self-Organising Map, or SOM) and to dynamics (Root Mean Square, or RMS, energy). Bold italics indicate features related to rhythm, namely tempo, pulse clarity and fluctuation. Simple italics highlight a large set of features that can be associated to timbre. Among them, all the operators in grey italics can be in fact applied to many others different representations: for instance, statistical moments such as centroid, kurtosis, etc., can be applied to either spectra, envelopes, but also to histograms based on any given feature. One of the simplest features, zero-crossing rate, is based on a simple description of the audio waveform itself: it counts the number of sign changes of the waveform. Signal energy is computed using root mean square, or RMS [5]. The envelope of the audio signal offers timbral characteristics of isolated sonic event. FFT-based spectrum can be computed along the frequency domain or along Mel-bands, with linear or decibel energy scale, and",
"title": ""
}
] |
[
{
"docid": "789a49a47e7d1a4096e69a08dcf23b5e",
"text": "Osha Saeed Al Neyadi is a B.Ed graduate from Al Ain Women's College. She now teaches at Asim Bin Thabet Primary School for Boys in Al Markaneya, Al Ain. Introduction This is a study of the effects of using games to practice vocabulary in the teaching of English to young learners. Teaching vocabulary through games was chosen as the focus area for my research for several reasons. Firstly, I observed during the course of many teaching practice placements during my undergraduate studies that new vocabulary in English lessons in UAE schools is mostly taught through the use of flashcards. Secondly, I observed that it is often taught out of context, as isolated words, and thirdly, I noticed that there is minimal variation in the teaching style used in English language teaching in UAE schools. The study was conducted with twenty-nine students in Grade Six in a primary girls' school in in the United Arab Emirates (UAE). According to my observations of how vocabulary is taught in schools, it relies on drilling the vocabulary to get the students to produce the correct pronunciation of words. Other strategies such as implementing games are very occasionally used to teach vocabulary; however, they are only used for a limited time. Using games is considered time consuming, so teachers prefer to use drilling as an immediate way of teaching and practicing vocabulary. In the school where the research was conducted, Arabic is the medium of instruction. In English class, students are encouraged to speak in English when they answer, and while they interact with their classmates. Translation is generally avoided, but it is sometimes used to clarify difficult linguistic concepts, and also to clarify meaning.",
"title": ""
},
{
"docid": "cfd8458a802341eb20ffc14644cd9fad",
"text": "Wireless Sensor Networks (WSNs) are crucial in supporting continuous environmental monitoring, where sensor nodes are deployed and must remain operational to collect and transfer data from the environment to a base-station. However, sensor nodes have limited energy in their primary power storage unit, and this energy may be quickly drained if the sensor node remains operational over long periods of time. Therefore, the idea of harvesting ambient energy from the immediate surroundings of the deployed sensors, to recharge the batteries and to directly power the sensor nodes, has recently been proposed. The deployment of energy harvesting in environmental field systems eliminates the dependency of sensor nodes on battery power, drastically reducing the maintenance costs required to replace batteries. In this article, we review the state-of-the-art in energy-harvesting WSNs for environmental monitoring applications, including Animal Tracking, Air Quality Monitoring, Water Quality Monitoring, and Disaster Monitoring to improve the ecosystem and human life. In addition to presenting the technologies for harvesting energy from ambient sources and the protocols that can take advantage of the harvested energy, we present challenges that must be addressed to further advance energy-harvesting-based WSNs, along with some future work directions to address these challenges.",
"title": ""
},
{
"docid": "9c349ef0f3a48eaeaf678b8730d4b82c",
"text": "This paper discusses the effectiveness of the EEG signal for human identification using four or less of channels of two different types of EEG recordings. Studies have shown that the EEG signal has biometric potential because signal varies from person to person and impossible to replicate and steal. Data were collected from 10 male subjects while resting with eyes open and eyes closed in 5 separate sessions conducted over a course of two weeks. Features were extracted using the wavelet packet decomposition and analyzed to obtain the feature vectors. Subsequently, the neural networks algorithm was used to classify the feature vectors. Results show that, whether or not the subjects’ eyes were open are insignificant for a 4– channel biometrics system with a classification rate of 81%. However, for a 2–channel system, the P4 channel should not be included if data is acquired with the subjects’ eyes open. It was observed that for 2– channel system using only the C3 and C4 channels, a classification rate of 71% was achieved. Keywords—Biometric, EEG, Wavelet Packet Decomposition, Neural Networks",
"title": ""
},
{
"docid": "6b7c0ce61dba453ac26684b9214a752f",
"text": "After a century of controversy, the notion that the immune system regulates cancer development is experiencing a new resurgence. An overwhelming amount of data from animal models--together with compelling data from human patients--indicate that a functional cancer immunosurveillance process indeed exists that acts as an extrinsic tumor suppressor. However, it has also become clear that the immune system can facilitate tumor progression, at least in part, by sculpting the immunogenic phenotype of tumors as they develop. The recognition that immunity plays a dual role in the complex interactions between tumors and the host prompted a refinement of the cancer immunosurveillance hypothesis into one termed \"cancer immunoediting.\" In this review, we summarize the history of the cancer immunosurveillance controversy and discuss its resolution and evolution into the three Es of cancer immunoediting--elimination, equilibrium, and escape.",
"title": ""
},
{
"docid": "a40fab738589a9efbf3f87b6c7668601",
"text": "AUTOSAR supports the re-use of software and hardware components of automotive electronic systems. Therefore, amongst other things, AUTOSAR defines a software architecture that is used to decouple software components from hardware devices. This paper gives an overview about the different layers of that architecture. In addition, the upper most layer that concerns the application specific part of automotive electronic systems is presented.",
"title": ""
},
{
"docid": "e94e4d9a63fab5f10ef21ce0758292fd",
"text": "Mobile devices are gradually changing people's computing behaviors. However, due to the limitations of physical size and power consumption, they are not capable of delivering a 3D graphics rendering experience comparable to desktops. Many applications with intensive graphics rendering workloads are unable to run on mobile platforms directly. This issue can be addressed with the idea of remote rendering: the heavy 3D graphics rendering computation runs on a powerful server and the rendering results are transmitted to the mobile client for display. However, the simple remote rendering solution inevitably suffers from the large interaction latency caused by wireless networks, and is not acceptable for many applications that have very strict latency requirements.\n In this article, we present an advanced low-latency remote rendering system that assists mobile devices to render interactive 3D graphics in real-time. Our design takes advantage of an image based rendering technique: 3D image warping, to synthesize the mobile display from the depth images generated on the server. The research indicates that the system can successfully reduce the interaction latency while maintaining the high rendering quality by generating multiple depth images at the carefully selected viewpoints. We study the problem of viewpoint selection, propose a real-time reference viewpoint prediction algorithm, and evaluate the algorithm performance with real-device experiments.",
"title": ""
},
{
"docid": "17c8766c5fcc9b6e0d228719291dcea5",
"text": "In this study we examined the social behaviors of 4- to 12-year-old children with autism spectrum disorders (ASD; N = 24) during three tradic interactions with an adult confederate and an interaction partner, where the interaction partner varied randomly among (1) another adult human, (2) a touchscreen computer game, and (3) a social dinosaur robot. Children spoke more in general, and directed more speech to the adult confederate, when the interaction partner was a robot, as compared to a human or computer game interaction partner. Children spoke as much to the robot as to the adult interaction partner. This study provides the largest demonstration of social human-robot interaction in children with autism to date. Our findings suggest that social robots may be developed into useful tools for social skills and communication therapies, specifically by embedding social interaction into intrinsic reinforcers and motivators.",
"title": ""
},
{
"docid": "9ffdee7d929c8b5efb1baf0b2b46a7a4",
"text": "Bellemare et al. (2016) introduced the notion of a pseudo-count, derived from a density model, to generalize count-based exploration to nontabular reinforcement learning. This pseudocount was used to generate an exploration bonus for a DQN agent and combined with a mixed Monte Carlo update was sufficient to achieve state of the art on the Atari 2600 game Montezuma’s Revenge. We consider two questions left open by their work: First, how important is the quality of the density model for exploration? Second, what role does the Monte Carlo update play in exploration? We answer the first question by demonstrating the use of PixelCNN, an advanced neural density model for images, to supply a pseudo-count. In particular, we examine the intrinsic difficulties in adapting Bellemare et al.’s approach when assumptions about the model are violated. The result is a more practical and general algorithm requiring no special apparatus. We combine PixelCNN pseudo-counts with different agent architectures to dramatically improve the state of the art on several hard Atari games. One surprising finding is that the mixed Monte Carlo update is a powerful facilitator of exploration in the sparsest of settings, including Montezuma’s Revenge.",
"title": ""
},
{
"docid": "d1c2936521b0a3270163ea4d9123e4da",
"text": "Large-scale instance-level image retrieval aims at retrieving specific instances of objects or scenes. Simultaneously retrieving multiple objects in a test image adds to the difficulty of the problem, especially if the objects are visually similar. This paper presents an efficient approach for per-exemplar multi-label image classification, which targets the recognition and localization of products in retail store images. We achieve runtime efficiency through the use of discriminative random forests, deformable dense pixel matching and genetic algorithm optimization. Cross-dataset recognition is performed, where our training images are taken in ideal conditions with only one single training image per product label, while the evaluation set is taken using a mobile phone in real-life scenarios in completely different conditions. In addition, we provide a large novel dataset and labeling tools for products image search, to motivate further research efforts on multi-label retail products image classification. The proposed approach achieves promising results in terms of both accuracy and runtime efficiency on 680 annotated images of our dataset, and 885 test images of GroZi-120 dataset. We make our dataset of 8350 different product images and the 680 test images from retail stores with complete annotations available to the wider community.",
"title": ""
},
{
"docid": "5cde30d7be98b6247e5f856a3bc898a7",
"text": "A novel ridge-port Rotman-lens is described, which operates as a lens with tapered slot-line ports. The lens parallel-plates mirror the ridge-ports to tapered slot-line ports. The lens height is half the height of the antenna array row, and two lenses can be stacked and feed one dual-polarized antenna array row, thus yielding a compact antenna system. The lens is air-filled, so it is easy to manufacture and repeatable in performance with no dielectric tolerances and losses, and it is lightweight compared to a dielectric lens. The lens with elongated tapered ports operates down to the antenna array low frequency, thus utilizing the large antenna bandwidth. These features make the ridge-port air-filled lens more useful than a conventional microstrip Rotman lens.",
"title": ""
},
{
"docid": "1208d83ec167185bcf241a8b2bd67057",
"text": "This paper presents a novel one-layer recurrent neural network modeled by means of a differential inclusion for solving nonsmooth optimization problems, in which the number of neurons in the proposed neural network is the same as the number of decision variables of optimization problems. Compared with existing neural networks for nonsmooth optimization problems, the global convexity condition on the objective functions and constraints is relaxed, which allows the objective functions and constraints to be nonconvex. It is proven that the state variables of the proposed neural network are convergent to optimal solutions if a single design parameter in the model is larger than a derived lower bound. Numerical examples with simulation results substantiate the effectiveness and illustrate the characteristics of the proposed neural network.",
"title": ""
},
{
"docid": "01055f9b1195cd7d03b404f3d530bb55",
"text": "In recent years there has been an increasing interest in approaches to scientific summarization that take advantage of the citations a research paper has received in order to extract its main contributions. In this context, the CL-SciSumm 2017 Shared Task has been proposed to address citation-based information extraction and summarization. In this paper we present several systems to address three of the CL-SciSumm tasks. Notably, unsupervised systems to match citing and cited sentences (Task 1A), a supervised approach to identify the type of information being cited (Task 1B), and a supervised citation-based summarizer (Task 2).",
"title": ""
},
{
"docid": "b1f98cbb045f8c15f53d284c9fa9d881",
"text": "If the pace of increase in life expectancy in developed countries over the past two centuries continues through the 21st century, most babies born since 2000 in France, Germany, Italy, the UK, the USA, Canada, Japan, and other countries with long life expectancies will celebrate their 100th birthdays. Although trends differ between countries, populations of nearly all such countries are ageing as a result of low fertility, low immigration, and long lives. A key question is: are increases in life expectancy accompanied by a concurrent postponement of functional limitations and disability? The answer is still open, but research suggests that ageing processes are modifiable and that people are living longer without severe disability. This finding, together with technological and medical development and redistribution of work, will be important for our chances to meet the challenges of ageing populations.",
"title": ""
},
{
"docid": "079308f1068f8c2e9fc68c941d4ad763",
"text": "Location-based social networks (LBSNs) have attracted an increasing number of users in recent years, resulting in large amounts of geographical and social data. Such LBSN data provide an unprecedented opportunity to study the human movement from their socio-spatial behavior, in order to improve location-based applications like location recommendation. As users can check-in at new places, traditional work on location prediction that relies on mining a user’s historical moving trajectories fails as it is not designed for the cold-start problem of recommending new check-ins. While previous work on LBSNs attempting to utilize a user’s social connections for location recommendation observed limited help from social network information. In this work, we propose to address the cold-start location recommendation problem by capturing the correlations between social networks and geographical distance on LBSNs with a geo-social correlation model. The experimental results on a real-world LBSN dataset demonstrate that our approach properly models the geo-social correlations of a user’s cold-start check-ins and significantly improves the location recommendation performance.",
"title": ""
},
{
"docid": "3b2544b08907a5bc235add86c4034f7e",
"text": "Understanding and managing landslides and soil erosion can be challenging in British Columbia. Hillslope processes occur on a complex template, composed of a wide array of topographies, climates, geologies, and ecosystems (Chapter 2, “Physiography of British Columbia”). British Columbia is a diverse province with mountain ranges and incised plateaus. The mountainous topography is responsible for creating a variety of climates. On the Coast, mountain slopes support rainforests under a maritime climate, and at higher elevations have an extensive cover of snow and ice. The interior mountains are drier and colder in winter. In the southern Interior, some valleys are semi-arid grasslands. In the northern Interior, mountain ranges and plateaus have sporadic permafrost. Underlying these climates and topographies is a diverse geology. Bedrock types and surficial materials vary greatly throughout the province (Chapter 2, “Physiography of British Columbia”). In general though, bedrock types range from flat-lying sedimentary rock in the northeast, to faulted and folded sedimentary rocks in the Rocky Mountains, extrusive volcanics in the central Interior, and igneous intrusive rock on the Coast. Surficial sediments are eroded from this bedrock and re-deposited by glaciers, water, wind, and mass movement. Some sediments are deposited directly by glaciers (till), some settle out in running water (fluvial), in lakes (lacustrine/glaciolacustrine), or in seas (marine/glaciomarine), and others are deposited by wind (eolian) or by landslides (colluvium). This chapter is divided into three parts. The first describes landslides and landslide processes, the second explains soil erosion and related processes, and the third examines the reading and interpretation of the landscape. However, several important geomorphic processes are not included in this chapter. Although snow avalanches are important hazards in western Canada, with thousands occurring each year and claiming more than 10 lives annually, a treatment of these avalanche processes is beyond the scope of this chapter. Snow avalanches are further discussed in Chapter 9 (“Forest Management Effects on Hillslope Processes”), and an excellent discussion of snow avalanche processes is given in the Ministry of Forests and Range’s Land Management Handbook 55 (Weir 2002). Periglacial processes, such as nivation and solifluction, are also ubiquitous in British Columbia’s mountains. Although these processes may play a role in the priming of alpine areas for debris flows and rock slides, a large body of literature already exists on periglacial geomorphology. Hillslope Processes Chapter 8",
"title": ""
},
{
"docid": "1670dda371458257c8f86390b398b3f8",
"text": "Latent topic model such as Latent Dirichlet Allocation (LDA) has been designed for text processing and has also demonstrated success in the task of audio related processing. The main idea behind LDA assumes that the words of each document arise from a mixture of topics, each of which is a multinomial distribution over the vocabulary. When applying the original LDA to process continuous data, the wordlike unit need be first generated by vector quantization (VQ). This data discretization usually results in information loss. To overcome this shortage, this paper introduces a new topic model named GaussianLDA for audio retrieval. In the proposed model, we consider continuous emission probability, Gaussian instead of multinomial distribution. This new topic model skips the vector quantization and directly models each topic as a Gaussian distribution over audio features. It avoids discretization by this way and integrates the procedure of clustering. The experiments of audio retrieval demonstrate that GaussianLDA achieves better performance than other compared methods. & 2013 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "4d46796427b515f24ced47acf14f041e",
"text": "Aripiprazole is a novel atypical antipsychotic for the treatment of schizophrenia. It is a D2 receptor partial agonist with partial agonist activity at 5-HT1A receptors and antagonist activity at 5-HT2A receptors. The long-term efficacy and safety of aripiprazole (30 mg/d) relative to haloperidol (10 mg/d) were investigated in two 52-wk, randomized, double-blind, multicentre studies (using similar protocols which were prospectively identified to be pooled for analysis) in 1294 patients in acute relapse with a diagnosis of chronic schizophrenia and who had previously responded to antipsychotic medications. Aripiprazole demonstrated long-term efficacy that was comparable or superior to haloperidol across all symptoms measures, including significantly greater improvements for PANSS negative subscale scores and MADRS total score (p<0.05). The time to discontinuation for any reason was significantly greater with aripiprazole than with haloperidol (p=0.0001). Time to discontinuation due to adverse events or lack of efficacy was significantly greater with aripiprazole than with haloperidol (p=0.0001). Aripiprazole was associated with significantly lower scores on all extrapyramidal symptoms assessments than haloperidol (p<0.001). In summary, aripiprazole demonstrated efficacy equivalent or superior to haloperidol with associated benefits for safety and tolerability. Aripiprazole represents a promising new option for the long-term treatment of schizophrenia.",
"title": ""
},
{
"docid": "dbd9f188d32a2397b5cf26dc33788645",
"text": "Salinization, a widespread threat to the structure and ecological functioning of inland and coastal wetlands, is currently occurring at an unprecedented rate and geographic scale. The causes of salinization are diverse and include alterations to freshwater flows, land-clearance, irrigation, disposal of wastewater effluent, sea level rise, storm surges, and applications of de-icing salts. Climate change and anthropogenic modifications to the hydrologic cycle are expected to further increase the extent and severity of wetland salinization. Salinization alters the fundamental physicochemical nature of the soil-water environment, increasing ionic concentrations and altering chemical equilibria and mineral solubility. Increased concentrations of solutes, especially sulfate, alter the biogeochemical cycling of major elements including carbon, nitrogen, phosphorus, sulfur, iron, and silica. The effects of salinization on wetland biogeochemistry typically include decreased inorganic nitrogen removal (with implications for water quality and climate regulation), decreased carbon storage (with implications for climate regulation and wetland accretion), and increased generation of toxic sulfides (with implications for nutrient cycling and the health/functioning of wetland biota). Indeed, increased salt and sulfide concentrations induce physiological stress in wetland biota and ultimately can result in large shifts in wetland communities and their associated ecosystem functions. The productivity and composition of freshwater species assemblages will be highly altered, and there is a high potential for the disruption of existing interspecific interactions. Although there is a wealth of information on how salinization impacts individual ecosystem components, relatively few studies have addressed the complex and often non-linear feedbacks that determine ecosystem-scale responses or considered how wetland salinization will affect landscape-level processes. Although the salinization of wetlands may be unavoidable in many cases, these systems may also prove to be a fertile testing ground for broader ecological theories including (but not limited to): investigations into alternative stable states and tipping points, trophic cascades, disturbance-recovery processes, and the role of historical events and landscape context in driving community response to disturbance.",
"title": ""
},
{
"docid": "dea235c392f876cae8004166209ace3d",
"text": "Vehicular ad hoc networking is an emerging technology for future on-the-road communications. Due to the virtue of vehicle-to-vehicle and vehicle-to-infrastructure communications, vehicular ad hoc networks (VANETs) are expected to enable a plethora of communication-based automotive applications including diverse in-vehicle infotainment applications and road safety services. Even though vehicles are organized mostly in an ad hoc manner in the network topology, directly applying the existing communication approaches designed for traditional mobile ad hoc networks to large-scale VANETs with fast-moving vehicles can be ineffective and inefficient. To achieve success in a vehicular environment, VANET-specific communication solutions are imperative. In this paper, we provide a comprehensive overview of various radio channel access protocols and resource management approaches, and discuss their suitability for infotainment and safety service support in VANETs. Further, we present recent research activities and related projects on vehicular communications. Potential challenges and open research issues are also",
"title": ""
},
{
"docid": "7e8b58b88a1a139f9eb6642a69eb697a",
"text": "We present a fully convolutional autoencoder for light fields, which jointly encodes stacks of horizontal and vertical epipolar plane images through a deep network of residual layers. The complex structure of the light field is thus reduced to a comparatively low-dimensional representation, which can be decoded in a variety of ways. The different pathways of upconvolution we currently support are for disparity estimation and separation of the lightfield into diffuse and specular intrinsic components. The key idea is that we can jointly perform unsupervised training for the autoencoder path of the network, and supervised training for the other decoders. This way, we find features which are both tailored to the respective tasks and generalize well to datasets for which only example light fields are available. We provide an extensive evaluation on synthetic light field data, and show that the network yields good results on previously unseen real world data captured by a Lytro Illum camera and various gantries.",
"title": ""
}
] |
scidocsrr
|
1b06ad9789ade91b356d083891fa4581
|
Android Malware Detection using Large-scale Network Representation Learning
|
[
{
"docid": "5757d96fce3e0b3b3303983b15d0030d",
"text": "Malicious applications pose a threat to the security of the Android platform. The growing amount and diversity of these applications render conventional defenses largely ineffective and thus Android smartphones often remain unprotected from novel malware. In this paper, we propose DREBIN, a lightweight method for detection of Android malware that enables identifying malicious applications directly on the smartphone. As the limited resources impede monitoring applications at run-time, DREBIN performs a broad static analysis, gathering as many features of an application as possible. These features are embedded in a joint vector space, such that typical patterns indicative for malware can be automatically identified and used for explaining the decisions of our method. In an evaluation with 123,453 applications and 5,560 malware samples DREBIN outperforms several related approaches and detects 94% of the malware with few false alarms, where the explanations provided for each detection reveal relevant properties of the detected malware. On five popular smartphones, the method requires 10 seconds for an analysis on average, rendering it suitable for checking downloaded applications directly on the device.",
"title": ""
},
{
"docid": "05a4ec72afcf9b724979802b22091fd4",
"text": "Convolutional neural networks (CNNs) have greatly improved state-of-the-art performances in a number of fields, notably computer vision and natural language processing. In this work, we are interested in generalizing the formulation of CNNs from low-dimensional regular Euclidean domains, where images (2D), videos (3D) and audios (1D) are represented, to high-dimensional irregular domains such as social networks or biological networks represented by graphs. This paper introduces a formulation of CNNs on graphs in the context of spectral graph theory. We borrow the fundamental tools from the emerging field of signal processing on graphs, which provides the necessary mathematical background and efficient numerical schemes to design localized graph filters efficient to learn and evaluate. As a matter of fact, we introduce the first technique that offers the same computational complexity than standard CNNs, while being universal to any graph structure. Numerical experiments on MNIST and 20NEWS demonstrate the ability of this novel deep learning system to learn local, stationary, and compositional features on graphs, as long as the graph is well-constructed.",
"title": ""
}
] |
[
{
"docid": "7029d1f66732c45816ce9b7b5554f884",
"text": "The most critical problem in the world is to meet the energy demand, because of steadily increasing energy consumption. Refrigeration systems` electricity consumption has big portion in overall consumption. Therefore, considerable attention has been given to refrigeration capacity modulation system in order to decrease electricity consumption of these systems. Capacity modulation is used to meet exact amount of load at partial load and lowered electricity consumption by avoiding over capacity using. Variable speed refrigeration systems are the most common capacity modulation method for commercially and household purposes. Although the vapor compression refrigeration designed to satisfy the maximum load, they work at partial load conditions most of their life cycle and they are generally regulated as on/off controlled. The experimental chiller system contains four main components: compressor, condenser, expansion device, and evaporator in Fig.1 where this study deals with effects of different control methods on variable speed compressor (VSC) and electronic expansion valve (EEV). This chiller system has a scroll type VSC and a stepper motor controlled EEV.",
"title": ""
},
{
"docid": "0aa0c63a4617bf829753df08c5544791",
"text": "The paper discusses the application program interface (API). Most software projects reuse components exposed through APIs. In fact, current-day software development technologies are becoming inseparable from the large APIs they provide. An API is the interface to implemented functionality that developers can access to perform various tasks. APIs support code reuse, provide high-level abstractions that facilitate programming tasks, and help unify the programming experience. A study of obstacles that professional Microsoft developers faced when learning to use APIs uncovered challenges and resulting implications for API users and designers. The article focuses on the obstacles to learning an API. Although learnability is only one dimension of usability, there's a clear relationship between the two, in that difficult-to-use APIs are likely to be difficult to learn as well. Many API usability studies focus on situations where developers are learning to use an API. The author concludes that as APIs keep growing larger, developers will need to learn a proportionally smaller fraction of the whole. In such situations, the way to foster more efficient API learning experiences is to include more sophisticated means for developers to identify the information and the resources they need-even for well-designed and documented APIs.",
"title": ""
},
{
"docid": "b505c23c5b3c924242ca6cf65fd4efc7",
"text": "Adolescent idiopathic scoliosis is a common disease with an overall prevalence of 0.47-5.2 % in the current literature. The female to male ratio ranges from 1.5:1 to 3:1 and increases substantially with increasing age. In particular, the prevalence of curves with higher Cobb angles is substantially higher in girls than in boys: The female to male ratio rises from 1.4:1 in curves from 10° to 20° up to 7.2:1 in curves >40°. Curve pattern and prevalence of scoliosis is not only influenced by gender, but also by genetic factors and age of onset. These data obtained from school screening programs have to be interpreted with caution, since methods and cohorts of the different studies are not comparable as age groups of the cohorts and diagnostic criteria differ substantially. We do need data from studies with clear standards of diagnostic criteria and study protocols that are comparable to each other.",
"title": ""
},
{
"docid": "2d26560f6ae654a546db8f4463ed87be",
"text": "Linked Data promises to serve as a disruptor of traditional approaches to data management and use, promoting the push from the traditional Web of documents to a Web of data. The ability for data consumers to adopt a follow your nose approach, traversing links defined within a dataset or across independently-curated datasets, is an essential feature of this new Web of Data, enabling richer knowledge retrieval thanks to synthesis across multiple sources of, and views on, inter-related datasets. But for the Web of Data to be successful, we must design novel ways of interacting with the corresponding very large amounts of complex, interlinked, multi-dimensional data throughout its management cycle. The design of user interfaces for Linked Data, and more specifically interfaces that represent the data visually, play a central role in this respect. Contributions to this special issue on Linked Data visualisation investigate different approaches to harnessing visualisation as a tool for exploratory discovery and basic-to-advanced analysis. The papers in this volume illustrate the design and construction of intuitive means for end-users to obtain new insight and gather more knowledge, as they follow links defined across datasets over the Web of Data.",
"title": ""
},
{
"docid": "e04bc357c145c38ed555b3c1fa85c7da",
"text": "This paper presents Hybrid (RSA & AES) encryption algorithm to safeguard data security in Cloud. Security being the most important factor in cloud computing has to be dealt with great precautions. This paper mainly focuses on the following key tasks: 1. Secure Upload of data on cloud such that even the administrator is unaware of the contents. 2. Secure Download of data in such a way that the integrity of data is maintained. 3. Proper usage and sharing of the public, private and secret keys involved for encryption and decryption. The use of a single key for both encryption and decryption is very prone to malicious attacks. But in hybrid algorithm, this problem is solved by the use of three separate keys each for encryption as well as decryption. Out of the three keys one is the public key, which is made available to all, the second one is the private key which lies only with the user. In this way, both the secure upload as well as secure download of the data is facilitated using the two respective keys. Also, the key generation technique used in this paper is unique in its own way. This has helped in avoiding any chances of repeated or redundant key.",
"title": ""
},
{
"docid": "4244af4f70e49c3e08e3943a88c79645",
"text": "From a dynamic system point of view, bat locomotion stands out among other forms of flight. During a large part of bat wingbeat cycle the moving body is not in a static equilibrium. This is in sharp contrast to what we observe in other simpler forms of flight such as insects, which stay at their static equilibrium. Encouraged by biological examinations that have revealed bats exhibit periodic and stable limit cycles, this work demonstrates that one effective approach to stabilize articulated flying robots with bat morphology is locating feasible limit cycles for these robots; then, designing controllers that retain the closed-loop system trajectories within a bounded neighborhood of the designed periodic orbits. This control design paradigm has been evaluated in practice on a recently developed bio-inspired robot called Bat Bot (B2).",
"title": ""
},
{
"docid": "9f9910c9b51c6da269dd2eb0279bb6a1",
"text": "The distribution between sediments and water plays a key role in the food-chain transfer of hydrophobic organic chemicals. Current models and assessment methods of sediment-water distribution predominantly rely on chemical equilibrium partitioning despite several observations reporting an \"enrichment\" of chemical concentrations in suspended sediments. In this study we propose and derive a fugacity based model of chemical magnification due to organic carbon decomposition throughout the process of sediment diagenesis. We compare the behavior of the model to observations of bottom sediment-water, suspended sediments-water, and plankton-water distribution coefficients of a range of hydrophobic organic chemicals in five Great Lakes. We observe that (i) sediment-water distribution coefficients of organic chemicals between bottom sediments and water and between suspended sediments and water are considerably greaterthan expected from chemical partitioning and that the degree sediment-water disequilibrium appears to follow a relationship with the depth of the lake; (ii) concentrations increase from plankton to suspended sediments to bottom sediments and follow an inverse ratherthan a proportional relationship with the organic carbon content and (iii) the degree of disequilibrium between bottom sediment and water, suspended sediments and water, and plankton and water increases when the octanol-water partition coefficient K(ow) drops. We demonstrate that these observations can be explained by a proposed organic carbon mineralization model. Our findings imply that sediment-water distribution is not solely a chemical partitioning process but is to a large degree controlled by lake specific organic carbon mineralization processes.",
"title": ""
},
{
"docid": "b55a0ae61e2b0c36b5143ef2b7b2dbf0",
"text": "This study reports a comparison of screening tests for dyslexia, dyspraxia and Meares-Irlen (M-I) syndrome in a Higher Education setting, the University of Worcester. Using a sample of 74 volunteer students, we compared the current tutor-delivered battery of 15 subtests with a computerized test, the Lucid Adult Dyslexia Screening test (LADS), and both of these with data on assessment outcomes. The sensitivity of this tutor battery was higher than LADS in predicting dyslexia, dyspraxia or M-I syndrome (91% compared with 66%) and its specificity was lower (79% compared with 90%). Stepwise logistic regression on these tests was used to identify a better performing subset of tests, when combined with a change in practice for M-I syndrome screening. This syndrome itself proved to be a powerful discriminator for dyslexia and/or dyspraxia, and we therefore recommend it as the first stage in a two-stage screening process. The specificity and sensitivity of the new battery, the second part of which comprises LADS plus four of the original tutor delivered subtests, provided the best overall performance: 94% sensitivity and 92% specificity. We anticipate that the new two-part screening process would not take longer to complete.",
"title": ""
},
{
"docid": "6e940b1713dbd09b3fe9a05ad3f683d9",
"text": "In this paper, the design and implementation of an online examination system for medical students are introduced. The system aims to improve teaching quality, motivate students' self-learning and refresh basic and clinical knowledge for medical students and young doctors. In the design of this system, we applied Unified Modeling Language (UML) modeling, such as use case diagrams, sequence diagrams, activity diagrams, etc. The functional requirements are traced from use case models to component model via analysis and design models, thus the components derived using this approach form the components of Model-View-Controller (MVC) architecture. It's proven very helpful to smooth the communication between medical teachers and software developers, increase code reuse and speed up the development.",
"title": ""
},
{
"docid": "d43f56f13fee5b45cb31233e61aa20d0",
"text": "An automated brain tumor segmentation method was developed and validated against manual segmentation with three-dimensional magnetic resonance images in 20 patients with meningiomas and low-grade gliomas. The automated method (operator time, 5-10 minutes) allowed rapid identification of brain and tumor tissue with an accuracy and reproducibility comparable to those of manual segmentation (operator time, 3-5 hours), making automated segmentation practical for low-grade gliomas and meningiomas.",
"title": ""
},
{
"docid": "0064810e7029ca234d901193d885b52a",
"text": "In a Real-Time System, the correctness of the system is not only depending on the logical result of the computation but also on the time at which result is produced is very important. In real time system, scheduling is effected using certain criteria that ensure processes complete their various tasks at a specific time of completion. The quality of real-time scheduling algorithm has a direct impact on real-time system's working. We studied popular scheduling algorithms mainly Earliest Deadline First, Rate Monotonic, Deadline Monotonic, Least laxity First, Group Earliest Deadline First and Group Priority Earliest Deadline First for periodic task. We observe that the choice of a scheduling algorithm is important in designing a real-time system. We conclude by discussing the results of the Real-Time scheduling algorithm survey.",
"title": ""
},
{
"docid": "ef898f8ae69263fea2519d9224aeb9a3",
"text": "In public venues, crowd size is a key indicator of crowd safety and stability. Crowding levels can be detected using holistic image features, however this requires a large amount of training data to capture the wide variations in crowd distribution. If a crowd counting algorithm is to be deployed across a large number of cameras, such a large and burdensome training requirement is far from ideal. In this paper we propose an approach that uses local features to count the number of people in each foreground blob segment, so that the total crowd estimate is the sum of the group sizes. This results in an approach that is scalable to crowd volumes not seen in the training data, and can be trained on a very small data set. As a local approach is used, the proposed algorithm can easily be used to estimate crowd density throughout different regions of the scene and be used in a multi-camera environment. A unique localised approach to ground truth annotation reduces the required training data is also presented, as a localised approach to crowd counting has different training requirements to a holistic one. Testing on a large pedestrian database compares the proposed technique to existing holistic techniques and demonstrates improved accuracy, and superior performance when test conditions are unseen in the training set, or a minimal training set is used.",
"title": ""
},
{
"docid": "5f6670c7e05b2e96175ba51a5259e7a2",
"text": "The development of the Measure of Job Satisfaction (MJS) for use in a longitudinal study of the morale of community nurses in four trusts is described. The review of previous studies focuses on the use of principal component analysis or factor analysis in the development of measures. The MJS was developed from a bank of items culled from the literature and from discussions with key informants. It was mailed to a one in three sample of 723 members of the community nursing forums of the Royal College of Nursing. A 72% response rate was obtained from those eligible for inclusion. Principal component analysis with varimax rotation led to the identification of five dimensions of job satisfaction; Personal Satisfaction, Satisfaction with Workload, Satisfaction with Professional Support, Satisfaction with Pay and Prospects and Satisfaction with Training. These factors form the basis of five subscales of satisfaction which summate to give an Overall Job Satisfaction score. Internal consistency, test-retest reliability, concurrent and discriminatory validity were assessed and were found to be satisfactory. The factor structure was replicated using data obtained from the first three of the community trusts involved in the main study. The limitations of the study and issues which require further exploration are identified and discussed.",
"title": ""
},
{
"docid": "841a5ecba126006e1deb962473662788",
"text": "In the past decade large scale recommendation datasets were published and extensively studied. In this work we describe a detailed analysis of a sparse, large scale dataset, specifically designed to push the envelope of recommender system models. The Yahoo! Music dataset consists of more than a million users, 600 thousand musical items and more than 250 million ratings, collected over a decade. It is characterized by three unique features: First, rated items are multi-typed, including tracks, albums, artists and genres; Second, items are arranged within a four level taxonomy, proving itself effective in coping with a severe sparsity problem that originates from the unusually large number of items (compared to, e.g., movie ratings datasets). Finally, fine resolution timestamps associated with the ratings enable a comprehensive temporal and session analysis. We further present a matrix factorization model exploiting the special characteristics of this dataset. In particular, the model incorporates a rich bias model with terms that capture information from the taxonomy of items and different temporal dynamics of music ratings. To gain additional insights of its properties, we organized the KddCup-2011 competition about this dataset. As the competition drew thousands of participants, we expect the dataset to attract considerable research activity in the future.",
"title": ""
},
{
"docid": "2777ce97acdb673d90f881e044c36dbd",
"text": "Interest point detection is essential process for many computer vision applications, which must provide invariant points to several image variations, such as, rotation, zoom, blur, illumination variation and change of viewpoints. Harris-Affine detector is considered as one of the most effective interest point detectors, although it still presents vulnerability to some image. This paper proposes an improved Harris-affine interest point detector based on two-dimensional weight Atomic functions to improve the repeatability and stability of detected points of Harris-Affine detector, in which the Gaussian kernel is replaced by the ) (x up Atomic Function. Evaluation results show that the new image interest point detector, called Atomic Harris-Affine detector, improves the repeatability in about 40% compared with the conventional Harris-Affine detector, under several conditions such as blurring, illumination changes, change of viewpoint, as well as other geometrical and affine transformations. Key-Words: interest points detector; Atomic function; Harris-Affine detector; repeatability; convergence rate",
"title": ""
},
{
"docid": "ccab9e95d4a0ad133c7c0f7e28b2c6f4",
"text": "Endoscopic abdominoplasty is feasible, safe, and effective in the proper surgical candidate. Excellent results can be expected when proper patient selection criteria are followed. With future refinements in technique and equipment, this procedure may be extended safely to those patients with more severe deformities.",
"title": ""
},
{
"docid": "62e4e376170a649efd578d968392a12b",
"text": "This paper presents a new algorithm to identify Bengali Sign Language (BdSL) for recognizing 46 hand gestures, including 9 gestures for 11 vowels, 28 gestures for 39 consonants and 9 gestures for 9 numerals according to the similarity of pronunciation. The image was first re-sized and then converted to binary format to crop the region of interest by using only top-most, left-most and right-most white pixels. The positions of the finger-tips were found by applying a fingertip finder algorithm. Eleven features were extracted from each image to train a multilayered feedforward neural network with a back-propagation training algorithm. Distance between the centroid of the hand region and each finger tip was calculated along with the angles between each fingertip and horizontal x axis that crossed the centroid. A database of 2300 images of Bengali signs was constructed to evaluate the effectiveness of the proposed system, where 70%, 15% and 15% images were used for training, testing, and validating, respectively. Experimental result showed an average of 88.69% accuracy in recognizing BdSL which is very much promising compare to other existing methods.",
"title": ""
},
{
"docid": "80d9439987b7eac8cf021be7dc533ec9",
"text": "While previous studies have investigated the determinants and consequences of online trust, online distrust has seldom been studied. Assuming that the positive antecedents of online trust are necessarily negative antecedents of online distrust or that positive consequences of online trust are necessarily negatively affected by online distrust is inappropriate. This study examines the different antecedents of online trust and distrust in relation to consumer and website characteristics. Moreover, this study further examines whether online trust and distrust asymmetrically affect behaviors with different risk levels. A model is developed and tested using a survey of 1,153 online consumers. LISREL was employed to test the proposed model. Overall, different consumer and website characteristics influence online trust and distrust, and online trust engenders different behavioral outcomes to online distrust. The authors also discuss the theoretical and managerial implications of the study findings.",
"title": ""
},
{
"docid": "00e315b8baf0ce6548ec7139c8ce105c",
"text": "We revisit the well-known problem of boolean group testing which attempts to discover a sparse subset of faulty items in a large set of mostly good items using a small number of pooled (or grouped) tests. This problem originated during the second WorldWar, and has been the subject of active research during the 70's, and 80's. Recently, there has been a resurgence of interest due to the striking parallels between group testing and the now highly popular field of compressed sensing. In fact, boolean group testing is nothing but compressed sensing in a different algebra - with boolean `AND' and `OR' operations replacing vector space multiplication and addition. In this paper we review existing solutions for non-adaptive (batch) group testing and propose a linear programming relaxation solution, which has a resemblance to the basis pursuit algorithm for sparse recovery in linear models. We compare its performance to alternative methods for group testing.",
"title": ""
},
{
"docid": "dfdde8b25d644664eaec5b8a6e4d8817",
"text": "Minimizing adverse reactions caused by drug-drug interactions has always been a momentous research topic in clinical pharmacology. Detecting all possible interactions through clinical studies before a drug is released to the market is a demanding task. The power of big data is opening up new approaches to discover various drug-drug interactions. However, these discoveries contain a huge amount of noise and provide knowledge bases far from complete and trustworthy ones to be utilized. Most existing studies focus on predicting binary drug-drug interactions between drug pairs and ignore other interactions. In this paper, we propose a novel framework, called PRD, to predict drug-drug interactions. The framework uses the graph embedding that can overcome data incompleteness and sparsity issues to achieve multiple DDI label prediction. First, a large-scale drug knowledge graph is generated from different sources. Then, the knowledge graph is embedded with comprehensive biomedical text into a common low dimensional space. Finally, the learned embeddings are used to efficiently compute rich DDI information through a link prediction process. To validate the effectiveness of the proposed framework, extensive experiments were conducted on real-world datasets. The results demonstrate that our model outperforms several state-of-the-art baseline methods in terms of capability and accuracy.",
"title": ""
}
] |
scidocsrr
|
6aea630e01bf073b07093003e93bef9e
|
Learning Like a Child: Fast Novel Visual Concept Learning from Sentence Descriptions of Images
|
[
{
"docid": "418a5ef9f06f8ba38e63536671d605c1",
"text": "Learning visual models of object categories notoriously requires hundreds or thousands of training examples. We show that it is possible to learn much information about a category from just one, or a handful, of images. The key insight is that, rather than learning from scratch, one can take advantage of knowledge coming from previously learned categories, no matter how different these categories might be. We explore a Bayesian implementation of this idea. Object categories are represented by probabilistic models. Prior knowledge is represented as a probability density function on the parameters of these models. The posterior model for an object category is obtained by updating the prior in the light of one or more observations. We test a simple implementation of our algorithm on a database of 101 diverse object categories. We compare category models learned by an implementation of our Bayesian approach to models learned from by maximum likelihood (ML) and maximum a posteriori (MAP) methods. We find that on a database of more than 100 categories, the Bayesian approach produces informative models when the number of training examples is too small for other methods to operate successfully.",
"title": ""
},
{
"docid": "a2f91e55b5096b86f6fa92e701c62898",
"text": "The main question we address in this paper is how to use purely textual description of categories with no training images to learn visual classifiers for these categories. We propose an approach for zero-shot learning of object categories where the description of unseen categories comes in the form of typical text such as an encyclopedia entry, without the need to explicitly defined attributes. We propose and investigate two baseline formulations, based on regression and domain adaptation. Then, we propose a new constrained optimization formulation that combines a regression function and a knowledge transfer function with additional constraints to predict the classifier parameters for new classes. We applied the proposed approach on two fine-grained categorization datasets, and the results indicate successful classifier prediction.",
"title": ""
}
] |
[
{
"docid": "b3e90fdfda5346544f769b6dd7c3882b",
"text": "Bromelain is a complex mixture of proteinases typically derived from pineapple stem. Similar proteinases are also present in pineapple fruit. Beneficial therapeutic effects of bromelain have been suggested or proven in several human inflammatory diseases and animal models of inflammation, including arthritis and inflammatory bowel disease. However, it is not clear how each of the proteinases within bromelain contributes to its anti-inflammatory effects in vivo. Previous in vivo studies using bromelain have been limited by the lack of assays to control for potential differences in the composition and proteolytic activity of this naturally derived proteinase mixture. In this study, we present model substrate assays and assays for cleavage of bromelain-sensitive cell surface molecules can be used to assess the activity of constituent proteinases within bromelain without the need for biochemical separation of individual components. Commercially available chemical and nutraceutical preparations of bromelain contain predominately stem bromelain. In contrast, the proteinase activity of pineapple fruit reflects its composition of fruit bromelain>ananain approximately stem bromelain. Concentrated bromelain solutions (>50 mg/ml) are more resistant to spontaneous inactivation of their proteolytic activity than are dilute solutions, with the proteinase stability in the order of stem bromelain>fruit bromelain approximately ananain. The proteolytic activity of concentrated bromelain solutions remains relatively stable for at least 1 week at room temperature, with minimal inactivation by multiple freeze-thaw cycles or exposure to the digestive enzyme trypsin. The relative stability of concentrated versus dilute bromelain solutions to inactivation under physiologically relevant conditions suggests that delivery of bromelain as a concentrated bolus would be the preferred method to maximize its proteolytic activity in vivo.",
"title": ""
},
{
"docid": "2f0eb4a361ff9f09bda4689a1f106ff2",
"text": "The growth of Quranic digital publishing increases the need to develop a better framework to authenticate Quranic quotes with the original source automatically. This paper aims to demonstrate the significance of the quote authentication approach. We propose an approach to verify the e-citation of the Quranic quote as compared with original texts from the Quran. In this paper, we will concentrate mainly on discussing the Algorithm to verify the fundamental text for Quranic quotes.",
"title": ""
},
{
"docid": "05a76f64a6acbcf48b7ac36785009db3",
"text": "Mixed methods research is an approach that combines quantitative and qualitative research methods in the same research inquiry. Such work can help develop rich insights into various phenomena of interest that cannot be fully understood using only a quantitative or a qualitative method. Notwithstanding the benefits and repeated calls for such work, there is a dearth of mixed methods research in information systems. Building on the literature on recent methodological advances in mixed methods research, we develop a set of guidelines for conducting mixed methods research in IS. We particularly elaborate on three important aspects of conducting mixed methods research: (1) appropriateness of a mixed methods approach; (2) development of meta-inferences (i.e., substantive theory) from mixed methods research; and (3) assessment of the quality of meta-inferences (i.e., validation of mixed methods research). The applicability of these guidelines is illustrated using two published IS papers that used mixed methods.",
"title": ""
},
{
"docid": "40059f4cd570b658726745fa7b5ecf38",
"text": "Autonomous humanoid robots need high torque actuators to be able to walk and run. One problem in this context is the heat generated. In this paper we propose to use water evaporation to improve cooling of the motors. Simulations based on thermodynamic calculations as well as measurements on real actuators show that, under the assumption of the load of a soccer game, cooling can be considerably improved with relatively small amounts of water.",
"title": ""
},
{
"docid": "aaf075f849b4e61f57aa2451cdccad70",
"text": "The spatial relation between mitochondria and endoplasmic reticulum (ER) in living HeLa cells was analyzed at high resolution in three dimensions with two differently colored, specifically targeted green fluorescent proteins. Numerous close contacts were observed between these organelles, and mitochondria in situ formed a largely interconnected, dynamic network. A Ca2+-sensitive photoprotein targeted to the outer face of the inner mitochondrial membrane showed that, upon opening of the inositol 1,4,5-triphosphate (IP3)-gated channels of the ER, the mitochondrial surface was exposed to a higher concentration of Ca2+ than was the bulk cytosol. These results emphasize the importance of cell architecture and the distribution of organelles in regulation of Ca2+ signaling.",
"title": ""
},
{
"docid": "c10bd86125db702e0839e2a3776e195b",
"text": "To solve the big topic modeling problem, we need to reduce both time and space complexities of batch latent Dirichlet allocation (LDA) algorithms. Although parallel LDA algorithms on the multi-processor architecture have low time and space complexities, their communication costs among processors often scale linearly with the vocabulary size and the number of topics, leading to a serious scalability problem. To reduce the communication complexity among processors for a better scalability, we propose a novel communication-efficient parallel topic modeling architecture based on power law, which consumes orders of magnitude less communication time when the number of topics is large. We combine the proposed communication-efficient parallel architecture with the online belief propagation (OBP) algorithm referred to as POBP for big topic modeling tasks. Extensive empirical results confirm that POBP has the following advantages to solve the big topic modeling problem: 1) high accuracy, 2) communication-efficient, 3) fast speed, and 4) constant memory usage when compared with recent state-of-the-art parallel LDA algorithms on the multi-processor architecture. Index Terms —Big topic modeling, latent Dirichlet allocation, communication complexity, multi-processor architecture, online belief propagation, power law.",
"title": ""
},
{
"docid": "3963e1a10366748bf4e52d34cc15cc0f",
"text": "Surface electromyography (sEMG) is widely used in clinical diagnosis, rehabilitation engineering and humancomputer interaction and other fields. In this paper, we use Myo armband to collect sEMG signals. Myo armband can be worn above any elbow of any arm and it can capture the bioelectric signal generated when the arm muscles move. MYO can pass of signals through its low-power Blue-tooth, and its interference is small, which makes the signal quality really good. By collecting the sEMG signals of the upper limb forearm, we extract five eigenvalues in the time domain, and use the BP neural network classification algorithm to realize the recognition of six gestures in this paper. Experimental results show that the use of MYO for gesture recognition can get a very good recognition results, it can accurately identify the six hand movements with the average recognition rate of 93%.",
"title": ""
},
{
"docid": "224287bfe0a3f7b3236b442748a59cff",
"text": "Interactive image processing techniques, along with a linear-programming-based inductive classiier, have been used to create a highly accurate system for diagnosis of breast tumors. A small fraction of a ne needle aspirate slide is selected and digitized. With an interactive interface, the user initializes active contour models, known as snakes, near the boundaries of a set of cell nuclei. The customized snakes are deformed to the exact shape of the nuclei. This allows for precise, automated analysis of nuclear size, shape and texture. Ten such features are computed for each nucleus, and the mean value, largest (or \\worst\") value and standard error of each feature are found over the range of isolated cells. After 569 images were analyzed in this fashion, diierent combinations of features were tested to nd those which best separate benign from malignant samples. Tenfold cross-validation accuracy of 97% was achieved using a single separating plane on three of the thirty features: mean texture, worst area and worst smoothness. This represents an improvement over the best diagnostic results in the medical literature. The system is currently in use at the University of Wisconsin Hospitals. The same feature set has also been utilized in the much more diicult task of predicting distant recurrence of malignancy in patients, resulting in an accuracy of 86%.",
"title": ""
},
{
"docid": "09b008daecc4cab2de39f2e51ff11586",
"text": "Mondor's disease is a rare, self-limiting, benign process with acute presentation characterized by subcutaneous bands in several parts of the body. Penile Mondor's disease (PMD) is thrombophlebitis of the superficial dorsal vein of the penis. It is usually considered as thrombophlebitis or phlebitis of subcutaneous vessels. Some findings suggest that it might be of lymphatic origin. The chest, abdominal wall, penis, upper arm, and other parts of the body may also be involved by the disease. Although its physiopathology is not exactly known, transection of the vessel during surgery or any type of trauma such as external compression may trigger its possible development. This disease almost always limits itself. It may be associated with psychological distress and sexual incompatibility. The patients usually feel the superficial vein of the penis like a hard rope and present with complaint of pain around this hardness. Diagnosis is usually easy with physical examination but color Doppler ultrasound examination is important for differential diagnosis. Thus, a close collaboration is required between radiologist and urologist in order to determine the correct diagnosis and appropriate therapies.",
"title": ""
},
{
"docid": "cf26167180275d4feaca5c56afd0ffb1",
"text": "The polycystic ovary syndrome (PCOS) is defined as a combination of hyperandrogenism (hirsutism and acne) and anovulation (oligomenorrhea, infertility, and dysfunctional uterine bleeding), with or without the presence of polycystic ovaries on ultrasound. It represents the main endocrine disorder in the reproductive age, affecting 6% 15% of women in menacme. It is the most common cause of infertility due to anovulation, and the main source of female infertility. When in the presence of a menstrual disorder, the diagnosis of PCOS is reached in 30% 40% of patients with primary or secondary amenorrhoea and in 80% of patients with oligomenorrhea. PCOS should be diagnosed and treated early in adolescence due to reproductive, metabolic and oncological complications which may be associated with it. Treatment options include drugs, diet and lifestyle improvement.",
"title": ""
},
{
"docid": "75cb5c4c9c122d6e80419a3ceb99fd67",
"text": "Indonesian clove cigarettes (kreteks), typically have the appearance of a conventional domestic cigarette. The unique aspects of kreteks are that in addition to tobacco they contain dried clove buds (15-40%, by wt.), and are flavored with a proprietary \"sauce\". Whereas the clove buds contribute to generating high levels of eugenol in the smoke, the \"sauce\" may also contribute other potentially harmful constituents in addition to those associated with tobacco use. We measured levels of eugenol, trans-anethole (anethole), and coumarin in smoke from 33 brands of clove-flavored cigarettes (filtered and unfiltered) from five kretek manufacturers. In order to provide information for evaluating the delivery of these compounds under standard smoking conditions, a quantification method was developed for their measurement in mainstream cigarette smoke. The method allowed collection of mainstream cigarette smoke particulate matter on a Cambridge filter pad, extraction with methanol, sampling by automated headspace solid-phase microextraction, and subsequent analysis using gas chromatography/mass spectrometry. The presence of these compounds was confirmed in the smoke of kreteks using mass spectral library matching, high-resolution mass spectrometry (+/-0.0002 amu), and agreement with a relative retention time index, and native standards. We found that when kreteks were smoked according to standardized machine smoke parameters as specified by the International Standards Organization, all 33 clove brands contained levels of eugenol ranging from 2,490 to 37,900 microg/cigarette (microg/cig). Anethole was detected in smoke from 13 brands at levels of 22.8-1,030 microg/cig, and coumarin was detected in 19 brands at levels ranging from 9.2 to 215 microg/cig. These detected levels are significantly higher than the levels found in commercial cigarette brands available in the United States.",
"title": ""
},
{
"docid": "9b5207fc5beec8d2094d214cf8bfbded",
"text": "We present a novel model for the task of joint mention extraction and classification. Unlike existing approaches, our model is able to effectively capture overlapping mentions with unbounded lengths. The model is highly scalable, with a time complexity that is linear in the number of words in the input sentence and linear in the number of possible mention classes. Our model can be extended to additionally capture mention heads explicitly in a joint manner under the same time complexity. We demonstrate the effectiveness of our model through extensive experiments on standard datasets.",
"title": ""
},
{
"docid": "c508f62dfd94d3205c71334638790c54",
"text": "Financial and capital markets (especially stock markets) are considered high return investment fields, which in the same time are dominated by uncertainty and volatility. Stock market prediction tries to reduce this uncertainty and consequently the risk. As stock markets are influenced by many economical, political and even psychological factors, it is very difficult to forecast the movement of future values. Since classical statistical methods (primarily technical and fundamental analysis) are unable to deal with the non-linearity in the dataset, thus it became necessary the utilization of more advanced forecasting procedures. Financial prediction is a research active area and neural networks have been proposed as one of the most promising methods for such predictions. Artificial Neural Networks (ANNs) mimics, simulates the learning capability of the human brain. NNs are able to find accurate solutions in a complex, noisy environment or even to deal efficiently with partial information. In the last decade the ANNs have been widely used for predicting financial markets, because they are capable to detect and reproduce linear and nonlinear relationships among a set of variables. Furthermore they have a potential of learning the underlying mechanics of stock markets, i.e. to capture the complex dynamics and non-linearity of the stock market time series. In this paper, study we will get acquainted with some financial time series analysis concepts and theories linked to stock markets, as well as with the neural networks based systems and hybrid techniques that were used to solve several forecasting problems concerning the capital, financial and stock markets. Putting the foregoing experimental results to use, we will develop, implement a multilayer feedforward neural network based financial time series forecasting system. Thus, this system will be used to predict the future index values of major US and European stock exchanges and the evolution of interest rates as well as the future stock price of some US mammoth companies (primarily from IT branch).",
"title": ""
},
{
"docid": "ce167e13e5f129059f59c8e54b994fd4",
"text": "Critical research has emerged as a potentially important stream in information systems research, yet the nature and methods of critical research are still in need of clarification. While criteria or principles for evaluating positivist and interpretive research have been widely discussed, criteria or principles for evaluating critical social research are lacking. Therefore, the purpose of this paper is to propose a set of principles for the conduct of critical research. This paper has been accepted for publication in MIS Quarterly and follows on from an earlier piece that suggested a set of principles for interpretive research (Klein and Myers, 1999). The co-author of this paper is Heinz Klein.",
"title": ""
},
{
"docid": "427c5f5825ca06350986a311957c6322",
"text": "Machine learning based system are increasingly being used for sensitive tasks such as security surveillance, guiding autonomous vehicle, taking investment decisions, detecting and blocking network intrusion and malware etc. However, recent research has shown that machine learning models are venerable to attacks by adversaries at all phases of machine learning (e.g., training data collection, training, operation). All model classes of machine learning systems can be misled by providing carefully crafted inputs making them wrongly classify inputs. Maliciously created input samples can affect the learning process of a ML system by either slowing the learning process, or affecting the performance of the learned model or causing the system make error only in attacker’s planned scenario. Because of these developments, understanding security of machine learning algorithms and systems is emerging as an important research area among computer security and machine learning researchers and practitioners. We present a survey of this emerging area.",
"title": ""
},
{
"docid": "883042a6004a5be3865da51da20fa7c9",
"text": "Green Mining is a field of MSR that studies software energy consumption and relies on software performance data. Unfortunately there is a severe lack of publicly available software power use performance data. This means that green mining researchers must generate this data themselves by writing tests, building multiple revisions of a product, and then running these tests multiple times (10+) for each software revision while measuring power use. Then, they must aggregate these measurements to estimate the energy consumed by the tests for each software revision. This is time consuming and is made more difficult by the constraints of mobile devices and their OSes. In this paper we propose, implement, and demonstrate Green Miner: the first dedicated hardware mining software repositories testbed. The Green Miner physically measures the energy consumption of mobile devices (Android phones) and automates the testing of applications, and the reporting of measurements back to developers and researchers. The Green Miner has already produced valuable results for commercial Android application developers, and has been shown to replicate other power studies' results.",
"title": ""
},
{
"docid": "4a21e3015f4fb63f25fd214eaa68ed87",
"text": "We describe our submission to the Brain Tumor Segmentation Challenge (BraTS) at MICCAI 2013. This segmentation approach is based on similarities between multi-channel patches. After patches are extracted from several MR channels for a test case, similar patches are found in training images for which label maps are known. These labels maps are then combined to result in a segmentation map for the test case. The labelling is performed, in a leave-one-out scheme, for each case of a publicly available training set, which consists of 30 real cases (20 highgrade gliomas, 10 low-grade gliomas) and 50 synthetic cases (25 highgrade gliomas, 25 low-grade gliomas). Promising results are shown on the training set, and we believe this algorithm would perform favourably well in comparison to the state of the art on a testing set.",
"title": ""
},
{
"docid": "d8b3cd4a65e02e451c020319fc091cfa",
"text": "This paper describes an experiment in which we try to automatically correct mistakes in grammatical agreement in English to Czech MT outputs. We perform several rule-based corrections on sentences parsed to dependency trees. We prove that it is possible to improve the MT quality of majority of the systems participating in WMT shared task. We made both automatic (BLEU) and manual evaluations.",
"title": ""
},
{
"docid": "3b9df74123b17342b6903120c16242e3",
"text": "Surgical eyebrow lift has been described by using many different open and endoscopic methods. Difficult techniques and only short time benefits oft lead to patients' complaints. We present a safe and simple temporal Z-incision technique for eyebrow lift in 37 patients. Besides simplicity and safety, our technique shows long lasting aesthetic results with hidden scars and a high rate of patient satisfaction.",
"title": ""
}
] |
scidocsrr
|
6a094affdc931187becc204061d5ecb8
|
Evolving spiking neural networks for personalised modelling, classification and prediction of spatio-temporal patterns with a case study on stroke
|
[
{
"docid": "67bfcfb41ef6fcffa90f699354c5e67f",
"text": "This paper presents a new modular and integrative sensory information system inspired by the way the brain performs information processing, in particular, pattern recognition. Spiking neural networks are used to model human-like visual and auditory pathways. This bimodal system is trained to perform the specific task of person authentication. The two unimodal systems are individually tuned and trained to recognize faces and speech signals from spoken utterances, respectively. New learning procedures are designed to operate in an online evolvable and adaptive way. Several ways of modelling sensory integration using spiking neural network architectures are suggested and evaluated in computer experiments.",
"title": ""
}
] |
[
{
"docid": "38450c8c93a3a7807972443fc2b59962",
"text": "UNLABELLED\nWe have created a Shiny-based Web application, called Shiny-phyloseq, for dynamic interaction with microbiome data that runs on any modern Web browser and requires no programming, increasing the accessibility and decreasing the entrance requirement to using phyloseq and related R tools. Along with a data- and context-aware dynamic interface for exploring the effects of parameter and method choices, Shiny-phyloseq also records the complete user input and subsequent graphical results of a user's session, allowing the user to archive, share and reproduce the sequence of steps that created their result-without writing any new code themselves.\n\n\nAVAILABILITY AND IMPLEMENTATION\nShiny-phyloseq is implemented entirely in the R language. It can be hosted/launched by any system with R installed, including Windows, Mac OS and most Linux distributions. Information technology administrators can also host Shiny--phyloseq from a remote server, in which case users need only have a Web browser installed. Shiny-phyloseq is provided free of charge under a GPL-3 open-source license through GitHub at http://joey711.github.io/shiny-phyloseq/.",
"title": ""
},
{
"docid": "f4c66ff0852b3ad640655e945f5639d9",
"text": "The emergence of a feature-analyzing function from the development rules of simple, multilayered networks is explored. It is shown that even a single developing cell of a layered network exhibits a remarkable set of optimization properties that are closely related to issues in statistics, theoretical physics, adaptive signal processing, the formation of knowledge representation in artificial intelligence, and information theory. The network studied is based on the visual system. These results are used to infer an information-theoretic principle that can be applied to the network as a whole, rather than a single cell. The organizing principle proposed is that the network connections develop in such a way as to maximize the amount of information that is preserved when signals are transformed at each processing stage, subject to certain constraints. The operation of this principle is illustrated for some simple cases.<<ETX>>",
"title": ""
},
{
"docid": "95f56689bb980812ab57c7f16ea36e2f",
"text": "Entity search over news, social media and the Web allows users to precisely retrieve concise information about specific people, organizations, movies and their characters, and other kinds of entities. This expressive search mode builds on two major assets: 1) a knowledge base (KB) that contains the entities of interest and 2) entity markup in the documents of interest derived by automatic disambiguation of entity names (NED) and linking names to the KB. These prerequisites are not easily available, though, in the important case when a user is interested in a newly emerging entity (EE) such as new movies, new songs, etc. Automatic methods for detecting and canonicalizing EEs are not nearly at the same level as the NED methods for prominent entities that have rich descriptions in the KB. To overcome this major limitation, we have developed an approach and prototype system that allows searching for EEs in a user-friendly manner. The approach leverages the human in the loop by prompting for user feedback on candidate entities and on characteristic keyphrases for EEs. For convenience and low burden on users, this process is supported by the automatic harvesting of tentative keyphrases. Our demo system shows this interactive process and its high usability.",
"title": ""
},
{
"docid": "e95253b765129a0940e4af899d9e5d72",
"text": "Smart health devices monitor certain health parameters, are connected to an Internet service, and target primarily a lay consumer seeking a healthy lifestyle rather than the medical expert or the chronically ill person. These devices offer tremendous opportunities for wellbeing and self-management of health. This department reviews smart health devices from a pervasive computing perspective, discussing various devices and their functionality, limitations, and potential.",
"title": ""
},
{
"docid": "728a06d89a57261cf0560ec3513f2ae6",
"text": "This paper reports on our review of published research relating to how teams work together to execute Big Data projects. Our findings suggest that there is no agreed upon standard for executing these projects but that there is a growing research focus in this area and that an improved process methodology would be useful. In addition, our synthesis also provides useful suggestions to help practitioners execute their projects, specifically our identified list of 33 important success factors for executing Big Data efforts, which are grouped by our six identified characteristics of a mature Big Data organization.",
"title": ""
},
{
"docid": "58039fbc0550c720c4074c96e866c025",
"text": "We argue that to best comprehend many data sets, plotting judiciously selected sample statistics with associated confidence intervals can usefully supplement, or even replace, standard hypothesis-testing procedures. We note that most social science statistics textbooks limit discussion of confidence intervals to their use in between-subject designs. Our central purpose in this article is to describe how to compute an analogous confidence interval that can be used in within-subject designs. This confidence interval rests on the reasoning that because between-subject variance typically plays no role in statistical analyses of within-subject designs, it can legitimately be ignored; hence, an appropriate confidence interval can be based on the standard within-subject error term-that is, on the variability due to the subject × condition interaction. Computation of such a confidence interval is simple and is embodied in Equation 2 on p. 482 of this article. This confidence interval has two useful properties. First, it is based on the same error term as is the corresponding analysis of variance, and hence leads to comparable conclusions. Second, it is related by a known factor (√2) to a confidence interval of the difference between sample means; accordingly, it can be used to infer the faith one can put in some pattern of sample means as a reflection of the underlying pattern of population means. These two properties correspond to analogous properties of the more widely used between-subject confidence interval.",
"title": ""
},
{
"docid": "75d3ad60adfed9fe24a67dd588ac8090",
"text": "The neuromuscular system acts to maintain postural stability and reduce the impact of deleterious loads on the spine. Exercising of the abdominal muscles has become widely used in the management of low back pain in order to provide this supplement to spinal stability. Several exercise programmes have been advocated to promote stabilization but evaluation is difficult. This study evaluates two common forms of exercise effects on the ability to appropriately contract Transversus Abdominis (TrA) muscle, whose normal function is regarded as significant in spinal stability. Thirty-six asymptomatic females were examined. Twelve formed the Pilates trained group, 12 the abdominal curl group (both attending a minimum of 25 classes in 6 months) and 12 were non-training controls. A pressure biofeedback unit (PBU) was used to assess performance of the TrA muscle during an abdominal hollowing activity (TrA isolation test) and under limb load (Lumbo-pelvic stability test). The percentage of subjects passing the TrA isolation test was 10 subjects (83%) from the Pilates group, four subjects (33%) from the abdominal curl group, and three subjects (25%) from the control group. The percentage of subjects passing the lumbo-pelvic stability test was five subjects (42%) from the Pilates group, all the subjects from both the abdominal curl and control groups failed the test. The study appears to indicate that Pilates trained subjects could contract the TrA and maintain better lumbo-pelvic control than do those who perform regular abdominal curl exercises, or no abdominal muscle exercises. & 2003 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "07e9b961a1196665538d89b60a30a7d1",
"text": "The problem of anomaly detection in time series has received a lot of attention in the past two decades. However, existing techniques cannot locate where the anomalies are within anomalous time series, or they require users to provide the length of potential anomalies. To address these limitations, we propose a self-learning online anomaly detection algorithm that automatically identifies anomalous time series, as well as the exact locations where the anomalies occur in the detected time series. In addition, for multivariate time series, it is difficult to detect anomalies due to the following challenges. First, anomalies may occur in only a subset of dimensions (variables). Second, the locations and lengths of anomalous subsequences may be different in different dimensions. Third, some anomalies may look normal in each individual dimension but different with combinations of dimensions. To mitigate these problems, we introduce a multivariate anomaly detection algorithm which detects anomalies and identifies the dimensions and locations of the anomalous subsequences. We evaluate our approaches on several real-world datasets, including two CPU manufacturing data from Intel. We demonstrate that our approach can successfully detect the correct anomalies without requiring any prior knowledge about the data.",
"title": ""
},
{
"docid": "4f0865012265be44d8a39fedf01f70ce",
"text": "In this paper, we derive new closed-form expressions for the gradient of the mutual information with respect to arbitrary parameters of the two-user multiple access cha nnel (MAC). The derived relations generalize the fundamental relation between the derivative of the mutual i nformation and the minimum mean squared error (MMSE) to multiuser setups. We prove that the derivative of t he mutual information with respect to the signal to noise ratio (SNR) is equal to the MMSE plus a covariance induc e due to the interference, quantified by a term with respect to the cross correlation of the multiuser input esti mates, the channels and the precoding matrices. We also derive new relations for the gradient of the conditional and non-conditional mutual information with respect to the MMSE. Capitalizing on the new fundamental relations, we inv estigate the linear precoding and power allocation policies that maximize the mutual information for the two-u ser MAC Gaussian channels with arbitrary input distributions. We show that the optimal design of linear pre coders may satisfy a fixed-point equation as a function of the channel and the input constellation under specific set up . We show also that the non-mutual interference in a multiuser setup introduces a term to the gradient of the mut ual information which plays a fundamental role in the design of optimal transmission strategies, particular ly the optimal precoding and power allocation, and explains the losses in the data rates. Therefore, we provide a novel in terpretation of the interference with respect to the channel, power, and input estimates of the main user and the i n erferer.",
"title": ""
},
{
"docid": "a5f9b7b7b25ccc397acde105c39c3d9d",
"text": "Processors with multiple cores and complex cache coherence protocols are widely employed to improve the overall performance. It is a major challenge to verify the correctness of a cache coherence protocol since the number of reachable states grows exponentially with the number of cores. In this paper, we propose an efficient test generation technique, which can be used to achieve full state and transition coverage in simulation based verification for a wide variety of cache coherence protocols. Based on effective analysis of the state space structure, our method can generate more efficient test sequences (50% shorter) compared with tests generated by breadth first search. Moreover, our proposed approach can generate tests on-the-fly due to its space efficient design.",
"title": ""
},
{
"docid": "1b33ca2433ab0846d369a4f8ad278076",
"text": "Software-defined networking (SDN), is evolving as a new paradigm for the next generation of network architecture. The separation of control plane and data plane within SDN, brings the flexibility to manage, configure, secure, and optimize network resources using dynamic software programs. From a security point of view SDN has the ability to collect information from the network devices and allow applications to program the forwarding devices, which unleashes a powerful technology for proactive and smart security policy. These functions enable the integration of security tools that can be used in distributed scenarios, unlike the traditional security solutions based on a static firewall programmed by an administrator such as Intrusion Detection and Prevention System (IDS/IPS). This network programmability may be integrated to create a new communication platform for the Internet of Things (IoT). In this paper, we present our preliminary study that is focused on the understanding of an effective approach to build a cluster network using SDN. By using network virtualization and OpenFlow technologies to generate virtual nodes, we simulate a prototype system of over 500 devices controlled by SDN, and it represents a cluster. The results show that the network devices are only able to forward the packets by predefined rules on the controller. For this reason, we propose a method to control the IP header at the application-level to overcome this problem using Opflex within SDN architecture.",
"title": ""
},
{
"docid": "781bdc522ed49108cd7132a9aaf49fce",
"text": "ROC curve analysis is often applied to measure the diagnostic accuracy of a biomarker. The analysis results in two gains: diagnostic accuracy of the biomarker and the optimal cut-point value. There are many methods proposed in the literature to obtain the optimal cut-point value. In this study, a new approach, alternative to these methods, is proposed. The proposed approach is based on the value of the area under the ROC curve. This method defines the optimal cut-point value as the value whose sensitivity and specificity are the closest to the value of the area under the ROC curve and the absolute value of the difference between the sensitivity and specificity values is minimum. This approach is very practical. In this study, the results of the proposed method are compared with those of the standard approaches, by using simulated data with different distribution and homogeneity conditions as well as a real data. According to the simulation results, the use of the proposed method is advised for finding the true cut-point.",
"title": ""
},
{
"docid": "683107abf87d68a9bb6ab5a22e24cb99",
"text": "We present supertagging-based models for Tree Adjoining Grammar parsing that use neural network architectures and dense vector representation of supertags (elementary trees) to achieve state-of-the-art performance in unlabeled and labeled attachment scores. The shift-reduce parsing model eschews lexical information entirely, and uses only the 1-best supertags to parse a sentence, providing further support for the claim that supertagging is “almost parsing.” We demonstrate that the embedding vector representations the parser induces for supertags possess linguistically interpretable structure, supporting analogies between grammatical structures like those familiar from recent work in distributional semantics. This dense representation of supertags overcomes the drawbacks for statistical models of TAG as compared to CCG parsing, raising the possibility that TAG is a viable alternative for NLP tasks that require the assignment of richer structural descriptions to sentences.",
"title": ""
},
{
"docid": "6379d5330037a774f9ceed4c51bda1f6",
"text": "Despite long-standing observations on diverse cytokinin actions, the discovery path to cytokinin signaling mechanisms was tortuous. Unyielding to conventional genetic screens, experimental innovations were paramount in unraveling the core cytokinin signaling circuitry, which employs a large repertoire of genes with overlapping and specific functions. The canonical two-component transcription circuitry involves His kinases that perceive cytokinin and initiate signaling, as well as His-to-Asp phosphorelay proteins that transfer phosphoryl groups to response regulators, transcriptional activators, or repressors. Recent advances have revealed the complex physiological functions of cytokinins, including interactions with auxin and other signal transduction pathways. This review begins by outlining the historical path to cytokinin discovery and then elucidates the diverse cytokinin functions and key signaling components. Highlights focus on the integration of cytokinin signaling components into regulatory networks in specific contexts, ranging from molecular, cellular, and developmental regulations in the embryo, root apical meristem, shoot apical meristem, stem and root vasculature, and nodule organogenesis to organismal responses underlying immunity, stress tolerance, and senescence.",
"title": ""
},
{
"docid": "705b60c0bb076f65894fc55b855e35a0",
"text": "Aiming at automatically discovering the common objects contained in a set of relevant images and segmenting them as foreground simultaneously, object co-segmentation has become an active research topic in recent years. Although a number of approaches have been proposed to address this problem, many of them are designed with the misleading assumption, unscalable prior, or low flexibility and thus still suffer from certain limitations, which reduces their capability in the real-world scenarios. To alleviate these limitations, we propose a novel two-stage co-segmentation framework, which introduces the weak background prior to establish a globally close-loop graph to represent the common object and union background separately. Then a novel graph optimized-flexible manifold ranking algorithm is proposed to flexibly optimize the graph connection and node labels to co-segment the common objects. Experiments on three image datasets demonstrate that our method outperforms other state-of-the-art methods.",
"title": ""
},
{
"docid": "c7d466ccc2237bea69468f739654d859",
"text": "Engineering organisations following a traditional development process often suffer from under-specified requirements and from poor responsiveness to changes in those requirements during the course of a project. Furthermore, these organizations need to deliver highly dependable products and decrease time-tomarket. In the software engineering community, Agile methods have been proposed to address similar issues. Pilot projects that apply agile approaches in Cyber-Physical Systems (CPS) engineering have reported some success. This position paper studies the challenges faced when adopting an agile process to design CPS. These challenges are broken down into their essential components and solutions are proposed, both pertaining to model/simulation management and to processes.",
"title": ""
},
{
"docid": "5d11188bf08cc7abc057241837b263bb",
"text": "This paper presents the design and development of a sensorized soft robotic glove based on pneumatic soft-and-rigid hybrid actuators for providing continuous passive motion (CPM) in hand rehabilitation. This hybrid actuator is comprised of bellow-type soft actuator sections connected through block-shaped semi-rigid sections to form robotic digits. The actuators were designed to satisfy the anatomical range of motion for each joint. Each digit was sensorized at the tip with an inertial measurement unit sensor in order to track the rotation of the distal end. A pneumatic feedback control system was developed to control the motion of the soft robotic digit in following desired trajectories. The performance of the soft robotic glove and the associated control system were examined on an able-bodied subject during flexion and extension to show the glove's applicability to CPM applications.",
"title": ""
},
{
"docid": "1c8eba03d834ee7858156932e892aee6",
"text": "Simulation and rendering of sparse volumetric data have different constraints and solutions depending on the application area. Generating precise simulations and understanding very large data are problems in scientific visualization, whereas convincing simulations and realistic visuals are challenges in motion pictures. Both require volumes with dynamic topology, very large domains, and efficient high quality rendering. We present the GPU voxel database structure, GVDB, based on the voxel database topology of Museth [Mus13], as a method for efficient GPU-based compute and raytracing on a sparse hierarchy of grids. GVDB introduces an indexed memory pooling design for dynamic topology, and a novel hierarchical traversal for efficient raytracing on the GPU. Examples are provided for ray sampling of volumetric data, rendering of isosurfaces with multiple scattering, and raytracing of level sets. We demonstrate that GVDB can give large performance improvements over CPU methods with identical quality.",
"title": ""
},
{
"docid": "743ad5ea052a9a5269f0c9e319fcf20d",
"text": "Several lines of evidence converge to the idea that rapid eye movement sleep (REMS) is a good model to foster our understanding of psychosis. Both REMS and psychosis course with internally generated perceptions and lack of rational judgment, which is attributed to a hyperlimbic activity along with hypofrontality. Interestingly, some individuals can become aware of dreaming during REMS, a particular experience known as lucid dreaming (LD), whose neurobiological basis is still controversial. Since the frontal lobe plays a role in self-consciousness, working memory and attention, here we hypothesize that LD is associated with increased frontal activity during REMS. A possible way to test this hypothesis is to check whether transcranial magnetic or electric stimulation of the frontal region during REMS triggers LD. We further suggest that psychosis and LD are opposite phenomena: LD as a physiological awakening while dreaming due to frontal activity, and psychosis as a pathological intrusion of dream features during wake state due to hypofrontality. We further suggest that LD research may have three main clinical implications. First, LD could be important to the study of consciousness, including its pathologies and other altered states. Second, LD could be used as a therapy for recurrent nightmares, a common symptom of depression and post-traumatic stress disorder. Finally, LD may allow for motor imagery during dreaming with possible improvement of physical rehabilitation. In all, we believe that LD research may clarify multiple aspects of brain functioning in its physiological, altered and pathological states.",
"title": ""
},
{
"docid": "16709c54458167634803100605a4f4a5",
"text": "Automatic Web page segmentation is the basis to adaptive Web browsing on mobile devices. It breaks a large page into smaller blocks, in which contents with coherent semantics are keeping together. Then, various adaptations like single column and thumbnail view can be developed. However, page segmentation remains a challenging task, and its poor result directly yields a frustrating user experience. As human usually understand the Web page well, in this paper, we start from Gestalt theory, a psychological theory that can explain human's visual perceptive processes. Four basic laws, proximity, similarity, closure, and simplicity, are drawn from Gestalt theory and then implemented in a program to simulate how human understand the layout of Web pages. The experiments show that this method outperforms existing methods.",
"title": ""
}
] |
scidocsrr
|
dfa9e242f159573351d919fa8ba62544
|
1 € Filter: a Simple Speed-based Low-pass Filter for Noisy Input in Interactive Systems
|
[
{
"docid": "32a4c17a53643042a5c19180bffd7c21",
"text": "Although mobile, tablet, large display, and tabletop computers increasingly present opportunities for using pen, finger, and wand gestures in user interfaces, implementing gesture recognition largely has been the privilege of pattern matching experts, not user interface prototypers. Although some user interface libraries and toolkits offer gesture recognizers, such infrastructure is often unavailable in design-oriented environments like Flash, scripting environments like JavaScript, or brand new off-desktop prototyping environments. To enable novice programmers to incorporate gestures into their UI prototypes, we present a \"$1 recognizer\" that is easy, cheap, and usable almost anywhere in about 100 lines of code. In a study comparing our $1 recognizer, Dynamic Time Warping, and the Rubine classifier on user-supplied gestures, we found that $1 obtains over 97% accuracy with only 1 loaded template and 99% accuracy with 3+ loaded templates. These results were nearly identical to DTW and superior to Rubine. In addition, we found that medium-speed gestures, in which users balanced speed and accuracy, were recognized better than slow or fast gestures for all three recognizers. We also discuss the effect that the number of templates or training examples has on recognition, the score falloff along recognizers' N-best lists, and results for individual gestures. We include detailed pseudocode of the $1 recognizer to aid development, inspection, extension, and testing.",
"title": ""
}
] |
[
{
"docid": "7635ad3e2ac2f8e72811bf056d29dfbb",
"text": "Nowadays, many consumer videos are captured by portable devices such as iPhone. Different from constrained videos that are produced by professionals, e.g., those for broadcast, summarizing multiple handheld videos from a same scenery is a challenging task. This is because: 1) these videos have dramatic semantic and style variances, making it difficult to extract the representative key frames; 2) the handheld videos are with different degrees of shakiness, but existing summarization techniques cannot alleviate this problem adaptively; and 3) it is difficult to develop a quality model that evaluates a video summary, due to the subjectiveness of video quality assessment. To solve these problems, we propose perceptual multiattribute optimization which jointly refines multiple perceptual attributes (i.e., video aesthetics, coherence, and stability) in a multivideo summarization process. In particular, a weakly supervised learning framework is designed to discover the semantically important regions in each frame. Then, a few key frames are selected based on their contributions to cover the multivideo semantics. Thereafter, a probabilistic model is proposed to dynamically fit the key frames into an aesthetically pleasing video summary, wherein its frames are stabilized adaptively. Experiments on consumer videos taken from sceneries throughout the world demonstrate the descriptiveness, aesthetics, coherence, and stability of the generated summary.",
"title": ""
},
{
"docid": "c73b5b81fa75676e96309610b4c6ac81",
"text": "We present a theory of excess stock market volatility, in which market movements are due to trades by very large institutional investors in relatively illiquid markets. Such trades generate significant spikes in returns and volume, even in the absence of important news about fundamentals. We derive the optimal trading behavior of these investors, which allows us to provide a unified explanation for apparently disconnected empirical regularities in returns, trading volume and investor size.",
"title": ""
},
{
"docid": "d840814a871a36479e465736077b375a",
"text": "With the popularity of the Internet, online news media are pouring numerous of news reports into the Internet every day. People get lost in the information explosion. Although the existing methods are able to extract news reports according to key words, and aggregate news reports into stories or events, they just list the related reports or events in order. Moreover, they are unable to provide the evolution relationships between events within a topic, thus people hardly capture the events development vein. In order to mine the underlying evolution relationships between events within the topic, we propose a novel event evolution Model in this paper. This model utilizes TFIEF and Temporal Distance Cost factor (TDC) to model the event evolution relationships. we construct event evolution relationships map to show the events development vein. The experimental evaluation on real dataset show that our technique precedes the baseline technique.",
"title": ""
},
{
"docid": "3df9bacf95281fc609ee7fd2d4724e91",
"text": "The deleterious effects of plastic debris on the marine environment were reviewed by bringing together most of the literature published so far on the topic. A large number of marine species is known to be harmed and/or killed by plastic debris, which could jeopardize their survival, especially since many are already endangered by other forms of anthropogenic activities. Marine animals are mostly affected through entanglement in and ingestion of plastic litter. Other less known threats include the use of plastic debris by \"invader\" species and the absorption of polychlorinated biphenyls from ingested plastics. Less conspicuous forms, such as plastic pellets and \"scrubbers\" are also hazardous. To address the problem of plastic debris in the oceans is a difficult task, and a variety of approaches are urgently required. Some of the ways to mitigate the problem are discussed.",
"title": ""
},
{
"docid": "28a185e08ec254647f8f6c6ad9160264",
"text": "0079-6565/$ see front matter Published by Elsevier doi:10.1016/j.pnmrs.2008.12.001 Abbreviations: NMR, Nuclear Magnetic Resonan RMSD, mean square deviation; HSQC, heteronuclea spectroscopy; NOE, Nuclear Overhauser Effect; RDC, re Protein Data Bank; pol g, zinc finger domain of the hu CH, C Ha; hSRI, human Set2-Rpb1 interacting do human transcription elongation factor CA150 (RN domain interacting protein); POF, principal order fram MD, molecular dynamics; SSE, secondary structure e WPS, well-packed satisfying; vdW, van der Waals; DO * Corresponding author. Tel.: +1 919 660 6583. E-mail address: brd+pnmrs09@cs.duke.edu (B.R. D URL: http://www.cs.duke.edu/brd (B.R. Donald).",
"title": ""
},
{
"docid": "d4570f189544b0c21c8b431b1e70e0a2",
"text": "A novel transform-domain image watermark based on chaotic sequences is proposed in this paper. A complex chaos-based scheme is developed to embed a gray-level image in the wavelet domain of the original color image signal. The chaos system plays an important role in the security and invisibility of the proposed scheme. The parameter and initial state of chaos system directly influence the generation of watermark information as a key. Meanwhile, the watermark information has the property of spread spectrum signal by chaotic sequence. To improve the invisibility of watermarked image Computer simulation results show that the proposed algorithm is imperceptible and is robust to most watermarking attacks, especially to image cropping, JPEG compression and multipliable noise.",
"title": ""
},
{
"docid": "e6291818253de22ee675f67eed8213d9",
"text": "This literature review focuses on aesthetics of interaction design with further goal of outlining a study towards prediction model of aesthetic value. The review covers three main issues, tightly related to aesthetics of interaction design: evaluation of aesthetics, relations between aesthetics and interaction qualities and implementation of aesthetics in interaction design. Analysis of previous models is carried out according to definition of interaction aesthetics: holistic approach to aesthetic perception considering its' action- and appearance-related components. As a result the empirical study is proposed for investigating the relations between attributes of interaction and users' aesthetic experience.",
"title": ""
},
{
"docid": "e8478d17694b39bd252175139a5ca14d",
"text": "Building a computationally creative system is a challenging undertaking. While such systems are beginning to proliferate, and a good number of them have been reasonably well-documented, it may seem, especially to newcomers to the field, that each system is a bespoke design that bears little chance of revealing any general knowledge about CC system building. This paper seeks to dispel this concern by presenting an abstract CC system description, or, in other words a practical, general approach for constructing CC systems.",
"title": ""
},
{
"docid": "4381ee2e578a640dda05e609ed7f6d53",
"text": "We introduce neural networks for end-to-end differentiable proving of queries to knowledge bases by operating on dense vector representations of symbols. These neural networks are constructed recursively by taking inspiration from the backward chaining algorithm as used in Prolog. Specifically, we replace symbolic unification with a differentiable computation on vector representations of symbols using a radial basis function kernel, thereby combining symbolic reasoning with learning subsymbolic vector representations. By using gradient descent, the resulting neural network can be trained to infer facts from a given incomplete knowledge base. It learns to (i) place representations of similar symbols in close proximity in a vector space, (ii) make use of such similarities to prove queries, (iii) induce logical rules, and (iv) use provided and induced logical rules for multi-hop reasoning. We demonstrate that this architecture outperforms ComplEx, a state-of-the-art neural link prediction model, on three out of four benchmark knowledge bases while at the same time inducing interpretable function-free first-order logic rules.",
"title": ""
},
{
"docid": "5ccf0b3f871f8362fccd4dbd35a05555",
"text": "Recent evidence suggests a positive impact of bilingualism on cognition, including later onset of dementia. However, monolinguals and bilinguals might have different baseline cognitive ability. We present the first study examining the effect of bilingualism on later-life cognition controlling for childhood intelligence. We studied 853 participants, first tested in 1947 (age = 11 years), and retested in 2008-2010. Bilinguals performed significantly better than predicted from their baseline cognitive abilities, with strongest effects on general intelligence and reading. Our results suggest a positive effect of bilingualism on later-life cognition, including in those who acquired their second language in adulthood.",
"title": ""
},
{
"docid": "084b83aed850aca07bed298de455c110",
"text": "Leveraging built-in cameras on smartphones and tablets, face authentication provides an attractive alternative of legacy passwords due to its memory-less authentication process. However, it has an intrinsic vulnerability against the media-based facial forgery (MFF) where adversaries use photos/videos containing victims' faces to circumvent face authentication systems. In this paper, we propose FaceLive, a practical and robust liveness detection mechanism to strengthen the face authentication on mobile devices in fighting the MFF-based attacks. FaceLive detects the MFF-based attacks by measuring the consistency between device movement data from the inertial sensors and the head pose changes from the facial video captured by built-in camera. FaceLive is practical in the sense that it does not require any additional hardware but a generic front-facing camera, an accelerometer, and a gyroscope, which are pervasively available on today's mobile devices. FaceLive is robust to complex lighting conditions, which may introduce illuminations and lead to low accuracy in detecting important facial landmarks; it is also robust to a range of cumulative errors in detecting head pose changes during face authentication.",
"title": ""
},
{
"docid": "03e6fab6da3644d64081387018012599",
"text": "High dimensionality of POMDP's belief state space is one major cause that makes the underlying optimal policy computation intractable. Belief compression refers to the methodology that projects the belief state space to a low-dimensional one to alleviate the problem. In this paper, we propose a novel orthogonal non-negative matrix factorization (O-NMF) for the projection. The proposed O-NMF not only factors the belief state space by minimizing the reconstruction error, but also allows the compressed POMDP formulation to be efficiently computed (due to its orthogonality) in a value-directed manner so that the value function will take same values for corresponding belief states in the original and compressed state spaces. We have tested the proposed approach using a number of benchmark problems and the empirical results confirms its effectiveness in achieving substantial computational cost saving in policy computation.",
"title": ""
},
{
"docid": "7fa90daad61f864ff5a2a55e38c554d8",
"text": "In recent years, wireless networks and applications have achieved marvelous successes in government, enterprise, home, and personal communication systems. The desired features of wireless communications draw lots of attention to the industrial communication and expected to bring benefits such as reduce deployment and maintenance after employed. However, the industrial communication system required real-time communication, which means the control systems in the factory are required accurate control and rapid communication, such as the industrial motion control system. In this type of application, the communication system performance and efficiency will be evaluated to ensure it applicable to the industrial network. However, there are a few original issues in the wireless communication, such as fading, multipath propagation and interference problems, which will affect the reliability and performance of industrial communication system operation. Therefore, we proposed a connection protection mechanism that cooperates with wireless network and visible light communication to achieve reliability and performance in industrial communication network. We will consider implementing this mechanism by using industrial wireless Ethernet in the near future.",
"title": ""
},
{
"docid": "83238b7ede9cc85090e44028e79375af",
"text": "Purpose – This paper aims to represent a capability model for industrial robot as they pertain to assembly tasks. Design/methodology/approach – The architecture of a real kit building application is provided to demonstrate how robot capabilities can be used to fully automate the planning of assembly tasks. Discussion on the planning infrastructure is done with the Planning Domain Definition Language (PDDL) for heterogeneous multi robot systems. Findings – The paper describes PDDL domain and problem files that are used by a planner to generate a plan for kitting. Discussion on the plan shows that the best robot is selected to carry out assembly actions. Originality/value – The author presents a robot capability model that is intended to be used for helping manufacturers to characterize the different capabilities their robots contribute to help the end user to select the appropriate robots for the appropriate tasks, selecting backup robots during robot’s failures to limit the deterioration of the system’s productivity and the products’ quality and limiting robots’ failures and increasing productivity by providing a tool to manufacturers that outputs a process plan that assigns the best robot to each task needed to accomplish the assembly.",
"title": ""
},
{
"docid": "a3f6f2e6415267bb5b9ac92c3c77e872",
"text": "In recent times, the use of separable convolutions in deep convolutional neural network architectures has been explored. Several researchers, most notably and have used separable convolutions in their deep architectures and have demonstrated state of the art or close to state of the art performance. However, the underlying mechanism of action of separable convolutions is still not fully understood. Although, their mathematical definition is well understood as a depth-wise convolution followed by a point-wise convolution, “deeper” interpretations (such as the “extreme Inception”) hypothesis have failed to provide a thorough explanation of their efficacy. In this paper, we propose a hybrid interpretation that we believe is a better model for explaining the efficacy of separable convolutions.",
"title": ""
},
{
"docid": "d7bd02def0f010016b53e2c41b42df35",
"text": "We utilise smart eyeglasses for dietary monitoring, in particular to sense food chewing. Our approach is based on a 3D-printed regular eyeglasses design that could accommodate processing electronics and Electromyography (EMG) electrodes. Electrode positioning was analysed and an optimal electrode placement at the temples was identified. We further compared gel and dry fabric electrodes. For the subsequent analysis, fabric electrodes were attached to the eyeglasses frame. The eyeglasses were used in a data recording study with eight participants eating different foods. Two chewing cycle detection methods and two food classification algorithms were compared. Detection rates for individual chewing cycles reached a precision and recall of 80%. For five foods, classification accuracy for individual chewing cycles varied between 43% and 71%. Majority voting across intake sequences improved accuracy, ranging between 63% and 84%. We concluded that EMG-based chewing analysis using smart eyeglasses can contribute essential chewing structure information to dietary monitoring systems, while the eyeglasses remain inconspicuous and thus could be continuously used.",
"title": ""
},
{
"docid": "69ec6acad98e8d2945c65626c97eca45",
"text": "Body-area networks of pervasive wearable devices are increasingly used for health monitoring, personal assistance, entertainment, and home automation. In an ideal world, a user would simply wear their desired set of devices with no configuration necessary: the devices would discover each other, recognize that they are on the same person, construct a secure communications channel, and recognize the user to which they are attached. In this paper we address a portion of this vision by offering a wearable system that unobtrusively recognizes the person wearing it. Because it can recognize the user, our system can properly label sensor data or personalize interactions.\n Our recognition method uses bioimpedance, a measurement of how tissue responds when exposed to an electrical current. By collecting bioimpedance samples using a small wearable device we designed, our system can determine that (a)the wearer is indeed the expected person and (b)~the device is physically on the wearer's body. Our recognition method works with 98% balanced-accuracy under a cross-validation of a day's worth of bioimpedance samples from a cohort of 8 volunteer subjects. We also demonstrate that our system continues to recognize a subset of these subjects even several months later. Finally, we measure the energy requirements of our system as implemented on a Nexus~S smart phone and custom-designed module for the Shimmer sensing platform.",
"title": ""
},
{
"docid": "7cd8dee294d751ec6c703d628e0db988",
"text": "A major component of secondary education is learning to write effectively, a skill which is bolstered by repeated practice with formative guidance. However, providing focused feedback to every student on multiple drafts of each essay throughout the school year is a challenge for even the most dedicated of teachers. This paper first establishes a new ordinal essay scoring model and its state of the art performance compared to recent results in the Automated Essay Scoring field. Extending this model, we describe a method for using prediction on realistic essay variants to give rubric-specific formative feedback to writers. This method is used in Revision Assistant, a deployed data-driven educational product that provides immediate, rubric-specific, sentence-level feedback to students to supplement teacher guidance. We present initial evaluations of this feedback generation, both offline and in deployment.",
"title": ""
}
] |
scidocsrr
|
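The query of the record above refers to the 1€ Filter, a speed-based low-pass filter for noisy input in interactive systems. As a rough illustration of the idea named in that title, the sketch below implements an adaptive exponential smoother whose cutoff frequency rises with the estimated signal speed; the class name, the parameter names (min_cutoff, beta, d_cutoff), and the default values are illustrative assumptions, not taken from the paper's reference implementation.

```python
import math
import random

class LowPass:
    """Exponential smoothing: y_i = alpha * x_i + (1 - alpha) * y_{i-1}."""
    def __init__(self):
        self.initialized = False
        self.prev = 0.0

    def apply(self, value, alpha):
        if not self.initialized:
            self.prev = value
            self.initialized = True
        else:
            self.prev = alpha * value + (1.0 - alpha) * self.prev
        return self.prev

class OneEuroFilter:
    """Speed-adaptive low-pass filter: low cutoff at low speed (less jitter),
    higher cutoff at high speed (less lag)."""
    def __init__(self, freq, min_cutoff=1.0, beta=0.007, d_cutoff=1.0):
        self.freq = freq              # sampling frequency in Hz (assumed fixed here)
        self.min_cutoff = min_cutoff  # minimum cutoff frequency in Hz
        self.beta = beta              # speed coefficient
        self.d_cutoff = d_cutoff      # cutoff used when smoothing the derivative
        self.x_filter = LowPass()
        self.dx_filter = LowPass()

    def _alpha(self, cutoff):
        # smoothing factor of a first-order low-pass filter at the given cutoff
        tau = 1.0 / (2.0 * math.pi * cutoff)
        return 1.0 / (1.0 + tau * self.freq)

    def __call__(self, x):
        # estimate the signal's speed from the previously filtered value
        prev = self.x_filter.prev if self.x_filter.initialized else x
        dx = (x - prev) * self.freq
        dx_hat = self.dx_filter.apply(dx, self._alpha(self.d_cutoff))
        # adapt the cutoff to the speed, then smooth the sample itself
        cutoff = self.min_cutoff + self.beta * abs(dx_hat)
        return self.x_filter.apply(x, self._alpha(cutoff))

# toy usage: smooth a noisy 1D pointer coordinate sampled at 120 Hz
f = OneEuroFilter(freq=120.0)
for i in range(5):
    noisy = math.sin(i / 120.0) + random.uniform(-0.01, 0.01)
    print(round(f(noisy), 4))
```

In this kind of filter, a lower min_cutoff reduces jitter when the input moves slowly, while a larger beta reduces lag when it moves quickly.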
3caa784c0c25b4cc064f50561d2eeb5f
|
A mediating influence on customer loyalty : The role of perceived value
|
[
{
"docid": "e21c2eb941f69329c157b7a46dac4511",
"text": "In recent research on service quality it has been argued that the relationship between perceived service quality and service loyalty is an issue which requires conceptual and empirical elaboration through replication and extension of current knowledge. Focuses on the refinement of a scale for measuring service loyalty dimensions and the relationships between dimensions of service quality and these service loyalty dimensions. The results of an empirical study of a large sample of customers from four different service industries suggest that four dimensions of service loyalty can be identified: purchase intentions, word-of-mouth communication; price sensitivity; and complaining behaviour. Further analysis yields an intricate pattern of service quality-service loyalty relationships at the level of the individual dimensions with notable differences across",
"title": ""
},
{
"docid": "80ce6c8c9fc4bf0382c5f01d1dace337",
"text": "Customer loyalty is viewed as the strength of the relationship between an individual's relative attitude and repeat patronage. The relationship is seen as mediated by social norms and situational factors. Cognitive, affective, and conative antecedents of relative attitude are identified as contributing to loyalty, along with motivational, perceptual, and behavioral consequences. Implications for research and for the management of loyalty are derived.",
"title": ""
},
{
"docid": "4b570eb16d263b2df0a8703e9135f49c",
"text": "ions. They also presume that consumers carefully calculate the give and get components of value, an assumption that did not hold true for most consumers in the exploratory study. Price as a Quality Indicator Most experimental studies related to quality have focused on price as the key extrinsic quality signal. As suggested in the propositions, price is but one of several potentially useful extrinsic cues; brand name or package may be equally or more important, especially in packaged goods. Further, evidence of a generalized price-perceived quality relationship is inconclusive. Quality research may benefit from a de-emphasis on price as the main extrinsic quality indicator. Inclusion of other important indicators, as well as identification of situations in which each of those indicators is important, may provide more interesting and useful answers about the extrinsic signals consumers use. Management Implications An understanding of what quality and value mean to consumers offers the promise of improving brand positions through more precise market analysis and segmentation, product planning, promotion, and pricing strategy. The model presented here suggests the following strategies that can be implemented to understand and capitalize on brand quality and value. Close the Quality Perception Gap Though managers increasingly acknowledge the importance of quality, many continue to define and measure it from the company's perspective. Closing the gap between objective and perceived quality requires that the company view quality the way the consumer does. Research that investigates which cues are important and how consumers form impressions of qualConsumer Perceptions of Price, Quality, and Value / 17 ity based on those technical, objective cues is necessary. Companies also may benefit from research that identifies the abstract dimensions of quality desired by consumers in a product class. Identify Key Intrinsic and Extrinsic Attribute",
"title": ""
}
] |
[
{
"docid": "54314e448a1dd146289c6c4859ab9791",
"text": "The article investigates how the difficulties caused by the flexibility of the endoscope shaft could be solved and to provide a categorized overview of designs that potentially provide a solution. The following are discussed: paradoxical problem of flexible endoscopy; NOTES or hybrid endoscopy surgery; design challenges; shaft-guidance: guiding principles; virtual track guidance; physical track guidance; shaft-guidance: rigidity control; material stiffening; structural stiffening; and hybrid stiffening.",
"title": ""
},
{
"docid": "77f6ca45b479cebb17734e954af5f9fc",
"text": "For data-intensive computing, the low throughput of the existing disk-bound storage systems is a major bottleneck. Recent emergence of the in-memory file systems with heterogeneous storage support mitigates this problem to a great extent. Parallel programming frameworks, e.g. Hadoop MapReduce and Spark are increasingly being run on such high-performance file systems. However, no comprehensive study has been done to analyze the impacts of the in-memory file systems on various Big Data applications. This paper characterizes two file systems in literature, Tachyon [17] and Triple-H [13] that support in-memory and heterogeneous storage, and discusses the impacts of these two architectures on the performance and fault tolerance of Hadoop MapReduce and Spark applications. We present a complete methodology for evaluating MapReduce and Spark workloads on top of in-memory file systems and provide insights about the interactions of different system components while running these workloads. We also propose advanced acceleration techniques to adapt Triple-H for iterative applications and study the impact of different parameters on the performance of MapReduce and Spark jobs on HPC systems. Our evaluations show that, although Tachyon is 5x faster than HDFS for primitive operations, Triple-H performs 47% and 2.4x better than Tachyon for MapReduce and Spark workloads, respectively. Triple-H also accelerates K-Means by 15% over HDFS and 9% over Tachyon.",
"title": ""
},
{
"docid": "0b6693195ef302e2c160d65956d80eea",
"text": "Let f : Sd−1 × Sd−1 → R be a function of the form f(x,x′) = g(〈x,x′〉) for g : [−1, 1] → R. We give a simple proof that shows that poly-size depth two neural networks with (exponentially) bounded weights cannot approximate f whenever g cannot be approximated by a low degree polynomial. Moreover, for many g’s, such as g(x) = sin(πdx), the number of neurons must be 2 . Furthermore, the result holds w.r.t. the uniform distribution on Sd−1 × Sd−1. As many functions of the above form can be well approximated by poly-size depth three networks with polybounded weights, this establishes a separation between depth two and depth three networks w.r.t. the uniform distribution on Sd−1 × Sd−1.",
"title": ""
},
{
"docid": "0b872b1d13c9a96c52046b41272e3a5f",
"text": "This dissertation describes experiments conducted to evaluate an algorithm that attempts to automatically recognise emotions (affect) in written language. Examples from several areas of research that can inform affect recognition experiments are reviewed, including sentiment analysis, subjectivity analysis, and the psychology of emotion. An affect annotation exercise was carried out in order to build a suitable set of test data for the experiment. An algorithm to classify according to the emotional content of sentences was derived from an existing technique for sentiment analysis. When compared against the manual annotations, the algorithm achieved an accuracy of 32.78%. Several factors indicate that the method is making slightly informed choices, and could be useful as part of a holistic approach to recognising the affect represented in text. iii Acknowledgements",
"title": ""
},
{
"docid": "7a5d22ae156d6a62cfd080c2a58103d2",
"text": "Stochastic neurons and hard non-linearities can be useful for a number of reasons in deep learning models, but in many cases they pose a challenging problem: how to estimate the gradient of a loss function with respect to the input of such stochastic or non-smooth neurons? I.e., can we “back-propagate” through these stochastic neurons? We examine this question, existing approaches, and compare four families of solutions, applicable in different settings. One of them is the minimum variance unbiased gradient estimator for stochatic binary neurons (a special case of the REINFORCE algorithm). A second approach, introduced here, decomposes the operation of a binary stochastic neuron into a stochastic binary part and a smooth differentiable part, which approximates the expected effect of the pure stochatic binary neuron to first order. A third approach involves the injection of additive or multiplicative noise in a computational graph that is otherwise differentiable. A fourth approach heuristically copies the gradient with respect to the stochastic output directly as an estimator of the gradient with respect to the sigmoid argument (we call this the straight-through estimator). To explore a context where these estimators are useful, we consider a small-scale version of conditional computation, where sparse stochastic units form a distributed representation of gaters that can turn off in combinatorially many ways large chunks of the computation performed in the rest of the neural network. In this case, it is important that the gating units produce an actual 0 most of the time. The resulting sparsity can be potentially be exploited to greatly reduce the computational cost of large deep networks for which conditional computation would be useful.",
"title": ""
},
{
"docid": "ac6430e097fb5a7dc1f7864f283dcf47",
"text": "In the task of Object Recognition, there exists a dichotomy between the categorization of objects and estimating object pose, where the former necessitates a view-invariant representation, while the latter requires a representation capable of capturing pose information over different categories of objects. With the rise of deep architectures, the prime focus has been on object category recognition. Deep learning methods have achieved wide success in this task. In contrast, object pose regression using these approaches has received relatively much less attention. In this paper we show how deep architectures, specifically Convolutional Neural Networks (CNN), can be adapted to the task of simultaneous categorization and pose estimation of objects. We investigate and analyze the layers of various CNN models and extensively compare between them with the goal of discovering how the layers of distributed representations of CNNs represent object pose information and how this contradicts object category representations. We extensively experiment on two recent large and challenging multi-view datasets. Our models achieve better than state-of-the-art performance on both datasets.",
"title": ""
},
{
"docid": "9068ae05b4064a98977f6a19bae6ccf0",
"text": "We present Raman spectroscopy measurements on single- and few-layer graphene flakes. By using a scanning confocal approach, we collect spectral data with spatial resolution, which allows us to directly compare Raman images with scanning force micrographs. Single-layer graphene can be distinguished from double- and few-layer by the width of the D' line: the single peak for single-layer graphene splits into different peaks for the double-layer. These findings are explained using the double-resonant Raman model based on ab initio calculations of the electronic structure and of the phonon dispersion. We investigate the D line intensity and find no defects within the flake. A finite D line response originating from the edges can be attributed either to defects or to the breakdown of translational symmetry.",
"title": ""
},
{
"docid": "a17052726cbf3239c3f516b51af66c75",
"text": "Source code duplication occurs frequently within large software systems. Pieces of source code, functions, and data types are often duplicated in part, or in whole, for a variety of reasons. Programmers may simply be reusing a piece of code via copy and paste or they may be “reinventing the wheel”. Previous research on the detection of clones is mainly focused on identifying pieces of code with similar (or nearly similar) structure. Our approach is to examine the source code text (comments and identifiers) and identify implementations of similar high-level concepts (e.g., abstract data types). The approach uses an information retrieval technique (i.e., latent semantic indexing) to statically analyze the software system and determine semantic similarities between source code documents (i.e., functions, files, or code segments). These similarity measures are used to drive the clone detection process. The intention of our approach is to enhance and augment existing clone detection methods that are based on structural analysis. This synergistic use of methods will improve the quality of clone detection. A set of experiments is presented that demonstrate the usage of semantic similarity measure to identify clones within a version of NCSA Mosaic.",
"title": ""
},
{
"docid": "6dbe5a46a96857b58fc6c3d0ca7ded94",
"text": "High-school grades are often viewed as an unreliable criterion for college admissions, owing to differences in grading standards across high schools, while standardized tests are seen as methodologically rigorous, providing a more uniform and valid yardstick for assessing student ability and achievement. The present study challenges that conventional view. The study finds that high-school grade point average (HSGPA) is consistently the best predictor not only of freshman grades in college, the outcome indicator most often employed in predictive-validity studies, but of four-year college outcomes as well. A previous study, UC and the SAT (Geiser with Studley, 2003), demonstrated that HSGPA in college-preparatory courses was the best predictor of freshman grades for a sample of almost 80,000 students admitted to the University of California. Because freshman grades provide only a short-term indicator of college performance, the present study tracked four-year college outcomes, including cumulative college grades and graduation, for the same sample in order to examine the relative contribution of high-school record and standardized tests in predicting longerterm college performance. Key findings are: (1) HSGPA is consistently the strongest predictor of four-year college outcomes for all academic disciplines, campuses and freshman cohorts in the UC sample; (2) surprisingly, the predictive weight associated with HSGPA increases after the freshman year, accounting for a greater proportion of variance in cumulative fourth-year than first-year college grades; and (3) as an admissions criterion, HSGPA has less adverse impact than standardized tests on disadvantaged and underrepresented minority students. The paper concludes with a discussion of the implications of these findings for admissions policy and argues for greater emphasis on the high-school record, and a corresponding de-emphasis on standardized tests, in college admissions. * The study was supported by a grant from the Koret Foundation. Geiser and Santelices: VALIDITY OF HIGH-SCHOOL GRADES 2 CSHE Research & Occasional Paper Series Introduction and Policy Context This study examines the relative contribution of high-school grades and standardized admissions tests in predicting students’ long-term performance in college, including cumulative grade-point average and college graduation. The relative emphasis on grades vs. tests as admissions criteria has become increasingly visible as a policy issue at selective colleges and universities, particularly in states such as Texas and California, where affirmative action has been challenged or eliminated. Compared to high-school gradepoint average (HSGPA), scores on standardized admissions tests such as the SAT I are much more closely correlated with students’ socioeconomic background characteristics. As shown in Table 1, for example, among our study sample of almost 80,000 University of California (UC) freshmen, SAT I verbal and math scores exhibit a strong, positive relationship with measures of socioeconomic status (SES) such as family income, parents’ education and the academic ranking of a student’s high school, whereas HSGPA is only weakly associated with such measures. As a result, standardized admissions tests tend to have greater adverse impact than HSGPA on underrepresented minority students, who come disproportionately from disadvantaged backgrounds. 
The extent of the difference can be seen by rank-ordering students on both standardized tests and high-school grades and comparing the distributions. Rank-ordering students by test scores produces much sharper racial/ethnic stratification than when the same students are ranked by HSGPA, as shown in Table 2. It should be borne in mind that the UC sample shown here represents a highly select group of students, drawn from the top 12.5% of California high-school graduates under the provisions of the state's Master Plan for Higher Education. Overall, under-represented minority students account for about 17 percent of that group, although their percentage varies considerably across different HSGPA and SAT levels within the sample. When students are ranked by HSGPA, underrepresented minorities account for 28 percent of students in the bottom ... Table 1 (Correlation of Admissions Factors with SES): SAT I verbal correlates 0.32 with family income, 0.39 with parents' education, and 0.32 with school API decile; SAT I math correlates 0.24, 0.32, and 0.39, respectively; HSGPA correlates 0.04, 0.06, and 0.01. Source: UC Corporate Student System data on 79,785 first-time freshmen entering between Fall 1996 and Fall 1999.",
"title": ""
},
{
"docid": "b58055779111f5ae0b6cf5b70220b20e",
"text": "Screen media usage, sleep time and socio-demographic features are related to adolescents' academic performance, but interrelations are little explored. This paper describes these interrelations and behavioral profiles clustered in low and high academic performance. A nationally representative sample of 3,095 Spanish adolescents, aged 12 to 18, was surveyed on 15 variables linked to the purpose of the study. A Self-Organizing Maps analysis established non-linear interrelationships among these variables and identified behavior patterns in subsequent cluster analyses. Topological interrelationships established from the 15 emerging maps indicated that boys used more passive videogames and computers for playing than girls, who tended to use mobile phones to communicate with others. Adolescents with the highest academic performance were the youngest. They slept more and spent less time using sedentary screen media when compared to those with the lowest performance, and they also showed topological relationships with higher socioeconomic status adolescents. Cluster 1 grouped boys who spent more than 5.5 hours daily using sedentary screen media. Their academic performance was low and they slept an average of 8 hours daily. Cluster 2 gathered girls with an excellent academic performance, who slept nearly 9 hours per day, and devoted less time daily to sedentary screen media. Academic performance was directly related to sleep time and socioeconomic status, but inversely related to overall sedentary screen media usage. Profiles from the two clusters were strongly differentiated by gender, age, sedentary screen media usage, sleep time and academic achievement. Girls with the highest academic results had a medium socioeconomic status in Cluster 2. Findings may contribute to establishing recommendations about the timing and duration of screen media usage in adolescents and appropriate sleep time needed to successfully meet the demands of school academics and to improve interventions targeting to affect behavioral change.",
"title": ""
},
{
"docid": "fcf410fc492f3ddf80be9cb5351f7aed",
"text": "Unmanned Combat Aerial Vehicle (UCAV) research has allowed the state of the art of the remote-operation of these technologies to advance significantly in modern times, though mostly focusing on ground strike scenarios. Within the context of air-to-air combat, millisecond long timeframes for critical decisions inhibit remoteoperation of UCAVs. Beyond this, given an average human visual reaction time of 0.15 to 0.30 seconds, and an even longer time to think of optimal plans and coordinate them with friendly forces, there is a huge window of improvement that an Artificial Intelligence (AI) can capitalize upon. While many proponents for an increase in autonomous capabilities herald the ability to design aircraft that can perform extremely high-g maneuvers as well as the benefit of reducing risk to our pilots, this white paper will primarily focus on the increase in capabilities of real-time decision making.",
"title": ""
},
{
"docid": "3910a3317ea9ff4ea6c621e562b1accc",
"text": "Compaction of agricultural soils is a concern for many agricultural soil scientists and farmers since soil compaction, due to heavy field traffic, has resulted in yield reduction of most agronomic crops throughout the world. Soil compaction is a physical form of soil degradation that alters soil structure, limits water and air infiltration, and reduces root penetration in the soil. Consequences of soil compaction are still underestimated. A complete understanding of processes involved in soil compaction is necessary to meet the future global challenge of food security. We review here the advances in understanding, quantification, and prediction of the effects of soil compaction. We found the following major points: (1) When a soil is exposed to a vehicular traffic load, soil water contents, soil texture and structure, and soil organic matter are the three main factors which determine the degree of compactness in that soil. (2) Soil compaction has direct effects on soil physical properties such as bulk density, strength, and porosity; therefore, these parameters can be used to quantify the soil compactness. (3) Modified soil physical properties due to soil compaction can alter elements mobility and change nitrogen and carbon cycles in favour of more emissions of greenhouse gases under wet conditions. (4) Severe soil compaction induces root deformation, stunted shoot growth, late germination, low germination rate, and high mortality rate. (5) Soil compaction decreases soil biodiversity by decreasing microbial biomass, enzymatic activity, soil fauna, and ground flora. (6) Boussinesq equations and finite element method models, that predict the effects of the soil compaction, are restricted to elastic domain and do not consider existence of preferential paths of stress propagation and localization of deformation in compacted soils. (7) Recent advances in physics of granular media and soil mechanics relevant to soil compaction should be used to progress in modelling soil compaction.",
"title": ""
},
{
"docid": "06525bcc03586c8d319f5d6f1d95b852",
"text": "Many different automatic color correction approaches have been proposed by different research communities in the past decade. However, these approaches are seldom compared, so their relative performance and applicability are unclear. For multi-view image and video stitching applications, an ideal color correction approach should be effective at transferring the color palette of the source image to the target image, and meanwhile be able to extend the transferred color from the overlapped area to the full target image without creating visual artifacts. In this paper we evaluate the performance of color correction approaches for automatic multi-view image and video stitching. We consider nine color correction algorithms from the literature applied to 40 synthetic image pairs and 30 real mosaic image pairs selected from different applications. Experimental results show that both parametric and non-parametric approaches have members that are effective at transferring colors, while parametric approaches are generally better than non-parametric approaches in extendability.",
"title": ""
},
{
"docid": "c17e6363762e0e9683b51c0704d43fa7",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
},
{
"docid": "1a1fd84f2f7e13966bceccdd602d967c",
"text": "The significance of social media has already been proven in provoking transformation of public opinion for developed countries in improving democratic process of elections. On the contrary, developing countries lacking basic necessities of life possess monopolistic electoral system in which candidates are elected based on tribes, family backgrounds, or landlord influences. They extort voters to cast votes against their promises for the provision of basic needs. Similarly voters also poll votes for personal interests being unaware of party manifesto or national interest. These issues can be addressed by social media, resulting as ongoing process of improvement for presently adopted electoral procedures. People of Pakistan utilized social media to garner support and campaign for political parties in General Elections 2013. Political leaders, parties, and people of Pakistan disseminated party's agenda and advocacy of party's ideology on Twitter without much campaigning cost. To study effectiveness of social media inferred from individual's political behavior, large scale analysis, sentiment detection & tweet classification was done in order to classify, predict and forecast election results. The experimental results depicts that social media content can be used as an effective indicator for capturing political behaviors of different parties Positive, negative and neutral behavior of the party followers as well as party's campaign impact can be predicted from the analysis. The analytical findings proved to be having considerable correspondence with actual results as published by Election Commission of Pakistan..",
"title": ""
},
{
"docid": "e0db3c5605ea2ea577dda7d549e837ae",
"text": "This paper presents a system based on new operators for handling sets of propositional clauses represented by means of ZBDDs. The high compression power of such data structures allows efficient encodings of structured instances. A specialized operator for the distribution of sets of clauses is introduced and used for performing multi-resolution on clause sets. Cut eliminations between sets of clauses of exponential size may then be performed using polynomial size data structures. The ZRES system, a new implementation of the Davis-Putnam procedure of 1960, solves two hard problems for resolution, that are currently out of the scope of the best SAT provers.",
"title": ""
},
{
"docid": "2d5dba872d7cd78a9e2d57a494a189ea",
"text": "In this chapter, we give an overview of what ontologies are and how they can be used. We discuss the impact of the expressiveness, the number of domain elements, the community size, the conceptual dynamics, and other variables on the feasibility of an ontology project. Then, we break down the general promise of ontologies of facilitating the exchange and usage of knowledge to six distinct technical advancements that ontologies actually provide, and discuss how this should influence design choices in ontology projects. Finally, we summarize the main challenges of ontology management in real-world applications, and explain which expectations from practitioners can be met as",
"title": ""
},
{
"docid": "81b5379abf3849e1ae4e233fd4955062",
"text": "Three-phase dc/dc converters have the superior characteristics including lower current rating of switches, the reduced output filter requirement, and effective utilization of transformers. To further reduce the voltage stress on switches, three-phase three-level (TPTL) dc/dc converters have been investigated recently; however, numerous active power switches result in a complicated configuration in the available topologies. Therefore, a novel TPTL dc/dc converter adopting a symmetrical duty cycle control is proposed in this paper. Compared with the available TPTL converters, the proposed converter has fewer switches and simpler configuration. The voltage stress on all switches can be reduced to the half of the input voltage. Meanwhile, the ripple frequency of output current can be increased significantly, resulting in a reduced filter requirement. Experimental results from a 540-660-V input and 48-V/20-A output are presented to verify the theoretical analysis and the performance of the proposed converter.",
"title": ""
},
{
"docid": "f5ce4a13a8d081243151e0b3f0362713",
"text": "Despite the growing popularity of digital imaging devices, the problem of accurately estimating the spatial frequency response or optical transfer function (OTF) of these devices has been largely neglected. Traditional methods for estimating OTFs were designed for film cameras and other devices that form continuous images. These traditional techniques do not provide accurate OTF estimates for typical digital image acquisition devices because they do not account for the fixed sampling grids of digital devices . This paper describes a simple method for accurately estimating the OTF of a digital image acquisition device. The method extends the traditional knife-edge technique''3 to account for sampling. One of the principal motivations for digital imaging systems is the utility of digital image processing algorithms, many of which require an estimate of the OTF. Algorithms for enhancement, spatial registration, geometric transformations, and other purposes involve restoration—removing the effects of the image acquisition device. Nearly all restoration algorithms (e.g., the",
"title": ""
},
{
"docid": "322141533594ed1927f36b850b8d963f",
"text": "Microelectrodes are widely used in the physiological recording of cell field potentials. As microelectrode signals are generally in the μV range, characteristics of the cell-electrode interface are important to the recording accuracy. Although the impedance of the microelectrode-solution interface has been well studied and modeled in the past, no effective model has been experimentally verified to estimate the noise of the cell-electrode interface. Also in existing interface models, spectral information is largely disregarded. In this work, we developed a model for estimating the noise of the cell-electrode interface from interface impedances. This model improves over existing noise models by including the cell membrane capacitor and frequency dependent impedances. With low-noise experiment setups, this model is verified by microelectrode array (MEA) experiments with mouse muscle myoblast cells. Experiments show that the noise estimated from this model has <;10% error, which is much less than estimations from existing models. With this model, noise of the cell-electrode interface can be estimated by simply measuring interface impedances. This model also provides insights for micro- electrode design to achieve good recording signal-to-noise ratio.",
"title": ""
}
] |
scidocsrr
|
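One of the negative passages in the record above describes the straight-through estimator for stochastic binary neurons, i.e., copying the gradient with respect to the binary output directly back as the gradient with respect to the sigmoid argument. A minimal NumPy sketch of that idea follows; the function names and toy values are illustrative only, and the sigmoid-derivative scaling is shown as an optional variant rather than the canonical form.

```python
import numpy as np

def binary_stochastic_forward(logits, rng):
    """Forward pass of a stochastic binary neuron: emit 1 with probability sigmoid(logit)."""
    p = 1.0 / (1.0 + np.exp(-logits))
    samples = (rng.random(logits.shape) < p).astype(logits.dtype)
    return samples, p

def straight_through_backward(grad_wrt_output, p=None):
    """Straight-through estimator: treat the sampling/thresholding as if it were the
    identity, so the incoming gradient is passed straight back to the pre-sigmoid
    input. If p is given, scale by the sigmoid derivative p * (1 - p) as a variant."""
    if p is None:
        return grad_wrt_output
    return grad_wrt_output * p * (1.0 - p)

rng = np.random.default_rng(0)
logits = np.array([-1.0, 0.0, 2.0])
samples, p = binary_stochastic_forward(logits, rng)
grad_from_loss = np.ones_like(logits)  # pretend dL/d(sample) = 1 for each unit
print(samples, straight_through_backward(grad_from_loss, p))
```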
95ff5d203e53bc3d55f10adeff45ec3d
|
Residual MeshNet: Learning to Deform Meshes for Single-View 3D Reconstruction
|
[
{
"docid": "8d83568ca0c89b1a6e344341bb92c2d0",
"text": "Many underlying relationships among data in several areas of science and engineering, e.g., computer vision, molecular chemistry, molecular biology, pattern recognition, and data mining, can be represented in terms of graphs. In this paper, we propose a new neural network model, called graph neural network (GNN) model, that extends existing neural network methods for processing the data represented in graph domains. This GNN model, which can directly process most of the practically useful types of graphs, e.g., acyclic, cyclic, directed, and undirected, implements a function tau(G,n) isin IRm that maps a graph G and one of its nodes n into an m-dimensional Euclidean space. A supervised learning algorithm is derived to estimate the parameters of the proposed GNN model. The computational cost of the proposed algorithm is also considered. Some experimental results are shown to validate the proposed learning algorithm, and to demonstrate its generalization capabilities.",
"title": ""
},
{
"docid": "d214ef50a5c26fb65d8c06ea7db3d07c",
"text": "We introduce a method for learning to generate the surface of 3D shapes. Our approach represents a 3D shape as a collection of parametric surface elements and, in contrast to methods generating voxel grids or point clouds, naturally infers a surface representation of the shape. Beyond its novelty, our new shape generation framework, AtlasNet, comes with significant advantages, such as improved precision and generalization capabilities, and the possibility to generate a shape of arbitrary resolution without memory issues. We demonstrate these benefits and compare to strong baselines on the ShapeNet benchmark for two applications: (i) autoencoding shapes, and (ii) single-view reconstruction from a still image. We also provide results showing its potential for other applications, such as morphing, parametrization, super-resolution, matching, and co-segmentation.",
"title": ""
},
{
"docid": "a90f865e053b9339052a4d00281dbd03",
"text": "Generation of 3D data by deep neural network has been attracting increasing attention in the research community. The majority of extant works resort to regular representations such as volumetric grids or collection of images, however, these representations obscure the natural invariance of 3D shapes under geometric transformations, and also suffer from a number of other issues. In this paper we address the problem of 3D reconstruction from a single image, generating a straight-forward form of output – point cloud coordinates. Along with this problem arises a unique and interesting issue, that the groundtruth shape for an input image may be ambiguous. Driven by this unorthordox output form and the inherent ambiguity in groundtruth, we design architecture, loss function and learning paradigm that are novel and effective. Our final solution is a conditional shape sampler, capable of predicting multiple plausible 3D point clouds from an input image. In experiments not only can our system outperform state-of-the-art methods on single image based 3D reconstruction benchmarks, but it also shows strong performance for 3D shape completion and promising ability in making multiple plausible predictions.",
"title": ""
},
{
"docid": "348a5c33bde53e7f9a1593404c6589b4",
"text": "Few prior works study deep learning on point sets. PointNet [20] is a pioneer in this direction. However, by design PointNet does not capture local structures induced by the metric space points live in, limiting its ability to recognize fine-grained patterns and generalizability to complex scenes. In this work, we introduce a hierarchical neural network that applies PointNet recursively on a nested partitioning of the input point set. By exploiting metric space distances, our network is able to learn local features with increasing contextual scales. With further observation that point sets are usually sampled with varying densities, which results in greatly decreased performance for networks trained on uniform densities, we propose novel set learning layers to adaptively combine features from multiple scales. Experiments show that our network called PointNet++ is able to learn deep point set features efficiently and robustly. In particular, results significantly better than state-of-the-art have been obtained on challenging benchmarks of 3D point clouds.",
"title": ""
},
{
"docid": "b70716877c23701d0897ab4a42a5beba",
"text": "We propose an end-to-end deep learning architecture that produces a 3D shape in triangular mesh from a single color image. Limited by the nature of deep neural network, previous methods usually represent a 3D shape in volume or point cloud, and it is non-trivial to convert them to the more ready-to-use mesh model. Unlike the existing methods, our network represents 3D mesh in a graph-based convolutional neural network and produces correct geometry by progressively deforming an ellipsoid, leveraging perceptual features extracted from the input image. We adopt a coarse-to-fine strategy to make the whole deformation procedure stable, and define various of mesh related losses to capture properties of different levels to guarantee visually appealing and physically accurate 3D geometry. Extensive experiments show that our method not only qualitatively produces mesh model with better details, but also achieves higher 3D shape estimation accuracy compared to the state-of-the-art.",
"title": ""
},
{
"docid": "a959f4ac1f92aa43ce7ae24e4ff092e9",
"text": "Inspired by the recent success of methods that employ shape priors to achieve robust 3D reconstructions, we propose a novel recurrent neural network architecture that we call the 3D Recurrent Reconstruction Neural Network (3D-R2N2). The network learns a mapping from images of objects to their underlying 3D shapes from a large collection of synthetic data [1]. Our network takes in one or more images of an object instance from arbitrary viewpoints and outputs a reconstruction of the object in the form of a 3D occupancy grid. Unlike most of the previous works, our network does not require any image annotations or object class labels for training or testing. Our extensive experimental analysis shows that our reconstruction framework i) outperforms the state-of-theart methods for single view reconstruction, and ii) enables the 3D reconstruction of objects in situations when traditional SFM/SLAM methods fail (because of lack of texture and/or wide baseline).",
"title": ""
},
{
"docid": "56a52c6a6b1815daee9f65d8ffc2610e",
"text": "State of the art methods for image and object retrieval exploit both appearance (via visual words) and local geometry (spatial extent, relative pose). In large scale problems, memory becomes a limiting factor - local geometry is stored for each feature detected in each image and requires storage larger than the inverted file and term frequency and inverted document frequency weights together. We propose a novel method for learning discretized local geometry representation based on minimization of average reprojection error in the space of ellipses. The representation requires only 24 bits per feature without drop in performance. Additionally, we show that if the gravity vector assumption is used consistently from the feature description to spatial verification, it improves retrieval performance and decreases the memory footprint. The proposed method outperforms state of the art retrieval algorithms in a standard image retrieval benchmark.",
"title": ""
}
] |
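Several of the positive passages above (single-image point-cloud generation, progressive mesh deformation) learn 3D shapes by comparing a predicted point set against a ground-truth one. A loss commonly used for that purpose in this line of work is the Chamfer distance; the NumPy sketch below is a generic illustration under that assumption, not code taken from any of the cited papers, and the point counts are arbitrary.

```python
import numpy as np

def chamfer_distance(pred, gt):
    """Symmetric Chamfer distance between point sets of shape (N, 3) and (M, 3):
    for each point in one set, take the squared distance to its nearest neighbour
    in the other set, then average over both directions."""
    # pairwise squared distances, shape (N, M)
    diff = pred[:, None, :] - gt[None, :, :]
    d2 = np.sum(diff ** 2, axis=-1)
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

# toy usage: a random "predicted" cloud vs. a random "ground-truth" cloud
rng = np.random.default_rng(0)
pred = rng.random((1024, 3))
gt = rng.random((2048, 3))
print(chamfer_distance(pred, gt))
```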
[
{
"docid": "a241ca85048e30c48acd532bce1bf2ca",
"text": "This paper addresses the challenge of establlishing a bridge between deep convolutional neural networks and conventional object detection frameworks for accurate and efficient generic object detection. We introduce Dense Neural Patterns, short for DNPs, which are dense local features derived from discriminatively trained deep convolutional neural networks. DNPs can be easily plugged into conventional detection frameworks in the same way as other dense local features(like HOG or LBP). The effectiveness of the proposed approach is demonstrated with Regionlets object detection framework. It achieved 46.1% mean average precision on the PASCAL VOC 2007 dataset, and 44.1% on the PASCAL VOC 2010 dataset, which dramatically improves the originalRegionlets approach without DNPs.",
"title": ""
},
{
"docid": "d90d40a59f91b59bd63a3c52a8d715a4",
"text": "The paradigm shift from planar (two dimensional (2D)) to vertical (three-dimensional (3D)) models has placed the NAND flash technology on the verge of a design evolution that can handle the demands of next-generation storage applications. However, it also introduces challenges that may obstruct the realization of such 3D NAND flash. Specifically, we observed that the fast threshold drift (fast-drift) in a charge-trap flash-based 3D NAND cell can make it lose a critical fraction of the stored charge relatively soon after programming and generate errors.\n In this work, we first present an elastic read reference (VRef) scheme (ERR) for reducing such errors in ReveNAND—our fast-drift aware 3D NAND design. To address the inherent limitation of the adaptive VRef, we introduce a new intra-block page organization (hitch-hike) that can enable stronger error correction for the error-prone pages. In addition, we propose a novel reinforcement-learning-based smart data refill scheme (iRefill) to counter the impact of fast-drift with minimum performance and hardware overhead. Finally, we present the first analytic model to characterize fast-drift and evaluate its system-level impact. Our results show that, compared to conventional 3D NAND design, our ReveNAND can reduce fast-drift errors by 87%, on average, and can lower the ECC latency and energy overheads by 13× and 10×, respectively.",
"title": ""
},
{
"docid": "6116b4e96f472b518137fba69297518d",
"text": "This paper examines the technique of using a noise-suppressing nonlinearity in the adaptive filter error feedback-loop of an acoustic echo canceler (AEC) based on the least mean square (LMS) algorithm when there is an interference at the near end. The source of distortion may be linear, such as local speech or background noise, or nonlinear due to speech coding used in the telecommunication networks. Detailed derivation of the error recovery nonlinearity (ERN), which “enhances” the filter estimation error prior to the adaptation in order to assist the linear adaptation process, will be provided. Connections to other existing AEC and signal enhancement techniques will be revealed. In particular, the error enhancement technique is well-founded in the information-theoretic sense and has strong ties to independent component analysis (ICA), which is the basis for blind source separation (BSS) that permits unsupervised adaptation in the presence of multiple interfering signals. The single-channel AEC problem can be viewed as a special case of semi-blind source separation (SBSS) where one of the source signals is partially known, i.e., the far-end microphone signal that generates the near-end acoustic echo. The system approach to robust AEC will be motivated, where a proper integration of the LMS algorithm with the ERN into the AEC “system” allows for continuous and stable adaptation even during double talk without precise estimation of the signal statistics. The error enhancement paradigm encompasses many traditional signal enhancement techniques and opens up an entirely new avenue for solving the AEC problem in a real-world setting.",
"title": ""
},
{
"docid": "2ba53ad9e9c015779cfb2aec51fe310f",
"text": "In the past few years, more and more researchers have paid close attention to the emerging field of delay tolerant networks (DTNs), in which network often partitions and end-to-end paths do not exist nearly all the time. To cope with these challenges, most routing protocols employ the \"store-carry-forward\" strategy to transmit messages. However, the difficulty of this strategy is how to choose the best relay node and determine the best time to forward messages. Fortunately, social relations among nodes can be used to address these problems. In this paper, we present a comprehensive survey of recent social-aware routing protocols, which offer an insight into how to utilize social relationships to design efficient and applicable routing algorithms in DTNs. First, we review the major practical applications of DTNs. Then, we focus on understanding social ties between nodes and investigating some design-related issues of social-based routing approaches, e.g., the ways to obtain social relations among nodes, the metrics and approaches to identify the characteristics of social ties, the strategies to optimize social-aware routing protocols, and the suitable mobility traces to evaluate these protocols. We also create a taxonomy for social-aware routing protocols according to the sources of social relations. Finally, we outline several open issues and research challenges.",
"title": ""
},
{
"docid": "4b2b199aeb61128cbee7691bc49e16f5",
"text": "Although deep learning approaches have achieved performance surpassing humans for still image-based face recognition, unconstrained video-based face recognition is still a challenging task due to large volume of data to be processed and intra/inter-video variations on pose, illumination, occlusion, scene, blur, video quality, etc. In this work, we consider challenging scenarios for unconstrained video-based face recognition from multiple-shot videos and surveillance videos with low-quality frames. To handle these problems, we propose a robust and efficient system for unconstrained video-based face recognition, which is composed of face/fiducial detection, face association, and face recognition. First, we use multi-scale single-shot face detectors to efficiently localize faces in videos. The detected faces are then grouped respectively through carefully designed face association methods, especially for multi-shot videos. Finally, the faces are recognized by the proposed face matcher based on an unsupervised subspace learning approach and a subspace-tosubspace similarity metric. Extensive experiments on challenging video datasets, such as Multiple Biometric Grand Challenge (MBGC), Face and Ocular Challenge Series (FOCS), JANUS Challenge Set 6 (CS6) for low-quality surveillance videos and IARPA JANUS Benchmark B (IJB-B) for multiple-shot videos, demonstrate that the proposed system can accurately detect and associate faces from unconstrained videos and effectively learn robust and discriminative features for recognition.",
"title": ""
},
{
"docid": "7071a178d42011a39145066da2d08895",
"text": "This paper discusses the trend modeling for traffic time series. First, we recount two types of definitions for a long-term trend that appeared in previous studies and illustrate their intrinsic differences. We show that, by assuming an implicit temporal connection among the time series observed at different days/locations, the PCA trend brings several advantages to traffic time series analysis. We also describe and define the so-called short-term trend that cannot be characterized by existing definitions. Second, we sequentially review the role that trend modeling plays in four major problems in traffic time series analysis: abnormal data detection, data compression, missing data imputation, and traffic prediction. The relations between these problems are revealed, and the benefit of detrending is explained. For the first three problems, we summarize our findings in the last ten years and try to provide an integrated framework for future study. For traffic prediction problem, we present a new explanation on why prediction accuracy can be improved at data points representing the short-term trends if the traffic information from multiple sensors can be appropriately used. This finding indicates that the trend modeling is not only a technique to specify the temporal pattern but is also related to the spatial relation of traffic time series.",
"title": ""
},
{
"docid": "e9bf278fd48cc437796f12530d352d3c",
"text": "This paper investigates the transportation and vehicular modes classification by using big data from smartphone sensors. The three types of sensors used in this paper include the accelerometer, magnetometer, and gyroscope. This study proposes improved features and uses three machine learning algorithms including decision trees, K-nearest neighbor, and support vector machine to classify the user's transportation and vehicular modes. In the experiments, we discussed and compared the performance from different perspectives including the accuracy for both modes, the executive time, and the model size. Results show that the proposed features enhance the accuracy, in which the support vector machine provides the best performance in classification accuracy whereas it consumes the largest prediction time. This paper also investigates the vehicle classification mode and compares the results with that of the transportation modes.",
"title": ""
},
{
"docid": "7115c7f17faa8712dbdeac631f022ae4",
"text": "Scientific workflows, like other applications, benefit from the cloud computing, which offers access to virtually unlimited resources provisioned elastically on demand. In order to efficiently execute a workflow in the cloud, scheduling is required to address many new aspects introduced by cloud resource provisioning. In the last few years, many techniques have been proposed to tackle different cloud environments enabled by the flexible nature of the cloud, leading to the techniques of different designs. In this paper, taxonomies of cloud workflow scheduling problem and techniques are proposed based on analytical review. We identify and explain the aspects and classifications unique to workflow scheduling in the cloud environment in three categories, namely, scheduling process, task and resource. Lastly, review of several scheduling techniques are included and classified onto the proposed taxonomies. We hope that our taxonomies serve as a stepping stone for those entering this research area and for further development of scheduling technique.",
"title": ""
},
{
"docid": "63ca8787121e3b392e130f9d451b11ea",
"text": "Frank K.Y. Chan Hong Kong University of Science and Technology",
"title": ""
},
{
"docid": "0b1a8b80b4414fa34d6cbb5ad1342ad7",
"text": "OBJECTIVE\nThe aim of the study was to evaluate the efficacy of topical 2% lidocaine gel in reducing pain and discomfort associated with nasogastric tube insertion (NGTI) and compare lidocaine to ordinary lubricant gel in the ease in carrying out the procedure.\n\n\nMETHODS\nThis prospective, randomized, double-blind, placebo-controlled, convenience sample trial was conducted in the emergency department of our tertiary care university-affiliated hospital. Five milliliters of 2% lidocaine gel or placebo lubricant gel were administered nasally to alert hemodynamically stable adult patients 5 minutes before undergoing a required NGTI. The main outcome measures were overall pain, nasal pain, discomfort (eg, choking, gagging, nausea, vomiting), and difficulty in performing the procedure. Standard comparative statistical analyses were used.\n\n\nRESULTS\nThe study cohort included 62 patients (65% males). Thirty-one patients were randomized to either lidocaine or placebo groups. Patients who received lidocaine reported significantly less intense overall pain associated with NGTI compared to those who received placebo (37 ± 28 mm vs 51 ± 26 mm on 100-mm visual analog scale; P < .05). The patients receiving lidocaine also had significantly reduced nasal pain (33 ± 29 mm vs 48 ± 27 mm; P < .05) and significantly reduced sensation of gagging (25 ± 30 mm vs 39 ± 24 mm; P < .05). However, conducting the procedure was significantly more difficult in the lidocaine group (2.1 ± 0.9 vs 1.4 ± 0.7 on 5-point Likert scale; P < .05).\n\n\nCONCLUSION\nLidocaine gel administered nasally 5 minutes before NGTI significantly reduces pain and gagging sensations associated with the procedure but is associated with more difficult tube insertion compared to the use of lubricant gel.",
"title": ""
},
{
"docid": "2f1dc4a089f88d6f7e39b10f53321e89",
"text": "⎯ A new technique for summarizing news articles using a neural network is presented. A neural network is trained to learn the relevant characteristics of sentences that should be included in the summary of the article. The neural network is then modified to generalize and combine the relevant characteristics apparent in summary sentences. Finally, the modified neural network is used as a filter to summarize news articles.",
"title": ""
},
{
"docid": "2ed0f0699d7e58f8b6b2041eb5153672",
"text": "The massive explosion in social networks has led to a significant growth in graph analytics and specifically in dynamic, time-varying graphs. Most prior work processes dynamic graphs by first storing the updates and then repeatedly running static graph analytics on saved snapshots. To handle the extreme scale and fast evolution of real-world graphs, we propose a dynamic graph analytics framework, GraphIn, that incrementally processes graphs on-the-fly using fixed-sized batches of updates. As part of GraphIn, we propose a novel programming model called I-GAS (based on gather-apply-scatter programming paradigm) that allows for implementing a large set of incremental graph processing algorithms seamlessly across multiple CPU cores. We further propose a property-based, dual-path execution model to choose between incremental or static computation. Our experiments show that for a variety of graph inputs and algorithms, GraphIn achieves up to 9.3 million updates/sec and over 400× speedup when compared to static graph recomputation.",
"title": ""
},
{
"docid": "9c98b0652776a8402979134e753a8b86",
"text": "In this paper, the shielded coil structure using the ferrites and the metallic shielding is proposed. It is compared with the unshielded coil structure (i.e. a pair of circular loop coils only) to demonstrate the differences in the magnetic field distributions and system performance. The simulation results using the 3D Finite Element Analysis (FEA) tool show that it can considerably suppress the leakage magnetic field from 100W-class wireless power transfer (WPT) system with the enhanced system performance.",
"title": ""
},
{
"docid": "db83ca64b54bbd54b4097df425c48017",
"text": "This paper introduces the application of high-resolution angle estimation algorithms for a 77GHz automotive long range radar sensor. Highresolution direction of arrival (DOA) estimation is important for future safety systems. Using FMCW principle, major challenges discussed in this paper are small number of snapshots, correlation of the signals, and antenna mismatches. Simulation results allow analysis of these effects and help designing the sensor. Road traffic measurements show superior DOA resolution and the feasibility of high-resolution angle estimation.",
"title": ""
},
{
"docid": "57334078030a2b2d393a7c236d6a3a1c",
"text": "Neural Architecture Search (NAS) aims at finding one “single” architecture that achieves the best accuracy for a given task such as image recognition. In this paper, we study the instance-level variation, and demonstrate that instance-awareness is an important yet currently missing component of NAS. Based on this observation, we propose InstaNAS for searching toward instance-level architectures; the controller is trained to search and form a “distribution of architectures” instead of a single final architecture. Then during the inference phase, the controller selects an architecture from the distribution, tailored for each unseen image to achieve both high accuracy and short latency. The experimental results show that InstaNAS reduces the inference latency without compromising classification accuracy. On average, InstaNAS achieves 48.9% latency reduction on CIFAR-10 and 40.2% latency reduction on CIFAR-100 with respect to MobileNetV2 architecture.",
"title": ""
},
{
"docid": "42e25eaf06693b3544498d959a55bd1e",
"text": "A standard view of the semantics of natural language sentences or utterances is that a sentence has a particular logical structure and is assigned truth-conditional content on the basis of that structure. Such a semantics is assumed to be able to capture the logical properties of sentences, including necessary truth, contradiction and valid inference; our knowledge of these properties is taken to be part of our semantic competence as native speakers of the language. The following examples pose a problem for this view of semantics:",
"title": ""
},
{
"docid": "d6ed97c07d19545de707733ac2fbe38e",
"text": "We present an approach for tracking camera pose in real time given a stream of depth images. Existing algorithms are prone to drift in the presence of smooth surfaces that destabilize geometric alignment. We show that useful contour cues can be extracted from noisy and incomplete depth input. These cues are used to establish correspondence constraints that carry information about scene geometry and constrain pose estimation. Despite ambiguities in the input, the presented contour constraints reliably improve tracking accuracy. Results on benchmark sequences and on additional challenging examples demonstrate the utility of contour cues for real-time camera pose estimation.",
"title": ""
},
{
"docid": "509da5200a09c7a6119e78df8c1a865a",
"text": "Interlingua based Machine Translation (MT) aims to encode multiple languages into a common linguistic representation and then decode sentences in multiple target languages from this representation. In this work we explore this idea in the context of neural encoder decoder architectures, albeit on a smaller scale and without MT as the end goal. Specifically, we consider the case of three languages or modalities X , Z and Y wherein we are interested in generating sequences in Y starting from information available in X . However, there is no parallel training data available between X and Y but, training data is available between X & Z and Z & Y (as is often the case in many real world applications). Z thus acts as a pivot/bridge. An obvious solution, which is perhaps less elegant but works very well in practice is to train a two stage model which first converts from X to Z and then from Z to Y . Instead we explore an interlingua inspired solution which jointly learns to do the following (i) encodeX and Z to a common representation and (ii) decode Y from this common representation. We evaluate our model on two tasks: (i) bridge transliteration and (ii) bridge captioning. We report promising results in both these applications and believe that this is a right step towards truly interlingua inspired encoder decoder architectures.",
"title": ""
},
{
"docid": "4dca4af3b49056b6ab46749f0144a2cd",
"text": "Pie menus are a well-known technique for interacting with 2D environments and so far a large body of research documents their usage and optimizations. Yet, comparatively little research has been done on the usability of pie menus in immersive virtual environments (IVEs). In this paper we reduce this gap by presenting an implementation and evaluation of an extended hierarchical pie menu system for IVEs that can be operated with a six-degrees-of-freedom input device. Following an iterative development process, we first developed and evaluated a basic hierarchical pie menu system. To better understand how pie menus should be operated in IVEs, we tested this system in a pilot user study with 24 participants and focus on item selection. Regarding the results of the study, the system was tweaked and elements like check boxes, sliders, and color map editors were added to provide extended functionality. An expert review with five experts was performed with the extended pie menus being integrated into an existing VR application to identify potential design issues. Overall results indicated high performance and efficient design.",
"title": ""
},
{
"docid": "ddfd19823d6dcfc1bd9c3763ecc30cb0",
"text": "As travelers are becoming more price sensitive, less brand loyal and more sophisticated, Customer Relationship Management (CRM) becomes a strategic necessity for attracting and increasing guests’ patronage. Although CRM in hospitality has overstated the importance of ICT, it is now widely recognised that successful CRM implementation should effectively combine and align ICT functionality with business operations. Given the lack of a widely accepted framework for CRM implementation, this paper proposed a model for managing and integrating ICT capabilities into CRM strategies and business processes. The model argues that successful CRM implementation requires the management and alignment of three managerial processes: ICT, relationship (internal and external) and knowledge management. The model is tested by gathering data from Greek hotels, while findings provide useful practical implications and suggestions for future research. r 2004 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
1d92b9819174f61b4c311cf73d6259a3
|
Software defined network — Architectures
|
[
{
"docid": "c64751968597299dc5622f589742c37d",
"text": "OpenFlow switching and Network Operating System (NOX) have been proposed to support new conceptual networking trials for fine-grained control and visibility. The OpenFlow is expected to provide multi-layer networking with switching capability of Ethernet, MPLS, and IP routing. NOX provides logically centralized access to high-level network abstraction and exerts control over the network by installing flow entries in OpenFlow compatible switches. The NOX, however, is missing the necessary functions for QoS-guaranteed software defined networking (SDN) service provisioning on carrier grade provider Internet, such as QoS-aware virtual network embedding, end-to-end network QoS assessment, and collaborations among control elements in other domain network. In this paper, we propose a QoS-aware Network Operating System (QNOX) for SDN with Generalized OpenFlows. The functional modules and operations of QNOX for QoS-aware SDN service provisioning with the major components (e.g., service element (SE), control element (CE), management element (ME), and cognitive knowledge element (CKE)) are explained in detail. The current status of prototype implementation and performances are explained. The scalability of the QNOX is also analyzed to confirm that the proposed framework can be applied for carrier grade large scale provider Internet1.",
"title": ""
},
{
"docid": "6fd511ffcdb44c39ecad1a9f71a592cc",
"text": "s Providing Supporting Policy Compositional Operators Functional Composition Network Layered Abstract Topologies Topological Decomposition Packet Extensible Headers Policy & Network Abstractions Pyretic (Contributions)",
"title": ""
}
] |
[
{
"docid": "93520b1461f17ecd537cc99e84e0c88f",
"text": "We present a streaming method for reconstructing surfaces from large data sets generated by a laser range scanner using wavelets. Wavelets provide a localized, multiresolution representation of functions and this makes them ideal candidates for streaming surface reconstruction algorithms. We show how wavelets can be used to reconstruct the indicator function of a shape from a cloud of points with associated normals. Our method proceeds in several steps. We first compute a low-resolution approximation of the indicator function using an octree followed by a second pass that incrementally adds fine resolution details. The indicator function is then smoothed using a modified octree convolution step and contoured to produce the final surface. Due to the local, multiresolution nature of wavelets, our approach results in an algorithm over 10 times faster than previous methods and can process extremely large data sets in the order of several hundred million points in only an hour.",
"title": ""
},
{
"docid": "6495f8c0217be9aea23e694abae248f1",
"text": "This paper describes the interactive narrative experiences in Babyz, an interactive entertainment product for the PC currently in development at PF Magic / Mindscape in San Francisco, to be released in October 1999. Babyz are believable agents designed and implemented in the tradition of Dogz and Catz, Your Virtual Petz. As virtual human characters, Babyz are more intelligent, expressive and communicative than their Petz predecessors, allowing for both broader and deeper narrative possibilities. Babyz are designed with behaviors to support entertaining short-term narrative experiences, as well as long-term emotional relationships and narratives.",
"title": ""
},
{
"docid": "4d27ab78849800580fe2c80ec107ba97",
"text": "The article deals with research on framing effects. First, I will start with classifying different approaches on framing. Subsequently, I will provide a definition of the concepts of frame, schema and framing, expand on framing research conducted so far both theoretically and operationally. Having this equipment at hand, I will initiate a discussion on studies of framingeffects in terms of theory, methods and empirical results. This discussion leads to the conclusion that studies on framing effects are insufficiently concerned with the more recent psychological constructs and theories. In merely focusing on the activation of schemata, most studies ignore the more elaborate types of framing-effects. Therefore, several empirical questions remain unanswered and some methodical chances seem to be wasted.",
"title": ""
},
{
"docid": "0679fb7e4d3d07dc66214c580867c478",
"text": "This report summarizes what the computing research community knows about the role of trustworthy software for safety and effectiveness of medical devices. Research shows that problems in medical device software result largely from a failure to apply well-known systems engineering techniques, especially during specification of requirements and analysis of human factors. Recommendations to increase the trustworthiness of medical device software include (1) regulatory policies that specify outcome measures rather than technology, (2) collection of statistics on the role of software in medical devices, (3) establishment of open-research platforms for innovation, (4) clearer roles and responsibility for the shared burden of software, (5) clarification of the meaning of substantial equivalence for software, and (6) an increase in FDA’s access to outside experts in software. This report draws upon material from research in software engineering and trustworthy computing, public FDA data, and accident reports to provide a high-level understanding of the issues surrounding the risks and benefits of medical device software.",
"title": ""
},
{
"docid": "0915e156af3bec6a401ec9bd10ab899f",
"text": "The ability to generalize from past experience to solve previously unseen tasks is a key research challenge in reinforcement learning (RL). In this paper, we consider RL tasks defined as a sequence of high-level instructions described by natural language and study two types of generalization: to unseen and longer sequences of previously seen instructions, and to sequences where the instructions themselves were previously not seen. We present a novel hierarchical deep RL architecture that consists of two interacting neural controllers: a meta controller that reads instructions and repeatedly communicates subtasks to a subtask controller that in turn learns to perform such subtasks. To generalize better to unseen instructions, we propose a regularizer that encourages to learn subtask embeddings that capture correspondences between similar subtasks. We also propose a new differentiable neural network architecture in the meta controller that learns temporal abstractions which makes learning more stable under delayed reward. Our architecture is evaluated on a stochastic 2D grid world and a 3D visual environment where the agent should execute a list of instructions. We demonstrate that the proposed architecture is able to generalize well over unseen instructions as well as longer lists of instructions.",
"title": ""
},
{
"docid": "3ae9da3a27b00fb60f9e8771de7355fe",
"text": "In the past decade, graph-based structures have penetrated nearly every aspect of our lives. The detection of anomalies in these networks has become increasingly important, such as in exposing infected endpoints in computer networks or identifying socialbots. In this study, we present a novel unsupervised two-layered meta-classifier that can detect irregular vertices in complex networks solely by utilizing topology-based features. Following the reasoning that a vertex with many improbable links has a higher likelihood of being anomalous, we applied our method on 10 networks of various scales, from a network of several dozen students to online networks with millions of vertices. In every scenario, we succeeded in identifying anomalous vertices with lower false positive rates and higher AUCs compared to other prevalent methods. Moreover, we demonstrated that the presented algorithm is generic, and efficient both in revealing fake users and in disclosing the influential people in social networks.",
"title": ""
},
{
"docid": "cd102d4e21cf389c841aa9fbc26ff9c3",
"text": "We report the case of a 73-year-old man with massive swelling of the lower extremities, with a chronic and rather uncommon form of stasis dermatitis - stasis papillomatosis. The patient was also diagnosed with severe heart failure, including dilated cardiomyopathy, hypothyroidism that required a substantial dose of exogenous tyrosine, microcytic and megaloblastic anemia, iron deficiency, and type 2 diabetes. The cause of stasis dermatitis lesions is not completely understood. It may be caused by the allergic reaction to some epidermal protein antigen formation or chronic damage to the dermal-epidermal barrier that makes the skin more sensitive to irritants or trauma. It has, however, been suggested that the term stasis dermatitis should be used to refer only to cases caused by chronic venous insufficiency, which belongs to a group of lifestyle diseases and affects both women and men more and more frequently.",
"title": ""
},
{
"docid": "d3ac081abe2895830d3fff7ad2ce0721",
"text": "Despite the ubiquity of textual data, so far few researchers have applied text mining to answer organizational research questions. Text mining, which essentially entails a quantitative approach to the analysis of (usually) voluminous textual data, helps accelerate knowledge discovery by radically increasing the amount data that can be analyzed. This article aims to acquaint organizational researchers with the fundamental logic underpinning text mining, the analytical stages involved, and contemporary techniques that may be used to achieve different types of objectives. The specific analytical techniques reviewed are (a) dimensionality reduction, (b) distance and similarity computing, (c) clustering, (d) topic modeling, and (e) classification. We describe how text mining may extend contemporary organizational research by allowing the testing of existing or new research questions with data that are likely to be rich, contextualized, and ecologically valid. After an exploration of how evidence for the validity of text mining output may be generated, we conclude the article by illustrating the text mining process in a job analysis setting using a dataset composed of job vacancies.",
"title": ""
},
{
"docid": "060e518af9a250c1e6a3abf49555754f",
"text": "The deep learning community has proposed optimizations spanning hardware, software, and learning theory to improve the computational performance of deep learning workloads. While some of these optimizations perform the same operations faster (e.g., switching from a NVIDIA K80 to P100), many modify the semantics of the training procedure (e.g., large minibatch training, reduced precision), which can impact a model’s generalization ability. Due to a lack of standard evaluation criteria that considers these trade-offs, it has become increasingly difficult to compare these different advances. To address this shortcoming, DAWNBENCH and the upcoming MLPERF benchmarks use time-to-accuracy as the primary metric for evaluation, with the accuracy threshold set close to state-of-the-art and measured on a held-out dataset not used in training; the goal is to train to this accuracy threshold as fast as possible. In DAWNBENCH, the winning entries improved time-to-accuracy on ImageNet by two orders of magnitude over the seed entries. Despite this progress, it is unclear how sensitive time-to-accuracy is to the chosen threshold as well as the variance between independent training runs, and how well models optimized for time-to-accuracy generalize. In this paper, we provide evidence to suggest that time-to-accuracy has a low coefficient of variance and that the models tuned for it generalize nearly as well as pre-trained models. We additionally analyze the winning entries to understand the source of these speedups, and give recommendations for future benchmarking efforts.",
"title": ""
},
{
"docid": "10514cb40ed8adc9fb59e12cb0cf3fe9",
"text": "Crossover recombination is a crucial process in plant breeding because it allows plant breeders to create novel allele combnations on chromosomes that can be used for breeding superior F1 hybrids. Gaining control over this process, in terms of increasing crossover incidence, altering crossover positions on chromosomes or silencing crossover formation, is essential for plant breeders to effectively engineer the allelic composition of chromosomes. We review the various means of crossover control that have been described or proposed. By doing so, we sketch a field of science that uses both knowledge from classic literature and the newest discoveries to manage the occurrence of crossovers for a variety of breeding purposes.",
"title": ""
},
{
"docid": "1aada401a1a86fa42bed323e8ef2889c",
"text": "KEY POINTS\nThree weeks of intensified training and mild energy deficit in elite race walkers increases peak aerobic capacity independent of dietary support. Adaptation to a ketogenic low carbohydrate, high fat (LCHF) diet markedly increases rates of whole-body fat oxidation during exercise in race walkers over a range of exercise intensities. The increased rates of fat oxidation result in reduced economy (increased oxygen demand for a given speed) at velocities that translate to real-life race performance in elite race walkers. In contrast to training with diets providing chronic or periodised high carbohydrate availability, adaptation to an LCHF diet impairs performance in elite endurance athletes despite a significant improvement in peak aerobic capacity.\n\n\nABSTRACT\nWe investigated the effects of adaptation to a ketogenic low carbohydrate (CHO), high fat diet (LCHF) during 3 weeks of intensified training on metabolism and performance of world-class endurance athletes. We controlled three isoenergetic diets in elite race walkers: high CHO availability (g kg-1 day-1 : 8.6 CHO, 2.1 protein, 1.2 fat) consumed before, during and after training (HCHO, n = 9); identical macronutrient intake, periodised within or between days to alternate between low and high CHO availability (PCHO, n = 10); LCHF (< 50 g day-1 CHO; 78% energy as fat; 2.1 g kg-1 day-1 protein; LCHF, n = 10). Post-intervention, V̇O2 peak during race walking increased in all groups (P < 0.001, 90% CI: 2.55, 5.20%). LCHF was associated with markedly increased rates of whole-body fat oxidation, attaining peak rates of 1.57 ± 0.32 g min-1 during 2 h of walking at ∼80% V̇O2 peak . However, LCHF also increased the oxygen (O2 ) cost of race walking at velocities relevant to real-life race performance: O2 uptake (expressed as a percentage of new V̇O2 peak ) at a speed approximating 20 km race pace was reduced in HCHO and PCHO (90% CI: -7.047, -2.55 and -5.18, -0.86, respectively), but was maintained at pre-intervention levels in LCHF. HCHO and PCHO groups improved times for 10 km race walk: 6.6% (90% CI: 4.1, 9.1%) and 5.3% (3.4, 7.2%), with no improvement (-1.6% (-8.5, 5.3%)) for the LCHF group. In contrast to training with diets providing chronic or periodised high-CHO availability, and despite a significant improvement in V̇O2 peak , adaptation to the topical LCHF diet negated performance benefits in elite endurance athletes, in part due to reduced exercise economy.",
"title": ""
},
{
"docid": "1dfd962aab338894bbd1af8c7dd8fd7e",
"text": "A variety of congenital syndromes affecting the face occur due to defects involving the first and second BAs. Radiographic evaluation of craniofacial deformities is necessary to define aberrant anatomy, plan surgical procedures, and evaluate the effects of craniofacial growth and surgical reconstructions. High-resolution CT has proved vital in determining the nature and extent of these syndromes. The radiologic evaluation of syndromes of the first and second BA should begin first by studying a series of isolated defects (cleft lip with or without CP, micrognathia, and EAC atresia) that compose the major features of these syndromes and allow a more specific diagnosis. After discussion of these defects and the associated embryology, we discuss PRS, HFM, ACS, TCS, Stickler syndrome, and VCFS.",
"title": ""
},
{
"docid": "2a1920f22f22dcf473612a6d35cf0132",
"text": "We address statistical classifier design given a mixed training set consisting of a small labelled feature set and a (generally larger) set of unlabelled features. This situation arises, e.g., for medical images, where although training features may be plentiful, expensive expertise is required to extract their class labels. We propose a classifier structure and learning algorithm that make effective use of unlabelled data to improve performance. The learning is based on maximization of the total data likelihood, i.e. over both the labelled and unlabelled data subsets. Two distinct EM learning algorithms are proposed, differing in the EM formalism applied for unlabelled data. The classifier, based on a joint probability model for features and labels, is a \"mixture of experts\" structure that is equivalent to the radial basis function (RBF) classifier, but unlike RBFs, is amenable to likelihood-based training. The scope of application for the new method is greatly extended by the observation that test data, or any new data to classify, is in fact additional, unlabelled data thus, a combined learning/classification operation much akin to what is done in image segmentation can be invoked whenever there is new data to classify. Experiments with data sets from the UC Irvine database demonstrate that the new learning algorithms and structure achieve substantial performance gains over alternative approaches.",
"title": ""
},
{
"docid": "c238e600d072b7239934978b9f37a076",
"text": "ifferentiation of benign and malignant (melanoma) of the pigmented skin lesions is difficult even for the dermatologists thus in this paper a new analysis of the dermatoscopic images have been proposed. Segmentation, feature extraction and classification are the major steps of images analysis. In Segmentation step we use an improved FFCM based segmentation method (our previous work) to achieve to binary segmented image. In feature extraction step, the shape features are extracted from the binary segmented image. After normalizing of the features, in classification step, the feature vectors are classified into two groups (benign and malignant) by SVM classifier. The classification result for the accuracy is 71.39%, specificity is 85.95%, and it has the satisfactory results in sensitivity metrics.",
"title": ""
},
{
"docid": "188d26992b9b30495fa1c432cf49d649",
"text": "We consider the problem of dynamically maintaining (approximate) all-pairs effective resistances in separable graphs, which are those that admit an n-separator theorem for some c < 1. We give a fully dynamic algorithm that maintains (1 + ε)-approximations of the allpairs effective resistances of an n-vertex graph G undergoing edge insertions and deletions with Õ( √ n/ε) worst-case update time and Õ( √ n/ε) worst-case query time, if G is guaranteed to be √ n-separable (i.e., it is taken from a class satisfying a √ n-separator theorem) and its separator can be computed in Õ(n) time. Our algorithm is built upon a dynamic algorithm for maintaining approximate Schur complement that approximately preserves pairwise effective resistances among a set of terminals for separable graphs, which might be of independent interest. We complement our result by proving that for any two fixed vertices s and t, no incremental or decremental algorithm can maintain the s − t effective resistance for √n-separable graphs with worst-case update time O(n) and query time O(n) for any δ > 0, unless the Online Matrix Vector Multiplication (OMv) conjecture is false. We further show that for general graphs, no incremental or decremental algorithm can maintain the s− t effective resistance problem with worst-case update time O(n) and querytime O(n) for any δ > 0, unless the OMv conjecture is false. The research leading to these results has received funding from the European Research Council under the European Union’s Seventh Framework Programme (FP/2007-2013) / ERC Grant Agreement no. 340506. University of Vienna, Faculty of Computer Science, Vienna, Austria. E-mail: gramoz.goranci@univie.ac.at. University of Vienna, Faculty of Computer Science, Vienna, Austria. E-mail: monika.henzinger@univie.ac.at. Department of Computer Science, University of Sheffield, Sheffield, UK. E-mail: p.peng@sheffield.ac.uk. Work done in part while at the Faculty of Computer Science, University of Vienna, Austria.",
"title": ""
},
{
"docid": "3519172a7bf6d4183484c613dcc65b0a",
"text": "There has been minimal attention paid in the literature to the aesthetics of the perioral area, either in youth or in senescence. Aging around the lips traditionally was thought to result from a combination of thinning skin surrounding the area, ptosis, and loss of volume in the lips. The atrophy of senescence was treated by adding volume to the lips and filling the deep nasolabial creases. There is now a growing appreciation for the role of volume enhancement in the perioral region and the sunken midface, as well as for dentition, in the resting and dynamic appearance of the perioral area (particularly in youth). In this article, the authors describe the senior author's (BG) preferred methods for aesthetic enhancement of the perioral region and his rejuvenative techniques developed over the past 28 years. The article describes the etiologies behind the dysmorphologies in this area and presents a problem-oriented algorithm for treating them.",
"title": ""
},
{
"docid": "530760520956d9487ff207ca4538e646",
"text": "The majority of organizations are competing to survive in this volatile and fierce market environment. Motivation and performance of the employees are essential tools for the success of any organization in the long run. On the one hand, measuring performance is critical to organization’s management, as it highlights the evolution and achievement of the organization. On the other hand, there is a positive relationship between employee motivation and organizational effectiveness, reflected in numerous studies. This paper aims to analyze the drivers of employee motivation to high levels of organizational performance. The literature shows that factors such as empowerment and recognition increase employee motivation. If the empowerment and recognition of employees is increased, their motivation to work will also improve, as well as their accomplishments and the organizational performance. Nevertheless, employee dissatisfactions caused by monotonous jobs and pressure from clients, might weaken the organizational performance. Therefore, jobs absenteeism rates may increase and employees might leave the organization to joint competitors that offer better work conditions and higher incentives. Not all individuals are the same, so each one should be motivated using different strategies. For example, one employee may be motivated by higher commission, while another might be motivated by job satisfaction or a better work environment.",
"title": ""
},
{
"docid": "80421f731f5faed4332437d5ad2d8a66",
"text": "This paper presents a capacitive measurement principle for the determination of the mass and volume flow rate and the revolution speed of screw conveyors. The measurement is realized using a set of transmitter electrodes and a single receiver ring electrode on the outer pipe surface. Sequential measurement of the inter-electrode capacitances between active transmitter and receiver is performed by means of a versatile hardware platform. A laboratory setup was designed and built. Sets of sensor data were acquired for different cereals. It is shown that the fill level influences the measured capacitances due to the permittivity of the bulk solids. Furthermore, the revolution speed and its variation over time can be directly deduced from the sensor signals. The good signal to noise ratio allows for a reliable measurement of the mass and volume flow rate.",
"title": ""
},
{
"docid": "af7d318e1c203358c87592d0c6bcb4d2",
"text": "A fundamental component of spatial modulation (SM), termed generalized space shift keying (GSSK), is presented. GSSK modulation inherently exploits fading in wireless communication to provide better performance over conventional amplitude/phase modulation (APM) techniques. In GSSK, only the antenna indices, and not the symbols themselves (as in the case of SM and APM), relay information. We exploit GSSKpsilas degrees of freedom to achieve better performance, which is done by formulating its constellation in an optimal manner. To support our results, we also derive upper bounds on GSSKpsilas bit error probability, where the source of GSSKpsilas strength is made clear. Analytical and simulation results show performance gains (1.5-3 dB) over popular multiple antenna APM systems (including Bell Laboratories layered space time (BLAST) and maximum ratio combining (MRC) schemes), making GSSK an excellent candidate for future wireless applications.",
"title": ""
},
{
"docid": "25b250495fd4989ce1a365d5ddaa526e",
"text": "Supervised automation of selected subtasks in Robot-Assisted Minimally Invasive Surgery (RMIS) has potential to reduce surgeon fatigue, operating time, and facilitate tele-surgery. Tumor resection is a multi-step multilateral surgical procedure to localize, expose, and debride (remove) a subcutaneous tumor, then seal the resulting wound with surgical adhesive. We developed a finite state machine using the novel devices to autonomously perform the tumor resection. The first device is an interchangeable instrument mount which uses the jaws and wrist of a standard RMIS gripping tool to securely hold and manipulate a variety of end-effectors. The second device is a fluid injection system that can facilitate precision delivery of material such as chemotherapy, stem cells, and surgical adhesives to specific targets using a single-use needle attached using the interchangeable instrument mount. Fluid flow through the needle is controlled via an externallymounted automated lead screw. Initial experiments suggest that an automated Intuitive Surgical dVRK system which uses these devices combined with a palpation probe and sensing model described in a previous paper can successfully complete the entire procedure in five of ten trials. We also show the most common failure phase, debridement, can be improved with visual feedback. Design details and video are available at: http://berkeleyautomation.github.io/surgical-tools.",
"title": ""
}
] |
scidocsrr
|
e8370f7607c755073ea998599221993f
|
Fast, Compact, and Discriminative: Evaluation of Binary Descriptors for Mobile Applications
|
[
{
"docid": "24a1aae42134632d5091ab0b2b008c6b",
"text": "Several visual feature extraction algorithms have recently appeared in the literature, with the goal of reducing the computational complexity of state-of-the-art solutions (e.g., SIFT and SURF). Therefore, it is necessary to evaluate the performance of these emerging visual descriptors in terms of processing time, repeatability and matching accuracy, and whether they can obtain competitive performance in applications such as image retrieval. This paper aims to provide an up-to-date detailed, clear, and complete evaluation of local feature detector and descriptors, focusing on the methods that were designed with complexity constraints, providing a much needed reference for researchers in this field. Our results demonstrate that recent feature extraction algorithms, e.g., BRISK and ORB, have competitive performance requiring much lower complexity and can be efficiently used in low-power devices.",
"title": ""
},
{
"docid": "83ad3f9cce21b2f4c4f8993a3d418a44",
"text": "Effective and efficient generation of keypoints from an image is a well-studied problem in the literature and forms the basis of numerous Computer Vision applications. Established leaders in the field are the SIFT and SURF algorithms which exhibit great performance under a variety of image transformations, with SURF in particular considered as the most computationally efficient amongst the high-performance methods to date. In this paper we propose BRISK1, a novel method for keypoint detection, description and matching. A comprehensive evaluation on benchmark datasets reveals BRISK's adaptive, high quality performance as in state-of-the-art algorithms, albeit at a dramatically lower computational cost (an order of magnitude faster than SURF in cases). The key to speed lies in the application of a novel scale-space FAST-based detector in combination with the assembly of a bit-string descriptor from intensity comparisons retrieved by dedicated sampling of each keypoint neighborhood.",
"title": ""
}
] |
[
{
"docid": "6e1839a27ea40fc936f50fbf5e08ad4e",
"text": "Recent work has shown that adaptively reweighting the training set, growing a classifier using the new weights, and combining the classifiers constructed to date can significantly decrease generalization error. Procedures of this type were called arcing by Breiman[1996]. The first successful arcing procedure was introduced by Freund and Schapire[1995,1996] and called Adaboost. In an effort to explain why Adaboost works, Schapire et.al. [1997] derived a bound on the generalization error of a convex combination of classifiers in terms of the margin. We introduce a function called the edge, which differs from the margin only if there are more than two classes. A framework for understanding arcing algorithms is defined. In this framework, we see that the arcing algorithms currently in the literature are optimization algorithms which minimize some function of the edge. A relation is derived between the optimal reduction in the maximum value of the edge and the PAC concept of weak learner. Two algorithms are described which achieve the optimal reduction. Tests on both synthetic and real data cast doubt on the Schapire et.al. explanation.",
"title": ""
},
{
"docid": "050065ce3e7240343d9636ef6c0e96cd",
"text": "The present paper explores a novel method that integrates efficient distributed representations with terminology extraction. We show that the information from a small number of observed instances can be combined with local and global word embeddings to remarkably improve the term extraction results on unigram terms. To do so, we pass the terms extracted by other tools to a filter made of the local-global embeddings and a classifier which in turn decides whether or not a term candidate is a term. The filter can also be used as a hub to merge different term extraction tools into a single higher-performing system. We compare filters that use the skipgram architecture and filters that employ the CBOW architecture for the task at hand.",
"title": ""
},
{
"docid": "b89f999bd27a6cbe1865f8853e384eba",
"text": "A rescue crawler robot with flipper arms has high ability to get over rough terrain, but it is hard to control its flipper arms in remote control. The authors aim at development of a semi-autonomous control system for the solution. In this paper, the authors propose a sensor reflexive method that controls these flippers autonomously for getting over unknown steps. Our proposed method is effective in unknown and changeable environment. The authors applied the proposed method to Aladdin, and examined validity of these control rules in unknown environment.",
"title": ""
},
{
"docid": "7e32cb8fb6ac5c079c7773f6d29b4a62",
"text": "In this letter, a modified optimum bandstop filter (BSF) design is presented. Broad stopband and sharp rejection characteristics are achieved by realizing three transmission zeros near the passband edges. Stopband rejection depth and bandwidth can be controlled by the impedances of the configuration. A lossless transmission line model analysis is used to derive design equations. Further, a compact geometry is chosen by replacing the low impedance section with its equivalent dual-high impedance lines, which facilitates convenient folding. To validate the theoretical prediction, a single unit sharp-rejecting prototype BSF having a 20 dB rejection bandwidth of 126% at 1.5 GHz has been fabricated.",
"title": ""
},
{
"docid": "e1fb515f0f5bbec346098f1ee2aaefdc",
"text": "Observing failures and other – desired or undesired – behavior patterns in large scale software systems of specific domains (telecommunication systems, information systems, online web applications, etc.) is difficult. Very often, it is only possible by examining the runtime behavior of these systems through operational logs or traces. However, these systems can generate data in order of gigabytes every day, which makes a challenge to process in the course of predicting upcoming critical problems or identifying relevant behavior patterns. We can say that there is a gap between the amount of information we have and the amount of information we need to make a decision. Low level data has to be processed, correlated and synthesized in order to create high level, decision helping data. The actual value of this high level data lays in its availability at the time of decision making (e.g., do we face a virus attack?). In other words high level data has to be available real-time or near real-time. The research area of event processing deals with processing such data that are viewed as events and with making alerts to the administrators (users) of the systems about relevant behavior patterns based on the rules that are determined in advance. The rules or patterns describe the typical circumstances of the events which have been experienced by the administrators. Normally, these experts improve their observation capabilities over time as they experience more and more critical events and the circumstances preceding them. However, there is a way to aid this manual process by applying the results from a related (and from many aspects, overlapping) research area, predictive analytics, and thus improving the effectiveness of event processing. Predictive analytics deals with the prediction of future events based on previously observed historical data by applying sophisticated methods like machine learning, the historical data is often collected and transformed by using techniques similar to the ones of event processing, e.g., filtering, correlating the data, and so on. In this paper, we are going to examine both research areas and offer a survey on terminology, research achievements, existing solutions, and open issues. We discuss the applicability of the research areas to the telecommunication domain. We primarily base our survey on articles published in international conferences and journals, but we consider other sources of information as well, like technical reports, tools or web-logs.",
"title": ""
},
{
"docid": "5e8bec98eb2edd9acdf2ee5ba713c647",
"text": "In this paper we present a sensor-based table tennis stroke detection and classification system. We attached inertial sensors to table tennis rackets and collected data of 8 different basic stroke types from 10 amateur and professional players. Firstly, single strokes were detected by a event detection algorithm. Secondly, features were computed and used as input for stroke type classification. Multiple classifiers were compared regarding classification rates and computational effort. The overall sensitivity of the stroke detection was 95.7% and the best classifier reached a classification rate of 96.7%. Therefore, our presented approach is able to detect table tennis strokes in time-series data and to classify each stroke into correct stroke type categories. The system has the potential to be implemented as an embedded real-time application for other racket sports, to analyze training exercises and competitions, to present match statistics or to support the athletes' training progress. To our knowledge, this is the first paper that addresses a wearable support system for table tennis, and our future work aims at using the presented results to build a complete match analysis system for this sport.",
"title": ""
},
{
"docid": "4250ae1e0b2c662b98171acaeaa35028",
"text": "For many applications in Urban Search and Rescue (USAR) scenarios robots need to learn a map of unknown environments. We present a system for fast online learning of occupancy grid maps requiring low computational resources. It combines a robust scan matching approach using a LIDAR system with a 3D attitude estimation system based on inertial sensing. By using a fast approximation of map gradients and a multi-resolution grid, reliable localization and mapping capabilities in a variety of challenging environments are realized. Multiple datasets showing the applicability in an embedded hand-held mapping system are provided. We show that the system is sufficiently accurate as to not require explicit loop closing techniques in the considered scenarios. The software is available as an open source package for ROS.",
"title": ""
},
{
"docid": "5dba3258382d9781287cdcb6b227153c",
"text": "Mobile sensing systems employ various sensors in smartphones to extract human-related information. As the demand for sensing systems increases, a more effective mechanism is required to sense information about human life. In this paper, we present a systematic study on the feasibility and gaining properties of a crowdsensing system that primarily concerns sensing WiFi packets in the air. We propose that this method is effective for estimating urban mobility by using only a small number of participants. During a seven-week deployment, we collected smartphone sensor data, including approximately four million WiFi packets from more than 130,000 unique devices in a city. Our analysis of this dataset examines core issues in urban mobility monitoring, including feasibility, spatio-temporal coverage, scalability, and threats to privacy. Collectively, our findings provide valuable insights to guide the development of new mobile sensing systems for urban life monitoring.",
"title": ""
},
{
"docid": "b3d4f37cbf2b277ecec7291d12f4dde5",
"text": "This paper reports on the design, fabrication, assembly, as well as the optical, mechanical and thermal characterization of a novel MEMS-based optical cochlear implant (OCI). Building on advances in optogenetics, it will enable the optical stimulation of neural activity in the auditory pathway at 10 independently controlled spots. The optical stimulation of the spiral ganglion neurons (SGNs) promises a pronounced increase in the number of discernible acoustic frequency channels in comparison with commercial cochlear implants based on the electrical stimulation. Ten high-efficiency light-emitting diodes are integrated as a linear array onto an only 12-μm-thick highly flexible polyimide substrate with three metal and three polyimide layers. The high mechanical flexibility of this novel OCI enables its insertion into a 300 μm wide channel with an outer bending radius of 1 mm. The 2 cm long and only 240 μm wide OCI is electrically passivated with a thin layer of Cy-top™.",
"title": ""
},
{
"docid": "279de90035c16de3f3acfcd4f352a3c9",
"text": "Purpose – To develop a model that bridges the gap between CSR definitions and strategy and offers guidance to managers on how to connect socially committed organisations with the growing numbers of ethically aware consumers to simultaneously achieve economic and social objectives. Design/methodology/approach – This paper offers a critical evaluation of the theoretical foundations of corporate responsibility (CR) and proposes a new strategic approach to CR, which seeks to overcome the limitations of normative definitions. To address this perceived issue, the authors propose a new processual model of CR, which they refer to as the 3C-SR model. Findings – The 3C-SR model can offer practical guidelines to managers on how to connect with the growing numbers of ethically aware consumers to simultaneously achieve economic and social objectives. It is argued that many of the redefinitions of CR for a contemporary audience are normative exhortations (“calls to arms”) that fail to provide managers with the conceptual resources to move from “ought” to “how”. Originality/value – The 3C-SR model offers a novel approach to CR in so far as it addresses strategy, operations and markets in a single framework.",
"title": ""
},
{
"docid": "adfe05c7e0cebf76c3f6cf7f84c7523e",
"text": "Mass detection from mammograms plays a crucial role as a pre- processing stage for mass segmentation and classification. The detection of masses from mammograms is considered to be a challenging problem due to their large variation in shape, size, boundary and texture and also because of their low signal to noise ratio compared to the surrounding breast tissue. In this paper, we present a novel approach for detecting masses in mammograms using a cascade of deep learning and random forest classifiers. The first stage classifier consists of a multi-scale deep belief network that selects suspicious regions to be further processed by a two-level cascade of deep convolutional neural networks. The regions that survive this deep learning analysis are then processed by a two-level cascade of random forest classifiers that use morphological and texture features extracted from regions selected along the cascade. Finally, regions that survive the cascade of random forest classifiers are combined using connected component analysis to produce state-of-the-art results. We also show that the proposed cascade of deep learning and random forest classifiers are effective in the reduction of false positive regions, while maintaining a high true positive detection rate. We tested our mass detection system on two publicly available datasets: DDSM-BCRP and INbreast. The final mass detection produced by our approach achieves the best results on these publicly available datasets with a true positive rate of 0.96 ± 0.03 at 1.2 false positive per image on INbreast and true positive rate of 0.75 at 4.8 false positive per image on DDSM-BCRP.",
"title": ""
},
{
"docid": "2d7ea221d2bce97c2a91ee26a3793d0d",
"text": "In this article we introduce modern statistical machine learning and bioinformatics approaches that have been used in learning statistical relationships from big data in medicine and behavioral science that typically include clinical, genomic (and proteomic) and environmental variables. Every year, data collected from biomedical and behavioral science is getting larger and more complicated. Thus, in medicine, we also need to be aware of this trend and understand the statistical tools that are available to analyze these datasets. Many statistical analyses that are aimed to analyze such big datasets have been introduced recently. However, given many different types of clinical, genomic, and environmental data, it is rather uncommon to see statistical methods that combine knowledge resulting from those different data types. To this extent, we will introduce big data in terms of clinical data, single nucleotide polymorphism and gene expression studies and their interactions with environment. In this article, we will introduce the concept of well-known regression analyses such as linear and logistic regressions that has been widely used in clinical data analyses and modern statistical models such as Bayesian networks that has been introduced to analyze more complicated data. Also we will discuss how to represent the interaction among clinical, genomic, and environmental data in using modern statistical models. We conclude this article with a promising modern statistical method called Bayesian networks that is suitable in analyzing big data sets that consists with different type of large data from clinical, genomic, and environmental data. Such statistical model form big data will provide us with more comprehensive understanding of human physiology and disease.",
"title": ""
},
{
"docid": "e0dfdc0d6a8a8cfd9834fc9873389b10",
"text": "In this paper we study how to build an effective incremental crawler. The crawler selectively and incrementally updates its index and/or local collection of web pages, instead of periodically refreshing the collection in batch mode. The incremental crawler can improve the “freshness” of the collection significantly and bring in new pages in a more timely manner. We first present results from an experiment conducted on more than half million web pages over 4 months, to estimate how web pages evolve over time. Based on these experimental results, we compare various design choices for an incremental crawler and discuss their trade-offs. We propose an architecture for the incremental crawler, which combines the best design choices.",
"title": ""
},
{
"docid": "81ea96fd08b41ce6e526d614e9e46a7e",
"text": "BACKGROUND\nChronic alcoholism is known to impair the functioning of episodic and working memory, which may consequently reduce the ability to learn complex novel information. Nevertheless, semantic and cognitive procedural learning have not been properly explored at alcohol treatment entry, despite its potential clinical relevance. The goal of the present study was therefore to determine whether alcoholic patients, immediately after the weaning phase, are cognitively able to acquire complex new knowledge, given their episodic and working memory deficits.\n\n\nMETHODS\nTwenty alcoholic inpatients with episodic memory and working memory deficits at alcohol treatment entry and a control group of 20 healthy subjects underwent a protocol of semantic acquisition and cognitive procedural learning. The semantic learning task consisted of the acquisition of 10 novel concepts, while subjects were administered the Tower of Toronto task to measure cognitive procedural learning.\n\n\nRESULTS\nAnalyses showed that although alcoholic subjects were able to acquire the category and features of the semantic concepts, albeit slowly, they presented impaired label learning. In the control group, executive functions and episodic memory predicted semantic learning in the first and second halves of the protocol, respectively. In addition to the cognitive processes involved in the learning strategies invoked by controls, alcoholic subjects seem to attempt to compensate for their impaired cognitive functions, invoking capacities of short-term passive storage. Regarding cognitive procedural learning, although the patients eventually achieved the same results as the controls, they failed to automate the procedure. Contrary to the control group, the alcoholic groups' learning performance was predicted by controlled cognitive functions throughout the protocol.\n\n\nCONCLUSION\nAt alcohol treatment entry, alcoholic patients with neuropsychological deficits have difficulty acquiring novel semantic and cognitive procedural knowledge. Compared with controls, they seem to use more costly learning strategies, which are nonetheless less efficient. These learning disabilities need to be considered when treatment requiring the acquisition of complex novel information is envisaged.",
"title": ""
},
{
"docid": "9a5ef746c96a82311e3ebe8a3476a5f4",
"text": "A magnetic-tip steerable needle is presented with application to aiding deep brain stimulation electrode placement. The magnetic needle is 1.3mm in diameter at the tip with a 0.7mm diameter shaft, which is selected to match the size of a deep brain stimulation electrode. The tip orientation is controlled by applying torques to the embedded neodymium-iron-boron permanent magnets with a clinically-sized magnetic-manipulation system. The prototype design is capable of following trajectories under human-in-the-loop control with minimum bend radii of 100mm without inducing tissue damage and down to 30mm if some tissue damage is tolerable. The device can be retracted and redirected to reach multiple targets with a single insertion point.",
"title": ""
},
{
"docid": "45547e71daf4c392e3b35eed4312292e",
"text": "The detection of features from Light Detection and Ranging (LIDAR) data is a fundamental component of feature-based mapping and SLAM systems. Existing detectors tend to exploit characteristics of specific environments: corners and lines from indoor (rectilinear) environments, and trees from outdoor environments. While these detectors work well in their intended environments, their performance in different environments can be very poor. We describe a general purpose feature detector for LIDAR data that is applicable to virtually any environment. Our methods adapt classic feature detection methods from the image processing literature, specifically the multi-scale Kanade-Tomasi corner detector. Our resulting method is capable of identifying stable features at a variety of spatial scales and produces uncertainty estimates for use in a state estimation algorithm. We present results on standard datasets, including Victoria Park and Intel Research Center (both 2D), and the MIT DARPA Urban Challenge dataset (3D).",
"title": ""
},
{
"docid": "a0e0d3224cd73539e01f260d564109a7",
"text": "We are living in a world where there is an increasing need for evidence in organizations. Good digital evidence is becoming a business enabler. Very few organizations have the structures (management and infrastructure) in place to enable them to conduct cost effective, low-impact and fficient digital investigations [1]. Digital Forensics (DF) is a vehicle that organizations use to provide good and trustworthy evidence and processes. The current DF models concentrate on reactive investigations, with limited reference to DF readiness and live investigations. However, organizations use DF for other purposes for example compliance testing. The paper proposes that DF consists of three components: Pro-active (ProDF), Active (ActDF) and Re-active (ReDF). ProDF concentrates on DF readiness and the proactive responsible use of DF to demonstrate good governance and enhance governance structures. ActDF considers the gathering of live evidence during an ongoing attack with a limited live investigation element whilst ReDF deals with the traditional DF investigation. The paper discusses each component and the relationship between the components.",
"title": ""
},
{
"docid": "362c41e8f90c097160c7785e8b4c9053",
"text": "This paper focuses on biomimetic design in the field of technical textiles / smart fabrics. Biologically inspired design is a very promising approach that has provided many elegant solutions. Firstly, a few bio-inspired innovations are presented, followed the introduction of trans-disciplinary research as a useful tool for defining the design problem and giving solutions. Furthermore, the required methods for identifying and applying biological analogies are analysed. Finally, the bio-mimetic approach is questioned and the difficulties, limitations and errors that a designer might face when adopting it are discussed. Researchers and product developers that use this approach should also problematize on the role of biomimetic design: is it a practice that redirects us towards a new model of sustainable development or is it just another tool for generating product ideas in order to increase a company’s competitiveness in the global market? Author",
"title": ""
},
{
"docid": "463c6bb86f81d0f0e19427772add1a22",
"text": "Administrative burden represents the costs to businesses, citizens and the administration itself of complying with government regulations and procedures. The burden tends to increase with new forms of public governance that rely less on direct decisions and actions undertaken by traditional government bureaucracies, and more on government creating and regulating the environment for other, non-state actors to jointly address public needs. Based on the reviews of research and policy literature, this paper explores administrative burden as a policy problem, presents how Digital Government (DG) could be applied to address this problem, and identifies societal adoption, organizational readiness and other conditions under which DG can be an effective tool for Administrative Burden Reduction (ABR). Finally, the paper tracks ABR to the latest Contextualization stage in the DG evolution, and discusses possible development approaches and technological potential of pursuing ABR through DG.",
"title": ""
},
{
"docid": "a242abcad4b52ea18c3308d7dd5708d4",
"text": "This 60-day, 30-subject pilot study examined a novel combination of ingredients in a unique sustained release (Carbopol matrix) tablet consumed twice daily. The product was composed of extracts of banaba leaf, green coffee bean, and Moringa oleifera leaf and vitamin D3. Safety was assessed using a 45-measurement blood chemistry panel, an 86-item self-reported Quality of Life Inventory, bone mineral density, and cardiovascular changes. Efficacy was assessed by calculating a body composition improvement index (BCI) based on changes in dual energy X-ray absorptiometry measured fat mass (FM) and fat-free mass (FFM) as well as between the study group (SG) and a historical placebo group. No changes occurred in any blood chemistry measurements. Positive changes were found in the Quality of Life (QOL) inventory composite scores. No adverse effects were observed. Decreases occurred in FM (p = 0.004) and increases in FFM (p = 0.009). Relative to the historical placebo group, the SG lost more FM (p < 0.0001), gained more FFM (p = <0.0001), and had a negative BCI of -2.7 lb. compared with a positive BCI in the SG of 3.4 lb., a 6.1 discordance (p = 0.0009). The data support the safety and efficacy of this unique product and demonstrate importance of using changes in body composition versus scale weight and BMI.",
"title": ""
}
] |
scidocsrr
|
cd2be2ce5bfc5ac89f55ac26be86d759
|
Targeting Infeasibility Questions on Obfuscated Codes
|
[
{
"docid": "98efa74b25284d0ce22038811f9e09e5",
"text": "Automatic analysis of malicious binaries is necessary in order to scale with the rapid development and recovery of malware found in the wild. The results of automatic analysis are useful for creating defense systems and understanding the current capabilities of attackers. We propose an approach for automatic dissection of malicious binaries which can answer fundamental questions such as what behavior they exhibit, what are the relationships between their inputs and outputs, and how an attacker may be using the binary. We implement our approach in a system called BitScope. At the core of BitScope is a system which allows us to execute binaries with symbolic inputs. Executing with symbolic inputs allows us to reason about code paths without constraining the analysis to a particular input value. We implement 5 analysis using BitScope, and demonstrate that the analysis can rapidly analyze important properties such as what behaviors the malicious binaries exhibit. For example, BitScope uncovers all commands in typical DDoS zombies and botnet programs, and uncovers significant behavior in just minutes. This work was supported in part by CyLab at Carnegie Mellon under grant DAAD19-02-1-0389 from the Army Research Office, the U.S. Army Research Office under the Cyber-TA Research Grant No. W911NF-06-1-0316, the ITA (International Technology Alliance), CCF-0424422, National Science Foundation Grant Nos. 0311808, 0433540, 0448452, 0627511, and by the IT R&D program of MIC(Ministry of Information and Communication)/IITA(Institute for Information Technology Advancement) [2005-S-606-02, Next Generation Prediction and Response technology for Computer and Network Security Incidents]. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation, the ARO, CMU, or the U.S. Government.",
"title": ""
},
{
"docid": "2a8d7998ec186e0144c0dcf762afbacc",
"text": "Within the software industry software piracy is a great concern. In this article we address this issue through a prevention technique called software watermarking. Depending on how a software watermark is applied it can be used to discourage piracy; as proof of authorship or purchase; or to track the source of the illegal redistribution. In particular we analyze an algorithm originally proposed by Geneviève Arboit in A Method for Watermarking Java Programs via Opaque Predicates. This watermarking technique embeds the watermark by adding opaque predicates to the application. We have found that the Arboit technique does withstand some forms of attack and has a respectable data-rate. However, it is susceptible to a variety of distortive attacks. One unanswered question in the area of software watermarking is whether dynamic algorithms are inherently more resilient to attacks than static algorithms. We have implemented and empirically evaluated both static and dynamic versions within the SANDMARK framework.",
"title": ""
}
] |
[
{
"docid": "262be71d64eef2534fab547ec3db6b9a",
"text": "In the past few decades, the rise in attacks on communication devices in networks has resulted in a reduction of network functionality, throughput, and performance. To detect and mitigate these network attacks, researchers, academicians, and practitioners developed Intrusion Detection Systems (IDSs) with automatic response systems. The response system is considered an important component of IDS, since without a timely response IDSs may not function properly in countering various attacks, especially on a real-time basis. To respond appropriately, IDSs should select the optimal response option according to the type of network attack. This research study provides a complete survey of IDSs and Intrusion Response Systems (IRSs) on the basis of our in-depth understanding of the response option for different types of network attacks. Knowledge of the path from IDS to IRS can assist network administrators and network staffs in understanding how to tackle different attacks with state-of-the-art technologies.",
"title": ""
},
{
"docid": "26796d48a19ea2a1248b5557814802e8",
"text": "In this paper, we investigate the security challenges and issues of cyber-physical systems. (1)We abstract the general workflow of cyber physical systems, (2)identify the possible vulnerabilities, attack issues, adversaries characteristics and a set of challenges that need to be addressed, (3)then we also propose a context-aware security framework for general cyber-physical systems and suggest some potential research areas and problems.",
"title": ""
},
{
"docid": "c4ff647b5962d3d713577c16a7a9cae5",
"text": "In this paper we propose the use of an illumination invariant transform to improve many aspects of visual localisation, mapping and scene classification for autonomous road vehicles. The illumination invariant colour space stems from modelling the spectral properties of the camera and scene illumination in conjunction, and requires only a single parameter derived from the image sensor specifications. We present results using a 24-hour dataset collected using an autonomous road vehicle, demonstrating increased consistency of the illumination invariant images in comparison to raw RGB images during daylight hours. We then present three example applications of how illumination invariant imaging can improve performance in the context of vision-based autonomous vehicles: 6-DoF metric localisation using monocular cameras over a 24-hour period, life-long visual localisation and mapping using stereo, and urban scene classification in changing environments. Our ultimate goal is robust and reliable vision-based perception and navigation an attractive proposition for low-cost autonomy for road vehicles.",
"title": ""
},
{
"docid": "97571039c1f7a11c65e71c723d231713",
"text": "Blockchains are increasingly attractive due to their decentralization, yet inherent limitations of high latency, in the order of minutes, and attacks on consensus cap their practicality. We introduce Blinkchain, a Byzantine consensus protocol that relies on sharding and locality-preserving techniques from distributed systems to provide a bound on consensus latency, proportional to the network delay between the buyer and the seller nodes. Blinkchain selects a random pool of validators, some of which are legitimate with high probability, even when an attacker focuses its forces to crowd out legitimate validators in a small vicinity.",
"title": ""
},
{
"docid": "7016bdf0b636ab12bb7ddf801d74fb0d",
"text": "Cloud Computing is arguably one of the most discussed information technology topics in recent times. It presents many promising technological and economical opportunities. However, many customers remain reluctant to move their business IT infrastructure completely to “the Cloud“. One of the main concerns of customers is Cloud security and the threat of the unknown. Cloud Service Providers (CSP) encourage this perception by not letting their customers see what is behind their “virtual curtain“. A seldomly discussed, but in this regard highly relevant open issue is the ability to perform digital investigations. This continues to fuel insecurity on the sides of both providers and customers. In Cloud Forensics, the lack of physical access to servers constitutes a completely new and disruptive challenge for investigators. Due to the decentralized nature of data processing in the Cloud, traditional approaches to evidence collection and recovery are no longer practical. This paper focuses on the technical aspects of digital forensics in distributed Cloud environments. We contribute by assessing whether it is possible for the customer of Cloud Computing services to perform a traditional digital investigation from a technical standpoint. Furthermore we discuss possible new methodologies helping customers to perform such investigations and discuss future issues.",
"title": ""
},
{
"docid": "b925f9a2faab100ef9ea3ccc8f956547",
"text": "OBJECTIVE\nAn experiment studied the frequency and correlates of driver mind wandering.\n\n\nBACKGROUND\nDriver mind wandering is associated with risk for crash involvement. The present experiment examined the performance and attentional changes by which this effect might occur.\n\n\nMETHOD\nParticipants performed a car-following task in a high-fidelity driving simulator and were asked to report any time they caught themselves mind wandering. Vehicle control and eye movement data were recorded.\n\n\nRESULTS\nAs compared with their attentive performance, participants showed few deficits in vehicle control while mind wandering but tended to focus visual attention narrowly on the road ahead.\n\n\nCONCLUSION\nData suggest that mind wandering can engender a failure to monitor the environment while driving.\n\n\nAPPLICATION\nResults identify behavioral correlates and potential risks of mind wandering that might enable efforts to detect and mitigate driver inattention.",
"title": ""
},
{
"docid": "2a67a524cb3279967207b1fa8748cd04",
"text": "Recent work in Information Retrieval (IR) using Deep Learning models has yielded state of the art results on a variety of IR tasks. Deep neural networks (DNN) are capable of learning ideal representations of data during the training process, removing the need for independently extracting features. However, the structures of these DNNs are often tailored to perform on specific datasets. In addition, IR tasks deal with text at varying levels of granularity from single factoids to documents containing thousands of words. In this paper, we examine the role of the granularity on the performance of common state of the art DNN structures in IR.",
"title": ""
},
{
"docid": "e3a9de1939e90b8a50c8e472027622be",
"text": "The bacterial cellulose (BC) secreted by Gluconacetobacter xylinus was explored as a novel scaffold material due to its unusual biocompatibility, light transmittance and material properties. The specific surface area of the frozendried BC sheet based on BET isotherm was 22.886 m/g, and the porosity was around 90%. It is known by SEM graphs that significant difference in porosity and pore size exists in the two sides of air-dried BC sheets. The width of cellulose ribbons was 10 nm to 100 nm known by AFM image. The examination of the growth of human corneal stromal cells on BC demonstrated that the material supported the growth and proliferation of human corneal stromal cells. The ingrowth of corneal stromal cells into the scaffold was verified by Laser Scanning Confocal Microscope. The results suggest the potentiality for this biomaterial as a scaffold for tissue engineering of artificial cornea. KeywordsBacterial cellulose; Cornea; Tissue engineering; Scaffold; Corneal stromal cells",
"title": ""
},
{
"docid": "d196fad248811b1d3f7f8d4d11d3b83b",
"text": "Recent developments in telecommunications have allowed drawing new paradigms, including the Internet of Everything, to provide services by the interconnection of different physical devices enabling the exchange of data to enrich and automate people’s daily activities; and Fog computing, which is an extension of the well-known Cloud computing, bringing tasks to the edge of the network exploiting characteristics such as lower latency, mobility support, and location awareness. Combining these paradigms opens a new set of possibilities for innovative services and applications; however, it also brings a new complex scenario that must be efficiently managed to properly fulfill the needs of the users. In this scenario, the Fog Orchestrator component is the key to coordinate the services in the middle of Cloud computing and Internet of Everything. In this paper, key challenges in the development of the Fog Orchestrator to support the Internet of Everything are identified, including how they affect the tasks that a Fog service Orchestrator should perform. Furthermore, different service Orchestrator architectures for the Fog are explored and analyzed in order to identify how the previously listed challenges are being tackled. Finally, a discussion about the open challenges, technological directions, and future of the research on this subject is presented.",
"title": ""
},
{
"docid": "cd08ec6c25394b3304368952cf4fb99b",
"text": "Recently, several experimental studies have been conducted on block data layout as a data transformation technique used in conjunction with tiling to improve cache performance. In this paper, we provide a theoretical analysis for the TLB and cache performance of block data layout. For standard matrix access patterns, we derive an asymptotic lower bound on the number of TLB misses for any data layout and show that block data layout achieves this bound. We show that block data layout improves TLB misses by a factor of O B compared with conventional data layouts, where B is the block size of block data layout. This reduction contributes to the improvement in memory hierarchy performance. Using our TLB and cache analysis, we also discuss the impact of block size on the overall memory hierarchy performance. These results are validated through simulations and experiments on state-of-the-art platforms.",
"title": ""
},
{
"docid": "4a9a53444a74f7125faa99d58a5b0321",
"text": "The new transformed read-write Web has resulted in a rapid growth of user generated content on the Web resulting into a huge volume of unstructured data. A substantial part of this data is unstructured text such as reviews and blogs. Opinion mining and sentiment analysis (OMSA) as a research discipline has emerged during last 15 years and provides a methodology to computationally process the unstructured data mainly to extract opinions and identify their sentiments. The relatively new but fast growing research discipline has changed a lot during these years. This paper presents a scientometric analysis of research work done on OMSA during 20 0 0–2016. For the scientometric mapping, research publications indexed in Web of Science (WoS) database are used as input data. The publication data is analyzed computationally to identify year-wise publication pattern, rate of growth of publications, types of authorship of papers on OMSA, collaboration patterns in publications on OMSA, most productive countries, institutions, journals and authors, citation patterns and an year-wise citation reference network, and theme density plots and keyword bursts in OMSA publications during the period. A somewhat detailed manual analysis of the data is also performed to identify popular approaches (machine learning and lexicon-based) used in these publications, levels (document, sentence or aspect-level) of sentiment analysis work done and major application areas of OMSA. The paper presents a detailed analytical mapping of OMSA research work and charts the progress of discipline on various useful parameters. © 2016 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "fc59a335d52d2f895eb6b7e49a836f67",
"text": "Workflow management promises a new solution to an age-old problem: controlling, monitoring, optimizing and supporting business processes. What is new about workflow management is the explicit representation of the business process logic which allows for computerized support. This paper discusses the use of Petri nets in the context of workflow management. Petri nets are an established tool for modeling and analyzing processes. On the one hand, Petri nets can be used as a design language for the specification of complex workflows. On the other hand, Petri net theory provides for powerful analysis techniques which can be used to verify the correctness of workflow procedures. This paper introduces workflow management as an application domain for Petri nets, presents state-of-the-art results with respect to the verification of workflows, and highlights some Petri-net-based workflow tools.",
"title": ""
},
{
"docid": "06fe4547495c597a0f7052efd78d5a04",
"text": "The American cockroach, Periplaneta americana, provides a successful model for the study of legged locomotion. Sensory regulation and the relative importance of sensory feedback vs. central control in animal locomotion are key aspects in our understanding of locomotive behavior. Here we introduce the cockroach model and describe the basic characteristics of the neural generation and control of walking and running in this insect. We further provide a brief overview of some recent studies, including mathematical modeling, which have contributed to our knowledge of sensory control in cockroach locomotion. We focus on two sensory mechanisms and sense organs, those providing information related to loading and unloading of the body and the legs, and leg-movement-related sensory receptors, and present evidence for the instrumental role of these sensory signals in inter-leg locomotion control. We conclude by identifying important open questions and indicate future perspectives.",
"title": ""
},
{
"docid": "5e7e74966751bba22ca66b02c4c91642",
"text": "To deal with the defects of BP neural networks used in balance control of inverted pendulum, such as longer train time and converging in partial minimum, this article reaLizes the control of double inverted pendulum with improved BP algorithm of artificial neural networks(ANN), builds up a training model of test simulation and the BP network is 6-10-1 structure. Tansig function is used in hidden layer and PureLin function is used in output layer, LM is used in training algorithm. The training data is acquried by three-loop PID algorithm. The model is learned and trained with Matlab calculating software, and the simuLink simulation experiment results prove that improved BP algorithm for inverted pendulum control has higher precision, better astringency and lower calculation. This algorithm has wide appLication on nonLinear control and robust control field in particular.",
"title": ""
},
{
"docid": "679e7b448f0b3bc2f1713cdb852ac6b2",
"text": "There are many advantages of using high frequency PWM (in the range of 50 to 100 kHz) in motor drive applications. High motor efficiency, fast control response, lower motor torque ripple, close to ideal sinusoidal motor current waveform, smaller filter size, lower cost filter, etc. are a few of the advantages. However, higher frequency PWM is also associated with severe voltage reflection and motor insulation breakdown issues at the motor terminals. If standard Si IGBT based inverters are employed, losses in the switches make it difficult to overcome significant drop in efficiency of converting electrical power to mechanical power. Work on SiC and GaN based inverter has progressed and variable frequency drives (VFDs) can now be operated efficiently at carrier frequencies in the 50 to 200 kHz range, using these devices. Using soft magnetic material, the overall efficiency of filtering can be improved. The switching characteristics of SiC and GaN devices are such that even at high switching frequency, the turn on and turn off losses are minimal. Hence, there is not much penalty in increasing the carrier frequency of the VFD. Losses in AC motors due to PWM waveform are significantly reduced. All the above features put together improves system efficiency. This paper presents results obtained on using a 6-in-1 GaN module for VFD application, operating at a carrier frequency of 100 kHz with an output sine wave filter. Experimental results show the improvement in motor efficiency and system efficiency on using a GaN based VFD in comparison to the standard Si IGBT based VFD.",
"title": ""
},
{
"docid": "72b1a4204d49e588c793f3ec5f91c18d",
"text": "Humans convey their intentions through the usage of both verbal and nonverbal behaviors during face-to-face communication. Speaker intentions often vary dynamically depending on different nonverbal contexts, such as vocal patterns and facial expressions. As a result, when modeling human language, it is essential to not only consider the literal meaning of the words but also the nonverbal contexts in which these words appear. To better model human language, we first model expressive nonverbal representations by analyzing the fine-grained visual and acoustic patterns that occur during word segments. In addition, we seek to capture the dynamic nature of nonverbal intents by shifting word representations based on the accompanying nonverbal behaviors. To this end, we propose the Recurrent Attended Variation Embedding Network (RAVEN) that models the fine-grained structure of nonverbal subword sequences and dynamically shifts word representations based on nonverbal cues. Our proposed model achieves competitive performance on two publicly available datasets for multimodal sentiment analysis and emotion recognition. We also visualize the shifted word representations in different nonverbal contexts and summarize common patterns regarding multimodal variations of word representations.",
"title": ""
},
{
"docid": "e6a59ce7ee0df1d7ba6936595b0ac59d",
"text": "MOTIVATION\nInferring the underlying regulatory pathways within a gene interaction network is a fundamental problem in Systems Biology to help understand the complex interactions and the regulation and flow of information within a system-of-interest. Given a weighted gene network and a gene in this network, the goal of an inference algorithm is to identify the potential regulatory pathways passing through this gene.\n\n\nRESULTS\nIn a departure from previous approaches that largely rely on the random walk model, we propose a novel single-source k-shortest paths based algorithm to address this inference problem. An important element of our approach is to explicitly account for and enhance the diversity of paths discovered by our algorithm. The intuition here is that diversity in paths can help enrich different functions and thereby better position one to understand the underlying system-of-interest. Results on the yeast gene network demonstrate the utility of the proposed approach over extant state-of-the-art inference algorithms. Beyond utility, our algorithm achieves a significant speedup over these baselines.\n\n\nAVAILABILITY\nAll data and codes are freely available upon request.",
"title": ""
},
{
"docid": "bdda2d3eef1a5040d626419c10f18d36",
"text": "This paper presents a novel hybrid permanent magnet and wound field synchronous machine geometry with a displaced reluctance axis. This concept is known for improving motor operation performance and efficiency at the cost of an inferior generator operation. To overcome this disadvantage, the proposed machine geometry is capable of inverting the magnetic asymmetry dynamically. Thereby, the positive effects of the magnetic asymmetry can be used in any operation point. This paper examines the theoretical background and shows the benefits of this geometry by means of simulation and measurement. The prototype achieves an increase in torque of 4 % and an increase in efficiency of 2 percentage points over a conventional electrically excited synchronous machine.",
"title": ""
},
{
"docid": "ea843fd64fcf15fc8d2a970400be011c",
"text": "Advances in the development of micro-electromechanical systems (MEMS) have made possible the fabrication of cheap and small dimension accelerometers and gyroscopes, which are being used in many applications where the global positioning system (GPS) and the inertial navigation system (INS) integration is carried out, i.e., identifying track defects, terrestrial and pedestrian navigation, unmanned aerial vehicles (UAVs), stabilization of many platforms, etc. Although these MEMS sensors are low-cost, they present different errors, which degrade the accuracy of the navigation systems in a short period of time. Therefore, a suitable modeling of these errors is necessary in order to minimize them and, consequently, improve the system performance. In this work, the most used techniques currently to analyze the stochastic errors that affect these sensors are shown and compared: we examine in detail the autocorrelation, the Allan variance (AV) and the power spectral density (PSD) techniques. Subsequently, an analysis and modeling of the inertial sensors, which combines autoregressive (AR) filters and wavelet de-noising, is also achieved. Since a low-cost INS (MEMS grade) presents error sources with short-term (high-frequency) and long-term (low-frequency) components, we introduce a method that compensates for these error terms by doing a complete analysis of Allan variance, wavelet de-nosing and the selection of the level of decomposition for a suitable combination between these techniques. Eventually, in order to assess the stochastic models obtained with these techniques, the Extended Kalman Filter (EKF) of a loosely-coupled GPS/INS integration strategy is augmented with different states. Results show a comparison between the proposed method and the traditional sensor error models under GPS signal blockages using real data collected in urban roadways.",
"title": ""
}
] |
scidocsrr
|
3e00d4bd7cedbd9f9934cc86a0a4dcaf
|
Exploring the Use of Text Classification in the Legal Domain
|
[
{
"docid": "7de29b042513aaf1a3b12e71bee6a338",
"text": "The widespread use of deception in online sources has motivated the need for methods to automatically profile and identify deceivers. This work explores deception, gender and age detection in short texts using a machine learning approach. First, we collect a new open domain deception dataset also containing demographic data such as gender and age. Second, we extract feature sets including n-grams, shallow and deep syntactic features, semantic features, and syntactic complexity and readability metrics. Third, we build classifiers that aim to predict deception, gender, and age. Our findings show that while deception detection can be performed in short texts even in the absence of a predetermined domain, gender and age prediction in deceptive texts is a challenging task. We further explore the linguistic differences in deceptive content that relate to deceivers gender and age and find evidence that both age and gender play an important role in people’s word choices when fabricating lies.",
"title": ""
}
] |
[
{
"docid": "909829de03729dd70d231d20a9c92e81",
"text": "Nonparametric two sample testing is a decision theoretic problem that involves identifying differences between two random variables without making parametric assumptions about their underlying distributions. We refer to the most common settings as mean difference alternatives (MDA), for testing differences only in first moments, and general difference alternatives (GDA), which is about testing for any difference in distributions. A large number of test statistics have been proposed for both these settings. This paper connects three classes of statistics high dimensional variants of Hotelling’s t-test, statistics based on Reproducing Kernel Hilbert Spaces, and energy statistics based on pairwise distances. We ask the following question how much statistical power do popular kernel and distance based tests for GDA have when the unknown distributions differ in their means, compared to specialized tests for MDA? To answer this, we formally characterize the power of popular tests for GDA like the Maximum Mean Discrepancy with the Gaussian kernel (gMMD) and bandwidth-dependent variants of the Energy Distance with the Euclidean norm (eED) in the high-dimensional MDA regime. We prove several interesting properties relating these classes of tests under MDA, which include (a) eED and gMMD have asymptotically equal power; furthermore they also enjoy a free lunch because, while they are additionally consistent for GDA, they have the same power as specialized high-dimensional t-tests for MDA. All these tests are asymptotically optimal (including matching constants) for MDA under spherical covariances, according to simple lower bounds. (b) The power of gMMD is independent of the kernel bandwidth, as long as it is larger than the choice made by the median heuristic. (c) There is a clear and smooth computation-statistics tradeoff for linear-time, subquadratic-time and quadratic-time versions of these tests, with more computation resulting in higher power. 1 ar X iv :1 50 8. 00 65 5v 1 [ m at h. ST ] 4 A ug 2 01 5 All three observations are practically important, since point (a) implies that eED and gMMD while being consistent against all alternatives, are also automatically adaptive to simpler alternatives, point (b) suggests that the median “heuristic” has some theoretical justification for being a default bandwidth choice, and point (c) implies that expending more computation may yield direct statistical benefit by orders of magnitude.",
"title": ""
},
{
"docid": "b5788c52127d2ef06df428d758f1a225",
"text": "Conventional convolutional neural networks use either a linear or a nonlinear filter to extract features from an image patch (region) of spatial size <inline-formula> <tex-math notation=\"LaTeX\">$ H\\times W $ </tex-math></inline-formula> (typically, <inline-formula> <tex-math notation=\"LaTeX\">$ H $ </tex-math></inline-formula> is small and is equal to <inline-formula> <tex-math notation=\"LaTeX\">$ W$ </tex-math></inline-formula>, e.g., <inline-formula> <tex-math notation=\"LaTeX\">$ H $ </tex-math></inline-formula> is 5 or 7). Generally, the size of the filter is equal to the size <inline-formula> <tex-math notation=\"LaTeX\">$ H\\times W $ </tex-math></inline-formula> of the input patch. We argue that the representational ability of equal-size strategy is not strong enough. To overcome the drawback, we propose to use subpatch filter whose spatial size <inline-formula> <tex-math notation=\"LaTeX\">$ h\\times w $ </tex-math></inline-formula> is smaller than <inline-formula> <tex-math notation=\"LaTeX\">$ H\\times W $ </tex-math></inline-formula>. The proposed subpatch filter consists of two subsequent filters. The first one is a linear filter of spatial size <inline-formula> <tex-math notation=\"LaTeX\">$ h\\times w $ </tex-math></inline-formula> and is aimed at extracting features from spatial domain. The second one is of spatial size <inline-formula> <tex-math notation=\"LaTeX\">$ 1\\times 1 $ </tex-math></inline-formula> and is used for strengthening the connection between different input feature channels and for reducing the number of parameters. The subpatch filter convolves with the input patch and the resulting network is called a subpatch network. Taking the output of one subpatch network as input, we further repeat constructing subpatch networks until the output contains only one neuron in spatial domain. These subpatch networks form a new network called the cascaded subpatch network (CSNet). The feature layer generated by CSNet is called the <italic>csconv</italic> layer. For the whole input image, we construct a deep neural network by stacking a sequence of <italic>csconv</italic> layers. Experimental results on five benchmark data sets demonstrate the effectiveness and compactness of the proposed CSNet. For example, our CSNet reaches a test error of 5.68% on the CIFAR10 data set without model averaging. To the best of our knowledge, this is the best result ever obtained on the CIFAR10 data set.",
"title": ""
},
{
"docid": "11c245ca7bc133155ff761374dfdea6e",
"text": "Received Nov 12, 2017 Revised Jan 20, 2018 Accepted Feb 11, 2018 In this paper, a modification of PVD (Pixel Value Differencing) algorithm is used for Image Steganography in spatial domain. It is normalizing secret data value by encoding method to make the new pixel edge difference less among three neighbors (horizontal, vertical and diagonal) and embedding data only to less intensity pixel difference areas or regions. The proposed algorithm shows a good improvement for both color and gray-scale images compared to other algorithms. Color images performances are better than gray images. However, in this work the focus is mainly on gray images. The strenght of this scheme is that any random hidden/secret data do not make any shuttle differences to Steg-image compared to original image. The bit plane slicing is used to analyze the maximum payload that has been embeded into the cover image securely. The simulation results show that the proposed algorithm is performing better and showing great consistent results for PSNR, MSE values of any images, also against Steganalysis attack.",
"title": ""
},
{
"docid": "7fb2348fbde9dbef88357cc79ff394c5",
"text": "This paper presents a measurement system with capacitive sensor connected to an open-source electronic platform Arduino Uno. A simple code was modified in the project, which ensures that the platform works as interface for the sensor. The code can be modified and upgraded at any time to fulfill other specific applications. The simulations were carried out in the platform's own environment and the collected data are represented in graphical form. Accuracy of developed measurement platform is 0.1 pF.",
"title": ""
},
{
"docid": "26560de19573a47065e23150a6a56047",
"text": "In this note, we revisit the problem of scheduling stabilizing control tasks on embedded processors. We start from the paradigm that a real-time scheduler could be regarded as a feedback controller that decides which task is executed at any given instant. This controller has for objective guaranteeing that (control unrelated) software tasks meet their deadlines and that stabilizing control tasks asymptotically stabilize the plant. We investigate a simple event-triggered scheduler based on this feedback paradigm and show how it leads to guaranteed performance thus relaxing the more traditional periodic execution requirements.",
"title": ""
},
{
"docid": "923377a712c3c8b46fd1eefcd7106ae6",
"text": "Twitter has evolved from being a conversation or opinion sharing medium among friends into a platform to share and disseminate information about current events. Events in the real world create a corresponding spur of posts (tweets) on Twitter. Not all content posted on Twitter is trustworthy or useful in providing information about the event. In this paper, we analyzed the credibility of information in tweets corresponding to fourteen high impact news events of 2011 around the globe. From the data we analyzed, on average 30% of total tweets posted about an event contained situational information about the event while 14% was spam. Only 17% of the total tweets posted about the event contained situational awareness information that was credible. Using regression analysis, we identified the important content and sourced based features, which can predict the credibility of information in a tweet. Prominent content based features were number of unique characters, swear words, pronouns, and emoticons in a tweet, and user based features like the number of followers and length of username. We adopted a supervised machine learning and relevance feedback approach using the above features, to rank tweets according to their credibility score. The performance of our ranking algorithm significantly enhanced when we applied re-ranking strategy. Results show that extraction of credible information from Twitter can be automated with high confidence.",
"title": ""
},
{
"docid": "fe870a09fd4b0a8e400c4ea80b717e29",
"text": "We study trend filtering, a recently proposed tool of Kim et al. (2009) for nonparametric regression. The trend filtering estimate is defined as the minimizer of a penalized least squares criterion, in which the penalty term sums the absolute kth order discrete derivatives over the input points. Perhaps not surprisingly, trend filtering estimates appear to have the structure of kth degree spline functions, with adaptively chosen knot points (we say “appear” here as trend filtering estimates are not really functions over continuous domains, and are only defined over the discrete set of inputs). This brings to mind comparisons to other nonparametric regression tools that also produce adaptive splines; in particular, we compare trend filtering to smoothing splines, which penalize the sum of squared derivatives across input points, and to locally adaptive regression splines (Mammen & van de Geer 1997), which penalize the total variation of the kth derivative. Empirically, we discover that trend filtering estimates adapt to the local level of smoothness much better than smoothing splines, and further, they exhibit a remarkable similarity to locally adaptive regression splines. We also provide theoretical support for these empirical findings; most notably, we prove that (with the right choice of tuning parameter) the trend filtering estimate converges to the true underlying function at the minimax rate for functions whose kth derivative is of bounded variation. This is done via an asymptotic pairing of trend filtering and locally adaptive regression splines, which have already been shown to converge at the minimax rate (Mammen & van de Geer 1997). At the core of this argument is a new result tying together the fitted values of two lasso problems that share the same outcome vector, but have different predictor matrices.",
"title": ""
},
{
"docid": "f8854602bbb2f5295a5fba82f22ca627",
"text": "Models such as latent semantic analysis and those based on neural embeddings learn distributed representations of text, and match the query against the document in the latent semantic space. In traditional information retrieval models, on the other hand, terms have discrete or local representations, and the relevance of a document is determined by the exact matches of query terms in the body text. We hypothesize that matching with distributed representations complements matching with traditional local representations, and that a combination of the two is favourable. We propose a novel document ranking model composed of two separate deep neural networks, one that matches the query and the document using a local representation, and another that matches the query and the document using learned distributed representations. The two networks are jointly trained as part of a single neural network. We show that this combination or ‘duet’ performs significantly better than either neural network individually on a Web page ranking task, and significantly outperforms traditional baselines and other recently proposed models based on neural networks.",
"title": ""
},
{
"docid": "17cb27030abc5054b8f51256bdee346a",
"text": "Purpose – This paper seeks to define and describe agile project management using the Scrum methodology as a method for more effectively managing and completing projects. Design/methodology/approach – This paper provides a general overview and introduction to the concepts of agile project management and the Scrum methodology in particular. Findings – Agile project management using the Scrum methodology allows project teams to manage digital library projects more effectively by decreasing the amount of overhead dedicated to managing the project. Using an iterative process of continuous review and short-design time frames, the project team is better able to quickly adapt projects to rapidly evolving environments in which systems will be used. Originality/value – This paper fills a gap in the digital library project management literature by providing an overview of agile project management methods.",
"title": ""
},
{
"docid": "1fa7c954f5e352679c33d8946f4cac4e",
"text": "In some cases, such as in the estimation of impulse responses, it has been found that for plausible sample sizes the coverage accuracy of single bootstrap confidence intervals can be poor. The error in the coverage probability of single bootstrap confidence intervals may be reduced by the use of double bootstrap confidence intervals. The computer resources required for double bootstrap confidence intervals are often prohibitive, especially in the context of Monte Carlo studies. Double bootstrap confidence intervals can be estimated using computational algorithms incorporating simple deterministic stopping rules that avoid unnecessary computations. These algorithms may make the use and Monte Carlo evaluation of double bootstrap confidence intervals feasible in cases where otherwise they would not be feasible. The efficiency gains due to the use of these algorithms are examined by means of a Monte Carlo study for examples of confidence intervals for a mean and for the cumulative impulse response in a second order autoregressive model.",
"title": ""
},
{
"docid": "6fe5f8c299cbcff1b2b5f3f944e6ef75",
"text": "Microservices are a new trend rising fast from the enterprise world. Even though the design principles around microservices have been identified, it is difficult to have a clear view of existing research solutions for architecting microservices. In this paper we apply the systematic mapping study methodology to identify, classify, and evaluate the current state of the art on architecting microservices from the following three perspectives: publication trends, focus of research, and potential for industrial adoption. More specifically, we systematically define a classification framework for categorizing the research on architecting microservices and we rigorously apply it to the 71 selected studies. We synthesize the obtained data and produce a clear overview of the state of the art. This gives a solid basis to plan for future research and applications of architecting microservices.",
"title": ""
},
{
"docid": "ff53accc7e5342827104bf96a8d0e134",
"text": "The vision of a Smart Electric Grid relies critically on substantial advances in intelligent decentralized control mechanisms. We propose a novel class of autonomous broker agents for retail electricity trading that can operate in a wide range of Smart Electricity Markets, and that are capable of deriving long-term, profit-maximizing policies. Our brokers use Reinforcement Learning with function approximation, they can accommodate arbitrary economic signals from their environments, and they learn efficiently over the large state spaces resulting from these signals. We show how feature selection and regularization can be leveraged to automatically optimize brokers for particular market conditions, and demonstrate the performance of our design in extensive experiments using real-world energy market data.",
"title": ""
},
{
"docid": "f9afcc134abda1c919cf528cbc975b46",
"text": "Multimodal question answering in the cultural heritage domain allows visitors to museums, landmarks or other sites to ask questions in a more natural way. This in turn provides better user experiences. In this paper, we propose the construction of a golden standard dataset dedicated to aiding research into multimodal question answering in the cultural heritage domain. The dataset, soon to be released to the public, contains multimodal content about the fascinating old-Egyptian Amarna period, including images of typical artworks, documents about these artworks (containing images) and over 800 multimodal queries integrating visual and textual questions. The multimodal questions and related documents are all in English. The multimodal questions are linked to relevant paragraphs in the related documents that contain the answer to the multimodal query.",
"title": ""
},
{
"docid": "ab97caed9c596430c3d76ebda55d5e6e",
"text": "A 1.5 GHz low noise amplifier for a Global Positioning System (GPS) receiver has been implemented in a 0.6 /spl mu/m CMOS process. This amplifier provides a forward gain of 22 dB with a noise figure of only 3.5 dB while drawing 30 mW from a 1.5 V supply. To the authors' knowledge, this represents the lowest noise figure reported to date for a CMOS amplifier operating above 1 GHz.",
"title": ""
},
{
"docid": "6b7594aa4ace0f56884d970a9e254dc5",
"text": "Recent work has explored the use of hidden Markov models for unsupervised discourse and conversation modeling, where each segment or block of text such as a message in a conversation is associated with a hidden state in a sequence. We extend this approach to allow each block of text to be a mixture of multiple classes. Under our model, the probability of a class in a text block is a log-linear function of the classes in the previous block. We show that this model performs well at predictive tasks on two conversation data sets, improving thread reconstruction accuracy by up to 15 percentage points over a standard HMM. Additionally, we show quantitatively that the induced word clusters correspond to speech acts more closely than baseline models.",
"title": ""
},
{
"docid": "953f17ceeacafe508215278986fc9cb2",
"text": "I apply recent work on\"learning to think\"(2015) and on PowerPlay (2011) to the incremental training of an increasingly general problem solver, continually learning to solve new tasks without forgetting previous skills. The problem solver is a single recurrent neural network (or similar general purpose computer) called ONE. ONE is unusual in the sense that it is trained in various ways, e.g., by black box optimization / reinforcement learning / artificial evolution as well as supervised / unsupervised learning. For example, ONE may learn through neuroevolution to control a robot through environment-changing actions, and learn through unsupervised gradient descent to predict future inputs and vector-valued reward signals as suggested in 1990. User-given tasks can be defined through extra goal-defining input patterns, also proposed in 1990. Suppose ONE has already learned many skills. Now a copy of ONE can be re-trained to learn a new skill, e.g., through neuroevolution without a teacher. Here it may profit from re-using previously learned subroutines, but it may also forget previous skills. Then ONE is retrained in PowerPlay style (2011) on stored input/output traces of (a) ONE's copy executing the new skill and (b) previous instances of ONE whose skills are still considered worth memorizing. Simultaneously, ONE is retrained on old traces (even those of unsuccessful trials) to become a better predictor, without additional expensive interaction with the enviroment. More and more control and prediction skills are thus collapsed into ONE, like in the chunker-automatizer system of the neural history compressor (1991). This forces ONE to relate partially analogous skills (with shared algorithmic information) to each other, creating common subroutines in form of shared subnetworks of ONE, to greatly speed up subsequent learning of additional, novel but algorithmically related skills.",
"title": ""
},
{
"docid": "22a5aa4b9cbafa3cf63b6cf4aff60ba3",
"text": "characteristics, burnout, and (other-ratings of) performance (N 146). We hypothesized that job demands (e.g., work pressure and emotional demands) would be the most important antecedents of the exhaustion component of burnout, which, in turn, would predict in-role performance (hypothesis 1). In contrast, job resources (e.g., autonomy and social support) were hypothesized to be the most important predictors of extra-role performance, through their relationship with the disengagement component of burnout (hypothesis 2). In addition, we predicted that job resources would buffer the relationship between job demands and exhaustion (hypothesis 3), and that exhaustion would be positively related to disengagement (hypothesis 4). The results of structural equation modeling analyses provided strong support for hypotheses 1, 2, and 4, but rejected hypothesis 3. These findings support the JD-R model’s claim that job demands and job resources initiate two psychological processes, which eventually affect organizational outcomes. © 2004 Wiley Periodicals, Inc.",
"title": ""
},
{
"docid": "a2f3c804333f36d208df00eb3f56589f",
"text": "In a number of human diseases, including depression, interactions between genetic and environmental factors have been identified in the absence of direct genotype-disorder associations. The lack of genes with major direct pathogenic effect suggests that genotype-specific vulnerabilities are balanced by adaptive advantages and implies aetiological heterogeneity. A model of depression is proposed that incorporates the interacting genetic and environmental factors over the life course and provides an explanatory framework for the heterogeneous aetiology of depression. Early environmental influences act on the genome to shape the adaptability to environmental changes in later life. The possibility is explored that genotype- and epigenotype-related traits can be harnessed to develop personalized therapeutic interventions. As diagnosis of depression alone is a weak predictor of response to specific treatments, aetiological subtypes can be used to inform the choice between treatments. As a specific application of this notion, a hypothesis is proposed regarding relative responsiveness of aetiological subtypes of depression to psychological treatment and antidepressant medication. Other testable predictions are likely to emerge from the general framework of interacting genetic, epigenetic and environmental mechanisms in depression.",
"title": ""
},
{
"docid": "333aeefbd2cf8a6eba5964ee57124df7",
"text": "The continue demands of internet and email communication has creating spam emails also known unsolicited bulk mails. These emails enter bypass in our mail box and affect our system. Different filtering techniques are using to detect these emails such as Random Forest, Naive Bayesian, SVM and Neural Network. In this paper, we compare the different performance matrices using Bayesian Classification and Neural Network approaches of data mining that are",
"title": ""
}
] |
scidocsrr
|
cbeb9eda3e0a7b5e40d4425dcf0e28c6
|
Neural Networks for Language Modeling
|
[
{
"docid": "497088def9f5f03dcb32e33d1b6fcb64",
"text": "In recent years, variants of a neural network architecture for statistical language modeling have been proposed and successfully applied, e.g. in the language modeling component of speech recognizers. The main advantage of these architectures is that they learn an embedding for words (or other symbols) in a continuous space that helps to smooth the language model and provide good generalization even when the number of training examples is insufficient. However, these models are extremely slow in comparison to the more commonly used n-gram models, both for training and recognition. As an alternative to an importance sampling method proposed to speed-up training, we introduce a hierarchical decomposition of the conditional probabilities that yields a speed-up of about 200 both during training and recognition. The hierarchical decomposition is a binary hierarchical clustering constrained by the prior knowledge extracted from the WordNet semantic hierarchy.",
"title": ""
}
] |
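The hierarchical-softmax passage above describes decomposing a word's conditional probability into a chain of smaller decisions so that scoring a word no longer costs time proportional to the full vocabulary. The sketch below illustrates that idea with a flat two-level hierarchy in plain NumPy; the class structure, dimensions, and names are hypothetical illustrations, not the WordNet-derived binary tree used in the cited work.

```python
# Minimal sketch of a two-level hierarchical softmax (illustrative assumptions only).
import numpy as np

def softmax(z):
    z = z - z.max()          # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

class TwoLevelHierarchicalSoftmax:
    """Toy two-level hierarchy: every word belongs to exactly one class."""

    def __init__(self, hidden_dim, num_classes, words_per_class, seed=0):
        rng = np.random.default_rng(seed)
        # One softmax over classes, and one per-class softmax over that class's words.
        self.class_weights = rng.normal(scale=0.1, size=(num_classes, hidden_dim))
        self.word_weights = rng.normal(
            scale=0.1, size=(num_classes, words_per_class, hidden_dim))
        self.words_per_class = words_per_class

    def word_probability(self, hidden, word_id):
        # P(word | h) = P(class(word) | h) * P(word | class(word), h)
        cls, idx = divmod(word_id, self.words_per_class)
        p_class = softmax(self.class_weights @ hidden)[cls]
        p_word_given_class = softmax(self.word_weights[cls] @ hidden)[idx]
        return p_class * p_word_given_class

# Scoring one word touches num_classes + words_per_class rows instead of
# the full vocabulary of num_classes * words_per_class words.
hsm = TwoLevelHierarchicalSoftmax(hidden_dim=64, num_classes=100, words_per_class=100)
hidden_state = np.random.default_rng(1).normal(size=64)
print(hsm.word_probability(hidden_state, word_id=4242))
```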
[
{
"docid": "70743017cdee81c042491fe9ea550515",
"text": "Lightweight cryptographic solutions are required to guarantee the security of Internet of Things (IoT) pervasiveness. Cryptographic primitives mandate a non-linear operation. The design of a lightweight, secure, non-linear 4 × 4 substitution box (S-box) suited to Internet of Things (IoT) applications is proposed in this work. The structure of the 4 × 4 S-box is devised in the finite fields GF (24) and GF ((22)2). The finite field S-box is realized by multiplicative inversion followed by an affine transformation. The multiplicative inverse architecture employs Euclidean algorithm for inversion in the composite field GF ((22)2). The affine transformation is carried out in the field GF (24). The isomorphic mapping between the fields GF (24) and GF ((22)2) is based on the primitive element in the higher order field GF (24). The recommended finite field S-box architecture is combinational and enables sub-pipelining. The linear and differential cryptanalysis validates that the proposed S-box is within the maximal security bound. It is observed that there is 86.5% lesser gate count for the realization of sub field operations in the composite field GF ((22)2) compared to the GF (24) field. In the PRESENT lightweight cipher structure with the basic loop architecture, the proposed S-box demonstrates 5% reduction in the gate equivalent area over the look-up-table-based S-box with TSMC 180 nm technology.",
"title": ""
},
{
"docid": "758ac35802370de859d7d3eb668bfa26",
"text": "Mura is a typical region defect of TFT-LCD, which appears as low contrast, non-uniform brightness regions, typically larger than a single pixel. It is caused by a variety of physical factors such as non-uniformly distributed liquid crystal material and foreign particles within the liquid crystal. As compared to point defect and line defect, mura is relatively difficult to be identified due to its low contrast and no particular pattern of shape. Though automatic inspection of mura was discussed in many literatures, there is no an inspection method could be used to practical application because the defect models proposed were not consistent with the real ones. Since mura is of strong complexity and vagueness, so it is difficult to establish the accurate mathematical model of mura. Therefore, a fuzzy neural network approach for quantitative evaluation of mura in TFT-LCD is proposed in this paper. Experimental results show that a fuzzy neural network is very useful in solving such complex recognition problems as mura evaluation",
"title": ""
},
{
"docid": "0c77e3923dfae2b31824ce1285e6d5fd",
"text": "1 ACKNOWLEDGEMENTS 2",
"title": ""
},
{
"docid": "3718dbbcdf7d89ba4d41a4d29770d0da",
"text": "Sequential pattern mining is a popular data mining task with wide applications. However, it may present too many sequential patterns to users, which makes it difficult for users to comprehend the results. As a solution, it was proposed to mine maximal sequential patterns, a compact representation of the set of sequential patterns, which is often several orders of magnitude smaller than the set of all sequential patterns. However, the task of mining maximal patterns remains computationally expensive. To address this problem, we introduce a vertical mining algorithm named VMSP (Vertical mining of Maximal Sequential Patterns). It is to our knowledge the first vertical mining algorithm for mining maximal sequential patterns. An experimental study on five real datasets shows that VMSP is up to two orders of magnitude faster than the current state-of-the-art algorithm.",
"title": ""
},
{
"docid": "70d0f96d42467e1c998bb9969de55a39",
"text": "RGB-D cameras provide both a color image and a depth image which contains the real depth information about per-pixel. The richness of their data and the development of low-cost sensors have combined to present an attractive opportunity for mobile robotics research. In this paper, we describe a novel hybrid visual odometry using an RGB-D camera. Different from the original method, it is a pure visual odometry method without any other information, such as inertial data. The important key is hybrid, which means that the odometry can be executed in two different processes depending on the conditions. It consists of two parts, including a feature-based visual odometry and a direct visual odometry. Details about the algorithm are discussed in the paper. Especially, the switch conditions are described in detail. Beside, we evaluate the continuity and robustness for the system on public dataset. The experiments demonstrate that our system has more stable continuity and better robustness.",
"title": ""
},
{
"docid": "35822f51adaef207b205910a48dd497f",
"text": "BACKGROUND\nThe adoption of healthcare technology is arduous, and it requires planning and implementation time. Healthcare organizations are vulnerable to modern trends and threats because it has not kept up with threats.\n\n\nOBJECTIVE\nThe objective of this systematic review is to identify cybersecurity trends, including ransomware, and identify possible solutions by querying academic literature.\n\n\nMETHODS\nThe reviewers conducted three separate searches through the CINAHL and PubMed (MEDLINE) and the Nursing and Allied Health Source via ProQuest databases. Using key words with Boolean operators, database filters, and hand screening, we identified 31 articles that met the objective of the review.\n\n\nRESULTS\nThe analysis of 31 articles showed the healthcare industry lags behind in security. Like other industries, healthcare should clearly define cybersecurity duties, establish clear procedures for upgrading software and handling a data breach, use VLANs and deauthentication and cloud-based computing, and to train their users not to open suspicious code.\n\n\nCONCLUSIONS\nThe healthcare industry is a prime target for medical information theft as it lags behind other leading industries in securing vital data. It is imperative that time and funding is invested in maintaining and ensuring the protection of healthcare technology and the confidentially of patient information from unauthorized access.",
"title": ""
},
{
"docid": "5178bd7051b15f9178f9d19e916fdc85",
"text": "With more and more functions in modern battery-powered mobile devices, enabling light-harvesting in the power management system can extend battery usage time [1]. For both indoor and outdoor operations of mobile devices, the output power range of the solar panel with the size of a touchscreen can vary from 100s of µW to a Watt due to the irradiance-level variation. An energy harvester is thus essential to achieve high maximum power-point tracking efficiency (ηT) over this wide power range. However, state-of-the-art energy harvesters only use one maximum power-point tracking (MPPT) method under different irradiance levels as shown in Fig. 22.5.1 [2–5]. Those energy harvesters with power-computation-based MPPT schemes for portable [2,3] and standalone [4] systems suffer from low ηT under low input power due to the limited input dynamic range of the MPPT circuitry. Other low-power energy harvesters with the fractional open-cell voltage (FOCV) MPPT scheme are confined by the fractional-constant accuracy to only offer high ηT across a narrow power range [5]. Additionally, the conventional FOCV MPPT scheme requires long transient time of 250ms to identify MPP [5], thereby significantly reducing energy capture from the solar panel. To address the above issues, this paper presents an energy harvester with an irradiance-aware hybrid algorithm (IAHA) to automatically switch between an auto-zeroed pulse-integration based MPPT (AZ PI-MPPT) and a slew-rate-enhanced FOCV (SRE-FOCV) MPPT scheme for maximizing ηT under different irradiance levels. The SRE-FOCV MPPT scheme also enables the energy harvester to shorten the MPPT transient time to 2.9ms in low irradiance levels.",
"title": ""
},
{
"docid": "d49d405fc765b647b39dc9ef1b4d6ba9",
"text": "The World Wide Web plays an important role while searching for information in the data network. Users are constantly exposed to an ever-growing flood of information. Our approach will help in searching for the exact user relevant content from multiple search engines thus, making the search more efficient and reliable. Our framework will extract the relevant result records based on two approaches i.e. Stored URL list and Run time Generated URL list. Finally, the unique set of records is displayed in a common framework's search result page. The extraction is performed using the concepts of Document Object Model (DOM) tree. The paper comprises of a concept of threshold and data filters to detect and remove irrelevant & redundant data from the web page. The data filters will also be used to further improve the similarity check of data records. Our system will be able to extract 75%-80% user relevant content by eliminating noisy content from the different structured web pages like blogs, forums, articles etc. in the dynamic environment. Our approach shows significant advantages in both precision and recall.",
"title": ""
},
{
"docid": "52da42b320e23e069519c228f1bdd8b5",
"text": "Over the last few years, C-RAN is proposed as a transformative architecture for 5G cellular networks that brings the flexibility and agility of cloud computing to wireless communications. At the same time, content caching in wireless networks has become an essential solution to lower the content- access latency and backhaul traffic loading, leading to user QoE improvement and network cost reduction. In this article, a novel cooperative hierarchical caching (CHC) framework in C-RAN is introduced where contents are jointly cached at the BBU and at the RRHs. Unlike in traditional approaches, the cache at the BBU, cloud cache, presents a new layer in the cache hierarchy, bridging the latency/capacity gap between the traditional edge-based and core-based caching schemes. Trace-driven simulations reveal that CHC yields up to 51 percent improvement in cache hit ratio, 11 percent decrease in average content access latency, and 18 percent reduction in backhaul traffic load compared to the edge-only caching scheme with the same total cache capacity. Before closing the article, we discuss the key challenges and promising opportunities for deploying content caching in C-RAN in order to make it an enabler technology in 5G ultra-dense systems.",
"title": ""
},
{
"docid": "af0039c3d24593474148e5f07446efc0",
"text": "The Internet of Things is arriving to our homes or cities through fields already known like Smart Homes, Smart Cities, or Smart Towns. The monitoring of environmental conditions of cities can help to adapt the indoor locations of the cities in order to be more comfortable for people who stay there. A way to improve the indoor conditions is an efficient temperature control, however, it depends on many factors like the different combinations of outdoor temperature and humidity. Therefore, adjusting the indoor temperature is not setting a value according to other value. There are many more factors to take into consideration, hence the traditional logic based in binary states cannot be used. Many problems cannot be solved with a set of binary solutions and we need a new way of development. Fuzzy logic is able to interpret many states, more than two states, giving to computers the capacity to react in a similar way to people. In this paper we will propose a new approach to control the temperature using the Internet of Things together its platforms and fuzzy logic regarding not only the indoor temperature but also the outdoor temperature and humidity in order to save energy and to set a more comfortable environment for their users. Finally, ∗Corresponding author Email addresses: danielmeanallorian@gmail.com (Daniel Meana-Llorián), gonzalezgarciacristian@hotmail.com (Cristian González Garćıa), crispelayo@uniovi.es (B. Cristina Pelayo G-Bustelo), cueva@uniovi.es (Juan Manuel Cueva Lovelle), nestor@uniovi.es (Nestor Garcia-Fernandez)",
"title": ""
},
{
"docid": "789e6e72391da4aaed371b2ee52641d4",
"text": "A compelling robotics course begins with a compelling robot. We introduce a new low-cost aerial educational platform, the PiDrone, along with an associated college-level introductory robotics course. In a series of projects, students incrementally build, program, and test their own drones to create an autonomous aircraft capable of using a downward facing RGB camera and infrared distance sensor to visually localize and maintain position. The PiDrone runs Python and the Robotics Operating System (ROS) framework on an onboard Raspberry Pi, providing an accessible and inexpensive platform for introducing students to robotics. Students can use any web and SSH capable computer as a base station and programming platform. The projects and supplementary homeworks introduce PID control, state estimation, and high-level planning, giving students the opportunity to exercise their new skills in an exciting long-term project.",
"title": ""
},
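The PiDrone passage above mentions PID control as one of the techniques students exercise for position and altitude hold. Below is a minimal, generic PID controller sketch of the kind such a project might use; the gains, limits, sensor reading, and update rate are made-up illustrative values, not taken from the PiDrone materials.

```python
# Minimal PID controller sketch (illustrative values and names are assumptions).
class PID:
    """output = Kp*error + Ki*integral(error) + Kd*d(error)/dt"""

    def __init__(self, kp, ki, kd, output_limit=None):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.output_limit = output_limit
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measurement, dt):
        error = setpoint - measurement
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        output = self.kp * error + self.ki * self.integral + self.kd * derivative
        if self.output_limit is not None:
            output = max(-self.output_limit, min(self.output_limit, output))
        return output

# Hypothetical usage: hold a 0.5 m altitude from a downward range reading at 50 Hz.
altitude_pid = PID(kp=1.2, ki=0.1, kd=0.4, output_limit=1.0)
thrust_correction = altitude_pid.update(setpoint=0.5, measurement=0.43, dt=0.02)
print(thrust_correction)
```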
{
"docid": "569ae662a71c3484e7c53e6cf8dda50d",
"text": "Node mobility and end-to-end disconnections in Delay Tolerant Networks (DTNs) greatly impair the effectiveness of data dissemination. Although social-based approaches can be used to address the problem, most existing solutions only focus on forwarding data to a single destination. In this paper, we are the first to study multicast in DTNs from the social network perspective. We study multicast in DTNs with single and multiple data items, investigate the essential difference between multicast and unicast in DTNs, and formulate relay selections for multicast as a unified knapsack problem by exploiting node centrality and social community structures. Extensive trace-driven simulations show that our approach has similar delivery ratio and delay to the Epidemic routing, but can significantly reduce the data forwarding cost measured by the number of relays used.",
"title": ""
},
{
"docid": "f2fc6440b95c9ed93f5925672798ae2d",
"text": "This paper presents a standalone 5.6 nV/√Hz chopper op-amp that operates from a 2.1-5.5 V supply. Frequency compensation is achieved in a power-and area-efficient manner by using a current attenuator and a dummy differential output. As a result, the overall op-amp only consumes 1.4 mA supply current and 1.26 mm2 die area. Up-modulated chopper ripple is suppressed by a local feedback technique, called auto correction feedback (ACFB). The charge injection of the input chopping switches can cause residual offset voltages, especially with the wider switches needed to reduce thermal noise. By employing an adaptive clock boosting technique with NMOS input switches, the amount of charge injection is minimized and kept constant as the input common-mode voltage changes. This results in a 0.5 μV maximum offset and 0.015 μV/°C maximum drift over the amplifier's entire rail-to-rail input common-mode range and from -40 °C to 125 °C. The design is implemented in a 0.35 μm CMOS process augmented by 5 V CMOS transistors.",
"title": ""
},
{
"docid": "fba1a1296d8f3e22248e45cbe33263b5",
"text": "Wi-Fi has become the de facto wireless technology for achieving short- to medium-range device connectivity. While early attempts to secure this technology have been proved inadequate in several respects, the current more robust security amendments will inevitably get outperformed in the future, too. In any case, several security vulnerabilities have been spotted in virtually any version of the protocol rendering the integration of external protection mechanisms a necessity. In this context, the contribution of this paper is multifold. First, it gathers, categorizes, thoroughly evaluates the most popular attacks on 802.11 and analyzes their signatures. Second, it offers a publicly available dataset containing a rich blend of normal and attack traffic against 802.11 networks. A quite extensive first-hand evaluation of this dataset using several machine learning algorithms and data features is also provided. Given that to the best of our knowledge the literature lacks such a rich and well-tailored dataset, it is anticipated that the results of the work at hand will offer a solid basis for intrusion detection in the current as well as next-generation wireless networks.",
"title": ""
},
{
"docid": "b39b0b07e6195ae47295e38aea9d6dad",
"text": "Simulation theories of social cognition abound in the literature, but it is often unclear what simulation means and how it works. The discovery of mirror neurons, responding both to action execution and observation, suggested an embodied approach to mental simulation. Over the past few years this approach has been hotly debated and alternative accounts have been proposed. We discuss these accounts and argue that they fail to capture the uniqueness of embodied simulation (ES). ES theory provides a unitary account of basic social cognition, demonstrating that people reuse their own mental states or processes represented with a bodily format in functionally attributing them to others.",
"title": ""
},
{
"docid": "88f7c90be37cc4cb863fccbaf3a3a9e0",
"text": "A tensegrity is finite configuration of points in Ed suspended rigidly by inextendable cables and incompressable struts. Here it is explained how a stress-energy function, given by a symmetric stress matrix, can be used to create tensegrities that are globally rigid in the sense that the only configurations that satisfy the cable and strut constraints are congruent copies.",
"title": ""
},
{
"docid": "2ef2e4f2d001ab9221b3d513627bcd0b",
"text": "Semantic segmentation is in-demand in satellite imagery processing. Because of the complex environment, automatic categorization and segmentation of land cover is a challenging problem. Solving it can help to overcome many obstacles in urban planning, environmental engineering or natural landscape monitoring. In this paper, we propose an approach for automatic multi-class land segmentation based on a fully convolutional neural network of feature pyramid network (FPN) family. This network is consisted of pre-trained on ImageNet Resnet50 encoder and neatly developed decoder. Based on validation results, leaderboard score and our own experience this network shows reliable results for the DEEPGLOBE - CVPR 2018 land cover classification sub-challenge. Moreover, this network moderately uses memory that allows using GTX 1080 or 1080 TI video cards to perform whole training and makes pretty fast predictions.",
"title": ""
},
{
"docid": "1d4a116465d9c50f085b18d526119a90",
"text": "In this paper, we investigate the efficiency of FPGA implementations of AES and AES-like ciphers, specially in the context of authenticated encryption. We consider the encryption/decryption and the authentication/verification structures of OCB-like modes (like OTR or SCT modes). Their main advantage is that they are fully parallelisable. While this feature has already been used to increase the throughput/performance of hardware implementations, it is usually overlooked while comparing different ciphers. We show how to use it with zero area overhead, leading to a very significant efficiency gain. Additionally, we show that using FPGA technology mapping instead of logic optimization, the area of both the linear and non linear parts of the round function of several AES-like primitives can be reduced, without affecting the run-time performance. We provide the implementation results of two multi-stream implementations of both the LED and AES block ciphers. The AES implementation in this paper achieves an efficiency of 38 Mbps/slice, which is the most efficient implementation in literature, to the best of our knowledge. For LED, achieves 2.5 Mbps/slice on Spartan 3 FPGA, which is 2.57x better than the previous implementation. Besides, we use our new techniques to optimize the FPGA implementation of the CAESAR candidate Deoxys-I in both the encryption only and encryption/decryption settings. Finally, we show that the efficiency gains of the proposed techniques extend to other technologies, such as ASIC, as well.",
"title": ""
},
{
"docid": "9ce28606d78d64d970d73048c3bf2cc5",
"text": "Automated sports commentary is a form of automated narrative. Sports commentary exists to keep the viewer informed and entertained. One way to entertain the viewer is by telling brief stories relevant to the game in progress. We present a system called the sports commentary recommendation system (SCoReS) that can automatically suggest stories for commentators to tell during games. Through several user studies, we compared commentary using SCoReS to three other types of commentary and show that SCoReS adds significantly to the broadcast across several enjoyment metrics. We also collected interview data from professional sports commentators who positively evaluated a demonstration of the system. We conclude that SCoReS can be a useful broadcast tool, effective at selecting stories that add to the enjoyment and watchability of sports. SCoReS is a step toward automating sports commentary and, thus, automating narrative.",
"title": ""
},
{
"docid": "84d39e615b8b674cee53741f87a733da",
"text": "Cyber Bullying, which often has a deeply negative impact on the victim, has grown as a serious issue among adolescents. To understand the phenomenon of cyber bullying, experts in social science have focused on personality, social relationships and psychological factors involving both the bully and the victim. Recently computer science researchers have also come up with automated methods to identify cyber bullying messages by identifying bullying-related keywords in cyber conversations. However, the accuracy of these textual feature based methods remains limited. In this work, we investigate whether analyzing social network features can improve the accuracy of cyber bullying detection. By analyzing the social network structure between users and deriving features such as number of friends, network embeddedness, and relationship centrality, we find that the detection of cyber bullying can be significantly improved by integrating the textual features with social network features.",
"title": ""
}
] |
scidocsrr
|
f3dcb0ae6ea829f54dc1b064a0d2431c
|
Aalborg Universitet Detection of U.S. Traffic
|
[
{
"docid": "cdf2235bea299131929700406792452c",
"text": "Real-time detection of traffic signs, the task of pinpointing a traffic sign's location in natural images, is a challenging computer vision task of high industrial relevance. Various algorithms have been proposed, and advanced driver assistance systems supporting detection and recognition of traffic signs have reached the market. Despite the many competing approaches, there is no clear consensus on what the state-of-the-art in this field is. This can be accounted to the lack of comprehensive, unbiased comparisons of those methods. We aim at closing this gap by the “German Traffic Sign Detection Benchmark” presented as a competition at IJCNN 2013 (International Joint Conference on Neural Networks). We introduce a real-world benchmark data set for traffic sign detection together with carefully chosen evaluation metrics, baseline results, and a web-interface for comparing approaches. In our evaluation, we separate sign detection from classification, but still measure the performance on relevant categories of signs to allow for benchmarking specialized solutions. The considered baseline algorithms represent some of the most popular detection approaches such as the Viola-Jones detector based on Haar features and a linear classifier relying on HOG descriptors. Further, a recently proposed problem-specific algorithm exploiting shape and color in a model-based Houghlike voting scheme is evaluated. Finally, we present the best-performing algorithms of the IJCNN competition.",
"title": ""
}
] |
[
{
"docid": "eb8321467458401aa86398390c32ae00",
"text": "As the wide popularization of online social networks, online users are not content only with keeping online friendship with social friends in real life any more. They hope the system designers can help them exploring new friends with common interest. However, the large amount of online users and their diverse and dynamic interests possess great challenges to support such a novel feature in online social networks. In this paper, by leveraging interest-based features, we design a general friend recommendation framework, which can characterize user interest in two dimensions: context (location, time) and content, as well as combining domain knowledge to improve recommending quality. We also design a potential friend recommender system in a real online social network of biology field to show the effectiveness of our proposed framework.",
"title": ""
},
{
"docid": "5add362bec606515136b0842f885f5bf",
"text": "We argue that the core problem facing peer-to-peer systems is locating documents in a decentralized network and propose Chord, a distributed lookup primitive. Chord provides an efficient method of locating documents while placing few constraints on the applications that use it. As proof that Chord’s functionality is useful in the development of peer-to-peer applications, we outline the implementation of a peer-to-peer file sharing system based on Chord.",
"title": ""
},
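The Chord passage above describes locating documents through a decentralized lookup primitive. As a rough illustration of the underlying ring idea, the toy sketch below hashes node names and keys onto one identifier circle and assigns each key to its successor node; the identifier size, node names, and the omission of finger tables and node joins are simplifications for illustration only, not the protocol as specified.

```python
# Toy successor-lookup sketch in the spirit of Chord (simplified, illustrative only).
import hashlib
from bisect import bisect_left

M = 16  # bits of the identifier circle (2**M positions), kept small for illustration

def ring_id(name: str) -> int:
    """Hash a node name or key onto the identifier circle."""
    return int(hashlib.sha1(name.encode("utf-8")).hexdigest(), 16) % (2 ** M)

class ToyChordRing:
    """A key is stored on the first node whose identifier is >= the key id, wrapping around."""

    def __init__(self, node_names):
        self.nodes = sorted((ring_id(n), n) for n in node_names)

    def successor(self, key: str) -> str:
        key_id = ring_id(key)
        ids = [node_id for node_id, _ in self.nodes]
        i = bisect_left(ids, key_id) % len(self.nodes)  # wrap past the largest id
        return self.nodes[i][1]

ring = ToyChordRing(["node-a", "node-b", "node-c", "node-d"])
print(ring.successor("some-shared-file.mp3"))
```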
{
"docid": "df4d0112eecfcc5c6c57784d1a0d010d",
"text": "2 The design and measured results are reported on three prototype DC-DC converters which successfully demonstrate the design techniques of this thesis and the low-power enabling capabilities of DC-DC converters in portable applications. Voltage scaling for low-power throughput-constrained digital signal processing is reviewed and is shown to provide up to an order of magnitude power reduction compared to existing 3.3 V standards when enabled by high-efficiency low-voltage DC-DC conversion. A new ultra-low-swing I/O strategy, enabled by an ultra-low-voltage and low-power DCDC converter, is used to reduce the power of high-speed inter-chip communication by greater than two orders of magnitude. Dynamic voltage scaling is proposed to dynamically trade general-purpose processor throughput for energy-efficiency, yielding up to an order of magnitude improvement in the average energy per operation of the processor. This is made possible by a new class of voltage converter, called the dynamic DC-DC converter, whose primary performance objectives and design considerations are introduced in this thesis. Robert W. Brodersen, Chairman of Committee Table of",
"title": ""
},
{
"docid": "7ef20dc3eb5ec7aee75f41174c9fae12",
"text": "As the data and ontology layers of the Semantic Web stack have achieved a certain level of maturity in standard recommendations such as RDF and OWL, the current focus lies on two related aspects. On the one hand, the definition of a suitable query language for RDF, SPARQL, is close to recommendation status within the W3C. The establishment of the rules layer on top of the existing stack on the other hand marks the next step to be taken, where languages with their roots in Logic Programming and Deductive Databases are receiving considerable attention. The purpose of this paper is threefold. First, we discuss the formal semantics of SPARQLextending recent results in several ways. Second, weprovide translations from SPARQL to Datalog with negation as failure. Third, we propose some useful and easy to implement extensions of SPARQL, based on this translation. As it turns out, the combination serves for direct implementations of SPARQL on top of existing rules engines as well as a basis for more general rules and query languages on top of RDF.",
"title": ""
},
{
"docid": "8b5ca0f4b12aa5d07619078d44dbb337",
"text": "Crimeware-as-a-service (CaaS) has become a prominent component of the underground economy. CaaS provides a new dimension to cyber crime by making it more organized, automated, and accessible to criminals with limited technical skills. This paper dissects CaaS and explains the essence of the underground economy that has grown around it. The paper also describes the various crimeware services that are provided in the underground",
"title": ""
},
{
"docid": "dc43d3324e3fc67ca2f45fae76fcee3e",
"text": "We characterize the solution to the consumption and investment problem of a power utility investor in a continuous-time dynamically complete market with stochastic changes in the opportunity set. Under stochastic interest rates the investor optimally hedges against changes in the term structure of interest rates by investing in a coupon bond, or portfolio of bonds, with a payment schedule that equals the forward-expected (i.e. certainty equivalent) consumption pattern. Numerical experiments with two different specifications of the term structure dynamics (the Vasicek model and a three-factor non-Markovian Heath–Jarrow–Morton model) suggest that the hedge portfolio is more sensitive to the form of the term structure than to the dynamics of interest rates. 2003 Elsevier B.V. All rights reserved. JEL classification: G11",
"title": ""
},
{
"docid": "e07cb04e3000607d4a3f99d47f72a906",
"text": "As part of the NSF-funded Dark Web research project, this paper presents an exploratory study of cyber extremism on the Web 2.0 media: blogs, YouTube, and Second Life. We examine international Jihadist extremist groups that use each of these media. We observe that these new, interactive, multimedia-rich forms of communication provide effective means for extremists to promote their ideas, share resources, and communicate among each other. The development of automated collection and analysis tools for Web 2.0 can help policy makers, intelligence analysts, and researchers to better understand extremistspsila ideas and communication patterns, which may lead to strategies that can counter the threats posed by extremists in the second-generation Web.",
"title": ""
},
{
"docid": "3d401d8d3e6968d847795ccff4646b43",
"text": "In spite of growing frequency and sophistication of attacks two factor authentication schemes have seen very limited adoption in the US, and passwords remain the single factor of authentication for most bank and brokerage accounts. Clearly the cost benefit analysis is not as strongly in favor of two factor as we might imagine. Upgrading from passwords to a two factor authentication system usually involves a large engineering effort, a discontinuity of user experience and a hard key management problem. In this paper we describe a system to convert a legacy password authentication server into a two factor system. The existing password system is untouched, but is cascaded with a new server that verifies possession of a smartphone device. No alteration, patching or updates to the legacy system is necessary. There are now two alternative authentication paths: one using passwords alone, and a second using passwords and possession of the trusted device. The bank can leave the password authentication path available while users migrate to the two factor scheme. Once migration is complete the password-only path can be severed. We have implemented the system and carried out two factor authentication against real accounts at several major banks.",
"title": ""
},
{
"docid": "b3214224f699aaabab3c9336d1b88705",
"text": "This work is concerned with the field of static program analysis —in particular with analyses aimed to guarantee certain security properties of programs, like confidentiality and integrity. Our approach uses socalled dependence graphs to capture the program behavior as well as the information flow between the individual program points. Using this technique, we can guarantee for example that a program does not reveal any information about a secret password. In particular we focus on techniques that improve the dependence graph computation —the basis for many advanced security analyses. We incorporated the presented algorithms and improvements into our analysis tool Joana and published its source code as open source. Several collaborations with other researchers and publications using Joana demonstrate the relevance of these improvements for practical research. This work consists essentially of three parts. Part 1 deals with improvements in the computation of the dependence graph, Part 2 introduces a new approach to the analysis of incomplete programs and Part 3 shows current use cases of Joana on concrete examples. In the first part we describe the algorithms used to compute a dependence graph, with special attention to the problems and challenges that arise when analyzing object-oriented languages such as Java. For example we present an analysis that improves the precision of detected control flow by incorporating the effects of exceptions. The main improvement concerns the way side effects —caused by communication over methods boundaries— are modelled. Dependence graphs capture side effects —memory locations read or changed by a method— in the form of additional nodes called parameter nodes. We show that the structure and computation of these nodes have a huge impact on both the precision and scalability of the entire analysis. The so-called parameter model describes the algorithms used to compute these nodes. We explain the weakness of the old parameter model based on object-trees and present our improvements in form of a new model using object-graphs. The new graph structure merges redundant information of multiple nodes into a single node and thus reduces the number of overall parameter nodes",
"title": ""
},
{
"docid": "b6c1aa9e3b55b6ad7bd01f8b1c017e7b",
"text": "In the last decade, with availability of large datasets and more computing power, machine learning systems have achieved (super)human performance in a wide variety of tasks. Examples of this rapid development can be seen in image recognition, speech analysis, strategic game planning and many more. The problem with many state-of-the-art models is a lack of transparency and interpretability. The lack of thereof is a major drawback in many applications, e.g. healthcare and finance, where rationale for model's decision is a requirement for trust. In the light of these issues, explainable artificial intelligence (XAI) has become an area of interest in research community. This paper summarizes recent developments in XAI in supervised learning, starts a discussion on its connection with artificial general intelligence, and gives proposals for further research directions.",
"title": ""
},
{
"docid": "db3f317940f308407d217bbedf14aaf0",
"text": "Imagine your daily activities. Perhaps you will be at home today, relaxing and completing chores. Maybe you are a scientist, and plan to conduct a long series of experiments in a laboratory. You might work in an office building: you walk about your floor, greeting others, getting coffee, preparing documents, etc. There are many activities you perform regularly in large environments. If a system understood your intentions it could help you achieve your goals, or automate aspects of your environment. More generally, an understanding of human intentions would benefit, and is perhaps prerequisite for, AI systems that assist and augment human capabilities. We present a framework that continuously forecasts long-term spatial and semantic intentions (what you will do and where you will go) of a first-person camera wearer. We term our algorithm “Demonstrating Agent Rewards for K-futures Online” (DARKO). We use a first-person camera to meet the challenge of observing the wearer’s behavior everywhere. In Figure 1, DARKO forecasts multiple quantities: (1) the user intends to go to the shower (out of all possible destinations in their house), (2) their trajectory through Figure 1: Forecasting future behavior from first-person video. The overhead map shows where the person is likely to go, predicted from the first frame. Each s",
"title": ""
},
{
"docid": "e42d4340522622430a9018b900b03afc",
"text": "Kernel-based methods are widely used for relation extraction task and obtain good results by leveraging lexical and syntactic information. However, in biomedical domain these methods are limited by the size of dataset and have difficulty in coping with variations in text. To address this problem, we propose Extended Dependency Graph (EDG) by incorporating a few simple linguistic ideas and include information beyond syntax. We believe the use of EDG will enable machine learning methods to generalize more easily. Experiments confirm that EDG provides up to 10% f-value improvement over dependency graph using mainstream kernel methods over five corpora. We conducted additional experiments to provide a more detailed analysis of the contributions of individual modules in EDG construction.",
"title": ""
},
{
"docid": "40c88fe58f655c20844baadaa310abaa",
"text": "Pleated pneumatic artificial muscles (PPAMs), which have recently been developed at the Vrije Universiteit Brussel, Department of Mechanical Engineering are brought forward as robotic actuators in this paper. Their distinguishing feature is their pleated design, as a result of which their contraction forces and maximum displacement are very high compared to other pneumatic artificial muscles. The PPAM design, operation and characteristics are presented. To show how well they are suited for robotics, a rotative joint actuator, made of two antagonistically coupled PPAMs, is discussed. It has several properties that are similar to those of skeletal joint actuators. Positioning tasks are seen to be performed very accurately using simple PI control. Furthermore, the antagonistic actuator can easily be made to have a soft or careful touch, contributing greatly to a safe robot operation. In view of all characteristics PPAMs are very well suited for automation and robotic applications.",
"title": ""
},
{
"docid": "8aa89148182a413b303994b6b49f7402",
"text": "airport operations. In this paper, we study the Airport Gate Assignment Problem (AGAP), propose a new model and implement the model with Optimization Programming language (OPL). With the objective to minimize the number of conflicts of any two adjacent aircrafts assigned to the same gate, we build a mathematical model with logical constraints and the binary constraints, which can provide an efficient evaluation criterion for the Airlines to estimate the current gate assignment. To illustrate the feasibility of the model we construct experiments with the data obtained from Continental Airlines, Houston Gorge Bush Intercontinental Airport IAH, which indicate that our model is both energetic and effective. Moreover, we interpret experimental results, which further demonstrate that our proposed model can provide a powerful tool for airline companies to estimate the efficiency of their current work of gate assignment.",
"title": ""
},
{
"docid": "81bfa507b8cd849f30c410ba96b0034e",
"text": "Augmented reality (AR) makes it possible to create games in which virtual objects are overlaid on the real world, and real objects are tracked and used to control virtual ones. We describe the development of an AR racing game created by modifying an existing racing game, using an AR infrastructure that we developed for use with the XNA game development platform. In our game, the driver wears a tracked video see-through head-worn display, and controls the car with a passive tangible controller. Other players can participate by manipulating waypoints that the car must pass and obstacles with which the car can collide. We discuss our AR infrastructure, which supports the creation of AR applications and games in a managed code environment, the user interface we developed for the AR racing game, the game's software and hardware architecture, and feedback and observations from early demonstrations.",
"title": ""
},
{
"docid": "f447d9aadcaa4fb56f951838f84eb6af",
"text": "A systematic method for developing isolated buck-boost (IBB) converters is proposed in this paper, and single-stage power conversion, soft-switching operation, and high-efficiency performance can be achieved with the proposed family of converters. On the basis of a nonisolated two-switch buck-boost converter, the proposed IBB converters are generated by replacing the dc buck-cell and boost-cell in the non-IBB converter with the ac buck-cell and boost-cell, respectively. Furthermore, a family of semiactive rectifiers (SARs) is proposed to serve as the secondary rectification circuit for the IBB converters, which helps to extend the converter voltage gain and reduce the voltage stresses on the devices in the rectification circuit. Hence, the efficiency is improved by employing a transformer with a smaller turns ratio and reduced parasitic parameters, by using low-voltage rating MOSFETs and diodes with better switching and conduction performances. A full-bridge IBB converter is proposed and analyzed in detail as an example. The phase-shift modulation strategy is applied to the full-bridge IBB converter to achieve IBB conversion. Moreover, soft-switching performance of all active switches and diodes can be achieved over a wide load and voltage range by the proposed converter and control strategy. A 380-V-output prototype is fabricated to verify the effectiveness of the proposed family of IBB converters, the SARs, and the control strategies.",
"title": ""
},
{
"docid": "5fcda05ef200cd326ecb9c2412cf50b3",
"text": "OBJECTIVE\nPalpable lymph nodes are common due to the reactive hyperplasia of lymphatic tissue mainly connected with local inflammatory process. Differential diagnosis of persistent nodular change on the neck is different in children, due to higher incidence of congenital abnormalities and infectious diseases and relative rarity of malignancies in that age group. The aim of our study was to analyse the most common causes of childhood cervical lymphadenopathy and determine of management guidelines on the basis of clinical examination and ultrasonographic evaluation.\n\n\nMATERIAL AND METHODS\nThe research covered 87 children with cervical lymphadenopathy. Age, gender and accompanying diseases of the patients were assessed. All the patients were diagnosed radiologically on the basis of ultrasonographic evaluation.\n\n\nRESULTS\nReactive inflammatory changes of bacterial origin were observed in 50 children (57.5%). Fever was the most common general symptom accompanying lymphadenopathy and was observed in 21 cases (24.1%). The ultrasonographic evaluation revealed oval-shaped lymph nodes with the domination of long axis in 78 patients (89.66%). The proper width of hilus and their proper vascularization were observed in 75 children (86.2%). Some additional clinical and laboratory tests were needed in the patients with abnormal sonographic image.\n\n\nCONCLUSIONS\nUltrasonographic imaging is extremely helpful in diagnostics, differentiation and following the treatment of childhood lymphadenopathy. Failure of regression after 4-6 weeks might be an indication for a diagnostic biopsy.",
"title": ""
},
{
"docid": "c240da3cde126606771de3e6b3432962",
"text": "Oscillations in the alpha and beta bands can display either an event-related blocking response or an event-related amplitude enhancement. The former is named event-related desynchronization (ERD) and the latter event-related synchronization (ERS). Examples of ERS are localized alpha enhancements in the awake state as well as sigma spindles in sleep and alpha or beta bursts in the comatose state. It was found that alpha band activity can be enhanced over the visual region during a motor task, or during a visual task over the sensorimotor region. This means ERD and ERS can be observed at nearly the same time; both form a spatiotemporal pattern, in which the localization of ERD characterizes cortical areas involved in task-relevant processing, and ERS marks cortical areas at rest or in an idling state.",
"title": ""
},
{
"docid": "7b0945c77fbe5b207ff02fce811e98e6",
"text": "T he map has evolved over the past few centuries as humanity’s primary method for storing and communicating knowledge of the Earth’s surface. Topographic maps portray the general form of the surface and its primary physical and cultural features; thematic maps show the variation of more specialized properties such as soil type or population density; and bathymetric maps and hydrographic charts show the characteristics of the sea floor. Maps serve as one of the most important repositories of both the raw data and the results of geographic inquiry, and mapmaking has always figured prominently in the skill set of geographers or their supporting staff. Maps are thus important and indispensable tools in the geographer’s search for understanding of how human and physical processes act and interact on the Earth’s surface: of how the world works. Geographic information systems (GIS) were devised in the 1960s as computer applications for handling large volumes of information obtained from maps and for performing operations that would be too tedious, expensive, or inaccurate to perform by hand. The Canada Geographic Information System, widely recognized as the first GIS, was built for the purpose of making vast numbers of calculations of area, reporting the results in tables. Over time, the range of functions performed by GIS has grown exponentially, and today it is reasonable to think of a GIS as able to perform virtually any conceivable operation on data obtained from maps (Longley et al. 2001). Geographers have adopted GIS enthusiastically, seeing it as a powerful device for storing, analyzing, and visualizing map information and thus as a much more effective substitute for the paper map (Goodchild 1988). Over the past decade numerous journals, conferences, academic positions, and programs have adopted titles that combine information with spatial or geographic and with science or theory. In what follows I will use the term geographic information science (GIScience) for simplicity and not enquire into the subtle differences between, for example, spatial and geographic information theory (Goodchild 2001). Geographers have been associated with many of these changes—and, in many cases, have been at the forefront—and many of the new programs and positions are found in departments of geography. But there has been relatively little general commentary on these trends, or on what they might mean for the discipline of geography as a whole. The first centennial of the Association of American Geographers is an appropriate occasion to reflect on the nature of GIScience and its relationship, if any, to the discipline of geography. I begin with a discussion of the nature of GIScience, of its relationship to GIS and of its links to the traditional sciences of geographic information. This leads to a discussion of whether GIScience is a natural science, concerned with discovering empirical principles and law-like statements about the world; or whether it is a design science, concerned with identifying practical principles for achieving human ends, or both. In the third major section I examine how GIScience is positioned with respect to the historic tension in geography between form and process and whether the growth of interest in GIScience has tended to favor form over process. The final section examines a future for GIScience that places greater emphasis on process and discusses the steps that will be needed to make such a future possible.",
"title": ""
},
{
"docid": "1dab3a23e6d9c6992c385d3ec95dc0e2",
"text": "The need for automatic methods of topic discovery in the Internet grows exponentially with the amount of available textual information. Nowadays it becomes impossible to manually read even a small part of the information in order to reveal the underlying topics. Social media provide us with a great pool of user generated content, where topic discovery may be extremely useful for businesses, politicians, researchers, and other stakeholders. However, conventional topic discovery methods, which are widely used in large text corpora, face several challenges when they are applied in social media and particularly in Twitter – the most popular microblogging platform. To the best of our knowledge no comprehensive overview of these challenges and of the methods dedicated to address these challenges does exist in IS literature until now. Therefore, this paper provides an overview of these challenges, matching methods and their expected usefulness for social media analytics.",
"title": ""
}
] |
scidocsrr
|
914589f0f58dbcd725815afc36198781
|
Pose-robust face signature for multi-view face recognition
|
[
{
"docid": "dfacd79df58a78433672f06fdb10e5a2",
"text": "“Frontalization” is the process of synthesizing frontal facing views of faces appearing in single unconstrained photos. Recent reports have suggested that this process may substantially boost the performance of face recognition systems. This, by transforming the challenging problem of recognizing faces viewed from unconstrained viewpoints to the easier problem of recognizing faces in constrained, forward facing poses. Previous frontalization methods did this by attempting to approximate 3D facial shapes for each query image. We observe that 3D face shape estimation from unconstrained photos may be a harder problem than frontalization and can potentially introduce facial misalignments. Instead, we explore the simpler approach of using a single, unmodified, 3D surface as an approximation to the shape of all input faces. We show that this leads to a straightforward, efficient and easy to implement method for frontalization. More importantly, it produces aesthetic new frontal views and is surprisingly effective when used for face recognition and gender estimation.",
"title": ""
},
{
"docid": "3e6010f951eba0c82e8678f7d076162c",
"text": "In this paper, we present the computational tools and a hardware prototype for 3D face recognition. Full automation is provided through the use of advanced multistage alignment algorithms, resilience to facial expressions by employing a deformable model framework, and invariance to 3D capture devices through suitable preprocessing steps. In addition, scalability in both time and space is achieved by converting 3D facial scans into compact metadata. We present our results on the largest known, and now publicly available, face recognition grand challenge 3D facial database consisting of several thousand scans. To the best of our knowledge, this is the highest performance reported on the FRGC v2 database for the 3D modality",
"title": ""
},
{
"docid": "b2d8c0397151ca043ffb5cef8046d2af",
"text": "This paper describes the large-scale experimental results from the Face Recognition Vendor Test (FRVT) 2006 and the Iris Challenge Evaluation (ICE) 2006. The FRVT 2006 looked at recognition from high-resolution still frontal face images and 3D face images, and measured performance for still frontal face images taken under controlled and uncontrolled illumination. The ICE 2006 evaluation reported verification performance for both left and right irises. The images in the ICE 2006 intentionally represent a broader range of quality than the ICE 2006 sensor would normally acquire. This includes images that did not pass the quality control software embedded in the sensor. The FRVT 2006 results from controlled still and 3D images document at least an order-of-magnitude improvement in recognition performance over the FRVT 2002. The FRVT 2006 and the ICE 2006 compared recognition performance from high-resolution still frontal face images, 3D face images, and the single-iris images. On the FRVT 2006 and the ICE 2006 data sets, recognition performance was comparable for high-resolution frontal face, 3D face, and the iris images. In an experiment comparing human and algorithms on matching face identity across changes in illumination on frontal face images, the best performing algorithms were more accurate than humans on unfamiliar faces.",
"title": ""
}
] |
[
{
"docid": "dadd9fec98c7dbc05d4e898d282e78fa",
"text": "Managing unanticipated changes in turbulent and dynamic market environments requires organizations to reach an extended level of flexibility, which is known as agility. Agility can be defined as ability to sense environmental changes and to readily respond to those. While information systems are alleged to have a major influence on organizational agility, service-oriented architecture (SOA) poses an opportunity to shape agile information systems and ultimately organizational agility. However, related research studies predominantly comprise theoretical claims only. Seeking a detailed picture and in-depth insights, we conduct a qualitative exploratory case study. The objective of our research-in-progress is therefore to provide first-hand empirical data to contribute insights into SOA’s influence on organizational agility. We contribute to the two related research fields of SOA and organizational agility by addressing lack of empirical research on SOA’s organizational implications.",
"title": ""
},
{
"docid": "72aa5cb8cf9cf2aff5612352b01822e1",
"text": "A hallmark of variational autoencoders (VAEs) for text processing is their combination of powerful encoder-decoder models, such as LSTMs, with simple latent distributions, typically multivariate Gaussians. These models pose a difficult optimization problem: there is an especially bad local optimum where the variational posterior always equals the prior and the model does not use the latent variable at all, a kind of “collapse” which is encouraged by the KL divergence term of the objective. In this work, we experiment with another choice of latent distribution, namely the von Mises-Fisher (vMF) distribution, which places mass on the surface of the unit hypersphere. With this choice of prior and posterior, the KL divergence term now only depends on the variance of the vMF distribution, giving us the ability to treat it as a fixed hyperparameter. We show that doing so not only averts the KL collapse, but consistently gives better likelihoods than Gaussians across a range of modeling conditions, including recurrent language modeling and bag-ofwords document modeling. An analysis of the properties of our vMF representations shows that they learn richer and more nuanced structures in their latent representations than their Gaussian counterparts.1",
"title": ""
},
{
"docid": "cd3a01ec1db7672e07163534c0dfb32c",
"text": "In order to dock an embedded intelligent wheelchair into a U-shape bed automatically through visual servo, this paper proposes a real-time U-shape bed localization method on an embedded vision system based on FPGA and DSP. This method locates the U-shape bed through finding its line contours. The task can be done in a parallel way with FPGA does line extraction and DSP does line contour finding. Experiments show that, the speed and precision of the U-shape bed localization method proposed in this paper which based on an embedded vision system can satisfy the needs of the system.",
"title": ""
},
{
"docid": "41aa05455471ecd660599f4ec285ff29",
"text": "The recent progress of human parsing techniques has been largely driven by the availability of rich data resources. In this work, we demonstrate some critical discrepancies between the current benchmark datasets and the real world human parsing scenarios. For instance, all the human parsing datasets only contain one person per image, while usually multiple persons appear simultaneously in a realistic scene. It is more practically demanded to simultaneously parse multiple persons, which presents a greater challenge to modern human parsing methods. Unfortunately, absence of relevant data resources severely impedes the development of multiple-human parsing methods. To facilitate future human parsing research, we introduce the Multiple-Human Parsing (MHP) dataset, which contains multiple persons in a real world scene per single image. The MHP dataset contains various numbers of persons (from 2 to 16) per image with 18 semantic classes for each parsing annotation. Persons appearing in the MHP images present sufficient variations in pose, occlusion and interaction. To tackle the multiple-human parsing problem, we also propose a novel Multiple-Human Parser (MH-Parser), which considers both the global context and local cues for each person in the parsing process. The model is demonstrated to outperform the naive “detect-and-parse” approach by a large margin, which will serve as a solid baseline and help drive the future research in real world human parsing.",
"title": ""
},
{
"docid": "43654115b3c64eef7b3a26d90c092e9b",
"text": "We investigate the problem of domain adaptation for parallel data in Statistical Machine Translation (SMT). While techniques for domain adaptation of monolingual data can be borrowed for parallel data, we explore conceptual differences between translation model and language model domain adaptation and their effect on performance, such as the fact that translation models typically consist of several features that have different characteristics and can be optimized separately. We also explore adapting multiple (4–10) data sets with no a priori distinction between in-domain and out-of-domain data except for an in-domain development set.",
"title": ""
},
{
"docid": "4b5336c5f2352fb7cd79b19d2538049b",
"text": "Energy-efficient computation is critical if we are going to continue to scale performance in power-limited systems. For floating-point applications that have large amounts of data parallelism, one should optimize the throughput/mm2 given a power density constraint. We present a method for creating a trade-off curve that can be used to estimate the maximum floating-point performance given a set of area and power constraints. Looking at FP multiply-add units and ignoring register and memory overheads, we find that in a 90 nm CMOS technology at 1 W/mm2, one can achieve a performance of 27 GFlops/mm2 single precision, and 7.5 GFlops/mm double precision. Adding register file overheads reduces the throughput by less than 50 percent if the compute intensity is high. Since the energy of the basic gates is no longer scaling rapidly, to maintain constant power density with scaling requires moving the overall FP architecture to a lower energy/performance point. A 1 W/mm2 design at 90 nm is a \"high-energy\" design, so scaling it to a lower energy design in 45 nm still yields a 7× performance gain, while a more balanced 0.1 W/mm2 design only speeds up by 3.5× when scaled to 45 nm. Performance scaling below 45 nm rapidly decreases, with a projected improvement of only ~3x for both power densities when scaling to a 22 nm technology.",
"title": ""
},
{
"docid": "03097e1239e5540fe1ec45729d1cbbc2",
"text": "Policy gradient is an efficient technique for improving a policy in a reinforcement learning setting. However, vanilla online variants are on-policy only and not able to take advantage of off-policy data. In this paper we describe a new technique that combines policy gradient with off-policy Q-learning, drawing experience from a replay buffer. This is motivated by making a connection between the fixed points of the regularized policy gradient algorithm and the Q-values. This connection allows us to estimate the Q-values from the action preferences of the policy, to which we apply Q-learning updates. We refer to the new technique as ‘PGQ’, for policy gradient and Q-learning. We also establish an equivalency between action-value fitting techniques and actor-critic algorithms, showing that regularized policy gradient techniques can be interpreted as advantage function learning algorithms. We conclude with some numerical examples that demonstrate improved data efficiency and stability of PGQ. In particular, we tested PGQ on the full suite of Atari games and achieved performance exceeding that of both asynchronous advantage actor-critic (A3C) and Q-learning.",
"title": ""
},
{
"docid": "7b4567b9f32795b267f2fb2d39bbee51",
"text": "BACKGROUND\nWearable and mobile devices that capture multimodal data have the potential to identify risk factors for high stress and poor mental health and to provide information to improve health and well-being.\n\n\nOBJECTIVE\nWe developed new tools that provide objective physiological and behavioral measures using wearable sensors and mobile phones, together with methods that improve their data integrity. The aim of this study was to examine, using machine learning, how accurately these measures could identify conditions of self-reported high stress and poor mental health and which of the underlying modalities and measures were most accurate in identifying those conditions.\n\n\nMETHODS\nWe designed and conducted the 1-month SNAPSHOT study that investigated how daily behaviors and social networks influence self-reported stress, mood, and other health or well-being-related factors. We collected over 145,000 hours of data from 201 college students (age: 18-25 years, male:female=1.8:1) at one university, all recruited within self-identified social groups. Each student filled out standardized pre- and postquestionnaires on stress and mental health; during the month, each student completed twice-daily electronic diaries (e-diaries), wore two wrist-based sensors that recorded continuous physical activity and autonomic physiology, and installed an app on their mobile phone that recorded phone usage and geolocation patterns. We developed tools to make data collection more efficient, including data-check systems for sensor and mobile phone data and an e-diary administrative module for study investigators to locate possible errors in the e-diaries and communicate with participants to correct their entries promptly, which reduced the time taken to clean e-diary data by 69%. We constructed features and applied machine learning to the multimodal data to identify factors associated with self-reported poststudy stress and mental health, including behaviors that can be possibly modified by the individual to improve these measures.\n\n\nRESULTS\nWe identified the physiological sensor, phone, mobility, and modifiable behavior features that were best predictors for stress and mental health classification. In general, wearable sensor features showed better classification performance than mobile phone or modifiable behavior features. Wearable sensor features, including skin conductance and temperature, reached 78.3% (148/189) accuracy for classifying students into high or low stress groups and 87% (41/47) accuracy for classifying high or low mental health groups. Modifiable behavior features, including number of naps, studying duration, calls, mobility patterns, and phone-screen-on time, reached 73.5% (139/189) accuracy for stress classification and 79% (37/47) accuracy for mental health classification.\n\n\nCONCLUSIONS\nNew semiautomated tools improved the efficiency of long-term ambulatory data collection from wearable and mobile devices. Applying machine learning to the resulting data revealed a set of both objective features and modifiable behavioral features that could classify self-reported high or low stress and mental health groups in a college student population better than previous studies and showed new insights into digital phenotyping.",
"title": ""
},
{
"docid": "7eeb2bf2aaca786299ebc8507482e109",
"text": "In this paper we argue that questionanswering (QA) over technical domains is distinctly different from TREC-based QA or Web-based QA and it cannot benefit from data-intensive approaches. Technical questions arise in situations where concrete problems require specific answers and explanations. Finding a justification of the answer in the context of the document is essential if we have to solve a real-world problem. We show that NLP techniques can be used successfully in technical domains for high-precision access to information stored in documents. We present ExtrAns, an answer extraction system over technical domains, its architecture, its use of logical forms for answer extractions and how terminology extraction becomes an important part of the system.",
"title": ""
},
{
"docid": "4f2112175c5d8175c5c0f8cb4d9185a2",
"text": "It is difficult to fully assess the quality of software inhouse, outside the actual time and context in which it will execute after deployment. As a result, it is common for software to manifest field failures, failures that occur on user machines due to untested behavior. Field failures are typically difficult to recreate and investigate on developer platforms, and existing techniques based on crash reporting provide only limited support for this task. In this paper, we present a technique for recording, reproducing, and minimizing failing executions that enables and supports inhouse debugging of field failures. We also present a tool that implements our technique and an empirical study that evaluates the technique on a widely used e-mail client.",
"title": ""
},
{
"docid": "268e434cedbf5439612b2197be73a521",
"text": "We have recently developed a chaotic gas turbine whose rotational motion might simulate turbulent Rayleigh-Bénard convection. The nondimensionalized equations of motion of our turbine are expressed as a star network of N Lorenz subsystems, referred to as augmented Lorenz equations. Here, we propose an application of the augmented Lorenz equations to chaotic cryptography, as a type of symmetric secret-key cryptographic method, wherein message encryption is performed by superimposing the chaotic signal generated from the equations on a plaintext in much the same way as in one-time pad cryptography. The ciphertext is decrypted by unmasking the chaotic signal precisely reproduced with a secret key consisting of 2N-1 (e.g., N=101) real numbers that specify the augmented Lorenz equations. The transmitter and receiver are assumed to be connected via both a quantum communication channel on which the secret key is distributed using a quantum key distribution protocol and a classical data communication channel on which the ciphertext is transmitted. We discuss the security and feasibility of our cryptographic method.",
"title": ""
},
{
"docid": "bffd230e76ec32eefe70904a9290bf41",
"text": "This paper introduces a new idea in describing people using their first names, i.e., the name assigned at birth. We show that describing people in terms of similarity to a vector of possible first names is a powerful description of facial appearance that can be used for face naming and building facial attribute classifiers. We build models for 100 common first names used in the United States and for each pair, construct a pair wise first-name classifier. These classifiers are built using training images downloaded from the Internet, with no additional user interaction. This gives our approach important advantages in building practical systems that do not require additional human intervention for labeling. We use the scores from each pair wise name classifier as a set of facial attributes. We show several surprising results. Our name attributes predict the correct first names of test faces at rates far greater than chance. The name attributes are applied to gender recognition and to age classification, outperforming state-of-the-art methods with all training images automatically gathered from the Internet.",
"title": ""
},
{
"docid": "10bd4900b81375e0d89b202cb5a01e4b",
"text": "We introduce a network that directly predicts the 3D layout of lanes in a road scene from a single image. This work marks a first attempt to address this task with onboard sensing instead of relying on pre-mapped environments. Our network architecture, 3D-LaneNet, applies two new concepts: intra-network inverse-perspective mapping (IPM) and anchor-based lane representation. The intranetwork IPM projection facilitates a dual-representation information flow in both regular image-view and top-view. An anchor-per-column output representation enables our endto-end approach replacing common heuristics such as clustering and outlier rejection. In addition, our approach explicitly handles complex situations such as lane merges and splits. Promising results are shown on a new 3D lane synthetic dataset. For comparison with existing methods, we verify our approach on the image-only tuSimple lane detection benchmark and reach competitive performance.",
"title": ""
},
{
"docid": "1ca4294857fcdd1a12402a0d985914c7",
"text": "Alignment of 3D objects from 2D images is one of the most important and well studied problems in computer vision. A typical object alignment system consists of a landmark appearance model which is used to obtain an initial shape and a shape model which refines this initial shape by correcting the initialization errors. Since errors in landmark initialization from the appearance model propagate through the shape model, it is critical to have a robust landmark appearance model. While there has been much progress in designing sophisticated and robust shape models, there has been relatively less progress in designing robust landmark detection models. In this paper we present an efficient and robust landmark detection model which is designed specifically to minimize localization errors thereby leading to state-of-the-art object alignment performance. We demonstrate the efficacy and speed of the proposed approach on the challenging task of multi-view car alignment.",
"title": ""
},
{
"docid": "5deae44a9c14600b1a2460836ed9572d",
"text": "Grasping an object in a cluttered, unorganized environment is challenging because of unavoidable contacts and interactions between the robot and multiple immovable (static) and movable (dynamic) obstacles in the environment. Planning an approach trajectory for grasping in such situations can benefit from physics-based simulations that describe the dynamics of the interaction between the robot manipulator and the environment. In this work, we present a physics-based trajectory optimization approach for planning grasp approach trajectories. We present novel cost objectives and identify failure modes relevant to grasping in cluttered environments. Our approach uses rollouts of physics-based simulations to compute the gradient of the objective and of the dynamics. Our approach naturally generates behaviors such as choosing to push objects that are less likely to topple over, recognizing and avoiding situations which might cause a cascade of objects to fall over, and adjusting the manipulator trajectory to push objects aside in a direction orthogonal to the grasping direction. We present results in simulation for grasping in a variety of cluttered environments with varying levels of density of obstacles in the environment. Our experiments in simulation indicate that our approach outperforms a baseline approach that considers multiple straight-line trajectories modified to account for static obstacles by an aggregate success rate of 14% with varying degrees of object clutter.",
"title": ""
},
{
"docid": "b1ef75c4a0dc481453fb68e94ec70cdc",
"text": "Autonomous Land Vehicles (ALVs), due to their considerable potential applications in areas such as mining and defence, are currently the focus of intense research at robotics institutes worldwide. Control systems that provide reliable navigation, often in complex or previously unknown environments, is a core requirement of any ALV implementation. Three key aspects for the provision of such autonomous systems are: 1) path planning, 2) obstacle avoidance, and 3) path following. The work presented in this thesis, under the general umbrella of the ACFR’s own ALV project, the ‘High Speed Vehicle Project’, addresses these three mobile robot competencies in the context of an ALV based system. As such, it develops both the theoretical concepts and the practical components to realise an initial, fully functional implementation of such a system. This system, which is implemented on the ACFR’s (ute) test vehicle, allows the user to enter a trajectory and follow it, while avoiding any detected obstacles along the path.",
"title": ""
},
{
"docid": "324c0fe0d57734b54dd03e468b7b4603",
"text": "This paper studies the use of received signal strength indicators (RSSI) applied to fingerprinting method in a Bluetooth network for indoor positioning. A Bayesian fusion (BF) method is proposed to combine the statistical information from the RSSI measurements and the prior information from a motion model. Indoor field tests are carried out to verify the effectiveness of the method. Test results show that the proposed BF algorithm achieves a horizontal positioning accuracy of about 4.7 m on the average, which is about 6 and 7 % improvement when compared with Bayesian static estimation and a point Kalman filter method, respectively.",
"title": ""
},
{
"docid": "e9e2887e7aae5315a8661c9d7456aa2e",
"text": "It has been shown that learning distributed word representations is highly useful for Twitter sentiment classification. Most existing models rely on a single distributed representation for each word. This is problematic for sentiment classification because words are often polysemous and each word can contain different sentiment polarities under different topics. We address this issue by learning topic-enriched multi-prototype word embeddings (TMWE). In particular, we develop two neural networks which 1) learn word embeddings that better capture tweet context by incorporating topic information, and 2) learn topic-enriched multiple prototype embeddings for each word. Experiments on Twitter sentiment benchmark datasets in SemEval 2013 show that TMWE outperforms the top system with hand-crafted features, and the current best neural network model.",
"title": ""
},
{
"docid": "0f87fefbe2cfc9893b6fc490dd3d40b7",
"text": "With the tremendous amount of textual data available in the Internet, techniques for abstractive text summarization become increasingly appreciated. In this paper, we present work in progress that tackles the problem of multilingual text summarization using semantic representations. Our system is based on abstract linguistic structures obtained from an analysis pipeline of disambiguation, syntactic and semantic parsing tools. The resulting structures are stored in a semantic repository, from which a text planning component produces content plans that go through a multilingual generation pipeline that produces texts in English, Spanish, French, or German. In this paper we focus on the lingusitic components of the summarizer, both analysis and generation.",
"title": ""
},
{
"docid": "ca1ebdf96eeeb6c55116a70ed6db5ea5",
"text": "Acknowledgements: We would like to recognize our expert contributors who participated in the first eWorkshop on Agile Methods and thereby contributed to the section on State-of-the-Practice: We also would like to thank our collogues who helped arrange the eWorkshop and co-authored that same section:",
"title": ""
}
] |
scidocsrr
|
5250c8e427616f2977fbd700998be950
|
Pseudo-Relevance Feedback Based on Matrix Factorization
|
[
{
"docid": "c10d33abc6ed1d47c11bf54ed38e5800",
"text": "The past decade has seen a steady growth of interest in statistical language models for information retrieval, and much research work has been conducted on this subject. This book by ChengXiang Zhai summarizes most of this research. It opens with an introduction covering the basic concepts of information retrieval and statistical languagemodels, presenting the intuitions behind these concepts. This introduction is then followed by a chapter providing an overview of:",
"title": ""
},
{
"docid": "e5b73193158b98a536d2d296e816c325",
"text": "We use a low-dimensional linear model to describe the user rating matrix in a recommendation system. A non-negativity constraint is enforced in the linear model to ensure that each user’s rating profile can be represented as an additive linear combination of canonical coordinates. In order to learn such a constrained linear model from an incomplete rating matrix, we introduce two variations on Non-negative Matrix Factorization (NMF): one based on the Expectation-Maximization (EM) procedure and the other a Weighted Nonnegative Matrix Factorization (WNMF). Based on our experiments, the EM procedure converges well empirically and is less susceptible to the initial starting conditions than WNMF, but the latter is much more computationally efficient. Taking into account the advantages of both algorithms, a hybrid approach is presented and shown to be effective in real data sets. Overall, the NMF-based algorithms obtain the best prediction performance compared with other popular collaborative filtering algorithms in our experiments; the resulting linear models also contain useful patterns and features corresponding to user communities.",
"title": ""
}
] |
[
{
"docid": "703cda264eddc139597b9ef9d4c0e977",
"text": "Multi-processor systems are becoming the de-facto standard across different computing domains, ranging from high-end multi-tenant cloud servers to low-power mobile platforms. The denser integration of CPUs creates an opportunity for great economic savings achieved by packing processes of multiple tenants or by bundling all kinds of tasks at various privilege levels to share the same platform. This level of sharing carries with it a serious risk of leaking sensitive information through the shared microarchitectural components. Microarchitectural attacks initially only exploited core-private resources, but were quickly generalized to resources shared within the CPU. We present the first fine grain side channel attack that works across processors. The attack does not require CPU co-location of the attacker and the victim. The novelty of the proposed work is that, for the first time the directory protocol of high efficiency CPU interconnects is targeted. The directory protocol is common to all modern multi-CPU systems. Examples include AMD's HyperTransport, Intel's Quickpath, and ARM's AMBA Coherent Interconnect. The proposed attack does not rely on any specific characteristic of the cache hierarchy, e.g. inclusiveness. Note that inclusiveness was assumed in all earlier works. Furthermore, the viability of the proposed covert channel is demonstrated with two new attacks: by recovering a full AES key in OpenSSL, and a full ElGamal key in libgcrypt within the range of seconds on a shared AMD Opteron server.",
"title": ""
},
{
"docid": "e16f013717320ab7dcac54f752f9d79d",
"text": "In order to drive safely and efficiently on public roads, autonomous vehicles will have to understand the intentions of surrounding vehicles, and adapt their own behavior accordingly. If experienced human drivers are generally good at inferring other vehicles' motion up to a few seconds in the future, most current Advanced Driving Assistance Systems (ADAS) are unable to perform such medium-term forecasts, and are usually limited to high-likelihood situations such as emergency braking. In this article, we present a first step towards consistent trajectory prediction by introducing a long short-term memory (LSTM) neural network, which is capable of accurately predicting future longitudinal and lateral trajectories for vehicles on highway. Unlike previous work focusing on a low number of trajectories collected from a few drivers, our network was trained and validated on the NGSIM US-101 dataset, which contains a total of 800 hours of recorded trajectories in various traffic densities, representing more than 6000 individual drivers.",
"title": ""
},
{
"docid": "5c90cd6c4322c30efb90589b1a65192e",
"text": "The sure thing principle and the law of total probability are basic laws in classic probability theory. A disjunction fallacy leads to the violation of these two classical laws. In this paper, an Evidential Markov (EM) decision making model based on Dempster-Shafer (D-S) evidence theory and Markov modelling is proposed to address this issue and model the real human decision-making process. In an evidential framework, the states are extended by introducing an uncertain state which represents the hesitance of a decision maker. The classical Markov model can not produce the disjunction effect, which assumes that a decision has to be certain at one time. However, the state is allowed to be uncertain in the EM model before the final decision is made. An extra uncertainty degree parameter is defined by a belief entropy, named Deng entropy, to assignment the basic probability assignment of the uncertain state, which is the key to predict the disjunction effect. A classical categorization decision-making experiment is used to illustrate the effectiveness and validity of EM model. The disjunction effect can be well predicted ∗Corresponding author at Wen Jiang: School of Electronics and Information, Northwestern Polytechnical University, Xi’an, Shaanxi 710072, China. Tel: (86-29)88431267. E-mail address: jiangwen@nwpu.edu.cn, jiangwenpaper@hotmail.com Preprint submitted to Elsevier May 19, 2017 and the free parameters are less compared with the existing models.",
"title": ""
},
{
"docid": "bc6877a5a83531a794ac1c8f7a4c7362",
"text": "A number of times when using cross-validation (CV) while trying to do classification/probability estimation we have observed surprisingly low AUC's on real data with very few positive examples. AUC is the area under the ROC and measures the ranking ability and corresponds to the probability that a positive example receives a higher model score than a negative example. Intuition seems to suggest that no reasonable methodology should ever result in a model with an AUC significantly below 0.5. The focus of this paper is not on the estimator properties of CV (bias/variance/significance), but rather on the properties of the 'holdout' predictions based on which the CV performance of a model is calculated. We show that CV creates predictions that have an 'inverse' ranking with AUC well below 0.25 using features that were initially entirely unpredictive and models that can only perform monotonic transformations. In the extreme, combining CV with bagging (repeated averaging of out-of-sample predictions) generates 'holdout' predictions with perfectly opposite rankings on random data. While this would raise immediate suspicion upon inspection, we would like to caution the data mining community against using CV for stacking or in currently popular ensemble methods. They can reverse the predictions by assigning negative weights and produce in the end a model that appears to have close to perfect predictability while in reality the data was random.",
"title": ""
},
{
"docid": "67db2885a2b8780cbfd19c1ff0cfba36",
"text": "Mechanocomputational techniques in conjunction with artificial intelligence (AI) are revolutionizing the interpretations of the crucial information from the medical data and converting it into optimized and organized information for diagnostics. It is possible due to valuable perfection in artificial intelligence, computer aided diagnostics, virtual assistant, robotic surgery, augmented reality and genome editing (based on AI) technologies. Such techniques are serving as the products for diagnosing emerging microbial or non microbial diseases. This article represents a combinatory approach of using such approaches and providing therapeutic solutions towards utilizing these techniques in disease diagnostics.",
"title": ""
},
{
"docid": "30d6cdcde60583ddaa514182d4d1e1d5",
"text": "OBJECTIVES/HYPOTHESIS\nAnalysis of auditory brainstem response (ABR) in very preterm infants can be difficult owing to the poor detectability of the various components of the ABR. We evaluated the ABR morphology and tried to extend the current assessment system.\n\n\nSTUDY DESIGN\nProspective cohort study.\n\n\nMETHODS\nWe included 28 preterm very low birth weight infants admitted to the neonatal intensive care unit of Sophia Children's Hospital. ABRs were measured between 26 and 34 weeks postconceptional age. The presence of the following ABR parameters was recorded: the ipsilateral peaks I, III and V, the contralateral peaks III and V, and the response threshold.\n\n\nRESULTS\nIn 82% of our population, a typical \"bow tie\" response pattern was present as a sign of early auditory development. This bow tie pattern is the narrowest part of the response wave and is predominantly characterized by the ipsilateral negative peak III. This effect may be emphasized by the contralateral peak III. The bow tie pattern is seen approximately 0.1 milliseconds before the ipsilateral peak III. From 30 weeks postconceptional age onward, a more extensive morphologic pattern is recorded in 90% of the infants. A flow chart was designed to analyze the ABR morphology of preterm infants in an unambiguous stepwise fashion.\n\n\nCONCLUSIONS\nA typical bow tie pattern preceding peak III seems to be the earliest characteristic of the developing ABR morphology in preterm infants. As ABR characteristics will improve with increasing age, neonatal hearing screening should be postponed until after 34 weeks.",
"title": ""
},
{
"docid": "7f1244e44dcacd35a2ed5e8f75f69567",
"text": "An otherwise healthy 22-year-old Caucasian woman was referred regarding a 6-year history of bifrontotemporal hair loss. Previously, she was treated with topical minoxidil with slight improvement. In addition, upon questioning, the patient described using a ponytail hair style daily since adolescence. Physical examination revealed two ovoid-shaped alopecic patches on both frontotemporal regions (Fig. 1). Fringe sign was present (Fig. 1, arrows). There were no pustules, erythema, or scale. Hair pull test was negative. On the basis of these findings, a clinical diagnosis of traction alopecia was made. Traction alopecia is a form of trauma-induced alopecia that results from prolonged or repeated pulling of hair shafts related to various types of hairstyles or occupational uniforms (nurses0caps or nuns0coifs).1–4 It is a reversible condition, but if the cause persists, permanent scarring alopecia may be developed. African-American women have a higher prevalence of traction alopecia than Caucasian women. In a pilot study, it was found in 37% of women who visited a primary care health center in Cape Town. Cornrows and other cultural hair care practices have been implicated as risk factors for traction alopecia in African-American patients. Traction alopecia is a diagnostic challenge when the external factor is not suspected or admitted, and it could be misdiagnosed as alopecia areata or trichotillomania. The fringe sign, persistence of residual hairs along the frontotemporal rim, may be a useful clinical marker of this condition. Dermoscopy can also be helpful in the diagnostic work-up process. Trichoscopy examination usually reveals white hair casts encircling the proximal portion of the hair shafts. The differential diagnosis of traction alopecia includes alopecia areata, trichotillomania, frontal fibrosing alopecia, and triangular temporal alopecia. Prompt investigation is important to avoid unnecessary treatments and to reverse hair loss. We recommended that the patient stop using the ponytail style. Partial regrowth was observed 6 months later.",
"title": ""
},
{
"docid": "a5a586966fc5622fd871ce1a05298863",
"text": "Churning is the movement of customers from a company to another. For any company, being able to predict with some time which of their customers will churn is essential to take actions in order to retain them, and for this reason most sectors invest substantial effort in techniques for (semi)automatically predicting churning, and data mining and machine learning are among the techniques successfully used to this effect. In this paper we describe a prototype for churn prediction using stream mining methods, which offer the additional promise of detecting new patterns of churn in real-time streams of high-speed data, and adapting quickly to a changing reality. The prototype is implemented on top of the MOA (Massive Online Analysis) framework for stream mining. The application implicit in the prototype is the telecommunication operator (mobile phone) sector. A shorter version of this paper, omitting Section 5, was presented at CCIA’13 (http://mon.uvic.cat/ccia2013/en/).",
"title": ""
},
{
"docid": "05eb1af3e6838640b6dc5c1c128cc78a",
"text": "Predicting the success of referring expressions (RE) is vital for real-world applications such as navigation systems. Traditionally, research has focused on studying Referring Expression Generation (REG) in virtual, controlled environments. In this paper, we describe a novel study of spatial references from real scenes rather than virtual. First, we investigate how humans describe objects in open, uncontrolled scenarios and compare our findings to those reported in virtual environments. We show that REs in real-world scenarios differ significantly to those in virtual worlds. Second, we propose a novel approach to quantifying image complexity when complete annotations are not present (e.g. due to poor object recognition capabitlities), and third, we present a model for success prediction of REs for objects in real scenes. Finally, we discuss implications for Natural Language Generation (NLG) systems and future directions.",
"title": ""
},
{
"docid": "966a156b1ebf6981c4218edc002cec7e",
"text": "Exposure to green space has been associated with better physical and mental health. Although this exposure could also influence cognitive development in children, available epidemiological evidence on such an impact is scarce. This study aimed to assess the association between exposure to green space and measures of cognitive development in primary schoolchildren. This study was based on 2,593 schoolchildren in the second to fourth grades (7-10 y) of 36 primary schools in Barcelona, Spain (2012-2013). Cognitive development was assessed as 12-mo change in developmental trajectory of working memory, superior working memory, and inattentiveness by using four repeated (every 3 mo) computerized cognitive tests for each outcome. We assessed exposure to green space by characterizing outdoor surrounding greenness at home and school and during commuting by using high-resolution (5 m × 5 m) satellite data on greenness (normalized difference vegetation index). Multilevel modeling was used to estimate the associations between green spaces and cognitive development. We observed an enhanced 12-mo progress in working memory and superior working memory and a greater 12-mo reduction in inattentiveness associated with greenness within and surrounding school boundaries and with total surrounding greenness index (including greenness surrounding home, commuting route, and school). Adding a traffic-related air pollutant (elemental carbon) to models explained 20-65% of our estimated associations between school greenness and 12-mo cognitive development. Our study showed a beneficial association between exposure to green space and cognitive development among schoolchildren that was partly mediated by reduction in exposure to air pollution.",
"title": ""
},
{
"docid": "ef3ac22e7d791113d08fd778a79008c3",
"text": "Great efforts have been dedicated to harvesting knowledge bases from online encyclopedias. These knowledge bases play important roles in enabling machines to understand texts. However, most current knowledge bases are in English and non-English knowledge bases, especially Chinese ones, are still very rare. Many previous systems that extract knowledge from online encyclopedias, although are applicable for building a Chinese knowledge base, still suffer from two challenges. The first is that it requires great human efforts to construct an ontology and build a supervised knowledge extraction model. The second is that the update frequency of knowledge bases is very slow. To solve these challenges, we propose a never-ending Chinese Knowledge extraction system, CN-DBpedia, which can automatically generate a knowledge base that is of ever-increasing in size and constantly updated. Specially, we reduce the human costs by reusing the ontology of existing knowledge bases and building an end-to-end facts extraction model. We further propose a smart active update strategy to keep the freshness of our knowledge base with little human costs. The 164 million API calls of the published services justify the success of our system.",
"title": ""
},
{
"docid": "57224fab5298169be0da314e55ca6b43",
"text": "Although users’ preference is semantically reflected in the free-form review texts, this wealth of information was not fully exploited for learning recommender models. Specifically, almost all existing recommendation algorithms only exploit rating scores in order to find users’ preference, but ignore the review texts accompanied with rating information. In this paper, we propose a novel matrix factorization model (called TopicMF) which simultaneously considers the ratings and accompanied review texts. Experimental results on 22 real-world datasets show the superiority of our model over the state-of-the-art models, demonstrating its effectiveness for recommendation tasks.",
"title": ""
},
{
"docid": "08a62894bac4e272530d1630e720c7ad",
"text": "Recently, along with the rapid development of mobile communication technology, edge computing theory and techniques have been attracting more and more attentions from global researchers and engineers, which can significantly bridge the capacity of cloud and requirement of devices by the network edges, and thus can accelerate the content deliveries and improve the quality of mobile services. In order to bring more intelligence to the edge systems, compared to traditional optimization methodology, and driven by the current deep learning techniques, we propose to integrate the Deep Reinforcement Learning techniques and Federated Learning framework with the mobile edge systems, for optimizing the mobile edge computing, caching and communication. And thus, we design the “In-Edge AI” framework in order to intelligently utilize the collaboration among devices and edge nodes to exchange the learning parameters for a better training and inference of the models, and thus to carry out dynamic system-level optimization and application-level enhancement while reducing the unnecessary system communication load. “In-Edge AI” is evaluated and proved to have near-optimal performance but relatively low overhead of learning, while the system is cognitive and adaptive to the mobile communication systems. Finally, we discuss several related challenges and opportunities for unveiling a promising upcoming future of “In-Edge AI”.",
"title": ""
},
{
"docid": "e2efdcc80b6159eeff05c59611005a4b",
"text": "Many software development organizations strive to enhance the productivity of their developers. All too often, efforts aimed at improving developer productivity are undertaken without knowledge about how developers spend their time at work and how it influences their own perception of productivity. To fill in this gap, we deployed a monitoring application at 20 computers of professional software developers from four companies for an average of 11 full work day in situ. Corroborating earlier findings, we found that developers spend their time on a wide variety of activities and switch regularly between them, resulting in highly fragmented work. Our findings extend beyond existing research in that we correlate developers’ work habits with perceived productivity and also show productivity is a personal matter. Although productivity is personal, developers can be roughly grouped into morning, low-at-lunch and afternoon people. A stepwise linear regression per participant revealed that more user input is most often associated with a positive, and emails, planned meetings and work unrelated websites with a negative perception of productivity. We discuss opportunities of our findings, the potential to predict high and low productivity and suggest design approaches to create better tool support for planning developers’ work day and improving their personal productivity.",
"title": ""
},
{
"docid": "ae8292c58a58928594d5f3730a6feacf",
"text": "Photoplethysmography (PPG) signals, captured using smart phones are generally noisy in nature. Although they have been successfully used to determine heart rate from frequency domain analysis, further indirect markers like blood pressure (BP) require time domain analysis for which the signal needs to be substantially cleaned. In this paper we propose a methodology to clean such noisy PPG signals. Apart from filtering, the proposed approach reduces the baseline drift of PPG signal to near zero. Furthermore it models each cycle of PPG signal as a sum of 2 Gaussian functions which is a novel contribution of the method. We show that, the noise cleaning effect produces better accuracy and consistency in estimating BP, compared to the state of the art method that uses the 2-element Windkessel model on features derived from raw PPG signal, captured from an Android phone.",
"title": ""
},
{
"docid": "0cdf08bd9c2e63f0c9bb1dd7472a23a8",
"text": "Under natural viewing conditions, human observers shift their gaze to allocate processing resources to subsets of the visual input. Many computational models try to predict such voluntary eye and attentional shifts. Although the important role of high level stimulus properties (e.g., semantic information) in search stands undisputed, most models are based on low-level image properties. We here demonstrate that a combined model of face detection and low-level saliency significantly outperforms a low-level model in predicting locations humans fixate on, based on eye-movement recordings of humans observing photographs of natural scenes, most of which contained at least one person. Observers, even when not instructed to look for anything particular, fixate on a face with a probability of over 80% within their first two fixations; furthermore, they exhibit more similar scanpaths when faces are present. Remarkably, our model’s predictive performance in images that do not contain faces is not impaired, and is even improved in some cases by spurious face detector responses.",
"title": ""
},
{
"docid": "42fd940e239ed3748b007fde8b583b25",
"text": "The ImageCLEF’s plant identification task provides a testbed for the system-oriented evaluation of plant identification, more precisely on the 126 tree species identification based on leaf images. Three types of image content are considered: Scan, Scan-like (leaf photographs with a white uniform background), and Photograph (unconstrained leaf with natural background). The main originality of this data is that it was specifically built through a citizen sciences initiative conducted by Tela Botanica, a French social network of amateur and expert botanists. This makes the task closer to the conditions of a real-world application. This overview presents more precisely the resources and assessments of task, summarizes the retrieval approaches employed by the participating groups, and provides an analysis of the main evaluation results. With a total of eleven groups from eight countries and with a total of 30 runs submitted, involving distinct and original methods, this second year pilot task confirms Image Retrieval community interest for biodiversity and botany, and highlights further challenging studies in plant identification.",
"title": ""
},
{
"docid": "f861b693a060d8da8df2d680d68566de",
"text": "Density-based clustering algorithms are attractive for the task of class identification in spatial database. However, in many cases, very different local-density clusters exist in different regions of data space, therefore, DBSCAN [Ester, M. et al., A Density-Based Algorithm for Discovering Clusters in Large Spatial Databases with Noise. In E. Simoudis, J. Han, & U. M. Fayyad (Eds.), Proc. 2nd Int. Conf. on Knowledge Discovery and Data Mining (pp. 226-231). Portland, OR: AAAI.] using a global density parameter is not suitable. As an improvement, OPTICS [Ankerst, M. et al,(1999). OPTICS: Ordering Points To Identify the Clustering Structure. In A. Delis, C. Faloutsos, & S. Ghandeharizadeh (Eds.), Proc. ACM SIGMOD Int. Conf. on Management of Data (pp. 49-60). Philadelphia, PA: ACM.] creates an augmented ordering of the database representing its density-based clustering structure, but it only generates the clusters whose local-density exceeds some threshold instead of similar local-density clusters and doesn't produce a clustering of a data set explicitly. Furthermore the parameters required by almost all the well-known clustering algorithms are hard to determine but have a significant influence on the clustering result. In this paper, a new clustering algorithm LDBSCAN relying on a local-density-based notion of clusters is proposed to solve those problems and, what is more, it is very easy for us to pick the appropriate parameters and takes the advantage of the LOF [Breunig, M. M., et al.,(2000). LOF: Identifying Density-Based Local Outliers. In W. Chen, J. F. Naughton, & P. A. Bernstein (Eds.), Proc. ACM SIGMOD Int. Conf. on Management of Data (pp. 93-104). Dalles, TX: ACM.] to detect the noises comparing with other density-based clustering algorithms. The proposed algorithm has potential applications in business intelligence and enterprise information systems.",
"title": ""
},
{
"docid": "e6c5ca76cd14b398ac82a2f38b0a9b12",
"text": "Modern dairies cause the accumulation of considerable quantity of dairy manure which is a potential hazard to the environment. Dairy manure can also act as a principal larval resource for many insects such as the black soldier fly, Hermetia illucens. The black soldier fly larvae (BSFL) are considered as a new biotechnology to convert dairy manure into biodiesel and sugar. BSFL are a common colonizer of large variety of decomposing organic material in temperate and tropical areas. Adults do not need to be fed, except to take water, and acquired enough nutrition during larval development for reproduction. Dairy manure treated by BSFL is an economical way in animal facilities. Grease could be extracted from BSFL by petroleum ether, and then be treated with a two-step method to produce biodiesel. The digested dairy manure was hydrolyzed into sugar. In this study, approximately 1248.6g fresh dairy manure was converted into 273.4 g dry residue by 1200 BSFL in 21 days. Approximately 15.8 g of biodiesel was gained from 70.8 g dry BSFL, and 96.2g sugar was obtained from the digested dairy manure. The residual dry BSFL after grease extraction can be used as protein feedstuff.",
"title": ""
}
] |
scidocsrr
|
0034eca41a9eb8c85d950f35d60df3d6
|
Gated Self-Matching Networks for Reading Comprehension and Question Answering
|
[
{
"docid": "6d594c21ff1632b780b510620484eb62",
"text": "The last several years have seen intensive interest in exploring neural-networkbased models for machine comprehension (MC) and question answering (QA). In this paper, we approach the problems by closely modelling questions in a neural network framework. We first introduce syntactic information to help encode questions. We then view and model different types of questions and the information shared among them as an adaptation task and proposed adaptation models for them. On the Stanford Question Answering Dataset (SQuAD), we show that these approaches can help attain better results over a competitive baseline.",
"title": ""
},
{
"docid": "9387c02974103731846062b549022819",
"text": "Machine comprehension of text is an important problem in natural language processing. A recently released dataset, the Stanford Question Answering Dataset (SQuAD), offers a large number of real questions and their answers created by humans through crowdsourcing. SQuAD provides a challenging testbed for evaluating machine comprehension algorithms, partly because compared with previous datasets, in SQuAD the answers do not come from a small set of candidate answers and they have variable lengths. We propose an end-to-end neural architecture for the task. The architecture is based on match-LSTM, a model we proposed previously for textual entailment, and Pointer Net, a sequence-to-sequence model proposed by Vinyals et al. (2015) to constrain the output tokens to be from the input sequences. We propose two ways of using Pointer Net for our task. Our experiments show that both of our two models substantially outperform the best results obtained by Rajpurkar et al. (2016) using logistic regression and manually crafted features.",
"title": ""
},
{
"docid": "da2f99dd979a1c4092c22ed03537bbe8",
"text": "Several large cloze-style context-questionanswer datasets have been introduced recently: the CNN and Daily Mail news data and the Children’s Book Test. Thanks to the size of these datasets, the associated text comprehension task is well suited for deep-learning techniques that currently seem to outperform all alternative approaches. We present a new, simple model that uses attention to directly pick the answer from the context as opposed to computing the answer using a blended representation of words in the document as is usual in similar models. This makes the model particularly suitable for question-answering problems where the answer is a single word from the document. Our model outperforms models previously proposed for these tasks by a large margin.",
"title": ""
}
] |
[
{
"docid": "26638c9c6d4c430e74c0269fc1edeb7f",
"text": "OBJECTIVE\nThis systematic review aimed to assess the efficacy, effectiveness, safety, and tolerability of osteopathic manipulative treatment (OMT) in patients with headache.\n\n\nBACKGROUND\nMigraine is one of the most common and disabling medical conditions. It affects more than 15% of the general population, causing high global socioeconomic costs, and the currently available treatment options are inadequate.\n\n\nMETHODS\nWe systematically reviewed all available studies investigating the use of OMT in patients with migraine and other forms of headache.\n\n\nRESULTS\nThe search of literature produced six studies, five of which were eligible for review. The reviewed papers collectively support the notion that patients with migraine can benefit from OMT. OMT could most likely reduce the number of episodes per month as well as drug use. None of the included studies, however, was classified as low risk of bias according to the Cochrane Collaboration's tool for assessing risk of bias.\n\n\nCONCLUSION\nThe results from this systematic review show a preliminary low level of evidence that OMT is effective in the management of headache. However, studies with more rigorous designs and methodology are needed to strengthen this evidence. Moreover, this review suggests that new manual interventions for the treatment of acute migraine are available and developing.",
"title": ""
},
{
"docid": "9e79c88e5504f0267007aaa107314aa3",
"text": "We evaluate two dependency parsers, MSTParser and MaltParser, with respect to their capacity to recover unbounded dependencies in English, a type of evaluation that has been applied to grammarbased parsers and statistical phrase structure parsers but not to dependency parsers. The evaluation shows that when combined with simple post-processing heuristics, the parsers correctly recall unbounded dependencies roughly 50% of the time, which is only slightly worse than two grammar-based parsers specifically designed to cope with such dependencies.",
"title": ""
},
{
"docid": "a0a46f9ec5221b1a6c95bb8c45f1a8a7",
"text": "This paper describes the steps for achieving data processing in a methodological context, which take part of a methodology previously proposed by the authors for developing Data Mining (DM) applications, called \"Methodology for the development of data mining applications based on organizational analysis\". The methodology has three main phases: Knowledge of the Organization, Preparation and treatment of data, and finally, development of the DM application. We will focus on the second phase. The main contribution of this proposal is the design of a methodological framework of the second phase based on the paradigm of Data Science (DS), in order to get what we have called “Vista Minable Operacional” (VMO) from the “Vista Minable Conceptual” (VMC). The VMO built is used in the third phase. This methodological framework has been applied in two different cases of study, oil and public health.",
"title": ""
},
{
"docid": "d08529ef66abefda062a414acb278641",
"text": "Spend your few moment to read a book even only few pages. Reading book is not obligation and force for everybody. When you don't want to read, you can get punishment from the publisher. Read a book becomes a choice of your different characteristics. Many people with reading habit will always be enjoyable to read, or on the contrary. For some reasons, this inductive logic programming techniques and applications tends to be the representative book in this website.",
"title": ""
},
{
"docid": "2603c07864b92c6723b40c83d3c216b9",
"text": "Background: A study was undertaken to record exacerbations and health resource use in patients with COPD during 6 months of treatment with tiotropium, salmeterol, or matching placebos. Methods: Patients with COPD were enrolled in two 6-month randomised, placebo controlled, double blind, double dummy studies of tiotropium 18 μg once daily via HandiHaler or salmeterol 50 μg twice daily via a metered dose inhaler. The two trials were combined for analysis of heath outcomes consisting of exacerbations, health resource use, dyspnoea (assessed by the transitional dyspnoea index, TDI), health related quality of life (assessed by St George’s Respiratory Questionnaire, SGRQ), and spirometry. Results: 1207 patients participated in the study (tiotropium 402, salmeterol 405, placebo 400). Compared with placebo, tiotropium but not salmeterol was associated with a significant delay in the time to onset of the first exacerbation. Fewer COPD exacerbations/patient year occurred in the tiotropium group (1.07) than in the placebo group (1.49, p<0.05); the salmeterol group (1.23 events/year) did not differ from placebo. The tiotropium group had 0.10 hospital admissions per patient year for COPD exacerbations compared with 0.17 for salmeterol and 0.15 for placebo (not statistically different). For all causes (respiratory and non-respiratory) tiotropium, but not salmeterol, was associated with fewer hospital admissions while both groups had fewer days in hospital than the placebo group. The number of days during which patients were unable to perform their usual daily activities was lowest in the tiotropium group (tiotropium 8.3 (0.8), salmeterol 11.1 (0.8), placebo 10.9 (0.8), p<0.05). SGRQ total score improved by 4.2 (0.7), 2.8 (0.7) and 1.5 (0.7) units during the 6 month trial for the tiotropium, salmeterol and placebo groups, respectively (p<0.01 tiotropium v placebo). Compared with placebo, TDI focal score improved in both the tiotropium group (1.1 (0.3) units, p<0.001) and the salmeterol group (0.7 (0.3) units, p<0.05). Evaluation of morning pre-dose FEV1, peak FEV1 and mean FEV1 (0–3 hours) showed that tiotropium was superior to salmeterol while both active drugs were more effective than placebo. Conclusions: Exacerbations of COPD and health resource usage were positively affected by daily treatment with tiotropium. With the exception of the number of hospital days associated with all causes, salmeterol twice daily resulted in no significant changes compared with placebo. Tiotropium also improved health related quality of life, dyspnoea, and lung function in patients with COPD.",
"title": ""
},
{
"docid": "060cf7fd8a97c1ddf852373b63fe8ae1",
"text": "Object viewpoint estimation from 2D images is an essential task in computer vision. However, two issues hinder its progress: scarcity of training data with viewpoint annotations, and a lack of powerful features. Inspired by the growing availability of 3D models, we propose a framework to address both issues by combining render-based image synthesis and CNNs (Convolutional Neural Networks). We believe that 3D models have the potential in generating a large number of images of high variation, which can be well exploited by deep CNN with a high learning capacity. Towards this goal, we propose a scalable and overfit-resistant image synthesis pipeline, together with a novel CNN specifically tailored for the viewpoint estimation task. Experimentally, we show that the viewpoint estimation from our pipeline can significantly outperform state-of-the-art methods on PASCAL 3D+ benchmark.",
"title": ""
},
{
"docid": "0772992b4c5a57b1c8e03fdabfa60218",
"text": "Investigation of the cryptanalytic strength of RSA cryptography requires computing many GCDs of two long integers (e.g., of length 1024 bits). This paper presents a high throughput parallel algorithm to perform many GCD computations concurrently on a GPU based on the CUDA architecture. The experiments with an NVIDIA GeForce GTX285 GPU and a single core of 3.0 GHz Intel Core2 Duo E6850 CPU show that the proposed GPU algorithm runs 11.3 times faster than the corresponding CPU algorithm.",
"title": ""
},
{
"docid": "6d587b8b955058468c3c397bca822bd4",
"text": "In this work we propose a novel approach to the problem of multi-view stereo reconstruction. Building upon the previously proposed PatchMatch stereo and PM-Huber algorithm we introduce an extension to the multi-view scenario that employs an iterative refinement scheme. Our proposed approach uses an extended and robustified volumetric truncated signed distance function representation, which is advantageous for the fusion of refined depth maps and also for raycasting the current reconstruction estimation together with estimated depth normals into arbitrary camera views. We formulate the combined multi-view stereo reconstruction and refinement as a variational optimization problem. The newly introduced plane based smoothing term in the energy formulation is guided by the current reconstruction confidence and the image contents. Further we propose an extension of the PatchMatch scheme with an additional KLT step to avoid unnecessary sampling iterations. Improper camera poses are corrected by a direct image aligment step that performs robust outlier compensation by means of a recently proposed kernel lifting framework. To speed up the optimization of the variational formulation an adapted scheme is used for faster convergence.",
"title": ""
},
{
"docid": "abdd688f821a450ebe0eb70d720989c2",
"text": "In a document retrieval, or other pattern matching environment where stored entities (documents) are compared with each other or with incoming patterns (search requests), it appears that the best indexing (property) space is one where each entity lies as far away from the others as possible; in these circumstances the value of an indexing system may be expressible as a function of the density of the object space; in particular, retrieval performance may correlate inversely with space density. An approach based on space density computations is used to choose an optimum indexing vocabulary for a collection of documents. Typical evaluation results are shown, demonstating the usefulness of the model.",
"title": ""
},
{
"docid": "68a90df0f3de170d64d3245c8b316460",
"text": "In this paper, we propose a new framework for training vision-based agent for First-Person Shooter (FPS) Game, in particular Doom. Our framework combines the state-of-the-art reinforcement learning approach (Asynchronous Advantage Actor-Critic (A3C) model [Mnih et al. (2016)]) with curriculum learning. Our model is simple in design and only uses game states from the AI side, rather than using opponents’ information [Lample & Chaplot (2016)]. On a known map, our agent won 10 out of the 11 attended games and the champion of Track1 in ViZDoom AI Competition 2016 by a large margin, 35% higher score than the second place.",
"title": ""
},
{
"docid": "de0a118cfc02cb830142001f55872ecb",
"text": "The inherent uncertainty associated with unstructured grasping tasks makes establishing a successful grasp difficult. Traditional approaches to this problem involve hands that are complex, fragile, require elaborate sensor suites, and are difficult to control. In this paper, we demonstrate a novel autonomous grasping system that is both simple and robust. The four-fingered hand is driven by a single actuator, yet can grasp objects spanning a wide range of size, shape, and mass. The hand is constructed using polymer-based shape deposition manufacturing, with joints formed by elastomeric flexures and actuator and sensor components embedded in tough rigid polymers. The hand has superior robustness properties, able to withstand large impacts without damage and capable of grasping objects in the presence of large positioning errors. We present experimental results showing that the hand mounted on a three degree of freedom manipulator arm can reliably grasp 5 cm-scale objects in the presence of positioning error of up to 100% of the object size and 10 cm-scale objects in the presence of positioning error of up to 33% of the object size, while keeping acquisition contact forces low.",
"title": ""
},
{
"docid": "fce8ec4c0cc90c085ce5d269c4f8d683",
"text": "Hardware simulation of channel codes offers the potential of improving code evaluation speed by orders of magnitude over workstationor PC-based simulation. We describe a hardware-based Gaussian noise generator used as a key component in a hardware simulation system, for exploring channel code behavior at very low bit error rates (BERs) in the range of 10−9 to 10−10. The main novelty is the design and use of non-uniform piecewise linear approximations in computing trigonometric and logarithmic functions. The parameters of the approximation are chosen carefully to enable rapid computation of coefficients from the inputs, while still retaining extremely high fidelity to the modelled functions. The output of the noise generator accurately models a true Gaussian PDF even at very high σ values. Its properties are explored using: (a) several different statistical tests, including the chi-square test and the Kolmogorov-Smirnov test, and (b) an application for decoding of low density parity check (LDPC) codes. An implementation at 133MHz on a Xilinx Virtex-II XC2V4000-6 FPGA produces 133 million samples per second, which is 40 times faster than a 2.13GHz PC; another implementation on a Xilinx Spartan-IIE XC2S300E-7 FPGA at 62MHz is capable of a 20 times speedup. The performance can be improved by exploiting parallelism: an XC2V4000-6 FPGA with three parallel instances of the noise generator at 126MHz can run 100 times faster than a 2.13GHz PC. We illustrate the deterioration of clock speed with the increase in the number of instances.",
"title": ""
},
{
"docid": "5a6b5e5a977f2a8732c260fb99a67cad",
"text": "The configuration design for a wall-climbing robot which is capable of moving on diversified surfaces of wall and has high payload capability, is discussed, and a developed quadruped wall-climbing robot, NINJA-1, is introduced. NINJA-1 is composed of (1) legs based on a 3D parallel link mechanism capable of producing a powerful driving force for moving on the surface of a wall, (2) a conduit-wire-driven parallelogram mechanism to adjust the posture of the ankles, and (3) a valve-regulated multiple sucker which can provide suction even if there are grooves and small differences in level of the wall. Finally, the data of the trial-manufactured NINJA-1, and the up-to-date status of the walking motion are shown.<<ETX>>",
"title": ""
},
{
"docid": "5cf2c4239507b7d66cec3cf8fabf7f60",
"text": "Government corruption is more prevalent in poor countries than in rich countries. This paper uses cross-industry heterogeneity in growth rates within Vietnam to test empirically whether growth leads to lower corruption. We find that it does. We begin by developing a model of government officials’ choice of how much bribe money to extract from firms that is based on the notion of inter-regional tax competition, and consider how officials’ choices change as the economy grows. We show that economic growth is predicted to decrease the rate of bribe extraction under plausible assumptions, with the benefit to officials of demanding a given share of revenue as bribes outweighed by the increased risk that firms will move elsewhere. This effect is dampened if firms are less mobile. Our empirical analysis uses survey data collected from over 13,000 Vietnamese firms between 2006 and 2010 and an instrumental variables strategy based on industry growth in other provinces. We find, first, that firm growth indeed causes a decrease in bribe extraction. Second, this pattern is particularly true for firms with strong land rights and those with operations in multiple provinces, consistent with these firms being more mobile. Our results suggest that as poor countries grow, corruption could subside “on its own,” and they demonstrate one type of positive feedback between economic growth and good institutions. ∗Contact information: Bai: jieb@mit.edu; Jayachandran: seema@northwestern.edu; Malesky: ejm5@duke.edu; Olken: bolken@mit.edu. We thank Lori Beaman, Raymond Fisman, Chang-Tai Hsieh, Supreet Kaur, Neil McCulloch, Andrei Shleifer, Matthew Stephenson, Eric Verhoogen, and Ekaterina Zhuravskaya for helpful comments.",
"title": ""
},
{
"docid": "584456ef251fbf31363832fc82bd3d42",
"text": "Neural network architectures found by sophistic search algorithms achieve strikingly good test performance, surpassing most human-crafted network models by significant margins. Although computationally efficient, their design is often very complex, impairing execution speed. Additionally, finding models outside of the search space is not possible by design. While our space is still limited, we implement undiscoverable expert knowledge into the economic search algorithm Efficient Neural Architecture Search (ENAS), guided by the design principles and architecture of ShuffleNet V2. While maintaining baselinelike 2.85% test error on CIFAR-10, our ShuffleNASNets are significantly less complex, require fewer parameters, and are two times faster than the ENAS baseline in a classification task. These models also scale well to a low parameter space, achieving less than 5% test error with little regularization and only 236K parameters.",
"title": ""
},
{
"docid": "335e92a896c6cce646f3ae81c5d9a02c",
"text": "Vulnerabilities in web applications allow malicious users to obtain unrestricted access to private and confidential information. SQL injection attacks rank at the top of the list of threats directed at any database-driven application written for the Web. An attacker can take advantages of web application programming security flaws and pass unexpected malicious SQL statements through a web application for execution by the back-end database. This paper proposes a novel specification-based methodology for the detection of exploitations of SQL injection vulnerabilities. The new approach on the one hand utilizes specifications that define the intended syntactic structure of SQL queries that are produced and executed by the web application and on the other hand monitors the application for executing queries that are in violation of the specification.\n The three most important advantages of the new approach against existing analogous mechanisms are that, first, it prevents all forms of SQL injection attacks; second, its effectiveness is independent of any particular target system, application environment, or DBMS; and, third, there is no need to modify the source code of existing web applications to apply the new protection scheme to them.\n We developed a prototype SQL injection detection system (SQL-IDS) that implements the proposed algorithm. The system monitors Java-based applications and detects SQL injection attacks in real time. We report some preliminary experimental results over several SQL injection attacks that show that the proposed query-specific detection allows the system to perform focused analysis at negligible computational overhead without producing false positives or false negatives. Therefore, the new approach is very efficient in practice.",
"title": ""
},
{
"docid": "364d57031cf64e2f8d1b6ab84409bc2e",
"text": "The ability to influence behaviour is central to many of the key policy challenges in areas such as health, finance and climate change. The usual route to behaviour change in economics and psychology has been to attempt to ‘change minds’ by influencing the way people think through information and incentives. There is, however, increasing evidence to suggest that ‘changing contexts’ by influencing the environments within which people act (in largely automatic ways) can have important effects on behaviour. We present a mnemonic, MINDSPACE, which gathers up the nine most robust effects that influence our behaviour in mostly automatic (rather than deliberate) ways. This framework is being used by policymakers as an accessible summary of the academic literature. To motivate further research and academic scrutiny, we provide some evidence of the effects in action and highlight some of the significant gaps in our knowledge. 2011 Elsevier B.V. All rights reserved. 0167-4870/$ see front matter 2011 Elsevier B.V. All rights reserved. doi:10.1016/j.joep.2011.10.009 ⇑ Corresponding author. Tel.: +44 (0)2033127259. E-mail addresses: p.h.dolan@lse.ac.uk (P. Dolan), michael.hallsworth@instituteforgovernment.org.uk (M. Hallsworth), david.halpern@cabinet-office.x.gsi.gov.uk (D. Halpern), dominic.king05@imperial.ac.uk (D. King), robert.metcalfe@merton.ox.ac.uk (R. Metcalfe), i.vlaev@imperial.ac.uk (I. Vlaev). Journal of Economic Psychology 33 (2012) 264–277",
"title": ""
},
{
"docid": "17bf75156f1ffe0daffd3dbc5dec5eb9",
"text": "Celebrities are admired, appreciated and imitated all over the world. As a natural result of this, today many brands choose to work with celebrities for their advertisements. It can be said that the more the brands include celebrities in their marketing communication strategies, the tougher the competition in this field becomes and they allocate a large portion of their marketing budget to this. Brands invest in celebrities who will represent them in order to build the image they want to create. This study aimed to bring under spotlight the perceptions of Turkish customers regarding the use of celebrities in advertisements and marketing communication and try to understand their possible effects on subsequent purchasing decisions. In addition, consumers’ reactions and perceptions were investigated in the context of the product-celebrity match, to what extent the celebrity conforms to the concept of the advertisement and the celebrity-target audience match. In order to achieve this purpose, a quantitative research was conducted as a case study concerning Mavi Jeans (textile company). Information was obtained through survey. The results from this case study are supported by relevant theories concerning the main subject. The most valuable result would be that instead of creating an advertisement around a celebrity in demand at the time, using a celebrity that fits the concept of the advertisement and feeds the concept rather than replaces it, that is celebrity endorsement, will lead to more striking and positive results. Keywords—Celebrity endorsement, product-celebrity match, advertising.",
"title": ""
},
{
"docid": "d0623e90f8bce6818c6cb2f150757659",
"text": "In this paper, an efficient offline signature verification method based on an interval symbolic representation and a fuzzy similarity measure is proposed. In the feature extraction step, a set of local binary pattern-based features is computed from both the signature image and its under-sampled bitmap. Interval-valued symbolic data is then created for each feature in every signature class. As a result, a signature model composed of a set of interval values (corresponding to the number of features) is obtained for each individual’s handwritten signature class. A novel fuzzy similarity measure is further proposed to compute the similarity between a test sample signature and the corresponding interval-valued symbolic model for the verification of the test sample. To evaluate the proposed verification approach, a benchmark offline English signature data set (GPDS-300) and a large data set (BHSig260) composed of Bangla and Hindi offline signatures were used. A comparison of our results with some recent signature verification methods available in the literature was provided in terms of average error rate and we noted that the proposed method always outperforms when the number of training samples is eight or more.",
"title": ""
},
{
"docid": "a2b39f2efb1eeb3f774abe039974700f",
"text": "Beam search reduces the memory consumption of bestfirst search at the cost of finding longer paths but its memory consumption can still exceed the given memory capacity quickly. We therefore develop BULB (Beam search Using Limited discrepancy Backtracking), a complete memory-bounded search method that is able to solve more problem instances of large search problems than beam search and does so with a reasonable runtime. At the same time, BULB tends to find shorter paths than beam search because it is able to use larger beam widths without running out of memory. We demonstrate these properties of BULB experimentally for three standard benchmark domains.",
"title": ""
}
] |
scidocsrr
|
f4c508e456f082695e1a1e052ed2dae7
|
An empirical comparison of models for dropout prophecy in MOOCs
|
[
{
"docid": "2f761de3f94d86a2c73aac3dce413dca",
"text": "The class imbalance problem has been recognized in many practical domains and a hot topic of machine learning in recent years. In such a problem, almost all the examples are labeled as one class, while far fewer examples are labeled as the other class, usually the more important class. In this case, standard machine learning algorithms tend to be overwhelmed by the majority class and ignore the minority class since traditional classifiers seeking an accurate performance over a full range of instances. This paper reviewed academic activities special for the class imbalance problem firstly. Then investigated various remedies in four different levels according to learning phases. Following surveying evaluation metrics and some other related factors, this paper showed some future directions at last.",
"title": ""
},
{
"docid": "af254a16b14a3880c9b8fe5b13f1a695",
"text": "MOOCs or Massive Online Open Courses based on Open Educational Resources (OER) might be one of the most versatile ways to offer access to quality education, especially for those residing in far or disadvantaged areas. This article analyzes the state of the art on MOOCs, exploring open research questions and setting interesting topics and goals for further research. Finally, it proposes a framework that includes the use of software agents with the aim to improve and personalize management, delivery, efficiency and evaluation of massive online courses on an individual level basis.",
"title": ""
}
] |
[
{
"docid": "e13dd00f1bb5ba3a83caf8830714bc79",
"text": "The concensus view has traditionally been that brains evolved to process information of ecological relevance. This view, however, ignores an important consideration: Brains are exceedingly expensive both to evolve and to maintain. The adult human brain weighs about 2% of body weight but consumes about 20% of total energy intake.2 In the light of this, it is difficult to justify the claim that primates, and especially humans, need larger brains than other species merely to do the same ecological job. Claims that primate ecological strategies involve more complex problem-solving3,4 are plausible when applied to the behaviors of particular species, such as termite-extraction by chimpanzees and nut-cracking by Cebus monkeys, but fail to explain why all primates, including those that are conventional folivores, require larger brains than those of all other mammals. An alternative hypothesis offered during the late 1980s was that primates’ large brains reflect the computational demands of the complex social systems that characterize the order.5,6 Prima facie, this suggestion seems plausible: There is ample evidence that primate social systems are more complex than those of other species. These systems can be shown to involve processes such as tactical deception5 and coalition-formation,7,8 which are rare or occur only in simpler forms in other taxonomic groups. Because of this, the suggestion was rapidly dubbed the Machiavellian intelligence hypothesis, although there is a growing preference to call it the social brain hypothesis.9,10 Plausible as it seems, the social brain hypothesis faced a problem that was recognized at an early date. Specifically, what quantitative empirical evidence there was tended to favor one or the other of the ecological hypotheses,1 whereas the evidence adduced in favor of the social brain hypothesis was, at best, anecdotal.6 In this article, I shall first show how we can test between the competing hypotheses more conclusively and then consider some of the implications of the social brain hypothesis for humans. Finally, I shall briefly consider some of the underlying cognitive mechanisms that might be involved.",
"title": ""
},
{
"docid": "8571835aad236d639533680232cdca6c",
"text": "A new approach for the personal identification using hand images is presented. This paper attempts to improve the performance of palmprint-based verification system by integrating hand geometry features. Unlike other bimodal biometric systems, the users does not have to undergo the inconvenience of passing through two sensors since the palmprint and hand geometry features can be are acquired from the same image, using a digital camera, at the same time. Each of these gray level images are aligned and then used to extract palmprint and hand geometry features. These features are then examined for their individual and combined performance. The image acquisition setup used in this work was inherently simple and it does not employ any special illumination nor does it use any pegs to cause any inconvenience to the users. Our experimental results on the image dataset from 100 users confirm the utility of hand geometry features with those from palmprints and achieve promising results with a simple image acquisition setup.",
"title": ""
},
{
"docid": "9808d306dcb3378629718952a0517b26",
"text": "Legged robots have the potential to navigate a much larger variety of terrain than their wheeled counterparts. In this paper we present a hierarchical control architecture that enables a quadruped, the \"LittleDog\" robot, to walk over rough terrain. The controller consists of a high-level planner that plans a set of footsteps across the terrain, a low-level planner that plans trajectories for the robot's feet and center of gravity (COG), and a low-level controller that tracks these desired trajectories using a set of closed-loop mechanisms. We conduct extensive experiments to verify that the controller is able to robustly cross a wide variety of challenging terrains, climbing over obstacles nearly as tall as the robot's legs. In addition, we highlight several elements of the controller that we found to be particularly crucial for robust locomotion, and which are applicable to quadruped robots in general. In such cases we conduct empirical evaluations to test the usefulness of these elements.",
"title": ""
},
{
"docid": "b619ed1913d9db4dd4175fa7caf88c8e",
"text": "This paper presents a comparative study of several state of the art background subtraction (BS) algorithms. The goal is to provide brief solid overview of the strengths and weaknesses of the most widely applied BS methods. Approaches ranging from simple background subtraction with global thresholding to more sophisticated statistical methods have been implemented and tested with ground truth. The interframe difference, approximate median filtering and Gaussian mixture models (GMM) methods are compared relative to their robustness, computational time, and memory requirement. The performance of the algorithms is tested in public datasets. Interframe difference and approximate median filtering are pretty fast, almost five times faster than GMM. Moreover, GMM occupies five times more memory than simpler methods. However, experimental results of GMM are more accurate than simple methods.",
"title": ""
},
{
"docid": "862a46f0888ab8c4b37d4e63be45eb08",
"text": "In this paper we examine the notion of adaptive user interfac s, interactive systems that invoke machine learning to improve their interact ion with humans. We review some previous work in this emerging area, ranging from softw are that filters information to systems that support more complex tasks like scheduling. After this, we describe three ongoing research efforts that extend this framework in new d irections. Finally, we review previous work that has addressed similar issues and conside r some challenges that are presented by the design of adaptive user interfaces. 1 The Need for Automated User Modeling As computers have become more widespread, the software that runs on them has also become more interactive and responsive. Only a few early users remember the days of programming on punch cards and submitting overnight jobs, and even the era of time-s haring systems and text editors has become a dim memory. Modern operating systems support a wid e range of interactive software, from WYSIWYG editors to spreadsheets to computer games, most em bedded in some form of graphical user interface. Such packages have become an essential part of bus iness and academic life, with millions of people depending on them to accomplish thei r daily goals. Naturally, the increased emphasis on interactive software has led to greater in ter st in the study of human-computer interaction. However, most research in this area h s focused on the manner in which computer interfaces present information and choices to the user , and thus tells only part of the story. An equally important issue, yet one that has receiv d much less attention, concerns thecontentthat the interface offers to the user. And a concern with content leads directl y to a focus onuser models , since it seems likely that people will differ in the content they prefer to encounter during their interactions with computers. Developers of software for the Internet are quite aware of the need for p ersonalized content, and many established portals on the World Wide Web provide simple too s f r filtering information. But these tools typically focus on a narrow class of applications an d require manual setting of parameters, a process that users are likely to find tedious. Moreover, som e facets of users’ preferences may be reflected in their behavior but not subject to introspection . Clearly, there is a need for increased personalization in many areas of interactive software, both i n supporting a greater variety of tasks and in ways to automate this process. This suggests turning to techniques from machine learning in order to personalize computer interfaces. ? Also affiliated with the Institute for the Study of Learning a nd Expertise and the Center for the Study of Language and Information at Stanford University. 358 USERMODELING AND ADAPTIVE INTERFACES In the rest of this paper, we examine the notion of adaptive user interfaces – systems that learn a user model from traces of interaction with that user. We start by defining ad aptive interfaces more precisely, drawing a close analogy with algorithms for machine lear ning. Next, we consider some examples of such software artifacts that have appeared in the literatur e, af e which we report on three research efforts that attempt to extend the basic framework in new directions. Finally, we discuss kinships between adaptive user interfaces and some sim ilar paradigms, then close with some challenges they pose for researchers and software developer s. 
2 Adaptive User Interfaces and Machine Learning For most readers, the basic idea of an adaptive user interface will already be cl ear, but for the sake of discussion, we should define this notion somewhat more preci sely: An adaptive user interface is a software artifact that improves its ability to interact with a user by constructing a user model based on partial experience with that user . This definition makes clear that an adaptive interface does not exist in isolat ion, but rather is designed to interact with a human user. Moreover, for the system to be adapt ive, it must improve its interaction with that user, and simple memorization of such interactio ns does not suffice. Rather, improvement should result from generalization over past experiences a d carry over to new user interactions. The above definition will seem familiar to some readers, and for good reason , since it takes the same form as common definitions of machine learning (e.g., Langley, 1 995). The main differences are that the user plays the role of the environment in which learning o ccurs, the user model takes the place of the learned knowledge base, and interaction with the user ser ves as the performance task on which learning should lead to improvement. In this view, adap tive user interfaces constitute a special class of learning systems that are designed to aid humans , in contrast with much of the early applied work on machine learning, which aimed to develop kn wledge-based systems that would replace domain experts. Despite this novel emphasis, many lessons acquired from these earlier appli cations of machine learning should prove relevant in the design of adaptive interfaces . The most important has been the realization that we are still far from entirely automating the lear ning process, and that some essential steps must still be done manually (Brodley and Smy th, 1997; Langley and Simon, 1995; Rudström, 1995). Briefly, to solve an applied probl em using established induction methods, the developer must typically: reformulate the problem in some form that these methods can directly addr ess; engineer a set of features that describe the training cases adequately; and devise some approach to collecting and preparing the training instances. Only after the developer has addressed these issues can he run some learning m ethod over the data to produce the desired domain knowledge or, in the case of an adaptive interface, the desired user model. Moreover, there is an emerging consensus within the applied learning commu nity that these steps of problem formulation, representation engineering, and data collect ion/preparation play a role at least as important as the induction stage itself. Indeed, there is a common belief that, once USERMODELING AND ADAPTIVE INTERFACES 359 they are handled well, the particular induction method one uses has littl e effect on the outcome (Langley and Simon, 1995). In contrast, most academic work on machine lear ning still focuses on refining induction techniques and downplays the steps that must occur b efore and after their invocation. Indeed, some research groups still emphasize differences between b road classes of learning methods, despite evidence that decision-tree induction, connect ionist algorithms, casebased methods, and probabilistic schemes often produce very similar resul ts. We will adopt the former viewpoint in our discussion of adaptive us r interfaces. 
As a result, we will have little to say about the particular learning methods used to construct and refine user models, but we will have comments about the formulation of the t ask, the features used to describe behavior, the source of data about user preferences, and similar is sues. This bias reflects our belief that strategies which have proved successful in other applicatio ns of machine learning will also serve us well in the design of adaptive interfaces. 3 Examples of Adaptive User Interfaces We can clarify the notion of an adaptive user interface by considering some e xamples that have appeared in the literature during recent years. Many of these systems focus on the generic task of information filtering, which involves directing a user’s attention toward items from a large s et that he is likely to find interesting or useful. Naturally, the most popul ar applications revolve around the World Wide Web, which provides both a wealth of information to fil ter and a convenient mechanism for interacting with users. However, the same basic techniques can be extended to broaderrecommendationtasks, such as suggesting products a consumer might want to buy. One example comes from Pazzani, Muramatsu, and Billsus (1996), who descri be SYSKILL & W EBERT, an adaptive interface which recommends web pages on a given topic that a user should find interesting. Much like typical search engines, this system pr sents the user with a list of web pages, but it also labels those candidates it predicts the user will esp ecially like or dislike. Moreover, it lets the user mark pages as desirable or undesirable, and the sys tem records the marked pages as training data for learning the user’s preferences. S YSKILL & W EBERT encodes each user model in terms of the probabilities that certain words will occur gi ven that the person likes (or dislikes) the document. The system invokes the naive Bayesian classifier to learn these probabilities and to predict whether the user will find a particular page des irable. This general approach to selection and learning is often referred to as c ntent-based filtering . Briefly, this scheme represents each item with a set of descriptors, usually t he words that occur in a document, and the filtering system uses these descriptors as predictive f ea ures when deciding whether to recommend a document to the user. This biases the selection process toward documents that are similar to ones the user has previously ranked highly. Ot er examples of adaptive user interfaces that embody the content-based approach include Lang’s (1995) NEWSWEEDER, which recommends news stories, and Boone’s (1998) Re:Agent, which sugg ests actions for handling electronic mail. Of course, content-based methods are also widely used in arch engines for the World Wide Web, and they predominate in the literature on inf ormation retrieval, but these typically do not employ learning algorithms to construct users mo dels. Another example of an adaptive interface is Shardanand and Maes’ (1995) R INGO, an interactive syste",
"title": ""
},
{
"docid": "4b3425ce40e46b7a595d389d61daca06",
"text": "Genetic or acquired destabilization of the dermal extracellular matrix evokes injury- and inflammation-driven progressive soft tissue fibrosis. Dystrophic epidermolysis bullosa (DEB), a heritable human skin fragility disorder, is a paradigmatic disease to investigate these processes. Studies of DEB have generated abundant new information on cellular and molecular mechanisms at play in skin fibrosis which are not only limited to intractable diseases, but also applicable to some of the most common acquired conditions. Here, we discuss recent advances in understanding the biological and mechanical mechanisms driving the dermal fibrosis in DEB. Much of this progress is owed to the implementation of cell and tissue omics studies, which we pay special attention to. Based on the novel findings and increased understanding of the disease mechanisms in DEB, translational aspects and future therapeutic perspectives are emerging.",
"title": ""
},
{
"docid": "c08518b806c93dde1dd04fdf3c9c45bb",
"text": "Purpose – The objectives of this article are to develop a multiple-item scale for measuring e-service quality and to study the influence of perceived quality on consumer satisfaction levels and the level of web site loyalty. Design/methodology/approach – First, there is an explanation of the main attributes of the concepts examined, with special attention being paid to the multi-dimensional nature of the variables and the relationships between them. This is followed by an examination of the validation processes of the measuring instruments. Findings – The validation process of scales suggested that perceived quality is a multidimensional construct: web design, customer service, assurance and order management; that perceived quality influences on satisfaction; and that satisfaction influences on consumer loyalty. Moreover, no differences in these conclusions were observed if the total sample is divided between buyers and information searchers. Practical implications – First, the need to develop user-friendly web sites which ease consumer purchasing and searching, thus creating a suitable framework for the generation of higher satisfaction and loyalty levels. Second, the web site manager should enhance service loyalty, customer sensitivity, personalised service and a quick response to complaints. Third, the web site should uphold sufficient security levels in communications and meet data protection requirements regarding the privacy. Lastly, the need for correct product delivery and product manipulation or service is recommended. Originality/value – Most relevant studies about perceived quality in the internet have focused on web design aspects. Moreover, the existing literature regarding internet consumer behaviour has not fully analysed profits generated by higher perceived quality in terms of user satisfaction and loyalty.",
"title": ""
},
{
"docid": "e2d65924f76331ca8425bd5b2f4a3a83",
"text": "This review is intended to highlight some recent and particularly interesting examples of the synthesis of thiophene derivatives by heterocyclization of readily available S-containing alkyne substrates.",
"title": ""
},
{
"docid": "3b4607a6b0135eba7c4bb0852b78dda9",
"text": "Heart rate variability for the treatment of major depression is a novel, alternative approach that can offer symptom reduction with minimal-to-no noxious side effects. The following material will illustrate some of the work being conducted at our laboratory to demonstrate the efficacy of heart rate variability. Namely, results will be presented regarding our published work on an initial open-label study and subsequent results of a small, unfinished randomized controlled trial.",
"title": ""
},
{
"docid": "dc2c952b5864a167c19b34be6db52389",
"text": "Data mining is popularly used to combat frauds because of its effectiveness. It is a well-defined procedure that takes data as input and produces models or patterns as output. Neural network, a data mining technique was used in this study. The design of the neural network (NN) architecture for the credit card detection system was based on unsupervised method, which was applied to the transactions data to generate four clusters of low, high, risky and high-risk clusters. The self-organizing map neural network (SOMNN) technique was used for solving the problem of carrying out optimal classification of each transaction into its associated group, since a prior output is unknown. The receiver-operating curve (ROC) for credit card fraud (CCF) detection watch detected over 95% of fraud cases without causing false alarms unlike other statistical models and the two-stage clusters. This shows that the performance of CCF detection watch is in agreement with other detection software, but performs better.",
"title": ""
},
{
"docid": "349df0d3c48b6c1b6fcad1935f5e1e0a",
"text": "Automatic facial expression recognition has many potential applications in different areas of human computer interaction. However, they are not yet fully realized due to the lack of an effective facial feature descriptor. In this paper, we present a new appearance-based feature descriptor, the local directional pattern (LDP), to represent facial geometry and analyze its performance in expression recognition. An LDP feature is obtained by computing the edge response values in 8 directions at each pixel and encoding them into an 8 bit binary number using the relative strength of these edge responses. The LDP descriptor, a distribution of LDP codes within an image or image patch, is used to describe each expression image. The effectiveness of dimensionality reduction techniques, such as principal component analysis and AdaBoost, is also analyzed in terms of computational cost saving and classification accuracy. Two well-known machine learning methods, template matching and support vector machine, are used for classification using the Cohn-Kanade and Japanese female facial expression databases. Better classification accuracy shows the superiority of LDP descriptor against other appearance-based feature descriptors.",
"title": ""
},
{
"docid": "ba50550de9920eb3c40da0550663dd32",
"text": "Bile acids are important signaling molecules that regulate cholesterol, glucose, and energy homoeostasis and have thus been implicated in the development of metabolic disorders. Their bioavailability is strongly modulated by the gut microbiota, which contributes to generation of complex individual-specific bile acid profiles. Hence, it is important to have accurate methods at hand for precise measurement of these important metabolites. Here, a rapid and sensitive liquid chromatography-tandem mass spectrometry (LC-MS/MS) method for simultaneous identification and quantitation of primary and secondary bile acids as well as their taurine and glycine conjugates was developed and validated. Applicability of the method was demonstrated for mammalian tissues, biofluids, and cell culture media. The analytical approach mainly consists of a simple and rapid liquid-liquid extraction procedure in presence of deuterium-labeled internal standards. Baseline separation of all isobaric bile acid species was achieved and a linear correlation over a broad concentration range was observed. The method showed acceptable accuracy and precision on intra-day (1.42-11.07 %) and inter-day (2.11-12.71 %) analyses and achieved good recovery rates for representative analytes (83.7-107.1 %). As a proof of concept, the analytical method was applied to mouse tissues and biofluids, but especially to samples from in vitro fermentations with gut bacteria of the family Coriobacteriaceae. The developed method revealed that the species Eggerthella lenta and Collinsella aerofaciens possess bile salt hydrolase activity, and for the first time that the species Enterorhabdus mucosicola is able to deconjugate and dehydrogenate primary bile acids in vitro.",
"title": ""
},
{
"docid": "be29c412c17f9a87829cfe86fd3b1040",
"text": "Nowadays there is a continuously increasing worldwide concern for the development of wastewater treatment technologies. The utilization of iron oxide nanomaterials has received much attention due to their unique properties, such as extremely small size, high surface-area-to-volume ratio, surface modifiability, excellent magnetic properties and great biocompatibility. A range of environmental clean-up technologies have been proposed in wastewater treatment which applied iron oxide nanomaterials as nanosorbents and photocatalysts. Moreover, iron oxide based immobilization technology for enhanced removal efficiency tends to be an innovative research point. This review outlined the latest applications of iron oxide nanomaterials in wastewater treatment, and gaps which limited their large-scale field applications. The outlook for potential applications and further challenges, as well as the likely fate of nanomaterials discharged to the environment were discussed.",
"title": ""
},
{
"docid": "6af7d655d12fb276f5db634f4fc7cb74",
"text": "The letter presents a compact 3-bit 90 ° phase shifter for phased-array applications at the 60 GHz ISM band (IEEE 802.11ad standard). The designed phase shifter is based on reflective-type topology using the proposed reflective loads with binary-weighted digitally-controlled varactor arrays and the transformer-type directional coupler. The measured eight output states of the implemented phase shifter in 65 nm CMOS technology, exhibit phase-resolution of 11.25 ° with an RMS phase error of 5.2 °. The insertion loss is 5.69 ± 1.22 dB at 60 GHz and the return loss is better than 12 dB over 54-66 GHz. The chip demonstrates a compact size of only 0.034 mm2.",
"title": ""
},
{
"docid": "9bae1002ee5ebf0231fe687fd66b8bb5",
"text": "We present a weakly-supervised approach to segmenting proposed drivable paths in images with the goal of autonomous driving in complex urban environments. Using recorded routes from a data collection vehicle, our proposed method generates vast quantities of labelled images containing proposed paths and obstacles without requiring manual annotation, which we then use to train a deep semantic segmentation network. With the trained network we can segment proposed paths and obstacles at run-time using a vehicle equipped with only a monocular camera without relying on explicit modelling of road or lane markings. We evaluate our method on the large-scale KITTI and Oxford RobotCar datasets and demonstrate reliable path proposal and obstacle segmentation in a wide variety of environments under a range of lighting, weather and traffic conditions. We illustrate how the method can generalise to multiple path proposals at intersections and outline plans to incorporate the system into a framework for autonomous urban driving.",
"title": ""
},
{
"docid": "9a75902f8e91aaabaca6e235a91c33f3",
"text": "This article presents and discusses the implementation of a direct volume rendering system for the Web, which articulates a large portion of the rendering task in the client machine. By placing the rendering emphasis in the local client, our system takes advantage of its power, while at the same time eliminates processing from unreliable bottlenecks (e.g. network). The system developed articulates in efficient manner the capabilities of the recently released WebGL standard, which makes available the accelerated graphic pipeline (formerly unusable). The dependency on specially customized hardware is eliminated, and yet efficient rendering rates are achieved. The Web increasingly competes against desktop applications in many scenarios, but the graphical demands of some of the applications (e.g. interactive scientific visualization by volume rendering), have impeded their successful settlement in Web scenarios. Performance, scalability, accuracy, security are some of the many challenges that must be solved before visual Web applications popularize. In this publication we discuss both performance and scalability of the volume rendering by WebGL ray-casting in two different but challenging application domains: medical imaging and radar meteorology.",
"title": ""
},
{
"docid": "b1958bbb9348a05186da6db649490cdd",
"text": "Fourier ptychography (FP) utilizes illumination control and computational post-processing to increase the resolution of bright-field microscopes. In effect, FP extends the fixed numerical aperture (NA) of an objective lens to form a larger synthetic system NA. Here, we build an FP microscope (FPM) using a 40X 0.75NA objective lens to synthesize a system NA of 1.45. This system achieved a two-slit resolution of 335 nm at a wavelength of 632 nm. This resolution closely adheres to theoretical prediction and is comparable to the measured resolution (315 nm) associated with a standard, commercially available 1.25 NA oil immersion microscope. Our work indicates that Fourier ptychography is an attractive method to improve the resolution-versus-NA performance, increase the working distance, and enlarge the field-of-view of high-resolution bright-field microscopes by employing lower NA objectives.",
"title": ""
},
{
"docid": "749b380acf38c39ee3ae7a6576dd63af",
"text": "We present a new method for real-time physics-based simulation supporting many different types of hyperelastic materials. Previous methods such as Position Based or Projective Dynamics are fast, but support only limited selection of materials; even classical materials such as the Neo-Hookean elasticity are not supported. Recently, Xu et al. [2015] introduced new “splinebased materials” which can be easily controlled by artists to achieve desired animation effects. Simulation of these types of materials currently relies on Newton’s method, which is slow, even with only one iteration per timestep. In this paper, we show that Projective Dynamics can be interpreted as a quasi-Newton method. This insight enables very efficient simulation of a large class of hyperelastic materials, including the Neo-Hookean, splinebased materials, and others. The quasi-Newton interpretation also allows us to leverage ideas from numerical optimization. In particular, we show that our solver can be further accelerated using L-BFGS updates (Limitedmemory Broyden-Fletcher-Goldfarb-Shanno algorithm). Our final method is typically more than 10 times faster than one iteration of Newton’s method without compromising quality. In fact, our result is often more accurate than the result obtained with one iteration of Newton’s method. Our method is also easier to implement, implying reduced software development costs.",
"title": ""
},
{
"docid": "4096499f4e34f6c1f0c3bb0bb63fb748",
"text": "A detailed examination of evolving traffic characteristics, operator requirements, and network technology trends suggests a move away from nonblocking interconnects in data center networks (DCNs). As a result, recent efforts have advocated oversubscribed networks with the capability to adapt to traffic requirements on-demand. In this paper, we present the design, implementation, and evaluation of OSA, a novel Optical Switching Architecture for DCNs. Leveraging runtime reconfigurable optical devices, OSA dynamically changes its topology and link capacities, thereby achieving unprecedented flexibility to adapt to dynamic traffic patterns. Extensive analytical simulations using both real and synthetic traffic patterns demonstrate that OSA can deliver high bisection bandwidth (60%-100% of the nonblocking architecture). Implementation and evaluation of a small-scale functional prototype further demonstrate the feasibility of OSA.",
"title": ""
},
{
"docid": "a157987bf55765c495b9949b38c91ea2",
"text": "www.thelancet.com Vol 373 May 16, 2009 1693 Anthony Costello, Mustafa Abbas, Adriana Allen, Sarah Ball, Sarah Bell, Richard Bellamy, Sharon Friel, Nora Groce, Anne Johnson, Maria Kett, Maria Lee, Caren Levy, Mark Maslin, David McCoy, Bill McGuire, Hugh Montgomery, David Napier, Christina Pagel, Jinesh Patel, Jose Antonio Puppim de Oliveira, Nanneke Redclift, Hannah Rees, Daniel Rogger, Joanne Scott, Judith Stephenson, John Twigg, Jonathan Wolff , Craig Patterson*",
"title": ""
}
] |
scidocsrr
|
09b95bca11476f6d8dd9131fcd29a4a7
|
A Hierarchical Model of Approach and Avoidance Achievement Motivation
|
[
{
"docid": "2a1f1576ab73e190dce400dedf80df36",
"text": "No wonder you activities are, reading will be always needed. It is not only to fulfil the duties that you need to finish in deadline time. Reading will encourage your mind and thoughts. Of course, reading will greatly develop your experiences about everything. Reading motivation reconsidered the concept of competence is also a way as one of the collective books that gives many advantages. The advantages are not only for you, but for the other peoples with those meaningful benefits.",
"title": ""
},
{
"docid": "c56c71775a0c87f7bb6c59d6607e5280",
"text": "A correlational study examined relationships between motivational orientation, self-regulated learning, and classroom academic performance for 173 seventh graders from eight science and seven English classes. A self-report measure of student self-efficacy, intrinsic value, test anxiety, self-regulation, and use of learning strategies was administered, and performance data were obtained from work on classroom assignments. Self-efficacy and intrinsic value were positively related to cognitive engagement and performance. Regression analyses revealed that, depending on the outcome measure, self-regulation, self-efficacy, and test anxiety emerged as the best predictors of performance. Intrinsic value did not have a direct influence on performance but was strongly related to self-regulation and cognitive strategy use, regardless of prior achievement level. The implications of individual differences in motivational orientation for cognitive engagement and self-regulation in the classroom are discussed.",
"title": ""
}
] |
[
{
"docid": "88f43c85c32254a5c2859e983adf1c43",
"text": "This study observed naturally occurring emergent leadership behavior in distributed virtual teams. The goal of the study was to understand how leadership behaviors emerge and are distributed in these kinds of teams. Archived team interaction captured during the course of a virtual collaboration exercise was analyzed using an a priori content analytic scheme derived from behaviorally-based leadership theory to capture behavior associated with leadership in virtual environments. The findings lend support to the notion that behaviorally-based leadership theory can provide insights into emergent leadership in virtual environments. This study also provides additional insights into the patterns of leadership that emerge in virtual environments and relationship to leadership behaviors.",
"title": ""
},
{
"docid": "055071ff6809eaea4eeb0a9f64e49274",
"text": "Compressed bitmap indexes are used in systems such as Git or Oracle to accelerate queries. They represent sets and often support operations such as unions, intersections, differences, and symmetric differences. Several important systems such as Elasticsearch, Apache Spark, Netflix’s Atlas, LinkedIn’s Pivot, Metamarkets’ Druid, Pilosa, Apache Hive, Apache Tez, Microsoft Visual Studio Team Services and Apache Kylin rely on a specific type of compressed bitmap index called Roaring. We present an optimized software library written in C implementing Roaring bitmaps: CRoaring. It benefits from several algorithms designed for the single-instruction-multiple-data (SIMD) instructions available on commodity processors. In particular, we present vectorized algorithms to compute the intersection, union, difference and symmetric difference between arrays. We benchmark the library against a wide range of competitive alternatives, identifying weaknesses and strengths in our software. Our work is available under a liberal open-source license.",
"title": ""
},
{
"docid": "5f30867cb3071efa8fb0d34447b8a8f6",
"text": "Money laundering is a global problem that affects all countries to various degrees. Although, many countries take benefits from money laundering, by accepting the money from laundering but keeping the crime abroad, at the long run, “money laundering attracts crime”. Criminals come to know a country, create networks and eventually also locate their criminal activities there. Most financial institutions have been implementing antimoney laundering solutions (AML) to fight investment fraud. The key pillar of a strong Anti-Money Laundering system for any financial institution depends mainly on a well-designed and effective monitoring system. The main purpose of the Anti-Money Laundering transactions monitoring system is to identify potential suspicious behaviors embedded in legitimate transactions. This paper presents a monitor framework that uses various techniques to enhance the monitoring capabilities. This framework is depending on rule base monitoring, behavior detection monitoring, cluster monitoring and link analysis based monitoring. The monitor detection processes are based on a money laundering deterministic finite automaton that has been obtained from their corresponding regular expressions. Index Terms – Anti Money Laundering system, Money laundering monitoring and detecting, Cycle detection monitoring, Suspected Link monitoring.",
"title": ""
},
{
"docid": "66313e7ec725fa6081a9d834ce87cb2e",
"text": "In this paper, the DCXO is based on a Pierce oscillator with two MIM capacitor arrays for tuning the anti-resonant frequency of a 19.2MHz crystal. Each array of MIM capacitors is thermometer-coded and formatted in a matrix shape to facilitate layout. Although a segmented architecture is an area-efficient method for implementing a SC array, a thermometer-coded array provides the best linearity and guarantees a monotonic frequency tuning characteristic, which is of utmost importance in an AFC system.",
"title": ""
},
{
"docid": "90d06c97cdf3b67a81345f284d839c25",
"text": "Open information extraction is an important task in Biomedical domain. The goal of the OpenIE is to automatically extract structured information from unstructured text with no or little supervision. It aims to extract all the relation tuples from the corpus without requiring pre-specified relation types. The existing tools may extract ill-structured or incomplete information, or fail on the Biomedical literature due to the long and complicated sentences. In this paper, we propose a novel pattern-based information extraction method for the wide-window entities (WW-PIE). WW-PIE utilizes dependency parsing to break down the long sentences first and then utilizes frequent textual patterns to extract the high-quality information. The pattern hierarchical grouping organize and structure the extractions to be straightforward and precise. Consequently, comparing with the existing OpenIE tools, WW-PIE produces structured output that can be directly used for downstream applications. The proposed WW-PIE is also capable in extracting n-ary and nested relation structures, which is less studied in the existing methods. Extensive experiments on real-world biomedical corpus from PubMed abstracts demonstrate the power of WW-PIE at extracting precise and well-structured information.",
"title": ""
},
{
"docid": "5b0eaf636d6d8cf0523e3f00290b780f",
"text": "Toward materializing the recently identified potential of cognitive neuroscience for IS research (Dimoka, Pavlou and Davis 2007), this paper demonstrates how functional neuroimaging tools can enhance our understanding of IS theories. Specifically, this study aims to uncover the neural mechanisms that underlie technology adoption by identifying the brain areas activated when users interact with websites that differ on their level of usefulness and ease of use. Besides localizing the neural correlates of the TAM constructs, this study helps understand their nature and dimensionality, as well as uncover hidden processes associated with intentions to use a system. The study also identifies certain technological antecedents of the TAM constructs, and shows that the brain activations associated with perceived usefulness and perceived ease of use predict selfreported intentions to use a system. The paper concludes by discussing the study’s implications for underscoring the potential of functional neuroimaging for IS research and the TAM literature.",
"title": ""
},
{
"docid": "1856090b401a304f1172c2958d05d6b3",
"text": "The Iranian government operates one of the largest and most sophisticated Internet censorship regimes in the world, but the mechanisms it employs have received little research attention, primarily due to lack of access to network connections within the country and personal risks to Iranian citizens who take part. In this paper, we examine the status of Internet censorship in Iran based on network measurements conducted from a major Iranian ISP during the lead up to the June 2013 presidential election. We measure the scope of the censorship by probing Alexa’s top 500 websites in 18 different categories. We investigate the technical mechanisms used for HTTP Host–based blocking, keyword filtering, DNS hijacking, and protocol-based throttling. Finally, we map the network topology of the censorship infrastructure and find evidence that it relies heavily on centralized equipment, a property that might be fruitfully exploited by next generation approaches to censorship circumvention.",
"title": ""
},
{
"docid": "34398d644ba55ea1a49e5703dd3275ae",
"text": "Swimming is a sport that requires considerable training commitment to reach individual performance goals. Nutrition requirements are specific to the macrocycle, microcycle, and individual session. Swimmers should ensure suitable energy availability to support training while maintaining long term health. Carbohydrate intake, both over the day and in relation to a workout, should be manipulated (3-10 g/kg of body mass/day) according to the fuel demands of training and the varying importance of undertaking these sessions with high carbohydrate availability. Swimmers should aim to consume 0.3 g of high-biological-value protein per kilogram of body mass immediately after key sessions and at regular intervals throughout the day to promote tissue adaptation. A mixed diet consisting of a variety of nutrient-dense food choices should be sufficient to meet the micronutrient requirements of most swimmers. Specific dietary supplements may prove beneficial to swimmers in unique situations, but should be tried only with the support of trained professionals. All swimmers, particularly adolescent and youth swimmers, are encouraged to focus on a well-planned diet to maximize training performance, which ensures sufficient energy availability especially during periods of growth and development. Swimmers are encouraged to avoid rapid weight fluctuations; rather, optimal body composition should be achieved over longer periods by modest dietary modifications that improve their food choices. During periods of reduced energy expenditure (taper, injury, off season) swimmers are encouraged to match energy intake to requirement. Swimmers undertaking demanding competition programs should ensure suitable recovery practices are used to maintain adequate glycogen stores over the entirety of the competition period.",
"title": ""
},
{
"docid": "9090999f7fdaad88943f4dc4dca414d6",
"text": "Collaborative reasoning for understanding each image-question pair is very critical but underexplored for an interpretable visual question answering system. Although very recent works also attempted to use explicit compositional processes to assemble multiple subtasks embedded in the questions, their models heavily rely on annotations or handcrafted rules to obtain valid reasoning processes, leading to either heavy workloads or poor performance on composition reasoning. In this paper, to better align image and language domains in diverse and unrestricted cases, we propose a novel neural network model that performs global reasoning on a dependency tree parsed from the question, and we thus phrase our model as parse-tree-guided reasoning network (PTGRN). This network consists of three collaborative modules: i) an attention module to exploit the local visual evidence for each word parsed from the question, ii) a gated residual composition module to compose the previously mined evidence, and iii) a parse-tree-guided propagation module to pass the mined evidence along the parse tree. Our PTGRN is thus capable of building an interpretable VQA system that gradually derives the image cues following a question-driven parse-tree reasoning route. Experiments on relational datasets demonstrate the superiority of our PTGRN over current state-of-the-art VQA methods, and the visualization results highlight the explainable capability of our reasoning system.",
"title": ""
},
{
"docid": "54e2dfd355e9e082d9a6f8c266c84360",
"text": "The wealth and value of organizations are increasingly based on intellectual capital. Although acquiring talented individuals and investing in employee learning adds value to the organization, reaping the benefits of intellectual capital involves translating the wisdom of employees into reusable and sustained actions. This requires a culture that creates employee commitment, encourages learning, fosters sharing, and involves employees in decision making. An infrastructure to recognize and embed promising and best practices through social networks, evidence-based practice, customization of innovations, and use of information technology results in increased productivity, stronger financial performance, better patient outcomes, and greater employee and customer satisfaction.",
"title": ""
},
{
"docid": "a958ded315a2de150f46c92ac9a5a414",
"text": "Dynamic binary analysis techniques play a central role to study the security of software systems and detect vulnerabilities in a broad range of devices and applications. Over the past decade, a variety of different techniques have been published, often alongside the release of prototype tools to demonstrate their effectiveness. Unfortunately, most of those techniques’ implementations are deeply coupled with their dynamic analysis frameworks and are not easy to integrate in other frameworks. Those frameworks are not designed to expose their internal state or their results to other components. This prevents analysts from being able to combine together different tools to exploit their strengths and tackle complex problems which requires a combination of sophisticated techniques. Fragmentation and isolation are two important problems which too often results in duplicated efforts or in multiple equivalent solutions for the same problem – each based on a different programming language, abstraction model, or execution environment. In this paper, we present avatar2, a dynamic multi-target orchestration framework designed to enable interoperability between different dynamic binary analysis frameworks, debuggers, emulators, and real physical devices. Avatar2 allows the analyst to organize different tools in a complex topology and then “move” the execution of binary code from one system to the other. The framework supports the automated transfer of the internal state of the device/application, as well as the configurable forwarding of input/output and memory accesses to physical peripherals or emulated targets. To demonstrate avatar2 usage and versatility, in this paper we present three very different use cases in which we replicate a PLC rootkit presented at NDSS 2017, we test Firefox combining Angr and GDB, and we record the execution of an embedded device firmware using PANDA and OpenOCD. All tools and the three use cases will be released as open source to help other researchers to replicate our experiments and perform their own analysis tasks with avatar2.",
"title": ""
},
{
"docid": "e2be1b93be261deac59b5afde2f57ae1",
"text": "The electronic and transport properties of carbon nanotube has been investigated in presence of ammonia gas molecule, using Density Functional Theory (DFT) based ab-initio approach. The model of CNT sensor has been build using zigzag (7, 0) CNT with a NH3 molecule adsorbed on its surface. The presence of NH3 molecule results in increase of CNT band gap. From the analysis of I-V curve, it is observed that the adsorption of NH3 leads to different voltage and current curve in comparison to its pristine state confirms the presence of NH3.",
"title": ""
},
{
"docid": "ca8aba51ab75cb86a32b6913ed9690cc",
"text": "Capsicum is a lightweight operating system capability and sandbox framework planned for inclusion in FreeBSD 9. Capsicum extends, rather than replaces, UNIX APIs, providing new kernel primitives (sandboxed capability mode and capabilities) and a userspace sandbox API. These tools support the compartmentalization of monolithic UNIX applications into logical applications. We demonstrate our approach by adapting core FreeBSD utilities and Google’s Chromium web browser to use Capsicum primitives, and compare the complexity and robustness of Capsicum with other sandboxing techniques.",
"title": ""
},
{
"docid": "c3e46c3317d81b2d8b8c53f7e5cd37b9",
"text": "A novel rainfall prediction method has been proposed. In the present work rainfall prediction in Southern part of West Bengal (India) has been conducted. A two-step method has been employed. Greedy forward selection algorithm is used to reduce the feature set and to find the most promising features for rainfall prediction. First, in the training phase the data is clustered by applying k-means algorithm, then for each cluster a separate Neural Network (NN) is trained. The proposed two step prediction model (Hybrid Neural Network or HNN) has been compared with MLP-FFN classifier in terms of several statistical performance measuring metrics. The data for experimental purpose is collected by Dumdum meteorological station (West Bengal, India) over the period from 1989 to 1995. The experimental results have suggested a reasonable improvement over traditional methods in predicting rainfall. The proposed HNN model outperformed the compared models by achieving 84.26% accuracy without feature selection and 89.54% accuracy with feature selection.",
"title": ""
},
{
"docid": "882390e6f557c044cd0774b3edf9ce89",
"text": "Over five years ago, The Leadership Quarterly published a special issue on complexity to advance a new way of thinking about leadership. In shifting attention away from the individual to the organizing process itself, complexity added an important focus on process and context to leadership and management research. Yet, the complexity approach creates challenges for researchers who must combine or replace individual level constructs—like those built through surveys or factor analysis—with richer theories that investigate networked meso dynamics, multilevel phenomena, emergent processes, and organizational outcomes. To address this challenge, the present analysis draws on theoretical and empirical work over the last several years to identify five specific areas where complexity inspired research has led to new insights about the mechanisms that enable the organization to perform and adapt. It suggests propositions that describe how leadership and management, defined holistically, might activate complexity mechanisms to perform five essential organizing functions.",
"title": ""
},
{
"docid": "edaa5ba6a10b6c4e66a895b7647881b9",
"text": "A possible side-effect of exposure to non-native sounds is a change in the way we perceive native sounds. Previous studies have demonstrated that native speakers’ speech production can change as a result of learning a new language, but little work has been carried out to measure the perceptual consequences of exposure. The current study examined how intensive exposure to Spanish intervocalic consonants affected Chinese learners with no prior experience of Spanish. Before, during and after a training period, listeners undertook both an adaptive noise task, which measured the noise level at which listeners could identify native language consonants, and an assimilation task, in which listeners assigned Spanish consonants to Chinese consonant categories. Listeners exhibited a significantly reduced noise tolerance for the Chinese consonants /l/ and /w/ following exposure to Spanish. These two consonants also showed the largest reductions in Spanish to Chinese category assimilations. Taken together, these findings suggest that Chinese listeners modified their native language categories boundaries as a result of exposure to Spanish sounds in order to accommodate them, and that as a consequence their identification performance in noise reduced. Some differences between the two sounds in the time-course of recovery from perceptual adaptation were observed.",
"title": ""
},
{
"docid": "b99b9f80b4f0ca4a8d42132af545be76",
"text": "By: Catherine L. Anderson Decision, Operations, and Information Technologies Department Robert H. Smith School of Business University of Maryland Van Munching Hall College Park, MD 20742-1815 U.S.A. Catherine_Anderson@rhsmith.umd.edu Ritu Agarwal Center for Health Information and Decision Systems University of Maryland 4327 Van Munching Hall College Park, MD 20742-1815 U.S.A. ragarwal@rhsmith.umd.edu",
"title": ""
},
{
"docid": "7ff291833a25ca1a073ebc2a2e5274e7",
"text": "High precision ground truth data is a very important factor for the development and evaluation of computer vision algorithms and especially for advanced driver assistance systems. Unfortunately, some types of data, like accurate optical flow and depth as well as pixel-wise semantic annotations are very difficult to obtain. In order to address this problem, in this paper we present a new framework for the generation of high quality synthetic camera images, depth and optical flow maps and pixel-wise semantic annotations. The framework is based on a realistic driving simulator called VDrift [1], which allows us to create traffic scenarios very similar to those in real life. We show how we can use the proposed framework to generate an extensive dataset for the task of multi-class image segmentation. We use the dataset to train a pairwise CRF model and to analyze the effects of using various combinations of features in different image modalities.",
"title": ""
},
{
"docid": "8e23ef656b501814fc44c609feebe823",
"text": "This paper proposes an approach for segmentation and semantic labeling of RGBD data based on the joint usage of geometrical clues and deep learning techniques. An initial oversegmentation is performed using spectral clustering and a set of NURBS surfaces is then fitted on the extracted segments. The input data are then fed to a Convolutional Neural Network (CNN) together with surface fitting parameters. The network is made of nine convolutional stages followed by a softmax classifier and produces a per-pixel descriptor vector for each sample. An iterative merging procedure is then used to recombine the segments into the regions corresponding to the various objects and surfaces. The couples of adjacent segments with higher similarity according to the CNN features are considered for merging and the NURBS surface fitting accuracy is used in order to understand if the selected couples correspond to a single surface. By combining the obtained segmentation with the descriptors from the CNN a set of labeled segments is obtained. The comparison with state-of-the-art methods shows how the proposed method provides an accurate and reliable scene segmentation and labeling.",
"title": ""
},
{
"docid": "b229aa8b39b3df3fec941ce4791a2fe9",
"text": "Translating information between text and image is a fundamental problem in artificial intelligence that connects natural language processing and computer vision. In the past few years, performance in image caption generation has seen significant improvement through the adoption of recurrent neural networks (RNN). Meanwhile, text-to-image generation begun to generate plausible images using datasets of specific categories like birds and flowers. We've even seen image generation from multi-category datasets such as the Microsoft Common Objects in Context (MSCOCO) through the use of generative adversarial networks (GANs). Synthesizing objects with a complex shape, however, is still challenging. For example, animals and humans have many degrees of freedom, which means that they can take on many complex shapes. We propose a new training method called Image-Text-Image (I2T2I) which integrates text-to-image and image-to-text (image captioning) synthesis to improve the performance of text-to-image synthesis. We demonstrate that I2T2I can generate better multi-categories images using MSCOCO than the state-of-the-art. We also demonstrate that I2T2I can achieve transfer learning by using a pre-trained image captioning module to generate human images on the MPII Human Pose dataset (MHP) without using sentence annotation.",
"title": ""
}
] |
scidocsrr
|
755e63c8ac7001a58cab1f72e1b00f68
|
A Multiplicative Coordinated Stealthy Attack and its Detection for Cyber Physical Systems
|
[
{
"docid": "4438015370e500c4bcdc347b3e332538",
"text": "This article provides a tutorial introduction to modeling, estimation, and control for multirotor aerial vehicles that includes the common four-rotor or quadrotor case.",
"title": ""
},
{
"docid": "931c7ce54ed22a838a5b2b44c9182a4c",
"text": "This is the second-part paper of the survey on fault diagnosis and fault-tolerant techniques, where fault diagnosis methods and applications are overviewed, respectively, from the knowledge-based and hybrid/active viewpoints. With the aid of the first-part survey paper, the second-part review paper completes a whole overview on fault diagnosis techniques and their applications. Comments on the advantages and constraints of various diagnosis techniques, including model-based, signal-based, knowledge-based, and hybrid/active diagnosis techniques, are also given. An overlook on the future development of fault diagnosis is presented.",
"title": ""
},
{
"docid": "f330cfad6e7815b1b0670217cd09b12e",
"text": "In this paper we study the effect of false data injection attacks on state estimation carried over a sensor network monitoring a discrete-time linear time-invariant Gaussian system. The steady state Kalman filter is used to perform state estimation while a failure detector is employed to detect anomalies in the system. An attacker wishes to compromise the integrity of the state estimator by hijacking a subset of sensors and sending altered readings. In order to inject fake sensor measurements without being detected the attacker will need to carefully design his actions to fool the estimator as abnormal sensor measurements would result in an alarm. It is important for a designer to determine the set of all the estimation biases that an attacker can inject into the system without being detected, providing a quantitative measure of the resilience of the system to such attacks. To this end, we will provide an ellipsoidal algorithm to compute its inner and outer approximations of such set. A numerical example is presented to further illustrate the effect of false data injection attack on state estimation.",
"title": ""
}
] |
[
{
"docid": "f2707d7fcd5d8d9200d4cc8de8ff1042",
"text": "This paper describes recent work on the “Crosswatch” project, which is a computer vision-based smartphone system developed for providing guidance to blind and visually impaired travelers at traffic intersections. A key function of Crosswatch is self-localization - the estimation of the user's location relative to the crosswalks in the current traffic intersection. Such information may be vital to users with low or no vision to ensure that they know which crosswalk they are about to enter, and are properly aligned and positioned relative to the crosswalk. However, while computer vision-based methods have been used for finding crosswalks and helping blind travelers align themselves to them, these methods assume that the entire crosswalk pattern can be imaged in a single frame of video, which poses a significant challenge for a user who lacks enough vision to know where to point the camera so as to properly frame the crosswalk. In this paper we describe work in progress that tackles the problem of crosswalk detection and self-localization, building on recent work describing techniques enabling blind and visually impaired users to acquire 360° image panoramas while turning in place on a sidewalk. The image panorama is converted to an aerial (overhead) view of the nearby intersection, centered on the location that the user is standing at, so as to facilitate matching with a template of the intersection obtained from Google Maps satellite imagery. The matching process allows crosswalk features to be detected and permits the estimation of the user's precise location relative to the crosswalk of interest. We demonstrate our approach on intersection imagery acquired by blind users, thereby establishing the feasibility of the approach.",
"title": ""
},
{
"docid": "211652e7019b1f03048a715b65710163",
"text": "Current industry trends in enterprise architectures indicate movement from Service-Oriented Architecture (SOA) to Microservices. By understanding the key differences between these two approaches and their features, we can design a more effective Microservice architecture by avoiding SOA pitfalls. To do this, we must know why this shift is happening and how key SOA functionality is addressed by key features of the Microservice-based system. Unfortunately, Microservices do not address all SOA shortcomings. In addition, Microservices introduce new challenges. This work provides a detailed analysis of the differences between these two architectures and their features. Next, we describe both research and industry perspectives on the strengths and weaknesses of both architectural directions. Finally, we perform a systematic mapping study related to Microservice research, identifying interest and challenges in multiple categories from a range of recent research.",
"title": ""
},
{
"docid": "ccf8e1f627af3fe1327a4fa73ac12125",
"text": "One of the most common needs in manufacturing plants is rejecting products not coincident with the standards as anomalies. Accurate and automatic anomaly detection improves product reliability and reduces inspection cost. Probabilistic models have been employed to detect test samples with lower likelihoods as anomalies in unsupervised manner. Recently, a probabilistic model called deep generative model (DGM) has been proposed for end-to-end modeling of natural images and already achieved a certain success. However, anomaly detection of machine components with complicated structures is still challenging because they produce a wide variety of normal image patches with low likelihoods. For overcoming this difficulty, we propose unregularized score for the DGM. As its name implies, the unregularized score is the anomaly score of the DGM without the regularization terms. The unregularized score is robust to the inherent complexity of a sample and has a smaller risk of rejecting a sample appearing less frequently but being coincident with the standards.",
"title": ""
},
{
"docid": "4933f3f3007dab687fc852e9c2b1ab0a",
"text": "This paper presents a topology for bidirectional solid-state transformers with a minimal device count. The topology, referenced as dynamic-current or Dyna-C, has two current-source inverter stages with a high-frequency galvanic isolation, requiring 12 switches for four-quadrant three-phase ac/ac power conversion. The topology has voltage step-up/down capability, and the input and output can have arbitrary power factors and frequencies. Further, the Dyna-C can be configured as isolated power converters for single- or multiterminal dc, and single- or multiphase ac systems. The modular nature of the Dyna-C lends itself to be connected in series and/or parallel for high-voltage high-power applications. The proposed converter topology can find a broad range of applications such as isolated battery chargers, uninterruptible power supplies, renewable energy integration, smart grid, and power conversion for space-critical applications including aviation, locomotives, and ships. This paper outlines various configurations of the Dyna-C, as well as the relative operation and controls. The converter functionality is validated through simulations and experimental measurements of a 50-kVA prototype.",
"title": ""
},
{
"docid": "3725922023dbb52c1bde309dbe4d76ca",
"text": "BACKGROUND\nRecent studies demonstrate that low-level laser therapy (LLLT) modulates many biochemical processes, especially the decrease of muscle injures, the increase in mitochondrial respiration and ATP synthesis for accelerating the healing process.\n\n\nOBJECTIVE\nIn this work, we evaluated mitochondrial respiratory chain complexes I, II, III and IV and succinate dehydrogenase activities after traumatic muscular injury.\n\n\nMETHODS\nMale Wistar rats were randomly divided into three groups (n=6): sham (uninjured muscle), muscle injury without treatment, muscle injury with LLLT (AsGa) 5J/cm(2). Gastrocnemius injury was induced by a single blunt-impact trauma. LLLT was used 2, 12, 24, 48, 72, 96, and 120 hours after muscle-trauma.\n\n\nRESULTS\nOur results showed that the activities of complex II and succinate dehydrogenase after 5days of muscular lesion were significantly increased when compared to the control group. Moreover, our results showed that LLLT significantly increased the activities of complexes I, II, III, IV and succinate dehydrogenase, when compared to the group of injured muscle without treatment.\n\n\nCONCLUSION\nThese results suggest that the treatment with low-level laser may induce an increase in ATP synthesis, and that this may accelerate the muscle healing process.",
"title": ""
},
{
"docid": "14fac04f802367a56a03fcdce88044f8",
"text": "Humidity measurement is one of the most significant issues in various areas of applications such as instrumentation, automated systems, agriculture, climatology and GIS. Numerous sorts of humidity sensors fabricated and developed for industrial and laboratory applications are reviewed and presented in this article. The survey frequently concentrates on the RH sensors based upon their organic and inorganic functional materials, e.g., porous ceramics (semiconductors), polymers, ceramic/polymer and electrolytes, as well as conduction mechanism and fabrication technologies. A significant aim of this review is to provide a distinct categorization pursuant to state of the art humidity sensor types, principles of work, sensing substances, transduction mechanisms, and production technologies. Furthermore, performance characteristics of the different humidity sensors such as electrical and statistical data will be detailed and gives an added value to the report. By comparison of overall prospects of the sensors it was revealed that there are still drawbacks as to efficiency of sensing elements and conduction values. The flexibility offered by thick film and thin film processes either in the preparation of materials or in the choice of shape and size of the sensor structure provides advantages over other technologies. These ceramic sensors show faster response than other types.",
"title": ""
},
{
"docid": "bdee6c92bcc4437e2f4139078dde72b3",
"text": "In daily life, speech perception is usually accompanied by other tasks that tap into working memory capacity. However, the role of working memory on speech processing is not clear. The goal of this study was to examine how working memory load affects the timeline for spoken word recognition in ideal listening conditions. We used the \"visual world\" eye-tracking paradigm. The task consisted of spoken instructions referring to one of four objects depicted on a computer monitor (e.g., \"point at the candle\"). Half of the trials presented a phonological competitor to the target word that either overlapped in the initial syllable (onset) or at the last syllable (offset). Eye movements captured listeners' ability to differentiate the target noun from its depicted phonological competitor (e.g., candy or sandal). We manipulated working memory load by using a digit pre-load task, where participants had to retain either one (low-load) or four (high-load) spoken digits for the duration of a spoken word recognition trial. The data show that the high-load condition delayed real-time target discrimination. Specifically, a four-digit load was sufficient to delay the point of discrimination between the spoken target word and its phonological competitor. Our results emphasize the important role working memory plays in speech perception, even when performed by young adults in ideal listening conditions.",
"title": ""
},
{
"docid": "9b2d4f4ac47582be573700b5446a5ff5",
"text": "The 1-min sit-to-stand (1-min STS) test and handgrip strength test have been proposed as simple tests of functional exercise performance in chronic obstructive pulmonary disease (COPD) patients. We assessed the long-term (5-year) predictive performance of the 1-min sit-to-stand and handgrip strength tests for mortality, health-related quality of life (HRQoL) and exacerbations in COPD patients. In 409 primary care patients, we found the 1-min STS test to be strongly associated with long-term morality (hazard ratio per 3 more repetitions: 0.81, 95% CI 0.65 to 0.86) and moderately associated with long-term HRQoL. Neither test was associated with exacerbations. Our results suggest that the 1-min STS test may be useful for assessing the health status and long-term prognosis of COPD patients. This study was registered at http://www.clinicaltrials.gov/ (NCT00706602, 25 June 2008).",
"title": ""
},
{
"docid": "be0f836ec6431b74342b670921ac41f7",
"text": "This paper addresses the issue of expert finding in a social network. The task of expert finding, as one of the most important research issues in social networks, is aimed at identifying persons with relevant expertise or experience for a given topic. In this paper, we propose a propagation-based approach that takes into consideration of both person local information and network information (e.g. relationships between persons). Experimental results show that our approach can outperform the baseline approach.",
"title": ""
},
{
"docid": "5350af2d42f9321338e63666dcd42343",
"text": "Robot-aided physical therapy should encourage subject's voluntary participation to achieve rapid motor function recovery. In order to enhance subject's cooperation during training sessions, the robot should allow deviation in the prescribed path depending on the subject's modified limb motions subsequent to the disability. In the present work, an interactive training paradigm based on the impedance control was developed for a lightweight intrinsically compliant parallel ankle rehabilitation robot. The parallel ankle robot is powered by pneumatic muscle actuators (PMAs). The proposed training paradigm allows the patients to modify the robot imposed motions according to their own level of disability. The parallel robot was operated in four training modes namely position control, zero-impedance control, nonzero-impedance control with high compliance, and nonzero-impedance control with low compliance to evaluate the performance of proposed control scheme. The impedance control scheme was evaluated on 10 neurologically intact subjects. The experimental results show that an increase in robotic compliance encouraged subjects to participate more actively in the training process. This work advances the current state of the art in the compliant actuation of parallel ankle rehabilitation robots in the context of interactive training.",
"title": ""
},
{
"docid": "4592c8f5758ccf20430dbec02644c931",
"text": "Taylor & Francis makes every effort to ensure the accuracy of all the information (the “Content”) contained in the publications on our platform. However, Taylor & Francis, our agents, and our licensors make no representations or warranties whatsoever as to the accuracy, completeness, or suitability for any purpose of the Content. Any opinions and views expressed in this publication are the opinions and views of the authors, and are not the views of or endorsed by Taylor & Francis. The accuracy of the Content should not be relied upon and should be independently verified with primary sources of information. Taylor and Francis shall not be liable for any losses, actions, claims, proceedings, demands, costs, expenses, damages, and other liabilities whatsoever or howsoever caused arising directly or indirectly in connection with, in relation to or arising out of the use of the Content.",
"title": ""
},
{
"docid": "197797b3bb51791a5986d0ee0ea04d2b",
"text": "Energy harvesting for wireless communication networks is a new paradigm that allows terminals to recharge their batteries from external energy sources in the surrounding environment. A promising energy harvesting technology is wireless power transfer where terminals harvest energy from electromagnetic radiation. Thereby, the energy may be harvested opportunistically from ambient electromagnetic sources or from sources that intentionally transmit electromagnetic energy for energy harvesting purposes. A particularly interesting and challenging scenario arises when sources perform simultaneous wireless information and power transfer (SWIPT), as strong signals not only increase power transfer but also interference. This article provides an overview of SWIPT systems with a particular focus on the hardware realization of rectenna circuits and practical techniques that achieve SWIPT in the domains of time, power, antennas, and space. The article also discusses the benefits of a potential integration of SWIPT technologies in modern communication networks in the context of resource allocation and cooperative cognitive radio networks.",
"title": ""
},
{
"docid": "c17e6363762e0e9683b51c0704d43fa7",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
},
{
"docid": "dafd0e92a6f06bf488c2a52a3e439ffb",
"text": "Twitter (and similar microblogging services) has become a central nexus for discussion of the topics of the day. Twitter data contains rich content and structured information on users’ topics of interest and behavior patterns. Correctly analyzing and modeling Twitter data enables the prediction of the user behavior and preference in a variety of practical applications, such as tweet recommendation and followee recommendation. Although a number of models have been developed on Twitter data in prior work, most of these only model the tweets from users, while neglecting their valuable retweet information in the data. Models would enhance their predictive power by incorporating users’ retweet content as well as their retweet behavior. In this paper, we propose two novel Bayesian nonparametric models, URM and UCM, on retweet data. Both of them are able to integrate the analysis of tweet text and users’ retweet behavior in the same probabilistic framework. Moreover, they both jointly model users’ interest in tweet and retweet. As nonparametric models, URM and UCM can automatically determine the parameters of the models based on input data, avoiding arbitrary parameter settings. Extensive experiments on real-world Twitter data show that both URM and UCM are superior to all the baselines, while UCM further outperforms URM, confirming the appropriateness of our models in retweet modeling.",
"title": ""
},
{
"docid": "7c8d5da89424dfba8fc84c7cb4f36856",
"text": "Advances in sensor data collection technology, such as pervasive and embedded devices, and RFID Technology have lead to a large number of smart devices which are connected to the net and continuously transmit their data over time. It has been estimated that the number of internet connected devices has overtaken the number of humans on the planet, since 2008. The collection and processing of such data leads to unprecedented challenges in mining and processing such data. Such data needs to be processed in real-time and the processing may be highly distributed in nature. Even in cases, where the data is stored offline, the size of the data is often so large and distributed, that it requires the use of big data analytical tools for processing. In addition, such data is often sensitive, and brings a number of privacy challenges associated 384 MANAGING AND MINING SENSOR DATA with it. This chapter will discuss a data analytics perspective about mining and managing data associated with this phenomenon, which is now known as the internet of things.",
"title": ""
},
{
"docid": "a925a1eda15fea570c6c11ea2661c8b0",
"text": "Many social interactions and services today depend on gender. In this paper, we investigate the problem of gender classification from hand shape. Our work has been motivated by studies in anthropometry and psychology suggesting that it is possible to distinguish between male and female hands by considering certain geometric features. Our system segments the hand silhouette into six different parts corresponding to the palm and fingers. To represent the geometry of each part, we use region and boundary features based on Zernike moments and Fourier descriptors. For classification, we compute the distance of a given part from two different eigenspaces, one corresponding to the male class and the other corresponding to female class. We have experimented using each part of the hand separately as well as fusing information from different parts of the hand. Using a small database containing 20 males and 20 females, we report classification results close to 98% using score-level fusion and LDA.",
"title": ""
},
{
"docid": "96e9c66453ba91d1bc44bb0242f038ce",
"text": "Body temperature is one of the key parameters for health monitoring of premature infants at the neonatal intensive care unit (NICU). In this paper, we propose and demonstrate a design of non-invasive neonatal temperature monitoring with wearable sensors. A negative temperature coefficient (NTC) resistor is applied as the temperature sensor due to its accuracy and small size. Conductive textile wires are used to make the sensor integration compatible for a wearable non-invasive monitoring platform, such as a neonatal smart jacket. Location of the sensor, materials and appearance are designed to optimize the functionality, patient comfort and the possibilities for aesthetic features. A prototype belt is built of soft bamboo fabrics with NTC sensor integrated to demonstrate the temperature monitoring. Experimental results from the testing on neonates at NICU of Máxima Medical Center (MMC), Veldhoven, the Netherlands, show the accurate temperature monitoring by the prototype belt comparing with the standard patient monitor.",
"title": ""
},
{
"docid": "f120d34996b155a413247add6adc6628",
"text": "The storage and computation requirements of Convolutional Neural Networks (CNNs) can be prohibitive for exploiting these models over low-power or embedded devices. This paper reduces the computational complexity of the CNNs by minimizing an objective function, including the recognition loss that is augmented with a sparsity-promoting penalty term. The sparsity structure of the network is identified using the Alternating Direction Method of Multipliers (ADMM), which is widely used in large optimization problems. This method alternates between promoting the sparsity of the network and optimizing the recognition performance, which allows us to exploit the two-part structure of the corresponding objective functions. In particular, we take advantage of the separability of the sparsity-inducing penalty functions to decompose the minimization problem into sub-problems that can be solved sequentially. Applying our method to a variety of state-of-the-art CNN models, our proposed method is able to simplify the original model, generating models with less computation and fewer parameters, while maintaining and often improving generalization performance. Accomplishments on a variety of models strongly verify that our proposed ADMM-based method can be a very useful tool for simplifying and improving deep CNNs.",
"title": ""
},
{
"docid": "462afb864b255f94deefb661174a598b",
"text": "Due to the heterogeneous and resource-constrained characters of Internet of Things (IoT), how to guarantee ubiquitous network connectivity is challenging. Although LTE cellular technology is the most promising solution to provide network connectivity in IoTs, information diffusion by cellular network not only occupies its saturating bandwidth, but also costs additional fees. Recently, NarrowBand-IoT (NB-IoT), introduced by 3GPP, is designed for low-power massive devices, which intends to refarm wireless spectrum and increase network coverage. For the sake of providing high link connectivity and capacity, we stimulate effective cooperations among user equipments (UEs), and propose a social-aware group formation framework to allocate resource blocks (RBs) effectively following an in-band NB-IoT solution. Specifically, we first introduce a social-aware multihop device-to-device (D2D) communication scheme to upload information toward the eNodeB within an LTE, so that a logical cooperative D2D topology can be established. Then, we formulate the D2D group formation as a scheduling optimization problem for RB allocation, which selects the feasible partition for the UEs by jointly considering relay method selection and spectrum reuse for NB-IoTs. Since the formulated optimization problem has a high computational complexity, we design a novel heuristic with a comprehensive consideration of power control and relay selection. Performance evaluations based on synthetic and real trace simulations manifest that the presented method can significantly increase link connectivity, link capacity, network throughput, and energy efficiency comparing with the existing solutions.",
"title": ""
}
] |
scidocsrr
|
d9fde5ec276095538537f2bdcc536ba8
|
OpenCog: A Software Framework for Integrative Artificial General Intelligence
|
[
{
"docid": "981b4df564d412024d3c9603cc575c69",
"text": "The Novamente AI Engine, a novel AI software system, is briefly reviewed. Unlike the majority of contemporary AI projects, Novamente is aimed at artificial general intelligence, rather than being restricted by design to one particular application domain, or to a narrow range of cognitive functions. Novamente integrates aspects of many prior AI projects and paradigms, including symbolic, neural-network, evolutionary programming and reinforcement learning approaches; but its overall architecture is unique, drawing on system-theoretic ideas regarding complex mental dynamics and associated emergent patterns.",
"title": ""
}
] |
[
{
"docid": "d99005ab76808d74611bc290442019ec",
"text": "Over the last decade, the isoxazoline motif has become the intense focus of crop protection and animal health companies in their search for novel pesticides and ectoparasiticides. Herein we report the discovery of sarolaner, a proprietary, optimized-for-animal health use isoxazoline, for once-a-month oral treatment of flea and tick infestation on dogs.",
"title": ""
},
{
"docid": "f9143c2bb6c8271efa516ca54c9baef7",
"text": "In recent years several measures for the gold standard based evaluation of ontology learning were proposed. They can be distinguished by the layers of an ontology (e.g. lexical term layer and concept hierarchy) they evaluate. Judging those measures with a list of criteria we show that there exist some measures sufficient for evaluating the lexical term layer. However, existing measures for the evaluation of concept hierarchies fail to meet basic criteria. This paper presents a new taxonomic measure which overcomes the problems of current approaches.",
"title": ""
},
{
"docid": "601ab07a9169073032e713b0f5251c1b",
"text": "We discuss fast exponential time solutions for NP-complete problems. We survey known results and approaches, we provide pointers to the literature, and we discuss several open problems in this area. The list of discussed NP-complete problems includes the travelling salesman problem, scheduling under precedence constraints, satisfiability, knapsack, graph coloring, independent sets in graphs, bandwidth of a graph, and many more.",
"title": ""
},
{
"docid": "f67b0131de800281ceb8522198302cde",
"text": "This brief presents an approach for identifying the parameters of linear time-varying systems that repeat their trajectories. The identification is based on the concept that parameter identification results can be improved by incorporating information learned from previous executions. The learning laws for this iterative learning identification are determined through an optimization framework. The convergence analysis of the algorithm is presented along with the experimental results to demonstrate its effectiveness. The algorithm is demonstrated to be capable of simultaneously estimating rapidly varying parameters and addressing robustness to noise by adopting a time-varying design approach.",
"title": ""
},
{
"docid": "429e6aa5eed1a3aa53152faf8f0c4f8c",
"text": "Lexical Simplification is the task of replacing complex words in a text with simpler alternatives. A variety of strategies have been devised for this challenge, yet there has been little effort in comparing their performance. In this contribution, we present a benchmarking of several Lexical Simplification systems. By combining resources created in previous work with automatic spelling and inflection correction techniques, we introduce BenchLS: a new evaluation dataset for the task. Using BenchLS, we evaluate the performance of solutions for various steps in the typical Lexical Simplification pipeline, both individually and jointly. This is the first time Lexical Simplification systems are compared in such fashion on the same data, and the findings introduce many contributions to the field, revealing several interesting properties of the systems evaluated.",
"title": ""
},
{
"docid": "d3fd8c1ce41892f54aedff187f4872c2",
"text": "In the first year of the TREC Micro Blog track, our participation has focused on building from scratch an IR system based on the Whoosh IR library. Though the design of our system (CipCipPy) is pretty standard it includes three ad-hoc solutions for the track: (i) a dedicated indexing function for hashtags that automatically recognizes the distinct words composing an hashtag, (ii) expansion of tweets based on the title of any referred Web page, and (iii) a tweet ranking function that ranks tweets in results by their content quality, which is compared against a reference corpus of Reuters news. In this preliminary paper we describe all the components of our system, and the efficacy scored by our runs. The CipCipPy system is available under a GPL license.",
"title": ""
},
{
"docid": "9328c119a7622b742749d357f58c7617",
"text": "An algorithm is described for recovering the six degrees of freedom of motion of a vehicle from a sequence of range images of a static environment taken by a range camera rigidly attached to the vehicle. The technique utilizes a least-squares minimization of the difference between the measured rate of change of elevation at a point and the rate predicted by the so-called elevation rate constmint equation. It is assumed that most of the surface is smooth enough so that local tangent planes can be constructed, and that the motion between frames is smaller than the size of most features in the range image. This method does not depend on the determination of correspondences between isolated high-level features in the range images. The algorithm has been successfully applied to data obtained from the range imager on the Autonomous Land Vehicle (ALV). Other sensors on the ALV provide an initial approximation to the motion between frames. It was found that the outputs of the vehicle sensors themselves are not suitable for accurate motion recovery because of errors in dead reckoning resulting from such problems as wheel slippage. The sensor measurements are used only to approximately register range data. The algorithm described here then recovers the difference between the true motion and that estimated from the sensor outputs. s 1991",
"title": ""
},
{
"docid": "3e06d3b5ca50bf4fcd9d354a149dd40c",
"text": "In this paper, the classification via sprepresentation and multitask learning is presented for target recognition in SAR image. To capture the characteristics of SAR image, a multidimensional generalization of the analytic signal, namely the monogenic signal, is employed. The original signal can be then orthogonally decomposed into three components: 1) local amplitude; 2) local phase; and 3) local orientation. Since the components represent the different kinds of information, it is beneficial by jointly considering them in a unifying framework. However, these components are infeasible to be directly utilized due to the high dimension and redundancy. To solve the problem, an intuitive idea is to define an augmented feature vector by concatenating the components. This strategy usually produces some information loss. To cover the shortage, this paper considers three components into different learning tasks, in which some common information can be shared. Specifically, the component-specific feature descriptor for each monogenic component is produced first. Inspired by the recent success of multitask learning, the resulting features are then fed into a joint sparse representation model to exploit the intercorrelation among multiple tasks. The inference is reached in terms of the total reconstruction error accumulated from all tasks. The novelty of this paper includes 1) the development of three component-specific feature descriptors; 2) the introduction of multitask learning into sparse representation model; 3) the numerical implementation of proposed method; and 4) extensive comparative experimental studies on MSTAR SAR dataset, including target recognition under standard operating conditions, as well as extended operating conditions, and the capability of outliers rejection.",
"title": ""
},
{
"docid": "5536e605e0b8a25ee0a5381025484f60",
"text": "Relational Markov Random Fields are a general and flexible framework for reasoning about the joint distribution over attributes of a large number of interacting entities. The main computational difficulty in learning such models is inference. Even when dealing with complete data, where one can summarize a large domain by sufficient statistics, learning requires one to compute the expectation of the sufficient statistics given different parameter choices. The typical solution to this problem is to resort to approximate inference procedures, such as loopy belief propagation. Although these procedures are quite efficient, they still require computation that is on the order of the number of interactions (or features) in the model. When learning a large relational model over a complex domain, even such approximations require unrealistic running time. In this paper we show that for a particular class of relational MRFs, which have inherent symmetry, we can perform the inference needed for learning procedures using a template-level belief propagation. This procedure’s running time is proportional to the size of the relational model rather than the size of the domain. Moreover, we show that this computational procedure is equivalent to sychronous loopy belief propagation. This enables a dramatic speedup in inference and learning time. We use this procedure to learn relational MRFs for capturing the joint distribution of large protein-protein interaction networks.",
"title": ""
},
{
"docid": "078578f356cb7946e3956c571bef06ee",
"text": "Background: Dysphagia is common and costly. The ability of patient symptoms to predict objective swallowing dysfunction is uncertain. Purpose: This study aimed to evaluate the ability of the Eating Assessment Tool (EAT-10) to screen for aspiration risk in patients with dysphagia. Methods: Data from individuals with dysphagia undergoing a videofluoroscopic swallow study between January 2012 and July 2013 were abstracted from a clinical database. Data included the EAT-10, Penetration Aspiration Scale (PAS), total pharyngeal transit (TPT) time, and underlying diagnoses. Bivariate linear correlation analysis, sensitivity, specificity, and predictive values were calculated. Results: The mean age of the entire cohort (N = 360) was 64.40 (± 14.75) years. Forty-six percent were female. The mean EAT-10 was 16.08 (± 10.25) for nonaspirators and 23.16 (± 10.88) for aspirators (P < .0001). There was a linear correlation between the total EAT-10 score and the PAS (r = 0.273, P < .001). Sensitivity and specificity of an EAT-10 > 15 in predicting aspiration were 71% and 53%, respectively. Conclusion: Subjective dysphagia symptoms as documented with the EAT-10 can predict aspiration risk. A linear correlation exists between the EAT-10 and aspiration events (PAS) and aspiration risk (TPT time). Persons with an EAT10 > 15 are 2.2 times more likely to aspirate (95% confidence interval, 1.3907-3.6245). The sensitivity of an EAT-10 > 15 is 71%.",
"title": ""
},
{
"docid": "16426be05f066e805e48a49a82e80e2e",
"text": "Ontologies have been developed and used by several researchers in different knowledge domains aiming to ease the structuring and management of knowledge, and to create a unique standard to represent concepts of such a knowledge domain. Considering the computer security domain, several tools can be used to manage and store security information. These tools generate a great amount of security alerts, which are stored in different formats. This lack of standard and the amount of data make the tasks of the security administrators even harder, because they have to understand, using their tacit knowledge, different security alerts to make correlation and solve security problems. Aiming to assist the administrators in executing these tasks efficiently, this paper presents the main features of the computer security incident ontology developed to model, using a unique standard, the concepts of the security incident domain, and how the ontology has been evaluated.",
"title": ""
},
{
"docid": "81928a29f210e68815022fcb634c414d",
"text": "Reactions to stress vary between individuals, and physiological and behavioral responses tend to be associated in distinct suites of correlated traits, often termed stress-coping styles. In mammals, individuals exhibiting divergent stress-coping styles also appear to exhibit intrinsic differences in cognitive processing. A connection between physiology, behavior, and cognition was also recently demonstrated in strains of rainbow trout (Oncorhynchus mykiss) selected for consistently high or low cortisol responses to stress. The low-responsive (LR) strain display longer retention of a conditioned response, and tend to show proactive behaviors such as enhanced aggression, social dominance, and rapid resumption of feed intake after stress. Differences in brain monoamine neurochemistry have also been reported in these lines. In comparative studies, experiments with the lizard Anolis carolinensis reveal connections between monoaminergic activity in limbic structures, proactive behavior in novel environments, and the establishment of social status via agonistic behavior. Together these observations suggest that within-species diversity of physiological, behavioral and cognitive correlates of stress responsiveness is maintained by natural selection throughout the vertebrate sub-phylum.",
"title": ""
},
{
"docid": "bba813ba24b8bc3a71e1afd31cf0454d",
"text": "Betweenness-Centrality measure is often used in social and computer communication networks to estimate the potential monitoring and control capabilities a vertex may have on data flowing in the network. In this article, we define the Routing Betweenness Centrality (RBC) measure that generalizes previously well known Betweenness measures such as the Shortest Path Betweenness, Flow Betweenness, and Traffic Load Centrality by considering network flows created by arbitrary loop-free routing strategies.\n We present algorithms for computing RBC of all the individual vertices in the network and algorithms for computing the RBC of a given group of vertices, where the RBC of a group of vertices represents their potential to collaboratively monitor and control data flows in the network. Two types of collaborations are considered: (i) conjunctive—the group is a sequences of vertices controlling traffic where all members of the sequence process the traffic in the order defined by the sequence and (ii) disjunctive—the group is a set of vertices controlling traffic where at least one member of the set processes the traffic. The algorithms presented in this paper also take into consideration different sampling rates of network monitors, accommodate arbitrary communication patterns between the vertices (traffic matrices), and can be applied to groups consisting of vertices and/or edges.\n For the cases of routing strategies that depend on both the source and the target of the message, we present algorithms with time complexity of O(n2m) where n is the number of vertices in the network and m is the number of edges in the routing tree (or the routing directed acyclic graph (DAG) for the cases of multi-path routing strategies). The time complexity can be reduced by an order of n if we assume that the routing decisions depend solely on the target of the messages.\n Finally, we show that a preprocessing of O(n2m) time, supports computations of RBC of sequences in O(kn) time and computations of RBC of sets in O(n3n) time, where k in the number of vertices in the sequence or the set.",
"title": ""
},
{
"docid": "77ad0c6db11775478902855f95c48ad8",
"text": "This paper presents a margin-based multiclass generalization bound for neural networks that scales with their margin-normalized spectral complexity: their Lipschitz constant, meaning the product of the spectral norms of the weight matrices, times a certain correction factor. This bound is empirically investigated for a standard AlexNet network trained with SGD on the mnist and cifar10 datasets, with both original and random labels; the bound, the Lipschitz constants, and the excess risks are all in direct correlation, suggesting both that SGD selects predictors whose complexity scales with the difficulty of the learning task, and secondly that the presented bound is sensitive to this complexity.",
"title": ""
},
{
"docid": "7a5167ffb79f35e75359c979295c22ee",
"text": "Precise forecast of the electrical load plays a highly significant role in the electricity industry and market. It provides economic operations and effective future plans for the utilities and power system operators. Due to the intermittent and uncertain characteristic of the electrical load, many research studies have been directed to nonlinear prediction methods. In this paper, a hybrid prediction algorithm comprised of Support Vector Regression (SVR) and Modified Firefly Algorithm (MFA) is proposed to provide the short term electrical load forecast. The SVR models utilize the nonlinear mapping feature to deal with nonlinear regressions. However, such models suffer from a methodical algorithm for obtaining the appropriate model parameters. Therefore, in the proposed method the MFA is employed to obtain the SVR parameters accurately and effectively. In order to evaluate the efficiency of the proposed methodology, it is applied to the electrical load demand in Fars, Iran. The obtained results are compared with those obtained from the ARMA model, ANN, SVR-GA, SVR-HBMO, SVR-PSO and SVR-FA. The experimental results affirm that the proposed algorithm outperforms other techniques. 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "c7d85ac66b7b8ae77ec7f7c50bd7bbd9",
"text": "This article compares several page ordering strategies for Web crawling under several metrics. The objective of these strategies is to download the most \"important\" pages \"early\" during the crawl. As the coverage of modern search engines is small compared to the size of the Web, and it is impossible to index all of the Web for both theoretical and practical reasons, it is relevant to index at least the most important pages.We use data from actual Web pages to build Web graphs and execute a crawler simulator on those graphs. As the Web is very dynamic, crawling simulation is the only way to ensure that all the strategies considered are compared under the same conditions. We propose several page ordering strategies that are more efficient than breadth- first search and strategies based on partial Pagerank calculations.",
"title": ""
},
{
"docid": "5d2c1095a34ee582f490f4b0392a3da0",
"text": "We study the problem of online learning to re-rank, where users provide feedback to improve the quality of displayed lists. Learning to rank has been traditionally studied in two settings. In the offline setting, rankers are typically learned from relevance labels of judges. These approaches have become the industry standard. However, they lack exploration, and thus are limited by the information content of offline data. In the online setting, an algorithm can propose a list and learn from the feedback on it in a sequential fashion. Bandit algorithms developed for this setting actively experiment, and in this way overcome the biases of offline data. But they also tend to ignore offline data, which results in a high initial cost of exploration. We propose BubbleRank, a bandit algorithm for re-ranking that combines the strengths of both settings. The algorithm starts with an initial base list and improves it gradually by swapping higher-ranked less attractive items for lower-ranked more attractive items. We prove an upper bound on the n-step regret of BubbleRank that degrades gracefully with the quality of the initial base list. Our theoretical findings are supported by extensive numerical experiments on a large real-world click dataset.",
"title": ""
},
{
"docid": "edb92440895801051e0bf63ade2cfbf8",
"text": "Over the last three decades, dietary pattern analysis has come to the forefront of nutritional epidemiology, where the combined effects of total diet on health can be examined. Two analytical approaches are commonly used: a priori and a posteriori. Cluster analysis is a commonly used a posteriori approach, where dietary patterns are derived based on differences in mean dietary intake separating individuals into mutually exclusive, non-overlapping groups. This review examines the literature on dietary patterns derived by cluster analysis in adult population groups, focusing, in particular, on methodological considerations, reproducibility, validity and the effect of energy mis-reporting. There is a wealth of research suggesting that the human diet can be described in terms of a limited number of eating patterns in healthy population groups using cluster analysis, where studies have accounted for differences in sex, age, socio-economic status, geographical area and weight status. Furthermore, patterns have been used to explore relationships with health and chronic diseases and more recently with nutritional biomarkers, suggesting that these patterns are biologically meaningful. Overall, it is apparent that consistent trends emerge when using cluster analysis to derive dietary patterns; however, future studies should focus on the inconsistencies in methodology and the effect of energy mis-reporting.",
"title": ""
},
{
"docid": "c589dd4a3da018fbc62d69e2d7f56e88",
"text": "More than 520 soil samples were surveyed for species of the mycoparasitic zygomycete genus Syncephalis using a culture-based approach. These fungi are relatively common in soil using the optimal conditions for growing both the host and parasite. Five species obtained in dual culture are unknown to science and are described here: (i) S. digitata with sporangiophores short, merosporangia separate at the apices, simple, 3-5 spored; (ii) S. floridana, which forms galls in the host and has sporangiophores up to 170 µm long with unbranched merosporangia that contain 2-4 spores; (iii) S. pseudoplumigaleta, with an abrupt apical bend in the sporophore; (iv) S. pyriformis with fertile vesicles that are long-pyriform; and (v) S. unispora with unispored merosporangia. To facilitate future molecular comparisons between species of Syncephalis and to allow identification of these fungi from environmental sampling datasets, we used Syncephalis-specific PCR primers to generate internal transcribed spacer (ITS) sequences for all five new species.",
"title": ""
},
{
"docid": "03bd81d3c50b81c6cfbae847aa5611f6",
"text": "We present a fast, automatic method for accurately capturing full-body motion data using a single depth camera. At the core of our system lies a realtime registration process that accurately reconstructs 3D human poses from single monocular depth images, even in the case of significant occlusions. The idea is to formulate the registration problem in a Maximum A Posteriori (MAP) framework and iteratively register a 3D articulated human body model with monocular depth cues via linear system solvers. We integrate depth data, silhouette information, full-body geometry, temporal pose priors, and occlusion reasoning into a unified MAP estimation framework. Our 3D tracking process, however, requires manual initialization and recovery from failures. We address this challenge by combining 3D tracking with 3D pose detection. This combination not only automates the whole process but also significantly improves the robustness and accuracy of the system. Our whole algorithm is highly parallel and is therefore easily implemented on a GPU. We demonstrate the power of our approach by capturing a wide range of human movements in real time and achieve state-of-the-art accuracy in our comparison against alternative systems such as Kinect [2012].",
"title": ""
}
] |
scidocsrr
|
82ca8e9281cf37aa08ebe53a36663298
|
Using Personality Information in Collaborative Filtering for New Users
|
[
{
"docid": "f415b38e6d43c8ed81ce97fd924def1b",
"text": "Collaborative filtering is one of the most successful and widely used methods of automated product recommendation in online stores. The most critical component of the method is the mechanism of finding similarities among users using product ratings data so that products can be recommended based on the similarities. The calculation of similarities has relied on traditional distance and vector similarity measures such as Pearson’s correlation and cosine which, however, have been seldom questioned in terms of their effectiveness in the recommendation problem domain. This paper presents a new heuristic similarity measure that focuses on improving recommendation performance under cold-start conditions where only a small number of ratings are available for similarity calculation for each user. Experiments using three different datasets show the superiority of the measure in new user cold-start conditions. 2007 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "bd9f584e7dbc715327b791e20cd20aa9",
"text": "We discuss learning a profile of user interests for recommending information sources such as Web pages or news articles. We describe the types of information available to determine whether to recommend a particular page to a particular user. This information includes the content of the page, the ratings of the user on other pages and the contents of these pages, the ratings given to that page by other users and the ratings of these other users on other pages and demographic information about users. We describe how each type of information may be used individually and then discuss an approach to combining recommendations from multiple sources. We illustrate each approach and the combined approach in the context of recommending restaurants.",
"title": ""
},
{
"docid": "f25aef35500ed74e5ef41d5e45d2e2df",
"text": "With recommender systems, users receive items recommended on the basis of their profile. New users experience the cold start problem: as their profile is very poor, the system performs very poorly. In this paper, classical new user cold start techniques are improved by exploiting the cold user data, i.e. the user data that is readily available (e.g. age, occupation, location, etc.), in order to automatically associate the new user with a better first profile. Relying on the existing α-community spaces model, a rule-based induction process is used and a recommendation process based on the \"level of agreement\" principle is defined. The experiments show that the quality of recommendations compares to that obtained after a classical new user technique, while the new user effort is smaller as no initial ratings are asked.",
"title": ""
}
] |
[
{
"docid": "75fda2fa6c35c915dede699c12f45d84",
"text": "This work presents an open-source framework called systemc-clang for analyzing SystemC models that consist of a mixture of register-transfer level, and transaction-level components. The framework statically parses mixed-abstraction SystemC models, and represents them using an intermediate representation. This intermediate representation captures the structural information about the model, and certain behavioural semantics of the processes in the model. This representation can be used for multiple purposes such as static analysis of the model, code transformations, and optimizations. We describe with examples, the key details in implementing systemc-clang, and show an example of constructing a plugin that analyzes the intermediate representation to discover opportunities for parallel execution of SystemC processes. We also experimentally evaluate the capabilities of this framework with a subset of examples from the SystemC distribution including register-transfer, and transaction-level models.",
"title": ""
},
{
"docid": "b59a2c49364f3e95a2c030d800d5f9ce",
"text": "An algorithm with linear filters and morphological operations has been proposed for automatic fabric defect detection. The algorithm is applied off-line and real-time to denim fabric samples for five types of defects. All defect types have been detected successfully and the defective regions are labeled. The defective fabric samples are then classified by using feed forward neural network method. Both defect detection and classification application performances are evaluated statistically. Defect detection performance of real time and off-line applications are obtained as 88% and 83% respectively. The defective images are classified with an average accuracy rate of 96.3%.",
"title": ""
},
{
"docid": "f80430c36094020991f167aeb04f21e0",
"text": "Participants in recent discussions of AI-related issues ranging from intelligence explosion to technological unemployment have made diverse claims about the nature, pace, and drivers of progress in AI. However, these theories are rarely specified in enough detail to enable systematic evaluation of their assumptions or to extrapolate progress quantitatively, as is often done with some success in other technological domains. After reviewing relevant literatures and justifying the need for more rigorous modeling of AI progress, this paper contributes to that research program by suggesting ways to account for the relationship between hardware speed increases and algorithmic improvements in AI, the role of human inputs in enabling AI capabilities, and the relationships between different sub-fields of AI. It then outlines ways of tailoring AI progress models to generate insights on the specific issue of technological unemployment, and outlines future directions for research on AI progress.",
"title": ""
},
{
"docid": "1d3379e5e70d1fb7fa050c42805fe865",
"text": "While many recent hand pose estimation methods critically rely on a training set of labelled frames, the creation of such a dataset is a challenging task that has been overlooked so far. As a result, existing datasets are limited to a few sequences and individuals, with limited accuracy, and this prevents these methods from delivering their full potential. We propose a semi-automated method for efficiently and accurately labeling each frame of a hand depth video with the corresponding 3D locations of the joints: The user is asked to provide only an estimate of the 2D reprojections of the visible joints in some reference frames, which are automatically selected to minimize the labeling work by efficiently optimizing a sub-modular loss function. We then exploit spatial, temporal, and appearance constraints to retrieve the full 3D poses of the hand over the complete sequence. We show that this data can be used to train a recent state-of-the-art hand pose estimation method, leading to increased accuracy.",
"title": ""
},
{
"docid": "2a1c3f87821e47f5c32d10cb80505dcb",
"text": "We are developing a cardiac pacemaker with a small, cylindrical shape that permits percutaneous implantation into a fetus to treat complete heart block and consequent hydrops fetalis, which can otherwise be fatal. The device uses off-the-shelf components including a rechargeable lithium cell and a highly efficient relaxation oscillator encapsulated in epoxy and glass. A corkscrew electrode made from activated iridium can be screwed into the myocardium, followed by release of the pacemaker and a short, flexible lead entirely within the chest of the fetus to avoid dislodgement from fetal movement. Acute tests in adult rabbits demonstrated the range of electrical parameters required for successful pacing and the feasibility of successfully implanting the device percutaneously under ultrasonic imaging guidance. The lithium cell can be recharged inductively as needed, as indicated by a small decline in the pulsing rate.",
"title": ""
},
{
"docid": "9beaf6c7793633dceca0c8df775e8959",
"text": "The course, antecedents, and implications for social development of effortful control were examined in this comprehensive longitudinal study. Behavioral multitask batteries and parental ratings assessed effortful control at 22 and 33 months (N = 106). Effortful control functions encompassed delaying, slowing down motor activity, suppressing/initiating activity to signal, effortful attention, and lowering voice. Between 22 and 33 months, effortful control improved considerably, its coherence increased, it was stable, and it was higher for girls. Behavioral and parent-rated measures converged. Children's focused attention at 9 months, mothers' responsiveness at 22 months, and mothers' self-reported socialization level all predicted children's greater effortful control. Effortful control had implications for concurrent social development. Greater effortful control at 22 months was linked to more regulated anger, and at 33 months, to more regulated anger and joy and to stronger restraint.",
"title": ""
},
{
"docid": "808115043786372af3e3fb726cc3e191",
"text": "Scapy is a free and open source packet manipulation environment written in Python language. In this paper we present a Modbus extension to Scapy, and show how this environment can be used to build tools for security analysis of industrial network protocols. Our implementation can be extended to other industrial network protocols and can help security analysts to understand how these protocols work under attacks or adverse conditions.",
"title": ""
},
{
"docid": "3a68936b77a49f8deeaaadee762a3435",
"text": "Online service quality is one of the key determinants of the success of online retailers. This exploratory study revealed some important findings about online service quality. First, the study identified six key online retailing service quality dimensions as perceived by online customers: reliable/prompt responses, access, ease of use, attentiveness, security, and credibility. Second, of the six, three dimensions, notably reliable/prompt responses, attentiveness, and ease of use, had significant impacts on both customers’ perceived overall service quality and their satisfaction. Third, the access dimension had a significant effect on overall service quality, but not on satisfaction. Finally, this study discovered a significantly positive relationship between overall service quality and satisfaction. Important managerial implications and recommendations are also presented.",
"title": ""
},
{
"docid": "87e44334828cd8fd1447ab5c1b125ab3",
"text": "the guidance system. The types of steering commands vary depending on the phase of flight and the type of interceptor. For example, in the boost phase the flight control system may be designed to force the missile to track a desired flight-path angle or attitude. In the midcourse and terminal phases the system may be designed to track acceleration commands to effect an intercept of the target. This article explores several aspects of the missile flight control system, including its role in the overall missile system, its subsystems, types of flight control systems, design objectives, and design challenges. Also discussed are some of APL’s contributions to the field, which have come primarily through our role as Technical Direction Agent on a variety of Navy missile programs. he flight control system is a key element that allows the missile to meet its system performance requirements. The objective of the flight control system is to force the missile to achieve the steering commands developed by",
"title": ""
},
{
"docid": "ecea888d3b2d6b9ce0a26a4af6382db8",
"text": "Business Process Management (BPM) research resulted in a plethora of methods, techniques, and tools to support the design, enactment, management, and analysis of operational business processes. This survey aims to structure these results and provides an overview of the state-of-the-art in BPM. In BPM the concept of a process model is fundamental. Process models may be used to configure information systems, but may also be used to analyze, understand, and improve the processes they describe. Hence, the introduction of BPM technology has both managerial and technical ramifications, and may enable significant productivity improvements, cost savings, and flow-time reductions. The practical relevance of BPM and rapid developments over the last decade justify a comprehensive survey.",
"title": ""
},
{
"docid": "6e8d30f3eaaf6c88dddb203c7b703a92",
"text": "searching existing data sources, gathering and maintaining the data needed, and completing and reviewing the collection of information. Send comments regarding this burden estimate or any other aspect of this collection of information, including suggesstions for reducing this burden, to Washington Headquarters Services, Directorate for Information Operations and Reports, 1215 Jefferson Davis Highway, Suite 1204, Arlington VA, 22202-4302. Respondents should be aware that notwithstanding any other provision of law, no person shall be subject to any oenalty for failing to comply with a collection of information if it does not display a currently valid OMB control number. PLEASE DO NOT RETURN YOUR FORM TO THE ABOVE ADDRESS.",
"title": ""
},
{
"docid": "a500af4d27774a3f36db90a79dec91c3",
"text": "This paper introduces Internet of Things (IoTs), which offers capabilities to identify and connect worldwide physical objects into a unified system. As a part of IoTs, serious concerns are raised over access of personal information pertaining to device and individual privacy. This survey summarizes the security threats and privacy concerns of IoT..",
"title": ""
},
{
"docid": "372b2aa9810ec12ebf033632cffd5739",
"text": "A simple CFD tool, coupled to a discrete surface representation and a gradient-based optimization procedure, is applied to the design of optimal hull forms and optimal arrangement of hulls for a wave cancellation multihull ship. The CFD tool, which is used to estimate the wave drag, is based on the zeroth-order slender ship approximation. The hull surface is represented by a triangulation, and almost every grid point on the surface can be used as a design variable. A smooth surface is obtained via a simplified pseudo-shell problem. The optimal design process consists of two steps. The optimal center and outer hull forms are determined independently in the first step, where each hull keeps the same displacement as the original design while the wave drag is minimized. The optimal outer-hull arrangement is determined in the second step for the optimal center and outer hull forms obtained in the first step. Results indicate that the new design can achieve a large wave drag reduction in comparison to the original design configuration.",
"title": ""
},
{
"docid": "6559d77de48d153153ce77b0e2969793",
"text": "1 This paper is an invited chapter to be published in the Handbooks in Operations Research and Management Science: Supply Chain Management, edited by Steve Graves and Ton de Kok and published by North-Holland. I would like to thank the many people that carefully read and commented on the ...rst draft of this manuscript: Ravi Anupindi, Fangruo Chen, Charles Corbett, James Dana, Ananth Iyer, Ton de Kok, Yigal Gerchak, Mark Ferguson, Marty Lariviere, Serguei Netessine, Ediel Pinker, Nils Rudi, Sridhar Seshadri, Terry Taylor and Kevin Weng. I am, of course, responsible for all remaining errors. Comments, of course, are still quite welcomed.",
"title": ""
},
{
"docid": "08b8c184ff2230b0df2c0f9b4e3f7840",
"text": "We present an augmented reality magic mirror for teaching anatomy. The system uses a depth camera to track the pose of a user standing in front of a large display. A volume visualization of a CT dataset is augmented onto the user, creating the illusion that the user can look into his body. Using gestures, different slices from the CT and a photographic dataset can be selected for visualization. In addition, the system can show 3D models of organs, text information and images about anatomy. For interaction with this data we present a new interaction metaphor that makes use of the depth camera. The visibility of hands and body is modified based on the distance to a virtual interaction plane. This helps the user to understand the spatial relations between his body and the virtual interaction plane.",
"title": ""
},
{
"docid": "67ea7e099e60379042e6656897b0fbc3",
"text": "This article describes a case study on MuseUs, a pervasive serious game for use in museums, running as a smartphone app. During the museum visit, players are invited to create their own exposition and are guided by the application in doing so. The aim is to provide a learning effect during a visit to a museum exhibition. Central to the MuseUs experience is that it does not necessitate a predefined path trough the museum and that it does not draw the attention away from the exposition itself. Also, the application stimulates the visitor to look at cultural heritage elements in a different way, permitting the construction of personal narratives while creating a personal exposition. Using a methodology derived from action research, we present recommendations for the design of similar applications and conclude by proposing a high-level architecture for pervasive serious games applied to cultural heritage.",
"title": ""
},
{
"docid": "1ab0308539bc6508b924316b39a963ca",
"text": "Daily wafer fabrication in semiconductor foundry depends on considerable metrology operations for tool-quality and process-quality assurance. The metrology operations required a lot of metrology tools, which increase FAB's investment. Also, these metrology operations will increase cycle time of wafer process. Metrology operations do not bring any value added to wafer but only quality assurance. This article provides a new method denoted virtual metrology (VM) to utilize sensor data collected from 300 mm FAB's tools to forecast quality data of wafers and tools. This proposed method designs key steps to establish a VM control model based on neural networks and to develop and deploy applications following SEMI EDA (equipment data acquisition) standards.",
"title": ""
},
{
"docid": "2319e5f20b03abe165b7715e9b69bac5",
"text": "Cloud networking imposes new requirements in terms of connection resiliency and throughput among virtual machines, hypervisors and users. A promising direction is to exploit multipath communications, yet existing protocols have a so limited scope that performance improvements are often unreachable. Generally, multipathing adds signaling overhead and in certain conditions may in fact decrease throughput due to packet arrival disorder. At the transport layer, the most promising protocol is Multipath TCP (MPTCP), a backward compatible TCP extension allowing to balance the load on several TCP subflows, ideally following different physical paths, to maximize connection throughput. Current implementations create a full mesh between hosts IPs, which can be suboptimal. For situation when at least one end-point network is multihomed, we propose to enhance its subflow creation mechanism so that MPTCP creates an adequate number of subflows considering the underlying path diversity offered by an IP-in-IP mapping protocol, the Location/Identifier Separation Protocol (LISP). We defined and implemented a cross-layer cooperation module between MPTCP and LISP, leading to an improved version of MPTCP we name Augmented MPTCP (A-MPTCP). We evaluated A-MPTCP for a realistic Cloud access use-case scenario involving one multi-homed data-center. Results from a large-scale test bed show us that A-MPTCP can halve the transfer times with the simple addition of one additional LIS-Penabled MPTCP subflow, hence showing promising performance for Cloud communications between multi-homed users and multihomed data-centers.",
"title": ""
},
{
"docid": "26d0809a2c8ab5d5897ca43c19fc2b57",
"text": "This study outlines a simple 'Profilometric' method for measuring the size and function of the wrinkles. Wrinkle size was measured in relaxed conditions and the representative parameters were considered to be the mean 'Wrinkle Depth', the mean 'Wrinkle Area', the mean 'Wrinkle Volume', and the mean 'Wrinkle Tissue Reservoir Volume' (WTRV). These parameters were measured in the wrinkle profiles under relaxed conditions. The mean 'Wrinkle to Wrinkle Distance', which measures the distance between two adjacent wrinkles, is an accurate indicator of the muscle relaxation level during replication. This parameter, identified as the 'Muscle Relaxation Level Marker', and its reduction are related to increased muscle tone or contraction and vice versa. The mean Wrinkle to Wrinkle Distance is very important in experiments where the effectiveness of an anti-wrinkle preparation is tested. Thus, the correlative wrinkles' replicas, taken during follow up in different periods, are only those that show the same mean Wrinkle to Wrinkle Distance. The wrinkles' functions were revealed by studying the morphological changes of the wrinkles and their behavior during relaxed conditions, under slight increase of muscle tone and under maximum wrinkling. Facial wrinkles are not a single groove, but comprise an anatomical and functional unit (the 'Wrinkle Unit') along with the surrounding skin. This Wrinkle Unit participates in the functions of a central neuro-muscular system of the face responsible for protection, expression, and communication. Thus, the Wrinkle Unit, the superficial musculoaponeurotic system (superficial fascia of the face), the underlying muscles controlled by the CNS and Psyche, are considered to be a 'Functional Psycho-Neuro-Muscular System of the Face for Protection, Expression and Communication'. The three major functions of this system exerted in the central part of the face and around the eyes are: (1) to open and close the orifices (eyes, nose, and mouth), contributing to their functions; (2) to protect the eyes from sun, foreign bodies, etc.; (3) to contribute to facial expression, reflecting emotions (real, pretended, or theatrical) during social communication. These functions are exercised immediately and easily, without any opposition ('Wrinkling Ability') because of the presence of the Wrinkle Unit that gives (a) the site of refolding (the wrinkle is a waiting fold, ready to respond quickly at any moment for any skin mobility need) and (b) the appropriate skin tissue for extension or compression (this reservoir of tissue is measured by the parameter of WTRV). The Wrinkling Ability of a skin area is linked to the wrinkle's functions and can be measured by the parameter of 'Skin Tissue Volume Compressed around the Wrinkle' in mm(3) per 30 mm wrinkle during maximum wrinkling. The presence of wrinkles is a sign that the skin's 'Recovery Ability' has declined progressively with age. The skin's Recovery Ability is linked to undesirable cosmetic effects of ageing and wrinkling. This new Profilometric method can be applied in studies where the effectiveness of anti-wrinkle preparations or the cosmetic results of surgery modalities are tested, as well as in studies focused on the functional physiology of the Wrinkle Unit.",
"title": ""
}
] |
scidocsrr
|
f8d1777e21c7a6414932505c2f63778a
|
Discourse Relation Sense Classification Using Cross-argument Semantic Similarity Based on Word Embeddings
|
[
{
"docid": "406874f38a7eb9a1d0c8b10e8fd3a1d7",
"text": "In this paper, we describe our system for SemEval-2015 Task 3: Answer Selection in Community Question Answering. In this task, the systems are required to identify the good or potentially good answers from the answer thread in Community Question Answering collections. Our system combines 16 features belong to 5 groups to predict answer quality. Our final model achieves the best result in subtask A for English, both in accuracy and F1score.",
"title": ""
}
] |
[
{
"docid": "50ef3775f9d18fe368c166cfd3ff2bca",
"text": "In many applications that track and analyze spatiotemporal data, movements obey periodic patterns; the objects follow the same routes (approximately) over regular time intervals. For example, people wake up at the same time and follow more or less the same route to their work everyday. The discovery of hidden periodic patterns in spatiotemporal data, apart from unveiling important information to the data analyst, can facilitate data management substantially. Based on this observation, we propose a framework that analyzes, manages, and queries object movements that follow such patterns. We define the spatiotemporal periodic pattern mining problem and propose an effective and fast mining algorithm for retrieving maximal periodic patterns. We also devise a novel, specialized index structure that can benefit from the discovered patterns to support more efficient execution of spatiotemporal queries. We evaluate our methods experimentally using datasets with object trajectories that exhibit periodicity.",
"title": ""
},
{
"docid": "6a9d82d136d5b8841f63cab7ab851c0b",
"text": "Ischemic mitral regurgitation (MR) is a common complication of myocardial infarction thought to result from leaflet tethering caused by displacement of the papillary muscles that occurs as the left ventricle remodels. The author explores the possibility that left atrial remodeling may also play a role in the pathogenesis of ischemic MR, through a novel mechanism: atriogenic leaflet tethering. When ischemic MR is hemodynamically significant, the left ventricle compensates by dilating to preserve forward output using the Starling mechanism. Left ventricular dilatation, however, worsens MR by increasing the mitral valve regurgitant orifice, leading to a vicious cycle in which MR begets more MR. The author proposes that several structural adaptations play a role in reducing ischemic MR. In contrast to the compensatory effects of left ventricular enlargement, these may reduce, rather than increase, its severity. The suggested adaptations involve the mitral valve leaflets, the papillary muscles, the mitral annulus, and the left ventricular false tendons. This review describes the potential role each may play in reducing ischemic MR. Therapies that exploit these adaptations are also discussed.",
"title": ""
},
{
"docid": "2ba35cf1bea1794b060f3d89ac78dd24",
"text": "A Recurrent Neural Network (RNN) is a powerful connectionist model that can be applied to many challenging sequential problems, including problems that naturally arise in language and speech. However, RNNs are extremely hard to train on problems that have long-term dependencies, where it is necessary to remember events for many timesteps before using them to make a prediction. In this paper we consider the problem of training RNNs to predict sequences that exhibit significant long-term dependencies, focusing on a serial recall task where the RNN needs to remember a sequence of characters for a large number of steps before reconstructing it. We introduce the Temporal-Kernel Recurrent Neural Network (TKRNN), which is a variant of the RNN that can cope with long-term dependencies much more easily than a standard RNN, and show that the TKRNN develops short-term memory that successfully solves the serial recall task by representing the input string with a stable state of its hidden units.",
"title": ""
},
{
"docid": "222ab6804b3fe15fe23b27bc7f5ede5f",
"text": "Single-image super-resolution (SR) reconstruction via sparse representation has recently attracted broad interest. It is known that a low-resolution (LR) image is susceptible to noise or blur due to the degradation of the observed image, which would lead to a poor SR performance. In this paper, we propose a novel robust edge-preserving smoothing SR (REPS-SR) method in the framework of sparse representation. An EPS regularization term is designed based on gradient-domain-guided filtering to preserve image edges and reduce noise in the reconstructed image. Furthermore, a smoothing-aware factor adaptively determined by the estimation of the noise level of LR images without manual interference is presented to obtain an optimal balance between the data fidelity term and the proposed EPS regularization term. An iterative shrinkage algorithm is used to obtain the SR image results for LR images. The proposed adaptive smoothing-aware scheme makes our method robust to different levels of noise. Experimental results indicate that the proposed method can preserve image edges and reduce noise and outperforms the current state-of-the-art methods for noisy images.",
"title": ""
},
{
"docid": "d0370d33988698cf69e3b032aff53f49",
"text": "The abundance of discussion forums, Weblogs, e-commerce portals, social networking, product review sites and content sharing sites has facilitated flow of ideas and expression of opinions. The user-generated text content on Internet and Web 2.0 social media can be a rich source of sentiments, opinions, evaluations, and reviews. Sentiment analysis or opinion mining has become an open research domain that involves classifying text documents based on the opinion expressed, about a given topic, being positive or negative. This paper proposes a sentiment classification model using back-propagation artificial neural network (BPANN). Information Gain, and three popular sentiment lexicons are used to extract sentiment representing features that are then used to train and test the BPANN. This novel approach combines the strength of BPANN in classification accuracy with intrinsic subjectivity knowledge available in the sentiment lexicons. The results obtained from experiments on the movie and hotel review corpora have shown that the proposed approach has been able to reduce dimensionality, while producing accurate results for sentiment based classification of text.",
"title": ""
},
{
"docid": "a4ecdccf4370292a31fc38d6602b3f50",
"text": "Loop gain analysis for performance evaluation of current sensors for switching converters is presented. The MOS transistor scaling technique is reviewed and employed in developing high-speed and high-accuracy current-sensors with offset-current cancellation. Using a standard 0.35/spl mu/m CMOS process, and integrated full-range inductor current sensor for a boost converter is designed. It operated at a supply voltage of 1.5 V with a DC loop gain of 38 dB, and a unity gain frequency of 10 MHz. The sensor worked properly at a converter switching frequency of 500 kHz.",
"title": ""
},
{
"docid": "d8472e56a4ffe5d6b0cb0c902186d00b",
"text": "In C. S. Peirce, as well as in the work of many biosemioticians, the semiotic object is sometimes described as a physical “object” with material properties and sometimes described as an “ideal object” or mental representation. I argue that to the extent that we can avoid these types of characterizations we will have a more scientific definition of sign use and will be able to better integrate the various fields that interact with biosemiotics. In an effort to end Cartesian dualism in semiotics, which has been the main obstacle to a scientific biosemiotics, I present an argument that the “semiotic object” is always ultimately the objective of self-affirmation (of habits, physical or mental) and/or self-preservation. Therefore, I propose a new model for the sign triad: response-sign-objective. With this new model it is clear, as I will show, that self-mistaking (not self-negation as others have proposed) makes learning, creativity and purposeful action possible via signs. I define an “interpretation” as a response to something as if it were a sign, but whose semiotic objective does not, in fact, exist. If the response-as-interpretation turns out to be beneficial for the system after all, there is biopoiesis. When the response is not “interpretive,” but self-confirming in the usual way, there is biosemiosis. While the conditions conducive to fruitful misinterpretation (e.g., accidental similarity of non-signs to signs and/or contiguity of non-signs to self-sustaining processes) might be artificially enhanced, according to this theory, the outcomes would be, by nature, more or less uncontrollable and unpredictable. Nevertheless, biosemiotics could be instrumental in the manipulation and/or artificial creation of purposeful systems insofar as it can describe a formula for the conditions under which new objectives and novel purposeful behavior may emerge, however unpredictably.",
"title": ""
},
{
"docid": "8fd97add7e3b48bad9fd82dc01422e59",
"text": "Anaerobic nitrate-dependent Fe(II) oxidation is widespread in various environments and is known to be performed by both heterotrophic and autotrophic microorganisms. Although Fe(II) oxidation is predominantly biological under acidic conditions, to date most of the studies on nitrate-dependent Fe(II) oxidation were from environments of circumneutral pH. The present study was conducted in Lake Grosse Fuchskuhle, a moderately acidic ecosystem receiving humic acids from an adjacent bog, with the objective of identifying, characterizing and enumerating the microorganisms responsible for this process. The incubations of sediment under chemolithotrophic nitrate-dependent Fe(II)-oxidizing conditions have shown the enrichment of TM3 group of uncultured Actinobacteria. A time-course experiment done on these Actinobacteria showed a consumption of Fe(II) and nitrate in accordance with the expected stoichiometry (1:0.2) required for nitrate-dependent Fe(II) oxidation. Quantifications done by most probable number showed the presence of 1 × 104 autotrophic and 1 × 107 heterotrophic nitrate-dependent Fe(II) oxidizers per gram fresh weight of sediment. The analysis of microbial community by 16S rRNA gene amplicon pyrosequencing showed that these actinobacterial sequences correspond to ∼0.6% of bacterial 16S rRNA gene sequences. Stable isotope probing using 13CO2 was performed with the lake sediment and showed labeling of these Actinobacteria. This indicated that they might be important autotrophs in this environment. Although these Actinobacteria are not dominant members of the sediment microbial community, they could be of functional significance due to their contribution to the regeneration of Fe(III), which has a critical role as an electron acceptor for anaerobic microorganisms mineralizing sediment organic matter. To the best of our knowledge this is the first study to show the autotrophic nitrate-dependent Fe(II)-oxidizing nature of TM3 group of uncultured Actinobacteria.",
"title": ""
},
{
"docid": "6a2380bdabdbe25d8c335ca077790bf1",
"text": "Current generation electronic health records suffer a number of problems that make them inefficient and associated with poor clinical satisfaction. Digital scribes or intelligent documentation support systems, take advantage of advances in speech recognition, natural language processing and artificial intelligence, to automate the clinical documentation task currently conducted by humans. Whilst in their infancy, digital scribes are likely to evolve through three broad stages. Human led systems task clinicians with creating documentation, but provide tools to make the task simpler and more effective, for example with dictation support, semantic checking and templates. Mixed-initiative systems are delegated part of the documentation task, converting the conversations in a clinical encounter into summaries suitable for the electronic record. Computer-led systems are delegated full control of documentation and only request human interaction when exceptions are encountered. Intelligent clinical environments permit such augmented clinical encounters to occur in a fully digitised space where the environment becomes the computer. Data from clinical instruments can be automatically transmitted, interpreted using AI and entered directly into the record. Digital scribes raise many issues for clinical practice, including new patient safety risks. Automation bias may see clinicians automatically accept scribe documents without checking. The electronic record also shifts from a human created summary of events to potentially a full audio, video and sensor record of the clinical encounter. Digital scribes promisingly offer a gateway into the clinical workflow for more advanced support for diagnostic, prognostic and therapeutic tasks.",
"title": ""
},
{
"docid": "489a7070d175d5d53ca43f3aa7896677",
"text": "Picture-taking has never been easier. We now use our phones to snap photos and instantly share them with friends, family and strangers all around the world. Consequently, we seek ways to visualize, analyze and discover concealed sociocultural characteristics and trends in this ever-growing flow of visual information. How do we then trace global and local patterns from the analysis of visual planetary–scale data? What types of insights can we draw from the study of these massive visual materials? In this study we use Cultural Analytics visualization techniques for the study of approximately 550,000 images taken by users of the location-based social photo sharing application Instagram. By analyzing images from New York City and Tokyo, we offer a comparative visualization research that indicates differences in local color usage, cultural production rate, and varied hue’s intensities— all form a unique, local, ‘Visual Rhythm’: a framework for the analysis of location-based visual information flows.",
"title": ""
},
{
"docid": "33a9140fb57200a489b9150d39f0ab65",
"text": "In this paper, a double-quadrant state-of-charge (SoC)-based droop control method for distributed energy storage system is proposed to reach the proper power distribution in autonomous dc microgrids. In order to prolong the lifetime of the energy storage units (ESUs) and avoid the overuse of a certain unit, the SoC of each unit should be balanced and the injected/output power should be gradually equalized. Droop control as a decentralized approach is used as the basis of the power sharing method for distributed energy storage units. In the charging process, the droop coefficient is set to be proportional to the nth order of SoC, while in the discharging process, the droop coefficient is set to be inversely proportional to the nth order of SoC. Since the injected/output power is inversely proportional to the droop coefficient, it is obtained that in the charging process the ESU with higher SoC absorbs less power, while the one with lower SoC absorbs more power. Meanwhile, in the discharging process, the ESU with higher SoC delivers more power and the one with lower SoC delivers less power. Hence, SoC balancing and injected/output power equalization can be gradually realized. The exponent n of SoC is employed in the control diagram to regulate the speed of SoC balancing. It is found that with larger exponent n, the balancing speed is higher. MATLAB/simulink model comprised of three ESUs is implemented and the simulation results are shown to verify the proposed approach.",
"title": ""
},
{
"docid": "4513872c2240390dca8f4b704e606157",
"text": "We apply game theory to a vehicular traffic model to study the effect of driver strategies on traffic flow. The resulting model inherits the realistic dynamics achieved by a two-lane traffic model and aims to incorporate phenomena caused by driver-driver interactions. To achieve this goal, a game-theoretic description of driver interaction was developed. This game-theoretic formalization allows one to model different lane-changing behaviors and to keep track of mobility performance. We simulate the evolution of cooperation, traffic flow, and mobility performance for different modeled behaviors. The analysis of these results indicates a mobility optimization process achieved by drivers' interactions.",
"title": ""
},
{
"docid": "1b7e958b7505129a150da0e186e5d022",
"text": "The study of complex adaptive systems, from cells to societies, is a study of the interplay among processes operating at diverse scales of space, time and organizational complexity. The key to such a study is an understanding of the interrelationships between microscopic processes and macroscopic patterns, and the evolutionary forces that shape systems. In particular, for ecosystems and socioeconomic systems, much interest is focused on broad scale features such as diversity and resiliency, while evolution operates most powerfully at the level of individual agents. Understanding the evolution and development of complex adaptive systems thus involves understanding how cooperation, coalitions and networks of interaction emerge from individual behaviors and feed back to influence those behaviors. In this paper, some of the mathematical challenges are discussed.",
"title": ""
},
{
"docid": "e16bf4ab7c56b6827369f19afb2d4744",
"text": "In acoustic modeling for large vocabulary continuous speech recognition, it is essential to model long term dependency within speech signals. Usually, recurrent neural network (RNN) architectures, especially the long short term memory (LSTM) models, are the most popular choice. Recently, a novel architecture, namely feedforward sequential memory networks (FSMN), provides a non-recurrent architecture to model long term dependency in sequential data and has achieved better performance over RNNs on acoustic modeling and language modeling tasks. In this work, we propose a compact feedforward sequential memory networks (cFSMN) by combining FSMN with low-rank matrix factorization. We also make a slight modification to the encoding method used in FSMNs in order to further simplify the network architecture. On the Switchboard task, the proposed new cFSMN structures can reduce the model size by 60% and speed up the learning by more than 7 times while the models still significantly outperform the popular bidirection LSTMs for both frame-level cross-entropy (CE) criterion based training and MMI based sequence training.",
"title": ""
},
{
"docid": "c8cd0c0ebd38b3e287d6e6eed965db6b",
"text": "Goalball, one of the official Paralympic events, is popular with visually impaired people all over the world. The purpose of goalball is to throw the specialized ball, with bells inside it, to the goal line of the opponents as many times as possible while defenders try to block the thrown ball with their bodies. Since goalball players cannot rely on visual information, they need to grasp the game situation using their auditory sense. However, it is hard, especially for beginners, to perceive the direction and distance of the thrown ball. In addition, they generally tend to be afraid of the approaching ball because, without visual information, they could be hit by a high-speed ball. In this paper, our goal is to develop an application called GoalBaural (Goalball + aural) that enables goalball players to improve the recognizability of the direction and distance of a thrown ball without going onto the court and playing goalball. The evaluation result indicated that our application would be efficient in improving the speed and the accuracy of locating the balls.",
"title": ""
},
{
"docid": "bc166a431e35bc9b11801bcf1ff6c9fd",
"text": "Outsourced storage has become more and more practical in recent years. Users can now store large amounts of data in multiple servers at a relatively low price. An important issue for outsourced storage systems is to design an efficient scheme to assure users that their data stored at remote servers has not been tampered with. This paper presents a general method and a practical prototype application for verifying the integrity of files in an untrusted network storage service. The verification process is managed by an application running in a trusted environment (typically on the client) that stores just one cryptographic hash value of constant size, corresponding to the \"digest\" of an authenticated data structure. The proposed integrity verification service can work with any storage service since it is transparent to the storage technology used. Experimental results show that our integrity verification method is efficient and practical for network storage systems.",
"title": ""
},
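The passage above does not name the authenticated data structure behind the constant-size digest kept by the client; a Merkle hash tree is the textbook example of such a structure, so the following Python sketch (my own illustrative code, not the paper's prototype or API) shows how a client holding only a root hash can verify a block returned by an untrusted store.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(blocks):
    """Return the list of levels, leaves first; odd levels are padded by duplicating the last node."""
    level = [h(b) for b in blocks]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2:
            level = level + [level[-1]]
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def proof(levels, index):
    """Sibling hashes (with a 'sibling is on the right' flag) from leaf `index` up to the root."""
    path = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        sib = index ^ 1
        path.append((level[sib], index % 2 == 0))
        index //= 2
    return path

def verify(root, block, path):
    digest = h(block)
    for sib, sib_is_right in path:
        digest = h(digest + sib) if sib_is_right else h(sib + digest)
    return digest == root

blocks = [b"block-%d" % i for i in range(5)]
levels = build_tree(blocks)
root = levels[-1][0]                       # the only state the client has to store
assert verify(root, blocks[3], proof(levels, 3))
assert not verify(root, b"tampered", proof(levels, 3))
```

The proof size grows only logarithmically with the number of blocks, which is what makes the constant client-side state practical.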
{
"docid": "f392b4ba1cface8be439bf86a3e4c2bd",
"text": "STUDY DESIGN\nCase-control study comparing sagittal plane segmental motion in women (n = 34) with chronic whiplash-associated disorders, Grades I-II, with women (n = 35) with chronic insidious onset neck pain and with a normal database of sagittal plane rotational and translational motion.\n\n\nOBJECTIVE\nTo reveal whether women with chronic whiplash-associated disorders, Grades I-II, demonstrate evidence of abnormal segmental motions in the cervical spine.\n\n\nSUMMARY OF BACKGROUND DATA\nIt is hypothesized that unphysiological spinal motion experienced during an automobile accident may result in a persistent disturbance of segmental motion. It is not known whether patients with chronic whiplash-associated disorders differ from patients with chronic insidious onset neck pain with respect to segmental mobility.\n\n\nMETHODS\nLateral radiographic views were taken in assisted maximal flexion and extension. A new measurement protocol determined rotational and translational motions of segments C3-C4 and C5-C6 with high precision. Segmental motion was compared with normal data as well as among groups.\n\n\nRESULTS\nIn the whiplash-associated disorders group, the C3-C4 and C4-C5 segments showed significantly increased rotational motions. Translational motions within each segment revealed a significant deviation from normal at the C3-C4 segment in the whiplash-associated disorders and insidious onset neck pain groups and at the C5-C6 segment in the whiplash-associated disorders group. Significantly more women in the whiplash-associated disorders group (35.3%) had abnormal increased segmental motions compared to the insidious onset neck pain group (8.6%) when both the rotational and the translational parameters were analyzed. When the translational parameter was analyzed separately, no significant difference was found between groups, or 17.6% (whiplash-associated disorders group) and 8.6% (insidious onset neck pain group), respectively.\n\n\nCONCLUSION\nHypermobility in the lower cervical spine segments in 12 out of 34 patients with chronic whiplash-associated disorders in this study point to injury caused by the accident. This subgroup, identified by the new radiographic protocol, might need a specific therapeutic intervention.",
"title": ""
},
{
"docid": "dfaccd0aa36efbafe5cb1101f9d4f93e",
"text": "At present, the modern manufacturing and management concepts such as digitalization, networking and intellectualization have been popularized in the industry, and the degree of industrial automation and information has been improved unprecedentedly. Industrial products are everywhere in the world. They are involved in design, manufacture, operation, maintenance and recycling. The whole life cycle involves huge amounts of data. Improving data quality is very important for data mining and data analysis. To solve the problem of data inconsistency is a very important part of improving data quality.",
"title": ""
},
{
"docid": "ec5bdd52fa05364923cb12b3ff25a49f",
"text": "A system to prevent subscription fraud in fixed telecommunications with high impact on long-distance carriers is proposed. The system consists of a classification module and a prediction module. The classification module classifies subscribers according to their previous historical behavior into four different categories: subscription fraudulent, otherwise fraudulent, insolvent and normal. The prediction module allows us to identify potential fraudulent customers at the time of subscription. The classification module was implemented using fuzzy rules. It was applied to a database containing information of over 10,000 real subscribers of a major telecom company in Chile. In this database, a subscription fraud prevalence of 2.2% was found. The prediction module was implemented as a multilayer perceptron neural network. It was able to identify 56.2% of the true fraudsters, screening only 3.5% of all the subscribers in the test set. This study shows the feasibility of significantly preventing subscription fraud in telecommunications by analyzing the application information and the customer antecedents at the time of application. q 2005 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
368b5ee483a00e75e00c493cdb4a427a
|
IoT's Tiny Steps towards 5G: Telco's Perspective
|
[
{
"docid": "48a1e20799ef94432145cefbfb65df25",
"text": "The rapidly increasing number of mobile devices, voluminous data, and higher data rate are pushing to rethink the current generation of the cellular mobile communication. The next or fifth generation (5G) cellular networks are expected to meet high-end requirements. The 5G networks are broadly characterized by three unique features: ubiquitous connectivity, extremely low latency, and very high-speed data transfer. The 5G networks would provide novel architectures and technologies beyond state-of-the-art architectures and technologies. In this paper, our intent is to find an answer to the question: “what will be done by 5G and how?” We investigate and discuss serious limitations of the fourth generation (4G) cellular networks and corresponding new features of 5G networks. We identify challenges in 5G networks, new technologies for 5G networks, and present a comparative study of the proposed architectures that can be categorized on the basis of energy-efficiency, network hierarchy, and network types. Interestingly, the implementation issues, e.g., interference, QoS, handoff, security-privacy, channel access, and load balancing, hugely effect the realization of 5G networks. Furthermore, our illustrations highlight the feasibility of these models through an evaluation of existing real-experiments and testbeds.",
"title": ""
}
] |
[
{
"docid": "adccd039cc54352eefd855567e8eeb62",
"text": "In this paper, we propose a novel classification method for the four types of lung nodules, i.e., well-circumscribed, vascularized, juxta-pleural, and pleural-tail, in low dose computed tomography scans. The proposed method is based on contextual analysis by combining the lung nodule and surrounding anatomical structures, and has three main stages: an adaptive patch-based division is used to construct concentric multilevel partition; then, a new feature set is designed to incorporate intensity, texture, and gradient information for image patch feature description, and then a contextual latent semantic analysis-based classifier is designed to calculate the probabilistic estimations for the relevant images. Our proposed method was evaluated on a publicly available dataset and clearly demonstrated promising classification performance.",
"title": ""
},
{
"docid": "9546092b8db5d22448af61df5f725bbf",
"text": "This paper provides a new equivalent circuit model for a spurline filter section in an inhomogeneous coupled-line medium whose even and odd mode phase velocities are unequal. This equivalent circuit permits the exact filter synthesis to be performed easily. Millimeter-wave filters at 26 to 40 GHz and 75 to 110 GHz have been fabricated using the model, and experimental results are included which validate the equivalent circuit model.",
"title": ""
},
{
"docid": "4eca3018852fd3107cb76d1d95f76a0a",
"text": "Within the past decade, empirical evidence has emerged supporting the use of Acceptance and Commitment Therapy (ACT) targeting shame and self-stigma. Little is known about the role of self-compassion in ACT, but evidence from other approaches indicates that self-compassion is a promising means of reducing shame and self-criticism. The ACT processes of defusion, acceptance, present moment, values, committed action, and self-as-context are to some degree inherently self-compassionate. However, it is not yet known whether the self-compassion inherent in the ACT approach explains ACT’s effectiveness in reducing shame and stigma, and/or whether focused self-compassion work may improve ACT outcomes for highly self-critical, shame-prone people. We discuss how ACT for shame and stigma may be enhanced by existing approaches specifically targeting self-compassion.",
"title": ""
},
{
"docid": "dbc253488a9f5d272e75b38dc98ea101",
"text": "A new form of a hybrid design of a microstrip-fed parasitic coupled ring fractal monopole antenna with semiellipse ground plane is proposed for modern mobile devices having a wireless local area network (WLAN) module along with a Worldwide Interoperability for Microwave Access (WiMAX) function. In comparison to the previous monopole structures, the miniaturized antenna dimension is only about 25 × 25 × 1 mm3 , which is 15 times smaller than the previous proposed design. By only increasing the fractal iterations, very good impedance characteristics are obtained. Throughout this letter, the improvement process of the impedance and radiation properties is completely presented and discussed.",
"title": ""
},
{
"docid": "46f3f27a88b4184a15eeb98366e599ec",
"text": "Radiomics is an emerging field in quantitative imaging that uses advanced imaging features to objectively and quantitatively describe tumour phenotypes. Radiomic features have recently drawn considerable interest due to its potential predictive power for treatment outcomes and cancer genetics, which may have important applications in personalized medicine. In this technical review, we describe applications and challenges of the radiomic field. We will review radiomic application areas and technical issues, as well as proper practices for the designs of radiomic studies.",
"title": ""
},
{
"docid": "3fcb9ab92334e3e214a7db08a93d5acd",
"text": "BACKGROUND\nA growing body of literature indicates that physical activity can have beneficial effects on mental health. However, previous research has mainly focussed on clinical populations, and little is known about the psychological effects of physical activity in those without clinically defined disorders.\n\n\nAIMS\nThe present study investigates the association between physical activity and mental health in an undergraduate university population based in the United Kingdom.\n\n\nMETHOD\nOne hundred students completed questionnaires measuring their levels of anxiety and depression using the Hospital Anxiety and Depression Scale (HADS) and their physical activity regime using the Physical Activity Questionnaire (PAQ).\n\n\nRESULTS\nSignificant differences were observed between the low, medium and high exercise groups on the mental health scales, indicating better mental health for those who engage in more exercise.\n\n\nCONCLUSIONS\nEngagement in physical activity can be an important contributory factor in the mental health of undergraduate students.",
"title": ""
},
{
"docid": "60d6869cadebea71ef549bb2a7d7e5c3",
"text": "BACKGROUND\nAcne is a common condition seen in up to 80% of people between 11 and 30 years of age and in up to 5% of older adults. In some patients, it can result in permanent scars that are surprisingly difficult to treat. A relatively new treatment, termed skin needling (needle dermabrasion), seems to be appropriate for the treatment of rolling scars in acne.\n\n\nAIM\nTo confirm the usefulness of skin needling in acne scarring treatment.\n\n\nMETHODS\nThe present study was conducted from September 2007 to March 2008 at the Department of Systemic Pathology, University of Naples Federico II and the UOC Dermatology Unit, University of Rome La Sapienza. In total, 32 patients (20 female, 12 male patients; age range 17-45) with acne rolling scars were enrolled. Each patient was treated with a specific tool in two sessions. Using digital cameras, photos of all patients were taken to evaluate scar depth and, in five patients, silicone rubber was used to make a microrelief impression of the scars. The photographic data were analysed by using the sign test statistic (alpha < 0.05) and the data from the cutaneous casts were analysed by fast Fourier transformation (FFT).\n\n\nRESULTS\nAnalysis of the patient photographs, supported by the sign test and of the degree of irregularity of the surface microrelief, supported by FFT, showed that, after only two sessions, the severity grade of rolling scars in all patients was greatly reduced and there was an overall aesthetic improvement. No patient showed any visible signs of the procedure or hyperpigmentation.\n\n\nCONCLUSION\nThe present study confirms that skin needling has an immediate effect in improving acne rolling scars and has advantages over other procedures.",
"title": ""
},
{
"docid": "7e4a222322346abc281d72534902d707",
"text": "Humic substances (HS) have been widely recognized as a plant growth promoter mainly by changes on root architecture and growth dynamics, which result in increased root size, branching and/or greater density of root hair with larger surface area. Stimulation of the H+-ATPase activity in cell membrane suggests that modifications brought about by HS are not only restricted to root structure, but are also extended to the major biochemical pathways since the driving force for most nutrient uptake is the electrochemical gradient across the plasma membrane. Changes on root exudation profile, as well as primary and secondary metabolism were also observed, though strongly dependent on environment conditions, type of plant and its ontogeny. Proteomics and genomic approaches with diverse plant species subjected to HS treatment had often shown controversial patterns of protein and gene expression. This is a clear indication that HS effects of plants are complex and involve non-linear, cross-interrelated and dynamic processes that need be treated with an interdisciplinary view. Being the humic associations recalcitrant to microbiological attack, their use as vehicle to introduce beneficial selected microorganisms to crops has been proposed. This represents a perspective for a sort of new biofertilizer designed for a sustainable agriculture, whereby plants treated with HS become more susceptible to interact with bioinoculants, while HS may concomitantly modify the structure/activity of the microbial community in the rhizosphere compartment. An enhanced knowledge of the effects on plants physiology and biochemistry and interaction with rhizosphere and endophytic microbes should lead to achieve increased crop productivity through a better use of HS inputs in Agriculture.",
"title": ""
},
{
"docid": "5dac8ef81c7a6c508c603b3fd6a87581",
"text": "In this paper, we present a novel benchmark for the evaluation of RGB-D SLAM systems. We recorded a large set of image sequences from a Microsoft Kinect with highly accurate and time-synchronized ground truth camera poses from a motion capture system. The sequences contain both the color and depth images in full sensor resolution (640 × 480) at video frame rate (30 Hz). The ground-truth trajectory was obtained from a motion-capture system with eight high-speed tracking cameras (100 Hz). The dataset consists of 39 sequences that were recorded in an office environment and an industrial hall. The dataset covers a large variety of scenes and camera motions. We provide sequences for debugging with slow motions as well as longer trajectories with and without loop closures. Most sequences were recorded from a handheld Kinect with unconstrained 6-DOF motions but we also provide sequences from a Kinect mounted on a Pioneer 3 robot that was manually navigated through a cluttered indoor environment. To stimulate the comparison of different approaches, we provide automatic evaluation tools both for the evaluation of drift of visual odometry systems and the global pose error of SLAM systems. The benchmark website [1] contains all data, detailed descriptions of the scenes, specifications of the data formats, sample code, and evaluation tools.",
"title": ""
},
{
"docid": "ccfba22a7697a9deaedbb7d1ceebbc33",
"text": "The Machine Learning field evolved from the broad field of Artificial Intelligence, which aims to mimic intelligent abilities of humans by machines. In the field of Machine Learningone considers the important question of how to make machines able to “learn”. Learning in this context is understood as inductive inference , where one observesexamplesthat represent incomplete information about some “statistical phenomenon”. Inunsupervisedlearning one typically tries to uncover hidden regularities (e.g. clusters) or to detect anomalies in the data (for instance some unusual machine function or a network intrusion). Insupervised learning , there is alabel associated with each example. It is supposed to be the answer to a question about the example. If the label is discrete, then the task is called classification problem– otherwise, for realvalued labels we speak of a regression problem. Based on these examples (including the labels), one is particularly interested to predict the answer for other cases before they are explicitly observed. Hence, learning is not only a question of remembering but also ofgeneralization to unseen cases .",
"title": ""
},
{
"docid": "5245cdc023c612de89f36d1573d208fe",
"text": "Inductive inference allows humans to make powerful generalizations from sparse data when learning about word meanings, unobserved properties, causal relationships, and many other aspects of the world. Traditional accounts of induction emphasize either the power of statistical learning, or the importance of strong constraints from structured domain knowledge, intuitive theories or schemas. We argue that both components are necessary to explain the nature, use and acquisition of human knowledge, and we introduce a theory-based Bayesian framework for modeling inductive learning and reasoning as statistical inferences over structured knowledge representations.",
"title": ""
},
{
"docid": "79f7f7294f23ab3aace0c4d5d589b4a8",
"text": "Along with the expansion of globalization, multilingualism has become a popular social phenomenon. More than one language may occur in the context of a single conversation. This phenomenon is also prevalent in China. A huge variety of informal Chinese texts contain English words, especially in emails, social media, and other user generated informal contents. Since most of the existing natural language processing algorithms were designed for processing monolingual information, mixed multilingual texts cannot be well analyzed by them. Hence, it is of critical importance to preprocess the mixed texts before applying other tasks. In this paper, we firstly analyze the phenomena of mixed usage of Chinese and English in Chinese microblogs. Then, we detail the proposed two-stage method for normalizing mixed texts. We propose to use a noisy channel approach to translate in-vocabulary words into Chinese. For better incorporating the historical information of users, we introduce a novel user aware neural network language model. For the out-of-vocabulary words (such as pronunciations, informal expressions and et al.), we propose to use a graph-based unsupervised method to categorize them. Experimental results on a manually annotated microblog dataset demonstrate the effectiveness of the proposed method. We also evaluate three natural language parsers with and without using the proposed method as the preprocessing step. From the results, we can see that the proposed method can significantly benefit other NLP tasks in processing mixed text.",
"title": ""
},
{
"docid": "3abcfd48703b399404126996ca837f90",
"text": "Various inductive loads used in all industries deals with the problem of power factor improvement. Capacitor bank connected in shunt helps in maintaining the power factor closer to unity. They improve the electrical supply quality and increase the efficiency of the system. Also the line losses are also reduced. Shunt capacitor banks are less costly and can be installed anywhere. This paper deals with shunt capacitor bank designing for power factor improvement considering overvoltages for substation installation. Keywords— Capacitor Bank, Overvoltage Consideration, Power Factor, Reactive Power",
"title": ""
},
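As a hedged illustration of the sizing arithmetic such a design involves, the snippet below uses the standard power-factor-correction relations (these are textbook formulas, not taken from the paper, and the load, voltage, and target figures are made up).

```python
import math

# Standard capacitor-bank sizing arithmetic for power factor correction.
P = 500e3        # active power of the inductive load, W (assumed)
pf_old = 0.72    # existing power factor (assumed)
pf_new = 0.95    # target power factor (assumed)
V_ll = 11e3      # line-to-line bus voltage, V (assumed)
f = 50.0         # system frequency, Hz (assumed)

phi1, phi2 = math.acos(pf_old), math.acos(pf_new)
Qc = P * (math.tan(phi1) - math.tan(phi2))        # reactive power the shunt bank must inject
C_star = Qc / (2 * math.pi * f * V_ll ** 2)       # per-phase capacitance for a wye-connected bank

print(f"required bank rating : {Qc / 1e3:.1f} kvar")
print(f"per-phase capacitance: {C_star * 1e6:.1f} uF (wye connection)")
# A practical design would then derate for harmonics and check the overvoltage
# limits discussed in the paper before fixing the bank size.
```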
{
"docid": "63cfadd9a71aaa1cbe1ead79f943f83c",
"text": "Persuasiveness is a high-level personality trait that quantifies the influence a speaker has on the beliefs, attitudes, intentions, motivations, and behavior of the audience. With social multimedia becoming an important channel in propagating ideas and opinions, analyzing persuasiveness is very important. In this work, we use the publicly available Persuasive Opinion Multimedia (POM) dataset to study persuasion. One of the challenges associated with this problem is the limited amount of annotated data. To tackle this challenge, we present a deep multimodal fusion architecture which is able to leverage complementary information from individual modalities for predicting persuasiveness. Our methods show significant improvement in performance over previous approaches.",
"title": ""
},
{
"docid": "8e878e5083d922d97f8d573c54cbb707",
"text": "Deep neural networks have become the stateof-the-art models in numerous machine learning tasks. However, general guidance to network architecture design is still missing. In our work, we bridge deep neural network design with numerical differential equations. We show that many effective networks, such as ResNet, PolyNet, FractalNet and RevNet, can be interpreted as different numerical discretizations of differential equations. This finding brings us a brand new perspective on the design of effective deep architectures. We can take advantage of the rich knowledge in numerical analysis to guide us in designing new and potentially more effective deep networks. As an example, we propose a linear multi-step architecture (LM-architecture) which is inspired by the linear multi-step method solving ordinary differential equations. The LMarchitecture is an effective structure that can be used on any ResNet-like networks. In particular, we demonstrate that LM-ResNet and LMResNeXt (i.e. the networks obtained by applying the LM-architecture on ResNet and ResNeXt respectively) can achieve noticeably higher accuracy than ResNet and ResNeXt on both CIFAR and ImageNet with comparable numbers of trainable parameters. In particular, on both CIFAR and ImageNet, LM-ResNet/LM-ResNeXt can significantly compress the original networkSchool of Mathematical Sciences, Peking University, Beijing, China MGH/BWH Center for Clinical Data Science, Masschusetts General Hospital, Harvard Medical School Center for Data Science in Health and Medicine, Peking University Laboratory for Biomedical Image Analysis, Beijing Institute of Big Data Research Beijing International Center for Mathematical Research, Peking University Center for Data Science, Peking University. Correspondence to: Bin Dong <dongbin@math.pku.edu.cn>, Quanzheng Li <Li.Quanzheng@mgh.harvard.edu>. Proceedings of the 35 th International Conference on Machine Learning, Stockholm, Sweden, PMLR 80, 2018. Copyright 2018 by the author(s). s while maintaining a similar performance. This can be explained mathematically using the concept of modified equation from numerical analysis. Last but not least, we also establish a connection between stochastic control and noise injection in the training process which helps to improve generalization of the networks. Furthermore, by relating stochastic training strategy with stochastic dynamic system, we can easily apply stochastic training to the networks with the LM-architecture. As an example, we introduced stochastic depth to LM-ResNet and achieve significant improvement over the original LM-ResNet on CIFAR10.",
"title": ""
},
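As I read the abstract above, the LM-architecture replaces the usual one-step residual update with a combination of the two previous feature maps, in analogy with a linear multi-step ODE solver. The toy numpy sketch below shows that general form; the exact parameterization of the mixing coefficient k_n in the paper may differ, and the dimensions, random weights, and initialization are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x, W):
    """Stand-in for a residual branch (a single ReLU layer here)."""
    return np.maximum(W @ x, 0.0)

dim, steps = 8, 6
Ws = [rng.normal(scale=0.1, size=(dim, dim)) for _ in range(steps)]
ks = rng.uniform(-0.1, 0.1, size=steps)   # mixing coefficients (assumed scalar per step)

x_prev = rng.normal(size=dim)             # u_{n-1}
x_curr = x_prev.copy()                    # u_n (initialized equal, an assumption)

# Plain ResNet update:            u_{n+1} = u_n + f(u_n)
# LM-architecture (sketched form): u_{n+1} = (1 - k_n) * u_n + k_n * u_{n-1} + f(u_n)
for n in range(steps):
    x_next = (1.0 - ks[n]) * x_curr + ks[n] * x_prev + f(x_curr, Ws[n])
    x_prev, x_curr = x_curr, x_next

print(x_curr.round(3))
```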
{
"docid": "58f9c7fd920d7a3c70321afa2aa5794b",
"text": "Retrieval of the phase of a signal is one of the major problems in signal processing. For an exact signal reconstruction, both magnitude, and phase spectrum of the signal is required. In many speech-based applications, only the magnitude spectrum is processed and the phase is ignored, which leads to degradation in the performance. Here, we propose a novel technique that enables the reconstruction of the speech signal from magnitude spectrum only. We consider the even-odd part decomposition of a causal sequence and process only on the real part of the DTFT of the signal. We propose the shifting of the real part of DTFT of the sequence to make it non-negative. By adding a constant of sufficient value to the real part of the DTFT, the exact signal reconstruction is possible from the magnitude or power spectrum alone. Moreover, we have compared our proposed approach with recently proposed phase retrieval method from magnitude spectrum of the Causal Delta Dominant (CDD) signal. We found that the method of phase retrieval from CDD signal and proposed method are identical under certain approximation. However, proposed method involves the less computational cost for the exact processing of the signal.",
"title": ""
},
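The building block behind the passage above, that a causal sequence is fully determined by the real part of its DTFT through its even part, can be checked numerically. The snippet demonstrates only that relation, with made-up sample values; it is not the paper's complete magnitude-only reconstruction procedure.

```python
import numpy as np

x = np.array([1.0, -2.0, 3.0, 0.5])   # causal test signal (made-up values)
N = 16                                 # DFT length >= 2*len(x) - 1 so the even part does not alias

XR = np.real(np.fft.fft(x, N))         # samples of Re{X(e^jw)}, the DTFT of the even part
xe = np.real(np.fft.ifft(XR, N))       # even part; negative times wrap to the end of the array

x_rec = np.empty(len(x))
x_rec[0] = xe[0]                       # x[0] = x_e[0]
x_rec[1:] = 2.0 * xe[1:len(x)]         # x[n] = 2 * x_e[n] for n >= 1 on a causal signal

print(np.allclose(x_rec, x))           # True: the real part alone recovers the causal sequence
```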
{
"docid": "cd892dec53069137c1c2cfe565375c62",
"text": "Optimal application performance on a Distributed Object Based System (DOBS) requires class fragmentation and the development of allocation schemes to place fragments at distributed sites so data transfer is minimized. Fragmentation enhances application performance by reducing the amount of irrelevant data accessed and the amount of data transferred unnecessarily between distributed sites. Algorithms for effecting horizontal and vertical fragmentation ofrelations exist, but fragmentation techniques for class objects in a distributed object based system are yet to appear in the literature. This paper first reviews a taxonomy of the fragmentation problem in a distributed object base. The paper then contributes by presenting a comprehensive set of algorithms for horizontally fragmenting the four realizable class models on the taxonomy. The fundamental approach is top-down, where the entity of fragmentation is the class object. Our approach consists of first generating primary horizontal fragments of a class based on only applications accessing this class, and secondly generating derived horizontal fragments of the class arising from primary fragments of its subclasses, its complex attributes (contained classes), and/or its complex methods classes. Finally, we combine the sets of primary and derived fragments of each class to produce the best possible fragments. Thus, these algorithms account for inheritance and class composition hierarchies as well as method nesting among objects, and are shown to be polynomial time.",
"title": ""
},
{
"docid": "4fac911d679240b84decef6618b97b4b",
"text": "A floating-gate current-output analog memory is implemented in a 0.13-μm digital CMOS process. The proposed memory cell achieves random-accessible and bidirectional updates with a sigmoid update rule. A novel writing scheme is proposed to obtain tunneling selectivity without on-chip highvoltage switches or charge pumps, and reduces interconnections and pin count. Parameters of empirical models for floating gate charge modification are extracted from measurements. Measurement and simulation results show that the proposed memory consumes 45 nW of power, has a 7-bit programming resolution, 53.8 dB dynamic range and 86.5 dB writing isolation.",
"title": ""
},
{
"docid": "354bc052f75e7884baca157492f5004c",
"text": "This paper is about how the SP theory of intelligence and its realization in the SP machine may, with advantage, be applied to the management and analysis of big data. The SP system-introduced in this paper and fully described elsewhere-may help to overcome the problem of variety in big data; it has potential as a universal framework for the representation and processing of diverse kinds of knowledge, helping to reduce the diversity of formalisms and formats for knowledge, and the different ways in which they are processed. It has strengths in the unsupervised learning or discovery of structure in data, in pattern recognition, in the parsing and production of natural language, in several kinds of reasoning, and more. It lends itself to the analysis of streaming data, helping to overcome the problem of velocity in big data. Central in the workings of the system is lossless compression of information: making big data smaller and reducing problems of storage and management. There is potential for substantial economies in the transmission of data, for big cuts in the use of energy in computing, for faster processing, and for smaller and lighter computers. The system provides a handle on the problem of veracity in big data, with potential to assist in the management of errors and uncertainties in data. It lends itself to the visualization of knowledge structures and inferential processes. A high-parallel, open-source version of the SP machine would provide a means for researchers everywhere to explore what can be done with the system and to create new versions of it.",
"title": ""
},
{
"docid": "9d73ff3f8528bb412c585d802873fcb4",
"text": "In this work, we introduce a novel interpretation of residual networks showing they are exponential ensembles. This observation is supported by a large-scale lesion study that demonstrates they behave just like ensembles at test time. Subsequently, we perform an analysis showing these ensembles mostly consist of networks that are each relatively shallow. For example, contrary to our expectations, most of the gradient in a residual network with 110 layers comes from an ensemble of very short networks, i.e., only 10-34 layers deep. This suggests that in addition to describing neural networks in terms of width and depth, there is a third dimension: multiplicity, the size of the implicit ensemble. Ultimately, residual networks do not resolve the vanishing gradient problem by preserving gradient flow throughout the entire depth of the network – rather, they avoid the problem simply by ensembling many short networks together. This insight reveals that depth is still an open research question and invites the exploration of the related notion of multiplicity.",
"title": ""
}
] |
scidocsrr
|
22d71c0d561635cc6d871a08812caca1
|
Algorithmic challenges in big data analytics
|
[
{
"docid": "0713b8668b5faf037b4553517151f9ab",
"text": "Deep learning is currently an extremely active research area in machine learning and pattern recognition society. It has gained huge successes in a broad area of applications such as speech recognition, computer vision, and natural language processing. With the sheer size of data available today, big data brings big opportunities and transformative potential for various sectors; on the other hand, it also presents unprecedented challenges to harnessing data and information. As the data keeps getting bigger, deep learning is coming to play a key role in providing big data predictive analytics solutions. In this paper, we provide a brief overview of deep learning, and highlight current research efforts and the challenges to big data, as well as the future trends.",
"title": ""
}
] |
[
{
"docid": "658697e652979f31ae15c50370fab764",
"text": "In this paper, we purpose a new holonomic omni-directional mobile robot that can move on not only flat floors but also uneven environment. A prototype robot can move in omni-direction, run on the uneven floors and slopes, and pass over large steps. The robot has seven universal wheels that have twelve cylindrical free rollers. We adopt a passive suspension system that enable the robot to change the shape of the robot body in proportion to ground states without using actuators and sensors. We construct the prototype robot and analyse the kinematics of the robot. The performance of the prototype robot is verified through experiments.",
"title": ""
},
{
"docid": "4a9b55e73d3b66e4795a36ca8803b7fb",
"text": "Fault prognostic in various levels of production of semiconductor chips is considered to be a great challenge. To reduce yield loss during the manufacturing process, tool abnormalities should be detected as early as possible during process monitoring. In this paper, we propose a novel fault prognostic method based on Bayesian networks. The network is designed such that it can process both discrete and continuous variables, to represent the correlations between critical deviations and to process quality control data based on divide-and-conquer strategy. Such a network enables us to perform high-precision multi-step prognostic on the status of the fabrication process given the current state of the sensory info. Additionally, we introduce a layer-wise approach for efficient learning of the Bayesian-network parameters. We evaluate the accuracy of our prognostic model on a wafer fabrication dataset where our model performs precise next-step fault prognostic by using the control sensory data.",
"title": ""
},
{
"docid": "74378f7c4cb0217fad37bc9cd9e6f91a",
"text": "Multiple sclerosis (MS) is a multi-focal progressive disorder of the central nervous system often resulting in diverse clinical manifestations. Imbalance appears in most people with multiple sclerosis (PwMS). A popular balance training tool is virtual reality (VR) with several advantages including increased compliance and user satisfaction. Therefore, the aim of this pilot RCT (Trial registration number, date: ISRCTN14425615, 21/01/2016) was to examine the efficacy of a 6-week VR balance training program using the computer assisted rehabilitation environment (CAREN) system (Motek Medical BV, Amsterdam, Netherlands) on balance measures in PwMS. Results were compared with those of a conventional balance exercise group. Secondary aims included the impact of this program on the fear of falling. Thirty-two PwMS were equally randomized into the VR intervention group or the control group. Each group received balance training sessions for 6 consecutive weeks, two sessions per week, 30 min sessions. Clinical balance tests and instrumented posturography outcome measures were collected upon initiation of the intervention programs and at termination. Final analysis included 30 patients (19 females, 11 males; mean age, (S.D.) = 45.2 (11.6) years; mean EDSS (S.D.) = 4.1 (1.3), mean disease duration (S.D.) = 11.0 (8.9) years). Both groups showed a main effect of time on the center of pressure (CoP) path length with eyes open (F = 5.278, P = .024), sway rate with eyes open (F = 5.852, P = .035), Functional Reach Test (F = 20.841, P = .001), Four Square Step Test (F = 9.011, P = .031) and the Fear of Falls self-reported questionnaire (F = 17.815, P = .023). In addition, significant differences in favor of the VR program were observed for the group x time interactions of the Functional Reach Test (F = 10.173, P = .009) and fear of falling (F = 6.710, P = .021). We demonstrated that balance training based on the CAREN device is an effective method of balance training for PwMS.",
"title": ""
},
{
"docid": "519a51d0516f6f2988258a9f54d08123",
"text": "This paper is intended for those with some knowledge of the repertory grid technique who would like to experiment for themselves with new forms of grid. It is argued that because the technique is quite powerful and the basic principles of its design are easy to grasp there is some danger in it being used inappropriately. Inappropriate applications may be harmful both to those involved directly, and to the general reputation of the technique itself. The paper therefore surveys a range of alternatives in the design of grids, and discusses the factors that are important to consider in these cases. But even if a design has been produced which is inherently \"good\", any applications based on this will be of doubtful value unless prior thought has been given to the availability of analytic techniques, and to the means of interpretation of the results. Hence the paper outlines a number of approaches to the analysis of grids (both manual and computer based), and it also illustrates the possible process of interpretation in a number of cases.",
"title": ""
},
{
"docid": "bbeebb29c7220009c8d138dc46e8a6dd",
"text": "Let’s begin with a problem that many of you have seen before. It’s a common question in technical interviews. You’re given as input an array A of length n, with the promise that it has a majority element — a value that is repeated in strictly more than n/2 of the array’s entries. Your task is to find the majority element. In algorithm design, the usual “holy grail” is a linear-time algorithm. For this problem, your post-CS161 toolbox already contains a subroutine that gives a linear-time solution — just compute the median of A. (Note: it must be the majority element.) So let’s be more ambitious: can we compute the majority element with a single left-to-right pass through the array? If you haven’t seen it before, here’s the solution:",
"title": ""
},
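The passage above stops just before presenting the solution. The standard single-pass, constant-space answer it is almost certainly alluding to is the Boyer-Moore majority vote algorithm, sketched below; the variable names are mine, and the attribution to these particular lecture notes is an assumption.

```python
# Boyer-Moore majority vote: one left-to-right pass, O(1) extra space.
# Correct only under the stated promise that a strict majority element exists:
# pairing each non-candidate occurrence against a candidate occurrence can
# never exhaust an element that appears in more than n/2 entries.
def majority_element(A):
    candidate, count = None, 0
    for x in A:
        if count == 0:
            candidate, count = x, 1
        elif x == candidate:
            count += 1
        else:
            count -= 1
    return candidate

assert majority_element([3, 1, 3, 2, 3, 3, 1]) == 3
```

Without the promise, a second pass would be needed to confirm the returned candidate really is a majority.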
{
"docid": "54ef7c7dae7a8ff508c45b192d975c2b",
"text": "In order to realize performance gain of a robot or an artificial arm, the end-effector which exhibits the same function as human beings and can respond to various objects and environment needs to be realized. Then, we developed the new hand which paid its attention to the structure of human being's hand which realize operation in human-like manipulation (called TUAT/Karlsruhe Humanoid Hand). Since this humanoid hand has the structure of adjusting grasp shape and grasp force automatically, it does not need a touch sensor and feedback control. It is designed for the humanoid robot which has to work autonomously or interactively in cooperation with humans and for an artificial arm for handicapped persons. The ideal end-effectors for such an artificial arm or a humanoid would be able to use the tools and objects that a person uses when working in the same environment. If this humanoid hand can operate the same tools, a machine and furniture, it may be possible to work under the same environment as human beings. As a result of adopting a new function of a palm and the thumb, the robot hand could do the operation which was impossible until now. The humanoid hand realized operations which hold a kitchen knife, grasping a fan, a stick, uses the scissors and uses chopsticks.",
"title": ""
},
{
"docid": "05eb344fb8b671542f6f0228774a5524",
"text": "This paper presents an improved hardware structure for the computation of the Whirlpool hash function. By merging the round key computation with the data compression and by using embedded memories to perform part of the Galois Field (28) multiplication, a core can be implemented in just 43% of the area of the best current related art while achieving a 12% higher throughput. The proposed core improves the Throughput per Slice compared to the state of the art by 160%, achieving a throughput of 5.47 Gbit/s with 2110 slices and 32 BRAMs on a VIRTEX II Pro FPGA. Results for a real application are also presented by considering a polymorphic computational approach.",
"title": ""
},
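Part of what the hardware design above moves into embedded memories is multiplication in GF(2^8). The Python sketch below shows the table-lookup idea in software; the reduction polynomial 0x11D (x^8 + x^4 + x^3 + x^2 + 1) is my recollection of Whirlpool's field polynomial and should be treated as an assumption, and the constant 0x09 is just an example multiplier.

```python
POLY = 0x11D  # assumed reduction polynomial for the field (verify against the Whirlpool spec)

def gf_mul(a: int, b: int) -> int:
    """Carry-less multiply of two GF(2^8) elements, reduced modulo POLY."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= POLY
        b >>= 1
    return r

def mul_table(c: int):
    """256-entry table for multiplication by the constant c; in hardware this is
    the kind of partial product that can live in an embedded memory block."""
    return [gf_mul(x, c) for x in range(256)]

assert all(gf_mul(x, 1) == x for x in range(256))   # multiplying by 1 is the identity
T9 = mul_table(0x09)
print(hex(T9[0x57]))                                 # one table lookup replaces the bitwise loop
```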
{
"docid": "319bfee25d07faa5b2497102f765ad95",
"text": "Mobile computing is where the future is. This theme is not far fetched. We all want anywhere anytime communication. Ubiquitous communication has been made possible in the recent years with the advent of mobile ad hoc networks. The benefits of ubiquitous connectivity not only makes our lives more comfortable but also helps businesses efficiently deploy and manage their resources. These infrastructureless networks that enable \" anywhere anytime \" information access pose several challenging issues. There are several issues in the design and realization of these networks. Mobility planning is tricky and needs to be designed more carefully. Keeping track of mobiles in the infrastructure, the problem more popularly known as the location management problem, is another key issue to be addressed. The load on the servers, for handling location updates and queries, needs to be balanced. Moreover, the operation needs to be robust due to a high probability of temporary or permanent unavailability of one or more of the intermediate nodes. The transport protocols also need to be robust as a high degree of interference and noise can be expected in such environments. Applications will have to designed to incorporate environment-specific features in order to make them more robust. We believe that satisfactory solutions to these problems are essential in order to create smart environments using ad hoc networking infrastructure. While medium access in wireless networks still remains an active research area due to the limited availability of wireless bandwidth, the absence of infrastructure makes the problem more challenging. Mobility, being one of the inherent properties of ad hoc networks, results in frequent changes in the network topology, making routing in such dynamic environments complex. In short, the presence of wireless medium, mobility, and lack of infrastructure makes the problem of routing and scheduling far more challenging in ad hoc networks. Providing services in such networks while guaranteeing the performance requirements specified by the users remains an interesting and active research area. This issue of MONET is dedicated to papers relating to this topic. These papers are selected from the papers published in The papers were revised and reviewed again. In this special issue, we have selected seven papers covering various aspects of routing, multicasting, and Quality-of-Service in these networks. The first paper by Tang, Correa and Gerla, \" Effects of Ad Hoc MAC Layer Medium Access Mechanisms under TCP \" , deals with the issues in medium access control …",
"title": ""
},
{
"docid": "13ecd4155910512bf6159710f572e0c1",
"text": "Purpose – The purpose of this paper is to present the design and analysis of a robotic finger mechanism for robust industrial applications. Design/methodology/approach – The resultant design is a compact rigid link finger, which is adaptive to different shapes and sizes providing necessary grasping features. A number of such fingers can be assembled to function as a special purpose end effector. Findings – The mechanism removes a number of significant problems usually experienced with tendon-based designs. The finger actuation mechanism forms a compact and positive drive unit within the end effector’s body using solid mechanical linkages and integrated actuators. Practical implications – The paper discusses the design issues associated with a limited number of actuators to operate in a constrained environment and presents various considerations necessary to ensure safe and reliable operations. Originality/value – The design is original in existence and developed for special purpose handling applications that offers a strong and reliable system where space and safety is of prime concern.",
"title": ""
},
{
"docid": "7fa1ebea0989f7a6b8c0396bce54a54d",
"text": "Linear Discriminant Analysis (LDA) is a very common technique for dimensionality reduction problems as a preprocessing step for machine learning and pattern classification applications. At the same time, it is usually used as a black box, but (sometimes) not well understood. The aim of this paper is to build a solid intuition for what is LDA, and how LDA works, thus enabling readers of all levels be able to get a better understanding of the LDA and to know how to apply this technique in different applications. The paper first gave the basic definitions and steps of how LDA technique works supported with visual explanations of these steps. Moreover, the two methods of computing the LDA space, i.e. class-dependent and class-independent methods, were explained in details. Then, in a step-by-step approach, two numerical examples are demonstrated to show how the LDA space can be calculated in case of the class-dependent and class-independent methods. Furthermore, two of the most common LDA problems (i.e. Small Sample Size (SSS) and non-linearity problems) were highlighted and illustrated, and stateof-the-art solutions to these problems were investigated and explained. Finally, a number of experiments was conducted with different datasets to (1) investigate the effect of the eigenvectors that used in the LDA space on the robustness of the extracted feature for the classification accuracy, and (2) to show when the SSS problem occurs and how it can be addressed.",
"title": ""
},
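A compact two-class numpy sketch of the class-independent LDA computation the paper walks through: within- and between-class scatter matrices, then projection onto the leading eigenvector of Sw^-1 Sb. The toy Gaussian data and the midpoint threshold are my own illustrative choices, not the paper's numerical examples.

```python
import numpy as np

rng = np.random.default_rng(1)
X1 = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(50, 2))   # class 1 samples (made-up)
X2 = rng.normal(loc=[2.0, 1.5], scale=0.5, size=(50, 2))   # class 2 samples (made-up)

m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
m = np.vstack([X1, X2]).mean(axis=0)

Sw = (X1 - m1).T @ (X1 - m1) + (X2 - m2).T @ (X2 - m2)     # within-class scatter
Sb = (len(X1) * np.outer(m1 - m, m1 - m)
      + len(X2) * np.outer(m2 - m, m2 - m))                 # between-class scatter

eigvals, eigvecs = np.linalg.eig(np.linalg.inv(Sw) @ Sb)
w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])        # discriminant direction

# Project onto w and classify with a simple midpoint threshold on the 1-D scores.
scores1, scores2 = X1 @ w, X2 @ w
t = (scores1.mean() + scores2.mean()) / 2.0
if scores2.mean() > scores1.mean():
    acc = np.mean(np.concatenate([scores1 < t, scores2 > t]))
else:
    acc = np.mean(np.concatenate([scores1 > t, scores2 < t]))
print("training accuracy of the 1-D projection:", acc)
```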
{
"docid": "86f0e783a93fc783e10256c501008b0d",
"text": "We present a biologically-motivated system for the recognition of actions from video sequences. The approach builds on recent work on object recognition based on hierarchical feedforward architectures [25, 16, 20] and extends a neurobiological model of motion processing in the visual cortex [10]. The system consists of a hierarchy of spatio-temporal feature detectors of increasing complexity: an input sequence is first analyzed by an array of motion- direction sensitive units which, through a hierarchy of processing stages, lead to position-invariant spatio-temporal feature detectors. We experiment with different types of motion-direction sensitive units as well as different system architectures. As in [16], we find that sparse features in intermediate stages outperform dense ones and that using a simple feature selection approach leads to an efficient system that performs better with far fewer features. We test the approach on different publicly available action datasets, in all cases achieving the highest results reported to date.",
"title": ""
},
{
"docid": "090f5cb05d2f9d6d2456b3eb02a3a663",
"text": "The mesialization of molars in the lower jaw represents a particularly demanding scenario for the quality of orthodontic anchorage. The use of miniscrew implants has proven particularly effective; whereby, these orthodontic implants are either directly loaded (direct anchorage) or employed indirectly to stabilize a dental anchorage block (indirect anchorage). The objective of this study was to analyze the biomechanical differences between direct and indirect anchorage and their effects on the primary stability of the miniscrew implants. For this purpose, several computer-aided design/computer-aided manufacturing (CAD-CAM)-models were prepared from the CT data of a 21-year-old patient, and these were combined with virtually constructed models of brackets, arches, and miniscrew implants. Based on this, four finite element method (FEM) models were generated by three-dimensional meshing. Material properties, boundary conditions, and the quality of applied forces (direction and magnitude) were defined. After solving the FEM equations, strain values were recorded at predefined measuring points. The calculations made using the FEM models with direct and indirect anchorage were statistically evaluated. The loading of the compact bone in the proximity of the miniscrew was clearly greater with direct than it was with indirect anchorage. The more anchor teeth were integrated into the anchoring block with indirect anchorage, the smaller was the peri-implant loading of the bone. Indirect miniscrew anchorage is a reliable possibility to reduce the peri-implant loading of the bone and to reduce the risk of losing the miniscrew. The more teeth are integrated into the anchoring block, the higher is this protective effect. In clinical situations requiring major orthodontic forces, it is better to choose an indirect anchorage in order to minimize the risk of losing the miniscrew.",
"title": ""
},
{
"docid": "c3539090bef61fcfe4a194058a61d381",
"text": "Real-time environment monitoring and analysis is an important research area of Internet of Things (IoT). Understanding the behavior of the complex ecosystem requires analysis of detailed observations of an environment over a range of different conditions. One such example in urban areas includes the study of tree canopy cover over the microclimate environment using heterogeneous sensor data. There are several challenges that need to be addressed, such as obtaining reliable and detailed observations over monitoring area, detecting unusual events from data, and visualizing events in real-time in a way that is easily understandable by the end users (e.g., city councils). In this regard, we propose an integrated geovisualization framework, built for real-time wireless sensor network data on the synergy of computational intelligence and visual methods, to analyze complex patterns of urban microclimate. A Bayesian maximum entropy-based method and a hyperellipsoidal model-based algorithm have been build in our integrated framework to address above challenges. The proposed integrated framework was verified using the dataset from an indoor and two outdoor network of IoT devices deployed at two strategically selected locations in Melbourne, Australia. The data from these deployments are used for evaluation and demonstration of these components’ functionality along with the designed interactive visualization components.",
"title": ""
},
{
"docid": "9f3df90362c3fcba3130de916282361c",
"text": "There has been substantial recent interest in annotation schemes that can be applied consistently to many languages. Building on several recent efforts to unify morphological and syntactic annotation, the Universal Dependencies (UD) project seeks to introduce a cross-linguistically applicable part-of-speech tagset, feature inventory, and set of dependency relations as well as a large number of uniformly annotated treebanks. We present Universal Dependencies for Finnish, one of the ten languages in the recent first release of UD project treebank data. We detail the mapping of previously introduced annotation to the UD standard, describing specific challenges and their resolution. We additionally present parsing experiments comparing the performance of a stateof-the-art parser trained on a languagespecific annotation schema to performance on the corresponding UD annotation. The results show improvement compared to the source annotation, indicating that the conversion is accurate and supporting the feasibility of UD as a parsing target. The introduced tools and resources are available under open licenses from http://bionlp.utu.fi/ud-finnish.html.",
"title": ""
},
{
"docid": "34bd9e4717db9c3c5b13874446029d8f",
"text": "A steganographer network corresponds to a graphic structure that the involved vertices (or called nodes) denote social entities such as the data encoders and data decoders, and the associated edges represent any real communicable channels or other social links that could be utilized for steganography. Unlike traditional steganographic algorithms, a steganographer network models steganographic communication by an abstract way such that the concerned underlying characteristics of steganography are quantized as analyzable parameters in the network. In this paper, we will analyze two problems in a steganographer network. The first problem is a passive attack to a steganographer network where a network monitor has collected a list of suspicious vertices corresponding to the data encoders or decoders. The network monitor expects to break (disconnect) the steganographic communication down between the suspicious vertices while keeping the cost as low as possible. The second one relates to determining a set of vertices corresponding to the data encoders (senders) such that all vertices can share a message by neighbors. We point that, the two problems are equivalent to the minimum cut problem and the minimum-weight dominating set problem.",
"title": ""
},
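The first problem described above reduces to a minimum s-t cut, which can be tried directly with networkx. The tiny graph, the per-channel costs used as capacities, and the node names are all made up for illustration; s and t stand for the suspected encoder and decoder.

```python
import networkx as nx

# Each tuple is (node, node, cost of severing that channel); values are made up.
edges = [("s", "a", 3), ("s", "b", 2), ("a", "b", 1),
         ("a", "t", 2), ("b", "t", 3)]

G = nx.DiGraph()
for u, v, cost in edges:
    # Model the undirected channel as two directed arcs with the same cut capacity.
    G.add_edge(u, v, capacity=cost)
    G.add_edge(v, u, capacity=cost)

cut_value, (side_s, side_t) = nx.minimum_cut(G, "s", "t")
severed = [(u, v) for u, v in G.edges if u in side_s and v in side_t]
print("total cost:", cut_value)        # minimum monitoring effort to disconnect s from t
print("channels to break:", severed)   # the edges crossing the cut
```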
{
"docid": "0c1f01d9861783498c44c7c3d0acd57e",
"text": "We understand a sociotechnical system as a multistakeholder cyber-physical system. We introduce governance as the administration of such a system by the stakeholders themselves. In this regard, governance is a peer-to-peer notion and contrasts with traditional management, which is a top-down hierarchical notion. Traditionally, there is no computational support for governance and it is achieved through out-of-band interactions among system administrators. Not surprisingly, traditional approaches simply do not scale up to large sociotechnical systems.\n We develop an approach for governance based on a computational representation of norms in organizations. Our approach is motivated by the Ocean Observatory Initiative, a thirty-year $400 million project, which supports a variety of resources dealing with monitoring and studying the world's oceans. These resources include autonomous underwater vehicles, ocean gliders, buoys, and other instrumentation as well as more traditional computational resources. Our approach has the benefit of directly reflecting stakeholder needs and assuring stakeholders of the correctness of the resulting governance decisions while yielding adaptive resource allocation in the face of changes in both stakeholder needs and physical circumstances.",
"title": ""
},
{
"docid": "8ce3fc72fa132b8baeff35035354d194",
"text": "Raman spectroscopy is a molecular vibrational spectroscopic technique that is capable of optically probing the biomolecular changes associated with diseased transformation. The purpose of this study was to explore near-infrared (NIR) Raman spectroscopy for identifying dysplasia from normal gastric mucosa tissue. A rapid-acquisition dispersive-type NIR Raman system was utilised for tissue Raman spectroscopic measurements at 785 nm laser excitation. A total of 76 gastric tissue samples obtained from 44 patients who underwent endoscopy investigation or gastrectomy operation were used in this study. The histopathological examinations showed that 55 tissue specimens were normal and 21 were dysplasia. Both the empirical approach and multivariate statistical techniques, including principal components analysis (PCA), and linear discriminant analysis (LDA), together with the leave-one-sample-out cross-validation method, were employed to develop effective diagnostic algorithms for classification of Raman spectra between normal and dysplastic gastric tissues. High-quality Raman spectra in the range of 800–1800 cm−1 can be acquired from gastric tissue within 5 s. There are specific spectral differences in Raman spectra between normal and dysplasia tissue, particularly in the spectral ranges of 1200–1500 cm−1 and 1600–1800 cm−1, which contained signals related to amide III and amide I of proteins, CH3CH2 twisting of proteins/nucleic acids, and the C=C stretching mode of phospholipids, respectively. The empirical diagnostic algorithm based on the ratio of the Raman peak intensity at 875 cm−1 to the peak intensity at 1450 cm−1 gave the diagnostic sensitivity of 85.7% and specificity of 80.0%, whereas the diagnostic algorithms based on PCA-LDA yielded the diagnostic sensitivity of 95.2% and specificity 90.9% for separating dysplasia from normal gastric tissue. Receiver operating characteristic (ROC) curves further confirmed that the most effective diagnostic algorithm can be derived from the PCA-LDA technique. Therefore, NIR Raman spectroscopy in conjunction with multivariate statistical technique has potential for rapid diagnosis of dysplasia in the stomach based on the optical evaluation of spectral features of biomolecules.",
"title": ""
},
{
"docid": "549f719cd53f769123c34d65dca1f566",
"text": "BACKGROUND\nA large body of scientific literature derived from experimental studies emphasizes the vital role of vagal-nociceptive networks in acute pain processing. However, research on vagal activity, indexed by vagally-mediated heart rate variability (vmHRV) in chronic pain patients (CPPs), has not yet been summarized.\n\n\nOBJECTIVES\nTo systematically investigate differences in vagus nerve activity indexed by time- and frequency-domain measures of vmHRV in CPPs compared to healthy controls (HCs).\n\n\nSTUDY DESIGN\nA systematic review and meta-analysis, including meta-regression on a variety of populations (i.e., clinical etiology) and study-level (i.e., length of HRV recording) covariates.\n\n\nSETTING\nNot applicable (variety of studies included in the meta-analysis).\n\n\nMETHODS\nEight computerized databases (PubMed via MEDLINE, PsycNET, PsycINFO, Embase, CINAHL, Web of Science, PSYNDEX, and the Cochrane Library) in addition to a hand search were systematically screened for eligible studies based on pre-defined inclusion criteria. A meta-analysis on all empirical investigations reporting short- and long-term recordings of continuous time- (root-mean-square of successive R-R-interval differences [RMSSD]) and frequency-domain measures (high-frequency [HF] HRV) of vmHRV in CPPs and HCs was performed. True effect estimates as adjusted standardized mean differences (SMD; Hedges g) combined with inverse variance weights using a random effects model were computed.\n\n\nRESULTS\nCPPs show lower vmHRV than HCs indexed by RMSSD (Z = 5.47, P < .0001; g = -0.24;95% CI [-0.33, -0.16]; k = 25) and HF (Z = 4.54, P < .0001; g = -0.30; 95% CI [-0.44, -0.17]; k = 61).Meta-regression on covariates revealed significant differences by clinical etiology, age, gender, and length of HRV recording.\n\n\nLIMITATIONS\nWe did not control for other potential covariates (i.e., duration of chronic pain, medication intake) which may carry potential risk of bias.\n\n\nCONCLUSION(S)\nThe present meta-analysis is the most extensive review of the current evidence on vagal activity indexed by vmHRV in CPPs. CPPs were shown to have lower vagal activity, indexed by vmHRV, compared to HCs. Several covariates in this relationship have been identified. Further research is needed to investigate vagal activity in CPPs, in particular prospective and longitudinal follow-up studies are encouraged.",
"title": ""
},
{
"docid": "bfdc5925a540686d03b6314bf2009db3",
"text": "This paper describes our programmable analog technology based around floating-gate transistors that allow for non-volatile storage as well as computation through the same device. We describe the basic concepts for floating-gate devices, capacitor-based circuits, and the basic charge modification mechanisms that makes this analog technology programmable. We describe the techniques to extend these techniques to program an nonhomogenious array of floating-gate devices.",
"title": ""
},
{
"docid": "2b0969dd0089bd2a2054957477ea4ce1",
"text": "A self-signaling action is an action chosen partly to secure good news about one’s traits or abilities, even when the action has no causal impact on these traits and abilities. We discuss some of the odd things that happen when self-signaling is introduced into an otherwise rational conception of action. We employ a signaling game perspective in which the diagnostic signals are an endogenous part of the equilibrium choice. We are interested (1) in pure self-signaling, separate from any desire to be regarded well by others, and (2) purely diagnostic motivation, that is, caring about what an action might reveal about a trait even when that action has no causal impact on it. When diagnostic motivation is strong, the person’s actions exhibit a rigidity characteristic of personal rules. Our model also predicts that a boost in self-image positively affects actions even though it leaves true preferences unchanged — we call this a “moral placebo effect.” 1 The chapter draws on (co-authored) Chapter 3 of Bodner’s doctoral dissertation (Bodner, 1995) and an unpublished MIT working paper (Bodner and Prelec, 1997). The authors thank Bodner’s dissertation advisors France Leclerc and Richard Thaler, workshop discussants Thomas Schelling, Russell Winer, and Mathias Dewatripont, and George Ainslie, Michael Bratman, Juan Carillo, Itzakh Gilboa, George Loewenstein, Al Mela, Matthew Rabin, Duncan Simester and Florian Zettelmeyer for comments on these ideas (with the usual disclaimer). We are grateful to Birger Wernerfelt for drawing attention to Bernheim's work on social conformity. Author addresses: Bodner – Director, Learning Innovations, 13\\4 Shimshon St., Jerusalem, 93501, Israel, learning@netvision.net.il; Prelec — E56-320, MIT, Sloan School, 38 Memorial Drive, Cambridge, MA 02139, dprelec@mit.edu. 1 Psychological evidence When we make a choice we reveal something of our inner traits or dispositions, not only to others, but also to ourselves. After the fact, this can be a source of pleasure or pain, depending on whether we were impressed or disappointed by our actions. Before the fact, the anticipation of future pride or remorse can influence what we choose to do. In a previous paper (Bodner and Prelec, 1997), we described how the model of a utility maximizing individual could be expanded to include diagnostic utility as a separate motive for action. We review the basic elements of that proposal here. The inspiration comes directly from signaling games in which actions of one person provide an informative signal to others, which in turn affects esteem (Bernheim, 1994). Here, however, actions provide a signal to ourselves, that is, actions are selfsignaling. For example, a person who takes the daily jog in spite of the rain may see that as a gratifying signal of willpower, dedication, or future well being. For someone uncertain about where he or she stands with respect to these dispositions, each new choice can provide a bit of good or bad \"news.” We incorporate the value of such \"news\" into the person's utility function. The notion that a person may draw inferences from an action he enacted partially in order to gain that inference has been posed as a philosophical paradox (e.g. Campbell and Sawden, 1985; Elster, 1985, 1989). A key problem is the following: Suppose that the disposition in question is altruism, and a person interprets a 25¢ donation to a panhandler as evidence of altruism. 
If the boost in self-esteem makes it worth giving the quarter even when there is no concern for the poor, than clearly, such a donation is not valid evidence of altruism. Logically, giving is valid evidence of high altruism only if a person with low altruism would not have given the quarter. This reasoning motivates our equilibrium approach, in which inferences from actions are an endogenous part of the equilibrium choice. As an empirical matter several studies have demonstrated that diagnostic considerations do indeed affect behavior (Quattrone and Tversky, 1984; Shafir and Tversky, 1992; Bodner, 1995). An elegant experiment by Quattrone and Tversky (1984) both defines the self-signaling phenomenon and demonstrates its existence. Quattrone and Tversky first asked each subject to take a cold pressor pain test in which the subject's arm is submerged in a container of cold water until the subject can no longer tolerate the pain. Subsequently the subject was told that recent medical studies had discovered a certain inborn heart condition, and that people with this condition are “frequently ill, prone to heart-disease, and have shorter-than-average life expectancy.” Subjects were also told that this type could be identified by the effect of exercise on the cold pressor test. Subjects were randomly assigned to one of two conditions in which they were told that the bad type of heart was associated with either increases or with decreases in tolerance to the cold water after exercise. Subjects then repeated the cold pressor test, after riding an Exercycle for one minute. As predicted, the vast majority of subjects showed changes in tolerance on the second cold pressor trial in the direction correlated of “good news”—if told that decreased tolerance is diagnostic of a bad heart they endured the near-freezing water longer (and vice versa). The result shows that people are willing to bear painful consequences for a behavior that is a signal, though not a cause, of a medical diagnosis. An experiment by Shafir and Tversky (1992) on \"Newcomb's paradox\" reinforces the same point. In the philosophical version of the paradox, a person is (hypothetically) presented with two boxes, A and B. Box A contains either nothing or some large amount of money deposited by an \"omniscient being.\" Box B contains a small amount of money for sure. The decision-maker doesn’t know what Box A contains choice, and has to choose whether to take the contents of that box (A) or of both boxes (A+B). What makes the problem a paradox is that the person is asked to believe that the omniscient being has already predicted her choice, and on that basis has already either \"punished\" a greedy choice of (A+B) with no deposit in A or \"rewarded\" a choice of (A) with a large deposit. The dominance principle argues in favor of choosing both boxes, because the deposits are fixed at the moment of choice. This is the philosophical statement of the problem. In the actual experiment, Shafir and Tversky presented a variant of Newcomb’s problem at the end of another, longer experiment, in which subjects repeatedly played a Prisoner’s Dilemma game against (virtual) opponents via computer terminals. After finishing these games, a final “bonus” problem appeared, with the two Newcomb boxes, and subjects had to choose whether to take money from one box or from both boxes. 
The experimental cover story did not mention an omniscient being but instead informed the subjects that \"a program developed at MIT recently was applied during the entire session [of Prisoner’s Dilemma choices] to analyze the pattern of your preference.” Ostensibly, this mighty program could predict choices, one or two boxes, with 85% accuracy, and, of course, if the program predicted a choice of both boxes it would then put nothing in Box A. Although it was evident that the money amounts were already set at the moment of choice, most experimental subjects opted for the single box. It is “as if” they believed that by declining to take the money in Box B, they could change the amount of money already deposited in box A. Although these are relatively recent experiments, their results are consistent with a long stream of psychological research, going back at least to the James-Lange theory of emotions which claimed that people infer their own states from behavior (e.g., they feel afraid if they see themselves running). The notion that people adopt the perspective of an outside observer when interpreting their own actions has been extensively explored in the research on self-perception (Bem, 1972). In a similar vein, there is an extensive literature confirming the existence of “self-handicapping” strategies, where a person might get too little sleep or under-prepare for an examination. In such a case, a successful performance could be attributed to ability while unsuccessful performance could be externalized as due to the lack of proper preparation (e.g. Berglas and Jones, 1978; Berglas and Baumeister, 1993). This broader context of psychological research suggests that we should view the results of Quattrone and Tversky, and Shafir and Tversky not as mere curiosities, applying to only contrived experimental situations, but instead as evidence of a general motivational “short circuit.” Motivation does not require causality, even when the lack of causality is utterly transparent. If anything, these experiments probably underestimate the impact of diagnosticity in realistic decisions, where the absence of causal links between actions and dispositions is less evident. Formally, our model distinguishes between outcome utility — the utility of the anticipated causal consequences of choice — and diagnostic utility — the value of the adjusted estimate of one’s disposition, adjusted in light of the choice. Individuals act so as to maximize some combination of the two sources of utility, and (in one version of the model) make correct inferences about what their choices imply about their dispositions. When diagnostic utility is sufficiently important, the individual chooses the same action independent of disposition. We interpret this as a personal rule. We describe other ways in which the behavior of self-signaling individuals is qualitatively different from that of standard economic agents. First, a self-signaling person will be more likely to reveal discrepancies between resolutions and actions when resolutions pertain to actions that are contingent or delayed. Thus she might honestly commit to do some worthy action if the circumstances requiring t",
"title": ""
}
] |
scidocsrr
|
def47dbc74644d476ce87066d24ce6c2
|
Probabilistic movement modeling for intention inference in human-robot interaction
|
[
{
"docid": "751563e10e62d6b8c4a4db9909e92058",
"text": "Summarising a high dimensional data set with a low dimension al embedding is a standard approach for exploring its structure. In this paper we provide an over view of some existing techniques for discovering such embeddings. We then introduce a novel prob abilistic interpretation of principal component analysis (PCA) that we term dual probabilistic PC A (DPPCA). The DPPCA model has the additional advantage that the linear mappings from the e mbedded space can easily be nonlinearised through Gaussian processes. We refer to this mod el as a Gaussian process latent variable model (GP-LVM). Through analysis of the GP-LVM objective fu nction, we relate the model to popular spectral techniques such as kernel PCA and multidim ensional scaling. We then review a practical algorithm for GP-LVMs in the context of large data sets and develop it to also handle discrete valued data and missing attributes. We demonstrat e the model on a range of real-world and artificially generated data sets.",
"title": ""
}
] |
[
{
"docid": "0efb5573afb882566387548ec3017843",
"text": "*Corresponding author Email: ahmed_shabaka@yahoo.com Phone: +20643381839 A pot experiment was conducted to investigate the effect of irrigation with different magnetized water on faba bean (Vicia faba L) growth and composition. Prepared sandy soil was packed in plastic pots (5 kg capacity) at a rate of 4 kg. Faba bean seeds were cultivated at rate of 4 seeds/pot. After the germination,faba bean plants were thinned into 2 plants/pot. Both sewage sludge compost (SSC) and tricalcium phosphate (TCP) were added to the soil at a rate of 25 tones ha-1 and 720 kg ha-1,respectively. Irrigation water sources were magnetized by passing throws a magnetic field 1000 gauss magnetron unit of 0.5 inch diameter. Plant length, shoot and root fresh and dry weights of faba bean were significantly increased by using the different magnetized irrigation water sources compared with the non-magnetized water. Only root fresh and dry weights of faba bean plant were significantly increased by using magnetized irrigation water and the different soil organic and inorganic treatments. On the other, plant length, shoot and fresh and dry weights were not significantly affected by the combined effect of magnetism and soil treatments. Shoot N, P and K contents and uptake of faba bean were significantly increased by the individual and the combined application of SSC and TCP to soil compared with untreated soil. Shoot N, P and K contents and uptake of faba bean was significantly increased by using magnetized irrigation water compared with the non-magnetized water.Generally, using different magnetized irrigation water sources, soil salinity, soluble cations and anions were significantly decreased by using magnetized water. Soil salinity, soluble cations and anions were significantly increased by adding both the individual and the combined SSC and TCP. Available soil N, P and K were significantly increased by adding both the individual and the combined SSC and TCP. Using different magnetized water sources, available soil N, P and K were significantly increased.",
"title": ""
},
{
"docid": "7d1348ad0dbd8f33373e556009d4f83a",
"text": "Laryngeal neoplasms represent 2% of all human cancers. They befall mainly the male sex, especially between 50 and 70 years of age, but exceptionally may occur in infancy or extreme old age. Their occurrence has increased considerably inclusively due to progressive population again. The present work aims at establishing a relation between this infirmity and its prognosis in patients submitted to the treatment recommended by Departament of Otolaryngology and Head Neck Surgery of the School of Medicine of São José do Rio Preto. To this effect, by means of karyometric optical microscopy, cell nuclei in the glottic region of 20 individuals, divided into groups according to their tumor stage and time of survival, were evaluated. Following comparation with a control group and statistical analsis, it became possible to verify that the lesser diameter of nuclei is of prognostic value for initial tumors in this region.",
"title": ""
},
{
"docid": "bbdf68b20aed9801ece9dc2adaa46ba5",
"text": "Coflow is a collection of parallel flows, while a job consists of a set of coflows. A job is completed if all of the flows completes in the coflows. Therefore, the completion time of a job is affected by the latest flows in the coflows. To guarantee the job completion time and service performance, the job deadline and the dependency of coflows needs to be considered in the scheduling process. However, most existing methods ignore the dependency of coflows which is important to guarantee the job completion. In this paper, we take the dependency of coflows into consideration. To guarantee job completion for performance, we formulate a deadline and dependency-based model called MTF scheduler model. The purpose of MTF model is to minimize the overall completion time with the constraints of deadline and network capacity. Accordingly, we propose our method to schedule dependent coflows. Especially, we consider the dependent coflows as an entirety and propose a valuable coflow scheduling first MTF algorithm. We conduct extensive simulations to evaluate MTF method which outperforms the conventional short job first method as well as guarantees the job deadline.",
"title": ""
},
{
"docid": "e9684914bb38ad30ffc623668f6b6cfe",
"text": "The Glasgow Coma Scale (GCS) has been widely adopted. Failure to assess the verbal score in intubated patients and the inability to test brainstem reflexes are shortcomings. We devised a new coma score, the FOUR (Full Outline of UnResponsiveness) score. It consists of four components (eye, motor, brainstem, and respiration), and each component has a maximal score of 4. We prospectively studied the FOUR score in 120 intensive care unit patients and compared it with the GCS score using neuroscience nurses, neurology residents, and neurointensivists. We found that the interrater reliability was excellent with the FOUR score (kappa(w) = 0.82) and good to excellent for physician rater pairs. The agreement among raters was similar with the GCS (kappa(w) = 0.82). Patients with the lowest GCS score could be further distinguished using the FOUR score. We conclude that the agreement among raters was good to excellent. The FOUR score provides greater neurological detail than the GCS, recognizes a locked-in syndrome, and is superior to the GCS due to the availability of brainstem reflexes, breathing patterns, and the ability to recognize different stages of herniation. The probability of in-hospital mortality was higher for the lowest total FOUR score when compared with the lowest total GCS score.",
"title": ""
},
{
"docid": "192b4a503a903747caffe5ea03c31c16",
"text": "We analyze and reframe AI progress. In addition to the prevailing metrics of performance, we highlight the usually neglected costs paid in the development and deployment of a system, including: data, expert knowledge, human oversight, software resources, computing cycles, hardware and network facilities, development time, etc. These costs are paid throughout the life cycle of an AI system, fall differentially on different individuals, and vary in magnitude depending on the replicability and generality of the AI solution. The multidimensional performance and cost space can be collapsed to a single utility metric for a user with transitive and complete preferences. Even absent a single utility function, AI advances can be generically assessed by whether they expand the Pareto (optimal) surface. We explore a subset of these neglected dimensions using the two case studies of Alpha* and ALE. This broadened conception of progress in AI should lead to novel ways of measuring success in AI, and can help set milestones for future progress.",
"title": ""
},
{
"docid": "04b66d9285404e7fb14fcec3cd66316a",
"text": "Amazon Aurora is a relational database service for OLTP workloads offered as part of Amazon Web Services (AWS). In this paper, we describe the architecture of Aurora and the design considerations leading to that architecture. We believe the central constraint in high throughput data processing has moved from compute and storage to the network. Aurora brings a novel architecture to the relational database to address this constraint, most notably by pushing redo processing to a multi-tenant scale-out storage service, purpose-built for Aurora. We describe how doing so not only reduces network traffic, but also allows for fast crash recovery, failovers to replicas without loss of data, and fault-tolerant, self-healing storage. We then describe how Aurora achieves consensus on durable state across numerous storage nodes using an efficient asynchronous scheme, avoiding expensive and chatty recovery protocols. Finally, having operated Aurora as a production service for over 18 months, we share the lessons we have learnt from our customers on what modern cloud applications expect from databases.",
"title": ""
},
{
"docid": "8db962a51ab6c9dc6002cb9b9aba35ca",
"text": "For some time now machine learning methods have been widely used in perception for autonomous robots. While there have been many results describing the performance of machine learning techniques with regards to their accuracy or convergence rates, relatively little work has been done on developing theoretical performance guarantees about their stability and robustness. As a result, many machine learning techniques are still limited to being used in situations where safety and robustness are not critical for success. One way to overcome this difficulty is by using reachability analysis, which can be used to compute regions of the state space, known as reachable sets, from which the system can be guaranteed to remain safe over some time horizon regardless of the disturbances. In this paper we show how reachability analysis can be combined with machine learning in a scenario in which an aerial robot is attempting to learn the dynamics of a ground vehicle using a camera with a limited field of view. The resulting simulation data shows that by combining these two paradigms, one can create robotic systems that feature the best qualities of each, namely high performance and guaranteed safety.",
"title": ""
},
{
"docid": "8b6b970a179eb2b357dace2b6e55d5d6",
"text": "Unmanned aerial vehicles (UAVs) have been recently considered as means to provide enhanced coverage or relaying services to mobile users (MUs) in wireless systems with limited or no infrastructure. In this paper, a UAV-based mobile cloud computing system is studied in which a moving UAV is endowed with computing capabilities to offer computation offloading opportunities to MUs with limited local processing capabilities. The system aims at minimizing the total mobile energy consumption while satisfying quality of service requirements of the offloaded mobile application. Offloading is enabled by uplink and downlink communications between the mobile devices and the UAV, which take place by means of frequency division duplex via orthogonal or nonorthogonal multiple access schemes. The problem of jointly optimizing the bit allocation for uplink and downlink communications as well as for computing at the UAV, along with the cloudlet's trajectory under latency and UAV's energy budget constraints is formulated and addressed by leveraging successive convex approximation strategies. Numerical results demonstrate the significant energy savings that can be accrued by means of the proposed joint optimization of bit allocation and cloudlet's trajectory as compared to local mobile execution as well as to partial optimization approaches that design only the bit allocation or the cloudlet's trajectory.",
"title": ""
},
{
"docid": "610ec093f08d62548925918d6e64b923",
"text": "Word embeddings encode semantic meanings of words into low-dimension word vectors. In most word embeddings, one cannot interpret the meanings of specific dimensions of those word vectors. Nonnegative matrix factorization (NMF) has been proposed to learn interpretable word embeddings via non-negative constraints. However, NMF methods suffer from scale and memory issue because they have to maintain a global matrix for learning. To alleviate this challenge, we propose online learning of interpretable word embeddings from streaming text data. Experiments show that our model consistently outperforms the state-of-the-art word embedding methods in both representation ability and interpretability. The source code of this paper can be obtained from http: //github.com/skTim/OIWE.",
"title": ""
},
{
"docid": "fddf65bce6abf403cf4f7d7cfcdd835f",
"text": "Photorealistic image stylization concerns transferring style of a reference photo to a content photo with the constraint that the stylized photo should remain photorealistic. While several photorealistic image stylization methods exist, they tend to generate spatially inconsistent stylizations with noticeable artifacts. In this paper, we propose a method to address these issues. The proposed method consists of a stylization step and a smoothing step. While the stylization step transfers the style of the reference photo to the content photo, the smoothing step ensures spatially consistent stylizations. Each of the steps has a closedform solution and can be computed efficiently. We conduct extensive experimental validations. The results show that the proposed method generates photorealistic stylization outputs that are more preferred by human subjects as compared to those by the competing methods while running much faster. Source code and additional results are available at https://github.com/NVIDIA/FastPhotoStyle.",
"title": ""
},
{
"docid": "a4ec796aa94914eead676eac4a688753",
"text": "Providing transactional primitives of NAND flash based solid state disks (SSDs) have demonstrated a great potential for high performance transaction processing and relieving software complexity. Similar with software solutions like write-ahead logging (WAL) and shadow paging, transactional SSD has two parts of overhead which include: 1) write overhead under normal condition, and 2) recovery overhead after power failures. Prior transactional SSD designs utilize out-of-band (OOB) area in flash pages to store transaction information to reduce the first part of overhead. However, they are required to scan a large part of or even whole SSD after power failures to abort unfinished transactions. Another limitation of prior approaches is the unicity of transactional primitive they provided. In this paper, we propose a new transactional SSD design named Möbius. Möbius provides different types of transactional primitives to support static and dynamic transactions separately. Möbius flash translation layer (mFTL), which combines normal FTL with transaction processing by storing mapping and transaction information together in a physical flash page as atom inode. By amortizing the cost of transaction processing with FTL persistence, MFTL achieve high performance in normal condition and does not increase write amplification ratio. After power failures, Möbius can leverage atom inode to eliminate unnecessary scanning and recover quickly. We implemented a prototype of Möbius and compare it with other state-of-art transactional SSD designs. Experimental results show that Möbius can at most 67% outperform in transaction throughput (TPS) and 29 times outperform in recovery time while still have similar or even better write amphfication ratio comparing with prior hardware approaches.",
"title": ""
},
{
"docid": "641fa9e397e1ce6e320ec4cacfd3064f",
"text": "Machine translation is a natural candidate problem for reinforcement learning from human feedback: users provide quick, dirty ratings on candidate translations to guide a system to improve. Yet, current neural machine translation training focuses on expensive human-generated reference translations. We describe a reinforcement learning algorithm that improves neural machine translation systems from simulated human feedback. Our algorithm combines the advantage actor-critic algorithm (Mnih et al., 2016) with the attention-based neural encoderdecoder architecture (Luong et al., 2015). This algorithm (a) is well-designed for problems with a large action space and delayed rewards, (b) effectively optimizes traditional corpus-level machine translation metrics, and (c) is robust to skewed, high-variance, granular feedback modeled after actual human behaviors.",
"title": ""
},
{
"docid": "882248356efb7b81fde7e569e261a88d",
"text": "Band clamps with a flat bottomed V-section are used to connect a pair of circular flanges to provide a joint with significant axial strength. Despite the wide application of V-band clamps, their behaviour is not fully understood and the ultimate axial strength is currently only available from physical testing. This physical testing has indicated that the ultimate strength is determined by two different types of structural deformation, an elastic deformation mode and a plastic deformation mode. Initial finite element analysis work has demonstrated that analysis of this class of problem is not straightforward. This paper discusses the difficulties encountered when simulating this type of component interaction where contact is highly localised and contact pressures are high and therefore presents a finite element model to predict the ultimate axial load capacity of V-band clamps.",
"title": ""
},
{
"docid": "e1cde0c8c65d1079cf4ee24ef2402e2f",
"text": "Breast lesions are characterized into three classes which include primary benign, primary malignant and secondary malignant. In the present work Laws' mask texture features are computed from the ultrasound images of the breast lesions. These Laws' masks of various resolutions i.e., of length 3,5, 7 and 9 have been used to extract the statistical features (Mean, Standard Deviation, Kurtosis, Skewness and Energy) from Laws' texture images. In the present work using the SVM classifier, an overall classification accuracy of 88.3% and the individual classification accuracy values of 95.2%, 88.6% and 91.6% have been obtained for primary benign, primary malignant and secondary malignant classes respectively.",
"title": ""
},
{
"docid": "256c3bcdd830f38d2ed41d5b9ed6048f",
"text": "This paper presents a comprehensive review of various techniques employed to enhance the low voltage ride through (LVRT) capability of the fixed-speed induction generators (FSIGs)-based wind turbines (WTs), which has a non-negligible 20% contribution of the existing wind energy in the world. As the FSIG-based WT system is directly connected to the grid with no power electronic interfaces, terminal voltage or reactive power output may not be precisely controlled. Thus, various LVRT strategies based on installation of the additional supporting technologies have been proposed in the literature. Although the various individual technologies are well documented, a comparative study of existing approaches has not been reported so far. This paper attempts to fill this void by providing a comprehensive analysis of these LVRT methods for FSIG-based WTs in terms of dynamic performance, controller complexity, and economic feasibility. A novel feature of this paper is to categorize LVRT capability enhancement approaches into three main groups depending on the connection configuration: series, shunt, and series-shunt (hybrid) connections and then discuss their advantages and limitations in detail. For verification purposes, several simulations are presented in MATLAB software to demonstrate and compare the reviewed LVRT schemes. Based on the simulated results, series connection dynamic voltage restorer (DVR) and shunt connection static synchronous compensators (STATCOM) are the highly efficient LVRT capability enhancement approaches.",
"title": ""
},
{
"docid": "898b5800e6ff8a599f6a4ec27310f89a",
"text": "Jenni Anttonen: Using the EMFi chair to measure the user's emotion-related heart rate responses Master's thesis, 55 pages, 2 appendix pages May 2005 The research reported here is part of a multidisciplinary collaborative project that aimed at developing embedded measurement devices using electromechanical film (EMFi) as a basic measurement technology. The present aim was to test if an unobtrusive heart rate measurement device, the EMFi chair, had the potential to detect heart rate changes associated with emotional stimulation. Six-second long visual, auditory, and audiovisual stimuli with negative, neutral, and positive emotional content were presented to 24 participants. Heart rate responses were measured with the EMFi chair and with earlobe photoplethysmography (PPG). Also, subjective ratings of the stimuli were collected. Firstly, the high correlation between the measurement results of the EMFi chair and PPG, r = 0.99, p < 0.001, indicated that the EMFi chair measured heart rate reliably. Secondly, heart rate showed a decelerating response to visual, auditory, and audiovisual emotional stimulation. The emotional stimulation caused statistically significant changes in heart rate at the 6 th second from stimulus onset so that the responses to negative stimulation were significantly lower than the responses to positive stimulation. The results were in line with previous research. The results show that heart rate responses measured with the EMFi chair differed significantly for positive and negative emotional stimulation. These results suggest that the EMFi chair could be used in HCI to measure the user's emotional responses unobtrusively.",
"title": ""
},
{
"docid": "37426a6261243f5bbe6d59be3826a82f",
"text": "A key to successful face recognition is accurate and reliable face alignment using automatically-detected facial landmarks. Given this strong dependency between face recognition and facial landmark detection, robust face recognition requires knowledge of when the facial landmark detection algorithm succeeds and when it fails. Facial landmark confidence represents this measure of success. In this paper, we propose two methods to measure landmark detection confidence: local confidence based on local predictors of each facial landmark, and global confidence based on a 3D rendered face model. A score fusion approach is also introduced to integrate these two confidences effectively. We evaluate both confidence metrics on two datasets for face recognition: JANUS CS2 and IJB-A datasets. Our experiments show up to 9% improvements when face recognition algorithm integrates the local-global confidence metrics.",
"title": ""
},
{
"docid": "1302963869cdcb958a331838786c51de",
"text": "Introduction: Benefits from mental health early interventions may not be sustained over time, and longer-term intervention programs may be required to maintain early clinical gains. However, due to the high intensity of face-to-face early intervention treatments, this may not be feasible. Adjunctive internet-based interventions specifically designed for youth may provide a cost-effective and engaging alternative to prevent loss of intervention benefits. However, until now online interventions have relied on human moderators to deliver therapeutic content. More sophisticated models responsive to user data are critical to inform tailored online therapy. Thus, integration of user experience with a sophisticated and cutting-edge technology to deliver content is necessary to redefine online interventions in youth mental health. This paper discusses the development of the moderated online social therapy (MOST) web application, which provides an interactive social media-based platform for recovery in mental health. We provide an overview of the system's main features and discus our current work regarding the incorporation of advanced computational and artificial intelligence methods to enhance user engagement and improve the discovery and delivery of therapy content. Methods: Our case study is the ongoing Horyzons site (5-year randomized controlled trial for youth recovering from early psychosis), which is powered by MOST. We outline the motivation underlying the project and the web application's foundational features and interface. We discuss system innovations, including the incorporation of pertinent usage patterns as well as identifying certain limitations of the system. This leads to our current motivations and focus on using computational and artificial intelligence methods to enhance user engagement, and to further improve the system with novel mechanisms for the delivery of therapy content to users. In particular, we cover our usage of natural language analysis and chatbot technologies as strategies to tailor interventions and scale up the system. Conclusions: To date, the innovative MOST system has demonstrated viability in a series of clinical research trials. Given the data-driven opportunities afforded by the software system, observed usage patterns, and the aim to deploy it on a greater scale, an important next step in its evolution is the incorporation of advanced and automated content delivery mechanisms.",
"title": ""
},
{
"docid": "b1a384176d320576ec8bc398474f5e68",
"text": "Concept mapping (a mixed qualitative–quantitative methodology) was used to describe and understand the psychosocial experiences of adults with confirmed and self-identified dyslexia. Using innovative processes of art and photography, Phase 1 of the study included 15 adults who participated in focus groups and in-depth interviews and were asked to elucidate their experiences with dyslexia. On index cards, 75 statements and experiences with dyslexia were recorded. The second phase of the study included 39 participants who sorted these statements into self-defined categories and rated each statement to reflect their personal experiences to produce a visual representation, or concept map, of their experience. The final concept map generated nine distinct cluster themes: Organization Skills for Success; Finding Success; A Good Support System Makes the Difference; On Being Overwhelmed; Emotional Downside; Why Can’t They See It?; Pain, Hurt, and Embarrassment From Past to Present; Fear of Disclosure; and Moving Forward. Implications of these findings are discussed.",
"title": ""
},
{
"docid": "5da1f0692a71e4dde4e96009b99e0c13",
"text": "The McKibben artificial muscle is a pneumatic actuator whose properties include a very high force to weight ratio. This characteristic makes it very attractive for a wide range of applications such as mobile robots and prosthetic appliances for the disabled. Typical applications often require a significant number of repeated contractions and extensions or cycles of the actuator. This repeated action leads to fatigue and failure of the actuator, yielding a life span that is often shorter than its more common robotic counterparts such as electric motors or pneumatic cylinders. In this paper, we develop a model that predicts the maximum number of life cycles of the actuator based on available uniaxial tensile properties of the actuator’s inner bladder. Experimental results, which validate the model, reveal McKibben actuators fabricated with natural latex rubber bladders have a fatigue limit 24 times greater than actuators fabricated with synthetic silicone rubber at large contraction ratios.",
"title": ""
}
] |
scidocsrr
|
7376385e8b2bbcc41a0bc809cc806f5f
|
Isolation and Emotions in the Workplace: The Influence of Perceived Media Richness and Virtuality
|
[
{
"docid": "4506bc1be6e7b42abc34d79dc426688a",
"text": "The growing interest in Structured Equation Modeling (SEM) techniques and recognition of their importance in IS research suggests the need to compare and contrast different types of SEM techniques so that research designs can be selected appropriately. After assessing the extent to which these techniques are currently being used in IS research, the article presents a running example which analyzes the same dataset via three very different statistical techniques. It then compares two classes of SEM: covariance-based SEM and partial-least-squaresbased SEM. Finally, the article discusses linear regression models and offers guidelines as to when SEM techniques and when regression techniques should be used. The article concludes with heuristics and rule of thumb thresholds to guide practice, and a discussion of the extent to which practice is in accord with these guidelines.",
"title": ""
},
{
"docid": "e4d38d8ef673438e9ab231126acfda99",
"text": "The trend toward physically dispersed work groups has necessitated a fresh inquiry into the role and nature of team leadership in virtual settings. To accomplish this, we assembled thirteen culturally diverse global teams from locations in Europe, Mexico, and the United States, assigning each team a project leader and task to complete. The findings suggest that effective team leaders demonstrate the capability to deal with paradox and contradiction by performing multiple leadership roles simultaneously (behavioral complexity). Specifically, we discovered that highly effective virtual team leaders act in a mentoring role and exhibit a high degree of understanding (empathy) toward other team members. At the same time, effective leaders are also able to assert their authority without being perceived as overbearing or inflexible. Finally, effective leaders are found to be extremely effective at providing regular, detailed, and prompt communication with their peers and in articulating role relationships (responsibilities) among the virtual team members. This study provides useful insights for managers interested in developing global virtual teams, as well as for academics interested in pursuing virtual team research. 8 KAYWORTH AND LEIDNER",
"title": ""
},
{
"docid": "aff9d415a725b9e1ea65897af2715729",
"text": "Survey research is believed to be well understood and applied by MIS scholars. It has been applied for several years, it is well defined, and it has precise procedures which, when followed closely, yield valid and easily interpretable data. Our assessment of the use of survey research in the MIS field between 1980 and 1990 indicates that this perception is at odds with reality. Our analysis indicates that survey methodology is often misapplied and is plagued by five important weaknesses: (1) single method designs where multiple methods are needed, (2) unsystematic and often inadequate sampling procedures, (3) low response rates, (4) weak linkages between units of analysis and respondents, and (5) over reliance on cross-sectional surveys where longitudinal surveys are really needed. Our assessment also shows that the quality of survey research varies considerably among studies of different purposes: explanatory studies are of good quality overall, exploratory and descriptive studies are of moderate to poor quality. This article presents a general framework for classifying and examining survey research and uses this framework to assess, review and critique the usage of survey research conducted in the past decade in the MIS field. In an effort to improve the quality of survey research, this article makes specific recommendations that directly address the major problems highlighted in the review. AUTHORS' BIOGRAPHIES Alain Pinsonneault holds a Ph.d. in administration from University of California at Irvine (1990) and a M.Sc. in Management Information Systems from Ecole des Hautes Etudes Commerciales de Montreal (1986). His current research interests include the organizational implications of computing, especially with regard to the centralization/decentralization of decision making authority and middle managers workforce; the strategic and political uses of computing, the use of information technology to support group decision making process; and the benefits of computing. He has published articles in Decision Support Systems, European Journal of Operational Research, and in Management Information Systems Quarterly, and one book chapter. He has also given numerous conferences and he is an associate editor of Informatization and the Public Sector journal. His doctoral dissertation won the 1990 International Center for Information Technology Doctoral Award. Kenneth L. Kraemer is the Director of the Public Policy Research Organization and Professor of Management and Information and Computer Science. He holds a Ph.D. from University of Southern California. Professor Kraemer has conducted research into the management of computing in organizations for more than 20 years. He is currently studying the diffusion of computing in Asia-Pacific countries, the dynamics of computing development in organizations, the impacts of computing on productivity in the work environment, and policies for successful implementation of computer-based information systems. In addition, Professor Kraemer is coeditor of a series of books entitled Computing, Organization, Policy, and Society (CORPS) published by Columbia University Press. He has published numerous books on computing, the most recent of which being Managing Information Systems. He has served as a consultant to the Department of Housing and Urban Development, the Office of Technology Assessment and the United Nations, and as a national expert to the Organization for Economic Cooperation and Development. 
He was recently Shaw Professor in Information Systems and Computer Sciences at the National University of Singapore.",
"title": ""
}
] |
[
{
"docid": "f845508acabb985dd80c31774776e86b",
"text": "In this paper, we introduce two input devices for wearable computers, called GestureWrist and GesturePad. Both devices allow users to interact with wearable or nearby computers by using gesture-based commands. Both are designed to be as unobtrusive as possible, so they can be used under various social contexts. The first device, called GestureWrist, is a wristband-type input device that recognizes hand gestures and forearm movements. Unlike DataGloves or other hand gesture-input devices, all sensing elements are embedded in a normal wristband. The second device, called GesturePad, is a sensing module that can be attached on the inside of clothes, and users can interact with this module from the outside. It transforms conventional clothes into an interactive device without changing their appearance.",
"title": ""
},
{
"docid": "2f3734b49e9d2e6ea7898622dac8a296",
"text": "Dropout prediction in MOOCs is a well-researched problem where we classify which students are likely to persist or drop out of a course. Most research into creating models which can predict outcomes is based on student engagement data. Why these students might be dropping out has only been studied through retroactive exit surveys. This helps identify an important extension area to dropout prediction— how can we interpret dropout predictions at the student and model level? We demonstrate how existing MOOC dropout prediction pipelines can be made interpretable, all while having predictive performance close to existing techniques. We explore each stage of the pipeline as design components in the context of interpretability. Our end result is a layer which longitudinally interprets both predictions and entire classification models of MOOC dropout to provide researchers with in-depth insights of why a student is likely to dropout.",
"title": ""
},
{
"docid": "c2659be74498ec68c3eb5509ae11b3c3",
"text": "We focus on modeling human activities comprising multiple actions in a completely unsupervised setting. Our model learns the high-level action co-occurrence and temporal relations between the actions in the activity video. We consider the video as a sequence of short-term action clips, called action-words, and an activity is about a set of action-topics indicating which actions are present in the video. Then we propose a new probabilistic model relating the action-words and the action-topics. It allows us to model long-range action relations that commonly exist in the complex activity, which is challenging to capture in the previous works. We apply our model to unsupervised action segmentation and recognition, and also to a novel application that detects forgotten actions, which we call action patching. For evaluation, we also contribute a new challenging RGB-D activity video dataset recorded by the new Kinect v2, which contains several human daily activities as compositions of multiple actions interacted with different objects. The extensive experiments show the effectiveness of our model.",
"title": ""
},
{
"docid": "89281eed8f3faadcf0bc07bd151728a4",
"text": "The Internet of Things (IoT) continues to increase in popularity as more “smart” devices are released and sold every year. Three protocols in particular, Zigbee, Z-wave, and Bluetooth Low Energy (BLE) are used for network communication on a significant number of IoT devices. However, devices utilizing each of these three protocols have been compromised due to either implementation failures by the manufacturer or security shortcomings in the protocol itself. This paper identifies the security features and shortcomings of each protocol citing employed attacks for reference. Additionally, it will serve to help manufacturers make two decisions: First, should they invest in creating their own protocol, and second, if they decide against this, which protocol should they use and how should they implement it to ensure their product is as secure as it can be. These answers are made with respect to the specific factors manufacturers in the IoT space face such as the reversed CIA model with availability usually being the most important of the three and the ease of use versus security tradeoff that manufacturers have to consider. This paper finishes with a section aimed at future research for IoT communication protocols.",
"title": ""
},
{
"docid": "4159f4f92adea44577319e897f10d765",
"text": "While our knowledge about ancient civilizations comes mostly from studies in archaeology and history books, much can also be learned or confirmed from literary texts . Using natural language processing techniques, we present aspects of ancient China as revealed by statistical textual analysis on the Complete Tang Poems , a 2.6-million-character corpus of all surviving poems from the Tang Dynasty (AD 618 —907). Using an automatically created treebank of this corpus , we outline the semantic profiles of various poets, and discuss the role of s easons, geography, history, architecture, and colours , as observed through word selection and dependencies.",
"title": ""
},
{
"docid": "a08aa88aa3b4249baddbd8843e5c9be3",
"text": "We present the design, implementation, evaluation, and user ex periences of theCenceMe application, which represents the first system that combines the inference of the presence of individuals using off-the-shelf, sensor-enabled mobile phones with sharing of this information through social networking applications such as Facebook and MySpace. We discuss the system challenges for the development of software on the Nokia N95 mobile phone. We present the design and tradeoffs of split-level classification, whereby personal sensing presence (e.g., walking, in conversation, at the gym) is derived from classifiers which execute in part on the phones and in part on the backend servers to achieve scalable inference. We report performance measurements that characterize the computational requirements of the software and the energy consumption of the CenceMe phone client. We validate the system through a user study where twenty two people, including undergraduates, graduates and faculty, used CenceMe continuously over a three week period in a campus town. From this user study we learn how the system performs in a production environment and what uses people find for a personal sensing system.",
"title": ""
},
{
"docid": "e742aa091dae6227994cffcdb5165769",
"text": "In this paper, a new adaptive multi-batch experience replay scheme is proposed for proximal policy optimization (PPO) for continuous action control. On the contrary to original PPO, the proposed scheme uses the batch samples of past policies as well as the current policy for the update for the next policy, where the number of the used past batches is adaptively determined based on the oldness of the past batches measured by the average importance sampling (IS) weight. The new algorithm constructed by combining PPO with the proposed multi-batch experience replay scheme maintains the advantages of original PPO such as random minibatch sampling and small bias due to low IS weights by storing the pre-computed advantages and values and adaptively determining the mini-batch size. Numerical results show that the proposed method significantly increases the speed and stability of convergence on various continuous control tasks compared to original PPO.",
"title": ""
},
{
"docid": "63db10a21fcfc659e350d5bf6df47166",
"text": "This research proposes the design, simulation, and implementation of a three-phase induction motor driver, using voltage-fed Space Vector Pulse Width Modulation technique (SVPWM), which is an advance and modern technique. The SVPWM provides maximum usage of the DC link. A MATLAB/SIMULINK program is prepared for simulating the overall drive system which include; voltage-fed space vector PWM inverter model and three-phase induction motor model. A practical model is designed by imitate the conceptions of TMS320 (DSP) microcontroller. This practical model is completely implemented and exact results are obtained. The worst state of the harmonics content of the voltage and current (no-load condition) are analyzed. This analysis shows high reduction in the dominant harmonics and very low total harmonic distortion (THD) when SVPWM is used (less than 5%), compared to (more than 20%) in square wave. Experimental and simulation results have verified the superior performance and the effectiveness in reduction the harmonic losses and switching losses.",
"title": ""
},
{
"docid": "3abcfd48703b399404126996ca837f90",
"text": "Various inductive loads used in all industries deals with the problem of power factor improvement. Capacitor bank connected in shunt helps in maintaining the power factor closer to unity. They improve the electrical supply quality and increase the efficiency of the system. Also the line losses are also reduced. Shunt capacitor banks are less costly and can be installed anywhere. This paper deals with shunt capacitor bank designing for power factor improvement considering overvoltages for substation installation. Keywords— Capacitor Bank, Overvoltage Consideration, Power Factor, Reactive Power",
"title": ""
},
{
"docid": "63b04046e1136290a97f885783dda3bd",
"text": "This paper considers the design of secondary wireless mesh networks which use leased frequency channels. In a given geographic region, the available channels are individually priced and leased exclusively through a primary spectrum owner. The usage of each channel is also subject to published interference constraints so that the primary user is not adversely affected. When the network is designed and deployed, the secondary user would like to minimize the costs of using the required resources while satisfying its own traffic and interference requirements. This problem is formulated as a mixed integer optimization which gives the optimum deployment cost as a function of the secondary node positioning, routing, and frequency allocations. Because of the problem's complexity, the optimum result can only be found for small problem sizes. To accommodate more practical deployments, two algorithms are proposed and their performance is compared to solutions obtained from the optimization. The first algorithm is a greedy flow-based scheme (GFB) which iterates over the individual node flows based on solving a much simpler optimization at each step. The second algorithm (ILS) uses an iterated local search whose initial solution is based on constrained shortest path routing. Our results show that the proposed algorithms perform well for a variety of network scenarios.",
"title": ""
},
{
"docid": "e2a97f90f42dcaf5b8b703c5eb47a757",
"text": "Metamaterials (MMs) have been proposed to improve the performance of wireless power transfer (WPT) systems. The performance of identical unit cells having the same transmitter and receiver self-resonance is presented in the literature. This paper presents the optimization of tunable MM for performance improvement in WPT systems. Furthermore, a figure of merit (FOM) is proposed for the optimization of WPT systems with MMs. It is found that both transferred power and power transfer efficiency can be improved significantly by using the proposed FOM and tunable MM, particularly under misaligned conditions.",
"title": ""
},
{
"docid": "447bfee37117b77534abe2cf6cfd8a17",
"text": "Detailed characterization of the cell types in the human brain requires scalable experimental approaches to examine multiple aspects of the molecular state of individual cells, as well as computational integration of the data to produce unified cell-state annotations. Here we report improved high-throughput methods for single-nucleus droplet-based sequencing (snDrop-seq) and single-cell transposome hypersensitive site sequencing (scTHS-seq). We used each method to acquire nuclear transcriptomic and DNA accessibility maps for >60,000 single cells from human adult visual cortex, frontal cortex, and cerebellum. Integration of these data revealed regulatory elements and transcription factors that underlie cell-type distinctions, providing a basis for the study of complex processes in the brain, such as genetic programs that coordinate adult remyelination. We also mapped disease-associated risk variants to specific cellular populations, which provided insights into normal and pathogenic cellular processes in the human brain. This integrative multi-omics approach permits more detailed single-cell interrogation of complex organs and tissues.",
"title": ""
},
{
"docid": "ccc6651b9bf4fcaa905d8e1bc7f9b6b4",
"text": "We introduce computational network (CN), a unified framework for describing arbitrary learning machines, such as deep neural networks (DNNs), convolutional neural networks (CNNs), recurrent neural networks (RNNs), long short term memory (LSTM), logistic regression, and maximum entropy model, that can be illustrated as a series of computational steps. A CN is a directed graph in which each leaf node represents an input value or a parameter and each non-leaf node represents a matrix operation upon its children. We describe algorithms to carry out forward computation and gradient calculation in CN and introduce most popular computation node types used in a typical CN. We further introduce the computational network toolkit (CNTK), an implementation of CN that supports both GPU and CPU. We describe the architecture and the key components of the CNTK, the command line options to use CNTK, and the network definition and model editing language, and provide sample setups for acoustic model, language model, and spoken language understanding. We also describe the Argon speech recognition decoder as an example to integrate with CNTK.",
"title": ""
},
{
"docid": "d35bc5ef2ea3ce24bbba87f65ae93a25",
"text": "Fog computing, complementary to cloud computing, has recently emerged as a new paradigm that extends the computing infrastructure from the center to the edge of the network. This article explores the design of a fog computing orchestration framework to support IoT applications. In particular, we focus on how the widely adopted cloud computing orchestration framework can be customized to fog computing systems. We first identify the major challenges in this procedure that arise due to the distinct features of fog computing. Then we discuss the necessary adaptations of the orchestration framework to accommodate these challenges.",
"title": ""
},
{
"docid": "5d1fbf1b9f0529652af8d28383ce9a34",
"text": "Automatic License Plate Recognition (ALPR) is one of the most prominent tools in intelligent transportation system applications. In ALPR algorithm implementation, License Plate Detection (LPD) is a critical stage. Despite many state-of-the-art researches, some parameters such as low/high illumination, type of camera, or a different style of License Plate (LP) causes LPD step is still a challenging problem. In this paper, we propose a new style-free method based on the cross power spectrum. Our method has three steps; designing adaptive binarized filter, filtering using cross power spectrum and verification. Experimental results show that the recognition accuracy of the proposed approach is 98% among 2241 Iranian cars images including two styles of the LP. In addition, the process of the plate detection takes 44 milliseconds, which is suitable for real-time processing.",
"title": ""
},
{
"docid": "a9f70ea201e17bca3b97f6ef7b2c1c15",
"text": "Network embedding task aims at learning low-dimension latent representations of vertices while preserving the structure of a network simultaneously. Most existing network embedding methods mainly focus on static networks, which extract and condense the network information without temporal information. However, in the real world, networks keep evolving, where the linkage states between the same vertex pairs at consequential timestamps have very close correlations. In this paper, we propose to study the network embedding problem and focus on modeling the linkage evolution in the dynamic network setting. To address this problem, we propose a deep dynamic network embedding method. More specifically, the method utilizes the historical information obtained from the network snapshots at past timestamps to learn latent representations of the future network. In the proposed embedding method, the objective function is carefully designed to incorporate both the network internal and network dynamic transition structures. Extensive empirical experiments prove the effectiveness of the proposed model on various categories of real-world networks, including a human contact network, a bibliographic network, and e-mail networks. Furthermore, the experimental results also demonstrate the significant advantages of the method compared with both the state-of-the-art embedding techniques and several existing baseline methods.",
"title": ""
},
{
"docid": "1a259f28221e8045568e5053ddc4ede1",
"text": "The decision tree-based classification is a popular approach for pattern recognition and data mining. Most decision tree induction methods assume training data being present at one central location. Given the growth in distributed databases at geographically dispersed locations, the methods for decision tree induction in distributed settings are gaining importance. This paper describes one distributed learning algorithm which extends the original(centralized) CHAID algorithm to its distributed version. This distributed algorithm generates exactly the same results as its centralized counterpart. For completeness, a distributed quantization method is proposed so that continuous data can be processed by our algorithm. Experimental results for several well known data sets are presented and compared with decision trees generated using CHAID with centrally stored data.",
"title": ""
},
{
"docid": "e0450f09c579ddda37662cbdfac4265c",
"text": "Deep neural networks (DNNs) have recently achieved a great success in various learning task, and have also been used for classification of environmental sounds. While DNNs are showing their potential in the classification task, they cannot fully utilize the temporal information. In this paper, we propose a neural network architecture for the purpose of using sequential information. The proposed structure is composed of two separated lower networks and one upper network. We refer to these as LSTM layers, CNN layers and connected layers, respectively. The LSTM layers extract the sequential information from consecutive audio features. The CNN layers learn the spectro-temporal locality from spectrogram images. Finally, the connected layers summarize the outputs of two networks to take advantage of the complementary features of the LSTM and CNN by combining them. To compare the proposed method with other neural networks, we conducted a number of experiments on the TUT acoustic scenes 2016 dataset which consists of recordings from various acoustic scenes. By using the proposed combination structure, we achieved higher performance compared to the conventional DNN, CNN and LSTM architecture.",
"title": ""
},
{
"docid": "e668f84e16a5d17dff7d638a5543af82",
"text": "Mining topics in Twitter is increasingly attracting more attention. However, the shortness and informality of tweets leads to extreme sparse vector representation with a large vocabulary, which makes the conventional topic models (e.g., Latent Dirichlet Allocation) often fail to achieve high quality underlying topics. Luckily, tweets always show up with rich user-generated hash tags as keywords. In this paper, we propose a novel topic model to handle such semi-structured tweets, denoted as Hash tag Graph based Topic Model (HGTM). By utilizing relation information between hash tags in our hash tag graph, HGTM establishes word semantic relations, even if they haven't co-occurred within a specific tweet. In addition, we enhance the dependencies of both multiple words and hash tags via latent variables (topics) modeled by HGTM. We illustrate that the user-contributed hash tags could serve as weakly-supervised information for topic modeling, and hash tag relation could reveal the semantic relation between tweets. Experiments on a real-world twitter data set show that our model provides an effective solution to discover more distinct and coherent topics than the state-of-the-art baselines and has a strong ability to control sparseness and noise in tweets.",
"title": ""
},
{
"docid": "82c6906aec894bde04e773ebf4961864",
"text": "OBJECTIVE\nTo identify the biomechanical feasibility of the thoracic extrapedicular approach to the placement of screws.\n\n\nMETHODS\nFive fresh adult cadaveric thoracic spine from T1 to T8 were harvested. The screw was inserted either by pedicular approach or extrapedicular approach. The result was observed and the pullout strength by pedicular screw approach and extrapedicular screw approach via sagittal axis of the vertebrale was measured and compared statistically.\n\n\nRESULTS\nIn thoracic pedicular approach, the pullout strength of pedicle screw was 1001.23 N+/-220 N (288.2-1561.7 N)ls and that of thoracic extrapedicular screw approach was 827.01 N+/-260 N when screw was inserted into the vertebrae through transverse process, and 954.25 N+/-254 N when screw was inserted into the vertebrae through the lateral cortex of the pedicle. Compared with pedicular group, the pullout strength in extrapedicular group was decreased by 4.7% inserted through transverse process (P larger than 0.05) and by 17.3% inserted through the lateral cortex (P less than 0.05). The mean pullout strength by extrapedicular approach was decreased by 11.04% as compared with pedicular approach (P less than 0.05).\n\n\nCONCLUSIONS\nIt is feasible biomechanically to use extrapedicular screw technique to insert pedicular screws in the thoracic spine when it is hard to insert by pedicular approach.",
"title": ""
}
] |
scidocsrr
|
c4da55bef4f7a0dc19fb0e24221286e2
|
A Planar Feeding Technology Using Phase-and-Amplitude-Corrected SIW Horn and Its Application
|
[
{
"docid": "7e17c1842a70e416f0a90bdcade31a8e",
"text": "A novel feeding system using substrate integrated waveguide (SIW) technique for antipodal linearly tapered slot array antenna (ALTSA) is presented in this paper. After making studies by simulations for a SIW fed ALTSA cell, a 1/spl times/8 ALTSA array fed by SIW feeding system at X-band is fabricated and measured, and the measured results show that this array antenna has a wide bandwidth and good performances.",
"title": ""
}
] |
[
{
"docid": "cb5d3d06c46266c9038aea9d18d4ae69",
"text": "Signal distortion of photoplethysmographs (PPGs) due to motion artifacts has been a limitation for developing real-time, wearable health monitoring devices. The artifacts in PPG signals are analyzed by comparing the frequency of the PPG with a reference pulse and daily life motions, including typing, writing, tapping, gesturing, walking, and running. Periodical motions in the range of pulse frequency, such as walking and running, cause motion artifacts. To reduce these artifacts in real-time devices, a least mean square based active noise cancellation method is applied to the accelerometer data. Experiments show that the proposed method recovers pulse from PPGs efficiently.",
"title": ""
},
{
"docid": "2d13bda0defb815bdc51e02262b78222",
"text": "A method has been devised for the electrophoretic transfer of proteins from polyacrylamide gels to nitrocellulose sheets. The method results in quantitative transfer of ribosomal proteins from gels containing urea. For sodium dodecyl sulfate gels, the original band pattern was obtained with no loss of resolution, but the transfer was not quantitative. The method allows detection of proteins by autoradiography and is simpler than conventional procedures. The immobilized proteins were detectable by immunological procedures. All additional binding capacity on the nitrocellulose was blocked with excess protein; then a specific antibody was bound and, finally, a second antibody directed against the first antibody. The second antibody was either radioactively labeled or conjugated to fluorescein or to peroxidase. The specific protein was then detected by either autoradiography, under UV light, or by the peroxidase reaction product, respectively. In the latter case, as little as 100 pg of protein was clearly detectable. It is anticipated that the procedure will be applicable to analysis of a wide variety of proteins with specific reactions or ligands.",
"title": ""
},
{
"docid": "c742c138780c10220487961d00724f56",
"text": "D. Seagrave and T. Grisso (2002) provide a review of the emerging research on the construct of juvenile psychopathy and make the important point that use of this construct in forensic decision-making could have serious consequences for juvenile offenders. Furthermore, the existing literature on the construct of psychopathy in youth is not sufficient to justify its use for most forensic purposes. These basic points are very important cautions on the use of measures of psychopathy in forensic settings. However, in this response, several issues related to the reasons given for why concern over the potential misuse of measures of psychopathy should be greater than that for measures of other psychopathological constructs used to make decisions with potentially serious consequences are discussed. Also, the rationale for some of the standards proposed to guide research on measures of juvenile psychopathy that focus on assumptions about the construct of psychopathy that are not clearly articulated and that are only peripherally related to validating their use in forensic assessments is questioned.",
"title": ""
},
{
"docid": "a24f958c480812feb338b651849037b2",
"text": "This paper investigates the detection and classification of fighting and pre and post fighting events when viewed from a video camera. Specifically we investigate normal, pre, post and actual fighting sequences and classify them. A hierarchical AdaBoost classifier is described and results using this approach are presented. We show it is possible to classify pre-fighting situations using such an approach and demonstrate how it can be used in the general case of continuous sequences.",
"title": ""
},
{
"docid": "44a84af55421c88347034d6dc14e4e30",
"text": "Anomaly detection plays an important role in protecting computer systems from unforeseen attack by automatically recognizing and filter atypical inputs. However, it can be difficult to balance the sensitivity of a detector – an aggressive system can filter too many benign inputs while a conservative system can fail to catch anomalies. Accordingly, it is important to rigorously test anomaly detectors to evaluate potential error rates before deployment. However, principled systems for doing so have not been studied – testing is typically ad hoc, making it difficult to reproduce results or formally compare detectors. To address this issue we present a technique and implemented system, Fortuna, for obtaining probabilistic bounds on false positive rates for anomaly detectors that process Internet data. Using a probability distribution based on PageRank and an efficient algorithm to draw samples from the distribution, Fortuna computes an estimated false positive rate and a probabilistic bound on the estimate’s accuracy. By drawing test samples from a well defined distribution that correlates well with data seen in practice, Fortuna improves on ad hoc methods for estimating false positive rate, giving bounds that are reproducible, comparable across different anomaly detectors, and theoretically sound. Experimental evaluations of three anomaly detectors (SIFT, SOAP, and JSAND) show that Fortuna is efficient enough to use in practice — it can sample enough inputs to obtain tight false positive rate bounds in less than 10 hours for all three detectors. These results indicate that Fortuna can, in practice, help place anomaly detection on a stronger theoretical foundation and help practitioners better understand the behavior and consequences of the anomaly detectors that they deploy. As part of our work, we obtain a theoretical result that may be of independent interest: We give a simple analysis of the convergence rate of the random surfer process defining PageRank that guarantees the same rate as the standard, second-eigenvalue analysis, but does not rely on any assumptions about the link structure of the web.",
"title": ""
},
{
"docid": "ce3e480e50ffc7a79c3dbc71b07ec9f7",
"text": "A relatively recent advance in cognitive neuroscience has been multi-voxel pattern analysis (MVPA), which enables researchers to decode brain states and/or the type of information represented in the brain during a cognitive operation. MVPA methods utilize machine learning algorithms to distinguish among types of information or cognitive states represented in the brain, based on distributed patterns of neural activity. In the current investigation, we propose a new approach for representation of neural data for pattern analysis, namely a Mesh Learning Model. In this approach, at each time instant, a star mesh is formed around each voxel, such that the voxel corresponding to the center node is surrounded by its p-nearest neighbors. The arc weights of each mesh are estimated from the voxel intensity values by least squares method. The estimated arc weights of all the meshes, called Mesh Arc Descriptors (MADs), are then used to train a classifier, such as Neural Networks, k-Nearest Neighbor, Naïve Bayes and Support Vector Machines. The proposed Mesh Model was tested on neuroimaging data acquired via functional magnetic resonance imaging (fMRI) during a recognition memory experiment using categorized word lists, employing a previously established experimental paradigm (Öztekin & Badre, 2011). Results suggest that the proposed Mesh Learning approach can provide an effective algorithm for pattern analysis of brain activity during cognitive processing.",
"title": ""
},
{
"docid": "9787ae39c27f9cfad2dbd29779bb5f36",
"text": "Compressive sensing (CS) techniques offer a framework for the detection and allocation of sparse signals with a reduced number of samples. Today, modern radar systems operate with high bandwidths—demanding high sample rates according to the Shannon–Nyquist theorem—and a huge number of single elements for phased array consumption and costs of radar systems. There is only a small number of publications addressing the application of CS to radar, leaving several open questions. This paper addresses some aspects as a further step to CS-radar by presenting generic system architectures and implementation considerations. It is not the aim of this paper to investigate numerically efficient algorithms but to point to promising applications as well as arising problems. Three possible applications are considered: pulse compression, radar imaging, and air space surveillance with array antennas. Some simulation results are presented and enriched by the evaluation of real data acquired by an experimental radar system of Fraunhofer FHR. & 2009 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "60716b31303314598ac2f68d45c6cb51",
"text": "Female genital cosmetic surgery procedures have gained popularity in the West in recent years. Marketing by surgeons promotes the surgeries, but professional organizations have started to question the promotion and practice of these procedures. Despite some surgeon claims of drastic transformations of psychological, emotional, and sexual life associated with the surgery, little reliable evidence of such effects exists. This article achieves two objectives. First, reviewing the published academic work on the topic, it identifies the current state of knowledge around female genital cosmetic procedures, as well as limitations in our knowledge. Second, examining a body of critical scholarship that raises sociological and psychological concerns not typically addressed in medical literature, it summarizes broader issues and debates. Overall, the article demonstrates a paucity of scientific knowledge and highlights a pressing need to consider the broader ramifications of surgical practices. \"Today we have a whole society held in thrall to the drastic plastic of labial rejuvenation.\"( 1 ) \"At the present time, the field of female cosmetic genital surgery is like the old Wild, Wild West: wide open and unregulated\"( 2 ).",
"title": ""
},
{
"docid": "969c21b522f0247504d93f23084711c5",
"text": "A new approach for high-speed micro-crack detection of solar wafers with variable thickness is proposed. Using a pair of laser displacement sensors, wafer thickness is measured and the lighting intensity is automatically adjusted to compensate for loss in NIR transmission due to varying thickness. In this way, the image contrast is maintained relatively uniform for the entire size of a wafer. An improved version of Niblack segmentation algorithm is developed for this application. Experimental results show the effectiveness of the system when tested with solar wafers with thickness ranging from 125 to 170 μm. Since the inspection is performed on the fly, therefore, a high throughput rate of more than 3600 wafers per hour can easily be obtained. Hence, the proposed system enables rapid in-line monitoring and real-time measurement.",
"title": ""
},
{
"docid": "a40d3b98ab50a5cd924be09ab1f1cc40",
"text": "Feeling comfortable reading and understanding financial statements is critical to the success of healthcare executives and physicians involved in management. Businesses use three primary financial statements: a balance sheet represents the equation, Assets = Liabilities + Equity; an income statement represents the equation, Revenues - Expenses = Net Income; a statement of cash flows reports all sources and uses of cash during the represented period. The balance sheet expresses financial indicators at one particular moment in time, whereas the income statement and the statement of cash flows show activity that occurred over a stretch of time. Additional information is disclosed in attached footnotes and other supplementary materials. There are two ways to prepare financial statements. Cash-basis accounting recognizes revenue when it is received and expenses when they are paid. Accrual-basis accounting recognizes revenue when it is earned and expenses when they are incurred. Although cash-basis is acceptable, periodically using the accrual method reveals important information about receivables and liabilities that could otherwise remain hidden. Become more engaged with your financial statements by spending time reading them, tracking key performance indicators, and asking accountants and financial advisors questions. This will help you better understand your business and build a successful future.",
"title": ""
},
{
"docid": "cdf78bab8d93eda7ccbb41674d24b1a2",
"text": "OBJECTIVE\nThe U.S. Food and Drug Administration and Institute of Medicine are currently investigating front-of-package (FOP) food labelling systems to provide science-based guidance to the food industry. The present paper reviews the literature on FOP labelling and supermarket shelf-labelling systems published or under review by February 2011 to inform current investigations and identify areas of future research.\n\n\nDESIGN\nA structured search was undertaken of research studies on consumer use, understanding of, preference for, perception of and behaviours relating to FOP/shelf labelling published between January 2004 and February 2011.\n\n\nRESULTS\nTwenty-eight studies from a structured search met inclusion criteria. Reviewed studies examined consumer preferences, understanding and use of different labelling systems as well as label impact on purchasing patterns and industry product reformulation.\n\n\nCONCLUSIONS\nThe findings indicate that the Multiple Traffic Light system has most consistently helped consumers identify healthier products; however, additional research on different labelling systems' abilities to influence consumer behaviour is needed.",
"title": ""
},
{
"docid": "7588252227f9faef2212962e606cc992",
"text": "OBJECTIVE\nThis study examines the reasons for not using any method of contraception as well as reasons for not using modern methods of contraception, and factors associated with the future intention to use different types of contraceptives in India and its selected states, namely Uttar Pradesh, Assam and West Bengal.\n\n\nMETHODS\nData from the third wave of District Level Household and Facility Survey, 2007-08 were used. Bivariate as well as logistic regression analyses were performed to fulfill the study objective.\n\n\nRESULTS\nPostpartum amenorrhea and breastfeeding practices were reported as the foremost causes for not using any method of contraception. Opposition to use, health concerns and fear of side effects were reported to be major hurdles in the way of using modern methods of contraception. Results from logistic regression suggest considerable variation in explaining the factors associated with future intention to use contraceptives.\n\n\nCONCLUSION\nPromotion of health education addressing the advantages of contraceptive methods and eliminating apprehension about the use of these methods through effective communication by community level workers is the need of the hour.",
"title": ""
},
{
"docid": "cb3d1448269b29807dc62aa96ff6ad1a",
"text": "OBJECTIVES\nInformation overload in electronic medical records can impede providers' ability to identify important clinical data and may contribute to medical error. An understanding of the information requirements of ICU providers will facilitate the development of information systems that prioritize the presentation of high-value data and reduce information overload. Our objective was to determine the clinical information needs of ICU physicians, compared to the data available within an electronic medical record.\n\n\nDESIGN\nProspective observational study and retrospective chart review.\n\n\nSETTING\nThree ICUs (surgical, medical, and mixed) at an academic referral center.\n\n\nSUBJECTS\nNewly admitted ICU patients and physicians (residents, fellows, and attending staff).\n\n\nMEASUREMENTS AND MAIN RESULTS\nThe clinical information used by physicians during the initial diagnosis and treatment of admitted patients was captured using a questionnaire. Clinical information concepts were ranked according to the frequency of reported use (primary outcome) and were compared to information availability in the electronic medical record (secondary outcome). Nine hundred twenty-five of 1,277 study questionnaires (408 patients) were completed. Fifty-one clinical information concepts were identified as being useful during ICU admission. A median (interquartile range) of 11 concepts (6-16) was used by physicians per patient admission encounter with four used greater than 50% of the time. Over 25% of the clinical data available in the electronic medical record was never used, and only 33% was used greater than 50% of the time by admitting physicians.\n\n\nCONCLUSIONS\nPhysicians use a limited number of clinical information concepts at the time of patient admission to the ICU. The electronic medical record contains an abundance of unused data. Better electronic data management strategies are needed, including the priority display of frequently used clinical concepts within the electronic medical record, to improve the efficiency of ICU care.",
"title": ""
},
{
"docid": "414766f92683470af0b01edcd2ed6e62",
"text": "The world in which we work is changing. Information and communication technologies transform the work environment, providing the flexibility of when and where to work. The New Way of Working (NWOW) is a relatively new phenomenon that provides the context for these developments. It consists of three distinct pillars that are referred to as Bricks, Bytes and Behaviour. These pillars formed the basis for the development of the NWOW Analysis Monitor that enables organisations to determine their current level of NWOW adoption and provides guidance for future initiatives in adopting NWOW practices. The level of adoption is determined from both the manager’s and employees’ perspective as they might have a different perception and/or expectations regarding NWOW. The development of the multi-level NWOW Analysis Monitor is based on the Design Science Research approach. The monitor has been evaluated in two cases, forming two iterations in the design science research cycle. It has proved to be a useful assessment tool for organisations in the process of implementing NWOW. In future research the NWOW Analysis Monitor will be used in quantitative research on the effects of the implementation of NWOW on the organisation and its performance.",
"title": ""
},
{
"docid": "692f80bda530610858312da98bc49815",
"text": "Loss of heterozygosity (LOH) at locus 10q23.3 and mutation of the PTEN tumor suppressor gene occur frequently in both endometrial carcinoma and ovarian endometrioid carcinoma. To investigate the potential role of the PTEN gene in the carcinogenesis of ovarian endometrioid carcinoma and its related subtype, clear cell carcinoma, we examined 20 ovarian endometrioid carcinomas, 24 clear cell carcinomas, and 34 solitary endometrial cysts of the ovary for LOH at 10q23.3 and point mutations within the entire coding region of the PTEN gene. LOH was found in 8 of 19 ovarian endometrioid carcinomas (42.1%), 6 of 22 clear cell carcinomas (27.3%), and 13 of 23 solitary endometrial cysts (56.5%). In 5 endometrioid carcinomas synchronous with endometriosis, 3 cases displayed LOH events common to both the carcinoma and the endometriosis, 1 displayed an LOH event in only the carcinoma, and 1 displayed no LOH events in either lesion. In 7 clear cell carcinomas synchronous with endometriosis, 3 displayed LOH events common to both the carcinoma and the endometriosis, 1 displayed an LOH event in only the carcinoma, and 3 displayed no LOH events in either lesion. In no cases were there LOH events in the endometriosis only. Somatic mutations in the PTEN gene were identified in 4 of 20 ovarian endometrioid carcinomas (20.0%), 2 of 24 clear cell carcinomas (8.3%), and 7 of 34 solitary endometrial cysts (20.6%). These results indicate that inactivation of the PTEN tumor suppressor gene is an early event in the development of ovarian endometrioid carcinoma and clear cell carcinoma of the ovary.",
"title": ""
},
{
"docid": "0626c39604a1dde16a5d27de1c4cef24",
"text": "Two dimensional (2D) materials with a monolayer of atoms represent an ultimate control of material dimension in the vertical direction. Molybdenum sulfide (MoS2) monolayers, with a direct bandgap of 1.8 eV, offer an unprecedented prospect of miniaturizing semiconductor science and technology down to a truly atomic scale. Recent studies have indeed demonstrated the promise of 2D MoS2 in fields including field effect transistors, low power switches, optoelectronics, and spintronics. However, device development with 2D MoS2 has been delayed by the lack of capabilities to produce large-area, uniform, and high-quality MoS2 monolayers. Here we present a self-limiting approach that can grow high quality monolayer and few-layer MoS2 films over an area of centimeters with unprecedented uniformity and controllability. This approach is compatible with the standard fabrication process in semiconductor industry. It paves the way for the development of practical devices with 2D MoS2 and opens up new avenues for fundamental research.",
"title": ""
},
{
"docid": "0320ebc09663ecd6bf5c39db472fcbde",
"text": "The human visual system is capable of learning an unbounded number of facts from images including not only objects but also their attributes, actions and interactions. Such uniform understanding of visual facts has not received enough attention. Existing visual recognition systems are typically modeled differently for each fact type such as objects, actions, and interactions. We propose a setting where all these facts can be modeled simultaneously with a capacity to understand an unbounded number of facts in a structured way. The training data comes as structured facts in images, including (1) objects (e.g., <boy>), (2) attributes (e.g., <boy, tall>), (3) actions (e.g., <boy, playing>), and (4) interactions (e.g., <boy, riding, a horse >). Each fact has a language view (e.g., < boy, playing>) and a visual view (an image). We show that learning visual facts in a structured way enables not only a uniform but also generalizable visual understanding. We propose and investigate recent and strong approaches from the multiview learning literature and also introduce a structured embedding model. We applied the investigated methods on several datasets that we augmented with structured facts and a large scale dataset of > 202,000 facts and 814,000 images. Our results show the advantage of relating facts by the structure by the proposed model compared to the baselines.",
"title": ""
},
{
"docid": "c0787a32d5c641e0368a1802fd24aa8e",
"text": "FPGA-based CNN accelerators are gaining popularity due to high energy efficiency and great flexibility in recent years. However, as the networks grow in depth and width, the great volume of intermediate data is too large to store on chip, data transfers between on-chip memory and off-chip memory should be frequently executed, which leads to unexpected offchip memory access latency and energy consumption. In this paper, we propose a block convolution approach, which is a memory-efficient, simple yet effective block-based convolution to completely avoid intermediate data from streaming out to off-chip memory during network inference. Experiments on the very large VGG-16 network show that the improved top-1/top-5 accuracy of 72.60%/91.10% can be achieved on the ImageNet classification task with the proposed approach. As a case study, we implement the VGG-16 network with block convolution on Xilinx Zynq ZC706 board, achieving a frame rate of 12.19fps under 150MHz working frequency, with all intermediate data staying on chip.",
"title": ""
},
{
"docid": "adf2ed7bde8b051dea88d4907ec9f10c",
"text": "The strong emotional reaction elicited by privacy issues is well documented (e.g., [12, 8]). The emotional aspect of privacy makes it difficult to evaluate privacy concern, and directly asking about a privacy issue may result in an emotional reaction and a biased response. This effect may be partly responsible for the dramatic privacy concern ratings coming from recent surveys, ratings that often seem to be at odds with user behavior. In this paper we propose indirect techniques for measuring content privacy concerns through surveys, thus hopefully diminishing any emotional response. We present a design for indirect surveys and test the design's use as (1) a means to measure relative privacy concerns across content types, (2) a tool for predicting unwillingness to share content (a possible indicator of privacy concern), and (3) a gauge for two underlying dimensions of privacy - content importance and the willingness to share content. Our evaluation consists of 3 surveys, taken by 200 users each, in which privacy is never asked about directly, but privacy warnings are issued with increasing escalation in the instructions and individual question-wording. We demonstrate that this escalation results in statistically and practically significant differences in responses to individual questions. In addition, we compare results against a direct privacy survey and show that rankings of privacy concerns are increasingly preserved as privacy language increases in the indirect surveys, thus indicating our mapping of the indirect questions to privacy ratings is accurately reflecting privacy concerns.",
"title": ""
},
{
"docid": "09f3bb814e259c74f1c42981758d5639",
"text": "PURPOSE OF REVIEW\nThe application of artificial intelligence in the diagnosis of obstructive lung diseases is an exciting phenomenon. Artificial intelligence algorithms work by finding patterns in data obtained from diagnostic tests, which can be used to predict clinical outcomes or to detect obstructive phenotypes. The purpose of this review is to describe the latest trends and to discuss the future potential of artificial intelligence in the diagnosis of obstructive lung diseases.\n\n\nRECENT FINDINGS\nMachine learning has been successfully used in automated interpretation of pulmonary function tests for differential diagnosis of obstructive lung diseases. Deep learning models such as convolutional neural network are state-of-the art for obstructive pattern recognition in computed tomography. Machine learning has also been applied in other diagnostic approaches such as forced oscillation test, breath analysis, lung sound analysis and telemedicine with promising results in small-scale studies.\n\n\nSUMMARY\nOverall, the application of artificial intelligence has produced encouraging results in the diagnosis of obstructive lung diseases. However, large-scale studies are still required to validate current findings and to boost its adoption by the medical community.",
"title": ""
}
] |
scidocsrr
|
9f96e2aa883f321c65845fe99f1a91db
|
Predicting movie ratings from audience behaviors
|
[
{
"docid": "1f4376dcc726b7ac5726620d887c60c3",
"text": "Recently sparse representation has been applied to visual tracker by modeling the target appearance using a sparse approximation over a template set, which leads to the so-called L1 trackers as it needs to solve an ℓ1 norm related minimization problem for many times. While these L1 trackers showed impressive tracking accuracies, they are very computationally demanding and the speed bottleneck is the solver to ℓ1 norm minimizations. This paper aims at developing an L1 tracker that not only runs in real time but also enjoys better robustness than other L1 trackers. In our proposed L1 tracker, a new ℓ1 norm related minimization model is proposed to improve the tracking accuracy by adding an ℓ1 norm regularization on the coefficients associated with the trivial templates. Moreover, based on the accelerated proximal gradient approach, a very fast numerical solver is developed to solve the resulting ℓ1 norm related minimization problem with guaranteed quadratic convergence. The great running time efficiency and tracking accuracy of the proposed tracker is validated with a comprehensive evaluation involving eight challenging sequences and five alternative state-of-the-art trackers.",
"title": ""
}
] |
[
{
"docid": "e1064861857e32be6184d3e9852f2c48",
"text": "Alzheimer's disease (AD) represents the most frequent neurodegenerative disease of the human brain worldwide. Currently practiced treatment strategies for AD only include some less effective symptomatic therapeutic interventions, which unable to counteract the disease course of AD. New therapeutic attempts aimed to prevent, reduce, or remove the extracellular depositions of the amyloid-β protein did not elicit beneficial effects on cognitive deficits or functional decline of AD. In view of the failure of these amyloid-β-based therapeutic trials and the close correlation between the brain pathology of the cytoskeletal tau protein and clinical AD symptoms, therapeutic attention has since shifted to the tau cytoskeletal protein as a novel drug target. The abnormal hyperphosphorylation and intraneuronal aggregation of this protein are early events in the evolution of the AD-related neurofibrillary pathology, and the brain spread of the AD-related tau aggregation pathology may possibly follow a corruptive protein templating and seeding-like mechanism according to the prion hypothesis. Accordingly, immunotherapeutic targeting of the tau aggregation pathology during the very early pre-tangle phase is currently considered to represent an effective and promising therapeutic approach for AD. Recent studies have shown that the initial immunoreactive tau aggregation pathology already prevails in several subcortical regions in the absence of any cytoskeletal changes in the cerebral cortex. Thus, it may be hypothesized that the subcortical brain regions represent the \"port of entry\" for the pathogenetic agent from which the disease ascends anterogradely as an \"interconnectivity pathology\".",
"title": ""
},
{
"docid": "20b1a9f9ea3a9a1798f611cbd44658c5",
"text": "The majority of colorectal cancers (CRCs) are classified as adenocarcinoma not otherwise specified (AC). Mucinous carcinoma (MC) is a distinct form of CRC and is found in 10–15% of patients with CRC. MC differs from AC in terms of both clinical and histopathological characteristics, and has long been associated with an inferior response to treatment compared with AC. The debate concerning the prognostic implications of MC in patients with CRC is ongoing and MC is still considered an unfavourable and unfamiliar subtype of the disease. Nevertheless, in the past few years epidemiological and clinical studies have shed new light on the treatment and management of patients with MC. Use of a multidisciplinary approach, including input from surgeons, pathologists, oncologists and radiologists, is beginning to lead to more-tailored approaches to patient management, on an individualized basis. In this Review, the authors provide insight into advances that have been made in the care of patients with MC. The prognostic implications for patients with colon or rectal MC are described separately; moreover, the predictive implications of MC regarding responses to commonly used therapies for CRC, such as chemotherapy, radiotherapy and chemoradiotherapy, and the potential for, and severity of, metastasis are also described.",
"title": ""
},
{
"docid": "777cbf7e5c5bdf4457ce24520bbc8036",
"text": "Recently, both industry and academia have proposed many different roadmaps for the future of DRAM. Consequently, there is a growing need for an extensible DRAM simulator, which can be easily modified to judge the merits of today's DRAM standards as well as those of tomorrow. In this paper, we present Ramulator, a fast and cycle-accurate DRAM simulator that is built from the ground up for extensibility. Unlike existing simulators, Ramulator is based on a generalized template for modeling a DRAM system, which is only later infused with the specific details of a DRAM standard. Thanks to such a decoupled and modular design, Ramulator is able to provide out-of-the-box support for a wide array of DRAM standards: DDR3/4, LPDDR3/4, GDDR5, WIO1/2, HBM, as well as some academic proposals (SALP, AL-DRAM, TL-DRAM, RowClone, and SARP). Importantly, Ramulator does not sacrifice simulation speed to gain extensibility: according to our evaluations, Ramulator is 2.5× faster than the next fastest simulator. Ramulator is released under the permissive BSD license.",
"title": ""
},
{
"docid": "9f883ffe537afa07a38c90c0174f7b03",
"text": "The scope and purpose of this work is 2-fold: to synthesize the available evidence and to translate it into recommendations. This document provides recommendations only when there is evidence to support them. As such, they do not constitute a complete protocol for clinical use. Our intention is that these recommendations be used by others to develop treatment protocols, which necessarily need to incorporate consensus and clinical judgment in areas where current evidence is lacking or insufficient. We think it is important to have evidence-based recommendations to clarify what aspects of practice currently can and cannot be supported by evidence, to encourage use of evidence-based treatments that exist, and to encourage creativity in treatment and research in areas where evidence does not exist. The communities of neurosurgery and neuro-intensive care have been early pioneers and supporters of evidence-based medicine and plan to continue in this endeavor. The complete guideline document, which summarizes and evaluates the literature for each topic, and supplemental appendices (A-I) are available online at https://www.braintrauma.org/coma/guidelines.",
"title": ""
},
{
"docid": "dd080a0ad38076c2693d6bcef574b053",
"text": "We present an approach to detect network configuration errors, which combines the benefits of two prior approaches. Like prior techniques that analyze configuration files, our approach can find errors proactively, before the configuration is applied, and answer “what if” questions. Like prior techniques that analyze data-plane snapshots, our approach can check a broad range of forwarding properties and produce actual packets that violate checked properties. We accomplish this combination by faithfully deriving and then analyzing the data plane that would emerge from the configuration. Our derivation of the data plane is fully declarative, employing a set of logical relations that represent the control plane, the data plane, and their relationship. Operators can query these relations to understand identified errors and their provenance. We use our approach to analyze two large university networks with qualitatively different routing designs and find many misconfigurations in each. Operators have confirmed the majority of these as errors and have fixed their configurations accordingly.",
"title": ""
},
{
"docid": "efb2a123ee1e757abfa87ad5f1aa03a2",
"text": "STUDY OBJECTIVES\nTo investigate the effects of sleep extension over multiple weeks on specific measures of athletic performance as well as reaction time, mood, and daytime sleepiness.\n\n\nSETTING\nStanford Sleep Disorders Clinic and Research Laboratory and Maples Pavilion, Stanford University, Stanford, CA.\n\n\nPARTICIPANTS\nEleven healthy students on the Stanford University men's varsity basketball team (mean age 19.4 ± 1.4 years).\n\n\nINTERVENTIONS\nSubjects maintained their habitual sleep-wake schedule for a 2-4 week baseline followed by a 5-7 week sleep extension period. Subjects obtained as much nocturnal sleep as possible during sleep extension with a minimum goal of 10 h in bed each night. Measures of athletic performance specific to basketball were recorded after every practice including a timed sprint and shooting accuracy. Reaction time, levels of daytime sleepiness, and mood were monitored via the Psychomotor Vigilance Task (PVT), Epworth Sleepiness Scale (ESS), and Profile of Mood States (POMS), respectively.\n\n\nRESULTS\nTotal objective nightly sleep time increased during sleep extension compared to baseline by 110.9 ± 79.7 min (P < 0.001). Subjects demonstrated a faster timed sprint following sleep extension (16.2 ± 0.61 sec at baseline vs. 15.5 ± 0.54 sec at end of sleep extension, P < 0.001). Shooting accuracy improved, with free throw percentage increasing by 9% and 3-point field goal percentage increasing by 9.2% (P < 0.001). Mean PVT reaction time and Epworth Sleepiness Scale scores decreased following sleep extension (P < 0.01). POMS scores improved with increased vigor and decreased fatigue subscales (P < 0.001). Subjects also reported improved overall ratings of physical and mental well-being during practices and games.\n\n\nCONCLUSIONS\nImprovements in specific measures of basketball performance after sleep extension indicate that optimal sleep is likely beneficial in reaching peak athletic performance.",
"title": ""
},
{
"docid": "f3824c2f3d6687dc88c06be66a3f2db7",
"text": "As part of a study that explored the factors associated with consistent condom use among senior secondary school female learners in Mbonge subdivision of rural Cameroon, the Health Belief Model (HBM) was used as the framework. Literature was reviewed to ascertain how the entire HBM has been defined and what recommendations have been made as to how to apply the HBM as a framework in studies regarding HIV/AIDS prevention. To achieve this, a systemic review of literature was undertaken. Electronic databases, academic journals and books from various sources were accessed. Several key search terms relating to the HBM and HIV/AIDS prevention were used. Only references deemed useful for bibliographies of relevant texts and journal articles were included. The inclusion criteria were articles that provided information about HIV/AIDS prevention and the HBM constructs. Six constructs of the HBM (perceived susceptibility to HIV/AIDS, perceived severity of HIV/AIDS, perceived benefit of condom use, perceived barriers to condom use, cues to action for condom use and condom use self-efficacy), and modifying factors were identified and applied as the framework for the study. The HBM was identified as the most commonly used theory in health education, health promotion and disease prevention, and thus provided the framework for the study. The underlying concept of the HBM is that behaviour is determined by personal beliefs or perceptions about a disease and the strategies available to decrease its occurrence.",
"title": ""
},
{
"docid": "0a4c81c9bb27c1231f6329587362eef7",
"text": "Traditional approaches to knowledge management are essentially limited to document management. However, much knowledge in organizations or communities resides in an informal social network and may be accessed only by asking the right people. This paper describes MARS, a multiagent referral system for knowledge management. MARS assigns a software agent to each user. The agents facilitate their users' interactions and help manage their personal social networks. Moreover, the agents cooperate with one another by giving and taking referrals to help their users find the right parties to contact for a specific knowledge need.",
"title": ""
},
{
"docid": "68a77338227063ce4880eb0fe98a3a92",
"text": "Mammalian microRNAs (miRNAs) have recently been identified as important regulators of gene expression, and they function by repressing specific target genes at the post-transcriptional level. Now, studies of miRNAs are resolving some unsolved issues in immunology. Recent studies have shown that miRNAs have unique expression profiles in cells of the innate and adaptive immune systems and have pivotal roles in the regulation of both cell development and function. Furthermore, when miRNAs are aberrantly expressed they can contribute to pathological conditions involving the immune system, such as cancer and autoimmunity; they have also been shown to be useful as diagnostic and prognostic indicators of disease type and severity. This Review discusses recent advances in our understanding of both the intended functions of miRNAs in managing immune cell biology and their pathological roles when their expression is dysregulated.",
"title": ""
},
{
"docid": "96053a9bd2faeff5ddf61f15f2b989c4",
"text": "Poly(vinyl alcohol) cryogel, PVA-C, is presented as a tissue-mimicking material, suitable for application in magnetic resonance (MR) imaging and ultrasound imaging. A 10% by weight poly(vinyl alcohol) in water solution was used to form PVA-C, which is solidified through a freeze-thaw process. The number of freeze-thaw cycles affects the properties of the material. The ultrasound and MR imaging characteristics were investigated using cylindrical samples of PVA-C. The speed of sound was found to range from 1520 to 1540 m s(-1), and the attenuation coefficients were in the range of 0.075-0.28 dB (cm MHz)(-1). T1 and T2 relaxation values were found to be 718-1034 ms and 108-175 ms, respectively. We also present applications of this material in an anthropomorphic brain phantom, a multi-volume stenosed vessel phantom and breast biopsy phantoms. Some suggestions are made for how best to handle this material in the phantom design and development process.",
"title": ""
},
{
"docid": "33eb86dcfd8b28aeb1616724c2ffc5eb",
"text": "Trajectories obtained from Global Position System (GPS)-enabled taxis grant us an opportunity not only to extract meaningful statistics, dynamics, and behaviors about certain urban road users but also to monitor adverse and/or malicious events. In this paper, we focus on the problem of detecting anomalous routes by comparing the latter against time-dependent historically “normal” routes. We propose an online method that is able to detect anomalous trajectories “on-the-fly” and to identify which parts of the trajectory are responsible for its anomalousness. Furthermore, we perform an in-depth analysis on around 43 800 anomalous trajectories that are detected out from the trajectories of 7600 taxis for a month, revealing that most of the anomalous trips are the result of conscious decisions of greedy taxi drivers to commit fraud. We evaluate our proposed isolation-based online anomalous trajectory (iBOAT) through extensive experiments on large-scale taxi data, and it shows that iBOAT achieves state-of-the-art performance, with a remarkable performance of the area under a curve (AUC) ≥ 0.99.",
"title": ""
},
{
"docid": "7c5892ec30e081f6f2eec5c9bbae3b38",
"text": "In recent years, the power quality of the ac system has become great concern due to the rapidly increased numbers of electronic equipment, power electronics and high voltage power system. Most of the commercial and industrial installation in the contry has large electrical loads which are severally inductive in nature causing lagging power factor which gives heavy penalties to consumer by electricity board. This situation is taken care by PFC. Power factor correction is the capacity of absorbing the reactive power produced by a load. In case of fixed loads, this can be done manually by switching of capacitors, however in case of rapidly varying and scattered loads it becomes difficult to maintain a high power factor by manually switching on/off the capacitors in proportion to variation of load within an installation. This drawback is overcome by using an APFC panel. In this paper measuring of power factor from load is done by using PIC microcontroller and trigger required capacitors in order to compensate reactive power and bring power factor near to unity.",
"title": ""
},
{
"docid": "a259b83f74b76401334a544c1fa2192d",
"text": "Pan-sharpening is a fundamental and significant task in the field of remote sensing imagery processing, in which high-resolution spatial details from panchromatic images are employed to enhance the spatial resolution of multispectral (MS) images. As the transformation from low spatial resolution MS image to high-resolution MS image is complex and highly nonlinear, inspired by the powerful representation for nonlinear relationships of deep neural networks, we introduce multiscale feature extraction and residual learning into the basic convolutional neural network (CNN) architecture and propose the multiscale and multidepth CNN for the pan-sharpening of remote sensing imagery. Both the quantitative assessment results and the visual assessment confirm that the proposed network yields high-resolution MS images that are superior to the images produced by the compared state-of-the-art methods.",
"title": ""
},
{
"docid": "556e496bd716f46e27c8378066c91521",
"text": "A study is being done into the psychology of crowd behaviour during emergencies, and ways of ensuring safety during mass evacuations by encouraging more altruistic behaviour. Crowd emergencies have previously been understood as involving panic and selfish behaviour. The present study tests the claims that (1) co-operation and altruistic behaviour rather than panic will predominate in mass responses to emergencies, even in situations where there is a clear threat of death; and that this is the case not only because (2) everyday norms and social roles continue to exert an influence, but also because (3) the external threat can create a sense of solidarity amongst strangers. Qualitative analysis of interviews with survivors of different emergencies supports these claims. A second study of the July 7 London bombings is on-going and also supports these claims. While these findings provide support for some existing models of mass emergency evacuation, it also points to the necessity of a new theoretical approach to the phenomena, using Self-Categorization Theory. Practical applications for the future management of crowd emergencies are also considered.",
"title": ""
},
{
"docid": "dcb06055c5494384c27f0e76415023fd",
"text": "Robotic exoskeleton systems are one of the highly active areas in recent robotic research. These systems have been developed significantly to be used for the human power augmentation, robotic rehabilitation, human power assist, and haptic interaction in virtual reality. Unlike the robots used in industry, the robotic exoskeleton systems should be designed with special consideration since they directly interact with human user. In the mechanical design of these systems, movable ranges, safety, comfort wearing, low inertia, and adaptability should be especially considered. Controllability, responsiveness, flexible and smooth motion generation, and safety should especially be considered in the controllers of exoskeleton systems. Furthermore, the controller should generate the motions in accordance with the human motion intention. This paper briefly reviews the upper extremity robotic exoskeleton systems. In the short review, it is focused to identify the brief history, basic concept, challenges, and future development of the robotic exoskeleton systems. Furthermore, key technologies of upper extremity exoskeleton systems are reviewed by taking state-of-the-art robot as examples.",
"title": ""
},
{
"docid": "bc69fe2a1791b8d7e0e262f8110df9d4",
"text": "A small-size coupled-fed loop antenna suitable to be printed on the system circuit board of the mobile phone for penta-band WWAN operation (824-960/1710-2170 MHz) is presented. The loop antenna requires only a small footprint of 15 x 25 mm2 on the circuit board, and it can also be in close proximity to the surrounding ground plane printed on the circuit board. That is, very small or no isolation distance is required between the antenna's radiating portion and the nearby ground plane. This can lead to compact integration of the internal on-board printed antenna on the circuit board of the mobile phone, especially the slim mobile phone. The loop antenna also shows a simple structure; it is formed by a loop strip of about 87 mm with its end terminal short-circuited to the ground plane and its front section capacitively coupled to a feeding strip which is also an efficient radiator to contribute a resonant mode for the antenna's upper band to cover the GSM1800/1900/UMTS bands (1710-2170 MHz). Through the coupling excitation, the antenna can also generate a 0.25-wavelength loop resonant mode to form the antenna's lower band to cover the GSM850/900 bands (824-960 MHz). Details of the proposed antenna are presented. The SAR results for the antenna with the presence of the head and hand phantoms are also studied.",
"title": ""
},
{
"docid": "36356a91bc84888cb2dd6180983fdfc5",
"text": "We recently showed that Long Short-Term Memory (LSTM) recurrent neural networks (RNNs) outperform state-of-the-art deep neural networks (DNNs) for large scale acoustic modeling where the models were trained with the cross-entropy (CE) criterion. It has also been shown that sequence discriminative training of DNNs initially trained with the CE criterion gives significant improvements. In this paper, we investigate sequence discriminative training of LSTM RNNs in a large scale acoustic modeling task. We train the models in a distributed manner using asynchronous stochastic gradient descent optimization technique. We compare two sequence discriminative criteria – maximum mutual information and state-level minimum Bayes risk, and we investigate a number of variations of the basic training strategy to better understand issues raised by both the sequential model, and the objective function. We obtain significant gains over the CE trained LSTM RNN model using sequence discriminative training techniques.",
"title": ""
},
{
"docid": "6c72b38246e35d1f49d7f55e89b42f21",
"text": "The success of IT project related to numerous factors. It had an important significance to find the critical factors for the success of project. Based on the general analysis of IT project management, this paper analyzed some factors of project management for successful IT project from the angle of modern project management. These factors include project participators, project communication, collaboration, and information sharing mechanism as well as project management process. In the end, it analyzed the function of each factor for a successful IT project. On behalf of the collective goal, by the use of the favorable project communication and collaboration, the project participants carry out successfully to the management of the process, which is significant to the project, and make project achieve success eventually.",
"title": ""
},
{
"docid": "88ac730e4e54ecc527bcd188b7cc5bf5",
"text": "In this paper we outline the nature of Neuro-linguistic Programming and explore its potential for learning and teaching. The paper draws on current research by Mathison (2003) to illustrate the role of language and internal imagery in teacherlearner interactions, and the way language influences beliefs about learning. Neuro-linguistic Programming (NLP) developed in the USA in the 1970's. It has achieved widespread popularity as a method for communication and personal development. The title, coined by the founders, Bandler and Grinder (1975a), refers to purported systematic, cybernetic links between a person's internal experience (neuro), their language (linguistic) and their patterns of behaviour (programming). In essence NLP is a form of modelling that offers potential for systematic and detailed understanding of people's subjective experience. NLP is eclectic, drawing on models and strategies from a wide range of sources. We outline NLP's approach to teaching and learning, and explore applications through illustrative data from Mathison's study. A particular implication for the training of educators is that of attention to communication skills. Finally we summarise criticisms of NLP that may represent obstacles to its acceptance by academe.",
"title": ""
},
{
"docid": "41e6c6be1e87daf0276403c6d91e9ebd",
"text": "Starting from Shannon's celebrated 1948 channel coding theorem, we trace the evolution of channel coding from Hamming codes to capacity-approaching codes. We focus on the contributions that have led to the most significant improvements in performance versus complexity for practical applications, particularly on the additive white Gaussian noise channel. We discuss algebraic block codes, and why they did not prove to be the way to get to the Shannon limit. We trace the antecedents of today's capacity-approaching codes: convolutional codes, concatenated codes, and other probabilistic coding schemes. Finally, we sketch some of the practical applications of these codes.",
"title": ""
}
] |
scidocsrr
|
a41f3178dd5aa19ea156e5e68d392e7c
|
Review of hardware cost estimation methods, models and tools applied to early phases of space mission planning
|
[
{
"docid": "f052fae696370910cc59f48552ddd889",
"text": "Decisions involve many intangibles that need to be traded off. To do that, they have to be measured along side tangibles whose measurements must also be evaluated as to, how well, they serve the objectives of the decision maker. The Analytic Hierarchy Process (AHP) is a theory of measurement through pairwise comparisons and relies on the judgements of experts to derive priority scales. It is these scales that measure intangibles in relative terms. The comparisons are made using a scale of absolute judgements that represents, how much more, one element dominates another with respect to a given attribute. The judgements may be inconsistent, and how to measure inconsistency and improve the judgements, when possible to obtain better consistency is a concern of the AHP. The derived priority scales are synthesised by multiplying them by the priority of their parent nodes and adding for all such nodes. An illustration is included.",
"title": ""
}
] |
[
{
"docid": "e69f71cc98bce195d0cfb77ecdc31088",
"text": "Wheat grass juice is the juice extracted from the pulp of wheat grass and has been used as a general-purpose health tonic for several years. Several of our patients in the thalassemia unit began consuming wheat grass juice after anecdotal accounts of beneficial effects on transfusion requirements. These encouraging experiences prompted us to evaluate the effect of wheat grass juice on transfusion requirements in patients with transfusion dependent beta thalassemia. Families of patients raised the wheat grass at home in kitchen garden/pots. The patients consumed about 100 mL of wheat grass juice daily. Each patient acted as his own control. Observations recorded during the period of intake of wheat grass juice were compared with one-year period preceding it. Variables recorded were the interval between transfusions, pre-transfusion hemoglobin, amount of blood transfused and the body weight. A beneficial effect of wheat grass juice was defined as decrease in the requirement of packed red cells (measured as grams/Kg body weight/year) by 25% or more. 16 cases were analyzed. Blood transfusion requirement fell by >25% in 8 (50%) patients with a decrease of >40% documented in 3 of these. No perceptible adverse effects were recognized.",
"title": ""
},
{
"docid": "a2622b1e0c1c58a535ec11a5075d1222",
"text": "The condition of a machine can automatically be identified by creating and classifying features that summarize characteristics of measured signals. Currently, experts, in their respective fields, devise these features based on their knowledge. Hence, the performance and usefulness depends on the expert's knowledge of the underlying physics or statistics. Furthermore, if new and additional conditions should be detectable, experts have to implement new feature extraction methods. To mitigate the drawbacks of feature engineering, a method from the subfield of feature learning, i.e., deep learning (DL), more specifically convolutional neural networks (NNs), is researched in this paper. The objective of this paper is to investigate if and how DL can be applied to infrared thermal (IRT) video to automatically determine the condition of the machine. By applying this method on IRT data in two use cases, i.e., machine-fault detection and oil-level prediction, we show that the proposed system is able to detect many conditions in rotating machinery very accurately (i.e., 95 and 91.67% accuracy for the respective use cases), without requiring any detailed knowledge about the underlying physics, and thus having the potential to significantly simplify condition monitoring using complex sensor data. Furthermore, we show that by using the trained NNs, important regions in the IRT images can be identified related to specific conditions, which can potentially lead to new physical insights.",
"title": ""
},
{
"docid": "53aeddc466479c710c132a19513426f6",
"text": "This paper considers agency in the setting of embodied or active inference. In brief, we associate a sense of agency with prior beliefs about action and ask what sorts of beliefs underlie optimal behavior. In particular, we consider prior beliefs that action minimizes the Kullback-Leibler (KL) divergence between desired states and attainable states in the future. This allows one to formulate bounded rationality as approximate Bayesian inference that optimizes a free energy bound on model evidence. We show that constructs like expected utility, exploration bonuses, softmax choice rules and optimism bias emerge as natural consequences of this formulation. Previous accounts of active inference have focused on predictive coding and Bayesian filtering schemes for minimizing free energy. Here, we consider variational Bayes as an alternative scheme that provides formal constraints on the computational anatomy of inference and action-constraints that are remarkably consistent with neuroanatomy. Furthermore, this scheme contextualizes optimal decision theory and economic (utilitarian) formulations as pure inference problems. For example, expected utility theory emerges as a special case of free energy minimization, where the sensitivity or inverse temperature (of softmax functions and quantal response equilibria) has a unique and Bayes-optimal solution-that minimizes free energy. This sensitivity corresponds to the precision of beliefs about behavior, such that attainable goals are afforded a higher precision or confidence. In turn, this means that optimal behavior entails a representation of confidence about outcomes that are under an agent's control.",
"title": ""
},
{
"docid": "5fd0b013ee2778ac6328729566eb1481",
"text": "As more and more virtual machines (VM) are packed into a physical machine, refactoring common kernel components shared by the virtual machines running on the same physical machine significantly reduces the overall resource consumption. A refactored kernel component typically runs on a special VM called a virtual appliance. Because of the semantics gap in Hardware Abstraction Layer (HAL)-based virtualization, a physical machine's virtual appliance requires the support of per-VM in-guest agents to perform VM-specific operations such as kernel data structure access and modification. To simplify deployment, these agents must be injected into guest virtual machines without requiring any manual installation. Moreover, it is essential to protect the integrity of in-guest agents at run time, especially when the underlying refactored kernel service is security-related. This paper describes the design, implementation and evaluation of a surreptitious kernel agent deployment and execution mechanism called SADE that requires zero installation effort and effectively hides the execution of agent code. To demonstrate the efficacy of SADE, we describe a signature-based memory scanning virtual appliance that uses SADE to inject its in-guest kernel agents without any support from the injected virtual machine, and show that both the start-up overhead and the run-time performance penalty of SADE are quite modest in practice.",
"title": ""
},
{
"docid": "28016e339bab5c1f5daa6bf26c3a06dd",
"text": "In this paper, we propose a straightforward solution to the problems of compositional parallel programming by using skeletons as the uniform mechanism for structured composition. In our approach parallel programs are constructed by composing procedures in a conventional base language using a set of high-level, pre-defined, functional, parallel computational forms known as skeletons. The ability to compose skeletons provides us with the essential tools for building further and more complex application-oriented skeletons specifying important aspects of parallel computation. Compared with the process network based composition approach, such as PCN, the skeleton approach abstracts away the fine details of connecting communication ports to the higher level mechanism of making data distributions conform, thus avoiding the complexity of using lower level ports as the means of interaction. Thus, the framework provides a natural integration of the compositional programming approach with the data parallel programming paradigm.",
"title": ""
},
{
"docid": "dcfc6f3c1eba7238bd6c6aa18dcff6df",
"text": "With the evaluation and simulation of long-term evolution/4G cellular network and hot discussion about new technologies or network architecture for 5G, the appearance of simulation and evaluation guidelines for 5G is in urgent need. This paper analyzes the challenges of building a simulation platform for 5G considering the emerging new technologies and network architectures. Based on the overview of evaluation methodologies issued for 4G candidates, challenges in 5G evaluation are formulated. Additionally, a cloud-based two-level framework of system-level simulator is proposed to validate the candidate technologies and fulfill the promising technology performance identified for 5G.",
"title": ""
},
{
"docid": "f60dfa21c052672d526e326c29be9447",
"text": "The intense competition that accompanied the growth of internet-based companies ushered in the era of 'big data' characterized by major innovations in processing of very large amounts of data and the application of advanced analytics including data mining and machine learning. Healthcare is on the cusp of its own era of big data, catalyzed by the changing regulatory and competitive environments, fueled by growing adoption of electronic health records, as well as efforts to integrate medical claims, electronic health records and other novel data sources. Applying the lessons from big data pioneers will require healthcare and life science organizations to make investments in new hardware and software, as well as in individuals with different skills. For life science companies, this will impact the entire pharmaceutical value chain from early research to postcommercialization support. More generally, this will revolutionize comparative effectiveness research.",
"title": ""
},
{
"docid": "b81c04540487e09937401130ccd53ee2",
"text": "Some design and operation aspects of axial flux permanent magnet synchronous machines, wound with concentrated coils, are presented. Due to their high number of poles, compactness, and excellent waveform quality and efficiency, these machines show satisfactory operation at low speeds, both as direct drive generators and as motors. In this paper, after a general analysis of the model and design features of this kind of machine, the attention is focused on wind power generation: The main sizing equations are defined, and the most relevant figures of merit are examined by means of a suitable parametric analysis. Some experimental results obtained by testing a three-phase, 50-kW, and 70-rpm prototype are presented and discussed, validating the modeling theory and the design procedure.",
"title": ""
},
{
"docid": "0ff8c4799b62c70ef6b7d70640f1a931",
"text": "Using on-chip interconnection networks in place of ad-hoc glo-bal wiring structures the top level wires on a chip and facilitates modular design. With this approach, system modules (processors, memories, peripherals, etc...) communicate by sending packets to one another over the network. The structured network wiring gives well-controlled electrical parameters that eliminate timing iterations and enable the use of high-performance circuits to reduce latency and increase bandwidth. The area overhead required to implement an on-chip network is modest, we estimate 6.6%. This paper introduces the concept of on-chip networks, sketches a simple network, and discusses some challenges in the architecture and design of these networks.",
"title": ""
},
{
"docid": "8d092dfa88ba239cf66e5be35fcbfbcc",
"text": "We present VideoWhisper, a novel approach for unsupervised video representation learning. Based on the observation that the frame sequence encodes the temporal dynamics of a video (e.g., object movement and event evolution), we treat the frame sequential order as a self-supervision to learn video representations. Unlike other unsupervised video feature learning methods based on frame-level feature reconstruction that is sensitive to visual variance, VideoWhisper is driven by a novel video “sequence-to-whisper” learning strategy. Specifically, for each video sequence, we use a prelearned visual dictionary to generate a sequence of high-level semantics, dubbed “whisper,” which can be considered as the language describing the video dynamics. In this way, we model VideoWhisper as an end-to-end sequence-to-sequence learning model using attention-based recurrent neural networks. This model is trained to predict the whisper sequence and hence it is able to learn the temporal structure of videos. We propose two ways to generate video representation from the model. Through extensive experiments on two real-world video datasets, we demonstrate that video representation learned by V ideoWhisper is effective to boost fundamental multimedia applications such as video retrieval and event classification.",
"title": ""
},
{
"docid": "e9b942c71646f2907de65c2641329a66",
"text": "In many vision based application identifying moving objects is important and critical task. For different computer vision application Background subtraction is fast way to detect moving object. Background subtraction separates the foreground from background. However, background subtraction is unable to remove shadow from foreground. Moving cast shadow associated with moving object also gets detected making it challenge for video surveillance. The shadow makes it difficult to detect the exact shape of object and to recognize the object.",
"title": ""
},
{
"docid": "93ae39ed7b4d6b411a2deb9967e2dc7d",
"text": "This paper presents fundamental results about how zero-curvature (paper) surfaces behave near creases and apices of cones. These entities are natural generalizations of the edges and vertices of piecewise-planar surfaces. Consequently, paper surfaces may furnish a richer and yet still tractable class of surfaces for computer-aided design and computer graphics applications than do polyhedral surfaces.",
"title": ""
},
{
"docid": "7d604a9daef9b10c31ac74ecc60bd690",
"text": "Sentiment analysis is treated as a classification task as it classifies the orientation of a text into either positive or negative. This paper describes experimental results that applied Support Vector Machine (SVM) on benchmark datasets to train a sentiment classifier. N-grams and different weighting scheme were used to extract the most classical features. It also explores Chi-Square weight features to select informative features for the classification. Experimental analysis reveals that by using Chi-Square feature selection may provide significant improvement on classification accuracy.",
"title": ""
},
{
"docid": "f20f924fc0e975e0a4b2107692e6bd4c",
"text": "One of the ultimate goals of open ended learning systems is to take advantage of experience to get a future benefit. We can identify two levels in learning. One builds directly over the data : it captures the pattern and regularities which allow for reliable predictions on new samples. The other starts from such an obtained source knowledge and focuses on how to generalize it to new target concepts : this is also known as learning to learn. Most of the existing machine learning methods stop at the first level and are able of reliable future decisions only if a large amount of training samples is available. This work is devoted to the second level of learning and focuses on how to transfer information from prior knowledge, exploiting it on a new learning problem with possibly scarce labeled data. We propose several algorithmic solutions by leveraging over prior models or features. One possibility is to constrain any target learning model to be close to the linear combination of several source models. Alternatively the prior knowledge can be used as an expert which judges over the target samples and considers the obtained output as an extra feature descriptor. All the proposed approaches evaluate automatically the relevance of prior knowledge and decide from where and how much to transfer without any need of external supervision or heuristically hand tuned parameters. A thorough experimental analysis shows the effectiveness of the defined methods both in case of interclass transfer and for adaptation across different domains. The last part of this work is dedicated to moving forward knowledge transfer towards life long learning. We show how to combine transfer and online learning to obtain a method which processes continuously new data guided by information acquired in the past. We also present an approach to exploit the large variety of existing visual data resources every time it is necessary to solve a new situated learning problem. We propose an image representation that decomposes orthogonally into a specific and a generic part. The last one can be used as an un-biased reference knowledge for future learning tasks.",
"title": ""
},
{
"docid": "907883af0e81f4157e81facd4ff4344c",
"text": "This work presents a low-power low-cost CDR design for RapidIO SerDes. The design is based on phase interpolator, which is controlled by a synthesized standard cell digital block. Half-rate architecture is adopted to lessen the problems in routing high speed clocks and reduce power. An improved half-rate bang-bang phase detector is presented to assure the stability of the system. Moreover, the paper proposes a simplified control scheme for the phase interpolator to further reduce power and cost. The CDR takes an area of less than 0.05mm2, and post simulation shows that the CDR has a RMS jitter of UIpp/32 (11.4ps@3.125GBaud) and consumes 9.5mW at 3.125GBaud.",
"title": ""
},
{
"docid": "165522fd4d416fa0b1aeef37f816b1e7",
"text": "Tarsal tunnel syndrome, unlike its similar sounding counterpart in the hand, is a significantly misunderstood clinical entity. Confusion concerning the anatomy involved, the presenting symptomatology, the appropriateness and significance of various diagnostic tests, conservative and surgical management, and, finally, the variability of reported results of surgical intervention attests to the lack of consensus surrounding this condition. The terminology involved in various diagnoses for chronic heel pain is also a hodgepodge of poorly understood entities.",
"title": ""
},
{
"docid": "d4d9948e170edd124c57742d91a5d021",
"text": "The attribute set in an information system evolves in time when new information arrives. Both lower and upper approximations of a concept will change dynamically when attributes vary. Inspired by the former incremental algorithm in Pawlak rough sets, this paper focuses on new strategies of dynamically updating approximations in probabilistic rough sets and investigates four propositions of updating approximations under probabilistic rough sets. Two incremental algorithms based on adding attributes and deleting attributes under probabilistic rough sets are proposed, respectively. The experiments on five data sets from UCI and a genome data with thousand attributes validate the feasibility of the proposed incremental approaches. 2014 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "f84e0d8892d0b9d0b108aa5dcf317037",
"text": "We present a continuously adaptive, continuous query (CACQ) implementation based on the eddy query processing framework. We show that our design provides significant performance benefits over existing approaches to evaluating continuous queries, not only because of its adaptivity, but also because of the aggressive cross-query sharing of work and space that it enables. By breaking the abstraction of shared relational algebra expressions, our Telegraph CACQ implementation is able to share physical operators --- both selections and join state --- at a very fine grain. We augment these features with a grouped-filter index to simultaneously evaluate multiple selection predicates. We include measurements of the performance of our core system, along with a comparison to existing continuous query approaches.",
"title": ""
},
{
"docid": "19a538b6a49be54b153b0a41b6226d1f",
"text": "This paper presents a robot aimed to assist the shoulder movements of stroke patients during their rehabilitation process. This robot has the general form of an exoskeleton, but is characterized by an action principle on the patient no longer requiring a tedious and accurate alignment of the robot and patient's joints. It is constituted of a poly-articulated structure whose actuation is deported and transmission is ensured by Bowden cables. It manages two of the three rotational degrees of freedom (DOFs) of the shoulder. Quite light and compact, its proximal end can be rigidly fixed to the patient's back on a rucksack structure. As for its distal end, it is connected to the arm through passive joints and a splint guaranteeing the robot action principle, i.e. exert a force perpendicular to the patient's arm, whatever its configuration. This paper also presents a first prototype of this robot and some experimental results such as the arm angular excursions reached with the robot in the three joint planes.",
"title": ""
},
{
"docid": "428c480be4ae3d2043c9f5485087c4af",
"text": "Current difference-expansion (DE) embedding techniques perform one layer embedding in a difference image. They do not turn to the next difference image for another layer embedding unless the current difference image has no expandable differences left. The obvious disadvantage of these techniques is that image quality may have been severely degraded even before the later layer embedding begins because the previous layer embedding has used up all expandable differences, including those with large magnitude. Based on integer Haar wavelet transform, we propose a new DE embedding algorithm, which utilizes the horizontal as well as vertical difference images for data hiding. We introduce a dynamical expandable difference search and selection mechanism. This mechanism gives even chances to small differences in two difference images and effectively avoids the situation that the largest differences in the first difference image are used up while there is almost no chance to embed in small differences of the second difference image. We also present an improved histogram-based difference selection and shifting scheme, which refines our algorithm and makes it resilient to different types of images. Compared with current algorithms, the proposed algorithm often has better embedding capacity versus image quality performance. The advantage of our algorithm is more obvious near the embedding rate of 0.5 bpp.",
"title": ""
}
] |
scidocsrr
|
93563d592bb8bba09a5bfee19068a451
|
Learning Criteria and Evaluation Metrics for Textual Transfer between Non-Parallel Corpora
|
[
{
"docid": "d131f4f22826a2083d35dfa96bf2206b",
"text": "The ranking of n objects based on pairwise comparisons is a core machine learning problem, arising in recommender systems, ad placement, player ranking, biological applications and others. In many practical situations the true pairwise comparisons cannot be actively measured, but a subset of all n(n−1)/2 comparisons is passively and noisily observed. Optimization algorithms (e.g., the SVM) could be used to predict a ranking with fixed expected Kendall tau distance, while achieving an Ω(n) lower bound on the corresponding sample complexity. However, due to their centralized structure they are difficult to extend to online or distributed settings. In this paper we show that much simpler algorithms can match the same Ω(n) lower bound in expectation. Furthermore, if an average of O(n log(n)) binary comparisons are measured, then one algorithm recovers the true ranking in a uniform sense, while the other predicts the ranking more accurately near the top than the bottom. We discuss extensions to online and distributed ranking, with benefits over traditional alternatives.",
"title": ""
},
{
"docid": "b14a77c6e663af1445e466a3e90d4e5f",
"text": "This paper proposes an approach for applying GANs to NMT. We build a conditional sequence generative adversarial net which comprises of two adversarial sub models, a generator and a discriminator. The generator aims to generate sentences which are hard to be discriminated from human-translated sentences ( i.e., the golden target sentences); And the discriminator makes efforts to discriminate the machine-generated sentences from humantranslated ones. The two sub models play a mini-max game and achieve the win-win situation when they reach a Nash Equilibrium. Additionally, the static sentence-level BLEU is utilized as the reinforced objective for the generator, which biases the generation towards high BLEU points. During training, both the dynamic discriminator and the static BLEU objective are employed to evaluate the generated sentences and feedback the evaluations to guide the learning of the generator. Experimental results show that the proposed model consistently outperforms the traditional RNNSearch and the newly emerged state-ofthe-art Transformer on English-German and Chinese-English translation tasks.",
"title": ""
},
{
"docid": "49942573c60fa910369b81c44447a9b1",
"text": "Generic generation and manipulation of text is challenging and has limited success compared to recent deep generative modeling in visual domain. This paper aims at generating plausible text sentences, whose attributes are controlled by learning disentangled latent representations with designated semantics. We propose a new neural generative model which combines variational auto-encoders (VAEs) and holistic attribute discriminators for effective imposition of semantic structures. The model can alternatively be seen as enhancing VAEs with the wake-sleep algorithm for leveraging fake samples as extra training data. With differentiable approximation to discrete text samples, explicit constraints on independent attribute controls, and efficient collaborative learning of generator and discriminators, our model learns interpretable representations from even only word annotations, and produces short sentences with desired attributes of sentiment and tenses. Quantitative experiments using trained classifiers as evaluators validate the accuracy of sentence and attribute generation.",
"title": ""
}
] |
[
{
"docid": "bb64d33190d359461a4258e0ed3d3229",
"text": "In this paper, we consider the class of first-order algebraic ordinary differential equations (AODEs), and study their rational solutions in three different approaches. A combinatorial approach gives a degree bound for rational solutions of a class of AODEs which do not have movable poles. Algebraic considerations yield an algorithm for computing rational solutions of quasilinear AODEs. And finally ideas from algebraic geometry combine these results to an algroithm for finding all rational solutions of a class of firstorder AODEs which covers all examples from the collection of Kamke. In particular, parametrizations of algebraic curves play an important role for a transformation of a parametrizable first-order AODE to a quasi-linear differential equation.",
"title": ""
},
{
"docid": "6625c2f456bb09c4e4668b7326247e02",
"text": "The More-Electric Aircraft (MEA) underlines the utilization of the electrical power to power the non-propulsive aircraft systems. Adopting the MEA achieves numerous advantages such as optimizing the aircraft performance and decreasing operating and maintenance costs. Moreover, the MEA reduces the emission of the air pollutant gases from the aircraft, which can contribute in solving the problem of climate change. However, the MEA put some challenge on the aircraft electrical system either in the amount of the required power or the processing and management of this power. This paper introduces a review for the MEA. The review includes the different options of generation and power system architectures.",
"title": ""
},
{
"docid": "36286c36dfd7451ecd297e2ebe445a35",
"text": "Research on the \"dark side\" of organizational behavior has determined that employee sabotage is most often a reaction by disgruntled employees to perceived mistreatment. To date, however, most studies on employee retaliation have focused on intra-organizational sources of (in)justice. Results from this field study of customer service representatives (N = 358) showed that interpersonal injustice from customers relates positively to customer-directed sabotage over and above intra-organizational sources of fairness. Moreover, the association between unjust treatment and sabotage was moderated by 2 dimensions of moral identity (symbolization and internalization) in the form of a 3-way interaction. The relationship between injustice and sabotage was more pronounced for employees high (vs. low) in symbolization, but this moderation effect was weaker among employees who were high (vs. low) in internalization. Last, employee sabotage was negatively related to job performance ratings.",
"title": ""
},
{
"docid": "89f1cec7c2999693805945c3c898c484",
"text": "Studies investigating the relationship between job satisfaction and turnover intention are abundant. Yet, this relationship has not been fully addressed in the IT field particularly in the developing countries. Moving from this point, this study aims at further probe this area by evaluating the levels of job satisfaction and turnover intention among a sample of IT employees in the Palestinian IT firms. Then, it attempts to examine the sources of job satisfaction and the causes of turnover intention among those employees. The findings show job security, work conditions, pay and benefits, work nature, coworkers, career advancement, supervision and management were all significantly correlated with overall job satisfaction. Only job security, pay, and coworkers were able to significantly influence turnover intention. Implications of the findings and future research directions are discussed",
"title": ""
},
{
"docid": "b57cbb1f6eeb34946df47f2be390aaf8",
"text": "The automatic detection of software vulnerabilities is an important research problem. However, existing solutions to this problem rely on human experts to define features and often miss many vulnerabilities (i.e., incurring high false negative rate). In this paper, we initiate the study of using deep learning-based vulnerability detection to relieve human experts from the tedious and subjective task of manually defining features. Since deep learning is motivated to deal with problems that are very different from the problem of vulnerability detection, we need some guiding principles for applying deep learning to vulnerability detection. In particular, we need to find representations of software programs that are suitable for deep learning. For this purpose, we propose using code gadgets to represent programs and then transform them into vectors, where a code gadget is a number of (not necessarily consecutive) lines of code that are semantically related to each other. This leads to the design and implementation of a deep learning-based vulnerability detection system, called Vulnerability Deep Pecker (VulDeePecker). In order to evaluate VulDeePecker, we present the first vulnerability dataset for deep learning approaches. Experimental results show that VulDeePecker can achieve much fewer false negatives (with reasonable false positives) than other approaches. We further apply VulDeePecker to 3 software products (namely Xen, Seamonkey, and Libav) and detect 4 vulnerabilities, which are not reported in the National Vulnerability Database but were “silently” patched by the vendors when releasing later versions of these products; in contrast, these vulnerabilities are almost entirely missed by the other vulnerability detection systems we experimented with.",
"title": ""
},
{
"docid": "bc1d4ce838971d6a04d5bf61f6c3f2d8",
"text": "This paper presents a novel network slicing management and orchestration architectural framework. A brief description of business scenarios and potential customers of network slicing is provided, illustrating the need for ordering network services with very different requirements. Based on specific customer goals (of ordering and building an end-to-end network slice instance) and other requirements gathered from industry and standardization associations, a solution is proposed enabling the automation of end-to-end network slice management and orchestration in multiple resource domains. This architecture distinguishes between two main design time and runtime components: Network Slice Design and Multi-Domain Orchestrator, belonging to different competence service areas with different players in these domains, and proposes the required interfaces and data structures between these components.",
"title": ""
},
{
"docid": "94b80e3a04bca3740d04df595a8aea4e",
"text": "Many problems have a structure with an inherently two (or higher) dimensional nature. Unfortunately, the classical method of representing problems when using Genetic Algorithms (GAs) is of a linear nature. We develop a genome representation with a related crossover mechanism which preserves spatial relationships for two dimensional problems. We then explore how crossover disruption rates relate to the spatial structure of the problem space.",
"title": ""
},
{
"docid": "fd9e8a79decf68721fb0dd81f16a5f8b",
"text": "Feeder reconfiguration (FRC) is an important function of distribution automation system. It modifies the topology of distribution network through changing the open/close statuses of tie switches and sectionalizing switches. The change of topology redirects the power flow within the distribution network, in order to obtain a better performance of the system. Various methods have been explored to solve FRC problems. This paper presents a literature survey on distribution system FRC. Among many aspects to be reviewed for a comprehensive study, this paper focuses on FRC objectives and solution methods. The problem definition of FRC is first discussed, the objectives are summarized, and various solution methods are categorized and evaluated.",
"title": ""
},
{
"docid": "9b5d45b155985b36986c69e2af9e68a0",
"text": "We address the problem of maximizing application speedup through runtime, self-selection of an appropriate number of processors on which to run. Automatic, runtime selection of processor allocations is important because many parallel applications exhibit peak speedups at allocations that are data or time dependent. We propose the use of a runtime system that: (a) dynamically measures job efficiencies at different allocations, (b) uses these measurements to calculate speedups, and (c) automatically adjusts a job’s processor allocation to maximize its speedup. Using a set of 10 applications that includes both hand-coded parallel programs and compiler-parallelized sequential programs, we show that our runtime system can reliably determine dynamic allocations that match the best possible static allocation, and that it has the potential to find dynamic allocations that outperform any static allocation.",
"title": ""
},
{
"docid": "f4955f2102675b67ffbe5c220e859c3b",
"text": "Identification of named entities such as person, organization and product names from text is an important task in information extraction. In many domains, the same entity could be referred to in multiple ways due to variations introduced by different user groups, variations of spellings across regions or cultures, usage of abbreviations, typographical errors and other reasons associated with conventional usage. Identifying a piece of text as a mention of an entity in such noisy data is difficult, even if we have a dictionary of possible entities. Previous approaches treat the synonym problem as part entity disambiguation and use learning-based methods that use the context of the words to identify synonyms. In this paper, we show that existing domain knowledge, encoded as rules, can be used effectively to address the synonym problem to a considerable extent. This makes the disambiguation task simpler, without the need for much training data. We look at a subset of application scenarios in named entity extraction, categorize the possible variations in entity names, and define rules for each category. Using these rules, we generate synonyms for the canonical list and match these synonyms to the actual occurrence in the data sets. In particular, we describe the rule categories that we developed for several named entities and report the results of applying our technique of extracting named entities by generating synonyms for two different domains.",
"title": ""
},
{
"docid": "ebc7f0693527eb6186fe56ef847581b3",
"text": "WITH THE ADVENT OF CENTRALized data warehouses, where data might be stored as electronic documents or as text fields in databases, text mining has increased in importance and economic value. One important goal in text mining is automatic classification of electronic documents. Computer programs scan text in a document and apply a model that assigns the document to one or more prespecified topics. Researchers have used benchmark data, such as the Reuters-21578 test collection, to measure advances in automated text categorization. Conventional methods such as decision trees have had competitive, but not optimal, predictive performance. Using the Reuters collection, we show that adaptive resampling techniques can improve decision-tree performance and that relatively small, pooled local dictionaries are effective. We’ve applied these techniques to online banking applications to enhance automated e-mail routing.",
"title": ""
},
{
"docid": "0509fa7a08613b5a2383c53f40882e38",
"text": "We present a demonstrated and commercially viable self-tracker, using robust software that fuses data from inertial and vision sensors. Compared to infrastructurebased trackers, self-trackers have the advantage that objects can be tracked over an extremely wide area, without the prohibitive cost of an extensive network of sensors or emitters to track them. So far most AR research has focused on the long-term goal of a purely vision-based tracker that can operate in arbitrary unprepared environments, even outdoors. We instead chose to start with artificial fiducials, in order to quickly develop the first self-tracker which is small enough to wear on a belt, low cost, easy to install and self-calibrate, and low enough latency to achieve AR registration. We also present a roadmap for how we plan to migrate from artificial fiducials to natural ones. By designing to the requirements of AR, our system can easily handle the less challenging applications of wearable VR systems and robot navigation.",
"title": ""
},
{
"docid": "dc7361721e3a40de15b3d2211998cc2a",
"text": "Despite advances in surgical technique and postoperative care, fibrosis remains the major impediment to a marked reduction of intraocular pressure without the need of additional medication (complete success) following filtering glaucoma surgery. Several aspects specific to filtering surgery may contribute to enhanced fibrosis. Changes in conjunctival tissue structure and composition due to preceding treatments as well as alterations in interstitial fluid flow and content due to aqueous humor efflux may act as important drivers of fibrosis. In light of these pathophysiological considerations, current and possible future strategies to control fibrosis following filtering glaucoma surgery are discussed.",
"title": ""
},
{
"docid": "123a21d9913767e1a8d1d043f6feab01",
"text": "Permanent magnet synchronous machines generate parasitic torque pulsations owing to distortion of the stator flux linkage distribution, variable magnetic reluctance at the stator slots, and secondary phenomena. The consequences are speed oscillations which, although small in magnitude, deteriorate the performance of the drive in demanding applications. The parasitic effects are analysed and modelled using the complex state-variable approach. A fast current control system is employed to produce highfrequency electromagnetic torque components for compensation. A self-commissioning scheme is described which identifies the machine parameters, particularly the torque ripple functions which depend on the angular position of the rotor. Variations of permanent magnet flux density with temperature are compensated by on-line adaptation. The algorithms for adaptation and control are implemented in a standard microcontroller system without additional hardware. The effectiveness of the adaptive torque ripple compensation is demonstrated by experiments.",
"title": ""
},
{
"docid": "3a53831731ec16edf54877c610ae4384",
"text": "We propose a position-based approach for largescale simulations of rigid bodies at interactive frame-rates. Our method solves positional constraints between rigid bodies and therefore integrates nicely with other position-based methods. Interaction of particles and rigid bodies through common constraints enables two-way coupling with deformables. The method exhibits exceptional performance and stability while being user-controllable and easy to implement. Various results demonstrate the practicability of our method for the resolution of collisions, contacts, stacking and joint constraints.",
"title": ""
},
{
"docid": "3f1d30e6d6ebd60f6f1e66b4dfd4f3a0",
"text": "Diagnosing software failures in the field is notoriously difficult, in part due to the fundamental complexity of trouble-shooting any complex software system, but further exacerbated by the paucity of information that is typically available in the production setting. Indeed, for reasons of both overhead and privacy, it is common that only the run-time log generated by a system (e.g., syslog) can be shared with the developers. Unfortunately, the ad-hoc nature of such reports are frequently insufficient for detailed failure diagnosis. This paper seeks to improve this situation within the rubric of existing practice. We describe a tool, LogEnhancer that automatically \"enhances\" existing logging code to aid in future post-failure debugging. We evaluate LogEnhancer on eight large, real-world applications and demonstrate that it can dramatically reduce the set of potential root failure causes that must be considered during diagnosis while imposing negligible overheads.",
"title": ""
},
{
"docid": "d6697ddaaf5e31ff2a6367115d7467c6",
"text": "A feature-rich second-generation 60-GHz transceiver chipset is introduced. It integrates dual-conversion superheterodyne receiver and transmitter chains, a sub-integer frequency synthesizer, full programmability from a digital interface, modulator and demodulator circuits to support analog modulations (e.g. MSK, BPSK), as well as a universal I&Q interface for digital modulation formats (e.g. OFDM). Achieved performance includes 6-dB receiver noise figure and 12 dBm transmitter output ldB compression point. Wireless link experiments with different modulation formats for 2-Gb/s real-time uncompressed HDTV transmission are discussed. Additionally, recent millimeter-wave package and antenna developments are summarized and a 60GHz silicon micromachined antenna is presented.",
"title": ""
},
{
"docid": "5f20df3abf9a4f7944af6b3afd16f6f8",
"text": "An important step towards the successful integration of information and communication technology (ICT) in schools is to facilitate their capacity to develop a school-based ICT policy resulting in an ICT policy plan. Such a plan can be defined as a school document containing strategic and operational elements concerning the integration of ICT in education. To write such a plan in an efficient way is challenging for schools. Therefore, an online tool [Planning for ICT in Schools (pICTos)] has been developed to guide schools in this process. A multiple case study research project was conducted with three Flemish primary schools to explore the process of developing a school-based ICT policy plan and the supportive role of pICTos within this process. Data from multiple sources (i.e. interviews with school leaders and ICT coordinators, school policy documents analysis and a teacher questionnaire) were collected and analysed. The results indicate that schools shape their ICT policy based on specific school data collected and presented by the pICTos environment. School teams learned about the actual and future place of ICT in teaching and learning. Consequently, different policy decisions were made according to each school’s vision on ‘good’ education and ICT integration.",
"title": ""
},
{
"docid": "79623049d961677960ed769d1469fb03",
"text": "Understanding how people communicate during disasters is important for creating systems to support this communication. Twitter is commonly used to broadcast information and to organize support during times of need. During the 2010 Gulf Oil Spill, Twitter was utilized for spreading information, sharing firsthand observations, and to voice concern about the situation. Through building a series of classifiers to detect emotion and sentiment, the distribution of emotion during the Gulf Oil Spill can be analyzed and its propagation compared against released information and corresponding events. We contribute a series of emotion classifiers and a prototype collaborative visualization of the results and discuss their implications.",
"title": ""
},
{
"docid": "f18a0ae573711eb97b9b4150d53182f3",
"text": "The Electrocardiogram (ECG) is commonly used to detect arrhythmias. Traditionally, a single ECG observation is used for diagnosis, making it difficult to detect irregular arrhythmias. Recent technology developments, however, have made it cost-effective to collect large amounts of raw ECG data over time. This promises to improve diagnosis accuracy, but the large data volume presents new challenges for cardiologists. This paper introduces ECGLens, an interactive system for arrhythmia detection and analysis using large-scale ECG data. Our system integrates an automatic heartbeat classification algorithm based on convolutional neural network, an outlier detection algorithm, and a set of rich interaction techniques. We also introduce A-glyph, a novel glyph designed to improve the readability and comparison of ECG signals. We report results from a comprehensive user study showing that A-glyph improves the efficiency in arrhythmia detection, and demonstrate the effectiveness of ECGLens in arrhythmia detection through two expert interviews.",
"title": ""
}
] |
scidocsrr
|
80797f84baa76fd952ea9e92cc3b81f5
|
Large-Scale Content-Based Matching of MIDI and Audio Files
|
[
{
"docid": "5cb4cbcf553da673354ebb325e18339e",
"text": "150-200 words) The MIDI Toolbox is a compilation of functions for analyzing and visualizing MIDI files in the Matlab computing environment. In this article, the basic issues of the Toolbox are summarized and demonstrated with examples ranging from melodic contour, similarity, keyfinding, meter-finding to segmentation. The Toolbox is based on symbolic musical data but signal processing methods are applied to cover such aspects of musical behaviour as geometric representations and short-term memory. Besides simple manipulation and filtering functions, the toolbox contains cognitively inspired analytic techniques that are suitable for contextdependent musical analysis, a prerequisite for many music information retrieval applications.",
"title": ""
},
{
"docid": "1ecade87386366ab7b1631b8a47c7c32",
"text": "We introduce an efficient computational framework for hashing data belonging to multiple modalities into a single representation space where they become mutually comparable. The proposed approach is based on a novel coupled siamese neural network architecture and allows unified treatment of intra- and inter-modality similarity learning. Unlike existing cross-modality similarity learning approaches, our hashing functions are not limited to binarized linear projections and can assume arbitrarily complex forms. We show experimentally that our method significantly outperforms state-of-the-art hashing approaches on multimedia retrieval tasks.",
"title": ""
},
{
"docid": "baf3c2456fa0e28b39730a5803ddcc2b",
"text": "Music21 is an object-oriented toolkit for analyzing, searching, and transforming music in symbolic (scorebased) forms. The modular approach of the project allows musicians and researchers to write simple scripts rapidly and reuse them in other projects. The toolkit aims to provide powerful software tools integrated with sophisticated musical knowledge to both musicians with little programming experience (especially musicologists) and to programmers with only modest music theory skills. This paper introduces the music21 system, demonstrating how to use it and the types of problems it is wellsuited toward advancing. We include numerous examples of its power and flexibility, including demonstrations of graphing data and generating annotated musical scores.",
"title": ""
}
] |
[
{
"docid": "83b52262983115cff76c558888adb428",
"text": "The computation of relatedness between two fragments of text in an automated manner requires taking into account a wide range of factors pertaining to the meaning the two fragments convey, and the pairwise relations between their words. Without doubt, a measure of relatedness between text segments must take into account both the lexical and the semantic relatedness between words. Such a measure that captures well both aspects of text relatedness may help in many tasks, such as text retrieval, classification and clustering. In this paper we present a new approach for measuring the semantic relatedness between words based on their implicit semantic links. The approach exploits only a word thesaurus in order to devise implicit semantic links between words. Based on this approach, we introduce Omiotis, a new measure of semantic relatedness between texts which capitalizes on the word-to-word semantic relatedness measure (SR) and extends it to measure the relatedness between texts. We gradually validate our method: we first evaluate the performance of the semantic relatedness measure between individual words, covering word-to-word similarity and relatedness, synonym identification and word analogy; then, we proceed with evaluating the performance of our method in measuring text-to-text semantic relatedness in two tasks, namely sentence-to-sentence similarity and paraphrase recognition. Experimental evaluation shows that the proposed method outperforms every lexicon-based method of semantic relatedness in the selected tasks and the used data sets, and competes well against corpus-based and hybrid approaches.",
"title": ""
},
{
"docid": "ee173a79714c48ebcf6eafdba0bc53f4",
"text": "Little is known about the measurement properties of clinical tests of stepping in different directions for children with cerebral palsy (CP) and Down syndrome (DS). The ability to step in various directions is an important balance skill for daily life. Standardized testing of this skill can yield important information for therapy planning. This observational methodological study was aimed at defining the relative and absolute reliability, minimal detectable difference, and concurrent validity with the Timed Up-&-Go (TUG) of the Four Square Step Test (FSST) for children with CP and DS. Thirty children, 16 with CP and 14 with DS, underwent repeat testing 2 weeks apart on the FSST by 3 raters. TUG was administered on the second test occasion. Intraclass correlation coefficients (ICC [1,1] and [3,1]) with 95% confidence intervals, standard error of measurement (SEM), minimal detectable difference (MDD) and the Spearman rank correlation coefficient were computed. The FSST demonstrated excellent interrater reliability (ICC=.79; 95% CI: .66, .89) and high positive correlation with the TUG (r=.74). Test-retest reliability estimates varied from moderate to excellent among the 3 raters (.54, .78 and .89 for raters 1, 2 and 3, respectively). SEM and MDD were calculated at 1.91s and 5.29s, respectively. Scores on the FSST of children with CP and DS between 5 and 12 years of age are reliable and valid.",
"title": ""
},
{
"docid": "f513165fd055b04544dff6eb5b7ec771",
"text": "Low power wide area (LPWA) networks are attracting a lot of attention primarily because of their ability to offer affordable connectivity to the low-power devices distributed over very large geographical areas. In realizing the vision of the Internet of Things, LPWA technologies complement and sometimes supersede the conventional cellular and short range wireless technologies in performance for various emerging smart city and machine-to-machine applications. This review paper presents the design goals and the techniques, which different LPWA technologies exploit to offer wide-area coverage to low-power devices at the expense of low data rates. We survey several emerging LPWA technologies and the standardization activities carried out by different standards development organizations (e.g., IEEE, IETF, 3GPP, ETSI) as well as the industrial consortia built around individual LPWA technologies (e.g., LoRa Alliance, Weightless-SIG, and Dash7 alliance). We further note that LPWA technologies adopt similar approaches, thus sharing similar limitations and challenges. This paper expands on these research challenges and identifies potential directions to address them. While the proprietary LPWA technologies are already hitting the market with large nationwide roll-outs, this paper encourages an active engagement of the research community in solving problems that will shape the connectivity of tens of billions of devices in the next decade.",
"title": ""
},
{
"docid": "6c2a033b374b4318cd94f0a617ec705a",
"text": "In this paper, we propose to use Deep Neural Net (DNN), which has been recently shown to reduce speech recognition errors significantly, in Computer-Aided Language Learning (CALL) to evaluate English learners’ pronunciations. Multi-layer, stacked Restricted Boltzman Machines (RBMs), are first trained as nonlinear basis functions to represent speech signals succinctly, and the output layer is discriminatively trained to optimize the posterior probabilities of correct, sub-phonemic “senone” states. Three Goodness of Pronunciation (GOP) scores, including: the likelihood-based posterior probability, averaged framelevel posteriors of the DNN output layer “senone” nodes, and log likelihood ratio of correct and competing models, are tested with recordings of both native and non-native speakers, along with manual grading of pronunciation quality. The experimental results show that the GOP estimated by averaged frame-level posteriors of “senones” correlate with human scores the best. Comparing with GOPs estimated with non-DNN, i.e. GMMHMM, based models, the new approach can improve the correlations relatively by 22.0% or 15.6%, at word or sentence levels, respectively. In addition, the frame-level posteriors, which doesn’t need a decoding lattice and its corresponding forwardbackward computations, is suitable for supporting fast, on-line, multi-channel applications.",
"title": ""
},
{
"docid": "fc4f06fb586de6452337d83bac8f64f3",
"text": "Deep learning techniques have boosted the performance of hyperspectral image (HSI) classification. In particular, convolutional neural networks (CNNs) have shown superior performance to that of the conventional machine learning algorithms. Recently, a novel type of neural networks called capsule networks (CapsNets) was presented to improve the most advanced CNNs. In this paper, we present a modified two-layer CapsNet with limited training samples for HSI classification, which is inspired by the comparability and simplicity of the shallower deep learning models. The presented CapsNet is trained using two real HSI datasets, i.e., the PaviaU (PU) and SalinasA datasets, representing complex and simple datasets, respectively, and which are used to investigate the robustness or representation of every model or classifier. In addition, a comparable paradigm of network architecture design has been proposed for the comparison of CNN and CapsNet. Experiments demonstrate that CapsNet shows better accuracy and convergence behavior for the complex data than the state-of-the-art CNN. For CapsNet using the PU dataset, the Kappa coefficient, overall accuracy, and average accuracy are 0.9456, 95.90%, and 96.27%, respectively, compared to the corresponding values yielded by CNN of 0.9345, 95.11%, and 95.63%. Moreover, we observed that CapsNet has much higher confidence for the predicted probabilities. Subsequently, this finding was analyzed and discussed with probability maps and uncertainty analysis. In terms of the existing literature, CapsNet provides promising results and explicit merits in comparison with CNN and two baseline classifiers, i.e., random forests (RFs) and support vector machines (SVMs).",
"title": ""
},
{
"docid": "9a9c35392d53595295c344b351563405",
"text": "This paper is the second part of a two-part paper, which is a survey of multiobjective evolutionary algorithms for data mining problems. In Part I , multiobjective evolutionary algorithms used for feature selection and classification have been reviewed. In this part, different multiobjective evolutionary algorithms used for clustering, association rule mining, and other data mining tasks are surveyed. Moreover, a general discussion is provided along with scopes for future research in the domain of multiobjective evolutionary algorithms for data mining.",
"title": ""
},
{
"docid": "cea6cc7b8e9f4ca9571f19be6f9eef39",
"text": "This paper discusses file system Access Control Lists as implemented in several UNIX-like operating systems. After recapitulating the concepts of these Access Control Lists that never formally became a POSIX standard, we focus on the different aspects of implementation and use on Linux.",
"title": ""
},
{
"docid": "02605f4044a69b70673121985f1bd913",
"text": "A novel class of low-cost, small-footprint and high-gain antenna arrays is presented for W-band applications. A 4 × 4 antenna array is proposed and demonstrated using substrate-integrated waveguide (SIW) technology for the design of its feed network and longitudinal slots in the SIW top metallic surface to drive the array antenna elements. Dielectric cubes of low-permittivity material are placed on top of each 1 × 4 antenna array to increase the gain of the circular patch antenna elements. This new design is compared to a second 4 × 4 antenna array which, instead of dielectric cubes, uses vertically stacked Yagi-like parasitic director elements to increase the gain. Measured impedance bandwidths of the two 4 × 4 antenna arrays are about 7.5 GHz (94.2-101.8 GHz) at 18 ± 1 dB gain level, with radiation patterns and gains of the two arrays remaining nearly constant over this bandwidth. While the fabrication effort of the new array involving dielectric cubes is significantly reduced, its measured radiation efficiency of 81 percent is slightly lower compared to 90 percent of the Yagi-like design.",
"title": ""
},
{
"docid": "4ac12c76112ff2085c4701130448f5d5",
"text": "A key point in the deployment of new wireless services is the cost-effective extension and enhancement of the network's radio coverage in indoor environments. Distributed Antenna Systems using Fiber-optics distribution (F-DAS) represent a suitable method of extending multiple-operator radio coverage into indoor premises, tunnels, etc. Another key point is the adoption of MIMO (Multiple Input — Multiple Output) transmission techniques which can exploit the multipath nature of the radio link to ensure reliable, high-speed wireless communication in hostile environments. In this paper novel indoor deployment solutions based on Radio over Fiber (RoF) and distributed-antenna MIMO techniques are presented and discussed, highlighting their potential in different cases.",
"title": ""
},
{
"docid": "50795998e83dafe3431c3509b9b31235",
"text": "In this study, the daily movement directions of three frequently traded stocks (GARAN, THYAO and ISCTR) in Borsa Istanbul were predicted using deep neural networks. Technical indicators obtained from individual stock prices and dollar-gold prices were used as features in the prediction. Class labels indicating the movement direction were found using daily close prices of the stocks and they were aligned with the feature vectors. In order to perform the prediction process, the type of deep neural network, Convolutional Neural Network, was trained and the performance of the classification was evaluated by the accuracy and F-measure metrics. In the experiments performed, using both price and dollar-gold features, the movement directions in GARAN, THYAO and ISCTR stocks were predicted with the accuracy rates of 0.61, 0.578 and 0.574 respectively. Compared to using the price based features only, the use of dollar-gold features improved the classification performance.",
"title": ""
},
{
"docid": "e159ffe1f686e400b28d398127edfc5c",
"text": "In this paper, we present an in-vehicle computing system capable of localizing lane markings and communicating them to drivers. To the best of our knowledge, this is the first system that combines the Maximally Stable Extremal Region (MSER) technique with the Hough transform to detect and recognize lane markings (i.e., lines and pictograms). Our system begins by localizing the region of interest using the MSER technique. A three-stage refinement computing algorithm is then introduced to enhance the results of MSER and to filter out undesirable information such as trees and vehicles. To achieve the requirements of real-time systems, the Progressive Probabilistic Hough Transform (PPHT) is used in the detection stage to detect line markings. Next, the recognition of the color and the form of line markings is performed; this it is based on the results of the application of the MSER to left and right line markings. The recognition of High-Occupancy Vehicle pictograms is performed using a new algorithm, based on the results of MSER regions. In the tracking stage, Kalman filter is used to track both ends of each detected line marking. Several experiments are conducted to show the efficiency of our system. © 2015 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "9d98fe5183d53bfaaa42e642bc03b9b3",
"text": "Cyber-attacks continue to increase worldwide, leading to significant loss or misuse of information assets. Most of the existing intrusion detection systems rely on per-packet inspection, a resource consuming task in today’s high speed networks. A recent trend is to analyze netflows (or simply flows) instead of packets, a technique performed at a relative low level leading to high false alarm rates. Since analyzing raw data extracted from flows lacks the semantic information needed to discover attacks, a novel approach is introduced, which uses contextual information to automatically identify and query possible semantic links between different types of suspicious activities extracted from flows. Time, location, and other contextual information mined from flows is applied to generate semantic links among alerts raised in response to suspicious flows. These semantic links are identified through an inference process on probabilistic semantic link networks (SLNs), which receive an initial prediction from a classifier that analyzes incoming flows. The SLNs are then queried at run-time to retrieve other relevant predictions. We show that our approach can be extended to detect unknown attacks in flows as variations of known attacks. An extensive validation of our approach has been performed with a prototype system on several benchmark datasets yielding very promising results in detecting both known and unknown attacks.",
"title": ""
},
{
"docid": "cb8a59bbed595776e27058e0cfc8b494",
"text": "In real world planning problems, time for deliberation is often limited. Anytime planners are well suited for these problems: they find a feasible solution quickly and then continually work on improving it until time runs out. In this paper we propose an anytime heuristic search, ARA*, which tunes its performance bound based on available search time. It starts by finding a suboptimal solution quickly using a loose bound, then tightens the bound progressively as time allows. Given enough time it finds a provably optimal solution. While improving its bound, ARA* reuses previous search efforts and, as a result, is significantly more efficient than other anytime search methods. In addition to our theoretical analysis, we demonstrate the practical utility of ARA* with experiments on a simulated robot kinematic arm and a dynamic path planning problem for an outdoor rover.",
"title": ""
},
{
"docid": "dc93126fadf8801687573cbef29cdef1",
"text": "Many graph-based semi-supervised learning methods for large datasets have been proposed to cope with the rapidly increasing size of data, such as Anchor Graph Regularization (AGR). This model builds a regularization framework by exploring the underlying structure of the whole dataset with both datapoints and anchors. Nevertheless, AGR still has limitations in its two components: (1) in anchor graph construction, the estimation of the local weights between each datapoint and its neighboring anchors could be biased and relatively slow; and (2) in anchor graph regularization, the adjacency matrix that estimates the relationship between datapoints, is not sufficiently effective. In this paper, we develop an Efficient Anchor Graph Regularization (EAGR) by tackling these issues. First, we propose a fast local anchor embedding method, which reformulates the optimization of local weights and obtains an analytical solution. We show that this method better reconstructs datapoints with anchors and speeds up the optimizing process. Second, we propose a new adjacency matrix among anchors by considering the commonly linked datapoints, which leads to a more effective normalized graph Laplacian over anchors. We show that, with the novel local weight estimation and normalized graph Laplacian, EAGR is able to achieve better classification accuracy with much less computational costs. Experimental results on several publicly available datasets demonstrate the effectiveness of our approach.",
"title": ""
},
{
"docid": "ef8be5104f9bc4a0f4353ed236b6afb8",
"text": "State-of-the-art human pose estimation methods are based on heat map representation. In spite of the good performance, the representation has a few issues in nature, such as non-differentiable postprocessing and quantization error. This work shows that a simple integral operation relates and unifies the heat map representation and joint regression, thus avoiding the above issues. It is differentiable, efficient, and compatible with any heat map based methods. Its effectiveness is convincingly validated via comprehensive ablation experiments under various settings, specifically on 3D pose estimation, for the first time.",
"title": ""
},
{
"docid": "55ec669a67b88ff0b6b88f1fa6408df9",
"text": "This paper proposes low overhead training techniques for a wireless communication system equipped with a Multifunctional Reconfigurable Antenna (MRA) capable of dynamically changing beamwidth and beam directions. A novel microelectromechanical system (MEMS) MRA antenna is presented with radiation patterns (generated using complete electromagnetic full-wave analysis) which are used to quantify the communication link performance gains. In particular, it is shown that using the proposed Exhaustive Training at Reduced Frequency (ETRF) consistently results in a reduction in training overhead. It is also demonstrated that further reduction in training overhead is possible using statistical or MUSIC-based training schemes. Bit Error Rate (BER) and capacity simulations are carried out using an MRA, which can tilt its radiation beam into one of Ndir = 4 or 8 directions with variable beamwidth (≈2π/Ndir). The performance of each training scheme is quantified for OFDM systems operating in frequency selective channels with and without Line of Sight (LoS). We observe 6 dB of gain at BER = 10-4 and 6 dB improvement in capacity (at capacity = 6 bits/sec/subcarrier) are achievable for an MRA with Ndir= 8 as compared to omni directional antennas using ETRF scheme in a LoS environment.",
"title": ""
},
{
"docid": "4c8ac629f8a7faaa315e4e4441eb630c",
"text": "This article reviews the cognitive therapy of depression. The psychotherapy based on this theory consists of behavioral and verbal techniques to change cognitions, beliefs, and errors in logic in the patient's thinking. A few of the various techniques are described and a case example is provided. Finally, the outcome studies testing the efficacy of this approach are reviewed.",
"title": ""
},
{
"docid": "0e5a11ef4daeb969702e40ea0c50d7f3",
"text": "OBJECTIVES\nThe aim of this study was to assess the long-term safety and efficacy of the CYPHER (Cordis, Johnson and Johnson, Bridgewater, New Jersey) sirolimus-eluting coronary stent (SES) in percutaneous coronary intervention (PCI) for ST-segment elevation myocardial infarction (STEMI).\n\n\nBACKGROUND\nConcern over the safety of drug-eluting stents implanted during PCI for STEMI remains, and long-term follow-up from randomized trials are necessary. TYPHOON (Trial to assess the use of the cYPHer sirolimus-eluting stent in acute myocardial infarction treated with ballOON angioplasty) randomized 712 patients with STEMI treated by primary PCI to receive either SES (n = 355) or bare-metal stents (BMS) (n = 357). The primary end point, target vessel failure at 1 year, was significantly lower in the SES group than in the BMS group (7.3% vs. 14.3%, p = 0.004) with no increase in adverse events.\n\n\nMETHODS\nA 4-year follow-up was performed. Complete data were available in 501 patients (70%), and the survival status is known in 580 patients (81%).\n\n\nRESULTS\nFreedom from target lesion revascularization (TLR) at 4 years was significantly better in the SES group (92.4% vs. 85.1%; p = 0.002); there were no significant differences in freedom from cardiac death (97.6% and 95.9%; p = 0.37) or freedom from repeat myocardial infarction (94.8% and 95.6%; p = 0.85) between the SES and BMS groups. No difference in definite/probable stent thrombosis was noted at 4 years (SES: 4.4%, BMS: 4.8%, p = 0.83). In the 580 patients with known survival status at 4 years, the all-cause death rate was 5.8% in the SES and 7.0% in the BMS group (p = 0.61).\n\n\nCONCLUSIONS\nIn the 70% of patients with complete follow-up at 4 years, SES demonstrated sustained efficacy to reduce TLR with no difference in death, repeat myocardial infarction or stent thrombosis. (The Study to Assess AMI Treated With Balloon Angioplasty [TYPHOON]; NCT00232830).",
"title": ""
},
{
"docid": "ffed6abc3134f30d267342e83931ee64",
"text": "This paper discusses General Random Utility Models (GRUMs). These are a class of parametric models that generate partial ranks over alternatives given attributes of agents and alternatives. We propose two preference elicitation scheme for GRUMs developed from principles in Bayesian experimental design, one for social choice and the other for personalized choice. We couple this with a general Monte-CarloExpectation-Maximization (MC-EM) based algorithm for MAP inference under GRUMs. We also prove uni-modality of the likelihood functions for a class of GRUMs. We examine the performance of various criteria by experimental studies, which show that the proposed elicitation scheme increases the precision of estimation.",
"title": ""
},
{
"docid": "4a0c2ad7f07620fa5ea5a97a68672131",
"text": "The Philadelphia Neurodevelopmental Cohort (PNC) is a large-scale, NIMH funded initiative to understand how brain maturation mediates cognitive development and vulnerability to psychiatric illness, and understand how genetics impacts this process. As part of this study, 1445 adolescents ages 8-21 at enrollment underwent multimodal neuroimaging. Here, we highlight the conceptual basis for the effort, the study design, and the measures available in the dataset. We focus on neuroimaging measures obtained, including T1-weighted structural neuroimaging, diffusion tensor imaging, perfusion neuroimaging using arterial spin labeling, functional imaging tasks of working memory and emotion identification, and resting state imaging of functional connectivity. Furthermore, we provide characteristics regarding the final sample acquired. Finally, we describe mechanisms in place for data sharing that will allow the PNC to become a freely available public resource to advance our understanding of normal and pathological brain development.",
"title": ""
}
] |
scidocsrr
|
c501784eee5db811254bb8b9e5bf4158
|
Joint inference for cross-document information extraction
|
[
{
"docid": "7aaa535e1294e9bcce7d0d40caff626e",
"text": "Event extraction is the task of detecting certain specified types of events that are mentioned in the source language data. The state-of-the-art research on the task is transductive inference (e.g. cross-event inference). In this paper, we propose a new method of event extraction by well using cross-entity inference. In contrast to previous inference methods, we regard entitytype consistency as key feature to predict event mentions. We adopt this inference method to improve the traditional sentence-level event extraction system. Experiments show that we can get 8.6% gain in trigger (event) identification, and more than 11.8% gain for argument (role) classification in ACE event extraction.",
"title": ""
}
] |
[
{
"docid": "4315cbfa13e9a32288c1857f231c6410",
"text": "The likelihood of soft errors increase with system complexity, reduction in operational voltages, exponential growth in transistors per chip, increases in clock frequencies and device shrinking. As the memory bit-cell area is condensed, single event upset that would have formerly despoiled only a single bit-cell are now proficient of upsetting multiple contiguous memory bit-cells per particle strike. While these error types are beyond the error handling capabilities of the frequently used error correction codes (ECCs) for single bit, the overhead associated with moving to more sophisticated codes for multi-bit errors is considered to be too costly. To address this issue, this paper presents a new approach to detect and correct multi-bit soft error by using Horizontal-Vertical-Double-Bit-Diagonal (HVDD) parity bits with a comparatively low overhead.",
"title": ""
},
{
"docid": "c3c3add0c42f3b98962c4682a72b1865",
"text": "This paper compares to investigate output characteristics according to a conventional and novel stator structure of axial flux permanent magnet (AFPM) motor for cooling fan drive system. Segmented core of stator has advantages such as easy winding and fast manufacture speed. However, a unit cost increase due to cutting off tooth tip to constant slot width. To solve the problem, this paper proposes a novel stator structure with three-step segmented core. The characteristics of AFPM were analyzed by time-stepping three dimensional finite element analysis (3D FEA) in two stator models, when stator cores are cutting off tooth tips from rectangular core and three step segmented core. Prototype motors were manufactured based on analysis results, and were tested as a motor.",
"title": ""
},
{
"docid": "d3d6c72526f0312fd2de537af17d2c88",
"text": "OBJECTIVES\nTo investigate the influence of antibody formation to TNF-α blocking agents on the clinical response in AS patients treated with infliximab (IFX), etanercept (ETA), or adalimumab (ADA), and to investigate the development of ANA, ANCA, and anti-dsDNA antibodies in association with the formation of antibodies to TNF-α blocking agents.\n\n\nMETHODS\nConsecutive AS outpatients with active disease who started treatment with IFX (n=20), ETA (n=20), or ADA (n=20) were included in this longitudinal observational study. Clinical data were collected prospectively at baseline and after 3, 6, and 12 months of anti-TNF-α treatment. At the same time points, serum samples were collected. In these samples, antibodies to TNF-α blocking agents, serum TNF-α blocker levels, and ANA, ANCA, and anti-dsDNA antibodies were measured retrospectively.\n\n\nRESULTS\nAnti-IFX, anti-ETA, and anti-ADA antibodies were induced in 20%, 0%, and 30% of patients, respectively. Although ANA, ANCA, and anti-dsDNA antibodies were detected during anti-TNF-α treatment, no significant association was found between the presence of these autoantibodies and the formation of antibodies to TNF-α blocking agents. Patients with anti-IFX or anti-ADA antibodies had significantly lower serum TNF-α blocker levels compared to patients without these antibodies. Furthermore, significant negative correlations were found between serum TNF-α blocker levels and assessments of disease activity.\n\n\nCONCLUSIONS\nThis study indicates that antibody formation to IFX or ADA is related to a decrease in efficacy and early discontinuation of anti-TNF-α treatment in AS patients. Furthermore, autoantibody formation does not seem to be associated with antibody formation to TNF-α blocking agents.",
"title": ""
},
{
"docid": "26aecc52cd3e4eaec05011333a9a7814",
"text": "This paper introduces the concept of letting an RDBMS Optimizer optimize its own environment. In our project, we have used the DB2 Optimizer to tackle the index selection problem, a variation of the knapack problem. This paper will discuss our implementation of index recommendation, the user interface, and provide measurements on the quality of the recommended indexes.",
"title": ""
},
{
"docid": "3c8861fabd462232c6c22a6dec2bda72",
"text": "Transductive classification (TC) using a small labeled data to help classifying all the unlabeled data in information networks. It is an important data mining task on information networks. Various classification methods have been proposed for this task. However, most of these methods are proposed for homogeneous networks but not for heterogeneous ones, which include multi-typed objects and relations and may contain more useful semantic information. In this paper, we firstly use the concept of meta path to represent the different relation paths in heterogeneous networks and propose a novel meta path selection model. Then we extend the transductive classification problem to heterogeneous information networks and propose a novel algorithm, named HetPathMine. The experimental results show that: (1) HetPathMine can get higher accuracy than the existing transductive classification methods and (2) the weight obtained by HetPathMine for each meta path is consistent with human intuition or real-world situations.",
"title": ""
},
{
"docid": "406b1d13ecc9c9097079c8a24c15a332",
"text": "We propose an automated breast cancer triage CAD system using machine vision on low-cost, portable ultrasound imaging devices. We demonstrate that the triage CAD software can effectively analyze images captured by minimally-trained operators and output one of three assessments - benign, probably benign (6-month follow-up recommended) and suspicious (biopsy recommended). This system opens up the possibility of offering practical, cost-effective breast cancer diagnosis for symptomatic women in economically developing countries.",
"title": ""
},
{
"docid": "464b66e2e643096bd344bea8026f4780",
"text": "In this paper we describe an application of our approach to temporal text mining in Competitive Intelligence for the biotechnology and pharmaceutical industry. The main objective is to identify changes and trends of associations among entities of interest that appear in text over time. Text Mining (TM) exploits information contained in textual data in various ways, including the type of analyses that are typically performed in Data Mining [17]. Information Extraction (IE) facilitates the semi-automatic creation of metadata repositories from text. Temporal Text mining combines Information Extraction and Data Mining techniques upon textual repositories and incorporates time and ontologies‟ issues. It consists of three main phases; the Information Extraction phase, the ontology driven generalisation of templates and the discovery of associations over time. Treatment of the temporal dimension is essential to our approach since it influences both the annotation part (IE) of the system as well as the mining part.",
"title": ""
},
{
"docid": "4017461db56ebe986c3cdf9eec11826a",
"text": "Software Defined Networking (SDN) is a promising paradigm to provide centralized traffic control. Multimedia traffic control based on SDN is crucial but challenging for Quality of Experience (QoE) optimization. It is very difficult to model and control multimedia traffic because solutions mainly depend on an understanding of the network environment, which is complicated and dynamic. Inspired by the recent advances in artificial intelligence (AI) technologies, we study the adaptive multimedia traffic control mechanism leveraging Deep Reinforcement Learning (DRL). This paradigm combines deep learning with reinforcement learning, which learns solely from rewards by trial-and-error. Results demonstrate that the proposed mechanism is able to control multimedia traffic directly from experience without referring to a mathematical model.",
"title": ""
},
{
"docid": "2da7166b9ec1ca7da168ac4fc5f056e6",
"text": "Can an algorithm create original and compelling fashion designs to serve as an inspirational assistant? To help answer this question, we design and investigate different image generation models associated with different loss functions to boost creativity in fashion generation. The dimensions of our explorations include: (i) different Generative Adversarial Networks architectures that start from noise vectors to generate fashion items, (ii) novel loss functions that encourage creativity, inspired from Sharma-Mittal divergence, a generalized mutual information measure for the widely used relative entropies such as Kullback-Leibler, and (iii) a generation process following the key elements of fashion design (disentangling shape and texture components). A key challenge of this study is the evaluation of generated designs and the retrieval of best ones, hence we put together an evaluation protocol associating automatic metrics and human experimental studies that we hope will help ease future research. We show that our proposed creativity losses yield better overall appreciation than the one employed in Creative Adversarial Networks. In the end, about 61% of our images are thought to be created by human designers rather than by a computer while also being considered original per our human subject experiments, and our proposed loss scores the highest compared to existing losses in both novelty and likability. Figure 1: Training generative adversarial models with appropriate losses leads to realistic and creative 512× 512 fashion images.",
"title": ""
},
{
"docid": "705dbe0e0564b1937da71f33d17164b8",
"text": "0191-8869/$ see front matter 2011 Elsevier Ltd. A doi:10.1016/j.paid.2011.11.011 ⇑ Tel.: +1 309 298 1622; fax: +1 309 298 2369. E-mail address: cj-carpenter2@wiu.edu A survey (N = 292) was conducted that measured self-promoting Facebook behaviors (e.g. posting status updates and photos of oneself, updating profile information) and several anti-social behaviors (e.g. seeking social support more than one provides it, getting angry when people do not comment on one’s status updates, retaliating against negative comments). The grandiose exhibitionism subscale of the narcissistic personality inventory was hypothesized to predict the self-promoting behaviors. The entitlement/exploitativeness subscale was hypothesized to predict the anti-social behaviors. Results were largely consistent with the hypothesis for the self-promoting behaviors but mixed concerning the anti-social behaviors. Trait self-esteem was also related in the opposite manner as the Narcissism scales to some Facebook behaviors. 2011 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "e56efa06a1af42ab3c14754ea70e1f1d",
"text": "The wide diffusion of mobile devices has motivated research towards optimizing energy consumption of software systems— including apps—targeting such devices. Besides efforts aimed at dealing with various kinds of energy bugs, the adoption of Organic Light-Emitting Diode (OLED) screens has motivated research towards reducing energy consumption by choosing an appropriate color palette. Whilst past research in this area aimed at optimizing energy while keeping an acceptable level of contrast, this paper proposes an approach, named GEMMA (Gui Energy Multi-objective optiMization for Android apps), for generating color palettes using a multi- objective optimization technique, which produces color solutions optimizing energy consumption and contrast while using consistent colors with respect to the original color palette. An empirical evaluation that we performed on 25 Android apps demonstrates not only significant improvements in terms of the three different objectives, but also confirmed that in most cases users still perceived the choices of colors as attractive. Finally, for several apps we interviewed the original developers, who in some cases expressed the intent to adopt the proposed choice of color palette, whereas in other cases pointed out directions for future improvements",
"title": ""
},
{
"docid": "fb0c1c324be810386a25cec8eae6f37c",
"text": "vector distribution, a new four-step search (4SS) algorithm with center-biased checking point pattern for fast block motion estimation is proposed in this paper. Halfway-stop technique is employed in the new algorithm with searching steps of 2 to 4 and the total number of checking points is varied from 17 to 27. Simulation results show that the proposed 4SS performs better than the well-known three-step search and has similar performance to the new three-step search (N3SS) in terms of motion compensation errors. In addition, the 4SS also reduces the worst-case computational requirement from 33 to 27 search points and the average computational requirement from 21 to 19 search points as compared with N3SS. _______________________________________ This paper was published in IEEE Trans. Circuits Syst. Video Technol., vol. 6, No. 3, pp. 313-317, Jun. 1996. The authors are with the CityU Image Processing Lab, Department of Electronic Engineering, City University of Hong Kong, Tat Chee Avenue, Kowloon, Hong Kong. Email: elmpo@cityu.edu.hk",
"title": ""
},
{
"docid": "bd53dea475e4ddecf40ebf31a225f0c2",
"text": "Business process management is multidimensional tool which utilizes several methods to examine processes from a holistic perspective, transcending the narrow borders of specific functions. It undertakes fundamental reconsideration and radical redesign of organizational processes in order to achieve drastic improvement of current performance in terms of cost, service and speed. Business process management tries to encourage a radical change rather than an incremental change. An analytical approach has been applied for the current study. For this study, the case of Bank X, which is a leading public sector bank operating in the state, has been taken into consideration. A sample of 250 customers was selected randomly from Alwar, Dausa and Bharatpur districts. For policy framework, corporate headquarters were consulted. For the research a self-designed survey instrument, looking for information from the customers on several parameters like cost, quality, services and performance, was used. This article tries to take a critical account of existent business process management in Bank X and to study the relationship between business process management and organizational performance. The data has been tested by correlation analysis. The findings of the study show that business process management exists in the Bank X and there is a significant relationship between business process management and organizational performance. Keywords-Business Process Management; Business Process Reengineering; Organizational Performance",
"title": ""
},
{
"docid": "a08ae7da309e4f34308fa627b231cdea",
"text": "The rapid development of social networks makes it easy for people to communicate online. However, social networks always suffer from social spammers due to their openness. Spammers deliver information for economic purposes, and they pose threats to the security of social networks. To maintain the long-term running of online social networks, many detection methods are proposed. But current methods normally use high dimension features with supervised learning algorithms to find spammers, resulting in low detection performance. To solve this problem, in this paper, we first apply the Laplacian score method, which is an unsupervised feature selection method, to obtain useful features. Based on the selected features, the semi-supervised ensemble learning is then used to train the detection model. Experimental results on the Twitter dataset show the efficiency of our approach after feature selection. Moreover, the proposed method remains high detection performance in the face of limited labeled data.",
"title": ""
},
{
"docid": "229d132b2662a2e6c00669c6cdf7aaf0",
"text": "Retrieval effectiveness has been traditionally pursued by improving the ranking models and by enriching the pieces of evidence about the information need beyond the original query. A successful method for producing improved rankings consists in expanding the original query. Pseudo-relevance feedback (PRF) has proved to be an effective method for this task in the absence of explicit user's judgements about the initial ranking. This family of techniques obtains expansion terms using the top retrieved documents yielded by the original query. PRF techniques usually exploit the relationship between terms and documents or terms and queries. In this paper, we explore the use of linear methods for pseudo-relevance feedback. We present a novel formulation of the PRF task as a matrix decomposition problem which we called LiMe. This factorisation involves the computation of an inter-term similarity matrix which is used for expanding the original query. We use linear least squares regression with regularisation to solve the proposed decomposition with non-negativity constraints. We compare LiMe on five datasets against strong state-of-the-art baselines for PRF showing that our novel proposal achieves improvements in terms of MAP, nDCG and robustness index.",
"title": ""
},
{
"docid": "14d9343bbe4ad2dd4c2c27cb5d6795cd",
"text": "In the paper a method of translation applied in a new system TGT is discussed. TGT translates texts written in Polish into corresponding utterances in the Polish sign language. Discussion is focused on text-into-text translation phase. Proper translation is done on the level of a predicative representation of the sentence. The representation is built on the basis of syntactic graph that depicts the composition and mutual connections of syntactic groups, which exist in the sentence and are identified at the syntactic analysis stage. An essential element of translation process is complementing the initial predicative graph with nodes, which correspond to lacking sentence members. The method acts for primitive sentences as well as for compound ones, with some limitations, however. A translation example is given which illustrates main transformations done on the linguistic level. It is complemented by samples of images generated by the animating part of the system.",
"title": ""
},
{
"docid": "deda12e60ddba97be009ce1f24feba7e",
"text": "It is important for many different applications such as government and business intelligence to analyze and explore the diffusion of public opinions on social media. However, the rapid propagation and great diversity of public opinions on social media pose great challenges to effective analysis of opinion diffusion. In this paper, we introduce a visual analysis system called OpinionFlow to empower analysts to detect opinion propagation patterns and glean insights. Inspired by the information diffusion model and the theory of selective exposure, we develop an opinion diffusion model to approximate opinion propagation among Twitter users. Accordingly, we design an opinion flow visualization that combines a Sankey graph with a tailored density map in one view to visually convey diffusion of opinions among many users. A stacked tree is used to allow analysts to select topics of interest at different levels. The stacked tree is synchronized with the opinion flow visualization to help users examine and compare diffusion patterns across topics. Experiments and case studies on Twitter data demonstrate the effectiveness and usability of OpinionFlow.",
"title": ""
},
{
"docid": "da296c4266c241b3e8d330f5c654439f",
"text": "Robotic process automation or intelligent automation (the combination of artificial intelligence and automation) is starting to change the way business is done in nearly every sector of the economy. Intelligent automation systems detect and produce vast amounts of information and can automate entire processes or workflows, learning and adapting as they go. Applications range from the routine to the revolutionary: from collecting, analysing, and making decisions about textual information to guiding autonomous vehicles and advanced robots. It is already helping companies transcend conventional performance trade-offs to achieve unprecedented levels of efficiency and quality. Until recently, robotics has found most of its applications in the primary sector, automating and removing the human element from the production chain. Replacing menial tasks was its first foray, and many organisations introduced robotics into their assembly line, warehouse, and cargo bay operations.",
"title": ""
},
{
"docid": "44c66a2654fdc7ab72dabaa8e31f0e99",
"text": "The availability of new generation multispectral sensors of the Landsat 8 and Sentinel-2 satellite platforms offers unprecedented opportunities for long-term high-frequency monitoring applications. The present letter aims at highlighting some potentials and challenges deriving from the spectral and spatial characteristics of the two instruments. Some comparisons between corresponding bands and band combinations were performed on the basis of different datasets: the first consists of a set of simulated images derived from a hyperspectral Hyperion image, the other five consist instead of pairs of real images (Landsat 8 and Sentinel-2A) acquired on the same date, over five areas. Results point out that in most cases the two sensors can be well combined; however, some issues arise regarding near-infrared bands when Sentinel-2 data are combined with both Landsat 8 and older Landsat images.",
"title": ""
}
] |
scidocsrr
|
6b0914a5e35da6d821753f2e7f3fa3cc
|
Constructing Unrestricted Adversarial Examples with Generative Models
|
[
{
"docid": "17611b0521b69ad2b22eeadc10d6d793",
"text": "Neural networks provide state-of-the-art results for most machine learning tasks. Unfortunately, neural networks are vulnerable to adversarial examples: given an input x and any target classification t, it is possible to find a new input x' that is similar to x but classified as t. This makes it difficult to apply neural networks in security-critical areas. Defensive distillation is a recently proposed approach that can take an arbitrary neural network, and increase its robustness, reducing the success rate of current attacks' ability to find adversarial examples from 95% to 0.5%.In this paper, we demonstrate that defensive distillation does not significantly increase the robustness of neural networks by introducing three new attack algorithms that are successful on both distilled and undistilled neural networks with 100% probability. Our attacks are tailored to three distance metrics used previously in the literature, and when compared to previous adversarial example generation algorithms, our attacks are often much more effective (and never worse). Furthermore, we propose using high-confidence adversarial examples in a simple transferability test we show can also be used to break defensive distillation. We hope our attacks will be used as a benchmark in future defense attempts to create neural networks that resist adversarial examples.",
"title": ""
},
{
"docid": "3a7f3e75a5d534f6475c40204ba2403f",
"text": "In recent years, deep neural network approaches have been widely adopted for machine learning tasks, including classification. However, they were shown to be vulnerable to adversarial perturbations: carefully crafted small perturbations can cause misclassification of legitimate images. We propose Defense-GAN, a new framework leveraging the expressive capability of generative models to defend deep neural networks against such attacks. Defense-GAN is trained to model the distribution of unperturbed images and, at inference time, finds a close output to a given image. This output will not contain the adversarial changes and is fed to the classifier. Our proposed method can be used with any classification model and does not modify the classifier structure or training procedure. It can also be used as a defense against any attack as it does not assume knowledge of the process for generating the adversarial examples. We empirically show that Defense-GAN is consistently effective against different attack methods and improves on existing defense strategies.",
"title": ""
},
{
"docid": "88a1549275846a4fab93f5727b19e740",
"text": "State-of-the-art deep neural networks have achieved impressive results on many image classification tasks. However, these same architectures have been shown to be unstable to small, well sought, perturbations of the images. Despite the importance of this phenomenon, no effective methods have been proposed to accurately compute the robustness of state-of-the-art deep classifiers to such perturbations on large-scale datasets. In this paper, we fill this gap and propose the DeepFool algorithm to efficiently compute perturbations that fool deep networks, and thus reliably quantify the robustness of these classifiers. Extensive experimental results show that our approach outperforms recent methods in the task of computing adversarial perturbations and making classifiers more robust.",
"title": ""
}
] |
[
{
"docid": "6042afa9c75aae47de19b80ece21932c",
"text": "In this paper, a fault diagnostic system in a multilevel-inverter using a neural network is developed. It is difficult to diagnose a multilevel-inverter drive (MLID) system using a mathematical model because MLID systems consist of many switching devices and their system complexity has a nonlinear factor. Therefore, a neural network classification is applied to the fault diagnosis of a MLID system. Five multilayer perceptron (MLP) networks are used to identify the type and location of occurring faults from inverter output voltage measurement. The neural network design process is clearly described. The classification performance of the proposed network between normal and abnormal condition is about 90%, and the classification performance among fault features is about 85%. Thus, by utilizing the proposed neural network fault diagnostic system, a better understanding about fault behaviors, diagnostics, and detections of a multilevel inverter drive system can be accomplished. The results of this analysis are identified in percentage tabular form of faults and switch locations",
"title": ""
},
{
"docid": "96d8971bf4a8d18f4471019796348e1b",
"text": "Most wired active electrodes reported so far have a gain of one and require at least three wires. This leads to stiff cables, large connectors and additional noise for the amplifier. The theoretical advantages of amplifying the signal on the electrodes right from the source has often been described, however, rarely implemented. This is because a difference in the gain of the electrodes due to component tolerances strongly limits the achievable common mode rejection ratio (CMRR). In this paper, we introduce an amplifier for bioelectric events where the major part of the amplification (40 dB) is achieved on the electrodes to minimize pick-up noise. The electrodes require only two wires of which one can be used for shielding, thus enabling smaller connecters and smoother cables. Saturation of the electrodes is prevented by a dc-offset cancelation scheme with an active range of /spl plusmn/250 mV. This error feedback simultaneously allows to measure the low frequency components down to dc. This enables the measurement of slow varying signals, e.g., the change of alertness or the depolarization before an epileptic seizure normally not visible in a standard electroencephalogram (EEG). The amplifier stage provides the necessary supply current for the electrodes and generates the error signal for the feedback loop. The amplifier generates a pseudodifferential signal where the amplified bioelectric event is present on one lead, but the common mode signal is present on both leads. Based on the pseudodifferential signal we were able to develop a new method to compensate for a difference in the gain of the active electrodes which is purely software based. The amplifier system is then characterized and the input referred noise as well as the CMRR are measured. For the prototype circuit the CMRR evaluated to 78 dB (without the driven-right-leg circuit). The applicability of the system is further demonstrated by the recording of an ECG.",
"title": ""
},
{
"docid": "ae1109343879d05eaa4b524e4f5d92f3",
"text": "Implantable devices, often dependent on software, save countless lives. But how secure are they?",
"title": ""
},
{
"docid": "06731beb8a4563ed89338b4cba88d1df",
"text": "It has been almost five years since the ISO adopted a standard for measurement of image resolution of digital still cameras using slanted-edge gradient analysis. The method has also been applied to the spatial frequency response and MTF of film and print scanners, and CRT displays. Each of these applications presents challenges to the use of the method. Previously, we have described causes of both bias and variation error in terms of the various signal processing steps involved. This analysis, when combined with observations from practical systems testing, has suggested improvements and interpretation of results. Specifically, refinements in data screening for signal encoding problems, edge feature location and slope estimation, and noise resilience will be addressed.",
"title": ""
},
{
"docid": "f7276b8fee4bc0633348ce64594817b2",
"text": "Meta-modelling is at the core of Model-Driven Engineering, where it is used for language engineering and domain modelling. The OMG’s Meta-Object Facility is the standard framework for building and instantiating meta-models. However, in the last few years, several researchers have identified limitations and rigidities in such scheme, most notably concerning the consideration of only two meta-modelling levels at the same time. In this paper we present MetaDepth, a novel framework that supports a dual linguistic/ontological instantiation and permits building systems with an arbitrary number of meta-levels through deep meta-modelling. The framework implements advanced modelling concepts allowing the specification and evaluation of derived attributes and constraints across multiple meta-levels, linguistic extensions of ontological instance models, transactions, and hosting different constraint and action languages.",
"title": ""
},
{
"docid": "8ffaf2a272bc7e52baf3443e9fcd136d",
"text": "Maturity models have become a common tool for organisations to assess their capabilities in a variety of domains. However, for fields that have not yet been researched thoroughly, it can be difficult to create and evolve a maturity model that features all the important aspects in that field. It takes time and many iterative improvements for a maturity model to come of age. This is the case for Green ICT maturity models, whose aim is typically to either provide insight on the important aspects an organisation or a researcher should take into account when trying to improve the social or environmental impact of ICT, or to assist in the auditing of such aspects. In fact, when we were commissioned a comprehensive ICT-sustainability auditing for Utrecht University, we not only faced the need of selecting a Green ICT maturity model, but also to ensure that it covered as many organisational aspects as possible, extending the model if needed. This paper reports on the comparison we carried out of several Green ICT maturity models, how we extended our preferred model with needed constructs, and how we applied the resulting model during the ICT-sustainability auditing.",
"title": ""
},
{
"docid": "6834abfb692dbfe6d629f4153a873d85",
"text": "Wikidata is a free and open knowledge base from the Wikimedia Foundation, that not only acts as a central storage of structured data for other projects of the organization, but also for a growing array of information systems, including search engines. Like Wikipedia, Wikidata’s content can be created and edited by anyone; which is the main source of its strength, but also allows for malicious users to vandalize it, risking the spreading of misinformation through all the systems that rely on it as a source of structured facts. Our task at the WSDM Cup 2017 was to come up with a fast and reliable prediction system that narrows down suspicious edits for human revision [8]. Elaborating on previous works by Heindorf et al. we were able to outperform all other contestants, while incorporating new interesting features, unifying the programming language used to only Python and refactoring the feature extractor into a simpler and more compact code base.",
"title": ""
},
{
"docid": "5ecde325c3d01dc62bc179bc21fc8a0d",
"text": "Rapid access to situation-sensitive data through social media networks creates new opportunities to address a number of real-world problems. Damage assessment during disasters is a core situational awareness task for many humanitarian organizations that traditionally takes weeks and months. In this work, we analyze images posted on social media platforms during natural disasters to determine the level of damage caused by the disasters. We employ state-of-the-art machine learning techniques to perform an extensive experimentation of damage assessment using images from four major natural disasters. We show that the domain-specific fine-tuning of deep Convolutional Neural Networks (CNN) outperforms other state-of-the-art techniques such as Bag-of-Visual-Words (BoVW). High classification accuracy under both event-specific and cross-event test settings demonstrate that the proposed approach can effectively adapt deep-CNN features to identify the severity of destruction from social media images taken after a disaster strikes.",
"title": ""
},
{
"docid": "d78c6c3ed642e04263de583f2bbebcf8",
"text": "This letter presents an omnidirectional horizontally polarized planar printed loop-antenna using left-handed CL loading with 50-Omega input impedance. The antenna has a one wavelength circumference and gives an omnidirectional pattern in the plane of the loop, whilst working in an n = 0 mode. In contrast, a conventional right-handed loop, with the same dimensions, has a figure of eight pattern in the plane of the loop. The antenna is compared with other right-handed periodically loading loop antennas and shown to have the best efficiency and is much easier to match. Design details and simulated results are presented. The concept significantly extends the design degrees of freedom for loop antennas.",
"title": ""
},
{
"docid": "6d21d5da7cd3bf0a52b57307831f08d2",
"text": "This paper presents a broadband, dual-polarized base station antenna element by reasonably designing two orthogonal symmetrical dipole, four loading cylinders, balun, feed patches, specific shape reflector and plastic fasteners. Coupling feed is adopted to avoid the direct connection between the feed cables and the dipoles. The antenna element matches well in the frequency range of 1.7-2.7 GHz and the return loss (RL) S11<;-15 dB and the isolation S21<; -30 dB. Low cross-polarization, high front-back ratio (>25dB) and stable half-power beam width (HPBW) with 65±5° are also achieved. The proposed antenna element covers the whole long term evolution (LTE) band and is backward compatible with 3G and 2G bands.",
"title": ""
},
{
"docid": "fc9193f15f6e96043271302be917f2c7",
"text": "In this article we introduce the main notions of our core ontology for the robotics and automation field, one of first results of the newly formed IEEE-RAS Working Group, named Ontologies for Robotics and Automation. It aims to provide a common ground for further ontology development in Robotics and Automation. Furthermore, we will discuss the main core ontology definitions as well as the ontology development process employed.",
"title": ""
},
{
"docid": "621d66aeff489c65eb9877270cb86b5f",
"text": "Electronic customer relationship management (e-CRM) emerges from the Internet and Web technology to facilitate the implementation of CRM. It focuses on Internet- or Web-based interaction between companies and their customers. Above all, e-CRM enables service sectors to provide appropriate services and products to satisfy the customers so as to retain customer royalty and enhance customer profitability. This research is to explore the key research issues about e-CRM performance influence for service sectors in Taiwan. A research model is proposed based on the widely applied technology-organization-environment (TOE) framework. Survey data from the questionnaire are collected to empirically assess our research model.",
"title": ""
},
{
"docid": "ca0f1c0be79d9993ea94f77fd46c0921",
"text": "We have established methods to evaluate key properties that are needed to commercialize polyelectrolyte membranes for fuel cell electric vehicles such as water diffusion, gas permeability, and mechanical strength. These methods are based on coarse-graining models. For calculating water diffusion and gas permeability through the membranes, the dissipative particle dynamics–Monte Carlo approach was applied, while mechanical strength of the hydrated membrane was simulated by coarse-grained molecular dynamics. As a result of our systematic search and analysis, we can now grasp the direction necessary to improve water diffusion, gas permeability, and mechanical strength. For water diffusion, a map that reveals the relationship between many kinds of molecular structures and diffusion constants was obtained, in which the direction to enhance the diffusivity by improving membrane structure can be clearly seen. In order to achieve high mechanical strength, the molecular structure should be such that the hydrated membrane contains narrow water channels, but these might decrease the proton conductivity. Therefore, an optimal design of the polymer structure is needed, and the developed models reviewed here make it possible to optimize these molecular structures.",
"title": ""
},
{
"docid": "57af881dbb159dae0966472473539011",
"text": "We present in this paper our system developed for SemEval 2015 Shared Task 2 (2a English Semantic Textual Similarity, STS, and 2c Interpretable Similarity) and the results of the submitted runs. For the English STS subtask, we used regression models combining a wide array of features including semantic similarity scores obtained from various methods. One of our runs achieved weighted mean correlation score of 0.784 for sentence similarity subtask (i.e., English STS) and was ranked tenth among 74 runs submitted by 29 teams. For the interpretable similarity pilot task, we employed a rule-based approach blended with chunk alignment labeling and scoring based on semantic similarity features. Our system for interpretable text similarity was among the top three best performing systems.",
"title": ""
},
{
"docid": "8f1e3444c073a510df1594dc88d24b6b",
"text": "Purpose – The purpose of this paper is to provide industrial managers with insight into the real-time progress of running processes. The authors formulated a periodic performance prediction algorithm for use in a proposed novel approach to real-time business process monitoring. Design/methodology/approach – In the course of process executions, the final performance is predicted probabilistically based on partial information. Imputation method is used to generate probable progresses of ongoing process and Support Vector Machine classifies the performances of them. These procedures are periodically iterated along with the real-time progress in order to describe the ongoing status. Findings – The proposed approach can describe the ongoing status as the probability that the process will be executed continually and terminated as the identical result. Furthermore, before the actual occurrence, a proactive warning can be provided for implicit notification of eventualities if the probability of occurrence of the given outcome exceeds the threshold. Research limitations/implications – The performance of the proactive warning strategy was evaluated only for accuracy and proactiveness. However, the process will be improved by additionally considering opportunity costs and benefits from actual termination types and their warning errors. Originality/value – Whereas the conventional monitoring approaches only classify the already occurred result of a terminated instance deterministically, the proposed approach predicts the possible results of an ongoing instance probabilistically over entire monitoring periods. As such, the proposed approach can provide the real-time indicator describing the current capability of ongoing process.",
"title": ""
},
{
"docid": "07ff0274408e9ba5d6cd2b1a2cb7cbf8",
"text": "Though tremendous strides have been made in object recognition, one of the remaining open challenges is detecting small objects. We explore three aspects of the problem in the context of finding small faces: the role of scale invariance, image resolution, and contextual reasoning. While most recognition approaches aim to be scale-invariant, the cues for recognizing a 3px tall face are fundamentally different than those for recognizing a 300px tall face. We take a different approach and train separate detectors for different scales. To maintain efficiency, detectors are trained in a multi-task fashion: they make use of features extracted from multiple layers of single (deep) feature hierarchy. While training detectors for large objects is straightforward, the crucial challenge remains training detectors for small objects. We show that context is crucial, and define templates that make use of massively-large receptive fields (where 99% of the template extends beyond the object of interest). Finally, we explore the role of scale in pre-trained deep networks, providing ways to extrapolate networks tuned for limited scales to rather extreme ranges. We demonstrate state-of-the-art results on massively-benchmarked face datasets (FDDB and WIDER FACE). In particular, when compared to prior art on WIDER FACE, our results reduce error by a factor of 2 (our models produce an AP of 82% while prior art ranges from 29-64%).",
"title": ""
},
{
"docid": "79414d5ba6a202bf52d26a74caff4784",
"text": "The Co-Training algorithm uses unlabeled examples in multiple views to bootstrap classifiers in each view, typically in a greedy manner, and operating under assumptions of view-independence and compatibility. In this paper, we propose a Co-Regularization framework where classifiers are learnt in each view through forms of multi-view regularization. We propose algorithms within this framework that are based on optimizing measures of agreement and smoothness over labeled and unlabeled examples. These algorithms naturally extend standard regularization methods like Support Vector Machines (SVM) and Regularized Least squares (RLS) for multi-view semi-supervised learning, and inherit their benefits and applicability to high-dimensional classification problems. An empirical investigation is presented that confirms the promise of this approach.",
"title": ""
},
{
"docid": "63af822cd877b95be976f990b048f90c",
"text": "We propose a method for generating classifier ensembles based on feature extraction. To create the training data for a base classifier, the feature set is randomly split into K subsets (K is a parameter of the algorithm) and principal component analysis (PCA) is applied to each subset. All principal components are retained in order to preserve the variability information in the data. Thus, K axis rotations take place to form the new features for a base classifier. The idea of the rotation approach is to encourage simultaneously individual accuracy and diversity within the ensemble. Diversity is promoted through the feature extraction for each base classifier. Decision trees were chosen here because they are sensitive to rotation of the feature axes, hence the name \"forest\". Accuracy is sought by keeping all principal components and also using the whole data set to train each base classifier. Using WEKA, we examined the rotation forest ensemble on a random selection of 33 benchmark data sets from the UCI repository and compared it with bagging, AdaBoost, and random forest. The results were favorable to rotation forest and prompted an investigation into diversity-accuracy landscape of the ensemble models. Diversity-error diagrams revealed that rotation forest ensembles construct individual classifiers which are more accurate than these in AdaBoost and random forest, and more diverse than these in bagging, sometimes more accurate as well",
"title": ""
},
{
"docid": "3f8e6ebe83ba2d4bf3a1b4ab5044b6e4",
"text": "-This paper discusses the ways in which automation of industrial processes may expand rather than eliminate problems with the human operator. Some comments will be made on methods of alleviating these problems within the \"classic' approach of leaving the operator with responsibility for abnormal conditions, and on the potential for continued use of the human operator for on-line decision-making within human-computer collaboration. Irony: combination of circumstances, the result of which is the direct opposite of what might be expected. Paradox: seemingly absurd though perhaps really well-founded",
"title": ""
},
{
"docid": "eb59f239621dde59a13854c5e6fa9f54",
"text": "This paper presents a novel application of grammatical inference techniques to the synthesis of behavior models of software systems. This synthesis is used for the elicitation of software requirements. This problem is formulated as a deterministic finite-state automaton induction problem from positive and negative scenarios provided by an end-user of the software-to-be. A query-driven state merging algorithm (QSM) is proposed. It extends the RPNI and Blue-Fringe algorithms by allowing membership queries to be submitted to the end-user. State merging operations can be further constrained by some prior domain knowledge formulated as fluents, goals, domain properties, and models of external software components. The incorporation of domain knowledge both reduces the number of queries and guarantees that the induced model is consistent with such knowledge. The proposed techniques are implemented in the ISIS tool and practical evaluations on standard requirements engineering test cases and synthetic data illustrate the interest of this approach. Contact author: Pierre Dupont Department of Computing Science and Engineering (INGI) Université catholique de Louvain Place Sainte Barbe, 2. B-1348 Louvain-la-Neuve Belgium Email: Pierre.Dupont@uclouvain.be Phone: +32 10 47 91 14 Fax: +32 10 45 03 45",
"title": ""
}
] |
scidocsrr
|
dcd2794a2da057607f4f2764a20b2096
|
Automating High-Precision X-Ray and Neutron Imaging Applications With Robotics
|
[
{
"docid": "076c176608ea457ad091dec2c672e274",
"text": "On March 11, 2011, a massive earthquake (Magnitude 9.0) and accompanying tsunami hit the Tohoku region of eastern Japan. Since then, the Fukushima Daiichi Nuclear Power Plants have been facing a crisis because of loss of all power resulting in meltdown accidents. Three buildings housing nuclear reactors were seriously damaged from hydrogen explosions and, in one building, the nuclear reactions were also out of control. The situation was too ∗Webpage: http://www.astro.mech.tohoku.ac.jp/ 1 dangerous for humans to enter the buildings to inspect the damage as radioactive materials were also being released. In response to this crisis, it was decided that mobile rescue robots would be used to carry out surveillance missions. The mobile rescue robots needed could not be delivered to Tokyo Electric Power Company (TEPCO) until after resolving various technical issues. These issues included hardware reliability, communication functions, and radiation hardness of its electronic components. Additional sensors and functionality that would enable the robots to respond effectively to the crisis were also needed. Available robots were therefore retrofitted for the disaster reponse missions. First, the radiation tolerance of the electronic componenets were checked by means of gamma ray irradiation tests, conducted using the facilities of the Japan Atomic Energy Agency (JAEA). The commercial electronic devices used in the original robot systems worked long enough (more than 100 h at a 10% safety margin) in the assumed environment (100 mGy/h). Next, the usability of wireless communication in the target environment was assessed. Such tests were not possible in the target environment itself, so they were performed in the Hamaoka Daiichi Nuclear Power Plants, which is similar to the target environment. As previously predicted, the test results indicated that robust wireless communication would not be possible in the reactor buildings. It was therefore determined that a wired communication device would need to be installed. After TEPCO’s official urgent mission proposal was received, the team mounted additional devices to facilitate the installation of a water gauge in the basement of the reactor buildings to determine flooding levels. While these preparations were taking place, prospective robot operators from TEPCO trained in a laboratory environment. Finally, one of the robots was delivered to the Fukushima Daiichi Nuclear Power Plants on June 20, 2011, where it performed a number of important missions inside the buildings. In this paper, the requirements used for the exploration mission in the Fukushima Daiichi Nuclear Power Plants are presented, the implementation is discussed, and the results of the mission are reported.",
"title": ""
}
] |
[
{
"docid": "67bbd10e1ed9201fb589e16c58ae76ce",
"text": "Author name disambiguation has been one of the hardest problems faced by digital libraries since their early days. Historically, supervised solutions have empirically outperformed those based on heuristics, but with the burden of having to rely on manually labeled training sets for the learning process. Moreover, most supervised solutions just apply some type of generic machine learning solution and do not exploit specific knowledge about the problem. In this article, we follow a similar reasoning, but in the opposite direction. Instead of extending an existing supervised solution, we propose a set of carefully designed heuristics and similarity functions, and apply supervision only to optimize such parameters for each particular dataset. As our experiments show, the result is a very effective, efficient and practical author name disambiguation method that can be used in many different scenarios. In fact, we show that our method can beat state-of-the-art supervised methods in terms of effectiveness in many situations while being orders of magnitude faster. It can also run without any training information, using only default parameters, and still be very competitive when compared to these supervised methods (beating several of them) and better than most existing unsupervised author name disambiguation solutions.",
"title": ""
},
{
"docid": "2ba35cf1bea1794b060f3d89ac78dd24",
"text": "A Recurrent Neural Network (RNN) is a powerful connectionist model that can be applied to many challenging sequential problems, including problems that naturally arise in language and speech. However, RNNs are extremely hard to train on problems that have long-term dependencies, where it is necessary to remember events for many timesteps before using them to make a prediction. In this paper we consider the problem of training RNNs to predict sequences that exhibit significant long-term dependencies, focusing on a serial recall task where the RNN needs to remember a sequence of characters for a large number of steps before reconstructing it. We introduce the Temporal-Kernel Recurrent Neural Network (TKRNN), which is a variant of the RNN that can cope with long-term dependencies much more easily than a standard RNN, and show that the TKRNN develops short-term memory that successfully solves the serial recall task by representing the input string with a stable state of its hidden units.",
"title": ""
},
{
"docid": "68257960bdbc6c4f326108ee7ba3e756",
"text": "In computer vision pixelwise dense prediction is the task of predicting a label for each pixel in the image. Convolutional neural networks achieve good performance on this task, while being computationally efficient. In this paper we carry these ideas over to the problem of assigning a sequence of labels to a set of speech frames, a task commonly known as framewise classification. We show that dense prediction view of framewise classification offers several advantages and insights, including computational efficiency and the ability to apply batch normalization. When doing dense prediction we pay specific attention to strided pooling in time and introduce an asymmetric dilated convolution, called time-dilated convolution, that allows for efficient and elegant implementation of pooling in time. We show that by using time-dilated convolutions with a very deep VGG-style CNN with batch normalization, we achieve best published single model accuracy result on the switchboard-2000 benchmark dataset.",
"title": ""
},
{
"docid": "0cf1c430d24a93f5d4da9200fbda41d4",
"text": "For some time I have been involved in efforts to develop computer-controlled systems for instruction. One such effort has been a computer-assistedinstruction (CAI) program for teaching reading in the primary grades (Atkinson, 1974) and another for teaching computer science at the college level (Atkinson, in press). The goal has been to use psychological theory to devise optimal instructional procedures—procedures that make moment-by-moment decisions based on the student's unique response history. To help guide some of the theoretical aspects of this work, research has also been done on the restricted but well-defined problem of optimizing the teaching of a foreign language vocabulary. This is an area in which mathematical models provide an accurate description of learning, and these models can be used in conjunction with the methods of control theory to develop precise algorithms for sequencing instruction among vocabulary items. Some of this work has been published, and those who have read about it know that the optimization schemes are quite effective—far more effective than procedures that permit the learner to make his own instructional decisions (Atkinson, 1972a, 1972b; Atkinson & Paulson, 1972). In conducting these vocabulary learning experiments, I have been struck by the incredible variability in learning rates across subjects. Even Stanford University students, who are a fairly select sample, display impressively large betweensubject differences. These differences may reflect differences in fundamental abilities, but it is easy to demonstrate that they also depend on the strategies that subjects bring to bear on the task. Good learners can introspect with ease about a \"bag of tricks\" for learning vocabulary items, whereas poor",
"title": ""
},
{
"docid": "7462f38fa4f99595bdb04a4519f7d9e9",
"text": "The use of Unmanned Aerial Vehicles (UAV) has been increasing over the last few years in many sorts of applications due mainly to the decreasing cost of this technology. One can see the use of the UAV in several civilian applications such as surveillance and search and rescue. Automatic detection of pedestrians in aerial images is a challenging task. The computing vision system must deal with many sources of variability in the aerial images captured with the UAV, e.g., low-resolution images of pedestrians, images captured at distinct angles due to the degrees of freedom that a UAV can move, the camera platform possibly experiencing some instability while the UAV flies, among others. In this work, we created and evaluated different implementations of Pattern Recognition Systems (PRS) aiming at the automatic detection of pedestrians in aerial images captured with multirotor UAV. The main goal is to assess the feasibility and suitability of distinct PRS implementations running on top of low-cost computing platforms, e.g., single-board computers such as the Raspberry Pi or regular laptops without a GPU. For that, we used four machine learning techniques in the feature extraction and classification steps, namely Haar cascade, LBP cascade, HOG + SVM and Convolutional Neural Networks (CNN). In order to improve the system performance (especially the processing time) and also to decrease the rate of false alarms, we applied the Saliency Map (SM) and Thermal Image Processing (TIP) within the segmentation and detection steps of the PRS. The classification results show the CNN to be the best technique with 99.7% accuracy, followed by HOG + SVM with 92.3%. In situations of partial occlusion, the CNN showed 71.1% sensitivity, which can be considered a good result in comparison with the current state-of-the-art, since part of the original image data is missing. As demonstrated in the experiments, by combining TIP with CNN, the PRS can process more than two frames per second (fps), whereas the PRS that combines TIP with HOG + SVM was able to process 100 fps. It is important to mention that our experiments show that a trade-off analysis must be performed during the design of a pedestrian detection PRS. The faster implementations lead to a decrease in the PRS accuracy. For instance, by using HOG + SVM with TIP, the PRS presented the best performance results, but the obtained accuracy was 35 percentage points lower than the CNN. The obtained results indicate that the best detection technique (i.e., the CNN) requires more computational resources to decrease the PRS computation time. Therefore, this work shows and discusses the pros/cons of each technique and trade-off situations, and hence, one can use such an analysis to improve and tailor the design of a PRS to detect pedestrians in aerial images.",
"title": ""
},
{
"docid": "484ddecc4ebcf33da0c3655034e47e37",
"text": "Determining the optimal thresholding for image segmentation has got more attention in recent years since it has many applications. There are several methods used to find the optimal thresholding values such as Otsu and Kapur based methods. These methods are suitable for bi-level thresholding case and they can be easily extended to the multilevel case, however, the process of determining the optimal thresholds in the case of multilevel thresholding is time-consuming. To avoid this problem, this paper examines the ability of two nature inspired algorithms namely: Whale Optimization Algorithm (WOA) and Moth-Flame Optimization (MFO) to determine the optimal multilevel thresholding for image segmentation. The MFO algorithm is inspired from the natural behavior of moths which have a special navigation style at night since they fly using the moonlight, whereas, the WOA algorithm emulates the natural cooperative behaviors of whales. The candidate solutions in the adapted algorithms were created using the image histogram, and then they were updated based on the characteristics of each algorithm. The solutions are assessed using the Otsu’s fitness function during the optimization operation. The performance of the proposed algorithms has been evaluated using several of benchmark images and has been compared with five different swarm algorithms. The results have been analyzed based on the best fitness values, PSNR, and SSIM measures, as well as time complexity and the ANOVA test. The experimental results showed that the proposed methods outperformed the other swarm algorithms; in addition, the MFO showed better results than WOA, as well as provided a good balance between exploration and exploitation in all images at small and high threshold numbers. © 2017 Elsevier Ltd. All rights reserved. r t e p m a h o r b t K",
"title": ""
},
{
"docid": "1b4e407523ad094f8a238045b89d1baa",
"text": "A modified multilevel inverter (MLI) structure has been presented for Photovoltaic (PV) fed 2.3 kV micro-grid applications. To feed the cascaded multilevel micro-grid connected inverter (CM-MGCI) voltage, a PV array is designed considering the environmental effects of Bangladesh. We have estimated the performance of the classical and a modified cascaded H-bridge (CHB) MLI with level-shifted carrier sinusoidal pulse width modulation (LSC-SPWM) technique in MATLAB/Simulink environment. The voltage and total harmonic distortion (THD) profile for the proposed eleven-level modified H-bridge MLI topology has been compared with the conventional CHB topology. The harmonic profile has also been compared with other modified MLI topologies. Compared to the conventional CHB and other recent modified MLI topologies, the proposed H-bridge CM-MGCI shows better THD profile.",
"title": ""
},
{
"docid": "7456c4c524a395db754a11cf9c8ee2bc",
"text": "The object of this study was to monitor the safety and efficacy of long-term use of an oromucosal cannabis-based medicine (CBM) in patients with multiple sclerosis (MS). A total of 137 MS patients with symptoms not controlled satisfactorily using standard drugs entered this open-label trial following a 10-week, placebo-controlled study. Patients were assessed every eight weeks using visual analogue scales and diary scores of main symptoms, and were followed for an average of 434 days (range: 21 -814). A total of 58 patients (42.3%) withdrew due to lack of efficacy (24); adverse events (17); withdrew consent (6); lost to follow-up (3); and other (8). Patients reported 292 unwanted effects, of which 251 (86%) were mild to moderate, including oral pain (28), dizziness (20), diarrhoea (17), nausea (15) and oromucosal disorder (12). Three patients had five 'serious adverse events' between them--two seizures, one fall, one aspiration pneumonia, one gastroenteritis. Four patients had first-ever seizures. The improvements recorded and dosage taken in the acute study remained stable. Planned, sudden interruption of CBM for two weeks in 25 patients (of 62 approached) did not cause a consistent withdrawal syndrome, although 11 (46%) patients reported at least one of--tiredness, interrupted sleep, hot and cold flushes, mood alteration, reduced appetite, emotional lability, intoxication or vivid dreams. Twenty-two (88%) patients re-started CBM treatment. We conclude that long-term use of an oromucosal CBM (Sativex) maintains its effect in those patients who perceive initial benefit. The precise nature and rate of risks with long-term use, especially epilepsy, will require larger and longer-term studies.",
"title": ""
},
{
"docid": "c624b1ab8127ea8cafd217c9c0387a46",
"text": "A long-standing obstacle to progress in deep learning is the problem of vanishing and exploding gradients. Although, the problem has largely been overcome via carefully constructed initializations and batch normalization, architectures incorporating skip-connections such as highway and resnets perform much better than standard feedforward architectures despite wellchosen initialization and batch normalization. In this paper, we identify the shattered gradients problem. Specifically, we show that the correlation between gradients in standard feedforward networks decays exponentially with depth resulting in gradients that resemble white noise whereas, in contrast, the gradients in architectures with skip-connections are far more resistant to shattering, decaying sublinearly. Detailed empirical evidence is presented in support of the analysis, on both fully-connected networks and convnets. Finally, we present a new “looks linear” (LL) initialization that prevents shattering, with preliminary experiments showing the new initialization allows to train very deep networks without the addition of skip-connections.",
"title": ""
},
{
"docid": "7e2b4e6a887d99a58e4ae9d9666d05e0",
"text": "Much has been written on shortest path problems with weight, or resource, constraints. However, relatively little of it has provided systematic computational comparisons for a representative selection of algorithms. Furthermore, there has been almost no work showing numerical performance of scaling algorithms, although worst-case complexity guarantees for these are well known, nor has the effectiveness of simple preprocessing techniques been fully demonstrated. Here, we provide a computational comparison of three scaling techniques and a standard label-setting method. We also describe preprocessing techniques which take full advantage of cost and upper-bound information that can be obtained from simple shortest path information. We show that integrating information obtained in preprocessing within the label-setting method can lead to very substantial improvements in both memory required and run time, in some cases, by orders of magnitude. Finally, we show how the performance of the label-setting method can be further improved by making use of all Lagrange multiplier information collected in a Lagrangean relaxation first step. © 2003 Wiley Periodicals, Inc.",
"title": ""
},
{
"docid": "eb5043aa57e6140bca2722a590eec656",
"text": "The estimation of correspondences between two images resp. point sets is a core problem in computer vision. One way to formulate the problem is graph matching leading to the quadratic assignment problem which is NP-hard. Several so called second order methods have been proposed to solve this problem. In recent years hypergraph matching leading to a third order problem became popular as it allows for better integration of geometric information. For most of these third order algorithms no theoretical guarantees are known. In this paper we propose a general framework for tensor block coordinate ascent methods for hypergraph matching. We propose two algorithms which both come along with the guarantee of monotonic ascent in the matching score on the set of discrete assignment matrices. In the experiments we show that our new algorithms outperform previous work both in terms of achieving better matching scores and matching accuracy. This holds in particular for very challenging settings where one has a high number of outliers and other forms of noise.",
"title": ""
},
{
"docid": "3301a0cf26af8d4d8c7b2b9d56cec292",
"text": "Reading comprehension (RC)—in contrast to information retrieval—requires integrating information and reasoning about events, entities, and their relations across a full document. Question answering is conventionally used to assess RC ability, in both artificial agents and children learning to read. However, existing RC datasets and tasks are dominated by questions that can be solved by selecting answers using superficial information (e.g., local context similarity or global term frequency); they thus fail to test for the essential integrative aspect of RC. To encourage progress on deeper comprehension of language, we present a new dataset and set of tasks in which the reader must answer questions about stories by reading entire books or movie scripts. These tasks are designed so that successfully answering their questions requires understanding the underlying narrative rather than relying on shallow pattern matching or salience. We show that although humans solve the tasks easily, standard RC models struggle on the tasks presented here. We provide an analysis of the dataset and the challenges it presents.",
"title": ""
},
{
"docid": "648a1ff0ad5b2742ff54460555287c84",
"text": "In the European academic and institutional debate, interoperability is predominantly seen as a means to enable public administrations to collaborate within Members State and across borders. The article presents a conceptual framework for ICT-enabled governance and analyses the role of interoperability in this regard. The article makes a specific reference to the exploratory research project carried out by the Information Society Unit of the Institute for Prospective Technological Studies (IPTS) of the European Commission’s Joint Research Centre on emerging ICT-enabled governance models in EU cities (EXPGOV). The aim of this project is to study the interplay between ICTs and governance processes at city level and formulate an interdisciplinary framework to assess the various dynamics emerging from the application of ICT-enabled service innovations in European cities. In this regard, the conceptual framework proposed in this article results from an action research perspective and investigation of e-governance experiences carried out in Europe. It aims to elicit the main value drivers that should orient how interoperable systems are implemented, considering the reciprocal influences that occur between these systems and different governance models in their specific context.",
"title": ""
},
{
"docid": "debf183822616eabc57b95f5e6037d4f",
"text": "A new algorithm is proposed which accelerates the mini-batch k-means algorithm of Sculley (2010) by using the distance bounding approach of Elkan (2003). We argue that, when incorporating distance bounds into a mini-batch algorithm, already used data should preferentially be reused. To this end we propose using nested mini-batches, whereby data in a mini-batch at iteration t is automatically reused at iteration t+ 1. Using nested mini-batches presents two difficulties. The first is that unbalanced use of data can bias estimates, which we resolve by ensuring that each data sample contributes exactly once to centroids. The second is in choosing mini-batch sizes, which we address by balancing premature fine-tuning of centroids with redundancy induced slow-down. Experiments show that the resulting nmbatch algorithm is very effective, often arriving within 1% of the empirical minimum 100× earlier than the standard mini-batch algorithm.",
"title": ""
},
{
"docid": "3eea5fa01ddd5bef75de7d0a4184bd30",
"text": "Monodisperse samples of silver nanocubes were synthesized in large quantities by reducing silver nitrate with ethylene glycol in the presence of poly(vinyl pyrrolidone) (PVP). These cubes were single crystals and were characterized by a slightly truncated shape bounded by [100], [110], and [111] facets. The presence of PVP and its molar ratio (in terms of repeating unit) relative to silver nitrate both played important roles in determining the geometric shape and size of the product. The silver cubes could serve as sacrificial templates to generate single-crystalline nanoboxes of gold: hollow polyhedra bounded by six [100] and eight [111] facets. Controlling the size, shape, and structure of metal nanoparticles is technologically important because of the strong correlation between these parameters and optical, electrical, and catalytic properties.",
"title": ""
},
{
"docid": "704f4681b724a0e4c7c10fd129f3378b",
"text": "We present an asymptotic fully polynomial approximation scheme for strip-packing, or packing rectangles into a rectangle of xed width and minimum height, a classical NP-hard cutting-stock problem. The algorithm nds a packing of n rectangles whose total height is within a factor of (1 +) of optimal (up to an additive term), and has running time polynomial both in n and in 1==. It is based on a reduction to fractional bin-packing. R esum e Nous pr esentons un sch ema totalement polynomial d'approximation pour la mise en boite de rectangles dans une boite de largeur x ee, avec hauteur mi-nimale, qui est un probleme NP-dur classique, de coupes par guillotine. L'al-gorithme donne un placement des rectangles, dont la hauteur est au plus egale a (1 +) (hauteur optimale) et a un temps d'execution polynomial en n et en 1==. Il utilise une reduction au probleme de la mise en boite fractionaire. Abstract We present an asymptotic fully polynomial approximation scheme for strip-packing, or packing rectangles into a rectangle of xed width and minimum height, a classical N P-hard cutting-stock problem. The algorithm nds a packing of n rectangles whose total height is within a factor of (1 +) of optimal (up to an additive term), and has running time polynomial both in n and in 1==. It is based on a reduction to fractional bin-packing.",
"title": ""
},
{
"docid": "2a0b49c9844d4b048688cbabdf6daa18",
"text": "This paper presents the submission of the Linguistics Department of the University of Colorado at Boulder for the 2017 CoNLL-SIGMORPHON Shared Task on Universal Morphological Reinflection. The system is implemented as an RNN Encoder-Decoder. It is specifically geared toward a low-resource setting. To this end, it employs data augmentation for counteracting overfitting and a copy symbol for processing characters unseen in the training data. The system is an ensemble of ten models combined using a weighted voting scheme. It delivers substantial improvement in accuracy compared to a non-neural baseline system in presence of varying amounts of training data.",
"title": ""
},
{
"docid": "dc310f1a5fb33bd3cbe9de95b2a0159c",
"text": "The MYO armband from Thalmic Labs is a complete and wireless motion and muscle sensing platform. This paper evaluates the armband’s sensors and its potential for NIME applications. This is followed by a presentation of the prototype instrument MuMYO. We conclude that, despite some shortcomings, the armband has potential of becoming a new “standard” controller in the NIME community.",
"title": ""
},
{
"docid": "9ff6d7a36646b2f9170bd46d14e25093",
"text": "Psychedelic drugs such as LSD and psilocybin are often claimed to be capable of inducing life-changing experiences described as mystical or transcendental, especially if high doses are taken. The present study examined possible enduring effects of such experiences by comparing users of psychedelic drugs (n = 88), users of nonpsychedelic illegal drugs (e.g., marijuana, amphetamines) (n = 29) and non illicit drug-using social drinkers (n = 66) on questionnaire measures of values, beliefs and emotional empathy. Samples were obtained from Israel (n = 110) and Australia (n = 73) in a cross-cultural comparison to see if values associated with psychedelic drug use transcended culture of origin. Psychedelic users scored significantly higher on mystical beliefs (e.g., oneness with God and the universe) and life values of spirituality and concern for others than the other groups, and lower on the value of financial prosperity, irrespective of culture of origin. Users of nonpsychedelic illegal drugs scored significantly lower on a measure of coping ability than both psychedelic users and non illicit drug users. Both groups of illegal drug users scored significantly higher on empathy than non illicit drug users. Results are discussed in the context of earlier findings from Pahnke (1966) and Doblin (1991) of the transformative effect of psychedelic experiences, although the possibility remains that present findings reflect predrug characteristics of those who chose to take psychedelic drugs rather than effects of the drugs themselves.",
"title": ""
},
{
"docid": "6cb43a0f16b69cad9a7e5c5a528e23f5",
"text": "New substation technology, such as nonconventional instrument transformers, and a need to reduce design and construction costs are driving the adoption of Ethernet-based digital process bus networks for high-voltage substations. Protection and control applications can share a process bus, making more efficient use of the network infrastructure. This paper classifies and defines performance requirements for the protocols used in a process bus on the basis of application. These include Generic Object Oriented Substation Event, Simple Network Management Protocol, and Sampled Values (SVs). A method, based on the Multiple Spanning Tree Protocol (MSTP) and virtual local area networks, is presented that separates management and monitoring traffic from the rest of the process bus. A quantitative investigation of the interaction between various protocols used in a process bus is described. These tests also validate the effectiveness of the MSTP-based traffic segregation method. While this paper focuses on a substation automation network, the results are applicable to other real-time industrial networks that implement multiple protocols. High-volume SV data and time-critical circuit breaker tripping commands do not interact on a full-duplex switched Ethernet network, even under very high network load conditions. This enables an efficient digital network to replace a large number of conventional analog connections between control rooms and high-voltage switchyards.",
"title": ""
}
] |
scidocsrr
|
297cc2c7a7ac361f528114196ea2f1e2
|
Procedural urban environments for FPS games
|
[
{
"docid": "f13ba9090e873f98c941517b7ae0a31b",
"text": "Computational creativity has traditionally relied on well-controlled, single-faceted and established domains such as visual art, narrative and audio. On the other hand, research on autonomous generation methods for game artifacts has not yet considered the creative capacity of those methods. In this paper we position computer games as the ideal application domain for computational creativity for the unique features they offer: being highly interactive, dynamic and content-intensive software applications. Their multifaceted nature is key in our argumentation as the successful orchestration of different art domains (such as visual art, audio and level architecture) with game mechanics design is a grand challenge for the study of computational creativity in this multidisciplinary domain. Computer games not only challenge computational creativity and provide a creative sandbox for advancing the field but they also offer an opportunity for computational creativity methods to be extensively assessed (via a huge population of gamers) through commercial-standard products of high impact and financial value. Games: the Killer App for Computational Creativity More than a decade of research in computational creativity (CC) has explored the study of autonomous generative systems in a plethora of domains including non-photorealistic art (Colton 2012), music (Wiggins et al. 1999), jokes (Binsted and Ritchie 1997), and stories (Peinado and Gervás 2006) as well as mathematics (Colton 2002) and engineering (Gemeinboeck and Saunders 2013). While commercial games have used computer generated artifacts such as levels and visuals since the early 1980s, academic research in more ambitious and rigorous autonomous game artifact generation methods, e.g. search-based procedural content generation (Togelius et al. 2011), is only very recent. Despite notable exceptions (Cook, Colton, and Gow 2013; Zook, Riedl, and Magerko 2011; Smith and Mateas 2011), the creation of games and their content has not yet systematically been explored as a computationally creative process. From a CC perspective, procedural content generation (PCG) in games has been viewed — like mathematics and engineering — as a potentially creative activity but only if done exceptionally well. The intersection of CC, game design and advanced game technology (e.g. PCG) opens up an entirely new field for studying CC as well as a new perspective for game research. This paper argues that the creative capacity of automated game designers is expected to advance the field of computational creativity and lead to major breakthroughs as, due to their very nature, computer games challenge computational creativity methods at large. This position paper contends that games constitute the killer application for the study of CC for a number of reasons. First, computer games are multifaceted: the types of creative processes met in computer games include visual art, sound design, graphic design, interaction design, narrative generation, virtual cinematography, aesthetics and environment beautification. The fusion of the numerous and highly diverse creative domains within a single software application makes games the ideal arena for the study of computational (and human) creativity. It is also important to note that each art form (or facet) met in games elicits different experiences to its users, e.g. 
game rules affect the player’s immersion (Calleja 2011); their fusion into the final software targeting the ultimate play experience for a rather large and diverse audience is an additional challenge for CC research. Second, games are content-intensive processes with open boundaries for creativity as content for each creative facet comes in different representations, under different sets of constraints and often created in massive amounts. Finally, the creation (game) offers a rich interaction with the user (player): a game can be appreciated as an art form or for its creative capacity only when experienced through play. The play experience is highly interactive and engaging, moreso than any other form of art. Thus, autonomous computational game creators should attempt to design new games that can be both useful (playable) and deemed to be creative (or novel) considering that artifacts generated can be experienced and possibly altered. For example, the game narrative, the illumination of a room, or the placement of objects can be altered by a player in a game; this explodes in terms of complexity when the game includes user-generated content or social dynamics in multiplayer games. Another unique property of games is that autonomous creative systems have a long history in the game industry. PCG is used, in specific roles, by many commercial games in order to create engaging but unpredictable game experiences and to lessen the burden of manual game content creation by automating parts of it. Unlike other creative domains where computational creativity is shunned by human artists and critics (Colton 2008), the game industry not only “invented” PCG but proudly advertises its presence as a selling point. Diablo III (Blizzard 2012), which set a record by selling 3.5 million copies in the first 24 hours of its release, proudly states that “[previous] games established the series’ hallmarks: randomized levels, the relentless onslaught of monsters and events in a perpetually fresh world, [...]”1. Highlyawarded Skyrim (Bethesda 2011) boasts of its Radiant A.I. (which allows for the “dynamic reaction to the player’s actions by both NPCs and the game world”) and its Radiant Story (which “records your actions and changes things in the world according to what you have done”). The prevalence of e.g. level generators in games makes both developers and end-users acceptant of the power of computational creativity. Unlike traditional art media, where CC is considered more of an academic pursuit, PCG is a commercial necessity for many games: this makes synergies between game industry and CC research desirable as evidenced by Howlett, Colton, and Browne (2010). This paper introduces computational game creativity as the study of computational creativity within and for computer games. Games can be (1) improved as products via computational creations (for) and/or (2) used as the ultimate canvas for the study of computational creativity as a process (within). Computational game creativity (CGC) is positioned at the intersection of developing fields within games research and long-studied fields within computational creativity such as visual art and narrative. To position computational creativity within games we identify a number of key creative facets in modern game development and design and discuss their required orchestration for a final successful game product. The paper concludes with a discussion on the future trends of CGC and key open research questions. 
Creative Facets of Games Games are multifaceted as they have several creative domains contributing substantially to the game’s look, feel, and experience. This section highlights different creative facets of games and points to instances of algorithmically created game content for these facets. While several frameworks and ontologies exist for describing elements of games, e.g. by Hunicke, Leblanc, and Zubek (2004), the chosen facets are a closer match to established creative domains such as music, painting or architecture. This section primarily argues that each facet fulfills Ritche’s definition of a potentially “creative” activity (Ritchie 2007, p.71). Additionally, it uses Ritchie’s essential properties for creativity, i.e. novelty, quality and typicality (Ritchie 2007) in terms of the goals of each creation process; whether these goals (or the greater goal of creativity) are met, however, will not be evaluated in this paper. From the official ‘What is Diablo 3?’ page at Blizzard’s website: http://us.battle.net/d3/en/game/what-is Visuals As digital games are uniformly displayed on a screen, any game primarily relies on visual output to convey information to the player. Game visuals can range from photorealistic, to caricaturized, to abstract (Järvinen 2002). While photorealistic visuals as those in the FIFA series (EA Sports 1993) are direct representations of objects, in cases where no real-world equivalent exists (such as in fantasy or sci-fi settings) artists must use real-world reference material and extrapolate them to fantastical lengths with “what if” scenarios. Caricaturized visuals often aim at eliciting a specific emotion, such as melancholy in the black and white theme of Limbo (Playdead 2010). Abstract visuals include the 8-bit art of early games, where constraints of the medium (low-tech monitors) forced game artists to become particularly creative in their design of memorable characters using as few pixels or colors as possible. In terms of computer generated visual output for games, the most commercially successful examples thereof are middleware which algorithmically create 3D models of trees with SpeedTree (IDV 2002) or faces with FaceGen (Singular Inversions 2001). Since such middleware are used by multiple high-end commercial games, their algorithms are carefully finetuned to ensure that the generated artifacts imitate real-world objects, targeting typicality in their creations. Games with fewer tethers in the real world can allow a broader range of generated visual elements. Petalz (Risi et al. 2012), for instance, generates colorful flowers which are the core focus of a flower-collecting game. Galactic Arms Race (Hastings, Guha, and Stanley 2009), on the other hand, generates the colors and trajectories of weapons in a space shooter game. Both examples have a wide expressive range as they primarily target novelty, with uninteresting or unwanted visuals being pruned by the player via interactive evolution. In order to impart a sense of visual appreciation to the generator, Liapis, Yannakakis, and Togelius (2012) assigned several dimensio",
"title": ""
},
{
"docid": "0108144cd6a40b8a6cd66517db3bad5e",
"text": "Level designers create gameplay through geometry, AI scripting, and item placement. There is little formal understanding of this process, but rather a large body of design lore and rules of thumb. As a result, there is no accepted common language for describing the building blocks of level design and the gameplay they create. This paper presents level design patterns for first-person shooter (FPS) games, providing cause-effect relationships between level design elements and gameplay. These patterns allow designers to create more interesting and varied levels.",
"title": ""
},
{
"docid": "20f6a794edae8857a04036afc84f532e",
"text": "Genetic algorithms play a significant role, as search techniques forhandling complex spaces, in many fields such as artificial intelligence, engineering, robotic, etc. Genetic algorithms are based on the underlying genetic process in biological organisms and on the naturalevolution principles of populations. These algorithms process apopulation of chromosomes, which represent search space solutions,with three operations: selection, crossover and mutation. Under its initial formulation, the search space solutions are coded using the binary alphabet. However, the good properties related with these algorithms do not stem from the use of this alphabet; other coding types have been considered for the representation issue, such as real coding, which would seem particularly natural when tackling optimization problems of parameters with variables in continuous domains. In this paper we review the features of real-coded genetic algorithms. Different models of genetic operators and some mechanisms available for studying the behaviour of this type of genetic algorithms are revised and compared.",
"title": ""
}
] |
[
{
"docid": "e59136e0d0a710643a078b58075bd8cd",
"text": "PURPOSE\nEpidemiological evidence suggests that chronic consumption of fruit-based flavonoids is associated with cognitive benefits; however, the acute effects of flavonoid-rich (FR) drinks on cognitive function in the immediate postprandial period require examination. The objective was to investigate whether consumption of FR orange juice is associated with acute cognitive benefits over 6 h in healthy middle-aged adults.\n\n\nMETHODS\nMales aged 30-65 consumed a 240-ml FR orange juice (272 mg) and a calorie-matched placebo in a randomized, double-blind, counterbalanced order on 2 days separated by a 2-week washout. Cognitive function and subjective mood were assessed at baseline (prior to drink consumption) and 2 and 6 h post consumption. The cognitive battery included eight individual cognitive tests. A standardized breakfast was consumed prior to the baseline measures, and a standardized lunch was consumed 3 h post-drink consumption.\n\n\nRESULTS\nChange from baseline analysis revealed that performance on tests of executive function and psychomotor speed was significantly better following the FR drink compared to the placebo. The effects of objective cognitive function were supported by significant benefits for subjective alertness following the FR drink relative to the placebo.\n\n\nCONCLUSIONS\nThese data demonstrate that consumption of FR orange juice can acutely enhance objective and subjective cognition over the course of 6 h in healthy middle-aged adults.",
"title": ""
},
{
"docid": "b389cf1f4274b250039414101cf0cc98",
"text": "We present a framework for analyzing the structure of digital media streams. Though our methods work for video, text, and audio, we concentrate on detecting the structure of digital music files. In the first step, spectral data is used to construct a similarity matrix calculated from inter-frame spectral similarity. The digital audio can be robustly segmented by correlating a kernel along the diagonal of the similarity matrix. Once segmented, spectral statistics of each segment are computed. In the second step, segments are clustered based on the selfsimilarity of their statistics. This reveals the structure of the digital music in a set of segment boundaries and labels. Finally, the music can be summarized by selecting clusters with repeated segments throughout the piece. The summaries can be customized for various applications based on the structure of the original music.",
"title": ""
},
{
"docid": "89e25ae1d0f5dbe3185a538c2318b447",
"text": "This paper presents a fully-integrated 3D image radar engine utilizing beamforming for electrical scanning and precise ranging technique for distance measurement. Four transmitters and four receivers form a sensor frontend with phase shifters and power combiners adjusting the beam direction. A built-in 31.3 GHz clock source and a frequency tripler provide both RF carrier and counting clocks for the distance measurement. Flip-chip technique with low-temperature co-fired ceramic (LTCC) antenna design creates a miniature module as small as 6.5 × 4.4 × 0.8 cm3. Designed and fabricated in 65 nm CMOS technology, the transceiver array chip dissipates 960 mW from a 1.2-V supply and occupies chip area of 3.6 × 2.1 mm 2. This prototype achieves ±28° scanning range, 2-m maximum distance, and 1 mm depth resolution.",
"title": ""
},
{
"docid": "833e112f159c27d5ec32076863f879dc",
"text": "Horizontal bone augmentation of the maxillary and mandibular alveolar ridges has been conventionally performed using mini titanium alloy screws. The titanium alloy screws are used to fixate corticocancellous block grafts to the recipient site or for tenting the mucoperiosteum to retain particulate bone grafts. Nonresorbable guided tissue regenerative membranes reinforced with titanium have also been developed to use with particulate bone grafts to augment alveolar ridge defects. This report demonstrates the use of resorbable ultrasound-activated pins and resorbable foil panels developed by KLS Martin for augmenting the alveolar ridges with particulate bone grafts.",
"title": ""
},
{
"docid": "da43061319adbfd41c77483590a3c819",
"text": "Sleep bruxism (SB) is reported by 8% of the adult population and is mainly associated with rhythmic masticatory muscle activity (RMMA) characterized by repetitive jaw muscle contractions (3 bursts or more at a frequency of 1 Hz). The consequences of SB may include tooth destruction, jaw pain, headaches, or the limitation of mandibular movement, as well as tooth-grinding sounds that disrupt the sleep of bed partners. SB is probably an extreme manifestation of a masticatory muscle activity occurring during the sleep of most normal subjects, since RMMA is observed in 60% of normal sleepers in the absence of grinding sounds. The pathophysiology of SB is becoming clearer, and there is an abundance of evidence outlining the neurophysiology and neurochemistry of rhythmic jaw movements (RJM) in relation to chewing, swallowing, and breathing. The sleep literature provides much evidence describing the mechanisms involved in the reduction of muscle tone, from sleep onset to the atonia that characterizes rapid eye movement (REM) sleep. Several brainstem structures (e.g., reticular pontis oralis, pontis caudalis, parvocellularis) and neurochemicals (e.g., serotonin, dopamine, gamma aminobutyric acid [GABA], noradrenaline) are involved in both the genesis of RJM and the modulation of muscle tone during sleep. It remains unknown why a high percentage of normal subjects present RMMA during sleep and why this activity is three times more frequent and higher in amplitude in SB patients. It is also unclear why RMMA during sleep is characterized by co-activation of both jaw-opening and jaw-closing muscles instead of the alternating jaw-opening and jaw-closing muscle activity pattern typical of chewing. The final section of this review proposes that RMMA during sleep has a role in lubricating the upper alimentary tract and increasing airway patency. The review concludes with an outline of questions for future research.",
"title": ""
},
{
"docid": "d2a2bcbed12dacfd0c3355d18ff10f75",
"text": "We investigate the problem of refining SQL queries to satisfy cardinality constraints on the query result. This has applications to the many/few answers problems often faced by database users. We formalize the problem of query refinement and propose a framework to support it in a database system. We introduce an interactive model of refinement that incorporates user feedback to best capture user preferences. Our techniques are designed to handle queries having range and equality predicates on numerical and categorical attributes. We present an experimental evaluation of our framework implemented in an open source data manager and demonstrate the feasibility and practical utility of our approach.",
"title": ""
},
{
"docid": "71f419b67599bce974b229523e6291d2",
"text": "We describe a novel approach to imitation learning that infers latent policies directly from state observations. We introduce a method that characterizes the causal effects of unknown actions on observations while simultaneously predicting their likelihood. We then outline an action alignment procedure that leverages a small amount of environment interactions to determine a mapping between latent and real-world actions. We show that this corrected labeling can be used for imitating the observed behavior, even though no expert actions are given. We evaluate our approach within classic control and photo-realistic visual environments and demonstrate that it performs well when compared to standard approaches.",
"title": ""
},
{
"docid": "adb46bea91457f027c6040cd1d706a76",
"text": "Several new algorithms for visual correspondence based on graph cuts [6, 13, 16] have recently been developed. While these methods give very strong results in practice, they do not handle occlusions properly. Specifically, they treat the two input images asymmetrically, and they do not ensure that a pixel corresponds to at most one pixel in the other image. In this paper, we present two new methods which properly address occlusions, while preserving the advantages of graph cut algorithms. We give experimental results for stereo as well as motion, which demonstrate that our methods perform well both at detecting occlusions and computing disparities.",
"title": ""
},
{
"docid": "3b34e09d2b7109c9cbc8249aec3f23c2",
"text": "The purpose of this paper is to explore the concept of brand equity and discuss its different perspectives, we try to review existing literature of brand equity and evaluate various Customer-based brand equity models to provide a collection from well-known databases for further research in this area.",
"title": ""
},
{
"docid": "79f87d478af99ef60efadb7c5ff7c4ec",
"text": "This study proposes an interior permanent magnet (IPM) brushless dc (BLDC) motor design strategy that utilizes BLDC control based on Hall sensor signals. The magnetic flux of IPM motors varies according to the rotor position and abnormal Hall sensor problems are related to magnetic flux. To find the cause of the abnormality in the Hall sensors, an analysis of the magnetic flux density at the Hall sensor position by finite element analysis is conducted. In addition, an IPM model with a notch structure is proposed to solve abnormal Hall sensor problems and its magnetic equivalent circuit (MEC) model is derived. Based on the MEC model, an optimal rotor design method is proposed and the final model is derived. However, the Hall sensor signal achieved from the optimal rotor is not perfect. To improve the accuracy of the BLDC motor control, a rotor position estimation method is proposed. Finally, experiments are performed to evaluate the performance of the proposed IPM-type BLDC motor and the Hall sensor compensation method.",
"title": ""
},
{
"docid": "869ad7b6bf74f283c8402958a6814a21",
"text": "In this paper, we make a move to build a dialogue system for automatic diagnosis. We first build a dataset collected from an online medical forum by extracting symptoms from both patients’ self-reports and conversational data between patients and doctors. Then we propose a taskoriented dialogue system framework to make the diagnosis for patients automatically, which can converse with patients to collect additional symptoms beyond their self-reports. Experimental results on our dataset show that additional symptoms extracted from conversation can greatly improve the accuracy for disease identification and our dialogue system is able to collect these symptoms automatically and make a better diagnosis.",
"title": ""
},
{
"docid": "05469beb54d88fc6cf9a8510d60b05cb",
"text": "Recurrent neural networks like long short-term memory (LSTM) are important architectures for sequential prediction tasks. LSTMs (and RNNs in general) model sequences along the forward time direction. Bidirectional LSTMs (Bi-LSTMs) on the other hand model sequences along both forward and backward directions and are generally known to perform better at such tasks because they capture a richer representation of the data. In the training of Bi-LSTMs, the forward and backward paths are learned independently. We propose a variant of the Bi-LSTM architecture, which we call Variational Bi-LSTM, that creates a channel between the two paths (during training, but which may be omitted during inference); thus optimizing the two paths jointly. We arrive at this joint objective for our model by minimizing a variational lower bound of the joint likelihood of the data sequence. Our model acts as a regularizer and encourages the two networks to inform each other in making their respective predictions using distinct information. We perform ablation studies to better understand the different components of our model and evaluate the method on various benchmarks, showing state-of-the-art performance.",
"title": ""
},
{
"docid": "7ac1249e901e558443bc8751b11c9427",
"text": "Despite the growing popularity of leasing as an alternative to purchasing a vehicle, there is very little research on how consumers choose among various leasing and financing (namely buying) contracts and how this choice affects the brand they choose. In this paper therefore, we develop a structural model of the consumer's choice of automobile brand and the related decision of whether to lease or buy it. We conceptualize the leasing and buying of the same vehicle as two different goods, each with its own costs and benefits. The differences between the two types of contracts are summarized along three dimensions: (i) the \"net price\" or financial cost of the contract, (ii) maintenance and repair costs and (iii) operating costs, which depend on the consumer's driving behavior. Based on consumer utility maximization, we derive a nested logit of brand and contract choice that captures the tradeoffs among all three costs. The model is estimated on a dataset of new car purchases from the near luxury segment of the automobile market. The optimal choice of brand and contract is determined by the consumer's implicit interest rate and the number of miles she expects to drive, both of which are estimated as parameters of the model. The empirical results yield several interesting findings. We find that (i) cars that deteriorate faster are more likely to be leased than bought, (ii) the estimated implicit interest rate is higher than the market rate, which implies that consumers do not make efficient tradeoffs between the net price and operating costs and may often incorrectly choose to lease and (iii) the estimate of the annual expected mileage indicates that most consumers would incur substantial penalties if they lease, which explains why buying or financing continues to be more popular than leasing. This research also provides several interesting managerial insights into the effectiveness of various promotional instruments. We examine this issue by looking at (i) sales response to a promotion, (ii) the ability of the promotion to draw sales from other brands and (iii) its overall profitability. We find, for example that although the sales response to a cash rebate on a lease is greater than an equivalent increase in the residual value, under certain conditions and for certain brands, a residual value promotion yields higher profits. These findings are of particular value to manufacturers in the prevailing competitive environment, which is marked by the extensive use of large rebates and 0% APR offers.",
"title": ""
},
{
"docid": "d3b03d65b61b98db03445bda899b44ba",
"text": "Positioning is the basis for providing location information to mobile users; with the growth of wireless and mobile communications technologies, mobile phones are now equipped with several radio frequency technologies for deriving positioning information, such as GSM, Wi-Fi or Bluetooth. The objective of this thesis was to implement an indoor positioning system relying on Bluetooth Received Signal Strength (RSS) technology that integrates into the Global Positioning Module (GPM) to provide precise information inside the building. In this project, we propose an indoor positioning system based on an RSS fingerprint and footprint architecture in which smartphone users can obtain their position through the assisted collection of Bluetooth signals, confining RSSs by directions, and filtering burst noises, which can overcome the severe signal fluctuation problem inside the building. Meanwhile, this scheme can achieve higher accuracy in finding the position inside the building.",
"title": ""
},
{
"docid": "a75c512d3041f049fe044df57d32b9a0",
"text": "Prediction of small molecule binding modes to macromolecules of known three-dimensional structure is a problem of paramount importance in rational drug design (the \"docking\" problem). We report the development and validation of the program GOLD (Genetic Optimisation for Ligand Docking). GOLD is an automated ligand docking program that uses a genetic algorithm to explore the full range of ligand conformational flexibility with partial flexibility of the protein, and satisfies the fundamental requirement that the ligand must displace loosely bound water on binding. Numerous enhancements and modifications have been applied to the original technique resulting in a substantial increase in the reliability and the applicability of the algorithm. The advanced algorithm has been tested on a dataset of 100 complexes extracted from the Brookhaven Protein DataBank. When used to dock the ligand back into the binding site, GOLD achieved a 71% success rate in identifying the experimental binding mode.",
"title": ""
},
{
"docid": "e0f5eb430f53ae8c72b77db5bd9846bd",
"text": "Insufficient prior knowledge about the array of skills possessed by medical students in information communication technology accounts for failed efforts at incorporating ICT into their academic work. The aim of this study is to assess information and communication technology skills and their use among clinical students undergoing medical training in northern Ghana. A longitudinal questionnaire was administered to 175 clinical year (1st, 2nd, and 3rd year) medical students aged between 22 and 29 years (mean ± standard deviation; 25.0 ± 1.26 years). Out of the total 175 questionnaires administered, 140 (82.0%) students returned their questionnaires. Questionnaires from 5 students were incomplete, leaving 135 complete and analyzable questionnaires, resulting in a 77.0% response rate. Of the remaining 135 students, 55.6% of the respondents were proficient in the use of ICT related tools, 37.8% were using ICT resources for their academic work, and 85.2% were using such resources for social purposes, while use of ICT for academic work by gender was: 88.2% for males, and 11.8% for females. By gender, 49.0% of males and 52.2% of females were using ICT for social purposes. The study revealed high and low levels of proficiency in ICT depending upon the ICT task to be performed, and concluded that a good curriculum designed to encourage ICT use by students as well as develop in them a multiplicity of skills, coupled with a teaching methodology that is student centred and encourages student engagement in active cognitive activities involving the use of ICTs, may help stem this skewness in proficiency.",
"title": ""
},
{
"docid": "489a131de4f9fb15e971087387862b87",
"text": "AIM\nTo assess caffeine intake habits of Osijek high school students and identify the most important sources of caffeine intake.\n\n\nMETHODS\nAdjusted Wisconsin University Caffeine Consumption Questionnaire was administered to 571 high school students (371 boys and 200 girls in the ninth grade) from Osijek, the largest town in eastern Croatia. The level of caffeine in soft drinks was determined by the high pressure liquid chromatography method, and in chocolate and coffee from the literature data.\n\n\nRESULTS\nOnly 10% of our participants did not use foodstuffs containing caffeine. The intake of caffeine originated from soft drinks (50%), coffee (37%), and chocolate (13%). The mean caffeine concentration in soft drinks was 100 ± 26.9 mg/L. The mean estimated caffeine intake was 62.8 ± 59.8 mg/day. There was no statistically significant difference between boys and girls in caffeine consumption (1.0 ± 0.9 mg/kg bw for boys vs 1.1 ± 1.4 mg/kg bw for girls). Daily caffeine intake of 50-100 mg was recorded in 32% of girls and 29% of boys, whereas intake greater than 100 mg/day was recorded in 18% of girls and 25% of boys.\n\n\nCONCLUSION\nSoft drinks containing caffeine were the major source of caffeine intake in high school students. Large-scale public health measures are needed to inform the public on health issues related to excessive intake of caffeine-containing foodstuffs by children and adolescents.",
"title": ""
},
{
"docid": "dfc0f23dbb0a0556f53f5a913b936c8f",
"text": "Neural network-based methods represent the state-of-the-art in question generation from text. Existing work focuses on generating only questions from text without concerning itself with answer generation. Moreover, our analysis shows that handling rare words and generating the most appropriate question given a candidate answer are still challenges facing existing approaches. We present a novel two-stage process to generate question-answer pairs from the text. For the first stage, we present alternatives for encoding the span of the pivotal answer in the sentence using Pointer Networks. In our second stage, we employ sequence to sequence models for question generation, enhanced with rich linguistic features. Finally, global attention and answer encoding are used for generating the question most relevant to the answer. We motivate and linguistically analyze the role of each component in our framework and consider compositions of these. This analysis is supported by extensive experimental evaluations. Using standard evaluation metrics as well as human evaluations, our experimental results validate the significant improvement in the quality of questions generated by our framework over the state-of-the-art. The technique presented here represents another step towards more automated reading comprehension assessment. We also present a live system to demonstrate the effectiveness of our approach.",
"title": ""
},
{
"docid": "b24222b3dd3f50c94c902eb03a81b502",
"text": "This brief note addresses the historical background of the invention of the enzyme immunoassay (EIA) and enzyme-linked immunosorbent assay (ELISA). These assays were developed independently and simultaneously by the research group of Peter Perlmann and Eva Engvall at Stockholm University in Sweden and by the research group of Anton Schuurs and Bauke van Weemen in The Netherlands. Today, fully automated instruments in medical laboratories around the world use the immunoassay principle with an enzyme as the reporter label for routine measurements of innumerable analytes in patient samples. The impact of EIA/ELISA is reflected in the overwhelmingly large number of times it has appeared as a keyword in the literature since the 1970s. Clinicians and their patients, medical laboratories, in vitro diagnostics manufacturers, and worldwide healthcare systems owe much to these four inventors.",
"title": ""
},
{
"docid": "24a0f441ff09e7a60a1e22e2ca3f1194",
"text": "As important information portals, online healthcare forums are playing an increasingly crucial role in disseminating information and offering support to people. They connect people with leading medical experts and others who have similar experiences. During an epidemic outbreak, such as H1N1, it is critical for the health department to understand how the public is responding to the ongoing pandemic, which has a great impact on social stability. In this case, identifying influential users in an online healthcare forum and tracking how information spreads in such an online community can be an effective way to understand the public reaction toward the disease. In this paper, we propose a framework to monitor and identify influential users from an online healthcare forum. We first develop a mechanism to identify and construct social networks from the discussion board of an online healthcare forum. We propose the UserRank algorithm, which combines link analysis and content analysis techniques to identify influential users. We have also conducted an experiment to evaluate our approach on the Swine Flu forum, which is a sub-community of a popular online healthcare community, MedHelp (www.medhelp.org). Experimental results show that our technique outperforms PageRank, in-degree and out-degree centrality in identifying influential users from an online healthcare forum.",
"title": ""
}
] |
scidocsrr
|
a0a63f230fc0d5234904058c4dc87c23
|
Virtual Try-On Using Kinect and HD Camera
|
[
{
"docid": "02447ce33a1fa5f8b4f156abf5d2f746",
"text": "In this paper, we present TeleHuman, a cylindrical 3D display portal for life-size human telepresence. The TeleHuman 3D videoconferencing system supports 360 degree motion parallax as the viewer moves around the cylinder and optionally, stereoscopic 3D display of the remote person. We evaluated the effect of perspective cues on the conveyance of nonverbal cues in two experiments using a one-way telecommunication version of the system. The first experiment focused on how well the system preserves gaze and hand pointing cues. The second experiment evaluated how well the system conveys 3D body postural information. We compared 3 perspective conditions: a conventional 2D view, a 2D view with 360 degree motion parallax, and a stereoscopic view with 360 degree motion parallax. Results suggest the combined presence of motion parallax and stereoscopic cues significantly improved the accuracy with which participants were able to assess gaze and hand pointing cues, and to instruct others on 3D body poses. The inclusion of motion parallax and stereoscopic cues also led to significant increases in the sense of social presence and telepresence reported by participants.",
"title": ""
},
{
"docid": "d922dbcdd2fb86e7582a4fb78990990e",
"text": "This paper presents a novel system to estimate body pose configuration from a single depth map. It combines both pose detection and pose refinement. The input depth map is matched with a set of pre-captured motion exemplars to generate a body configuration estimation, as well as semantic labeling of the input point cloud. The initial estimation is then refined by directly fitting the body configuration with the observation (e.g., the input depth). In addition to the new system architecture, our other contributions include modifying a point cloud smoothing technique to deal with very noisy input depth maps, a point cloud alignment and pose search algorithm that is view-independent and efficient. Experiments on a public dataset show that our approach achieves significantly higher accuracy than previous state-of-art methods.",
"title": ""
},
{
"docid": "03d33ceac54b501c281a954e158d0224",
"text": "HoloDesk is an interactive system combining an optical see through display and Kinect camera to create the illusion that users are directly interacting with 3D graphics. A virtual image of a 3D scene is rendered through a half silvered mirror and spatially aligned with the real-world for the viewer. Users easily reach into an interaction volume displaying the virtual image. This allows the user to literally get their hands into the virtual display and to directly interact with a spatially aligned 3D virtual world, without the need for any specialized head-worn hardware or input device. We introduce a new technique for interpreting raw Kinect data to approximate and track rigid (e.g., books, cups) and non-rigid (e.g., hands, paper) physical objects and support a variety of physics-inspired interactions between virtual and real. In particular the algorithm models natural human grasping of virtual objects with more fidelity than previously demonstrated. A qualitative study highlights rich emergent 3D interactions, using hands and real-world objects. The implementation of HoloDesk is described in full, and example application scenarios explored. Finally, HoloDesk is quantitatively evaluated in a 3D target acquisition task, comparing the system with indirect and glasses-based variants.",
"title": ""
}
] |
[
{
"docid": "1498977b6e68df3eeca6e25c550a5edd",
"text": "The Raven's Progressive Matrices (RPM) test is a commonly used test of intelligence. The literature suggests a variety of problem-solving methods for addressing RPM problems. For a graduate-level artificial intelligence class in Fall 2014, we asked students to develop intelligent agents that could address 123 RPM-inspired problems, essentially crowdsourcing RPM problem solving. The students in the class submitted 224 agents that used a wide variety of problem-solving methods. In this paper, we first report on the aggregate results of those 224 agents on the 123 problems, then focus specifically on four of the most creative, novel, and effective agents in the class. We find that the four agents, using four very different problem-solving methods, were all able to achieve significant success. This suggests the RPM test may be amenable to a wider range of problem-solving methods than previously reported. It also suggests that human computation might be an effective strategy for collecting a wide variety of methods for creative tasks.",
"title": ""
},
{
"docid": "9d195abaff4bdd283ba8e331501968fb",
"text": "These days, instructors in universities and colleges take the attendance manually either by calling out individual's name or by passing around an attendance sheet for student's signature to confirm his/her presence. Using these methods is both cumbersome and time-consuming. Therefore a method of taking attendance using instructor's mobile telephone has been presented in this paper which is paperless, quick, and accurate. An application software installed in the instructor's mobile telephone enables it to query students' mobile telephone via Bluetooth connection and, through transfer of students' mobile telephones' Media Access Control (MAC) addresses to the instructor's mobile telephone, presence of the student can be confirmed. Moreover, detailed record of a student's attendance can also be generated for printing and filing, if needed.",
"title": ""
},
{
"docid": "d2abcdcdb6650c30838507ec1521b263",
"text": "Deep neural networks (DNNs) have achieved great success in solving a variety of machine learning (ML) problems, especially in the domain of image recognition. However, recent research showed that DNNs can be highly vulnerable to adversarially generated instances, which look seemingly normal to human observers, but completely confuse DNNs. These adversarial samples are crafted by adding small perturbations to normal, benign images. Such perturbations, while imperceptible to the human eye, are picked up by DNNs and cause them to misclassify the manipulated instances with high confidence. In this work, we explore and demonstrate how systematic JPEG compression can work as an effective pre-processing step in the classification pipeline to counter adversarial attacks and dramatically reduce their effects (e.g., Fast Gradient Sign Method, DeepFool). An important component of JPEG compression is its ability to remove high frequency signal components, inside square blocks of an image. Such an operation is equivalent to selective blurring of the image, helping remove additive perturbations. Further, we propose an ensemble-based technique that can be constructed quickly from a given well-performing DNN, and empirically show how such an ensemble that leverages JPEG compression can protect a model from multiple types of adversarial attacks, without requiring knowledge about the model.",
"title": ""
},
{
"docid": "36f960b37e7478d8ce9d41d61195f83a",
"text": "An effective technique in locating a source based on intersections of hyperbolic curves defined by the time differences of arrival of a signal received at a number of sensors is proposed. The approach is noniterative and gives an explicit solution. It is an approximate realization of the maximum-likelihood estimator and is shown to attain the Cramer-Rao lower bound near the small error region. Comparisons of performance with existing techniques of beamformer, spherical-interpolation, divide and conquer, and iterative Taylor-series methods are made. The proposed technique performs significantly better than spherical-interpolation, and has a higher noise threshold than divide and conquer before performance breaks away from the Cramer-Rao lower bound. It provides an explicit solution form that is not available in the beamforming and Taylor-series methods. Computational complexity is comparable to spherical-interpolation but substantially less than the Taylor-series method.",
"title": ""
},
{
"docid": "adcbc47e18f83745f776dec84d09559f",
"text": "Adaptive and flexible production systems require modular and reusable software especially considering their long-term life cycle of up to 50 years. SWMAT4aPS, an approach to measure Software Maturity for automated Production Systems is introduced. The approach identifies weaknesses and strengths of various companies’ solutions for modularity of software in the design of automated Production Systems (aPS). At first, a self-assessed questionnaire is used to evaluate a large number of companies concerning their software maturity. Secondly, we analyze PLC code, architectural levels, workflows and abilities to configure code automatically out of engineering information in four selected companies. In this paper, the questionnaire results from 16 German world-leading companies in machine and plant manufacturing and four case studies validating the results from the detailed analyses are introduced to prove the applicability of the approach and give a survey of the state of the art in industry. Keywords—factory automation, automated production systems, maturity, modularity, control software, Programmable Logic Controller.",
"title": ""
},
{
"docid": "fd0dccac0689390e77a0cc1fb14e5a34",
"text": "Chromatin remodeling is a complex process shaping the nucleosome landscape, thereby regulating the accessibility of transcription factors to regulatory regions of target genes and ultimately managing gene expression. The SWI/SNF (switch/sucrose nonfermentable) complex remodels the nucleosome landscape in an ATP-dependent manner and is divided into the two major subclasses Brahma-associated factor (BAF) and Polybromo Brahma-associated factor (PBAF) complex. Somatic mutations in subunits of the SWI/SNF complex have been associated with different cancers, while germline mutations have been associated with autism spectrum disorder and the neurodevelopmental disorders Coffin–Siris (CSS) and Nicolaides–Baraitser syndromes (NCBRS). CSS is characterized by intellectual disability (ID), coarsening of the face and hypoplasia or absence of the fifth finger- and/or toenails. So far, variants in five of the SWI/SNF subunit-encoding genes ARID1B, SMARCA4, SMARCB1, ARID1A, and SMARCE1 as well as variants in the transcription factor-encoding gene SOX11 have been identified in CSS-affected individuals. ARID2 is a member of the PBAF subcomplex, which until recently had not been linked to any neurodevelopmental phenotypes. In 2015, mutations in the ARID2 gene were associated with intellectual disability. In this study, we report on two individuals with private de novo ARID2 frameshift mutations. Both individuals present with a CSS-like phenotype including ID, coarsening of facial features, other recognizable facial dysmorphisms and hypoplasia of the fifth toenails. Hence, this study identifies mutations in the ARID2 gene as a novel and rare cause for a CSS-like phenotype and enlarges the list of CSS-like genes.",
"title": ""
},
{
"docid": "8722d7864499c76f76820b5f7f0c4fc6",
"text": "This paper proposes a new scientific integration of the classical and quantum fundamentals of neuropsychotherapy. The history, theory, research, and practice of neuropsychotherapy are reviewed and updated in light of the current STEM perspectives on science, technology, engineering, and mathematics. New technology is introduced to motivate more systematic research comparing the bioelectronic amplitudes of varying states of human stress, relaxation, biofeedback, creativity, and meditation. Case studies of the neuropsychotherapy of attention span, consciousness, cognition, chirality, and dissociation along with the psychodynamics of therapeutic hypnosis and chronic post-traumatic stress disorder (PTSD) are explored. Implications of neuropsychotherapeutic research for investigating relationships between activity-dependent gene expression, brain plasticity, and the quantum qualia of consciousness and cognition are discussed. Symmetry in neuropsychotherapy is related to Noether’s theorem of nature’s conservation laws for a unified theory of physics, biology, and psychology on the quantum level. Neuropsychotherapeutic theory, research, and practice is conceptualized as a common yardstick for integrating the fundamentals of physics, biology, and the psychology of consciousness, cognition, and behavior at the quantum level.",
"title": ""
},
{
"docid": "fb655a622c2e299b8d7f8b85769575b4",
"text": "With the substantial development of digital technologies in multimedia, network communication and user interfaces, we are seeing an increasing number of applications of these technologies, in particular in the entertainment domain. They include computer gaming, elearning, high-definition and interactive TVs, and virtual environments. The development of these applications typically involves the integration of existing technologies as well as the development of new technologies. This Introduction summarizes latest interactive entertainment technologies and applications, and briefly highlights some potential research directions. It also introduces the seven papers that are accepted to the special issue. Hopefully, this will provide the readers some insights into future research topics in interactive entertainment technologies and applications.",
"title": ""
},
{
"docid": "8eb3b8fb9420cc27ec17aa884531fa83",
"text": "Participation has emerged as an appropriate approach for enhancing natural resources management. However, despite long experimentation with participation, there are still possibilities for improvement in designing a process of stakeholder involvement by addressing stakeholder heterogeneity and the complexity of decision-making processes. This paper provides a state-of-the-art overview of methods. It proposes a comprehensive framework to implement stakeholder participation in environmental projects, from stakeholder identification to evaluation. For each process within this framework, techniques are reviewed and practical tools proposed. The aim of this paper is to establish methods to determine who should participate, when and how. The application of this framework to one river restoration case study in Switzerland will illustrate its strengths and weaknesses.",
"title": ""
},
{
"docid": "691f5f53582ceedaa51812307778b4db",
"text": "This paper looks at how a vulnerability management (VM) process could be designed & implemented within an organization. Articles and studies about VM usually focus mainly on the technology aspects of vulnerability scanning. The goal of this study is to call attention to something that is often overlooked: a basic VM process which could be easily adapted and implemented in any part of the organization.",
"title": ""
},
{
"docid": "ff71aa2caed491f9bf7b67a5377b4d66",
"text": "In this paper, we propose a hybrid architecture that combines the image modeling strengths of the bag of words framework with the representational power and adaptability of learning deep architectures. Local gradient-based descriptors, such as SIFT, are encoded via a hierarchical coding scheme composed of spatial aggregating restricted Boltzmann machines (RBM). For each coding layer, we regularize the RBM by encouraging representations to fit both sparse and selective distributions. Supervised fine-tuning is used to enhance the quality of the visual representation for the categorization task. We performed a thorough experimental evaluation using three image categorization data sets. The hierarchical coding scheme achieved competitive categorization accuracies of 79.7% and 86.4% on the Caltech-101 and 15-Scenes data sets, respectively. The visual representations learned are compact and the model's inference is fast, as compared with sparse coding methods. The low-level representations of descriptors that were learned using this method result in generic features that we empirically found to be transferrable between different image data sets. Further analysis reveal the significance of supervised fine-tuning when the architecture has two layers of representations as opposed to a single layer.",
"title": ""
},
{
"docid": "4d5119db64e4e0a31064bd22b47e2534",
"text": "Reliability and scalability of an application is dependent on how its application state is managed. To run applications at massive scale requires one to operate datastores that can scale to operate seamlessly across thousands of servers and can deal with various failure modes such as server failures, datacenter failures and network partitions. The goal of Amazon DynamoDB is to eliminate this complexity and operational overhead for our customers by offering a seamlessly scalable database service. In this talk, I will talk about how developers can build applications on DynamoDB without having to deal with the complexity of operating a large scale database.",
"title": ""
},
{
"docid": "e8e8869d74dd4667ceff63c8a24caa27",
"text": "We address the problem of recommending suitable jobs to people who are seeking a new job. We formulate this recommendation problem as a supervised machine learning problem. Our technique exploits all past job transitions as well as the data associated with employees and institutions to predict an employee's next job transition. We train a machine learning model using a large number of job transitions extracted from the publicly available employee profiles in the Web. Experiments show that job transitions can be accurately predicted, significantly improving over a baseline that always predicts the most frequent institution in the data.",
"title": ""
},
{
"docid": "6d0ba36e4371cbd9aa7d136aec11f92d",
"text": "The DNS is a fundamental service that has been repeatedly attacked and abused. DNS manipulation is a prominent case: Recursive DNS resolvers are deployed to explicitly return manipulated answers to users' queries. While DNS manipulation is used for legitimate reasons too (e.g., parental control), rogue DNS resolvers support malicious activities, such as malware and viruses, exposing users to phishing and content injection. We introduce REMeDy, a system that assists operators to identify the use of rogue DNS resolvers in their networks. REMeDy is a completely automatic and parameter-free system that evaluates the consistency of responses across the resolvers active in the network. It operates by passively analyzing DNS traffic and, as such, requires no active probing of third-party servers. REMeDy is able to detect resolvers that manipulate answers, including resolvers that affect unpopular domains. We validate REMeDy using large-scale DNS traces collected in ISP networks where more than 100 resolvers are regularly used by customers. REMeDy automatically identifies regular resolvers, and pinpoint manipulated responses. Among those, we identify both legitimate services that offer additional protection to clients, and resolvers under the control of malwares that steer traffic with likely malicious goals.",
"title": ""
},
{
"docid": "3aa58539c69d6706bc0a9ca0256cdf80",
"text": "BACKGROUND\nAcne vulgaris is a prevalent skin disorder impairing both physical and psychosocial health. This study was designed to investigate the effectiveness of photodynamic therapy (PDT) combined with minocycline in moderate to severe facial acne and influence on quality of life (QOL).\n\n\nMETHODS\nNinety-five patients with moderate to severe facial acne (Investigator Global Assessment [IGA] score 3-4) were randomly treated with PDT and minocycline (n = 48) or minocycline alone (n = 47). All patients took minocycline hydrochloride 100 mg/d for 4 weeks, whereas patients in the minocycline plus PDT group also received 4 times PDT treatment 1 week apart. IGA score, lesion counts, Dermatology Life Quality Index (DLQI), and safety evaluation were performed before treatment and at 2, 4, 6, and 8 weeks after enrolment.\n\n\nRESULTS\nThere were no statistically significant differences in characteristics between 2 treatment groups at baseline. Minocycline plus PDT treatment led to a greater mean percentage reduction from baseline in lesion counts versus minocycline alone at 8 weeks for both inflammatory (-74.4% vs -53.3%; P < .001) and noninflammatory lesions (-61.7% vs -42.4%; P < .001). More patients treated with minocycline plus PDT achieved IGA score <2 at study end (week 8: 30/48 vs 20/47; P < .05). Patients treated with minocycline plus PDT got significant lower DLQI at 8 weeks (4.4 vs 6.3; P < .001). Adverse events were mild and manageable.\n\n\nCONCLUSIONS\nCompared with minocycline alone, the combination of PDT with minocycline significantly improved clinical efficacy and QOL in moderate to severe facial acne patients.",
"title": ""
},
{
"docid": "3257f01d96bd126bd7e3d6f447e0326d",
"text": "Voice SMS is an application developed in this work that allows a user to record and convert spoken messages into SMS text message. User can send messages to the entered phone number or the number of contact from the phonebook. Speech recognition is done via the Internet, connecting to Google's server. The application is adapted to input messages in English. Used tools are Android SDK and the installation is done on mobile phone with Android operating system. In this article we will give basic features of the speech recognition and used algorithm. Speech recognition for Voice SMS uses a technique based on hidden Markov models (HMM - Hidden Markov Model). It is currently the most successful and most flexible approach to speech recognition.",
"title": ""
},
{
"docid": "cb49d71778f873d2f21df73b9e781c8e",
"text": "Many people with mental health problems do not use mental health care, resulting in poorer clinical and social outcomes. Reasons for low service use rates are still incompletely understood. In this longitudinal, population-based study, we investigated the influence of mental health literacy, attitudes toward mental health services, and perceived need for treatment at baseline on actual service use during a 6-month follow-up period, controlling for sociodemographic variables, symptom level, and a history of lifetime mental health service use. Positive attitudes to mental health care, higher mental health literacy, and more perceived need at baseline significantly predicted use of psychotherapy during the follow-up period. Greater perceived need for treatment and better literacy at baseline were predictive of taking psychiatric medication during the following 6 months. Our findings suggest that mental health literacy, attitudes to treatment, and perceived need may be targets for interventions to increase mental health service use.",
"title": ""
},
{
"docid": "c0dd3979344c5f327fe447f46c13cffc",
"text": "Clinicians and researchers often ask patients to remember their past pain. They also use patient's reports of relief from pain as evidence of treatment efficacy, assuming that relief represents the difference between pretreatment pain and present pain. We have estimated the accuracy of remembering pain and described the relationship between remembered pain, changes in pain levels and reports of relief during treatment. During a 10-week randomized controlled clinical trial on the effectiveness of oral appliances for the management of chronic myalgia of the jaw muscles, subjects recalled their pretreatment pain and rated their present pain and perceived relief. Multiple regression analysis and repeated measures analyses of variance (ANOVA) were used for data analysis. Memory of the pretreatment pain was inaccurate and the errors in recall got significantly worse with the passage of time (P < 0.001). Accuracy of recall for pretreatment pain depended on the level of pain before treatment (P < 0.001): subjects with low pretreatment pain exaggerated its intensity afterwards, while it was underestimated by those with the highest pretreatment pain. Memory of pretreatment pain was also dependent on the level of pain at the moment of recall (P < 0.001). Ratings of relief increased over time (P < 0.001), and were dependent on both present and remembered pain (Ps < 0.001). However, true changes in pain were not significantly related to relief scores (P = 0.41). Finally, almost all patients reported relief, even those whose pain had increased. These results suggest that reports of perceived relief do not necessarily reflect true changes in pain.",
"title": ""
},
{
"docid": "b0c2d9130a48fc0df8f428460b949741",
"text": "A micro-strip patch antenna for a passive radio frequency identification (RFID) tag which can operate in the ultra high frequency (UHF) range from 865 MHz to 867 MHz is presented in this paper. The proposed antenna is designed and suitable for tagging the metallic boxes in the UK and Europe warehouse environment. The design is supplemented with the simulation results. In addition, the effect of the antenna substrate thickness and the ground plane on the performance of the proposed antenna is also investigated. The study shows that there is little effect of the antenna substrate thickness on the performance.",
"title": ""
}
] |
scidocsrr
|
75fff063f9521fb9386c33e7aed5b7ee
|
Exploring web scale language models for search query processing
|
[
{
"docid": "6feb2b03d1fe9495ac3c601d6130da79",
"text": "A variety of statistical methods for noun compound analysis are implemented and compared. The results support two main conclusions. First, the use of conceptual association not only enables a broad coverage, but also improves the accuracy. Second, an analysis model based on dependency grammar is substantially more accurate than one based on deepest constituents, even though the latter is more prevalent in the literature.",
"title": ""
}
] |
[
{
"docid": "d909528f98e49f8107bf0cee7a83bbfe",
"text": "INTRODUCTION\nThe increasing use of cone-beam computed tomography in orthodontics has been coupled with heightened concern about the long-term risks of x-ray exposure in orthodontic populations. An industry response to this has been to offer low-exposure alternative scanning options in newer cone-beam computed tomography models.\n\n\nMETHODS\nEffective doses resulting from various combinations of field of view size and field location comparing child and adult anthropomorphic phantoms with the recently introduced i-CAT FLX cone-beam computed tomography unit (Imaging Sciences, Hatfield, Pa) were measured with optical stimulated dosimetry using previously validated protocols. Scan protocols included high resolution (360° rotation, 600 image frames, 120 kV[p], 5 mA, 7.4 seconds), standard (360°, 300 frames, 120 kV[p], 5 mA, 3.7 seconds), QuickScan (180°, 160 frames, 120 kV[p], 5 mA, 2 seconds), and QuickScan+ (180°, 160 frames, 90 kV[p], 3 mA, 2 seconds). Contrast-to-noise ratio was calculated as a quantitative measure of image quality for the various exposure options using the QUART DVT phantom.\n\n\nRESULTS\nChild phantom doses were on average 36% greater than adult phantom doses. QuickScan+ protocols resulted in significantly lower doses than standard protocols for the child (P = 0.0167) and adult (P = 0.0055) phantoms. The 13 × 16-cm cephalometric fields of view ranged from 11 to 85 μSv in the adult phantom and 18 to 120 μSv in the child phantom for the QuickScan+ and standard protocols, respectively. The contrast-to-noise ratio was reduced by approximately two thirds when comparing QuickScan+ with standard exposure parameters.\n\n\nCONCLUSIONS\nQuickScan+ effective doses are comparable with conventional panoramic examinations. Significant dose reductions are accompanied by significant reductions in image quality. However, this trade-off might be acceptable for certain diagnostic tasks such as interim assessment of treatment results.",
"title": ""
},
{
"docid": "c1ccbb8e8a9fa8a3291e9b8a2f8ee8aa",
"text": "Chronic stress is one of the predominant environmental risk factors for a number of psychiatric disorders, particularly for major depression. Different hypotheses have been formulated to address the interaction between early and adult chronic stress in psychiatric disease vulnerability. The match/mismatch hypothesis of psychiatric disease states that the early life environment shapes coping strategies in a manner that enables individuals to optimally face similar environments later in life. We tested this hypothesis in female Balb/c mice that underwent either stress or enrichment early in life and were in adulthood further subdivided in single or group housed, in order to provide aversive or positive adult environments, respectively. We studied the effects of the environmental manipulation on anxiety-like, depressive-like and sociability behaviors and gene expression profiles. We show that continuous exposure to adverse environments (matched condition) is not necessarily resulting in an opposite phenotype compared to a continuous supportive environment (matched condition). Rather, animals with mismatched environmental conditions behaved differently from animals with matched environments on anxious, social and depressive like phenotypes. These results further support the match/mismatch hypothesis and illustrate how mild or moderate aversive conditions during development can shape an individual to be optimally adapted to similar conditions later in life.",
"title": ""
},
{
"docid": "970a1c802a4c731c3fcb03855d5cfb8c",
"text": "Visual prior from generic real-world images can be learned and transferred for representing objects in a scene. Motivated by this, we propose an algorithm that transfers visual prior learned offline for online object tracking. From a collection of real-world images, we learn an overcomplete dictionary to represent visual prior. The prior knowledge of objects is generic, and the training image set does not necessarily contain any observation of the target object. During the tracking process, the learned visual prior is transferred to construct an object representation by sparse coding and multiscale max pooling. With this representation, a linear classifier is learned online to distinguish the target from the background and to account for the target and background appearance variations over time. Tracking is then carried out within a Bayesian inference framework, in which the learned classifier is used to construct the observation model and a particle filter is used to estimate the tracking result sequentially. Experiments on a variety of challenging sequences with comparisons to several state-of-the-art methods demonstrate that more robust object tracking can be achieved by transferring visual prior.",
"title": ""
},
{
"docid": "0f3a795be7101977171a9232e4f98bf4",
"text": "Emotions are universally recognized from facial expressions--or so it has been claimed. To support that claim, research has been carried out in various modern cultures and in cultures relatively isolated from Western influence. A review of the methods used in that research raises questions of its ecological, convergent, and internal validity. Forced-choice response format, within-subject design, preselected photographs of posed facial expressions, and other features of method are each problematic. When they are altered, less supportive or nonsupportive results occur. When they are combined, these method factors may help to shape the results. Facial expressions and emotion labels are probably associated, but the association may vary with culture and is loose enough to be consistent with various alternative accounts, 8 of which are discussed.",
"title": ""
},
{
"docid": "08abb05937b93c9089460bd85d8ef97a",
"text": "A navigation filter combines measurements from sensors currently available on vehicles - Global Positioning System (GPS), inertial measurement unit (IMU), camera, and light detection and ranging (lidar) - for achieving lane-level positioning in environments where stand-alone GPS can suffer or fail. Measurements from the camera and lidar are used in two lane-detection systems, and the calculated lateral distance (to the lane markings) estimates of both lane-detection systems are compared with centimeter-level truth to show decimeter-level accuracy. The navigation filter uses the lateral distance measurements from the lidar- and camera-based systems with a known waypoint-based map to provide global measurements for use in a GPS/Inertial Navigation System (INS) system. Experimental results show that the inclusion of lateral distance measurements and a height constraint from the map creates a fully observable system even with only two satellite observations and, as such, greatly enhances the robustness of the integrated system over GPS/INS alone. Various scenarios are presented, which affect the navigation filter, including satellite geometry, number of satellites, and loss of lateral distance measurements from the camera and lidar systems.",
"title": ""
},
{
"docid": "e40ac3775c0891951d5f375c10928ca0",
"text": "The present study investigates the role of process and social oriented smartphone usage, emotional intelligence, social stress, self-regulation, gender, and age in relation to habitual and addictive smartphone behavior. We conducted an online survey among 386 respondents. The results revealed that habitual smartphone use is an important contributor to addictive smartphone behavior. Process related smartphone use is a strong determinant for both developing habitual and addictive smartphone behavior. People who extensively use their smartphones for social purposes develop smartphone habits faster, which in turn might lead to addictive smartphone behavior. We did not find an influence of emotional intelligence on habitual or addictive smartphone behavior, while social stress positively influences addictive smartphone behavior, and a failure of self-regulation seems to cause a higher risk of addictive smartphone behavior. Finally, men experience less social stress than women, and use their smartphones less for social purposes. The result is that women have a higher chance in developing habitual or addictive smartphone behavior. Age negatively affects process and social usage, and social stress. There is a positive effect on self-regulation. Older people are therefore less likely to develop habitual or addictive smartphone behaviors. 2015 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "7419fa101c2471e225c976da196ed813",
"text": "A 4×40 Gb/s collaborative digital CDR is implemented in 28nm CMOS. The CDR is capable of recovering a low jitter clock from a partially-equalized or un-equalized eye by using a phase detection scheme that inherently filters out ISI edges. The CDR uses split feedback that simultaneously allows wider bandwidth and lower recovered clock jitter. A shared frequency tracking is also introduced that results in lower periodic jitter. Combining these techniques the CDR recovers a 10GHz clock from an eye containing 0.8UIpp DDJ and still achieves 1-10 MHz of tracking bandwidth while adding <; 300fs of jitter. Per lane CDR occupies only .06 mm2 and consumes 175 mW.",
"title": ""
},
{
"docid": "3a95be7cbc37f20a6c41b84f78013263",
"text": "We demonstrate a simple strategy to cope with missing data in sequential inputs, addressing the task of multilabel classification of diagnoses given clinical time series. Collected from the pediatric intensive care unit (PICU) at Children’s Hospital Los Angeles, our data consists of multivariate time series of observations. The measurements are irregularly spaced, leading to missingness patterns in temporally discretized sequences. While these artifacts are typically handled by imputation, we achieve superior predictive performance by treating the artifacts as features. Unlike linear models, recurrent neural networks can realize this improvement using only simple binary indicators of missingness. For linear models, we show an alternative strategy to capture this signal. Training models on missingness patterns only, we show that for some diseases, what tests are run can as predictive as the results themselves.",
"title": ""
},
{
"docid": "52a3688f1474b824a6696b03a8b6536c",
"text": "Credit scoring models have been widely studied in the areas of statistics, machine learning, and artificial intelligence (AI). Many novel approaches such as artificial neural networks (ANNs), rough sets, or decision trees have been proposed to increase the accuracy of credit scoring models. Since an improvement in accuracy of a fraction of a percent might translate into significant savings, a more sophisticated model should be proposed for significantly improving the accuracy of the credit scoring models. In this paper, two-stage genetic programming (2SGP) is proposed to deal with the credit scoring problem by incorporating the advantages of the IF–THEN rules and the discriminant function. On the basis of the numerical results, we can conclude that 2SGP can provide the better accuracy than other models. 2005 Published by Elsevier Inc. 0096-3003/$ see front matter 2005 Published by Elsevier Inc. doi:10.1016/j.amc.2005.05.027 * Corresponding author. Address: Institute of Management of Technology and Institute of Traffic and Transportation College of Management, National Chiao Tung University, 1001 TaHsueh Road, Hsinchu 300, Taiwan. E-mail address: u5460637@ms16.hinet.net (G.-H. Tzeng). 2 J.-J. Huang et al. / Appl. Math. Comput. xxx (2005) xxx–xxx ARTICLE IN PRESS",
"title": ""
},
{
"docid": "ff32e960fb5ff7b7e0910e6e69421860",
"text": "Abslracl Semantic mapping aims to create maps that include meaningful features, both to robots nnd humans. We prescnt :10 extens ion to our feature based mapping technique that includes information about the locations of horizontl.lJ surfaces such as tables, shelves, or counters in the map. The surfaces a rc detected in 3D point clouds, the locations of which arc optimized by our SLAM algorithm. The resulting scans of surfaces :lrc then analyzed to segment them into distinct surfaces, which may include measurements of a single surface across multiple scans. Preliminary rl'Sults arc presented in the form of a feature based map augmented with a sct of 3D point clouds in a consistent global map frame that represent all detected surfaces within the mapped area.",
"title": ""
},
{
"docid": "34aacd740fdaaac1e79222b8f7768741",
"text": "An energy-efficient forwarded-clock transmitter that offers a scalable pre-emphasis equalization and output voltage swing is presented. A resistive-feedback inverter-based driver is used to overcome the drawbacks of the conventional drivers. Moreover, half-rate clocking structure is employed in order to minimize power consumption in 65-nm CMOS technology. The proposed transmitter consists of two data lanes, a shared clock lane, and a global impedance regulator. The prototype chip is fabricated in 65-nm CMOS technology and occupies an active area of 0.15 mm2. The proposed transmitter achieves 100-250 mV single-ended swing and exhibits the energy efficiency of 1 pJ/bit at the per-pin data rate of 10 Gb/s.",
"title": ""
},
{
"docid": "753eb03a060a5e5999eee478d6d164f9",
"text": "Recently reported results with distributed-vector word representations in natural language processing make them appealing for incorporation into a general cognitive architecture like Sigma. This paper describes a new algorithm for learning such word representations from large, shallow information resources, and how this algorithm can be implemented via small modifications to Sigma. The effectiveness and speed of the algorithm are evaluated via a comparison of an external simulation of it with state-of-the-art algorithms. The results from more limited experiments with Sigma are also promising, but more work is required for it to reach the effectiveness and speed of the simulation.",
"title": ""
},
{
"docid": "55b405991dc250cd56be709d53166dca",
"text": "In Data Mining, the usefulness of association rules is strongly limited by the huge amount of delivered rules. To overcome this drawback, several methods were proposed in the literature such as item set concise representations, redundancy reduction, and post processing. However, being generally based on statistical information, most of these methods do not guarantee that the extracted rules are interesting for the user. Thus, it is crucial to help the decision-maker with an efficient post processing step in order to reduce the number of rules. This paper proposes a new interactive approach to prune and filter discovered rules. First, we propose to use ontologies in order to improve the integration of user knowledge in the post processing task. Second, we propose the Rule Schema formalism extending the specification language proposed by Liu et al. for user expectations. Furthermore, an interactive framework is designed to assist the user throughout the analyzing task. Applying our new approach over voluminous sets of rules, we were able, by integrating domain expert knowledge in the post processing step, to reduce the number of rules to several dozens or less. Moreover, the quality of the filtered rules was validated by the domain expert at various points in the interactive process. KeywordsClustering, classification, and association rules, interactive data exploration and discovery, knowledge management applications.",
"title": ""
},
{
"docid": "836c51ed5c9ef5e432498684996f4eb5",
"text": "This paper presents a system that compositionally maps outputs of a wide-coverage Japanese CCG parser onto semantic representations and performs automated inference in higher-order logic. The system is evaluated on a textual entailment dataset. It is shown that the system solves inference problems that focus on a variety of complex linguistic phenomena, including those that are difficult to represent in the standard first-order logic.",
"title": ""
},
{
"docid": "67c5253ef0fd9816fe334cf460f49bcf",
"text": "Previous research has suggested that presence in a virtual environment (VE) is important for several reasons (Sheridan, 1992; Barfield et al., 1995; Slater et al., 1996). A highly present individual is more likely to behave in the VE in a manner similar to their behaviour n similar circumstances in everyday reality. Therefore, an immersive virtual environment (IVE) may b e a useful system for training and skill acquisition, where to train or gain the skill in the real world may be too expensive or dangerous. This formulation of the effect of presence may be used to cons truct a measure of the degree of presence. Suppose that individuals are placed in an environment which is f amiliar relative to everyday reality. Those individuals who have a high degree of presence would be l ikely to exhibit similar behaviours to that in the real world for example, obeying social conventions, pe rceived psycho-physical limitations, and egocentric interactions. Those individuals who are less present may be mor likely to break these conventions (for example, walking through virtual walls and off virtual clif fs). This paper discusses research in presence within IVEs and presents an experiment using a measure of p resence based on observable behaviours of people placed in a VE that is a representation of a famili r environment.",
"title": ""
},
{
"docid": "c7b7ca49ea887c25b05485e346b5b537",
"text": "I n our last article 1 we described the external features which characterize the cranial and facial structures of the cranial strains known as hyperflexion and hyperextension. To understand how these strains develop we have to examine the anatomical relations underlying all cranial patterns. Each strain represent a variation on a theme. By studying the features in common, it is possible to account for the facial and dental consequences of these variations. The key is the spheno-basilar symphysis and the displacements which can take place between the occiput and the sphenoid at that suture. In hyperflexion there is shortening of the cranium in an antero-posterior direction with a subsequent upward buckling of the spheno-basilar symphysis (Figure 1). In children, where the cartilage of the joint has not ossified, a v-shaped wedge can be seen occasionally on the lateral skull radiograph (Figure 2). Figure (3a) is of the cranial base seen from a vertex viewpoint. By leaving out the temporal bones the connection between the centrally placed spheno-basilar symphysis and the peripheral structures of the cranium can be seen more easily. Sutherland realized that the cranium could be divided into quadrants (Figure 3b) centered on the spheno-basilar symphysis and that what happens in each quadrant is directly influenced by the spheno-basilar symphysis. He noted that accompanying the vertical changes at the symphysis there are various lateral displacements. As the peripheral structures move laterally, this is known as external rotation. If they move closer to the midline, this is called internal rotation. It is not unusual to have one side of the face externally rotated and the other side internally rotated (Figure 4a). This can have a significant effect in the mouth, giving rise to asymmetries (Figure 4b). This shows a palatal view of the maxilla with the left posterior dentition externally rotated and the right buccal posterior segment internally rotated, reflecting the internal rotation of the whole right side of the face. This can be seen in hyperflexion but also other strains. With this background, it is now appropriate to examine in detail the cranial strain known as hyperflexion. As its name implies, it is brought about by an exaggeration of the flexion/ extension movement of the cranium into flexion. Rhythmic movement of the cranium continues despite the displacement into flexion, but it does so more readily into flexion than extension. As the skull is shortened in an antero-posterior plane, it is widened laterally. Figures 3a and 3b. 3a: cranial base from a vertex view (temporal bones left out). 3b: Sutherland’s quadrants imposed on cranial base. Figure 2. Lateral Skull Radiograph of Hyperflexion patient. Note V-shaped wedge at superior border of the spheno-basillar symphysis. Figure 1. Movement of Occiput and Sphenold in Hyperflexion. Reprinted from Orthopedic Gnathology, Hockel, J., Ed. 1983. With permission from Quintessence Publishing Co.",
"title": ""
},
{
"docid": "78a38e1bdb15fc57d94a1d8ddd330459",
"text": "One of the most powerful aspects of biological inquiry using model organisms is the ability to control gene expression. A holy grail is both temporal and spatial control of the expression of specific gene products - that is, the ability to express or withhold the activity of genes or their products in specific cells at specific times. Ideally such a method would also regulate the precise levels of gene activity, and alterations would be reversible. The related goal of controlled or purposefully randomized expression of visible markers is also tremendously powerful. While not all of these feats have been accomplished in Caenorhabditis elegans to date, much progress has been made, and recent technologies put these goals within closer reach. Here, I present published examples of successful two-component site-specific recombination in C. elegans. These technologies are based on the principle of controlled intra-molecular excision or inversion of DNA sequences between defined sites, as driven by FLP or Cre recombinases. I discuss several prospects for future applications of this technology.",
"title": ""
},
{
"docid": "0858f3c76ea9570eeae23c33307f2eaf",
"text": "Geometrical validation around the Calpha is described, with a new Cbeta measure and updated Ramachandran plot. Deviation of the observed Cbeta atom from ideal position provides a single measure encapsulating the major structure-validation information contained in bond angle distortions. Cbeta deviation is sensitive to incompatibilities between sidechain and backbone caused by misfit conformations or inappropriate refinement restraints. A new phi,psi plot using density-dependent smoothing for 81,234 non-Gly, non-Pro, and non-prePro residues with B < 30 from 500 high-resolution proteins shows sharp boundaries at critical edges and clear delineation between large empty areas and regions that are allowed but disfavored. One such region is the gamma-turn conformation near +75 degrees,-60 degrees, counted as forbidden by common structure-validation programs; however, it occurs in well-ordered parts of good structures, it is overrepresented near functional sites, and strain is partly compensated by the gamma-turn H-bond. Favored and allowed phi,psi regions are also defined for Pro, pre-Pro, and Gly (important because Gly phi,psi angles are more permissive but less accurately determined). Details of these accurate empirical distributions are poorly predicted by previous theoretical calculations, including a region left of alpha-helix, which rates as favorable in energy yet rarely occurs. A proposed factor explaining this discrepancy is that crowding of the two-peptide NHs permits donating only a single H-bond. New calculations by Hu et al. [Proteins 2002 (this issue)] for Ala and Gly dipeptides, using mixed quantum mechanics and molecular mechanics, fit our nonrepetitive data in excellent detail. To run our geometrical evaluations on a user-uploaded file, see MOLPROBITY (http://kinemage.biochem.duke.edu) or RAMPAGE (http://www-cryst.bioc.cam.ac.uk/rampage).",
"title": ""
},
{
"docid": "c4e3a01c881f2eaef6adc41b41f7556f",
"text": "The accumulation of solid organic waste is thought to be reaching critical levels in almost all regions of the world. These organic wastes require to be managed in a sustainable way to avoid depletion of natural resources, minimize risk to human health, reduce environmental burdens and maintain an overall balance in the ecosystem. A number of methods are currently applied to the treatment and management of solid organic waste. This review focuses on the process of anaerobic digestion which is considered to be one of the most viable options for recycling the organic fraction of solid waste. This manuscript provides a broad overview of the digestibility and energy production (biogas) yield of a range of substrates and the digester configurations that achieve these yields. The involvement of a diverse array of microorganisms and effects of co-substrates and environmental factors on the efficiency of the process has been comprehensively addressed. The recent literature indicates that anaerobic digestion could be an appealing option for converting raw solid organic wastes into useful products such as biogas and other energy-rich compounds, which may play a critical role in meeting the world's ever-increasing energy requirements in the future.",
"title": ""
},
{
"docid": "7401d33980f6630191aa7be7bf380ec3",
"text": "We present PennCOSYVIO, a new challenging Visual Inertial Odometry (VIO) benchmark with synchronized data from a VI-sensor (stereo camera and IMU), two Project Tango hand-held devices, and three GoPro Hero 4 cameras. Recorded at UPenn's Singh center, the 150m long path of the hand-held rig crosses from outdoors to indoors and includes rapid rotations, thereby testing the abilities of VIO and Simultaneous Localization and Mapping (SLAM) algorithms to handle changes in lighting, different textures, repetitive structures, and large glass surfaces. All sensors are synchronized and intrinsically and extrinsically calibrated. We demonstrate the accuracy with which ground-truth poses can be obtained via optic localization off of fiducial markers. The data set can be found at https://daniilidis-group.github.io/penncosyvio/.",
"title": ""
}
] |
scidocsrr
|
84b9a1dc0cbf95014744b7a91301993a
|
Rumor Detection with Hierarchical Social Attention Network
|
[
{
"docid": "b20720aa8ea6fa5fc0f738a605534fbe",
"text": "e proliferation of social media in communication and information dissemination has made it an ideal platform for spreading rumors. Automatically debunking rumors at their stage of diusion is known as early rumor detection, which refers to dealing with sequential posts regarding disputed factual claims with certain variations and highly textual duplication over time. us, identifying trending rumors demands an ecient yet exible model that is able to capture long-range dependencies among postings and produce distinct representations for the accurate early detection. However, it is a challenging task to apply conventional classication algorithms to rumor detection in earliness since they rely on hand-craed features which require intensive manual eorts in the case of large amount of posts. is paper presents a deep aention model on the basis of recurrent neural networks (RNN) to learn selectively temporal hidden representations of sequential posts for identifying rumors. e proposed model delves so-aention into the recurrence to simultaneously pool out distinct features with particular focus and produce hidden representations that capture contextual variations of relevant posts over time. Extensive experiments on real datasets collected from social media websites demonstrate that (1) the deep aention based RNN model outperforms state-of-thearts that rely on hand-craed features; (2) the introduction of so aention mechanism can eectively distill relevant parts to rumors from original posts in advance; (3) the proposed method detects rumors more quickly and accurately than competitors.",
"title": ""
},
{
"docid": "81f36795df7b839fff84d6a551d97fdf",
"text": "Matching natural language sentences is central for many applications such as information retrieval and question answering. Existing deep models rely on a single sentence representation or multiple granularity representations for matching. However, such methods cannot well capture the contextualized local information in the matching process. To tackle this problem, we present a new deep architecture to match two sentences with multiple positional sentence representations. Specifically, each positional sentence representation is a sentence representation at this position, generated by a bidirectional long short term memory (Bi-LSTM). The matching score is finally produced by aggregating interactions between these different positional sentence representations, through k-Max pooling and a multi-layer perceptron. Our model has several advantages: (1) By using Bi-LSTM, rich context of the whole sentence is leveraged to capture the contextualized local information in each positional sentence representation; (2) By matching with multiple positional sentence representations, it is flexible to aggregate different important contextualized local information in a sentence to support the matching; (3) Experiments on different tasks such as question answering and sentence completion demonstrate the superiority of our model.",
"title": ""
},
{
"docid": "517916f4c62bc7b5766efa537359349d",
"text": "Document-level sentiment classification aims to predict user’s overall sentiment in a document about a product. However, most of existing methods only focus on local text information and ignore the global user preference and product characteristics. Even though some works take such information into account, they usually suffer from high model complexity and only consider wordlevel preference rather than semantic levels. To address this issue, we propose a hierarchical neural network to incorporate global user and product information into sentiment classification. Our model first builds a hierarchical LSTM model to generate sentence and document representations. Afterwards, user and product information is considered via attentions over different semantic levels due to its ability of capturing crucial semantic components. The experimental results show that our model achieves significant and consistent improvements compared to all state-of-theart methods. The source code of this paper can be obtained from https://github. com/thunlp/NSC.",
"title": ""
}
] |
[
{
"docid": "40bd8351735f780ba104fa63383002fe",
"text": "M a y / J u n e 2 0 0 0 I E E E S O F T W A R E 37 between the user-requirements specification and the software-requirements specification, mandating complete documentation of each according to various rules. Other cases emphasize this distinction less. For instance, some groups at Microsoft argue that the difficulty of keeping a technical specification consistent with the program is more trouble than the benefit merits.2 We can find a wide range of views in industry literature and from the many organizations that write software. Is it possible to clarify these various artifacts and study their properties, given the wide variations in the use of terms and the many different kinds of software being written? Our aim is to provide a framework for talking about key artifacts, their attributes, and relationships at a general level, but precisely enough that we can rigorously analyze substantive properties.",
"title": ""
},
{
"docid": "10733e267b9959ef57aac7cc18eee5d6",
"text": "In this paper, we explore how strongly author name disambiguation (AND) affects the results of an author-based citation analysis study, and identify conditions under which the commonly used simplified approach of using surnames and first initials may suffice in practice. We compare author citation ranking and co-citation mapping results in the stem cell research field 2004-2009 between two AND approaches: the traditional simplified approach of using author surnames and first initials, and a sophisticated algorithmic approach. We find that the traditional approach leads to extremely distorted rankings and substantially distorted mappings of authors in this field when based on firstor all-author citation counting, whereas last-author based citation ranking and co-citation mapping both appear relatively immune to the author name ambiguity problem. This is largely because romanized names of Chinese and Korean authors, who are very active in this field, are extremely ambiguous, but few of these researchers consistently publish as last authors in by-lines. We conclude that more earnest effort is required to deal with the author name ambiguity problem in both citation analysis and information retrieval, especially given the current trend towards globalization. In the stem cell field, where lab heads are traditionally listed as last authors in by-lines, last-author based citation ranking and co-citation mapping using the traditional simple approach to author name disambiguation may serve as a simple workaround, but likely at the price of largely filtering out Chinese and Korean contributions to the field as well as important contributions by young researchers.",
"title": ""
},
{
"docid": "3535e70b1c264d99eff5797413650283",
"text": "MIMO is one of the techniques used in LTE Release 8 to achieve very high data rates. A field trial was performed in a pre-commercial LTE network. The objective is to investigate how well MIMO works with realistically designed handhelds in band 13 (746-756 MHz in downlink). In total, three different handheld designs were tested using antenna mockups. In addition to the mockups, a reference antenna design with less stringent restrictions on physical size and excellent properties for MIMO was used. The trial comprised test drives in areas with different characteristics and with different network load levels. The effects of hands holding the devices and the effect of using the device inside a test vehicle were also investigated. In general, it is very clear from the trial that MIMO works very well and gives a substantial performance improvement at the tested carrier frequency if the antenna design of the hand-held is well made with respect to MIMO. In fact, the best of the handhelds performed similar to the reference antenna.",
"title": ""
},
{
"docid": "537d6fdfb26e552fb3254addfbb6ac49",
"text": "We propose a unified framework for building unsupervised representations of entities and their compositions, by viewing each entity as a histogram (or distribution) over its contexts. This enables us to take advantage of optimal transport and construct representations that effectively harness the geometry of the underlying space containing the contexts. Our method captures uncertainty via modelling the entities as distributions and simultaneously provides interpretability with the optimal transport map, hence giving a novel perspective for building rich and powerful feature representations. As a guiding example, we formulate unsupervised representations for text, and demonstrate it on tasks such as sentence similarity and word entailment detection. Empirical results show strong advantages gained through the proposed framework. This approach can potentially be used for any unsupervised or supervised problem (on text or other modalities) with a co-occurrence structure, such as any sequence data. The key tools at the core of this framework are Wasserstein distances and Wasserstein barycenters, hence raising the question from our title.",
"title": ""
},
{
"docid": "29919802acf8bab0fde157f822562d23",
"text": "Capacitive electrodes have been studied as an alternative to gel electrodes, as they allow measurement of biopotentials without conductive contact with the patient. However, because the skin interface is not as precisely defined as with gel electrodes, this could lead to signal deformation and misdiagnoses. Thus, measurement of a capacitive coupling of the electrodes may allow to draw conclusions about the applicability of such systems. In addition, combining capacitive biosignal sensing with an impedance measurement unit may enable bioimpedance measurements, from which additional information on the hydration status can be extracted. A prototype system is introduced which measures impedance over capacitive electrodes in parallel with biopotential measurements. Also presented are the first results on characterization of the skin electrode coupling achieved with the system.",
"title": ""
},
{
"docid": "0fd7a70c0d46100d32e0bcb0f65528e3",
"text": "INTRODUCTION Document clustering is an automatic grouping of text documents into clusters so that documents within a cluster have high similarity in comparison to one another, but are dissimilar to documents in other clusters. Unlike document classification (Wang, Zhou, and He, 2001), no labeled documents are provided in clustering; hence, clustering is also known as unsupervised learning. Hierarchical document clustering organizes clusters into a tree or a hierarchy that facilitates browsing. The parent-child relationship among the nodes in the tree can be viewed as a topic-subtopic relationship in a subject hierarchy such as the Yahoo! directory. This chapter discusses several special challenges in hierarchical document clustering: high dimensionality, high volume of data, ease of browsing, and meaningful cluster labels. State-ofthe-art document clustering algorithms are reviewed: the partitioning method (Steinbach, Karypis, and Kumar, 2000), agglomerative and divisive hierarchical clustering (Kaufman and Rousseeuw, 1990), and frequent itemset-based hierarchical clustering (Fung, Wang, and Ester, 2003). The last one, which was recently developed by the authors, is further elaborated since it has been specially designed to address the hierarchical document clustering problem.",
"title": ""
},
{
"docid": "b747e174a4381565c83ee595a2d76d20",
"text": "With the advent of improved speech recognition and information retrieval systems, more and more users are increasingly relying on digital assistants to solve their information needs. Intelligent digital assistants on mobile devices and computers, such as Windows Cortana and Apple Siri, provide users with more functionalities than was possible in the traditional web search paradigm. While most user interaction studies have focused on the traditional web search seing; in this work, we instead consider user interactions with digital assistants (e.g. Cortana, Siri) and aim at identifying the dierences in user interactions, session characteristics and use cases. To our knowledge, this is one of the rst studies investigating the dierent use cases of user interactions with a desktop based digital assistant. Our analysis reveals that given the conversational nature of user interactions, longer sessions (i.e. sessions with a large number of queries) are more common than they were in the traditional web search paradigm. Exploring the dierent use cases, we observe that users go beyond general search and use a digital assistant to issue commands, seek instant answers and nd local information. Our analysis could inform the design of future support systems capable of proactively understanding user needs and developing enhanced evaluation techniques for developing appropriate metrics for the evaluation of digital assistants.",
"title": ""
},
{
"docid": "6737955fd1876a40fc0e662a4cac0711",
"text": "Cloud computing is a novel perspective for large scale distributed computing and parallel processing. It provides computing as a utility service on a pay per use basis. The performance and efficiency of cloud computing services always depends upon the performance of the user tasks submitted to the cloud system. Scheduling of the user tasks plays significant role in improving performance of the cloud services. Task scheduling is one of the main types of scheduling performed. This paper presents a detailed study of various task scheduling methods existing for the cloud environment. A brief analysis of various scheduling parameters considered in these methods is also discussed in this paper.",
"title": ""
},
{
"docid": "30cd626772ad8c8ced85e8312d579252",
"text": "An off-state leakage current unique for short-channel SOI MOSFETs is reported. This off-state leakage is the amplification of gate-induced-drain-leakage current by the lateral bipolar transistor in an SOI device due to the floating body. The leakage current can be enhanced by as much as 100 times for 1/4 mu m SOI devices. This can pose severe constraints in future 0.1 mu m SOI device design. A novel technique was developed based on this mechanism to measure the lateral bipolar transistor current gain beta of SOI devices without using a body contact.<<ETX>>",
"title": ""
},
{
"docid": "2d718fdaecb286ef437b81d2a31383dd",
"text": "In this paper, we present a novel non-parametric polygonal approximation algorithm for digital planar curves. The proposed algorithm first selects a set of points (called cut-points) on the contour which are of very ‘high’ curvature. An optimization procedure is then applied to find adaptively the best fitting polygonal approximations for the different segments of the contour as defined by the cut-points. The optimization procedure uses one of the efficiency measures for polygonal approximation algorithms as the objective function. Our algorithm adaptively locates segments of the contour with different levels of details. The proposed algorithm follows the contour more closely where the level of details on the curve is high, while addressing noise by using suppression techniques. This makes the algorithm very robust for noisy, real-life contours having different levels of details. The proposed algorithm performs favorably when compared with other polygonal approximation algorithms using the popular shapes. In addition, the effectiveness of the algorithm is shown by measuring its performance over a large set of handwritten Arabic characters and MPEG7 CE Shape-1 Part B database. Experimental results demonstrate that the proposed algorithm is very stable and robust compared with other algorithms. 2010 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "53e1c4fc0732efb9b6992a2468425a1a",
"text": "This study investigated the effects of gameplaying on fifth-graders’ maths performance and attitudes. One hundred twenty five fifth graders were recruited and assigned to a cooperative Teams-Games-Tournament (TGT), interpersonal competitive or no gameplaying condition. A state standardsbased maths exam and an inventory on attitudes towards maths were used for the pretest and posttest. The students’ gender, socio-economic status and prior maths ability were examined as the moderating variables and covariate. Multivariate analysis of covariance (MANCOVA) indicated that gameplaying was more effective than drills in promoting maths performance, and cooperative gameplaying was most effective for promoting positive maths attitudes regardless of students’ individual differences. Introduction The problem of low achievement in American mathematics education has been discussed in numerous policy reports (Mathematical Sciences Education Board, 2004). Educational researchers (eg Ferrini-Mundy & Schram, 1996) and administrators (eg Brodinsky, 1985), for years, have appealed for mathematics-education reform and proposed various solutions to foster mathematics learning. Amongst these propositions were computer games as powerful mathematical learning tools with great motivational appeal and multiple representations of learning materials (Betz, 1995; Malone, 1981; Moreno, 2002; Quinn, 1994). Researchers reported (eg Ahl, 1981; Bahr & Rieth, 1989; Inkpen, 1994) that a variety of computer games have been used in classrooms to support learning of basic arithmetic and problem-solving skills. Other researchers (Amory, Naicker, Vincent & Adams, 1999; Papert, 1980) contend that computer games need to be carefully aligned with sound learning strategies and 250 British Journal of Educational Technology Vol 38 No 2 2007 © 2006 The Authors. Journal compilation © 2006 British Educational Communications and Technology Agency. conditions to be beneficial. Consistent with this proposition, the incorporation of computer games within a cooperative learning setting becomes an attractive possibility. Cooperative learning in mathematics has been well discussed by Davidson (1990): group learning helps to remove students’ frustration; it is not only a source for additional help but also offers a support network. Empirical research (Jacobs, 1996; Reid, 1992; Whicker, Bol & Nunnery, 1997) verifies the importance of cooperative learning in mathematics education. Hence, the potential benefit of combining computer games with cooperative learning in mathematics warrants a field investigation. Specific research on the cooperative use of computer games is limited. Empirical study of this technique is especially sparse. Therefore, the purpose of this research was to explore whether computer games and cooperative learning could be used together to enrich K-12 mathematics education. Employing a pretest–posttest experimental design, the study examined the effects of cooperative gameplaying on fifth-grade students’ maths performance and maths attitudes when compared to the interpersonal competitive gameplaying and control groups.",
"title": ""
},
{
"docid": "0f493c438c0eb256e92996603e1ea41f",
"text": "This paper proposes a method to compensate RGB-D images from the original target RGB images by transferring the depth knowledge of source data. Conventional RGB databases (e.g., UT-Interaction database) do not contain depth information since they are captured by the RGB cameras. Therefore, the methods designed for {RGB} databases cannot take advantage of depth information, which proves useful for simplifying intra-class variations and background subtraction. In this paper, we present a novel transfer learning method that can transfer the knowledge from depth information to the RGB database, and use the additional source information to recognize human actions in RGB videos. Our method takes full advantage of 3D geometric information contained within the learned depth data, thus, can further improve action recognition performance. We treat action data as a fourth-order tensor (row, column, frame and sample), and apply latent low-rank transfer learning to learn shared subspaces of the source and target databases. Moreover, we introduce a novel cross-modality regularizer that plays an important role in finding the correlation between RGB and depth modalities, and then more depth information from the source database can be transferred to that of the target. Our method is extensively evaluated on public by available databases. Results of two action datasets show that our method outperforms existing methods.",
"title": ""
},
{
"docid": "3d28f86795ddcd249657703cbedf87b1",
"text": "A 2.5V high precision BiCMOS bandgap reference with supply voltage range of 6V to 18V was proposed and realized. It could be applied to lots of Power Management ICs (Intergrated Circuits) due the high voltage. By introducing a preregulated current source, the PSRR (Power Supply Rejection Ratio) of 103dB at low frequency and the line regulation of 26.7μV/V was achieved under 15V supply voltage at ambient temperature of 27oC. Moreover, if the proper resistance trimming is implemented, the temperature coefficient could be reduced to less than 16.4ppm/oC. The start up time of the reference voltage could also be decreased with an additional bipolar and capacitor.",
"title": ""
},
{
"docid": "678558c9c8d629f98b77a61082bd9b95",
"text": "Internet of Things (IoT) makes all objects become interconnected and smart, which has been recognized as the next technological revolution. As its typical case, IoT-based smart rehabilitation systems are becoming a better way to mitigate problems associated with aging populations and shortage of health professionals. Although it has come into reality, critical problems still exist in automating design and reconfiguration of such a system enabling it to respond to the patient's requirements rapidly. This paper presents an ontology-based automating design methodology (ADM) for smart rehabilitation systems in IoT. Ontology aids computers in further understanding the symptoms and medical resources, which helps to create a rehabilitation strategy and reconfigure medical resources according to patients' specific requirements quickly and automatically. Meanwhile, IoT provides an effective platform to interconnect all the resources and provides immediate information interaction. Preliminary experiments and clinical trials demonstrate valuable information on the feasibility, rapidity, and effectiveness of the proposed methodology.",
"title": ""
},
{
"docid": "cd52e8b57646a81d985b2fab9083bda9",
"text": "Tagging of faces present in a photo or video at shot level has multiple applications related to indexing and retrieval. Face clustering, which aims to group similar faces corresponding to an individual, is a fundamental step of face tagging. We present a progressive method of applying easy-to-hard grouping technique that applies increasingly sophisticated feature descriptors and classifiers on reducing number of faces from each of the iteratively generated clusters. Our primary goal is to design a cost effective solution for deploying it on low-power devices like mobile phones. First, the method initiates the clustering process by applying K-Means technique with relatively large K value on simple LBP features to generate the first set of high precision clusters. Multiple clusters generated for each individual (low recall) are then progressively merged by applying linear and non-linear subspace modelling strategies on custom selected sophisticated features like Gabor filter, Gabor Jets, and Spin LGBP (Local Gabor Binary Patterns) with spatially spinning bin support for histogram computation. Our experiments on the standard face databases like YouTube Faces, YouTube Celebrities, Indian Movie Face database, eNTERFACE, Multi-Pie, CK+, MindReading and internally collected mobile phone samples demonstrate the effectiveness of proposed approach as compared to state-of-the-art methods and a commercial solution on a mobile phone.",
"title": ""
},
{
"docid": "c6f52d8333406bce50d72779f07d5ac2",
"text": "Dimensionality reduction studies methods that effectively reduce data dimensionality for efficient data processing tasks such as pattern recognition, machine learning, text retrieval, and data mining. We introduce the field of dimensionality reduction by dividing it into two parts: feature extraction and feature selection. Feature extraction creates new features resulting from the combination of the original features; and feature selection produces a subset of the original features. Both attempt to reduce the dimensionality of a dataset in order to facilitate efficient data processing tasks. We introduce key concepts of feature extraction and feature selection, describe some basic methods, and illustrate their applications with some practical cases. Extensive research into dimensionality reduction is being carried out for the past many decades. Even today its demand is further increasing due to important high-dimensional applications such as gene expression data, text categorization, and document indexing.",
"title": ""
},
{
"docid": "dd0074bd8b057002efc02e17f69d3ad1",
"text": "The purpose of this study is to recognize modeling methods for coal combustion and gasification in commercial process analysis codes. Many users have appreciated the reliability of commercial process analysis simulation codes; however, it is necessary to understand the physical meaning and limitations of the modeling results. Modeling of coal gasification phenomena has been embodied in commercial process analysis simulators such as Aspen. Commercial code deals with modeling of the gasification system with a number of reactor blocks supported by the specific code, not as a coal gasifier. However, the primary purpose of using process analysis simulation code is to interpret the whole plant cycle rather than an individual unit such as a gasifier. Equilibrium models of a coal gasifier are generally adopted in the commercial codes, where the method of Gibbs free energy minimization of chemical species is applied at the given temperature and pressure. The equilibrium model of the coal gasifier, RGibbs, in commercial codes provides users with helpful information, such as exit syngas temperature, composition, flow rate, performance of coal gasifier model, etc. with various input and operating conditions. This simulation code is being used to generate simple and fast response of results. Limitations and uncertainties are interpreted in the view of the gasification process, chemical reaction, char reactivity, and reactor geometry. In addition, case studies are introduced with examples. Finally, a way to improve the coal gasifier model is indicated, and a kinetically modified model considering reaction rate is proposed.",
"title": ""
},
{
"docid": "b0b4fe1bfe64f306895f2cfc28d50415",
"text": "Background\nFollowing news of deaths in two districts of Jharkhand (West Singhbum and Garhwa) in November 2016, epidemiological investigations were contemplated to investigate any current outbreak of falciparum malaria and deaths attributed to it.\n\n\nMethodology\nThe epidemiological investigations, verbal autopsy of suspected deaths attributed to malaria and keys interviews were conducted in the 2nd and 4th week of November 2016 in Khuntpani and Dhurki block of West Singhbum and Garhwa districts, respectively, following a strict protocol.\n\n\nResults\nThe affected villages were Argundi and Korba-Pahariya and their adjacent tolas in Khuntpani and Dhurki block. Undoubtedly, there was the continuous transmission of falciparum malaria in both the regions in October and November 2016. The total cases (according to case definitions) were 1002, of them, 338 and 12 patients were positive for Plasmodium falciparum positive (Pf +ve) and Plasmodium vivax positive (Pv +ve), respectively, in the affected areas of Khuntpani block. In Dhurki block, out of the total of 631 patients fulfilling the case definition, 65 patients were PF +ve and 23 Pv +ve. Comparing to the last year, there is remarkably high number of falciparum cases. Verbal autopsy of deceased individuals showed that malaria might be one of the strongly probable diagnoses, but not conclusively.\n\n\nConclusion\nAccording to epidemiological investigation, verbal autopsy and key interviews conducted, it may be concluded that there is a definite outbreak of falciparum malaria in the area and environment is congenial for malaria and other tropical diseases.",
"title": ""
},
{
"docid": "ec2257854faa3076b5c25d2c947d1780",
"text": "This paper presents a novel approach for road marking detection and classification based on machine learning algorithms. Road marking recognition is an important feature of an intelligent transportation system (ITS). Previous works are mostly developed using image processing and decisions are often made using empirical functions, which makes it difficult to be generalized. Hereby, we propose a general framework for object detection and classification, aimed at video-based intelligent transportation applications. It is a two-step approach. The detection is carried out using binarized normed gradient (BING) method. PCA network (PCANet) is employed for object classification. Both BING and PCANet are among the latest algorithms in the field of machine learning. Practically the proposed method is applied to a road marking dataset with 1,443 road images. We randomly choose 60% images for training and use the remaining 40% images for testing. Upon training, the system can detect 9 classes of road markings with an accuracy better than 96.8%. The proposed approach is readily applicable to other ITS applications.",
"title": ""
},
{
"docid": "72e9ab4b335ac9865138ad26fc24b29b",
"text": "Recently, convolutional neural networks (ConvNets) have achieved marvellous results in different field of recognition, especially in computer vision. In this paper, a seven-layer ConvNet using data augmentation is proposed for leaves recognition. First, we implement multiform transformations (e.g., rotation and translation etc.) to enlarge the dataset without changing their labels. This novel technique recently makes tremendous contribution to the performance of ConvNets as it is able to reduce the over-fitting degree and enhance the generalization ability of the ConvNet. Moreover, in order to get the shapes of leaves, we sharpen all the images with a random parameter. This method is similar to the edge detection, which has been proved useful in the image classification. Then we train a deep convolutional neural network to classify the augmented leaves data with three groups of test set and finally find that the method is quite feasible and effective. The accuracy achieved by our algorithm outperforms other methods for supervised learning on the popular leaf dataset Flavia.",
"title": ""
}
] |
scidocsrr
|
94eb5bd7ab82d76fa066f0a9744bd5f7
|
Discriminative Bi-Term Topic Model for Headline-Based Social News Clustering
|
[
{
"docid": "5183794d8bef2d8f2ee4048d75a2bd3c",
"text": "Uncovering the topics within short texts, such as tweets and instant messages, has become an important task for many content analysis applications. However, directly applying conventional topic models (e.g. LDA and PLSA) on such short texts may not work well. The fundamental reason lies in that conventional topic models implicitly capture the document-level word co-occurrence patterns to reveal topics, and thus suffer from the severe data sparsity in short documents. In this paper, we propose a novel way for modeling topics in short texts, referred as biterm topic model (BTM). Specifically, in BTM we learn the topics by directly modeling the generation of word co-occurrence patterns (i.e. biterms) in the whole corpus. The major advantages of BTM are that 1) BTM explicitly models the word co-occurrence patterns to enhance the topic learning; and 2) BTM uses the aggregated patterns in the whole corpus for learning topics to solve the problem of sparse word co-occurrence patterns at document-level. We carry out extensive experiments on real-world short text collections. The results demonstrate that our approach can discover more prominent and coherent topics, and significantly outperform baseline methods on several evaluation metrics. Furthermore, we find that BTM can outperform LDA even on normal texts, showing the potential generality and wider usage of the new topic model.",
"title": ""
},
{
"docid": "c668dd96bbb4247ad73b178a7ba1f921",
"text": "Emotions play a key role in natural language understanding and sensemaking. Pure machine learning usually fails to recognize and interpret emotions in text accurately. The need for knowledge bases that give access to semantics and sentics (the conceptual and affective information) associated with natural language is growing exponentially in the context of big social data analysis. To this end, this paper proposes EmoSenticSpace, a new framework for affective common-sense reasoning that extends WordNet-Affect and SenticNet by providing both emotion labels and polarity scores for a large set of natural language concepts. The framework is built by means of fuzzy c-means clustering and supportvector-machine classification, and takes into account a number of similarity measures, including point-wise mutual information and emotional affinity. EmoSenticSpace was tested on three emotionrelated natural language processing tasks, namely sentiment analysis, emotion recognition, and personality detection. In all cases, the proposed framework outperforms the state-of-the-art. In particular, the direct evaluation of EmoSenticSpace against psychological features provided in the benchmark ISEAR dataset shows a 92.15% agreement. 2014 Elsevier B.V. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "435803f0f30a60d2083d7a903e98823a",
"text": "A dual-frequency radar, which estimates the range of a target based on the phase difference between two closely spaced frequencies, has been shown to be a cost-effective approach to accomplish both range-to-motion estimation and tracking. This approach, however, suffers from two drawbacks: it cannot deal with multiple moving targets, and it has poor performance in noisy environments. In this letter, we propose the use of time-frequency signal representations to overcome these drawbacks. The phase, and subsequently the range information, is obtained based on the moving target instantaneous Doppler frequency law, which is provided through time-frequency signal representations. The case of multiple moving targets is handled by separating the different Doppler signatures prior to phase estimation.",
"title": ""
},
{
"docid": "d60fa38df9a4e5692d5c36ea3aa88772",
"text": "Many human activities require precise judgments about the physical properties and dynamics of multiple objects. Classic work suggests that people’s intuitive models of physics are relatively poor and error-prone, based on highly simplified heuristics that apply only in special cases or incorrect general principles (e.g., impetus instead of momentum). These conclusions seem at odds with the breadth and sophistication of naive physical reasoning in real-world situations. Our work measures the boundaries of people’s physical reasoning and tests the richness of intuitive physics knowledge in more complex scenes. We asked participants to make quantitative judgments about stability and other physical properties of virtual 3D towers. We found their judgments correlated highly with a model observer that uses simulations based on realistic physical dynamics and sampling-based approximate probabilistic inference to efficiently and accurately estimate these properties. Several alternative heuristic accounts provide substantially worse fits.",
"title": ""
},
{
"docid": "7dda8adb207e69ccbc52ce0497d3f5d4",
"text": "Statistics from security firms, research institutions and government organizations show that the number of data-leak instances have grown rapidly in recent years. Among various data-leak cases, human mistakes are one of the main causes of data loss. There exist solutions detecting inadvertent sensitive data leaks caused by human mistakes and to provide alerts for organizations. A common approach is to screen content in storage and transmission for exposed sensitive information. Such an approach usually requires the detection operation to be conducted in secrecy. However, this secrecy requirement is challenging to satisfy in practice, as detection servers may be compromised or outsourced. In this paper, we present a privacy-preserving data-leak detection (DLD) solution to solve the issue where a special set of sensitive data digests is used in detection. The advantage of our method is that it enables the data owner to safely delegate the detection operation to a semihonest provider without revealing the sensitive data to the provider. We describe how Internet service providers can offer their customers DLD as an add-on service with strong privacy guarantees. The evaluation results show that our method can support accurate detection with very small number of false alarms under various data-leak scenarios.",
"title": ""
},
{
"docid": "55767c008ad459f570fb6b99eea0b26d",
"text": "The Tor network relies on volunteer relay operators for relay bandwidth, which may limit its growth and scaling potential. We propose an incentive scheme for Tor relying on two novel concepts. We introduce TorCoin, an “altcoin” that uses the Bitcoin protocol to reward relays for contributing bandwidth. Relays “mine” TorCoins, then sell them for cash on any existing altcoin exchange. To verify that a given TorCoin represents actual bandwidth transferred, we introduce TorPath, a decentralized protocol for forming Tor circuits such that each circuit is privately-addressable but publicly verifiable. Each circuit’s participants may then collectively mine a limited number of TorCoins, in proportion to the end-to-end transmission goodput they measure on that circuit.",
"title": ""
},
{
"docid": "4ba866eb1a9c541f87c9e3b7632cc5bf",
"text": "Biologists worry that the rapid rates of warming projected for the planet (1) will doom many species to extinction. Species could face extinction with climate change if climatically suitable habitat disappears or is made inaccessible by geographic barriers or species' inability to disperse (see the figure, panels A to E). Previous studies have provided region- or taxon-specific estimates of biodiversity loss with climate change that range from 0% to 54%, making it difficult to assess the seriousness of this problem. On page 571 of this issue, Urban (2) provides a synthetic and sobering estimate of climate change–induced biodiversity loss by applying a model-averaging approach to 131 of these studies. The result is a projection that up to one-sixth of all species may go extinct if we follow “business as usual” trajectories of carbon emissions.",
"title": ""
},
{
"docid": "45ba4b96658d0b13fbc89630b3abf840",
"text": "A broadband microstrip-to-coplanar strip (CPS) transition employing composite right/left handed (CRLH) transmission line technology is presented. The transition is based on the unique phase slope and offset control properties of the CRLH transmission line. The pairing of a microstrip delay line and the CRLH transmission line has a broadband out-of-phase characteristic, thus enabling the broadband transition between microstrip and CPS. An unbalanced and a balanced type of transition are built and tested. A 3dB insertion loss bandwidth of 85% and 80% can be achieved by connecting the transition back-to- back for the unbalanced and balanced type, respectively. To demonstrate the applicability of the proposed microstrip-to-CPS transition, a modified quasi-Yagi antenna is designed. An impedance bandwidth of 65% can be obtained, showing a 15% improvement compared to the conventional one.",
"title": ""
},
{
"docid": "afeb909f4be9da56dcaeb86d464ec75e",
"text": "Synthesizing expressive speech with appropriate prosodic variations, e.g., various styles, still has much room for improvement. Previous methods have explored to use manual annotations as conditioning attributes to provide variation information. However, the related training data are expensive to obtain and the annotated style codes can be ambiguous and unreliable. In this paper, we explore utilizing the residual error as conditioning attributes. The residual error is the difference between the prediction of a trained average model and the ground truth. We encode the residual error into a style embedding via a neural networkbased error encoder. The style embedding is then fed to the target synthesis model to provide information for modeling various style distributions more accurately. The average model and the error encoder are jointly optimized with the target synthesis model. Our proposed method has two advantages: 1) the embedding is automatically learned with no need of manual style annotations, which helps overcome data sparsity and ambiguity limitations; 2) For any unseen audio utterance, the style embedding can be efficiently generated. This enables rapid adaptation to the desired style to be achieved with only a single adaptation utterance. Experimental results show that our proposed method outperforms the baseline model in both speech quality and style similarity.",
"title": ""
},
{
"docid": "a69ee712617a6bf9743709ef1b11be64",
"text": "Using an ostomy appliance can affect many aspects of a person's health-related quality of life (HRQL). A 2-part, descrip- tive study was designed to develop and validate an instrument to assess quality-of-life outcomes related to ostomy ap- pliance use. Study inclusion/exclusion criteria stipulated participants should be 18 to 85 years of age, have an ileostomy or colostomy, used an appliance for a minimum of 3 months without assistance, and able to complete an online survey. All participants provided sociodemographic and clinical information. In phase 1, a literature search was conducted and existing instruments used to measure HRQL in persons with an ostomy were assessed. Subsequently, the Ostomy-Q, a 23-item, Likert-response type questionnaire, divided into 4 domains (Discreetness, Comfort, Confidence, and Social Life), was developed based on published evidence and existing ostomy-related HRQL tools. Seven (7) participants re- cruited from a manufacturer user panel took part in exploratory/cognitive qualitative interviews to refine the new quality- of-life questionnaire. In phase 2, the instrument was tested to assess item variability and conceptual structure, item-total correlation, internal consistency, test-retest reliability, sensitivity, and minimal important difference (MID) in an online validation study among 200 participants from the manufacturer's user panel (equally divided by gender, 125 [62.5%] >50 years old, 128 [64%] with an ileostomy). This exercise also included completion of the Stoma Quality of Life Question- naire and 2 domains from the Ostomy Adjustment Inventory-23 to assess convergent validity. Eighty-two (82) participants recompleted these study instruments 2 weeks later to assess test-retest reliability. Sociodemographic and clinical data were assessed using descriptive statistics; Cronbach's alpha was used for internal consistency (minimum 0.70), principle component analysis for item variability/conceptual structure, and item-total correlation; intraclass correlation coefficient was used for test-retest reliability; and standard error of measurement was applied to MID. All domains demonstrated good internal consistency (between 0.69 and 0.78). All scales showed stability, with a minimum intraclass correlation coefficient of 0.743 (P <.001). The Ostomy-Q showed good convergent validity with other instruments to which it was compared (P <.01). In this study, the Ostomy-Q was found to be a reliable and valid outcome measure that can enhance understanding of the impact of ostomy appliances on users. Some items for social relationships and discreetness may need more exploring in the future with other patient groups.",
"title": ""
},
{
"docid": "a532355548d9937c496555b868181d06",
"text": "• We have modeled an opportunistic ad library that aggressively collects targeted data from Android devices. • We demonstrate that the access channels considered are realistic. • We have designed a reliable and extensible framework that can be leveraged to assess user data exposure by an app to a library. Classifier Age Marital Status Sex P(%) R(%) P(%) R(%) P(%) R(%) Random Forest 88.6 88.6 95.0 93.8 93.8 92.9",
"title": ""
},
{
"docid": "dba627e41a71ddeb2390a2d5d4682930",
"text": "We present GeoXp, an R package implementing interactive graphics for exploratory spatial data analysis. We use a data basis concerning public schools of the French MidiPyrénées region to illustrate the use of these exploratory techniques based on the coupling between a statistical graph and a map. Besides elementary plots like boxplots, histograms or simple scatterplots, GeoXp also couples maps with Moran scatterplots, variogram clouds, Lorenz curves, etc. In order to make the most of the multidimensionality of the data, GeoXp includes dimension reduction techniques such as principal components analysis and cluster analysis whose results are also linked to the map.",
"title": ""
},
{
"docid": "13873728b108a3142f23d79dd823c171",
"text": "This paper presents an innovative architecture to drastically enlarge the bandwidth of the Doherty power amplifier (DPA). The proposed topology, based on novel input/output splitting/combining networks, allows to overcome the typical bandwidth limiting factors of the conventional DPA. A complete and rigorous theoretical investigation of the developed architecture is presented leading to a closed-form formulation suitable for a direct synthesis of ultra-wideband DPAs. The theoretical formulation is validated through the design, realization, and test of a hybrid prototype based on commercial GaN HEMT device showing a fractional bandwidth larger than 83%. From 1.05 to 2.55 GHz, experimental results with continuous-wave signals have shown efficiency levels within 83%-45% and within 58%-35% at about 42- and 36-dBm output power, respectively. The DPA has also been tested and digitally predistorted by using a 5-MHz Third Generation Partnership Project (3GPP) signal. In particular, to evaluate the ultra-wideband and the multi-mode capabilities of the prototype, f1 = 1.2 GHz, f2 = 1.8 GHz, and f3 = 2.5 GHz have been selected as carrier frequencies for the 3GPP signal. Under these conditions and at 36-dBm average output power, the DPA shows 52%, 35%, and 52% efficiency and an adjacent channel power ratio always lower than -43 dBc.",
"title": ""
},
{
"docid": "8ff5fb7c1da449d311400757fdba8832",
"text": "There is a widespread concern in Western society about the visibility of pornography in public places and on the Internet. What are the consequences for young men and women, and how do they think about gender, sexuality, and pornography? Data was collected, through 22 individual interviews and seven focus groups, from 51 participants (36 women and 37 men aged 14-20 years) in Sweden. The results indicated a process of both normalization and ambivalence. Pornography was used as a form of social intercourse, a source of information, and a stimulus for sexual arousal. Pornography consumption was more common among the young men than among the women. For both the young men and women, the pornographic script functioned as a frame of reference in relation to bodily ideals and sexual performances. Most of the participants had acquired the necessary skills of how to deal with the exposure to pornography in a sensible and reflective manner.",
"title": ""
},
{
"docid": "2d6d5c8b1ac843687db99ccf50a0baff",
"text": "This paper presents algorithms for fast segmentation of 3D point clouds and subsequent classification of the obtained 3D segments. The method jointly determines the ground surface and segments individual objects in 3D, including overhanging structures. When compared to six other terrain modelling techniques, this approach has minimal error between the sensed data and the representation; and is fast (processing a Velodyne scan in approximately 2 seconds). Applications include improved alignment of successive scans by enabling operations in sections (Velodyne scans are aligned 7% sharper compared to an approach using raw points) and more informed decision-making (paths move around overhangs). The use of segmentation to aid classification through 3D features, such as the Spin Image or the Spherical Harmonic Descriptor, is discussed and experimentally compared. Moreover, the segmentation facilitates a novel approach to 3D classification that bypasses feature extraction and directly compares 3D shapes via the ICP algorithm. This technique is shown to achieve accuracy on par with the best feature based classifier (92.1%) while being significantly faster and allowing a clearer understanding of the classifier’s behaviour.",
"title": ""
},
{
"docid": "d3f35e91d5d022de5fe816cf1234e415",
"text": "Rock mass description and characterisation is a basic task for exploration, mining work-flows and ground-water studies. Rock analysis can be performed using borehole logs that are created using a televiewer. Planar discontinuities in the rock appear as sinusoidal curves in borehole logs. The aim of this project is to develop a fast algorithm to analyse borehole imagery using image processing techniques, to identify and trace the discontinuities, and to perform quantitative analysis on their distribution.",
"title": ""
},
{
"docid": "b185af5b9bff8c1542c70f3a0e0e9e47",
"text": "Purpose Organizations have to evaluate their internal and external environments in this highly competitive world. Strengths, weaknesses, opportunities and threats (SWOT) analysis is a very useful technique which analyzes the strengths, weaknesses, opportunities and threats of an organization for taking strategic decisions and it also provides a foundation for the formulation of strategies. But the drawback of SWOT analysis is that it does not quantify the importance of individual factors affecting the organization and the individual factors are described in brief without weighing them. Because of this reason, SWOT analysis can be integrated with any multiple attribute decision-making (MADM) technique like the technique for order preference by similarity to ideal solution (TOPSIS), analytical hierarchy process, etc., to evaluate the best alternative among the available strategic alternatives. The paper aims to discuss these issues. Design/methodology/approach In this study, SWOT analysis is integrated with a multicriteria decision-making technique called TOPSIS to rank different strategies for Indian medical tourism in order of priority. Findings SO strategy (providing best facilitation and care to the medical tourists at par to developed countries) is the best strategy which matches with the four elements of S, W, O and T of SWOT matrix and 35 strategic indicators. Practical implications This paper proposes a solution based on a combined SWOT analysis and TOPSIS approach to help the organizations to evaluate and select strategies. Originality/value Creating a new technology or administering a new strategy always has some degree of resistance by employees. To minimize resistance, the author has used TOPSIS as it involves group thinking, requiring every manager of the organization to analyze and evaluate different alternatives and average measure of each parameter in final decision matrix.",
"title": ""
},
{
"docid": "055cb9aca6b16308793944154dc7866a",
"text": "Learning systems depend on three interrelated components: topologies, cost/performance functions, and learning algorithms. Topologies provide the constraints for the mapping, and the learning algorithms offer the means to find an optimal solution; but the solution is optimal with respect to what? Optimality is characterized by the criterion and in neural network literature, this is the least addressed component, yet it has a decisive influence in generalization performance. Certainly, the assumptions behind the selection of a criterion should be better understood and investigated. Traditionally, least squares has been the benchmark criterion for regression problems; considering classification as a regression problem towards estimating class posterior probabilities, least squares has been employed to train neural network and other classifier topologies to approximate correct labels. The main motivation to utilize least squares in regression simply comes from the intellectual comfort this criterion provides due to its success in traditional linear least squares regression applications – which can be reduced to solving a system of linear equations. For nonlinear regression, the assumption of Gaussianity for the measurement error combined with the maximum likelihood principle could be emphasized to promote this criterion. In nonparametric regression, least squares principle leads to the conditional expectation solution, which is intuitively appealing. Although these are good reasons to use the mean squared error as the cost, it is inherently linked to the assumptions and habits stated above. Consequently, there is information in the error signal that is not captured during the training of nonlinear adaptive systems under non-Gaussian distribution conditions when one insists on second-order statistical criteria. This argument extends to other linear-second-order techniques such as principal component analysis (PCA), linear discriminant analysis (LDA), and canonical correlation analysis (CCA). Recent work tries to generalize these techniques to nonlinear scenarios by utilizing kernel techniques or other heuristics. This begs the question: what other alternative cost functions could be used to train adaptive systems and how could we establish rigorous techniques for extending useful concepts from linear and second-order statistical techniques to nonlinear and higher-order statistical learning methodologies?",
"title": ""
},
{
"docid": "72839a67032eba63246dd2bdf5799f75",
"text": "We use a supervised multi-spike learning algorithm for spiking neural networks (SNNs) with temporal encoding to simulate the learning mechanism of biological neurons in which the SNN output spike trains are encoded by firing times. We first analyze why existing gradient-descent-based learning methods for SNNs have difficulty in achieving multi-spike learning. We then propose a new multi-spike learning method for SNNs based on gradient descent that solves the problems of error function construction and interference among multiple output spikes during learning. The method could be widely applied to single spiking neurons to learn desired output spike trains and to multilayer SNNs to solve classification problems. By overcoming learning interference among multiple spikes, our method has high learning accuracy when there are a relatively large number of output spikes in need of learning. We also develop an output encoding strategy with respect to multiple spikes for classification problems. This effectively improves the classification accuracy of multi-spike learning compared to that of single-spike learning.",
"title": ""
},
{
"docid": "5e2eee141595ae58ca69ee694dc51c8a",
"text": "Evidence-based dietary information represented as unstructured text is a crucial information that needs to be accessed in order to help dietitians follow the new knowledge arrives daily with newly published scientific reports. Different named-entity recognition (NER) methods have been introduced previously to extract useful information from the biomedical literature. They are focused on, for example extracting gene mentions, proteins mentions, relationships between genes and proteins, chemical concepts and relationships between drugs and diseases. In this paper, we present a novel NER method, called drNER, for knowledge extraction of evidence-based dietary information. To the best of our knowledge this is the first attempt at extracting dietary concepts. DrNER is a rule-based NER that consists of two phases. The first one involves the detection and determination of the entities mention, and the second one involves the selection and extraction of the entities. We evaluate the method by using text corpora from heterogeneous sources, including text from several scientifically validated web sites and text from scientific publications. Evaluation of the method showed that drNER gives good results and can be used for knowledge extraction of evidence-based dietary recommendations.",
"title": ""
},
{
"docid": "0a2a39149013843b0cece63687ebe9e9",
"text": "177Lu-labeled PSMA-617 is a promising new therapeutic agent for radioligand therapy (RLT) of patients with metastatic castration-resistant prostate cancer (mCRPC). Initiated by the German Society of Nuclear Medicine, a retrospective multicenter data analysis was started in 2015 to evaluate efficacy and safety of 177Lu-PSMA-617 in a large cohort of patients.\n\n\nMETHODS\nOne hundred forty-five patients (median age, 73 y; range, 43-88 y) with mCRPC were treated with 177Lu-PSMA-617 in 12 therapy centers between February 2014 and July 2015 with 1-4 therapy cycles and an activity range of 2-8 GBq per cycle. Toxicity was categorized by the common toxicity criteria for adverse events (version 4.0) on the basis of serial blood tests and the attending physician's report. The primary endpoint for efficacy was biochemical response as defined by a prostate-specific antigen decline ≥ 50% from baseline to at least 2 wk after the start of RLT.\n\n\nRESULTS\nA total of 248 therapy cycles were performed in 145 patients. Data for biochemical response in 99 patients as well as data for physician-reported and laboratory-based toxicity in 145 and 121 patients, respectively, were available. The median follow-up was 16 wk (range, 2-30 wk). Nineteen patients died during the observation period. Grade 3-4 hematotoxicity occurred in 18 patients: 10%, 4%, and 3% of the patients experienced anemia, thrombocytopenia, and leukopenia, respectively. Xerostomia occurred in 8%. The overall biochemical response rate was 45% after all therapy cycles, whereas 40% of patients already responded after a single cycle. Elevated alkaline phosphatase and the presence of visceral metastases were negative predictors and the total number of therapy cycles positive predictors of biochemical response.\n\n\nCONCLUSION\nThe present retrospective multicenter study of 177Lu-PSMA-617 RLT demonstrates favorable safety and high efficacy exceeding those of other third-line systemic therapies in mCRPC patients. Future phase II/III studies are warranted to elucidate the survival benefit of this new therapy in patients with mCRPC.",
"title": ""
},
{
"docid": "b60850caccf9be627b15c7c83fb3938e",
"text": "Research and development of hip stem implants started centuries ago. However, there is still no yet an optimum design that fulfills all the requirements of the patient. New manufacturing technologies have opened up new possibilities for complicated theoretical designs to become tangible reality. Current trends in the development of hip stems focus on applying porous structures to improve osseointegration and reduce stem stiffness in order to approach the stiffness of the natural human bone. In this field, modern additive manufacturing machines offer unique flexibility in manufacturing parts combining variable density mesh structures with solid and porous metal in a single manufacturing process. Furthermore, additive manufacturing machines became powerful competitors in the economical mass production of hip implants. This is due to their ability to manufacture several parts with different geometries in a single setup and with minimum material consumption. This paper reviews the application of additive manufacturing (AM) techniques in the production of innovative porous femoral hip stem design.",
"title": ""
}
] |
scidocsrr
|
e4c9e17c2ab867615800454a162bbad4
|
Link Prediction on Evolving Data Using Matrix and Tensor Factorizations
|
[
{
"docid": "7ddf5c53b9ee56cb92c67253f495aafd",
"text": "Two-way arrays or matrices are often not enough to represent all the information in the data and standard two-way analysis techniques commonly applied on matrices may fail to find the underlying structures in multi-modal datasets. Multiway data analysis has recently become popular as an exploratory analysis tool in discovering the structures in higher-order datasets, where data have more than two modes. We provide a review of significant contributions in the literature on multiway models, algorithms as well as their applications in diverse disciplines including chemometrics, neuroscience, social network analysis, text mining and computer vision.",
"title": ""
},
{
"docid": "d97e9181f01f195c0b299ce8893ddbbd",
"text": "Linear algebra is a powerful and proven tool in Web search. Techniques, such as the PageRank algorithm of Brin and Page and the HITS algorithm of Kleinberg, score Web pages based on the principal eigenvector (or singular vector) of a particular non-negative matrix that captures the hyperlink structure of the Web graph. We propose and test a new methodology that uses multilinear algebra to elicit more information from a higher-order representation of the hyperlink graph. We start by labeling the edges in our graph with the anchor text of the hyperlinks so that the associated linear algebra representation is a sparse, three-way tensor. The first two dimensions of the tensor represent the Web pages while the third dimension adds the anchor text. We then use the rank-1 factors of a multilinear PARAFAC tensor decomposition, which are akin to singular vectors of the SVD, to automatically identify topics in the collection along with the associated authoritative Web pages.",
"title": ""
}
] |
[
{
"docid": "20ebefc5be0e91e15e4773c633624224",
"text": "Effects of different levels of Biomin® IMBO synbiotic, including Enterococcus faecium (as probiotic), and fructooligosaccharides (as prebiotic) on survival, growth performance, and digestive enzyme activities of common carp fingerlings (Cyprinus carpio) were evaluated. The experiment was carried out in four treatments (each with 3 replicates), including T1 = control with non-synbiotic diet, T2 = 0.5 g/kg synbiotic diet, T3 = 1 g/kg synbiotic diet, and T4 = 1.5 g/kg synbiotic diet. In total 300 fish with an average weight of 10 ± 1 g were distributed in 12 tanks (25 animals per 300 l) and were fed experimental diets over a period of 60 days. The results showed that synbiotic could significantly enhance growth parameters (weight gain, length gain, specific growth rate, percentage weight gain) (P < 0.05), but did not exhibit any effect on survival rate (P > 0.05) compared with the control. An assay of the digestive enzyme activities demonstrated that the trypsin and chymotrypsin activities of synbiotic groups were considerably increased than those in the control (P < 0.05), but there was no significant difference in the levels of α-amylase, lipase, or alkaline phosphatase (P > 0.05). This study indicated that different levels of synbiotic have the capability to enhance probiotic substitution, to improve digestive enzyme activity which leads to digestive system efficiency, and finally to increase growth. It seems that the studied synbiotic could serve as a good diet supplement for common carp cultures.",
"title": ""
},
{
"docid": "ae1109343879d05eaa4b524e4f5d92f3",
"text": "Implantable devices, often dependent on software, save countless lives. But how secure are they?",
"title": ""
},
{
"docid": "6dbf49c714f6e176273317d4274b93de",
"text": "Categorical compositional distributional model of [9] sug gests a way to combine grammatical composition of the formal, type logi cal models with the corpus based, empirical word representations of distribut ional semantics. This paper contributes to the project by expanding the model to al so capture entailment relations. This is achieved by extending the representatio s of words from points in meaning space to density operators, which are probabilit y d stributions on the subspaces of the space. A symmetric measure of similarity an d an asymmetric measure of entailment is defined, where lexical entailment i s measured using von Neumann entropy, the quantum variant of Kullback-Leibl er divergence. Lexical entailment, combined with the composition map on wo rd representations, provides a method to obtain entailment relations on the leve l of sentences. Truth theoretic and corpus-based examples are provided.",
"title": ""
},
{
"docid": "b52580bfad9621a1b0537ceed0c912c0",
"text": "Partial discharge (PD) detection is an effective method for finding insulation defects in HV and EHV power cables. PD apparent charge is typically expressed in picocoulombs (pC) when the calibration procedure defined in IEC 60270 is applied during off-line tests. During on-line PD detection, measured signals are usually denoted in mV or dB without transforming the measured signal into a charge quantity. For AC XLPE power cable systems, on-line PD detection is conducted primarily with the use of high frequency current transformer (HFCT). The HFCT is clamped around the cross-bonding link of the joint or the grounding wire of termination. In on-line occasion, PD calibration is impossible from the termination. A novel on-line calibration method using HFCT is introduced in this paper. To eliminate the influence of cross-bonding links, the interrupted cable sheath at the joint was reconnected via the high-pass C-arm connector. The calibration signal was injected into the cable system via inductive coupling through the cable sheath. The distributed transmission line equivalent circuit of the cable was used in consideration of the signal attenuation. Both the conventional terminal calibration method and the proposed on-line calibration method were performed on the coaxial cable model loop for experimental verification. The amplitude and polarity of signals that propagate in the cable sheath and the conductor were evaluated. The results indicate that the proposed method can calibrate the measured signal during power cable on-line PD detection.",
"title": ""
},
{
"docid": "7ba8492090482fe9d05d5adcea23a120",
"text": "The sequential minimal optimization (SMO) algorithm has been widely used for training the support vector machine (SVM). In this paper, we present the first chip design for sequential minimal optimization. This chip is implemented as an intellectual property (IP) core, suitable to be utilized in an SVM-based recognition system on a chip. The proposed SMO chip has been tested to be fully functional, using a prototype system based on the Altera DE2 board with Cyclone II 2C70 FPGA (field-programmable gate array).",
"title": ""
},
{
"docid": "547cd9db3337be35e8df9074fdd331d5",
"text": "Performing a digital forensic investigation (DFI) requires a standardized and formalized process. There is currently neither an international standard nor does a global, harmonized DFI process (DFIP) exist. The authors studied existing state-of-the-art DFIP models and concluded that there are significant disparities pertaining to the number of processes, the scope, the hierarchical levels, and concepts applied. This paper proposes a comprehensive model that harmonizes existing models. An effort was made to incorporate all types of processes proposed by the existing models, including those aimed at achieving digital forensic readiness. The authors introduce a novel class of processes called concurrent processes. This is a novel contribution that should, together with the rest of the model, enable more efficient and effective DFI, while ensuring admissibility of digital evidence. Ultimately, the proposed model is intended to be used for different types of DFI and should lead to standardization.",
"title": ""
},
{
"docid": "ab6371d4c57d9cf453826833f32677c5",
"text": "In this paper, we consider two inter-dependent deep networks, where one network taps into the other, to perform two challenging cognitive vision tasks - scene classification and object recognition jointly. Recently, convolutional neural networks have shown promising results in each of these tasks. However, as scene and objects are interrelated, the performance of both of these recognition tasks can be further improved by exploiting dependencies between scene and object deep networks. The advantages of considering the inter-dependency between these networks are the following: 1. improvement of accuracy in both scene and object classification, and 2. significant reduction of computational cost in object detection. In order to formulate our framework, we employ two convolutional neural networks (CNNs), scene-CNN and object-CNN. We utilize scene-CNN to generate object proposals which indicate the probable object locations in an image. Object proposals found in the process are semantically relevant to the object. More importantly, the number of object proposals is fewer in amount when compared to other existing methods which reduces the computational cost significantly. Thereafter, in scene classification, we train three hidden layers in order to combine the global (image as a whole) and local features (object information in an image). Features extracted from CNN architecture along with the features processed from object-CNN are combined to perform efficient classification. We perform rigorous experiments on five datasets to demonstrate that our proposed framework outperforms other state-of-the-art methods in classifying scenes as well as recognizing objects.",
"title": ""
},
{
"docid": "426d3b0b74eacf4da771292abad06739",
"text": "Brain tumor is considered as one of the deadliest and most common form of cancer both in children and in adults. Consequently, determining the correct type of brain tumor in early stages is of significant importance to devise a precise treatment plan and predict patient's response to the adopted treatment. In this regard, there has been a recent surge of interest in designing Convolutional Neural Networks (CNNs) for the problem of brain tumor type classification. However, CNNs typically require large amount of training data and can not properly handle input transformations. Capsule networks (referred to as CapsNets) are brand new machine learning architectures proposed very recently to overcome these shortcomings of CNNs, and posed to revolutionize deep learning solutions. Of particular interest to this work is that Capsule networks are robust to rotation and affine transformation, and require far less training data, which is the case for processing medical image datasets including brain Magnetic Resonance Imaging (MRI) images. In this paper, we focus to achieve the following four objectives: (i) Adopt and incorporate CapsNets for the problem of brain tumor classification to design an improved architecture which maximizes the accuracy of the classification problem at hand; (ii) Investigate the over-fitting problem of CapsNets based on a real set of MRI images; (iii) Explore whether or not CapsNets are capable of providing better fit for the whole brain images or just the segmented tumor, and; (iv) Develop a visualization paradigm for the output of the CapsNet to better explain the learned features. Our results show that the proposed approach can successfully overcome CNNs for the brain tumor classification problem.",
"title": ""
},
{
"docid": "aadd1d3e22b767a12b395902b1b0c6ca",
"text": "Long-term situation prediction plays a crucial role for intelligent vehicles. A major challenge still to overcome is the prediction of complex downtown scenarios with multiple road users, e.g., pedestrians, bikes, and motor vehicles, interacting with each other. This contribution tackles this challenge by combining a Bayesian filtering technique for environment representation, and machine learning as long-term predictor. More specifically, a dynamic occupancy grid map is utilized as input to a deep convolutional neural network. This yields the advantage of using spatially distributed velocity estimates from a single time step for prediction, rather than a raw data sequence, alleviating common problems dealing with input time series of multiple sensors. Furthermore, convolutional neural networks have the inherent characteristic of using context information, enabling the implicit modeling of road user interaction. Pixel-wise balancing is applied in the loss function counteracting the extreme imbalance between static and dynamic cells. One of the major advantages is the unsupervised learning character due to fully automatic label generation. The presented algorithm is trained and evaluated on multiple hours of recorded sensor data and compared to Monte-Carlo simulation. Experiments show the ability to model complex interactions.",
"title": ""
},
{
"docid": "68a6edfafb8e7dab899f8ce1f76d311c",
"text": "Networks such as social networks, airplane networks, and citation networks are ubiquitous. The adjacency matrix is often adopted to represent a network, which is usually high dimensional and sparse. However, to apply advanced machine learning algorithms to network data, low-dimensional and continuous representations are desired. To achieve this goal, many network embedding methods have been proposed recently. The majority of existing methods facilitate the local information i.e. local connections between nodes, to learn the representations, while completely neglecting global information (or node status), which has been proven to boost numerous network mining tasks such as link prediction and social recommendation. Hence, it also has potential to advance network embedding. In this paper, we study the problem of preserving local and global information for network embedding. In particular, we introduce an approach to capture global information and propose a network embedding framework LOG, which can coherently model LOcal and Global information. Experimental results demonstrate the ability to preserve global information of the proposed framework. Further experiments are conducted to demonstrate the effectiveness of learned representations of the proposed framework.",
"title": ""
},
{
"docid": "da70744d008c2d0f76d6214e2172f1f8",
"text": "Advanced mobile technology continues to shape professional environments. Smart cell phones, pocket computers and laptop computers reduce the need of users to remain close to a wired information system infrastructure and allow for task performance in many different contexts. Among the consequences are changes in technology requirements, such as the need to limit weight and size of the devices. In the current paper, we focus on the factors that users find important in mobile devices. Based on a content analysis of online user reviews that was followed by structural equation modeling, we found four factors to be significantly related with overall user evaluation, namely functionality, portability, performance, and usability. Besides the practical relevance for technology developers and managers, our research results contribute to the discussion about the extent to which previously established theories of technology adoption and use are applicable to mobile technology. We also discuss the methodological suitability of online user reviews for the assessment of user requirements, and the complementarity of automated and non-automated forms of content analysis.",
"title": ""
},
{
"docid": "53e7e1053129702b7fc32b32d11656da",
"text": "A new and robust constant false alarm rate (CFAR) detector based on truncated statistics (TSs) is proposed for ship detection in single-look intensity and multilook intensity synthetic aperture radar data. The approach is aimed at high-target-density situations such as busy shipping lines and crowded harbors, where the background statistics are estimated from potentially contaminated sea clutter samples. The CFAR detector uses truncation to exclude possible statistically interfering outliers and TSs to model the remaining background samples. The derived truncated statistic CFAR (TS-CFAR) algorithm does not require prior knowledge of the interfering targets. The TS-CFAR detector provides accurate background clutter modeling, a stable false alarm regulation property, and improved detection performance in high-target-density situations.",
"title": ""
},
{
"docid": "ab5963208b0c5a513ceca6e926e8aab9",
"text": "This paper presents a large-scale corpus for non-task-oriented dialogue response selection, which contains over 27K distinct prompts more than 82K responses collected from social media.1 To annotate this corpus, we define a 5-grade rating scheme: bad, mediocre, acceptable, good, and excellent, according to the relevance, coherence, informativeness, interestingness, and the potential to move a conversation forward. To test the validity and usefulness of the produced corpus, we compare various unsupervised and supervised models for response selection. Experimental results confirm that the proposed corpus is helpful in training response selection models.",
"title": ""
},
{
"docid": "727a97b993098aa1386e5bfb11a99d4b",
"text": "Inevitably, reading is one of the requirements to be undergone. To improve the performance and quality, someone needs to have something new every day. It will suggest you to have more inspirations, then. However, the needs of inspirations will make you searching for some sources. Even from the other people experience, internet, and many books. Books and internet are the recommended media to help you improving your quality and performance.",
"title": ""
},
{
"docid": "e49ea1a6aa8d7ffec9ca16ac18cfc43a",
"text": "Simultaneous Localization And Mapping (SLAM) is a fundamental problem in mobile robotics. While point-based SLAM methods provide accurate camera localization, the generated maps lack semantic information. On the other hand, state of the art object detection methods provide rich information about entities present in the scene from a single image. This work marries the two and proposes a method for representing generic objects as quadrics which allows object detections to be seamlessly integrated in a SLAM framework. For scene coverage, additional dominant planar structures are modeled as infinite planes. Experiments show that the proposed points-planes-quadrics representation can easily incorporate Manhattan and object affordance constraints, greatly improving camera localization and leading to semantically meaningful maps. The performance of our SLAM system is demonstrated in https://youtu.be/dR-rB9keF8M.",
"title": ""
},
{
"docid": "24f141bd7a29bb8922fa010dd63181a6",
"text": "This paper reports on the development of a hand to machine interface device that provides real-time gesture, position and orientation information. The key element is a glove and the device as a whole incorporates a collection of technologies. Analog flex sensors on the glove measure finger bending. Hand position and orientation are measured either by ultrasonics, providing five degrees of freedom, or magnetic flux sensors, which provide six degrees of freedom. Piezoceramic benders provide the wearer of the glove with tactile feedback. These sensors are mounted on the light-weight glove and connected to the driving hardware via a small cable.\nApplications of the glove and its component technologies include its use in conjunction with a host computer which drives a real-time 3-dimensional model of the hand allowing the glove wearer to manipulate computer-generated objects as if they were real, interpretation of finger-spelling, evaluation of hand impairment in addition to providing an interface to a visual programming language.",
"title": ""
},
{
"docid": "1d1be59a2c3d3b11039f9e4b2e8e351c",
"text": "The impact of digital mobility services on individual traffic behavior within cities has increased significantly over the last years. Therefore, the aim of this paper is to provide an overview of existing digital services for urban transportation. Towards this end, we analyze 59 digital mobility services available as smartphone applications or web services. Building on a framework for service system modularization, we identified the services’ modules and data sources. While some service modules and data sources are integrated in various mobility services, others are only used in specific services, even though they would generate value in other services as well. This overview provides the basis for future design science research in the area of digital service systems for sustainable transportation. Based on the overview, practitioners from industry and public administration can identify potential for innovative service and foster co-creation and innovation within existing service systems.",
"title": ""
},
{
"docid": "9fb492c57ef0795a9d71cd94a8ebc8f4",
"text": "The increasing reliance on Computational Intelligence techniques like Artificial Neural Networks and Genetic Algorithms to formulate trading decisions have sparked off a chain of research into financial forecasting and trading trend identifications. Many research efforts focused on enhancing predictive capability and identifying turning points. Few actually presented empirical results using live data and actual technical trading rules. This paper proposed a novel RSPOP Intelligent Stock Trading System, that combines the superior predictive capability of RSPOP FNN and the use of widely accepted Moving Average and Relative Strength Indicator Trading Rules. The system is demonstrated empirically using real live stock data to achieve significantly higher Multiplicative Returns than a conventional technical rule trading system. It is able to outperform the buy-and-hold strategy and generate several folds of dollar returns over an investment horizon of four years. The Percentage of Winning Trades was increased significantly from an average of 70% to more than 92% using the system as compared to the conventional trading system; demonstrating the system’s ability to filter out erroneous trading signals generated by technical rules and to preempt any losing trades. The system is designed based on the premise that it is possible to capitalize on the swings in a stock counter’s price, without a need for predicting target prices.",
"title": ""
},
{
"docid": "1574abcbcff64f1c6fd725e0b5cf3df0",
"text": "Model compression is essential for serving large deep neural nets on devices with limited resources or applications that require real-time responses. As a case study, a neural language model usually consists of one or more recurrent layers sandwiched between an embedding layer used for representing input tokens and a softmax layer for generating output tokens. For problems with a very large vocabulary size, the embedding and the softmax matrices can account for more than half of the model size. For instance, the bigLSTM model achieves great performance on the One-Billion-Word (OBW) dataset with around 800k vocabulary, and its word embedding and softmax matrices use more than 6GBytes space, and are responsible for over 90% of the model parameters. In this paper, we propose GroupReduce, a novel compression method for neural language models, based on vocabulary-partition (block) based low-rank matrix approximation and the inherent frequency distribution of tokens (the power-law distribution of words). The experimental results show our method can significantly outperform traditional compression methods such as low-rank approximation and pruning. On the OBW dataset, our method achieved 6.6 times compression rate for the embedding and softmax matrices, and when combined with quantization, our method can achieve 26 times compression rate, which translates to a factor of 12.8 times compression for the entire model with very little degradation in perplexity.",
"title": ""
},
{
"docid": "3e0076e4f2e69238c5f5ebcdc1dbbda1",
"text": "This work presents a self-biased MOSFET threshold voltage VT0 monitor. The threshold condition is defined based on a current-voltage relationship derived from a continuous physical model. The model is valid for any operating condition, from weak to strong inversion, and under triode or saturation regimes. The circuit consists in balancing two self-cascode cells operating at different inversion levels, where one of the transistors that compose these cells is biased at the threshold condition. The circuit is MOSFET-only (can be implemented in any standard digital process), and it operates with a power supply of less than 1 V, consuming tenths of nW. We propose a process independent design methodology, evaluating different trade-offs of accuracy, area and power consumption. Schematic simulation results, including Monte Carlo variability analysis, support the VT0 monitoring behavior of the circuit with good accuracy on a 180 nm process.",
"title": ""
}
] |
scidocsrr
|
ef7656d8f6e36f830b6f07bcc8273752
|
Doc2Sent2Vec: A Novel Two-Phase Approach for Learning Document Representation
|
[
{
"docid": "497088def9f5f03dcb32e33d1b6fcb64",
"text": "In recent years, variants of a neural network architecture for statistical language modeling have been proposed and successfully applied, e.g. in the language modeling component of speech recognizers. The main advantage of these architectures is that they learn an embedding for words (or other symbols) in a continuous space that helps to smooth the language model and provide good generalization even when the number of training examples is insufficient. However, these models are extremely slow in comparison to the more commonly used n-gram models, both for training and recognition. As an alternative to an importance sampling method proposed to speed-up training, we introduce a hierarchical decomposition of the conditional probabilities that yields a speed-up of about 200 both during training and recognition. The hierarchical decomposition is a binary hierarchical clustering constrained by the prior knowledge extracted from the WordNet semantic hierarchy.",
"title": ""
},
{
"docid": "062c970a14ac0715ccf96cee464a4fec",
"text": "A goal of statistical language modeling is to learn the joint probability function of sequences of words in a language. This is intrinsically difficult because of the curse of dimensionality: a word sequence on which the model will be tested is likely to be different from all the word sequences seen during training. Traditional but very successful approaches based on n-grams obtain generalization by concatenating very short overlapping sequences seen in the training set. We propose to fight the curse of dimensionality by learning a distributed representation for words which allows each training sentence to inform the model about an exponential number of semantically neighboring sentences. The model learns simultaneously (1) a distributed representation for each word along with (2) the probability function for word sequences, expressed in terms of these representations. Generalization is obtained because a sequence of words that has never been seen before gets high probability if it is made of words that are similar (in the sense of having a nearby representation) to words forming an already seen sentence. Training such large models (with millions of parameters) within a reasonable time is itself a significant challenge. We report on experiments using neural networks for the probability function, showing on two text corpora that the proposed approach significantly improves on state-of-the-art n-gram models, and that the proposed approach allows to take advantage of longer contexts.",
"title": ""
}
] |
[
{
"docid": "6eec78a8b8c58f2c9e28dfcb952a0e8f",
"text": "Typical quadrotor aerial robots used in research weigh less than 3 kg and carry payloads measured in hundreds of grams. Several obstacles in design and control must be overcome to cater for expected industry demands that push the boundaries of existing quadrotor performance. The X-4 Flyer, a 4 kg quadrotor with a 1 kg payload, is intended to be prototypical of useful commercial quadrotors. The custom-built craft uses tuned plant dynamics with an onboard embedded attitude controller to stabilise flight. Independent linear SISO controllers were designed to regulate flyer attitude. The performance of the system is demonstrated in indoor and outdoor flight.",
"title": ""
},
{
"docid": "688d6f57a4567b7d23a849e33ae584d4",
"text": "Whereas traditional theories of gender development have focused on individualistic paths, recent analyses have argued for a more social categorical approach to children's understanding of gender. Using a modeling paradigm based on K. Bussey and A. Bandura (1984), 3 experiments (N = 62, N = 32, and N = 64) examined preschoolers' (M age = 52.9 months) imitation of, and memory for, behaviors of same-sex and opposite-sex children and adults. In all experiments, children's imitation of models varied according to the emphasis given to the particular category of models, despite equal attention being paid to both categories. It is suggested that the categorical nature of gender, or age, informs children's choice of imitative behaviors.",
"title": ""
},
{
"docid": "60e56e7ad5ab23d3f428872de245f028",
"text": "To automatically determine the number of clusters and generate more quality clusters while clustering data samples, we propose a harmonious genetic clustering algorithm, named HGCA, which is based on harmonious mating in eugenic theory. Different from extant genetic clustering methods that only use fitness, HGCA aims to select the most suitable mate for each chromosome and takes into account chromosomes gender, age, and fitness when computing mating attractiveness. To avoid illegal mating, we design three mating prohibition schemes, i.e., no mating prohibition, mating prohibition based on lineal relativeness, and mating prohibition based on collateral relativeness, and three mating strategies, i.e., greedy eugenics-based mating strategy, eugenics-based mating strategy based on weighted bipartite matching, and eugenics-based mating strategy based on unweighted bipartite matching, for harmonious mating. In particular, a novel single-point crossover operator called variable-length-and-gender-balance crossover is devised to probabilistically guarantee the balance between population gender ratio and dynamics of chromosome lengths. We evaluate the proposed approach on real-life and artificial datasets, and the results show that our algorithm outperforms existing genetic clustering methods in terms of robustness, efficiency, and effectiveness.",
"title": ""
},
{
"docid": "594113ed497356eba99b63ddc5c749d7",
"text": "Aspect-based opinion mining is finding elaborate opinions towards a subject such as a product or an event. With explosive growth of opinionated texts on the Web, mining aspect-level opinions has become a promising means for online public opinion analysis. In particular, the boom of various types of online media provides diverse yet complementary information, bringing unprecedented opportunities for cross media aspect-opinion mining. Along this line, we propose CAMEL, a novel topic model for complementary aspect-based opinion mining across asymmetric collections. CAMEL gains information complementarity by modeling both common and specific aspects across collections, while keeping all the corresponding opinions for contrastive study. An auto-labeling scheme called AME is also proposed to help discriminate between aspect and opinion words without elaborative human labeling, which is further enhanced by adding word embedding-based similarity as a new feature. Moreover, CAMEL-DP, a nonparametric alternative to CAMEL is also proposed based on coupled Dirichlet Processes. Extensive experiments on real-world multi-collection reviews data demonstrate the superiority of our methods to competitive baselines. This is particularly true when the information shared by different collections becomes seriously fragmented. Finally, a case study on the public event “2014 Shanghai Stampede” demonstrates the practical value of CAMEL for real-world applications.",
"title": ""
},
{
"docid": "6ddfb4631928eec4247adf2ac033129e",
"text": "Facial micro-expression recognition is an upcoming area in computer vision research. Up until the recent emergence of the extensive CASMEII spontaneous micro-expression database, there were numerous obstacles faced in the elicitation and labeling of data involving facial micro-expressions. In this paper, we propose the Local Binary Patterns with Six Intersection Points (LBP-SIP) volumetric descriptor based on the three intersecting lines crossing over the center point. The proposed LBP-SIP reduces the redundancy in LBP-TOP patterns, providing a more compact and lightweight representation; leading to more efficient computational complexity. Furthermore, we also incorporated a Gaussian multi-resolution pyramid to our proposed approach by concatenating the patterns across all pyramid levels. Using an SVM classifier with leave-one-sample-out cross validation, we achieve the best recognition accuracy of 67.21%, surpassing the baseline performance with further computational efficiency.",
"title": ""
},
{
"docid": "26c4cded1181ce78cc9b61a668e57939",
"text": "Monitoring crop condition and production estimates at the state and county level is of great interest to the U.S. Department of Agriculture. The National Agricultural Statistical Service (NASS) of the U.S. Department of Agriculture conducts field interviews with sampled farm operators and obtains crop cuttings to make crop yield estimates at regional and state levels. NASS needs supplemental spatial data that provides timely information on crop condition and potential yields. In this research, the crop model EPIC (Erosion Productivity Impact Calculator) was adapted for simulations at regional scales. Satellite remotely sensed data provide a real-time assessment of the magnitude and variation of crop condition parameters, and this study investigates the use of these parameters as an input to a crop growth model. This investigation was conducted in the semi-arid region of North Dakota in the southeastern part of the state. The primary objective was to evaluate a method of integrating parameters retrieved from satellite imagery in a crop growth model to simulate spring wheat yields at the sub-county and county levels. The input parameters derived from remotely sensed data provided spatial integrity, as well as a real-time calibration of model simulated parameters during the season, to ensure that the modeled and observed conditions agree. A radiative transfer model, SAIL (Scattered by Arbitrary Inclined Leaves), provided the link between the satellite data and crop model. The model parameters were simulated in a geographic information system grid, which was the platform for aggregating yields at local and regional scales. A model calibration was performed to initialize the model parameters. This calibration was performed using Landsat data over three southeast counties in North Dakota. The model was then used to simulate crop yields for the state of North Dakota with inputs derived from NOAA AVHRR data. The calibration and the state level simulations are compared with spring wheat yields reported by NASS objective yield surveys. Introduction Monitoring agricultural crop conditions during the growing season and estimating the potential crop yields are both important for the assessment of seasonal production. Accurate and timely assessment of particularly decreased production caused by a natural disaster, such as drought or pest infestation, can be critical for countries where the economy is dependent on the crop harvest. Early assessment of yield reductions could avert a disastrous situation and help in strategic planning to meet the demands. The National Agricultural Statistics Service (NASS) of the U.S. Department of Agriculture (USDA) monitors crop conditions in the U.S. and provides monthly projected estimates of crop yield and production. NASS has developed methods to assess crop growth and development from several sources of information, including several types of surveys of farm operators. Field offices in each state are responsible for monitoring the progress and health of the crop and integrating crop condition with local weather information. This crop information is also distributed in a biweekly report on regional weather conditions. NASS provides monthly information to the Agriculture Statistics Board, which assesses the potential yields of all commodities based on crop condition information acquired from different sources. This research complements efforts to independently assess crop condition at the county, agricultural statistics district, and state levels. 
In the early 1960s, NASS initiated “objective yield” surveys for crops such as corn, soybean, wheat, and cotton in States with the greatest acreages (Allen et al., 1994). These surveys establish small sample units in randomly selected fields which are visited monthly to determine numbers of plants, numbers of fruits (wheat heads, corn ears, soybean pods, etc.), and weight per fruit. Yield forecasting models are based on relationships of samples of the same maturity stage in comparable months during the past four years in each State. Additionally, the Agency implemented a midyear Area Frame that enabled creation of probabilistic based acreage estimates. For major crops, sampling errors are as low as 1 percent at the U.S. level and 2 to 3 percent in the largest producing States. Accurate crop production forecasts require accurate forecasts of acreage at harvest, its geographic distribution, and the associated crop yield determined by local growing conditions. There can be significant year-to-year variability which requires a systematic monitoring capability. To quantify the complex effects of environment, soils, and management practices, both yield and acreage must be assessed at sub-regional levels where a limited range of factors and simple interactions permit modeling and estimation. A yield forecast within homogeneous soil type, land use, crop variety, and climate preclude the necessity for use of a complex forecast model. In 1974, the Large Area Crop Inventory Experiment (LACIE), a joint effort of the National Aeronautics and Space Administration (NASA), the USDA, and the National Oceanic and Atmospheric Administration (NOAA) began to apply satellite remote sensing technology on experimental bases to forecast harvests in important wheat producing areas (MacDonald, 1979). In 1977 LACIE in-season forecasted a 30 percent shortfall in Soviet spring wheat production that came within 10 percent of the official Soviet estimate that came several months after the harvest (Myers, 1983).",
"title": ""
},
{
"docid": "2281d739c6858d35eb5f3650d2d03474",
"text": "We discuss an implementation of the RRT* optimal motion planning algorithm for the half-car dynamical model to enable autonomous high-speed driving. To develop fast solutions of the associated local steering problem, we observe that the motion of a special point (namely, the front center of oscillation) can be modeled as a double integrator augmented with fictitious inputs. We first map the constraints on tire friction forces to constraints on these augmented inputs, which provides instantaneous, state-dependent bounds on the curvature of geometric paths feasibly traversable by the front center of oscillation. Next, we map the vehicle's actual inputs to the augmented inputs. The local steering problem for the half-car dynamical model can then be transformed to a simpler steering problem for the front center of oscillation, which we solve efficiently by first constructing a curvature-bounded geometric path and then imposing a suitable speed profile on this geometric path. Finally, we demonstrate the efficacy of the proposed motion planner via numerical simulation results.",
"title": ""
},
{
"docid": "8081ff2372cba7b06cd2ee4deda8570e",
"text": "A myriad of security vulnerabilites can be exposed via the reverse engineering of the integrated circuits contained in electronics systems. The goal of IC reverse engineering is to uncover the functionality and internal structure of the chip via techniques such as depackaging/delayering, high-resolution imaging, probing, and side-channel examination. With this knowledge, an attacker can more efficiently mount various attacks, clone/-counterfeit the design possibly with hardware Trojans inserted, and discover trade secrets. We propose a gate camouflaging technique that relies on the usage of different threshold voltage transistors, but with identical layouts, to determine the logic gate function. In our threshold voltage defined (TVD) camouflaging technique, every TVD logic gate has the same physical structure and is one time mask programmed with different threshold implants for different boolean functionality. We design and implement TVD logic gates in an industrial 65nm bulk CMOS process. Using post-layout extracted simulation, we evaluate the logic style for VLSI overheads (area, power, delay) versus conventional logic, for process variablity robustness, and for various security metrics. Further, we evaluate the macro block overheads for ISCAS benchmark designs under various levels of TVD gate replacement upto and including 100% replacement. TVD logic gates are found to be CMOS process compatible, low overhead, and to increase security against various forms of attacks.",
"title": ""
},
{
"docid": "8047c0ba3b0a2838e7df95c8246863f4",
"text": "Neurons in the ventral premotor cortex of the monkey encode the locations of visual, tactile, auditory and remembered stimuli. Some of these neurons encode the locations of stimuli with respect to the arm, and may be useful for guiding movements of the arm. Others encode the locations of stimuli with respect to the head, and may be useful for guiding movements of the head. We suggest that a general principle of sensory-motor integration is that the space surrounding the body is represented in body-part-centered coordinates. That is, there are multiple coordinate systems used to guide movement, each one attached to a different part of the body. This and other recent evidence from both monkeys and humans suggest that the formation of spatial maps in the brain and the guidance of limb and body movements do not proceed in separate stages but are closely integrated in both the parietal and frontal lobes.",
"title": ""
},
{
"docid": "7c4a0bcdad82d36e3287f8b7e812f501",
"text": "In this paper, a face and hand gesture recognition system which can be applied to a smart TV interaction system is proposed. Human face and natural hand gesture are the key component to interact with smart TV system. The face recognition system is used in viewer authentication and the hand gesture recognition in control of smart TV, for example, volume up/down, channel changing. Personalized service such as favorite channels recommendation or parental guidance can be provided using face recognition. We show that the face recognition detection rate is about 99% and the face recognition rate is about 97% by using DGIST database. Also, hand detection rate is about 98% at distance of 1 meter, 1.5 meter, and 2 meter, respectively. Overall 5 type hand gesture recognition rate is about 80% using support vector machine (SVM).",
"title": ""
},
{
"docid": "066c1cc76c6aa0d1972c39c282e3df70",
"text": "Celebrity endorsement is considered as one of the most known marketing tools in the cosmetics industry. It is considered as a winning strategy to build a unique identity for the brand. Several factors must be considered when choosing a celebrity. Even though it’s not an easy task to select the proper celebrity, it is even tougher to create a match between the celebrity and the brand. The objective of this study is to study the relationship between celebrity endorsement and the brands and the effect of celebrity endorsement on brand loyalty. A questioner was prepared based on the celebrity endorsement factors model done by Seno & Lukas, 2007 and the brand loyalty factors. This questionnaire was distributed to 300 respondents in the Lebanese market and concluded that brand loyalty is affected by the celebrity’s attractiveness, celebrity’s activation and finally the multiplicity of celebrities. Celebrity endorsement if done properly can have a great positive impact on marketing a cosmetics brand.",
"title": ""
},
{
"docid": "73e398a5ae434dbd2a10ddccd2cfb813",
"text": "Face alignment aims to estimate the locations of a set of landmarks for a given image. This problem has received much attention as evidenced by the recent advancement in both the methodology and performance. However, most of the existing works neither explicitly handle face images with arbitrary poses, nor perform large-scale experiments on non-frontal and profile face images. In order to address these limitations, this paper proposes a novel face alignment algorithm that estimates both 2D and 3D landmarks and their 2D visibilities for a face image with an arbitrary pose. By integrating a 3D point distribution model, a cascaded coupled-regressor approach is designed to estimate both the camera projection matrix and the 3D landmarks. Furthermore, the 3D model also allows us to automatically estimate the 2D landmark visibilities via surface normal. We use a substantially larger collection of all-pose face images to evaluate our algorithm and demonstrate superior performances than the state-of-the-art methods.",
"title": ""
},
{
"docid": "a8d616897b7cbb1182d5f6e8cf4318a9",
"text": "User behaviour targeting is essential in online advertising. Compared with sponsored search keyword targeting and contextual advertising page content targeting, user behaviour targeting builds users’ interest profiles via tracking their online behaviour and then delivers the relevant ads according to each user’s interest, which leads to higher targeting accuracy and thus more improved advertising performance. The current user profiling methods include building keywords and topic tags or mapping users onto a hierarchical taxonomy. However, to our knowledge, there is no previous work that explicitly investigates the user online visits similarity and incorporates such similarity into their ad response prediction. In this work, we propose a general framework which learns the user profiles based on their online browsing behaviour, and transfers the learned knowledge onto prediction of their ad response. Technically, we propose a transfer learning model based on the probabilistic latent factor graphic models, where the users’ ad response profiles are generated from their online browsing profiles. The large-scale experiments based on real-world data demonstrate significant improvement of our solution over some strong baselines.",
"title": ""
},
{
"docid": "33fed2809c57080110b00e5b3994d19a",
"text": "Suppose we are given a set of generators for a group G of permutations of a colored set A. The color automorphism problem for G involves finding generators for the subgroup of G which stabilizes the color classes. Testing isomorphism of graphs of valence ≤ t is polynomial-time reducible to the color automorphism problem for groups with small simple sections. The algorithm for the latter problem involves several divide-and-conquer tricks. The problem is solved sequentially on the G-orbits. An orbit is broken into a minimal set of blocks permuted by G. The hypothesis on G guarantees the existence of a 'large' subgroup P which acts as a p-group on the blocks. A similar process is repeated for each coset of P on G. Some results on primitive permutation groups are used to show that the algorithm runs in polynomial time.",
"title": ""
},
{
"docid": "b3231aa69d2a7ebd75ca1d1d52f8e2bb",
"text": "Received: February 11, 2016 / Revision Received: April 6, 2016 / Accepted: April 12, 2016 Correspondence: Chee Jeong Kim, MD, Division of Cardiology, Department of Internal Medicine, Chung-Ang University College of Medicine, 84 Heukseok-ro, Dongjak-gu, Seoul 06974, Korea / Tel: 82-2-6299-1398 / Fax: 82-2-822-2769 / E-mail: cjkim@cau.ac.kr * This guideline is translated from “J Lipid Atheroscler 2015;4(1):61-92”. * Full list of Committee information is available at the end of this article. • The authors have no financial conflicts of interest.",
"title": ""
},
{
"docid": "4bfb3823edf6dece64ebf5cee80368e0",
"text": "As ontology development becomes a more ubiquitous and collaborative process, the developers face the problem of maintaining versions of ontologies akin to maintaining versions of software code in large software projects. Versioning systems for software code provide mechanisms for tracking versions, checking out versions for editing, comparing different versions, and so on. We can directly reuse many of these mechanisms for ontology versioning. However, version comparison for code is based on comparing text files--an approach that does not work for comparing ontologies. Two ontologies can be identical but have different text representation. We have developed the PROMPTDIFF algorithm, which integrates different heuristic matchers for comparing ontology versions. We combine these matchers in a fixed-point manner, using the results of one matcher as an input for others until the matchers produce no more changes. The current implementation includes ten matchers but the approach is easily extendable to an arbitrary number of matchers. Our evaluation showed that PROMPTDIFF correctly identified 96% of the matches in ontology versions from large projects.",
"title": ""
},
{
"docid": "747e46fc4621604d6f551d909cbdf42b",
"text": "Computational creativity is an emerging branch of artificial intelligence that places computers in the center of the creative process. This demonstration shows a computational system that creates flavorful, novel, and perhaps healthy culinary recipes by drawing on big data techniques. It brings analytics algorithms together with disparate data sources from culinary science, chemistry, and hedonic psychophysics.\n In its most powerful manifestation, the system operates through a mixed-initiative approach to human-computer interaction via turns between human and computer. In particular, the sequential creation process is modeled after stages in human cognitive processes of creativity.\n The end result is an ingredient list, ingredient proportions, as well as a directed acyclic graph representing a partial ordering of culinary recipe steps.",
"title": ""
},
{
"docid": "1791d0ce8872fcd1f5fef3aa260be504",
"text": "Recently, remarkable progress has been achieved in human action recognition and detection by using deep learning techniques. However, for action detection in real-world untrimmed videos, the accuracies of most existing approaches are still far from satisfactory, due to the difficulties in temporal action localization. On the other hand, the spatiotempoal features are not well utilized in recent work for video analysis. To tackle these problems, we propose a spatiotemporal, multi-task, 3D deep convolutional neural network to detect (including temporally localize and recognition) actions in untrimmed videos. First, we introduce a fusion framework which aims to extract video-level spatiotemporal features in the training phase. And we demonstrate the effectiveness of video-level features by evaluating our model on human action recognition task. Then, under the fusion framework, we propose a spatiotemporal multi-task network, which has two sibling output layers for action classification and temporal localization, respectively. To obtain precise temporal locations, we present a novel temporal regression method to revise the proposal window which contains an action. Meanwhile, in order to better utilize the rich motion information in videos, we introduce a novel video representation, interlaced images, as an additional network input stream. As a result, our model outperforms state-of-the-art methods for both action recognition and detection on standard benchmarks.",
"title": ""
},
{
"docid": "a84b5fa43c17eebd9cc3ddf2a0d2129e",
"text": "The lack of realistic and open benchmarking datasets for pedestrian visual-inertial odometry has made it hard to pinpoint differences in published methods. Existing datasets either lack a full six degree-of-freedom ground-truth or are limited to small spaces with optical tracking systems. We take advantage of advances in pure inertial navigation, and develop a set of versatile and challenging real-world computer vision benchmark sets for visual-inertial odometry. For this purpose, we have built a test rig equipped with an iPhone, a Google Pixel Android phone, and a Google Tango device. We provide a wide range of raw sensor data that is accessible on almost any modern-day smartphone together with a high-quality ground-truth track. We also compare resulting visual-inertial tracks from Google Tango, ARCore, and Apple ARKit with two recent methods published in academic forums. The data sets cover both indoor and outdoor cases, with stairs, escalators, elevators, office environments, a shopping mall, and metro station.",
"title": ""
},
{
"docid": "2474db9eed888bba6bb4dd08658bc4b6",
"text": "BACKGROUND\nThe anabolic effect of resistance exercise is enhanced by the provision of dietary protein.\n\n\nOBJECTIVES\nWe aimed to determine the ingested protein dose response of muscle (MPS) and albumin protein synthesis (APS) after resistance exercise. In addition, we measured the phosphorylation of candidate signaling proteins thought to regulate acute changes in MPS.\n\n\nDESIGN\nSix healthy young men reported to the laboratory on 5 separate occasions to perform an intense bout of leg-based resistance exercise. After exercise, participants consumed, in a randomized order, drinks containing 0, 5, 10, 20, or 40 g whole egg protein. Protein synthesis and whole-body leucine oxidation were measured over 4 h after exercise by a primed constant infusion of [1-(13)C]leucine.\n\n\nRESULTS\nMPS displayed a dose response to dietary protein ingestion and was maximally stimulated at 20 g. The phosphorylation of ribosomal protein S6 kinase (Thr(389)), ribosomal protein S6 (Ser(240/244)), and the epsilon-subunit of eukaryotic initiation factor 2B (Ser(539)) were unaffected by protein ingestion. APS increased in a dose-dependent manner and also reached a plateau at 20 g ingested protein. Leucine oxidation was significantly increased after 20 and 40 g protein were ingested.\n\n\nCONCLUSIONS\nIngestion of 20 g intact protein is sufficient to maximally stimulate MPS and APS after resistance exercise. Phosphorylation of candidate signaling proteins was not enhanced with any dose of protein ingested, which suggested that the stimulation of MPS after resistance exercise may be related to amino acid availability. Finally, dietary protein consumed after exercise in excess of the rate at which it can be incorporated into tissue protein stimulates irreversible oxidation.",
"title": ""
}
] |
scidocsrr
|
f0f85bae430e60dd956bde1be80ee925
|
Deep Android Malware Detection
|
[
{
"docid": "4ca5fec568185d3699c711cc86104854",
"text": "Attackers often create systems that automatically rewrite and reorder their malware to avoid detection. Typical machine learning approaches, which learn a classifier based on a handcrafted feature vector, are not sufficiently robust to such reorderings. We propose a different approach, which, similar to natural language modeling, learns the language of malware spoken through the executed instructions and extracts robust, time domain features. Echo state networks (ESNs) and recurrent neural networks (RNNs) are used for the projection stage that extracts the features. These models are trained in an unsupervised fashion. A standard classifier uses these features to detect malicious files. We explore a few variants of ESNs and RNNs for the projection stage, including Max-Pooling and Half-Frame models which we propose. The best performing hybrid model uses an ESN for the recurrent model, Max-Pooling for non-linear sampling, and logistic regression for the final classification. Compared to the standard trigram of events model, it improves the true positive rate by 98.3% at a false positive rate of 0.1%.",
"title": ""
},
{
"docid": "3024ca1aad7ec2f965117e3e943126f6",
"text": "Automatically generated malware is a significant problem for computer users. Analysts are able to manually investigate a small number of unknown files, but the best large-scale defense for detecting malware is automated malware classification. Malware classifiers often use sparse binary features, and the number of potential features can be on the order of tens or hundreds of millions. Feature selection reduces the number of features to a manageable number for training simpler algorithms such as logistic regression, but this number is still too large for more complex algorithms such as neural networks. To overcome this problem, we used random projections to further reduce the dimensionality of the original input space. Using this architecture, we train several very large-scale neural network systems with over 2.6 million labeled samples thereby achieving classification results with a two-class error rate of 0.49% for a single neural network and 0.42% for an ensemble of neural networks.",
"title": ""
}
] |
[
{
"docid": "d4793c300bca8137d0da7ffdde75a72b",
"text": "The expectation-maximization (EM) method can facilitate maximizing likelihood functions that arise in statistical estimation problems. In the classical EM paradigm, one iteratively maximizes the conditional log-likelihood of a single unobservable complete data space, rather than maximizing the intractable likelihood function for the measured or incomplete data. EM algorithms update all parameters simultaneously, which has two drawbacks: 1) slow convergence, and 2) difficult maximization steps due to coupling when smoothness penalties are used. This paper describes the space-alternating generalized EM (SAGE) method, which updates the parameters sequentially by alternating between several small hidden-data spaces defined by the algorithm designer. We prove that the sequence of estimates monotonically increases the penalized-likelihood objective, we derive asymptotic convergence rates, and we provide sufficient conditions for monotone convergence in norm. Two signal processing applications illustrate the method: estimation of superimposed signals in Gaussian noise, and image reconstruction from Poisson measurements. In both applications, our SAGE algorithms easily accommodate smoothness penalties and converge faster than the EM algorithms.",
"title": ""
},
{
"docid": "1c6898744a7d662f20a92f37ce6b7cf2",
"text": "Object-class segmentation is a computer vision task which requires labeling each pixel of an image with the class of the object it belongs to. Deep convolutional neural networks (DNN) are able to learn and exploit local spatial correlations required for this task. They are, however, restricted by their small, fixed-sized filters, which limits their ability to learn long-range dependencies. Recurrent Neural Networks (RNN), on the other hand, do not suffer from this restriction. Their iterative interpretation allows them to model long-range dependencies by propagating activity. This property might be especially useful when labeling video sequences, where both spatial and temporal long-range dependencies occur. In this work, we propose novel RNN architectures for object-class segmentation. We investigate three ways to consider past and future context in the prediction process by comparing networks that process the frames one by one with networks that have access to the whole sequence. We evaluate our models on the challenging NYU Depth v2 dataset for object-class segmentation and obtain competitive results.",
"title": ""
},
{
"docid": "d48ea163dd0cd5d80ba95beecee5102d",
"text": "Foodborne pathogens (FBP) represent an important threat to the consumers' health as they are able to cause different foodborne diseases. In order to eliminate the potential risk of those pathogens, lactic acid bacteria (LAB) have received a great attention in the food biotechnology sector since they play an essential function to prevent bacterial growth and reduce the biogenic amines (BAs) formation. The foodborne illnesses (diarrhea, vomiting, and abdominal pain, etc.) caused by those microbial pathogens is due to various reasons, one of them is related to the decarboxylation of available amino acids that lead to BAs production. The formation of BAs by pathogens in foods can cause the deterioration of their nutritional and sensory qualities. BAs formation can also have toxicological impacts and lead to different types of intoxications. The growth of FBP and their BAs production should be monitored and prevented to avoid such problems. LAB is capable of improving food safety by preventing foods spoilage and extending their shelf-life. LAB are utilized by the food industries to produce fermented products with their antibacterial effects as bio-preservative agents to extent their storage period and preserve their nutritive and gustative characteristics. Besides their contribution to the flavor for fermented foods, LAB secretes various antimicrobial substances including organic acids, hydrogen peroxide, and bacteriocins. Consequently, in this paper, the impact of LAB on the growth of FBP and their BAs formation in food has been reviewed extensively.",
"title": ""
},
{
"docid": "787377fc8e1f9da5ec2b6ea77bcc0725",
"text": "We show that the counting class LWPP [8] remains unchanged even if one allows a polynomial number of gap values rather than one. On the other hand, we show that it is impossible to improve this from polynomially many gap values to a superpolynomial number of gap values by relativizable proof techniques. The first of these results implies that the Legitimate Deck Problem (from the study of graph reconstruction) is in LWPP (and thus low for PP, i.e., PPLegitimate Deck = PP) if the weakened version of the Reconstruction Conjecture holds in which the number of nonisomorphic preimages is assumed merely to be polynomially bounded. This strengthens the 1992 result of Köbler, Schöning, and Torán [15] that the Legitimate Deck Problem is in LWPP if the Reconstruction Conjecture holds, and provides strengthened evidence that the Legitimate Deck Problem is not NP-hard. We additionally show on the one hand that our main LWPP robustness result also holds for WPP, and also holds even when one allows both the rejectionand acceptancegap-value targets to simultaneously be polynomial-sized lists; yet on the other hand, we show that for the #P-based analog of LWPP the behavior much differs in that, in some relativized worlds, even two target values already yield a richer class than one value does. 2012 ACM Subject Classification Theory of computation → Complexity classes",
"title": ""
},
{
"docid": "71b132b4c056e71c7f3ec5a600d1368e",
"text": "In this paper we have shown emotion recognition through EEG processing. Based on the literature work, we have made the best selection of the techniques available so far. This paper is easy to perceive, understand, and interpret the proposed method and the techniques like signal preprocessing, feature extraction and, signal classification in BCI. Our aim in this paper is to come up with an efficient and reliable emotion recognition system. Keywords— Computer-Interface, Signal Processing, Feature Extraction, EEG signals.",
"title": ""
},
{
"docid": "f9b7965888e180c6b07764dae8433a9d",
"text": "Job recommender systems are designed to suggest a ranked list of jobs that could be associated with employee's interest. Most of existing systems use only one approach to make recommendation for all employees, while a specific method normally is good enough for a group of employees. Therefore, this study proposes an adaptive solution to make job recommendation for different groups of user. The proposed methods are based on employee clustering. Firstly, we group employees into different clusters. Then, we select a suitable method for each user cluster based on empirical evaluation. The proposed methods include CB-Plus, CF-jFilter and HyR-jFilter have applied for different three clusters. Empirical results show that our proposed methods is outperformed than traditional methods.",
"title": ""
},
{
"docid": "4244db44909f759b2acdb1bd9d23632e",
"text": "This paper implements of a three phase grid synchronization for doubly-fed induction generators (DFIG) in wind generation system. A stator flux oriented vector is used to control the variable speed DFIG for the utility grid synchronization, active power and reactive power. Before synchronization, the stator voltage is adjusted equal to the amplitude of the grid voltage by controlling the d-axis rotor current. The frequency of stator voltage is synchronized with the grid by controlling the rotor flux angle equal to the difference between the rotor angle (mechanical speed in electrical degree) and the grid angle. The phase shift between stator voltage and the grid voltage is compensated by comparing the d-axis stator voltage and the grid voltage to generate a compensation angle. After the synchronization is achieved, the active power and reactive power are controlled to extract the optimum energy capture and fulfilled with the standard of utility grid requirements for wind turbine. The q-axis and d-axis rotor current are used to control the active and reactive power respectively. The implementation was conducted on a 1 kW conventional induction wound rotor controlled the digital signal controller board. The experimentation results confirm that the DFIG can be synchronized to the utility grid and the active power and the reactive power can be independently controlled.",
"title": ""
},
{
"docid": "e83439783fa90da0a14dac23f17c1825",
"text": "Counterfactual Analysis in Macroeconometrics: An Empirical Investigation into the Effects of Quantitative Easing This paper is concerned with ex ante and ex post counterfactual analyses in the case of macroeconometric applications where a single unit is observed before and after a given policy intervention. It distinguishes between cases where the policy change affects the model’s parameters and where it does not. It is argued that for ex post policy evaluation it is important that outcomes are conditioned on ex post realized variables that are invariant to the policy change but nevertheless influence the outcomes. The effects of the control variables that are determined endogenously with the policy outcomes can be solved out for the policy evaluation exercise. An ex post policy ineffectiveness test statistic is proposed. The analysis is applied to the evaluation of the effects of the quantitative easing (QE) in the UK after March 2009. It is estimated that a 100 basis points reduction in the spread due to QE has an impact effect on output growth of about one percentage point, but the policy impact is very quickly reversed with no statistically significant effects remaining within 9-12 months of the policy intervention. JEL Classification: C18, C54, E65",
"title": ""
},
{
"docid": "77f408e456970e32551767e847ca1c19",
"text": "Many graph analytics problems can be solved via iterative algorithms where the solutions are often characterized by a set of steady-state conditions. Different algorithms respect to different set of fixed point constraints, so instead of using these traditional algorithms, can we learn an algorithm which can obtain the same steady-state solutions automatically from examples, in an effective and scalable way? How to represent the meta learner for such algorithm and how to carry out the learning? In this paper, we propose an embedding representation for iterative algorithms over graphs, and design a learning method which alternates between updating the embeddings and projecting them onto the steadystate constraints. We demonstrate the effectiveness of our framework using a few commonly used graph algorithms, and show that in some cases, the learned algorithm can handle graphs with more than 100,000,000 nodes in a single machine.",
"title": ""
},
{
"docid": "bae2f948eca1dc88cbcd5cb2e6165d3b",
"text": "Important attributes of 3D brain cortex segmentation algorithms include robustness, accuracy, computational efficiency, and facilitation of user interaction, yet few algorithms incorporate all of these traits. Manual segmentation is highly accurate but tedious and laborious. Most automatic techniques, while less demanding on the user, are much less accurate. It would be useful to employ a fast automatic segmentation procedure to do most of the work but still allow an expert user to interactively guide the segmentation to ensure an accurate final result. We propose a novel 3D brain cortex segmentation procedure utilizing dual-front active contours which minimize image-based energies in a manner that yields flexibly global minimizers based on active regions. Region-based information and boundary-based information may be combined flexibly in the evolution potentials for accurate segmentation results. The resulting scheme is not only more robust but much faster and allows the user to guide the final segmentation through simple mouse clicks which add extra seed points. Due to the flexibly global nature of the dual-front evolution model, single mouse clicks yield corrections to the segmentation that extend far beyond their initial locations, thus minimizing the user effort. Results on 15 simulated and 20 real 3D brain images demonstrate the robustness, accuracy, and speed of our scheme compared with other methods.",
"title": ""
},
{
"docid": "e63a8b6595e1526a537b0881bc270542",
"text": "The CTD which stands for “Conductivity-Temperature-Depth” is one of the most used instruments for the oceanographic measurements. MEMS based CTD sensor components consist of a conductivity sensor (C), temperature sensor (T) and a piezo resistive pressure sensor (D). CTDs are found in every marine related institute and navy throughout the world as they are used to produce the salinity profile for the area of the ocean under investigation and are also used to determine different oceanic parameters. This research paper provides the design, fabrication and initial test results on a prototype CTD sensor.",
"title": ""
},
{
"docid": "df3ef3feeaf787315188db2689dc6fb9",
"text": "Multi-class weather classification from single images is a fundamental operation in many outdoor computer vision applications. However, it remains difficult and the limited work is carried out for addressing the difficulty. Moreover, existing method is based on the fixed scene. In this paper we present a method for any scenario multi-class weather classification based on multiple weather features and multiple kernel learning. Our approach extracts multiple weather features and takes properly processing. By combining these features into high dimensional vectors, we utilize multiple kernel learning to learn an adaptive classifier. We collect an outdoor image set that contains 20K images called MWI (Multi-class Weather Image) set. Experimental results show that the proposed method can efficiently recognize weather on MWI dataset.",
"title": ""
},
{
"docid": "8321eecac6f8deb25ffd6c1b506c8ee3",
"text": "Propelled by a fast evolving landscape of techniques and datasets, data science is growing rapidly. Against this background, topological data analysis (TDA) has carved itself a niche for the analysis of datasets that present complex interactions and rich structures. Its distinctive feature, topology, allows TDA to detect, quantify and compare the mesoscopic structures of data, while also providing a language able to encode interactions beyond networks. Here we briefly present the TDA paradigm and some applications, in order to highlight its relevance to the data science community.",
"title": ""
},
{
"docid": "fac9465df30dd5d9ba5bc415b2be8172",
"text": "In the Railway System, Railway Signalling System is the vital control equipment responsible for the safe operation of trains. In Railways, the system of communication from railway stations and running trains is by the means of signals through wired medium. Once the train leaves station, there is no communication between the running train and the station or controller. Hence, in case of failures or in emergencies in between stations, immediate information cannot be given and a particular problem will escalate with valuable time lost. Because of this problem only a single train can run in between two nearest stations. Now a days, Railway all over the world is using Optical Fiber cable for communication between stations and to send signals to trains. The usage of optical fibre cables does not lend itself for providing trackside communication as in the case of copper cable. Hence, another transmission medium is necessary for communication outside the station limits with drivers, guards, maintenance gangs, gateman etc. Obviously the medium of choice for such communication is wireless. With increasing speed and train density, adoption of train control methods such as Automatic warning system, (AWS) or, Automatic train stop (ATS), or Positive train separation (PTS) is a must. Even though, these methods traditionally pick up their signals from track based beacons, Wireless Sensor Network based systems will suit the Railways much more. In this paper, we described a new and innovative medium for railways that is Wireless Sensor Network (WSN) based Railway Signalling System and conclude that Introduction of WSN in Railways will not only achieve economy but will also improve the level of safety and efficiency of train operations.",
"title": ""
},
{
"docid": "948295ca3a97f7449548e58e02dbdd62",
"text": "Neural computations are often compared to instrument-measured distance or duration, and such relationships are interpreted by a human observer. However, neural circuits do not depend on human-made instruments but perform computations relative to an internally defined rate-of-change. While neuronal correlations with external measures, such as distance or duration, can be observed in spike rates or other measures of neuronal activity, what matters for the brain is how such activity patterns are utilized by downstream neural observers. We suggest that hippocampal operations can be described by the sequential activity of neuronal assemblies and their internally defined rate of change without resorting to the concept of space or time.",
"title": ""
},
{
"docid": "18c30c601e5f52d5117c04c85f95105b",
"text": "Crohn's disease is a relapsing systemic inflammatory disease, mainly affecting the gastrointestinal tract with extraintestinal manifestations and associated immune disorders. Genome wide association studies identified susceptibility loci that--triggered by environmental factors--result in a disturbed innate (ie, disturbed intestinal barrier, Paneth cell dysfunction, endoplasmic reticulum stress, defective unfolded protein response and autophagy, impaired recognition of microbes by pattern recognition receptors, such as nucleotide binding domain and Toll like receptors on dendritic cells and macrophages) and adaptive (ie, imbalance of effector and regulatory T cells and cytokines, migration and retention of leukocytes) immune response towards a diminished diversity of commensal microbiota. We discuss the epidemiology, immunobiology, amd natural history of Crohn's disease; describe new treatment goals and risk stratification of patients; and provide an evidence based rational approach to diagnosis (ie, work-up algorithm, new imaging methods [ie, enhanced endoscopy, ultrasound, MRI and CT] and biomarkers), management, evolving therapeutic targets (ie, integrins, chemokine receptors, cell-based and stem-cell-based therapies), prevention, and surveillance.",
"title": ""
},
{
"docid": "0cf81998c0720405e2197c62afa08ee7",
"text": "User-generated online reviews can play a significant role in the success of retail products, hotels, restaurants, etc. However, review systems are often targeted by opinion spammers who seek to distort the perceived quality of a product by creating fraudulent reviews. We propose a fast and effective framework, FRAUDEAGLE, for spotting fraudsters and fake reviews in online review datasets. Our method has several advantages: (1) it exploits the network effect among reviewers and products, unlike the vast majority of existing methods that focus on review text or behavioral analysis, (2) it consists of two complementary steps; scoring users and reviews for fraud detection, and grouping for visualization and sensemaking, (3) it operates in a completely unsupervised fashion requiring no labeled data, while still incorporating side information if available, and (4) it is scalable to large datasets as its run time grows linearly with network size. We demonstrate the effectiveness of our framework on synthetic and real datasets; where FRAUDEAGLE successfully reveals fraud-bots in a large online app review database. Introduction The Web has greatly enhanced the way people perform certain activities (e.g. shopping), find information, and interact with others. Today many people read/write reviews on merchant sites, blogs, forums, and social media before/after they purchase products or services. Examples include restaurant reviews on Yelp, product reviews on Amazon, hotel reviews on TripAdvisor, and many others. Such user-generated content contains rich information about user experiences and opinions, which allow future potential customers to make better decisions about spending their money, and also help merchants improve their products, services, and marketing. Since online reviews can directly influence customer purchase decisions, they are crucial to the success of businesses. While positive reviews with high ratings can yield financial gains, negative reviews can damage reputation and cause monetary loss. This effect is magnified as the information spreads through the Web (Hitlin 2003; Mendoza, Poblete, and Castillo 2010). As a result, online review systems are attractive targets for opinion fraud. Opinion fraud involves reviewers (often paid) writing bogus reviews (Kost May 2012; Copyright c © 2013, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Streitfeld August 2011). These spam reviews come in two flavors: defaming-spam which untruthfully vilifies, or hypespam that deceitfully promotes the target product. The opinion fraud detection problem is to spot the fake reviews in online sites, given all the reviews on the site, and for each review, its text, its author, the product it was written for, timestamp of posting, and its star-rating. Typically no user profile information is available (or is self-declared and cannot be trusted), while more side information for products (e.g. price, brand), and for reviews (e.g. number of (helpful) feedbacks) could be available depending on the site. Detecting opinion fraud, as defined above, is a non-trivial and challenging problem. Fake reviews are often written by experienced professionals who are paid to write high quality, believable reviews. As a result, it is difficult for an average potential customer to differentiate bogus reviews from truthful ones, just by looking at individual reviews text(Ott et al. 2011). 
As such, manual labeling of reviews is hard and ground truth information is often unavailable, which makes training supervised models less attractive for this problem. Summary of previous work. Previous attempts at solving the problem use several heuristics, such as duplicated reviews (Jindal and Liu 2008), or acquire bogus reviews from non-experts (Ott et al. 2011), to generate pseudo-ground truth, or a reference dataset. This data is then used for learning classification models together with carefully engineered features. One downside of such techniques is that they do not generalize: one needs to collect new data and train a new model for review data from a different domain, e.g., hotel vs. restaurant reviews. Moreover feature selection becomes a tedious sub-problem, as datasets from different domains might exhibit different characteristics. Other feature-based proposals include (Lim et al. 2010; Mukherjee, Liu, and Glance 2012). A large body of work on fraud detection relies on review text information (Jindal and Liu 2008; Ott et al. 2011; Feng, Banerjee, and Choi 2012) or behavioral evidence (Lim et al. 2010; Xie et al. 2012; Feng et al. 2012), and ignores the connectivity structure of review data. On the other hand, the network of reviewers and products contains rich information that implicitly represents correlations among these entities. The review network is also invaluable for detecting teams of fraudsters that operate collaboratively on targeted products. Our contributions. In this work we propose an unsupervised",
"title": ""
},
{
"docid": "b6941ca6cf4103f7608000ea5f8c838e",
"text": "The literature on Inverse Reinforcement Learning (IRL) typically assumes that humans take actions in order to minimize the expected value of a cost function, i.e., that humans are risk neutral. Yet, in practice, humans are often far from being risk neutral. To fill this gap, the objective of this paper is to devise a framework for risk-sensitive IRL in order to explicitly account for an expert’s risk sensitivity. To this end, we propose a flexible class of models based on coherent risk metrics, which allow us to capture an entire spectrum of risk preferences from risk-neutral to worst-case. We propose efficient algorithms based on Linear Programming for inferring an expert’s underlying risk metric and cost function for a rich class of static and dynamic decision-making settings. The resulting approach is demonstrated on a simulated driving game with ten human participants. Our method is able to infer and mimic a wide range of qualitatively different driving styles from highly risk-averse to risk-neutral in a data-efficient manner. Moreover, comparisons of the RiskSensitive (RS) IRL approach with a risk-neutral model show that the RS-IRL framework more accurately captures observed participant behavior both qualitatively and quantitatively.",
"title": ""
},
{
"docid": "2fb5f1e17e888049bd0f506f3a37f377",
"text": "While the Semantic Web has evolved to support the meaningful exchange of heterogeneous data through shared and controlled conceptualisations, Web 2.0 has demonstrated that large-scale community tagging sites can enrich the semantic web with readily accessible and valuable knowledge. In this paper, we investigate the integration of a movies folksonomy with a semantic knowledge base about usermovie rentals. The folksonomy is used to enrich the knowledge base with descriptions and categorisations of movie titles, and user interests and opinions. Using tags harvested from the Internet Movie Database, and movie rating data gathered by Netflix, we perform experiments to investigate the question that folksonomy-generated movie tag-clouds can be used to construct better user profiles that reflect a user’s level of interest in different kinds of movies, and therefore, provide a basis for prediction of their rating for a previously unseen movie.",
"title": ""
},
{
"docid": "93388c2897ec6ec7141bcc820ab6734c",
"text": "We address the task of single depth image inpainting. Without the corresponding color images, previous or next frames, depth image inpainting is quite challenging. One natural solution is to regard the image as a matrix and adopt the low rank regularization just as color image inpainting. However, the low rank assumption does not make full use of the properties of depth images. A shallow observation inspires us to penalize the nonzero gradients by sparse gradient regularization. However, statistics show that though most pixels have zero gradients, there is still a non-ignorable part of pixels, whose gradients are small but nonzero. Based on this property of depth images, we propose a low gradient regularization method in which we reduce the penalty for small gradients while penalizing the nonzero gradients to allow for gradual depth changes. The proposed low gradient regularization is integrated with the low rank regularization into the low rank low gradient approach for depth image inpainting. We compare our proposed low gradient regularization with the sparse gradient regularization. The experimental results show the effectiveness of our proposed approach.",
"title": ""
}
] |
scidocsrr
|
d816d007b773fb8eac90261bfde83fe0
|
The Internet of Things: Opportunities and Challenges for Distributed Data Analysis
|
[
{
"docid": "4f7fbc3f313e68456e57a2d6d3c90cd0",
"text": "This survey paper describes a focused literature survey of machine learning (ML) and data mining (DM) methods for cyber analytics in support of intrusion detection. Short tutorial descriptions of each ML/DM method are provided. Based on the number of citations or the relevance of an emerging method, papers representing each method were identified, read, and summarized. Because data are so important in ML/DM approaches, some well-known cyber data sets used in ML/DM are described. The complexity of ML/DM algorithms is addressed, discussion of challenges for using ML/DM for cyber security is presented, and some recommendations on when to use a given method are provided.",
"title": ""
}
] |
[
{
"docid": "245204d71a7ba2f56897ccb67f26b595",
"text": "The objective of the study is to describe distinguishing characteristics of commercial sexual exploitation of children/child sex trafficking victims (CSEC) who present for health care in the pediatric setting. This is a retrospective study of patients aged 12-18 years who presented to any of three pediatric emergency departments or one child protection clinic, and who were identified as suspected victims of CSEC. The sample was compared with gender and age-matched patients with allegations of child sexual abuse/sexual assault (CSA) without evidence of CSEC on variables related to demographics, medical and reproductive history, high-risk behavior, injury history and exam findings. There were 84 study participants, 27 in the CSEC group and 57 in the CSA group. Average age was 15.7 years for CSEC patients and 15.2 years for CSA patients; 100% of the CSEC and 94.6% of the CSA patients were female. The two groups significantly differed in 11 evaluated areas with the CSEC patients more likely to have had experiences with violence, substance use, running away from home, and involvement with child protective services and/or law enforcement. CSEC patients also had a longer history of sexual activity. Adolescent CSEC victims differ from sexual abuse victims without evidence of CSEC in their reproductive history, high risk behavior, involvement with authorities, and history of violence.",
"title": ""
},
{
"docid": "91382399e6341aed45a00b8fa3203005",
"text": "This paper presents a circularly polarized antenna on thin and flexible Denim substrate for Industrial, Scientific and Medical (ISM) band and Wireless Body Area Network (WBAN) applications at 2.45 GHz. Copper tape is used as the conductive material on 1 mm thick Denim substrate. Circular polarization is achieved by introducing rectangular slot along diagonal axes at the center of the circular patch radiator. Bandwidth enhancement is done using partial and slotted ground plane. The measured impedance bandwidth of the proposed antenna is 6.4 % (2.42 GHz to 2.58 GHz) or 160 MHz. The antenna exhibits good radiation characteristics with gain of 2.25 dB. Simulated and measured results are presented to validate the operability of antenna within the proposed frequency bands.",
"title": ""
},
{
"docid": "df701752c19f1b0ff56555a89201d0a9",
"text": "This paper presents two new formulations of multiple-instance learning as a maximum margin problem. The proposed extensions of the Support Vector Machine (SVM) learning approach lead to mixed integer quadratic programs that can be solved heuristically. Our generalization of SVMs makes a state-of-the-art classification technique, including non-linear classification via kernels, available to an area that up to now has been largely dominated by special purpose methods. We present experimental results on a pharmaceutical data set and on applications in automated image indexing and document categorization.",
"title": ""
},
{
"docid": "6fb0459adccd26015ee39897da52d349",
"text": "Each year, thousands of software vulnerabilities are discovered and reported to the public. Unpatched known vulnerabilities are a significant security risk. It is imperative that software vendors quickly provide patches once vulnerabilities are known and users quickly install those patches as soon as they are available. However, most vulnerabilities are never actually exploited. Since writing, testing, and installing software patches can involve considerable resources, it would be desirable to prioritize the remediation of vulnerabilities that are likely to be exploited. Several published research studies have reported moderate success in applying machine learning techniques to the task of predicting whether a vulnerability will be exploited. These approaches typically use features derived from vulnerability databases (such as the summary text describing the vulnerability) or social media posts that mention the vulnerability by name. However, these prior studies share multiple methodological shortcomings that inflate predictive power of these approaches. We replicate key portions of the prior work, compare their approaches, and show how selection of training and test data critically affect the estimated performance of predictive models. The results of this study point to important methodological considerations that should be taken into account so that results reflect real-world utility.",
"title": ""
},
{
"docid": "aff616ace9df0421d72d56b3ea37c851",
"text": "We investigate the pertinence of methods from algebraic topology for text data analysis. These methods enable the development of mathematically-principled isometric-invariant mappings from a set of vectors to a document embedding, which is stable with respect to the geometry of the document in the selected metric space. In this work, we evaluate the utility of these topology-based document representations in traditional NLP tasks, specifically document clustering and sentiment classification. We find that the embeddings do not benefit text analysis. In fact, performance is worse than simple techniques like tf-idf, indicating that the geometry of the document does not provide enough variability for classification on the basis of topic or sentiment in the chosen",
"title": ""
},
{
"docid": "96bb4155000096c1cba6285ad82c9a4d",
"text": "0747-5632/$ see front matter 2011 Elsevier Ltd. A doi:10.1016/j.chb.2011.10.002 ⇑ Corresponding author. Tel.: +65 6790 6636; fax: + E-mail addresses: leecs@ntu.edu.sg (C.S. Lee), malo 1 Tel.: +65 67905772; fax: +65 6791 5214. Recent events indicate that sharing news in social media has become a phenomenon of increasing social, economic and political importance because individuals can now participate in news production and diffusion in large global virtual communities. Yet, knowledge about factors influencing news sharing in social media remains limited. Drawing from the uses and gratifications (U&G) and social cognitive theories (SCT), this study explored the influences of information seeking, socializing, entertainment, status seeking and prior social media sharing experience on news sharing intention. A survey was designed and administered to 203 students in a large local university. Results from structural equation modeling (SEM) analysis revealed that respondents who were driven by gratifications of information seeking, socializing, and status seeking were more likely to share news in social media platforms. Prior experience with social media was also a significant determinant of news sharing intention. Implications and directions for future work are discussed. 2011 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "f33147619ba2d24efcea9e32f70c7695",
"text": "The wide use of micro bloggers such as Twitter offers a valuable and reliable source of information during natural disasters. The big volume of Twitter data calls for a scalable data management system whereas the semi-structured data analysis requires full-text searching function. As a result, it becomes challenging yet essential for disaster response agencies to take full advantage of social media data for decision making in a near-real-time fashion. In this work, we use Lucene to empower HBase with full-text searching ability to build a scalable social media data analytics system for observing and analyzing human behaviors during the Hurricane Sandy disaster. Experiments show the scalability and efficiency of the system. Furthermore, the discovery of communities has the benefit of identifying influential users and tracking the topical changes as the disaster unfolds. We develop a novel approach to discover communities in Twitter by applying spectral clustering algorithm to retweet graph. The topics and influential users of each community are also analyzed and demonstrated using Latent Semantic Indexing (LSI).",
"title": ""
},
{
"docid": "ba58efc16a48e8a2203189781d58cb03",
"text": "Introduction The typical size of large networks such as social network services, mobile phone networks or the web now counts in millions when not billions of nodes and these scales demand new methods to retrieve comprehensive information from their structure. A promising approach consists in decomposing the networks into communities of strongly connected nodes, with the nodes belonging to different communities only sparsely connected. Finding exact optimal partitions in networks is known to be computationally intractable, mainly due to the explosion of the number of possible partitions as the number of nodes increases. It is therefore of high interest to propose algorithms to find reasonably “good” solutions of the problem in a reasonably “fast” way. One of the fastest algorithms consists in optimizing the modularity of the partition in a greedy way (Clauset et al, 2004), a method that, even improved, does not allow to analyze more than a few millions nodes (Wakita et al, 2007).",
"title": ""
},
{
"docid": "d1fd4d535052a1c2418259c9b2abed66",
"text": "BACKGROUND\nSit-to-stand tests (STST) have recently been developed as easy-to-use field tests to evaluate exercise tolerance in COPD patients. As several modalities of the test exist, this review presents a synthesis of the advantages and limitations of these tools with the objective of helping health professionals to identify the STST modality most appropriate for their patients.\n\n\nMETHOD\nSeventeen original articles dealing with STST in COPD patients have been identified and analysed including eleven on 1min-STST and four other versions of the test (ranging from 5 to 10 repetitions and from 30 s to 3 min). In these studies the results obtained in sit-to-stand tests and the recorded physiological variables have been correlated with the results reported in other functional tests.\n\n\nRESULTS\nA good set of correlations was achieved between STST performances and the results reported in other functional tests, as well as quality of life scores and prognostic index. According to the different STST versions the processes involved in performance are different and consistent with more or less pronounced associations with various physical qualities. These tests are easy to use in a home environment, with excellent metrological properties and responsiveness to pulmonary rehabilitation, even though repetition of the same movement remains a fragmented and restrictive approach to overall physical evaluation.\n\n\nCONCLUSIONS\nThe STST appears to be a relevant and valid tool to assess functional status in COPD patients. While all versions of STST have been tested in COPD patients, they should not be considered as equivalent or interchangeable.",
"title": ""
},
{
"docid": "ff4a3a0c5288c69023c0d97a32ee5d6a",
"text": "1 We present a software tool for simulations of flow and multi‐component solute transport in 2 two and three‐dimensional domains in combination with comprehensive intra‐phase and 3 inter‐phase geochemistry. The software uses IPhreeqc as a reaction engine to the multi‐ 4 purpose, multidimensional finite element solver COMSOL Multiphysics® for flow and 5 transport simulations. Here we used COMSOL to solve Richards' equation for aqueous phase 6 flow in variably saturated porous media. The coupling procedure presented is in principle 7 applicable to any simulation of aqueous phase flow and solute transport in COMSOL. The 8 coupling with IPhreeqc gives major advantages over COMSOL's built‐in reaction capabilities, 9 i.e., the soil solution is speciated from its element composition according to thermodynamic 10 mass action equations with ion activity corrections. State‐of‐the‐art adsorption models such 11 as surface complexation with diffuse double layer calculations are accessible. In addition, 12 IPhreeqc provides a framework to integrate user‐defined kinetic reactions with possible 13 dependencies on solution speciation (i.e., pH, saturation indices, and ion activities), allowing 14 for modelling of microbially mediated reactions. Extensive compilations of geochemical 15 reactions and their parameterization are accessible through associated databases. Research highlights 20 Coupling of COMSOL and PHREEQC facilitates simulation of variably saturated flow 21 with comprehensive geochemical reactions. 22 The use of finite elements allows for the simulation of flow and solute transport in 23 complex 2 and 3D domains. 24 Geochemical reactions are coupled via sequential non‐iterative operator splitting. 25 The software tool provides novel capabilities for investigations of contaminant 26 behaviour in variably saturated porous media and agricultural management. 27 3 Software requirements 28 COMSOL Multiphysics® including Earth Science Module (tested version: 3.5a; due to a 29 memory leak in versions 4.0 and 4.0a, these are not suitable for the presented coupling) 30 Price for single user academic license including Earth Science Module ca. 2000 € 31 Matlab® (tested versions: 7.9, 7.10) 32 Price for single user academic license including Parallel Computing Toolbox ca. 650 € 33 IPhreeqc (COM‐version, available free of charge at 34 The coupling files together with animations of the presented simulations are available at 36",
"title": ""
},
{
"docid": "8fbbeeae48118cfd2f77e6a7bb224c0c",
"text": "JSTOR is a not-for-profit service that helps scholars, researchers, and students discover, use, and build upon a wide range of content in a trusted digital archive. We use information technology and tools to increase productivity and facilitate new forms of scholarship. For more information about JSTOR, please contact support@jstor.org. American Educational Research Association is collaborating with JSTOR to digitize, preserve and extend access to Educational Researcher.",
"title": ""
},
{
"docid": "9006f257d25a9ba4dd2ae07eccccb0c2",
"text": "Using memoization and various other optimization techniques, the number of dissections of the n × n square into n polyominoes of size n is computed for n ≤ 8. On this task our method outperforms Donald Knuth’s Algorithm X with Dancing Links. The number of jigsaw sudoku puzzle solutions is computed for n ≤ 7. For every jigsaw sudoku puzzle polyomino cover with n ≤ 6 the size of its smallest critical sets is determined. Furthermore it is shown that for every n ≥ 4 there exists a polyomino cover that does not allow for any sudoku puzzle solution. We give a closed formula for the number of possible ways to fill the border of an n × n square with numbers while obeying Latin square constraints. We define a cannibal as a nonempty hyperpolyomino that disconnects its exterior from its interior, where the interior is exactly the size of the hyperpolyomino itself, and we present the smallest found cannibals in two and three dimensions.",
"title": ""
},
{
"docid": "e433da4c3128a48c4c2fad39ddb55ac1",
"text": "Vector field design on surfaces is necessary for many graphics applications: example-based texture synthesis, nonphotorealistic rendering, and fluid simulation. For these applications, singularities contained in the input vector field often cause visual artifacts. In this article, we present a vector field design system that allows the user to create a wide variety of vector fields with control over vector field topology, such as the number and location of singularities. Our system combines basis vector fields to make an initial vector field that meets user specifications.The initial vector field often contains unwanted singularities. Such singularities cannot always be eliminated due to the Poincaré-Hopf index theorem. To reduce the visual artifacts caused by these singularities, our system allows the user to move a singularity to a more favorable location or to cancel a pair of singularities. These operations offer topological guarantees for the vector field in that they only affect user-specified singularities. We develop efficient implementations of these operations based on Conley index theory. Our system also provides other editing operations so that the user may change the topological and geometric characteristics of the vector field.To create continuous vector fields on curved surfaces represented as meshes, we make use of the ideas of geodesic polar maps and parallel transport to interpolate vector values defined at the vertices of the mesh. We also use geodesic polar maps and parallel transport to create basis vector fields on surfaces that meet the user specifications. These techniques enable our vector field design system to work for both planar domains and curved surfaces.We demonstrate our vector field design system for several applications: example-based texture synthesis, painterly rendering of images, and pencil sketch illustrations of smooth surfaces.",
"title": ""
},
{
"docid": "bca883795052e1c14553600f40a0046b",
"text": "The SEIR model with nonlinear incidence rates in epidemiology is studied. Global stability of the endemic equilibrium is proved using a general criterion for the orbital stability of periodic orbits associated with higher-dimensional nonlinear autonomous systems as well as the theory of competitive systems of differential equations.",
"title": ""
},
{
"docid": "ba1cbd5fcd98158911f4fb6f677863f9",
"text": "Classical approaches to clean data have relied on using integrity constraints, statistics, or machine learning. These approaches are known to be limited in the cleaning accuracy, which can usually be improved by consulting master data and involving experts to resolve ambiguity. The advent of knowledge bases KBs both general-purpose and within enterprises, and crowdsourcing marketplaces are providing yet more opportunities to achieve higher accuracy at a larger scale. We propose KATARA, a knowledge base and crowd powered data cleaning system that, given a table, a KB, and a crowd, interprets table semantics to align it with the KB, identifies correct and incorrect data, and generates top-k possible repairs for incorrect data. Experiments show that KATARA can be applied to various datasets and KBs, and can efficiently annotate data and suggest possible repairs.",
"title": ""
},
{
"docid": "15cb8a43e4b6b2f30218fe994d1db51e",
"text": "In this paper, we present a home-monitoring oriented human activity recognition benchmark database, based on the combination of a color video camera and a depth sensor. Our contributions are two-fold: 1) We have created a publicly releasable human activity video database (i.e., named as RGBD-HuDaAct), which contains synchronized color-depth video streams, for the task of human daily activity recognition. This database aims at encouraging more research efforts on human activity recognition based on multi-modality sensor combination (e.g., color plus depth). 2) Two multi-modality fusion schemes, which naturally combine color and depth information, have been developed from two state-of-the-art feature representation methods for action recognition, i.e., spatio-temporal interest points (STIPs) and motion history images (MHIs). These depth-extended feature representation methods are evaluated comprehensively and superior recognition performances over their uni-modality (e.g., color only) counterparts are demonstrated.",
"title": ""
},
{
"docid": "ba695228c0fbaf91d6db972022095e98",
"text": "This study evaluated the critical period hypothesis for second language (L2) acquisition. The participants were 240 native speakers of Korean who differed according to age of arrival (AOA) in the United States (1 to 23 years), but were all experienced in English (mean length of residence 5 15 years). The native Korean participants’ pronunciation of English was evaluated by having listeners rate their sentences for overall degree of foreign accent; knowledge of English morphosyntax was evaluated using a 144-item grammaticality judgment test. As AOA increased, the foreign accents grew stronger, and the grammaticality judgment test scores decreased steadily. However, unlike the case for the foreign accent ratings, the effect of AOA on the grammaticality judgment test scores became nonsignificant when variables confounded with AOA were controlled. This suggested that the observed decrease in morphosyntax scores was not the result of passing a maturationally defined critical period. Additional analyses showed that the score for sentences testing knowledge of rule based, generalizable aspects of English morphosyntax varied as a function of how much education the Korean participants had received in the United States. The scores for sentences testing lexically based aspects of English morphosyntax, on the other hand, depended on how much the Koreans used English. © 1999 Academic Press",
"title": ""
},
{
"docid": "aa2ddbfc3bb1aa854d1c576927dc2d30",
"text": "B-scan ultrasound provides a non-invasive low-cost imaging solution to primary care diagnostics. The inherent speckle noise in the images produced by this technique introduces uncertainty in the representation of their textural characteristics. To cope with the uncertainty, we propose a novel fuzzy feature extraction method to encode local texture. The proposed method extends the Local Binary Pattern (LBP) approach by incorporating fuzzy logic in the representation of local patterns of texture in ultrasound images. Fuzzification allows a Fuzzy Local Binary Pattern (FLBP) to contribute to more than a single bin in the distribution of the LBP values used as a feature vector. The proposed FLBP approach was experimentally evaluated for supervised classification of nodular and normal samples from thyroid ultrasound images. The results validate its effectiveness over LBP and other common feature extraction methods.",
"title": ""
},
{
"docid": "b33b2abdc858b25d3aae1e789bca535c",
"text": "Rapid urbanization creates new challenges and issues, and the smart city concept offers opportunities to rise to these challenges, solve urban problems and provide citizens with a better living environment. This paper presents an exhaustive literature survey of smart cities. First, it introduces the origin and main issues facing the smart city concept, and then presents the fundamentals of a smart city by analyzing its definition and application domains. Second, a data-centric view of smart city architectures and key enabling technologies is provided. Finally, a survey of recent smart city research is presented. This paper provides a reference to researchers who intend to contribute to smart city research and implementation. 世界范围内的快速城镇化给城市发展带来了很多新的问题和挑战, 智慧城市概念的出现, 为解决当前城市难题、提供更好的城市环境提供了有效的解决途径。论文介绍了智慧城市的起源, 总结了智慧城市领域的三个主要问题, 通过详细的综述性文献研究展开对这些问题的探讨。论文首先对智慧城市的定义和应用领域进行了归纳和分析, 然后研究了智慧城市的体系架构, 提出了智慧城市以数据为中心、多领域融合的相关特征, 并定义了以数据活化技术为核心的层次化体系架构, 并介绍了其中的关键技术, 最后选取了城市交通、城市群体行为、城市规划三个具有代表性的应用领域介绍了城市数据分析与处理的最新研究进展和存在问题。",
"title": ""
},
{
"docid": "c3941aeb0f82c005faeffac020cc8954",
"text": "We propose a new optimization framework for summarization by generalizing the submodular framework of (Lin and Bilmes, 2011). In our framework the summarization desideratum is expressed as a sum of a submodular function and a nonsubmodular function, which we call dispersion; the latter uses inter-sentence dissimilarities in different ways in order to ensure non-redundancy of the summary. We consider three natural dispersion functions and show that a greedy algorithm can obtain an approximately optimal summary in all three cases. We conduct experiments on two corpora—DUC 2004 and user comments on news articles—and show that the performance of our algorithm outperforms those that rely only on submodularity.",
"title": ""
}
] |
scidocsrr
|
c3e7162af86c1f5040938b92aae9d708
|
Repetitive High-Voltage All-solid-state Marx Generator for Excimer DBD UV Sources
|
[
{
"docid": "4df4781c1ce5aae2a7b903daa05d298f",
"text": "Repetitive high voltage pulsed power system proposed in this study originates from conventional Marx generators. This newly developed Marx modulator employs high voltage (HV) insulated gate bipolar transistors (IGBT) as switches and series- connected diodes as isolated components. Self-supplied IGBT drivers and optic signals are used in the system to avoid insulation problem. Experimental results of 20 stages generating pulses with 60 kV, 20-100 mus and 50~500 Hz are presented to validate the performance of the system in the paper.",
"title": ""
}
] |
[
{
"docid": "4129881d5ff6f510f6deb23fd5b29afa",
"text": "Childbirth is an intricate process which is marked by an increased cervical dilation rate caused due to steady increments in the frequency and strength of uterine contractions. The contractions may be characterized by its strength, duration and frequency (count) - which are monitored through Tocography. However, the procedure is prone to subjectivity and an automated approach for the classification of the contractions is needed. In this paper, we use three different Weighted K-Nearest Neighbor classifiers and Decision Trees to classify the contractions into three types: Mild, Moderate and Strong. Further, we note the fact that our training data consists of fewer samples of Contractions as compared to those of Non-contractions - resulting in “Class Imbalance”. Hence, we use the Synthetic Minority Oversampling Technique (SMOTE) in conjunction with the K-NN classifier and Decision Trees to alleviate the problems of the same. The ground truth for Tocography signals was established by a doctor having an experience of 36 years in Obstetrics and Gynaecology. The annotations are in three categories: Mild (33 samples), Moderate (64 samples) and Strong (96 samples), amounting to a total of 193 contractions whereas the number of Non-contraction samples was 1217. Decision Trees using SMOTE performed the best with accuracies of 95%, 98.25% and 100% for the aforementioned categories, respectively. The sensitivities achieved for the same are 96.67%, 96.52% and 100% whereas the specificities amount to 93.33%, 100% and 100%, respectively. Our method may be used to monitor the labour progress efficiently.",
"title": ""
},
{
"docid": "361bdfcbe909788f674683c9d122dea4",
"text": "High frequency pulse-width modulation (PWM) converters generally suffer from excessive gate drive loss. This paper presents a resonant gate drive circuit that features efficient energy recovery at both charging and discharging transitions. Following a brief introduction of metal oxide semiconductor field effect transistor (MOSFET) gate drive loss, this paper discusses the gate drive requirements for high frequency PWM applications and common shortcomings of existing resonant gate drive techniques. To overcome the apparent disparity, a new resonant MOSFET gate drive circuit is then presented. The new circuit produces low gate drive loss, fast switching speed, clamped gate voltages, immunity to false trigger and has no limitation on the duty cycle. Experimental results further verify its functionality.",
"title": ""
},
{
"docid": "9b1874fb7e440ad806aa1da03f9feceb",
"text": "Given an existing trained neural network, it is often desirable to learn new capabilities without hindering performance of those already learned. Existing approaches either learn sub-optimal solutions, require joint training, or incur a substantial increment in the number of parameters for each added domain, typically as many as the original network. We propose a method called Deep Adaptation Modules (DAM) that constrains newly learned filters to be linear combinations of existing ones. DAMs precisely preserve performance on the original domain, require a fraction (typically 13%, dependent on network architecture) of the number of parameters compared to standard fine-tuning procedures and converge in less cycles of training to a comparable or better level of performance. When coupled with standard network quantization techniques, we further reduce the parameter cost to around 3% of the original with negligible or no loss in accuracy. The learned architecture can be controlled to switch between various learned representations, enabling a single network to solve a task from multiple different domains. We conduct extensive experiments showing the effectiveness of our method on a range of image classification tasks and explore different aspects of its behavior.",
"title": ""
},
{
"docid": "e1355a9b33932cc2b6da5a55c5524c2f",
"text": "In this study, the thermal resistance of a typical LED package was analyzed and a compact thermal model (CTM) was obtained by considering the configuration of each layer of LED package. The building of the CTM was presented in detail. Experiments and finite element simulations were conducted to validate the CTM. The error of the thermal resistance obtained by the CTM and simulations were less than 2% when being referred to the experiments. The comparison results demonstrate that the present CTM can be used to obtain the thermal resistance of the package and to predict the junction temperature.",
"title": ""
},
{
"docid": "c0a8134a2b815398689eaac7fb9de8e3",
"text": "A forward design process applicable to the specification of flight simulator cueing systems is presented in this paper. This process is based on the analysis of the pilotvehicle control loop by using a pilot model incorporating both visual and vestibular feedback, and the aircraft dynamics. After substituting the model for the simulated aircraft, the analysis tools are used to adjust the washout filter parameters with the goal of restoring pilot control behaviour. This process allows the specification of the motion cueing algorithm. Then, based on flight files representative for the operational flight envelope, the required motion system space is determined. The motion-base geometry is established based on practical limitations, as well as criteria for the stability of the platform with respect to singular conditions. With this process the characteristics of the aircraft, the tasks to be simulated, and the missions themselves are taken into account in defining the simulator motion cueing system.",
"title": ""
},
{
"docid": "355e2f91de69e507738b520899298155",
"text": "In this paper we introduce a mathematical model that captures some of the salient features of recommender systems that are based on popularity and that try to exploit social ties among the users. We show that, under very general conditions, the market always converges to a steady state, for which we are able to give an explicit form. Thanks to this we can tell rather precisely how much a market is altered by a recommendation system, and determine the power of users to influence others. Our theoretical results are complemented by experiments with real world social networks showing that social graphs prevent large market distortions in spite of the presence of highly influential users.",
"title": ""
},
{
"docid": "fbb5a86992438d630585462f8626e13f",
"text": "As a basic task in computer vision, semantic segmentation can provide fundamental information for object detection and instance segmentation to help the artificial intelligence better understand real world. Since the proposal of fully convolutional neural network (FCNN), it has been widely used in semantic segmentation because of its high accuracy of pixel-wise classification as well as high precision of localization. In this paper, we apply several famous FCNN to brain tumor segmentation, making comparisons and adjusting network architectures to achieve better performance measured by metrics such as precision, recall, mean of intersection of union (mIoU) and dice score coefficient (DSC). The adjustments to the classic FCNN include adding more connections between convolutional layers, enlarging decoders after up sample layers and changing the way shallower layers’ information is reused. Besides the structure modification, we also propose a new classifier with a hierarchical dice loss. Inspired by the containing relationship between classes, the loss function converts multiple classification to multiple binary classification in order to counteract the negative effect caused by imbalance data set. Massive experiments have been done on the training set and testing set in order to assess our refined fully convolutional neural networks and new types of loss function. Competitive figures prove they are more effective than their predecessors.",
"title": ""
},
{
"docid": "7cf14ea5044b95df4f618c4e2506f397",
"text": "0.18μm BCD technology with the best-in-class nLDMOS is presented. The drift of nLDMOS is optimized to ensure lowest Rsp by using multi-implants and appropriate thermal recipe. The optimized 24V nLDMOS has BV<inf>DSS</inf>=36V and Rsp=14.5 mΩ-mm<sup>2</sup>. Electrical SOA and long-term hot electron (HE) SOA are also evaluated. The maximum operating voltage less than 10% degradation of on-resistance is 24.4V.",
"title": ""
},
{
"docid": "1798e8eb49dd309e3ec4e787e157776b",
"text": "We propose a method of using clustering techniques to partition a set of orders. We define the term order as a sequence of objects that are sorted according to some property, such as size, preference, or price. These orders are useful for, say, carrying out a sensory survey. We propose a method called the k-o’means method, which is a modified version of a k-means method, adjusted to handle orders. We compared our method with the traditional clustering methods, and analyzed its characteristics. We also applied our method to a questionnaire survey data on people’s preferences in types of sushi (a Japanese food).",
"title": ""
},
{
"docid": "6838838d54136dbf8ff57b999576feba",
"text": "Display ads on the Internet are often sold in bundles of thousands or millions of impressions over a particular time period, typically weeks or months. Ad serving systems that assign ads to pages on behalf of publishers must satisfy these contracts, but at the same time try to maximize overall quality of placement. This is usually modeled in the literature as an online allocation problem, where contracts are represented by overall delivery constraints over a finite time horizon. However this model misses an important aspect of ad delivery: time homogeneity. Advertisers who buy these packages expect their ad to be shown smoothly throughout the purchased time period, in order to reach a wider audience, to have a sustained impact, and to support the ads they are running on other media (e.g., television). In this paper we formalize this problem using several nested packing constraints, and develop a tight (1-1/e)-competitive online algorithm for this problem. Our algorithms and analysis require novel techniques as they involve online computation of multiple dual variables per ad. We then show the effectiveness of our algorithms through exhaustive simulation studies on real data sets.",
"title": ""
},
{
"docid": "05edf6dc5d4b9726773f56dafc620619",
"text": "Software systems running continuously for a long time tend to show degrading performance and an increasing failure occurrence rate, due to error conditions that accrue over time and eventually lead the system to failure. This phenomenon is usually referred to as \\textit{Software Aging}. Several long-running mission and safety critical applications have been reported to experience catastrophic aging-related failures. Software aging sources (i.e., aging-related bugs) may be hidden in several layers of a complex software system, ranging from the Operating System (OS) to the user application level. This paper presents a software aging analysis at the Operating System level, investigating software aging sources inside the Linux kernel. Linux is increasingly being employed in critical scenarios; this analysis intends to shed light on its behaviour from the aging perspective. The study is based on an experimental campaign designed to investigate the kernel internal behaviour over long running executions. By means of a kernel tracing tool specifically developed for this study, we collected relevant parameters of several kernel subsystems. Statistical analysis of collected data allowed us to confirm the presence of aging sources in Linux and to relate the observed aging dynamics to the monitored subsystems behaviour. The analysis output allowed us to infer potential sources of aging in the kernel subsystems.",
"title": ""
},
{
"docid": "57d3505a655e9c0efdc32101fd09b192",
"text": "POX is a Python based open source OpenFlow/Software Defined Networking (SDN) Controller. POX is used for faster development and prototyping of new network applications. POX controller comes pre installed with the mininet virtual machine. Using POX controller you can turn dumb openflow devices into hub, switch, load balancer, firewall devices. The POX controller allows easy way to run OpenFlow/SDN experiments. POX can be passed different parameters according to real or experimental topologies, thus allowing you to run experiments on real hardware, testbeds or in mininet emulator. In this paper, first section will contain introduction about POX, OpenFlow and SDN, then discussion about relationship between POX and Mininet. Final Sections will be regarding creating and verifying behavior of network applications in POX.",
"title": ""
},
{
"docid": "f4d9190ad9123ddcf809f47c71225162",
"text": "Please cite this article in press as: Tseng, M Industrial Engineering (2009), doi:10.1016/ Selection of appropriate suppliers in supply chain management strategy (SCMS) is a challenging issue because it requires battery of evaluation criteria/attributes, which are characterized with complexity, elusiveness, and uncertainty in nature. This paper proposes a novel hierarchical evaluation framework to assist the expert group to select the optimal supplier in SCMS. The rationales for the evaluation framework are based upon (i) multi-criteria decision making (MCDM) analysis that can select the most appropriate alternative from a finite set of alternatives with reference to multiple conflicting criteria, (ii) analytic network process (ANP) technique that can simultaneously take into account the relationships of feedback and dependence of criteria, and (iii) choquet integral—a non-additive fuzzy integral that can eliminate the interactivity of expert subjective judgment problems. A case PCB manufacturing firm is studied and the results indicated that the proposed evaluation framework is simple and reasonable to identify the primary criteria influencing the SCMS, and it is effective to determine the optimal supplier even with the interactive and interdependent criteria/attributes. This hierarchical evaluation framework provides a complete picture in SCMS contexts to both researchers and practitioners. 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "edb7adc3e665aa2126be1849431c9d7f",
"text": "This study evaluated the exploitation of unprocessed agricultural discards in the form of fresh vegetable leaves as a diet for the sea urchin Paracentrotus lividus through the assessment of their effects on gonad yield and quality. A stock of wild-caught P. lividus was fed on discarded leaves from three different species (Beta vulgaris, Brassica oleracea, and Lactuca sativa) and the macroalga Ulva lactuca for 3 months under controlled conditions. At the beginning and end of the experiment, total and gonad weight were measured, while gonad and diet total carbon (C%), nitrogen (N%), δ13C, δ15N, carbohydrates, lipids, and proteins were analyzed. The results showed that agricultural discards provided for the maintenance of gonad index and nutritional value (carbohydrate, lipid, and protein content) of initial specimens. L. sativa also improved gonadic color. The results of this study suggest that fresh vegetable discards may be successfully used in the preparation of more balanced diets for sea urchin aquaculture. The use of agricultural discards in prepared diets offers a number of advantages, including an abundant resource, the recycling of discards into new organic matter, and reduced pressure on marine organisms (i.e., macroalgae) in the production of food for cultured organisms.",
"title": ""
},
{
"docid": "80d9439987b7eac8cf021be7dc533ec9",
"text": "While previous studies have investigated the determinants and consequences of online trust, online distrust has seldom been studied. Assuming that the positive antecedents of online trust are necessarily negative antecedents of online distrust or that positive consequences of online trust are necessarily negatively affected by online distrust is inappropriate. This study examines the different antecedents of online trust and distrust in relation to consumer and website characteristics. Moreover, this study further examines whether online trust and distrust asymmetrically affect behaviors with different risk levels. A model is developed and tested using a survey of 1,153 online consumers. LISREL was employed to test the proposed model. Overall, different consumer and website characteristics influence online trust and distrust, and online trust engenders different behavioral outcomes to online distrust. The authors also discuss the theoretical and managerial implications of the study findings.",
"title": ""
},
{
"docid": "5f591115988053560935576f6ef48899",
"text": "Electric vehicles are a new and upcoming technology in the transportation and power sector that have many benefits in terms of economic and environmental. This study presents a comprehensive review and evaluation of various types of electric vehicles and its associated equipment in particular battery charger and charging station. A comparison is made on the commercial and prototype electric vehicles in terms of electric range, battery size, charger power and charging time. The various types of charging stations and standards used for charging electric vehicles have been outlined and the impact of electric vehicle charging on utility distribution system is also discussed.",
"title": ""
},
{
"docid": "a7b2241396aeae1223433ba4f07b3eee",
"text": "The paper presents a preliminary research on possible relations between the syntactic structure and the polarity of a Czech sentence by means of the so-called sentiment analysis of a computer corpus. The main goal of sentiment analysis is the detection of a positive or negative polarity, or neutrality of a sentence (or, more broadly, a text). Most often this process takes place by looking for the polarity items, i.e. words or phrases inherently bearing positive or negative values. These words (phrases) are collected in the subjectivity lexicons and implemented into a computer corpus. However, when using sentences as the basic units to which sentiment analysis is applied, it is always important to look at their semantic and morphological analysis, since polarity items may be influenced by their morphological context. It is expected that some syntactic (and hypersyntactic) relations are useful for the identification of sentence polarity, such as negation, discourse relations or the level of embeddedness of the polarity item in the structure. Thus, we will propose such an analysis for a convenient source of data, the richly annotated Prague Dependency Treebank.",
"title": ""
},
{
"docid": "ba437029a227329f54e89754ecb38578",
"text": "We present a synthesis of techniques for rotorcraft UAV navigation through unknown environments which may contain obstacles. D* Lite and probabilistic roadmaps are combined for path planning, together with stereo vision for obstacle detection and dynamic path updating. A 3D occupancy map is used to represent the environment, and is updated online using stereo data. The target application is autonomous helicopter-based structure inspections, which require the UAV to fly safely close to the structures it is inspecting. Results are presented from simulation and with real flight hardware mounted onboard a cable array robot, demonstrating successful navigation through unknown environments containing obstacles.",
"title": ""
},
{
"docid": "2c02e09d5c73b4bbf46ae0f74e305c58",
"text": "Solving linear regression problems based on the total least-squares (TLS) criterion has well-documented merits in various applications, where perturbations appear both in the data vector as well as in the regression matrix. However, existing TLS approaches do not account for sparsity possibly present in the unknown vector of regression coefficients. On the other hand, sparsity is the key attribute exploited by modern compressive sampling and variable selection approaches to linear regression, which include noise in the data, but do not account for perturbations in the regression matrix. The present paper fills this gap by formulating and solving (regularized) TLS optimization problems under sparsity constraints. Near-optimum and reduced-complexity suboptimum sparse (S-) TLS algorithms are developed to address the perturbed compressive sampling (and the related dictionary learning) challenge, when there is a mismatch between the true and adopted bases over which the unknown vector is sparse. The novel S-TLS schemes also allow for perturbations in the regression matrix of the least-absolute selection and shrinkage selection operator (Lasso), and endow TLS approaches with ability to cope with sparse, under-determined “errors-in-variables” models. Interesting generalizations can further exploit prior knowledge on the perturbations to obtain novel weighted and structured S-TLS solvers. Analysis and simulations demonstrate the practical impact of S-TLS in calibrating the mismatch effects of contemporary grid-based approaches to cognitive radio sensing, and robust direction-of-arrival estimation using antenna arrays.",
"title": ""
}
] |
scidocsrr
|