Dataset schema (one record per row):

- query_id: string (32 characters)
- query: string (6 to 5.38k characters)
- positive_passages: list of passages (1 to 22 items)
- negative_passages: list of passages (9 to 100 items)
- subset: string (7 distinct values)
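The records below follow this schema. As a minimal sketch of how such records could be read (the file name and JSON-lines layout are assumptions; only the field names come from the dump above):

```python
import json

# Hypothetical file name; assumes one JSON object per line with the
# fields query_id, query, positive_passages, negative_passages, subset.
with open("scidocsrr.jsonl", encoding="utf-8") as f:
    for line in f:
        row = json.loads(line)
        positives = row["positive_passages"]  # list of {"docid", "text", "title"}
        negatives = row["negative_passages"]
        print(row["query_id"], row["subset"],
              f"{len(positives)} positive / {len(negatives)} negative:",
              row["query"][:60])
```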
query_id: d1fed528c5a08bb4995f74ffe1391fa8
query: Structure and function of auditory cortex: music and speech
positive_passages:
[
{
"docid": "a411780d406e8b720303d18cd6c9df68",
"text": "Functional organization of the lateral temporal cortex in humans is not well understood. We recorded blood oxygenation signals from the temporal lobes of normal volunteers using functional magnetic resonance imaging during stimulation with unstructured noise, frequency-modulated (FM) tones, reversed speech, pseudowords and words. For all conditions, subjects performed a material-nonspecific detection response when a train of stimuli began or ceased. Dorsal areas surrounding Heschl's gyrus bilaterally, particularly the planum temporale and dorsolateral superior temporal gyrus, were more strongly activated by FM tones than by noise, suggesting a role in processing simple temporally encoded auditory information. Distinct from these dorsolateral areas, regions centered in the superior temporal sulcus bilaterally were more activated by speech stimuli than by FM tones. Identical results were obtained in this region using words, pseudowords and reversed speech, suggesting that the speech-tones activation difference is due to acoustic rather than linguistic factors. In contrast, previous comparisons between word and nonword speech sounds showed left-lateralized activation differences in more ventral temporal and temporoparietal regions that are likely involved in processing lexical-semantic or syntactic information associated with words. The results indicate functional subdivision of the human lateral temporal cortex and provide a preliminary framework for understanding the cortical processing of speech sounds.",
"title": ""
}
]
negative_passages:
[
{
"docid": "a24b4546eb2da7ce6ce70f45cd16e07d",
"text": "This paper examines the state of the art in mobile clinical and health-related apps. A 2012 estimate puts the number of health-related apps at no fewer than 40,000, as healthcare professionals and consumers continue to express concerns about the quality of many apps, calling for some form of app regulatory control or certification to be put in place. We describe the range of apps on offer as of 2013, and then present a brief survey of evaluation studies of medical and health-related apps that have been conducted to date, covering a range of clinical disciplines and topics. Our survey includes studies that highlighted risks, negative issues and worrying deficiencies in existing apps. We discuss the concept of 'apps as a medical device' and the relevant regulatory controls that apply in USA and Europe, offering examples of apps that have been formally approved using these mechanisms. We describe the online Health Apps Library run by the National Health Service in England and the calls for a vetted medical and health app store. We discuss the ingredients for successful apps beyond the rather narrow definition of 'apps as a medical device'. These ingredients cover app content quality, usability, the need to match apps to consumers' general and health literacy levels, device connectivity standards (for apps that connect to glucometers, blood pressure monitors, etc.), as well as app security and user privacy. 'Happtique Health App Certification Program' (HACP), a voluntary app certification scheme, successfully captures most of these desiderata, but is solely focused on apps targeting the US market. HACP, while very welcome, is in ways reminiscent of the early days of the Web, when many \"similar\" quality benchmarking tools and codes of conduct for information publishers were proposed to appraise and rate online medical and health information. It is probably impossible to rate and police every app on offer today, much like in those early days of the Web, when people quickly realised the same regarding informational Web pages. The best first line of defence was, is, and will always be to educate consumers regarding the potentially harmful content of (some) apps.",
"title": ""
},
{
"docid": "6d1f374686b98106ab4221066607721b",
"text": "How does one instigate a scientific revolution, or more modestly, a shift of scientific paradigm? This must have been on the minds of the organizers of the two conferences \"The Economy as an Evolving Complex System, I and II\" and the research program in economics at the Santa Fe Institute documented in the present volume and its predecessor of ten years ago.(1) Their strategy might be reconstructed as follows. First, the stranglehold of neoclassical economics on the Anglo-Saxon academic community since World War II is at least partly due to the ascendancy of mathematical rigor as the touchstone of serious economic theorizing. Thus if one could beat the prevailing paradigm at its own game one would immediately have a better footing in the community than the heretics, mostly from the left or one of the variousìnstitu-tional' camps, who had been sniping at it from the sidelines all the while but were never above the suspicion of not being mathematically up to comprehending it in the first place. Second, one could enlist both prominent representatives and path-breaking methods from the natural sciences to legitimize the introduction of (to economists) fresh and in some ways disturbing approaches to the subject. This was particularly the tack taken in 1987, where roughly equal numbers of scientists and economists were brought together in an extensive brain storming session. Physics has always been the role model for other aspiring`hard' sciences, and physicists seem to have succeeded in institutional-izing a `permanent revolution' in their own methodology , i.e., they are relatively less dogmatic and willing to be more eclectic in the interests of getting results. The fact that, with the exception of a brief chapter by Philip Anderson in the present volume, physicists as representatives of their discipline are no longer present, presumably indicates that their services can now be dispensed with in this enterprise.(2) Finally, one should sponsor research of the highest caliber, always laudable in itself, and make judicious use of key personalities. Care should also be taken that the work is of a form and style which, rather than explicitly provoking the profession, makes it appear as if it were the natural generalization of previous mainstream research and thus reasonably amenable to inclusion in the canon. This while tacitly encouraging and profiting from a wave of publicity in the popular media , a difficult line to tread if one does not want to appear …",
"title": ""
},
{
"docid": "e0c71e449f4c155a993ae04ece4bc822",
"text": "This paper shows how one can directly apply natural language processing (NLP) methods to classification problems in cheminformatics. Connection between these seemingly separate fields is shown by considering standard textual representation of compound, SMILES. The problem of activity prediction against a target protein is considered, which is a crucial part of computer aided drug design process. Conducted experiments show that this way one can not only outrank state of the art results of hand crafted representations but also gets direct structural insights into the way decisions are made.",
"title": ""
},
{
"docid": "f4b6f3b281a420999b60b38c245113a6",
"text": "There is growing interest in using intranasal oxytocin (OT) to treat social dysfunction in schizophrenia and bipolar disorders (i.e., psychotic disorders). While OT treatment results have been mixed, emerging evidence suggests that OT system dysfunction may also play a role in the etiology of metabolic syndrome (MetS), which appears in one-third of individuals with psychotic disorders and associated with increased mortality. Here we examine the evidence for a potential role of the OT system in the shared risk for MetS and psychotic disorders, and its prospects for ameliorating MetS. Using several studies to demonstrate the overlapping neurobiological profiles of metabolic risk factors and psychiatric symptoms, we show that OT system dysfunction may be one common mechanism underlying MetS and psychotic disorders. Given the critical need to better understand metabolic dysregulation in these disorders, future OT trials assessing behavioural and cognitive outcomes should additionally include metabolic risk factor parameters.",
"title": ""
},
{
"docid": "8612b5e8f00fd8469ba87f1514b69fd0",
"text": "Online gaming is one of the most profitable businesses on the Internet. Among various threats to continuous player subscriptions, network lags are particularly notorious. It is widely known that frequent and long lags frustrate game players, but whether the players actually take action and leave a game is unclear. Motivated to answer this question, we apply survival analysis to a 1, 356-million-packet trace from a sizeable MMORPG, called ShenZhou Online. We find that both network delay and network loss significantly affect a player’s willingness to continue a game. For ShenZhou Online, the degrees of player “intolerance” of minimum RTT, RTT jitter, client loss rate, and server loss rate are in the proportion of 1:2:11:6. This indicates that 1) while many network games provide “ping time,” i.e., the RTT, to players to facilitate server selection, it would be more useful to provide information about delay jitters; and 2) players are much less tolerant of network loss than delay. This is due to the game designer’s decision to transfer data in TCP, where packet loss not only results in additional packet delays due to in-order delivery and retransmission, but also a lower sending rate.",
"title": ""
},
{
"docid": "63663dbc320556f7de09b5060f3815a6",
"text": "There has been a long history of applying AI technologies to address software engineering problems especially on tool automation. On the other hand, given the increasing importance and popularity of AI software, recent research efforts have been on exploring software engineering solutions to improve the productivity of developing AI software and the dependability of AI software. The emerging field of intelligent software engineering is to focus on two aspects: (1) instilling intelligence in solutions for software engineering problems; (2) providing software engineering solutions for intelligent software. This extended abstract shares perspectives on these two aspects of intelligent software engineering.",
"title": ""
},
{
"docid": "ddc56e9f2cbe9c086089870ccec7e510",
"text": "Serotonin is an ancient monoamine neurotransmitter, biochemically derived from tryptophan. It is most abundant in the gastrointestinal tract, but is also present throughout the rest of the body of animals and can even be found in plants and fungi. Serotonin is especially famous for its contributions to feelings of well-being and happiness. More specifically it is involved in learning and memory processes and is hence crucial for certain behaviors throughout the animal kingdom. This brief review will focus on the metabolism, biological role and mode-of-action of serotonin in insects. First, some general aspects of biosynthesis and break-down of serotonin in insects will be discussed, followed by an overview of the functions of serotonin, serotonin receptors and their pharmacology. Throughout this review comparisons are made with the vertebrate serotonergic system. Last but not least, possible applications of pharmacological adjustments of serotonin signaling in insects are discussed.",
"title": ""
},
{
"docid": "83aa2a89f8ecae6a84134a2736a5bb22",
"text": "The activity of dozens of simultaneously recorded neurons can be used to control the movement of a robotic arm or a cursor on a computer screen. This motor neural prosthetic technology has spurred an increased interest in the algorithms by which motor intention can be inferred. The simplest of these algorithms is the population vector algorithm (PVA), where the activity of each cell is used to weight a vector pointing in that neuron's preferred direction. Off-line, it is possible to show that more complicated algorithms, such as the optimal linear estimator (OLE), can yield substantial improvements in the accuracy of reconstructed hand movements over the PVA. We call this open-loop performance. In contrast, this performance difference may not be present in closed-loop, on-line control. The obvious difference between open and closed-loop control is the ability to adapt to the specifics of the decoder in use at the time. In order to predict performance gains that an algorithm may yield in closed-loop control, it is necessary to build a model that captures aspects of this adaptation process. Here we present a framework for modeling the closed-loop performance of the PVA and the OLE. Using both simulations and experiments, we show that (1) the performance gain with certain decoders can be far less extreme than predicted by off-line results, (2) that subjects are able to compensate for certain types of bias in decoders, and (3) that care must be taken to ensure that estimation error does not degrade the performance of theoretically optimal decoders.",
"title": ""
},
{
"docid": "7d8884a7f6137068f8ede464cf63da5b",
"text": "Object detection and localization is a crucial step for inspection and manipulation tasks in robotic and industrial applications. We present an object detection and localization scheme for 3D objects that combines intensity and depth data. A novel multimodal, scale- and rotation-invariant feature is used to simultaneously describe the object's silhouette and surface appearance. The object's position is determined by matching scene and model features via a Hough-like local voting scheme. The proposed method is quantitatively and qualitatively evaluated on a large number of real sequences, proving that it is generic and highly robust to occlusions and clutter. Comparisons with state of the art methods demonstrate comparable results and higher robustness with respect to occlusions.",
"title": ""
},
{
"docid": "850becfa308ce7e93fea77673db8ab50",
"text": "Controlled generation of text is of high practical use. Recent efforts have made impressive progress in generating or editing sentences with given textual attributes (e.g., sentiment). This work studies a new practical setting of text content manipulation. Given a structured record, such as (PLAYER: Lebron, POINTS: 20, ASSISTS: 10), and a reference sentence, such as Kobe easily dropped 30 points, we aim to generate a sentence that accurately describes the full content in the record, with the same writing style (e.g., wording, transitions) of the reference. The problem is unsupervised due to lack of parallel data in practice, and is challenging to minimally yet effectively manipulate the text (by rewriting/adding/deleting text portions) to ensure fidelity to the structured content. We derive a dataset from a basketball game report corpus as our testbed, and develop a neural method with unsupervised competing objectives and explicit content coverage constraints. Automatic and human evaluations show superiority of our approach over competitive methods including a strong rule-based baseline and prior approaches designed for style transfer.",
"title": ""
},
{
"docid": "7e127a6f25e932a67f333679b0d99567",
"text": "This paper presents a novel manipulator for human-robot interaction that has low mass and inertia without losing stiffness and payload performance. A lightweight tension amplifying mechanism that increases the joint stiffness in quadratic order is proposed. High stiffness is essential for precise and rapid manipulation, and low mass and inertia are important factors for safety due to low stored kinetic energy. The proposed tension amplifying mechanism was applied to a 1-DOF elbow joint and then extended to a 3-DOF wrist joint. The developed manipulator was analyzed in terms of inertia, stiffness, and strength properties. Its moving part weighs 3.37 kg, and its inertia is 0.57 kg·m2, which is similar to that of a human arm. The stiffness of the developed elbow joint is 1440Nm/rad, which is comparable to that of the joints with rigid components in industrial manipulators. A detailed description of the design is provided, and thorough analysis verifies the performance of the proposed mechanism.",
"title": ""
},
{
"docid": "ff1ed09b9952f9d0b67d6f6bb1cd507a",
"text": "Microblogging websites have emerged to the center of information production and diffusion, on which people can get useful information from other users’ microblog posts. In the era of Big Data, we are overwhelmed by the large amount of microblog posts. To make good use of these informative data, an effective search tool is required specialized for microblog posts. However, it is not trivial to do microblog search due to the following reasons: 1) microblog posts are noisy and time-sensitive rendering general information retrieval models ineffective. 2) Conventional IR models are not designed to consider microblog-specific features. In this paper, we propose to utilize learning to rank model for microblog search. We combine content-based, microblog-specific and temporal features into learning to rank models, which are found to model microblog posts effectively. To study the performance of learning to rank models, we evaluate our models using tweet data set provided by TERC 2011 and TREC 2012 microblogs track with the comparison of three stateof-the-art information retrieval baselines, vector space model, language model, BM25 model. Extensive experimental studies demonstrate the effectiveness of learning to rank models and the usefulness to integrate microblog-specific and temporal information for microblog search task.",
"title": ""
},
{
"docid": "d4a96cc393a3f1ca3bca94a57e07941e",
"text": "With the increasing number of scientific publications, research paper recommendation has become increasingly important for scientists. Most researchers rely on keyword-based search or following citations in other papers, in order to find relevant research articles. And usually they spend a lot of time without getting satisfactory results. This study aims to propose a personalized research paper recommendation system, that facilitate this task by recommending papers based on users' explicit and implicit feedback. The users will be allowed to explicitly specify the papers of interest. In addition, user activities (e.g., viewing abstracts or full-texts) will be analyzed in order to enhance users' profiles. Most of the current research paper recommendation and information retrieval systems use the classical bag-of-words methods, which don't consider the context of the words and the semantic similarity between the articles. This study will use Recurrent Neural Networks (RNNs) to discover continuous and latent semantic features of the papers, in order to improve the recommendation quality. The proposed approach utilizes PubMed so far, since it is frequently used by physicians and scientists, but it can easily incorporate other datasets in the future.",
"title": ""
},
{
"docid": "188c55ef248f7021a66c1f2e05c2fc98",
"text": "The objective of the proposed study is to explore the performance of credit scoring using a two-stage hybrid modeling procedure with artificial neural networks and multivariate adaptive regression splines (MARS). The rationale under the analyses is firstly to use MARS in building the credit scoring model, the obtained significant variables are then served as the input nodes of the neural networks model. To demonstrate the effectiveness and feasibility of the proposed modeling procedure, credit scoring tasks are performed on one bank housing loan dataset using cross-validation approach. As the results reveal, the proposed hybrid approach outperforms the results using discriminant analysis, logistic regression, artificial neural networks and MARS and hence provides an alternative in handling credit scoring tasks. q 2005 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "70b6abe2cb82eead9235612c1a1998d7",
"text": "PURPOSE\nThe aim of the study was to investigate white blood cell counts and neutrophil to lymphocyte ratio (NLR) as markers of systemic inflammation in the diagnosis of localized testicular cancer as a malignancy with initially low volume.\n\n\nMATERIALS AND METHODS\nThirty-six patients with localized testicular cancer with a mean age of 34.22±14.89 years and 36 healthy controls with a mean age of 26.67±2.89 years were enrolled in the study. White blood cell counts and NLR were calculated from complete blood cell counts.\n\n\nRESULTS\nWhite blood cell counts and NLR were statistically significantly higher in patients with testicular cancer compared with the control group (p<0.0001 for all).\n\n\nCONCLUSIONS\nBoth white blood cell counts and NLR can be used as a simple test in the diagnosis of testicular cancer besides the well-known accurate serum tumor markers as AFP (alpha fetoprotein), hCG (human chorionic gonadotropin) and LDH (lactate dehydrogenase).",
"title": ""
},
{
"docid": "655413f10d0b99afd15d54d500c9ffb6",
"text": "Herbal medicine (phytomedicine) uses remedies possessing significant pharmacological activity and, consequently, potential adverse effects and drug interactions. The explosion in sales of herbal therapies has brought many products to the marketplace that do not conform to the standards of safety and efficacy that physicians and patients expect. Unfortunately, few surgeons question patients regarding their use of herbal medicines, and 70% of patients do not reveal their use of herbal medicines to their physicians and pharmacists. All surgeons should question patients about the use of the following common herbal remedies, which may increase the risk of bleeding during surgical procedures: feverfew, garlic, ginger, ginkgo, and Asian ginseng. Physicians should exercise caution in prescribing retinoids or advising skin resurfacing in patients using St John's wort, which poses a risk of photosensitivity reaction. Several herbal medicines, such as aloe vera gel, contain pharmacologically active ingredients that may aid in wound healing. Practitioners who wish to recommend herbal medicines to patients should counsel them that products labeled as supplements have not been evaluated by the US Food and Drug Administration and that no guarantee of product quality can be made.",
"title": ""
},
{
"docid": "5c45aa22bb7182259f75260c879f81d6",
"text": "This paper presents an approach to parsing the Manhattan structure of an indoor scene from a single RGBD frame. The problem of recovering the floor plan is recast as an optimal labeling problem which can be solved efficiently using Dynamic Programming.",
"title": ""
},
{
"docid": "0bba0afb68f80afad03d0ba3d1ce9c89",
"text": "The Luneburg lens is an aberration-free lens that focuses light from all directions equally well. We fabricated and tested a Luneburg lens in silicon photonics. Such fully-integrated lenses may become the building blocks of compact Fourier optics on chips. Furthermore, our fabrication technique is sufficiently versatile for making perfect imaging devices on silicon platforms.",
"title": ""
},
{
"docid": "89ed5dc0feb110eb3abc102c4e50acaf",
"text": "Automatic object detection in infrared images is a vital task for many military defense systems. The high detection rate and low false detection rate of this phase directly affect the performance of the following algorithms in the system as well as the general performance of the system. In this work, a fast and robust algorithm is proposed for detection of small and high intensity objects in infrared scenes. Top-hat transformation and mean filter was used to increase the visibility of the objects, and a two-layer thresholding algorithm was introduced to calculate the object sizes more accurately. Finally, small objects extracted by using post processing methods.",
"title": ""
}
]
subset: scidocsrr
query_id: af6b29a103dba800f2fec5f4f879c16a
query: Most liked, fewest friends: patterns of enterprise social media use
positive_passages:
[
{
"docid": "d489bd0fbf14fdad30b5a59190c86078",
"text": "This research investigates two competing hypotheses from the literature: 1) the Social Enhancement (‘‘Rich Get Richer’’) hypothesis that those more popular offline augment their popularity by increasing it on Facebook , and 2) the ‘‘Social Compensation’’ (‘‘Poor Get Richer’’) hypothesis that users attempt to increase their Facebook popularity to compensate for inadequate offline popularity. Participants (n= 614) at a large, urban university in the Midwestern United States completed an online survey. Results are that a subset of users, those more extroverted and with higher self-esteem, support the Social Enhancement hypothesis, being more popular both offline and on Facebook . Another subset of users, those less popular offline, support the Social Compensation hypotheses because they are more introverted, have lower self-esteem and strive more to look popular on Facebook . Semantic network analysis of open-ended responses reveals that these two user subsets also have different meanings for offline and online popularity. Furthermore, regression explains nearly twice the variance in offline popularity as in Facebook popularity, indicating the latter is not as socially grounded or defined as offline popularity.",
"title": ""
}
]
negative_passages:
[
{
"docid": "f3471acc1405bbd9546cc8ec42267053",
"text": "The authors examined the association between semen quality and caffeine intake among 2,554 young Danish men recruited when they were examined to determine their fitness for military service in 2001-2005. The men delivered a semen sample and answered a questionnaire including information about caffeine intake from various sources, from which total caffeine intake was calculated. Moderate caffeine and cola intakes (101-800 mg/day and < or =14 0.5-L bottles of cola/week) compared with low intake (< or =100 mg/day, no cola intake) were not associated with semen quality. High cola (>14 0.5-L bottles/week) and/or caffeine (>800 mg/day) intake was associated with reduced sperm concentration and total sperm count, although only significant for cola. High-intake cola drinkers had an adjusted sperm concentration and total sperm count of 40 mill/mL (95% confidence interval (CI): 32, 51) and 121 mill (95% CI: 92, 160), respectively, compared with 56 mill/mL (95% CI: 50, 64) and 181 mill (95% CI: 156, 210) in non-cola-drinkers, which could not be attributed to the caffeine they consumed because it was <140 mg/day. Therefore, the authors cannot exclude the possibility of a threshold above which cola, and possibly caffeine, negatively affects semen quality. Alternatively, the less healthy lifestyle of these men may explain these findings.",
"title": ""
},
{
"docid": "f68b11af8958117f75fc82c40c51c395",
"text": "Uncertainty accompanies our life processes and covers almost all fields of scientific studies. Two general categories of uncertainty, namely, aleatory uncertainty and epistemic uncertainty, exist in the world. While aleatory uncertainty refers to the inherent randomness in nature, derived from natural variability of the physical world (e.g., random show of a flipped coin), epistemic uncertainty origins from human's lack of knowledge of the physical world, as well as ability of measuring and modeling the physical world (e.g., computation of the distance between two cities). Different kinds of uncertainty call for different handling methods. Aggarwal, Yu, Sarma, and Zhang et al. have made good surveys on uncertain database management based on the probability theory. This paper reviews multidisciplinary uncertainty processing activities in diverse fields. Beyond the dominant probability theory and fuzzy theory, we also review information-gap theory and recently derived uncertainty theory. Practices of these uncertainty handling theories in the domains of economics, engineering, ecology, and information sciences are also described. It is our hope that this study could provide insights to the database community on how uncertainty is managed in other disciplines, and further challenge and inspire database researchers to develop more advanced data management techniques and tools to cope with a variety of uncertainty issues in the real world.",
"title": ""
},
{
"docid": "d8e1410ec6573bd1fa09091e123f53be",
"text": "In the last years the protection and safeguarding of cultural heritage has become a key issue of European cultural policy and this applies not only to tangible artefacts (monuments, sites, etc.), but also to intangible cultural expressions (singing, dancing, etc.). The i-Treasures project focuses on some Intangible Cultural Heritages (ICH) and investigates whether and to what extent new technology can play a role in the preservation and dissemination of these expressions. To this aim, the project will develop a system, based on cutting edge technology and sensors, that digitally captures the performances of living human treasures, analyses the digital information to semantically index the performances and their constituting elements, and builds an educational platform on top of the semantically indexed content. The main purpose of this paper is to describe how the user requirements of this system were defined. The requirements definition process was based on a participatory approach, where ICH experts, performers and users were actively involved through surveys and interviews, and extensively collaborated in the complex tasks of identifying specificities of rare traditional know-how, discovering existing teaching and learning practices and finally identifying the most cutting edge technologies able to support innovative teaching and learning approaches to ICH.",
"title": ""
},
{
"docid": "b7189c1b1dc625fb60a526d81c0d0a89",
"text": "This paper presents a development of an anthropomorphic robot hand, `KITECH Hand' that has 4 full-actuated fingers. Most robot hands have small size simultaneously many joints as compared with robot manipulators. Components of actuator, gear, and sensors used for building robots are not small and are expensive, and those make it difficult to build a small sized robot hand. Differently from conventional development of robot hands, KITECH hand adopts a RC servo module that is cheap, easily obtainable, and easy to handle. The RC servo module that have been already used for several small sized humanoid can be new solution of building small sized robot hand with many joints. The feasibility of KITECH hand in object manipulation is shown through various experimental results. It is verified that the modified RC servo module is one of effective solutions in the development of a robot hand.",
"title": ""
},
{
"docid": "3b2376110b0e6949379697b7ba6730b5",
"text": "............................................................................................................................... i Acknowledgments............................................................................................................... ii Table of",
"title": ""
},
{
"docid": "40fbee18e4b0eca3f2b9ad69119fec5d",
"text": "Phishing attacks, in which criminals lure Internet users to websites that impersonate legitimate sites, are occurring with increasing frequency and are causing considerable harm to victims. In this paper we describe the design and evaluation of an embedded training email system that teaches people about phishing during their normal use of email. We conducted lab experiments contrasting the effectiveness of standard security notices about phishing with two embedded training designs we developed. We found that embedded training works better than the current practice of sending security notices. We also derived sound design principles for embedded training systems.",
"title": ""
},
{
"docid": "75642d6a79f6b9bb8b02f6d8ded6a370",
"text": "Spectral indices as a selection tool in plant breeding could improve genetic gains for different important traits. The objectives of this study were to assess the potential of using spectral reflectance indices (SRI) to estimate genetic variation for in-season biomass production, leaf chlorophyll, and canopy temperature (CT) in wheat (Triticum aestivum L.) under irrigated conditions. Three field experiments, GHIST (15 CIMMYT globally adapted historic genotypes), RILs1 (25 recombinant inbred lines [RILs]), and RILs2 (36 RILs) were conducted under irrigated conditions at the CIMMYT research station in northwest Mexico in three different years. Five SRI were evaluated to differentiate genotypes for biomass production. In general, genotypic variation for all the indices was significant. Near infrared radiation (NIR)–based indices gave the highest levels of associationwith biomass production and the higher associations were observed at heading and grainfilling, rather than at booting. Overall, NIR-based indices were more consistent and differentiated biomass more effectively compared to the other indices. Indices based on ratio of reflection spectra correlatedwith SPADchlorophyll values, and the associationwas stronger at the generative growth stages. These SRI also successfully differentiated the SPAD values at the genotypic level. The NIR-based indices showed a strong and significant association with CT at the heading and grainfilling stages. These results demonstrate the potential of using SRI as a breeding tool to select for increased genetic gains in biomass and chlorophyll content, plus for cooler canopies. SIGNIFICANT PROGRESS in grain yield of spring wheat under irrigated conditions has been made through the classical breeding approach (Slafer et al., 1994), even though the genetic basis of yield improvement in wheat is not well established (Reynolds et al., 1999). Several authors have reported that progress in grain yield is mainly attributed to better partitioning of photosynthetic products (Waddington et al., 1986; Calderini et al., 1995; Sayre et al., 1997). The systematic increase in the partitioning of assimilates (harvest index) has a theoretical upper limit of approximately 60% (Austin et al., 1980). Further yield increases in wheat through improvement in harvest index will be limited without a further increase in total crop biomass (Austin et al., 1980; Slafer and Andrade, 1991; Reynolds et al., 1999). Though until relatively recently biomass was not commonly associated with yield gains, increases in biomass of spring wheat have been reported (Waddington et al., 1986; Sayre et al., 1997) and more recently in association with yield increases (Singh et al., 1998; Reynolds et al., 2005; Shearman et al., 2005). Thus, a breeding approach is needed that will select genotypes with higher biomass capacity, while maintaining the high partitioning rate of photosynthetic products. Direct estimation of biomass is a timeand laborintensive undertaking. Moreover, destructive in-season sampling involves large sampling errors (Whan et al., 1991) and reduces the final area for estimation of grain yield and final biomass. Regan et al. (1992) demonstrated a method to select superior genotypes of spring wheat for early vigor under rainfed conditions using a destructive sampling technique, but such sampling is impossible for breeding programs where a large number of genotypes are being screened for various desirable traits. 
Spectral reflectance indices are a potentially rapid technique that could assess biomass at the genotypic level without destructive sampling (Elliott and Regan, 1993; Smith et al., 1993; Bellairs et al., 1996; Peñuelas et al., 1997). Canopy light reflectance properties based mainly on the absorption of light at a specific wavelength are associated with specific plant characteristics. The spectral reflectance in the visible (VIS) wavelengths (400–700 nm) depends on the absorption of light by leaf chlorophyll and associated pigments such as carotenoid and anthocyanins. The reflectance of the VIS wavelengths is relatively low because of the high absorption of light energy by these pigments. In contrast, the reflectance of theNIR wavelengths (700–1300 nm) is high, since it is not absorbed by plant pigments and is scattered by plant tissue at different levels in the canopy, such that much of it is reflected back rather than being absorbed by the soil (Knipling, 1970). Spectral reflectance indices were developed on the basis of simple mathematical formula, such as ratios or differences between the reflectance at given wavelengths (Araus et al., 2001). Simple ratio (SR 5 NIR/VIS) and the normalized difference vegetation M.A. Babar, A.R. Klatt, and W.R. Raun, Department of Plant and Soil Sciences, 368 Ag. Hall, Oklahoma State University, Stillwater, OK 74078, USA; M.P. Reynolds, International Maize and Wheat Improvement Center (CIMMYT), Km. 45, Carretera Mexico, El Batan, Texcoco, Mexico; M. van Ginkel, Department of Primary Industries (DPI), Private Bag 260, Horsham, Victoria, Postcode: 3401, DX Number: 216515, Australia; M.L. Stone, Department of Biosystems and Agricultural Engineering, Oklahoma State University, Stillwater, OK 74078, USA. This research was partially funded by the Oklahoma Wheat Research Foundation (OWRF), Oklahoma Wheat Commission, and CIMMYT (International Maize and Wheat Improvement Center), Mexico. Received 11 Mar. 2005. *Corresponding author (mreynolds@cgiar.org). Published in Crop Sci. 46:1046–1057 (2006). Crop Breeding & Genetics doi:10.2135/cropsci2005.0211 a Crop Science Society of America 677 S. Segoe Rd., Madison, WI 53711 USA Abbreviations: CT, canopy temperature; CTD, canopy temperature depression; GHIST, global historic; NDVI, normalized difference vegetation index; NIR, near infrared radiation; NWI-1, normalized water index-1; NWI-2, normalized water index-2; PSSRa, pigment specific simple ratio-chlorophyll a; RARSa, ratio analysis of reflectance spectra-chlorophyll a; RARSb, ratio analysis of reflectance spectra-chlorophyll b; RARSc, ratio analysis of reflectance spectracarotenoids; RILs, recombinant inbred lines; SR, simple ratio; SRI, spectral reflectance indices; WI, water index. R e p ro d u c e d fr o m C ro p S c ie n c e . P u b lis h e d b y C ro p S c ie n c e S o c ie ty o f A m e ri c a . A ll c o p y ri g h ts re s e rv e d . 1046 Published online March 27, 2006",
"title": ""
},
{
"docid": "d725c63647485fd77412f16e1f6485f2",
"text": "The ongoing discussions about a „digital revolution― and ―disruptive competitive advantages‖ have led to the creation of such a business vision as ―Industry 4.0‖. Yet, the term and even more its actual impact on businesses is still unclear.This paper addresses this gap and explores more specifically, the consequences and potentials of Industry 4.0 for the procurement, supply and distribution management functions. A blend of literature-based deductions and results from a qualitative study are used to explore the phenomenon.The findings indicate that technologies of Industry 4.0 legitimate the next level of maturity in procurement (Procurement &Supply Management 4.0). Empirical findings support these conceptual considerations, revealing the ambitious expectations.The sample comprises seven industries and the employed method is qualitative (telephone and face-to-face interviews). The empirical findings are only a basis for further quantitative investigation , however, they support the necessity and existence of the maturity level. The findings also reveal skepticism due to high investment costs but also very high expectations. As recent studies about digitalization are rather rare in the context of single company functions, this research work contributes to the understanding of digitalization and supply management.",
"title": ""
},
{
"docid": "bf57a5fcf6db7a9b26090bd9a4b65784",
"text": "Plate osteosynthesis is still recognized as the treatment of choice for most articular fractures, many metaphyseal fractures, and certain diaphyseal fractures such as in the forearm. Since the 1960s, both the techniques and implants used for internal fixation with plates have evolved to provide for improved healing. Most recently, plating methods have focused on the principles of 'biological fixation'. These methods attempt to preserve the blood supply to improve the rate of fracture healing, decrease the need for bone grafting, and decrease the incidence of infection and re-fracture. The purpose of this article is to provide a brief overview of the history of plate osteosynthesis as it relates to the development of the latest minimally invasive surgical techniques.",
"title": ""
},
{
"docid": "a9709367bc84ececd98f65ed7359f6b0",
"text": "Though many tools are available to help programmers working on change tasks, and several studies have been conducted to understand how programmers comprehend systems, little is known about the specific kinds of questions programmers ask when evolving a code base. To fill this gap we conducted two qualitative studies of programmers performing change tasks to medium to large sized programs. One study involved newcomers working on assigned change tasks to a medium-sized code base. The other study involved industrial programmers working on their own change tasks on code with which they had experience. The focus of our analysis has been on what information a programmer needs to know about a code base while performing a change task and also on howthey go about discovering that information. Based on this analysis we catalog and categorize 44 different kinds of questions asked by our participants. We also describe important context for how those questions were answered by our participants, including their use of tools.",
"title": ""
},
{
"docid": "fc55bae802e8b82f79bbb381f7bcf30b",
"text": "In order to improve the efficiency of Apriori algorithm for mining frequent item sets, MH-Apriori algorithm was designed for big data to address the poor efficiency problem. MH-Apriori takes advantages of MapReduce and HBase together to optimize Apriori algorithm. Compared with the improved Apriori algorithm simply based on MapReduce framework, timestamp of HBase is utilized in this algorithm to avoid generating a large number of key/value pairs. It saves the pattern matching time and scans the database only once. Also, to obtain transaction marks automatically, transaction mark column is added to set list for computing support numbers. MH-Apriori was executed on Hadoop platform. The experimental results show that MH-Apriori has higher efficiency and scalability.",
"title": ""
},
{
"docid": "565efa7a51438990b3d8da6222dca407",
"text": "The collection of huge amount of tracking data made possible by the widespread use of GPS devices, enabled the analysis of such data for several applications domains, ranging from traffic management to advertisement and social studies. However, the raw positioning data, as it is detected by GPS devices, lacks of semantic information since this data does not natively provide any additional contextual information like the places that people visited or the activities performed. Traditionally, this information is collected by hand filled questionnaire where a limited number of users are asked to annotate their tracks with the activities they have done. With the purpose of getting large amount of semantically rich trajectories, we propose an algorithm for automatically annotating raw trajectories with the activities performed by the users. To do this, we analyse the stops points trying to infer the Point Of Interest (POI) the user has visited. Based on the category of the POI and a probability measure based on the gravity law, we infer the activity performed. We experimented and evaluated the method in a real case study of car trajectories, manually annotated by users with their activities. Experimental results are encouraging and will drive our future works.",
"title": ""
},
{
"docid": "310aa0a02f8fc8b7b6d31c987a12a576",
"text": "We describe a novel markerless camera tracking approach and user interaction methodology for augmented reality (AR) on unprepared tabletop environments. We propose a real-time system architecture that combines two types of feature tracking. Distinctive image features of the scene are detected and tracked frame-to-frame by computing optical flow. In order to achieve real-time performance, multiple operations are processed in a synchronized multi-threaded manner: capturing a video frame, tracking features using optical flow, detecting distinctive invariant features, and rendering an output frame. We also introduce user interaction methodology for establishing a global coordinate system and for placing virtual objects in the AR environment by tracking a user's outstretched hand and estimating a camera pose relative to it. We evaluate the speed and accuracy of our hybrid feature tracking approach, and demonstrate a proof-of-concept application for enabling AR in unprepared tabletop environments, using bare hands for interaction.",
"title": ""
},
{
"docid": "eb5f2e4a1b01a67516089bbfecc0ab8a",
"text": "With the fast development of digital systems and concomitant information technologies, there is certainly an incipient spirit in the extensive overall economy to put together digital Customer Relationship Management (CRM) systems. This slanting is further more palpable in the telecommunications industry, in which businesses turn out to be increasingly digitalized. Customer churn prediction is a foremost aspect of a contemporary telecom CRM system. Churn prediction model leads the customer relationship management to retain the customers who will be possible to give up. Currently scenario, a lot of outfit and monitored classifiers and data mining techniques are employed to model the churn prediction in telecom. Within this paper, Kernelized Extreme Learning Machine (KELM) algorithm is proposed to categorize customer churn patterns in telecom industry. The primary strategy of proposed work is organized the data from telecommunication mobile customer’s dataset. The data preparation is conducted by using preprocessing with Expectation Maximization (EM) clustering algorithm. After that, customer churn behavior is examined by using Naive Bayes Classifier (NBC) in accordance with the four conditions like customer dissatisfaction (H1), switching costs (H2), service usage (H3) and customer status (H4). The attributes originate from call details and customer profiles which is enhanced the precision of customer churn prediction in the telecom industry. The attributes are measured using BAT algorithm and KELM algorithm used for churn prediction. The experimental results prove that proposed model is better than AdaBoost and Hybrid Support Vector Machine (HSVM) models in terms of the performance of ROC, sensitivity, specificity, accuracy and processing time.",
"title": ""
},
{
"docid": "5d1fbf1b9f0529652af8d28383ce9a34",
"text": "Automatic License Plate Recognition (ALPR) is one of the most prominent tools in intelligent transportation system applications. In ALPR algorithm implementation, License Plate Detection (LPD) is a critical stage. Despite many state-of-the-art researches, some parameters such as low/high illumination, type of camera, or a different style of License Plate (LP) causes LPD step is still a challenging problem. In this paper, we propose a new style-free method based on the cross power spectrum. Our method has three steps; designing adaptive binarized filter, filtering using cross power spectrum and verification. Experimental results show that the recognition accuracy of the proposed approach is 98% among 2241 Iranian cars images including two styles of the LP. In addition, the process of the plate detection takes 44 milliseconds, which is suitable for real-time processing.",
"title": ""
},
{
"docid": "11c7fba6fcbf36cc1187c1cfd07c91f9",
"text": "We describe a real-time bidding algorithm for performance-based display ad allocation. A central issue in performance display advertising is matching campaigns to ad impressions, which can be formulated as a constrained optimization problem that maximizes revenue subject to constraints such as budget limits and inventory availability. The current practice is to solve the optimization problem offline at a tractable level of impression granularity (e.g., the page level), and to serve ads online based on the precomputed static delivery scheme. Although this offline approach takes a global view to achieve optimality, it fails to scale to ad allocation at the individual impression level. Therefore, we propose a real-time bidding algorithm that enables fine-grained impression valuation (e.g., targeting users with real-time conversion data), and adjusts value-based bids according to real-time constraint snapshots (e.g., budget consumption levels). Theoretically, we show that under a linear programming (LP) primal-dual formulation, the simple real-time bidding algorithm is indeed an online solver to the original primal problem by taking the optimal solution to the dual problem as input. In other words, the online algorithm guarantees the offline optimality given the same level of knowledge an offline optimization would have. Empirically, we develop and experiment with two real-time bid adjustment approaches to adapting to the non-stationary nature of the marketplace: one adjusts bids against real-time constraint satisfaction levels using control-theoretic methods, and the other adjusts bids also based on the statistically modeled historical bidding landscape. Finally, we show experimental results with real-world ad delivery data that support our theoretical conclusions.",
"title": ""
},
{
"docid": "5158b5da8a561799402cb1ef3baa3390",
"text": "We study the segmental recurrent neural network for end-to-end acoustic modelling. This model connects the segmental conditional random field (CRF) with a recurrent neural network (RNN) used for feature extraction. Compared to most previous CRF-based acoustic models, it does not rely on an external system to provide features or segmentation boundaries. Instead, this model marginalises out all the possible segmentations, and features are extracted from the RNN trained together with the segmental CRF. In essence, this model is self-contained and can be trained end-to-end. In this paper, we discuss practical training and decoding issues as well as the method to speed up the training in the context of speech recognition. We performed experiments on the TIMIT dataset. We achieved 17.3% phone error rate (PER) from the first-pass decoding — the best reported result using CRFs, despite the fact that we only used a zeroth-order CRF and without using any language model.",
"title": ""
},
{
"docid": "c04065ff9cbeba50c0d70e30ab2e8b53",
"text": "A linear model is suggested for the influence of covariates on the intensity function. This approach is less vulnerable than the Cox model to problems of inconsistency when covariates are deleted or the precision of covariate measurements is changed. A method of non-parametric estimation of regression functions is presented. This results in plots that may give information on the change over time in the influence of covariates. A test method and two goodness of fit plots are also given. The approach is illustrated by simulation as well as by data from a clinical trial of treatment of carcinoma of the oropharynx.",
"title": ""
},
{
"docid": "8e071cfeaf33444e9f85f6bfcb8fa51b",
"text": "BACKGROUND\nLutein is a carotenoid that may play a role in eye health. Human milk typically contains higher concentrations of lutein than infant formula. Preliminary data suggest there are differences in serum lutein concentrations between breastfed and formula-fed infants.\n\n\nAIM OF THE STUDY\nTo measure the serum lutein concentrations among infants fed human milk or formulas with and without added lutein.\n\n\nMETHODS\nA prospective, double-masked trial was conducted in healthy term formula-fed infants (n = 26) randomized between 9 and 16 days of age to study formulas containing 20 (unfortified), 45, 120, and 225 mcg/l of lutein. A breastfed reference group was studied (n = 14) and milk samples were collected from their mothers. Primary outcome was serum lutein concentration at week 12.\n\n\nRESULTS\nGeometric mean lutein concentration of human milk was 21.1 mcg/l (95% CI 14.9-30.0). At week 12, the human milk group had a sixfold higher geometric mean serum lutein (69.3 mcg/l; 95% CI 40.3-119) than the unfortified formula group (11.3 mcg/l; 95% CI 8.1-15.8). Mean serum lutein increased from baseline in each formula group except the unfortified group. Linear regression equation indicated breastfed infants had a greater increase in serum lutein (slope 3.7; P < 0.001) per unit increase in milk lutein than formula-fed infants (slope 0.9; P < 0.001).\n\n\nCONCLUSIONS\nBreastfed infants have higher mean serum lutein concentrations than infants who consume formula unfortified with lutein. These data suggest approximately 4 times more lutein is needed in infant formula than in human milk to achieve similar serum lutein concentrations among breastfed and formula fed infants.",
"title": ""
}
]
subset: scidocsrr
query_id: 255658b6d0b767c989cb50d2bd0b6bd9
query: Single Image Super-resolution Using Deformable Patches
positive_passages:
[
{
"docid": "7cb6582bf81aea75818eef2637c95c79",
"text": "Although multi-frame super resolution has been extensively studied in past decades, super resolving real-world video sequences still remains challenging. In existing systems, either the motion models are oversimplified, or important factors such as blur kernel and noise level are assumed to be known. Such models cannot deal with the scene and imaging conditions that vary from one sequence to another. In this paper, we propose a Bayesian approach to adaptive video super resolution via simultaneously estimating underlying motion, blur kernel and noise level while reconstructing the original high-res frames. As a result, our system not only produces very promising super resolution results that outperform the state of the art, but also adapts to a variety of noise levels and blur kernels. Theoretical analysis of the relationship between blur kernel, noise level and frequency-wise reconstruction rate is also provided, consistent with our experimental results.",
"title": ""
},
{
"docid": "d4c7493c755a3fde5da02e3f3c873d92",
"text": "Edge-directed image super resolution (SR) focuses on ways to remove edge artifacts in upsampled images. Under large magnification, however, textured regions become blurred and appear homogenous, resulting in a super-resolution image that looks unnatural. Alternatively, learning-based SR approaches use a large database of exemplar images for “hallucinating” detail. The quality of the upsampled image, especially about edges, is dependent on the suitability of the training images. This paper aims to combine the benefits of edge-directed SR with those of learning-based SR. In particular, we propose an approach to extend edge-directed super-resolution to include detail from an image/texture example provided by the user (e.g., from the Internet). A significant benefit of our approach is that only a single exemplar image is required to supply the missing detail – strong edges are obtained in the SR image even if they are not present in the example image due to the combination of the edge-directed approach. In addition, we can achieve quality results at very large magnification, which is often problematic for both edge-directed and learning-based approaches.",
"title": ""
},
{
"docid": "b5453d9e4385d5a5ff77997ad7e3f4f0",
"text": "We propose a new measure, the method noise, to evaluate and compare the performance of digital image denoising methods. We first compute and analyze this method noise for a wide class of denoising algorithms, namely the local smoothing filters. Second, we propose a new algorithm, the nonlocal means (NL-means), based on a nonlocal averaging of all pixels in the image. Finally, we present some experiments comparing the NL-means algorithm and the local smoothing filters.",
"title": ""
}
]
negative_passages:
[
{
"docid": "60120375949f36157d73748af5c3231a",
"text": "This paper describes REVIEW, a new retinal vessel reference dataset. This dataset includes 16 images with 193 vessel segments, demonstrating a variety of pathologies and vessel types. The vessel edges are marked by three observers using a special drawing tool. The paper also describes the algorithm used to process these segments to produce vessel profiles, against which vessel width measurement algorithms can be assessed. Recommendations are given for use of the dataset in performance assessment. REVIEW can be downloaded from http://ReviewDB.lincoln.ac.uk.",
"title": ""
},
{
"docid": "660bc85f84d37a98e78a34ccf1c8b1ab",
"text": "In this paper, we evaluate the performance and experience differences between direct touch and mouse input on horizontal and vertical surfaces using a simple application and several validated scales. We find that, not only are both speed and accuracy improved when using the multi-touch display over a mouse, but that participants were happier and more engaged. They also felt more competent, in control, related to other people, and immersed. Surprisingly, these results cannot be explained by the intuitiveness of the controller, and the benefits of touch did not come at the expense of perceived workload. Our work shows the added value of considering experience in addition to traditional measures of performance, and demonstrates an effective and efficient method for gathering experience during inter-action with surface applications. We conclude by discussing how an understanding of this experience can help in designing touch applications.",
"title": ""
},
{
"docid": "a32a359ad54d69466d267cad6e182ae9",
"text": "The Sign System for Indonesian Language (SIBI) is a rather complex sign language. It has four components that distinguish the meaning of the sign language and it follows the syntax and the grammar of the Indonesian language. This paper proposes a model for recognizing the SIBI words by using Microsoft Kinect as the input sensor. This model is a part of automatic translation from SIBI to text. The features for each word are extracted from skeleton and color-depth data produced by Kinect. Skeleton data features indicate the angle between human joints and Cartesian axes. Color images are transformed to gray-scale and their features are extracted by using Discrete Cosine Transform (DCT) with Cross Correlation (CC) operation. The image's depth features are extracted by running MATLAB regionprops function to get its region properties. The Generalized Learning Vector Quantization (GLVQ) and Random Forest (RF) training algorithm from WEKA data mining tools are used as the classifier of the model. Several experiments with different scenarios have shown that the highest accuracy (96,67%) is obtained by using 30 frames for skeleton combined with 20 frames for region properties image classified by Random Forest.",
"title": ""
},
{
"docid": "a212a2969c0c72894dcde880bbf29fa7",
"text": "Machine learning is useful for building robust learning models, and it is based on a set of features that identify a state of an object. Unfortunately, some data sets may contain a large number of features making, in some cases, the learning process time consuming and the generalization capability of machine learning poor. To make a data set easy to learn and understand, it is typically recommended to remove the most irrelevant features from the set. However, choosing what data should be kept or eliminated may be performed by complex selection algorithms, and optimal feature selection may require an exhaustive search of all possible subsets of features which is computationally expensive. This paper proposes a simple method to perform feature selection using artificial neural networks. It is shown experimentally that genetic algorithms in combination with artificial neural networks can easily be used to extract those features that are required to produce a desired result. Experimental results show that very few hidden neurons are required for feature selection as artificial neural networks are only used to assess the quality of an individual, which is a chosen subset of features.",
"title": ""
},
{
"docid": "4b354edbd555b6072ae04fb9befc48eb",
"text": "We present a generative method for the creation of geometrically complex andmaterially heterogeneous objects. By combining generative design and additive manufacturing, we demonstrate a unique formfinding approach and method for multi-material 3D printing. The method offers a fast, automated and controllable way to explore an expressive set of symmetrical, complex and colored objects, which makes it a useful tool for design exploration andprototyping.Wedescribe a recursive grammar for the generation of solid boundary surfacemodels suitable for a variety of design domains.We demonstrate the generation and digital fabrication ofwatertight 2-manifold polygonalmeshes, with feature-aligned topology that can be produced on a wide variety of 3D printers, as well as post-processed with traditional 3D modeling tools. To date, objects with intricate spatial patterns and complex heterogeneous material compositions generated by this method can only be produced through 3D printing. © 2016 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "c174b7f1f6267ec75b1a9cac4bcaf2f7",
"text": "Issue tracking systems such as Bugzilla, Mantis and JIRA are Process Aware Information Systems to support business process of issue (defect and feature enhancement) reporting and resolution. The process of issue reporting to resolution consists of several steps or activities performed by various roles (bug reporter, bug triager, bug fixer, developers, and quality assurance manager) within the software maintenance team. Project teams define a workflow or a business process (design time process model and guidelines) to streamline and structure the issue management activities. However, the runtime process (reality) may not conform to the design time model and can have imperfections or inefficiencies. We apply business process mining tools and techniques to analyze the event log data (bug report history) generated by an issue tracking system with the objective of discovering runtime process maps, inefficiencies and inconsistencies. We conduct a case-study on data extracted from Bugzilla issue tracking system of the popular open-source Firefox browser project. We present and implement a process mining framework, Nirikshan, consisting of various steps: data extraction, data transformation, process discovery, performance analysis and conformance checking. We conduct a series of process mining experiments to study self-loops, back-and-forth, issue reopen, unique traces, event frequency, activity frequency, bottlenecks and present an algorithm and metrics to compute the degree of conformance between the design time and the runtime process.",
"title": ""
},
{
"docid": "8ea0ac6401d648e359fc06efa59658e6",
"text": "Different neural networks have exhibited excellent performance on various speech processing tasks, and they usually have specific advantages and disadvantages. We propose to use a recently developed deep learning model, recurrent convolutional neural network (RCNN), for speech processing, which inherits some merits of recurrent neural network (RNN) and convolutional neural network (CNN). The core module can be viewed as a convolutional layer embedded with an RNN, which enables the model to capture both temporal and frequency dependance in the spectrogram of the speech in an efficient way. The model is tested on speech corpus TIMIT for phoneme recognition and IEMOCAP for emotion recognition. Experimental results show that the model is competitive with previous methods in terms of accuracy and efficiency.",
"title": ""
},
{
"docid": "03356f32b78ae68603a59c23e8f4a01c",
"text": "1. Introduction The problem of estimating the dimensionality of a model occurs in various forms in applied statistics. There is estimating the number of factor in factor analysis, estimating the degree of a polynomial describing the data, selecting the variables to be introduced in a multiple regression equation, estimating the order of an AR or MA time series model, and so on. In factor analysis this problem was traditionally solved by eyeballing residual eigen-values, or by applying some other kind of heuristic procedure. When maximum likelihood factor analysis became computationally feasible the likelihoods for diierent dimensionalities could be compared. Most statisticians were aware of the fact that comparison of successive chi squares was not optimal in any well deened decision theoretic sense. With the advent of the electronic computer the forward and backward stepwise selection procedures in multiple regression also became quite popular, but again there were plenty of examples around showing that the procedures were not optimal and could easily lead one astray. When even more computational power became available one could solve the best subset selection problem for up to 20 or 30 variables, but choosing an appropriate criterion on the basis of which to compare the many models remains a problem. But exactly because of these advances in computation, nding a solution of the problem became more and more urgent. In the linear regression situation the C p criterion of Mallows (1973), which had already been around much longer, and the PRESS criterion of Allen (1971) were suggested. Although they seemed to work quite well, they were too limited in scope. The structural covariance models of Joreskog and others, and the log linear models of Goodman and others, made search over a much more complicated set of models necessary, and the model choice problems in those contexts could not be attacked by inherently linear methods. Three major closely related developments occurred around 1974. Akaike (1973) introduced the information criterion for model selection, generalizing his earlier work on time series analysis and factor analysis. Stone (1974) reintroduced and systematized cross",
"title": ""
},
{
"docid": "5179662c841302180848dc566a114f10",
"text": "Hyperspectral image (HSI) unmixing has attracted increasing research interests in recent decades. The major difficulty of it lies in that the endmembers and the associated abundances need to be separated from highly mixed observation data with few a priori information. Recently, sparsity-constrained nonnegative matrix factorization (NMF) algorithms have been proved effective for hyperspectral unmixing (HU) since they can sufficiently utilize the sparsity property of HSIs. In order to improve the performance of NMF-based unmixing approaches, spectral and spatial constrains have been added into the unmixing model, but spectral-spatial joint structure is required to be more accurately estimated. To exploit the property that similar pixels within a small spatial neighborhood have higher possibility to share similar abundances, hypergraph structure is employed to capture the similarity relationship among the spatial nearby pixels. In the construction of a hypergraph, each pixel is taken as a vertex of the hypergraph, and each vertex with its k nearest spatial neighboring pixels form a hyperedge. Using the hypergraph, the pixels with similar abundances can be accurately found, which enables the unmixing algorithm to obtain promising results. Experiments on synthetic data and real HSIs are conducted to investigate the performance of the proposed algorithm. The superiority of the proposed algorithm is demonstrated by comparing it with some state-of-the-art methods.",
"title": ""
},
{
"docid": "5cc7f7aae87d95ea38c2e5a0421e0050",
"text": "Scrum is a structured framework to support complex product development. However, Scrum methodology faces a challenge of managing large teams. To address this challenge, in this paper we propose a solution called Scrum of Scrums. In Scrum of Scrums, we divide the Scrum team into teams of the right size, and then organize them hierarchically into a Scrum of Scrums. The main goals of the proposed solution are to optimize communication between teams in Scrum of Scrums; to make the system work after integration of all parts; to reduce the dependencies between the parts of system; and to prevent the duplication of parts in the system. [Qurashi SA, Qureshi MRJ. Scrum of Scrums Solution for Large Size Teams Using Scrum Methodology. Life Sci J 2014;11(8):443-449]. (ISSN:1097-8135). http://www.lifesciencesite.com. 58",
"title": ""
},
{
"docid": "db9f6e58adc2a3ce423eed3223d88b19",
"text": "The self-organizing map (SOM) is an excellent tool in exploratory phase of data mining. It projects input space on prototypes of a low-dimensional regular grid that can be effectively utilized to visualize and explore properties of the data. When the number of SOM units is large, to facilitate quantitative analysis of the map and the data, similar units need to be grouped, i.e., clustered. In this paper, different approaches to clustering of the SOM are considered. In particular, the use of hierarchical agglomerative clustering and partitive clustering using k-means are investigated. The two-stage procedure--first using SOM to produce the prototypes that are then clustered in the second stage--is found to perform well when compared with direct clustering of the data and to reduce the computation time.",
"title": ""
},
{
"docid": "5adaee6e03fdd73ebed40804b9cad326",
"text": "Quantum circuits exhibit several features of large-scale distributed systems. They have a concise design formalism but behavior that is challenging to represent let alone predict. Issues of scalability—both in the yet-to-be-engineered quantum hardware and in classical simulators—are paramount. They require sparse representations for efficient modeling. Whereas simulators represent both the system’s current state and its operations directly, emulators manipulate the images of system states under a mapping to a different formalism. We describe three such formalisms for quantum circuits. The first two extend the polynomial construction of Dawson et al. [1] to (i) work for any set of quantum gates obeying a certain “balance” condition and (ii) produce a single polynomial over any sufficiently structured field or ring. The third appears novel and employs only simple Boolean formulas, optionally limited to a form we call “parity-of-AND” equations. Especially the third can combine with off-the-shelf state-of-the-art third-party software, namely model counters and #SAT solvers, that we show capable of vast improvements in the emulation time in natural instances. We have programmed all three constructions to proof-of-concept level and report some preliminary tests and applications. These include algebraic analysis of special quantum circuits and the possibility of a new classical attack on the factoring problem. Preliminary comparisons are made with the libquantum simulator[2–4]. 1 A Brief But Full QC Introduction A quantum circuit is a compact representation of a computational system. It consists of some number m of qubits represented by lines resembling a musical staff, and some number s of gates arrayed like musical notes and chords. Here is an example created using the popular visual simulator [5]: Fig. 1. A five-qubit quantum circuit that computes a Fourier transform on the first four qubits. The circuit C operates on m = 5 qubits. The input is the binary string x = 10010. The first n = 4 qubits see most of the action and hold the nominal input x0 = 1001 of length n = 4, while the fifth qubit is an ancilla initialized to 0 whose purpose here is to hold the nominal output bit. The circuit has thirteen gates. Six of them have a single control represented by a black dot; they activate if and only if the control receives a 1 signal. The last gate has two controls and a target represented by the parity symbol ⊕ rather than a labeled box. Called a Toffoli gate, it will set the output bit if and only if both controls receive a 1 signal. The two gates before it merely swap the qubits 2 and 3 and 1 and 4, respectively. They have no effect on the output and are included here only to say that the first twelve gates combine to compute the quantum Fourier transform QFT4. This is just the ordinary discrete Fourier transform F16 on 2 4 = 16 coordinates. The actual output C(x) of the circuit is a quantum state Z that belongs to the complex vector space C. Nine of its entries in the standard basis are shown in Figure 1; seven more were cropped from the screenshot. Sixteen of the components are absent, meaning Z has 0 in the corresponding coordinates. Despite the diversity of the nine complex entries ZL shown, each has magnitude |ZL| = 0.0625. In general, |ZL| represents the probability that a measurement—of all qubits—will yield the binary string z ∈ { 0, 1 } corresponding to the coordinate L under the standard ordered enumeration of { 0, 1 }. 
Here we are interested in those z whose final entry z5 is a 1. Two of them are shown; two others (11101 and 11111) are possible and also have probability 1 16 each, making a total of 1 4 probability for getting z5 = 1. Owing to the “cylindrical” nature of the set B of strings ending in 1, a measurement of just the fifth qubit yields 1 with probability 1 4 . Where does the probability come from? The physical answer is that it is an indelible aspect of nature as expressed by quantum mechanics. For our purposes the computational answer is that it comes from the four gates labeled H, for Hadamard gate. Each supplies one bit of nondeterminism, giving four bits in all, which govern the sixteen possible outcomes of this particular example. It is a mistake to think that the probabilities must be equally spread out and must be multiples of 1/2 where h is the number of Hadamard gates. Appending just one more Hadamard gate at the right end of the third qubit line creates nonzero probabilities as low as 0.0183058 . . . and as high as 0.106694 . . . , each appearing for four outcomes of 24 nonzero possibilities. This happens because the component values follow wave equations that can amplify some values while reducing or zeroing the amplitude of others via interference. Indeed, the goal of quantum computing is to marshal most of the amplitude onto a small set of desired outcomes, so that measurements— that is to say, quantum sampling—will reveal one of them with high probability. All of this indicates the burgeoning complexity of quantum systems. Our original circuit has 5 qubits, 4 nondeterministic gates, and 9 other gates, yet there are 2 = 32 components of the vectors representing states, 32 basic inputs and outputs, and 2 = 16 branchings to consider. Adding the fifth Hadamard gate creates a new fork in every path through the system, giving 32 branchings. The whole circuit C defines a 32× 32 matrix UC in which the I-th row encodes the quantum state ΦI resulting from computation on the standard basis vector x = eI . The matrix is unitary, meaning that UC multiplied by its conjugate transpose U∗ C gives the 32× 32 identity matrix. Indeed, UC is the product of thirteen simpler matrices U` representing the respective gates (` = 1, . . . , s with s = 13). Here each gate engages only a subset of the qubits of arity r < m, so that U` decomposes into its 2 r × 2 unitary gate matrix and the identity action (represented by the 2× 2 identity matrix I) on the other m− r lines. Here are some single-qubit gate matrices: H = 1 √ 2 [ 1 1 1 −1 ]",
"title": ""
},
{
"docid": "f17b3a6c31daeee0ae0a8ebc7a14e16c",
"text": "In full-duplex (FD) radios, phase noise leads to random phase mismatch between the self-interference (SI) and the reconstructed cancellation signal, resulting in possible performance degradation during SI cancellation. To explicitly analyze its impacts on the digital SI cancellation, an orthogonal frequency division multiplexing (OFDM)-modulated FD radio is considered with phase noises at both the transmitter and receiver. The closed-form expressions for both the digital cancellation capability and its limit for the large interference-to-noise ratio (INR) case are derived in terms of the power of the common phase error, INR, desired signal-to-noise ratio (SNR), channel estimation error and transmission delay. Based on the obtained digital cancellation capability, the achievable rate region of a two-way FD OFDM system with phase noise is characterized. Then, with a limited SI cancellation capability, the maximum outer bound of the rate region is proved to exist for sufficiently large transmission power. Furthermore, a minimum transmission power is obtained to achieve $\\beta$ -portion of the cancellation capability limit and to ensure that the outer bound of the rate region is close to its maximum.",
"title": ""
},
{
"docid": "a86bc0970dba249e1e53f9edbad3de43",
"text": "Periodic inspection of a hanger rope is needed for the effective maintenance of suspension bridge. However, it is dangerous for human workers to access the hanger rope and not easy to check the exact state of the hanger rope. In this work we have developed a wheel-based robot that can approach the hanger rope instead of the human worker and carry the inspection device which is able to examine the inside status of the hanger rope. Meanwhile, a wheel-based cable climbing robot may be badly affected by the vibration that is generated while the robot moves on the bumpy surface of the hanger rope. The caterpillar is able to safely drive with the wide contact face on the rough terrain. Accordingly, we developed the caterpillar that can be combined with the developed cable climbing robot. In this paper, the caterpillar is introduced and its performance is compared with the wheel-based cable climbing robot.",
"title": ""
},
{
"docid": "12ba0cd3db135168b48e062cca1d1d32",
"text": "We consider the problem of localizing a novel image in a large 3D model, given that the gravitational vector is known. In principle, this is just an instance of camera pose estimation, but the scale of the problem introduces some interesting challenges. Most importantly, it makes the correspondence problem very difficult so there will often be a significant number of outliers to handle. To tackle this problem, we use recent theoretical as well as technical advances. Many modern cameras and phones have gravitational sensors that allow us to reduce the search space. Further, there are new techniques to efficiently and reliably deal with extreme rates of outliers. We extend these methods to camera pose estimation by using accurate approximations and fast polynomial solvers. Experimental results are given demonstrating that it is possible to reliably estimate the camera pose despite cases with more than 99 percent outlier correspondences in city-scale models with several millions of 3D points.",
"title": ""
},
{
"docid": "1277b7b45f5a54eec80eb8ab47ee3fbb",
"text": "Latent variable models, and probabilistic graphical models more generally, provide a declarative language for specifying prior knowledge and structural relationships in complex datasets. They have a long and rich history in natural language processing, having contributed to fundamental advances such as statistical alignment for translation (Brown et al., 1993), topic modeling (Blei et al., 2003), unsupervised part-of-speech tagging (Brown et al., 1992), and grammar induction (Klein and Manning, 2004), among others. Deep learning, broadly construed, is a toolbox for learning rich representations (i.e., features) of data through numerical optimization. Deep learning is the current dominant paradigm in natural language processing, and some of the major successes include language modeling (Bengio et al., 2003; Mikolov et al., 2010; Zaremba et al., 2014), machine translation (Sutskever et al., 2014; Cho et al., 2014; Bahdanau et al., 2015; Vaswani et al., 2017), and natural language understanding tasks such as question answering and natural language inference.",
"title": ""
},
{
"docid": "c8ebf32413410a5d91defbb19a73b6f3",
"text": "BACKGROUND\nAudit and feedback is widely used as a strategy to improve professional practice either on its own or as a component of multifaceted quality improvement interventions. This is based on the belief that healthcare professionals are prompted to modify their practice when given performance feedback showing that their clinical practice is inconsistent with a desirable target. Despite its prevalence as a quality improvement strategy, there remains uncertainty regarding both the effectiveness of audit and feedback in improving healthcare practice and the characteristics of audit and feedback that lead to greater impact.\n\n\nOBJECTIVES\nTo assess the effects of audit and feedback on the practice of healthcare professionals and patient outcomes and to examine factors that may explain variation in the effectiveness of audit and feedback.\n\n\nSEARCH METHODS\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL) 2010, Issue 4, part of The Cochrane Library. www.thecochranelibrary.com, including the Cochrane Effective Practice and Organisation of Care (EPOC) Group Specialised Register (searched 10 December 2010); MEDLINE, Ovid (1950 to November Week 3 2010) (searched 09 December 2010); EMBASE, Ovid (1980 to 2010 Week 48) (searched 09 December 2010); CINAHL, Ebsco (1981 to present) (searched 10 December 2010); Science Citation Index and Social Sciences Citation Index, ISI Web of Science (1975 to present) (searched 12-15 September 2011).\n\n\nSELECTION CRITERIA\nRandomised trials of audit and feedback (defined as a summary of clinical performance over a specified period of time) that reported objectively measured health professional practice or patient outcomes. In the case of multifaceted interventions, only trials in which audit and feedback was considered the core, essential aspect of at least one intervention arm were included.\n\n\nDATA COLLECTION AND ANALYSIS\nAll data were abstracted by two independent review authors. For the primary outcome(s) in each study, we calculated the median absolute risk difference (RD) (adjusted for baseline performance) of compliance with desired practice compliance for dichotomous outcomes and the median percent change relative to the control group for continuous outcomes. Across studies the median effect size was weighted by number of health professionals involved in each study. We investigated the following factors as possible explanations for the variation in the effectiveness of interventions across comparisons: format of feedback, source of feedback, frequency of feedback, instructions for improvement, direction of change required, baseline performance, profession of recipient, and risk of bias within the trial itself. We also conducted exploratory analyses to assess the role of context and the targeted clinical behaviour. Quantitative (meta-regression), visual, and qualitative analyses were undertaken to examine variation in effect size related to these factors.\n\n\nMAIN RESULTS\nWe included and analysed 140 studies for this review. In the main analyses, a total of 108 comparisons from 70 studies compared any intervention in which audit and feedback was a core, essential component to usual care and evaluated effects on professional practice. 
After excluding studies at high risk of bias, there were 82 comparisons from 49 studies featuring dichotomous outcomes, and the weighted median adjusted RD was a 4.3% (interquartile range (IQR) 0.5% to 16%) absolute increase in healthcare professionals' compliance with desired practice. Across 26 comparisons from 21 studies with continuous outcomes, the weighted median adjusted percent change relative to control was 1.3% (IQR = 1.3% to 28.9%). For patient outcomes, the weighted median RD was -0.4% (IQR -1.3% to 1.6%) for 12 comparisons from six studies reporting dichotomous outcomes and the weighted median percentage change was 17% (IQR 1.5% to 17%) for eight comparisons from five studies reporting continuous outcomes. Multivariable meta-regression indicated that feedback may be more effective when baseline performance is low, the source is a supervisor or colleague, it is provided more than once, it is delivered in both verbal and written formats, and when it includes both explicit targets and an action plan. In addition, the effect size varied based on the clinical behaviour targeted by the intervention.\n\n\nAUTHORS' CONCLUSIONS\nAudit and feedback generally leads to small but potentially important improvements in professional practice. The effectiveness of audit and feedback seems to depend on baseline performance and how the feedback is provided. Future studies of audit and feedback should directly compare different ways of providing feedback.",
"title": ""
},
{
"docid": "8222f8eae81c954e8e923cbd883f8322",
"text": "Work stealing is a promising approach to constructing multithreaded program runtimes of parallel programming languages. This paper presents HERMES, an energy-efficient work-stealing language runtime. The key insight is that threads in a work-stealing environment -- thieves and victims - have varying impacts on the overall program running time, and a coordination of their execution \"tempo\" can lead to energy efficiency with minimal performance loss. The centerpiece of HERMES is two complementary algorithms to coordinate thread tempo: the workpath-sensitive algorithm determines tempo for each thread based on thief-victim relationships on the execution path, whereas the workload-sensitive algorithm selects appropriate tempo based on the size of work-stealing deques. We construct HERMES on top of Intel Cilk Plus's runtime, and implement tempo adjustment through standard Dynamic Voltage and Frequency Scaling (DVFS). Benchmarks running on HERMES demonstrate an average of 11-12% energy savings with an average of 3-4% performance loss through meter-based measurements over commercial CPUs.",
"title": ""
},
{
"docid": "0c832dde1c268ec32e7fca64158abb31",
"text": "For many years, the clinical laboratory's focus on analytical quality has resulted in an error rate of 4-5 sigma, which surpasses most other areas in healthcare. However, greater appreciation of the prevalence of errors in the pre- and post-analytical phases and their potential for patient harm has led to increasing requirements for laboratories to take greater responsibility for activities outside their immediate control. Accreditation bodies such as the Joint Commission International (JCI) and the College of American Pathologists (CAP) now require clear and effective procedures for patient/sample identification and communication of critical results. There are a variety of free on-line resources available to aid in managing the extra-analytical phase and the recent publication of quality indicators and proposed performance levels by the International Federation of Clinical Chemistry and Laboratory Medicine (IFCC) working group on laboratory errors and patient safety provides particularly useful benchmarking data. Managing the extra-laboratory phase of the total testing cycle is the next challenge for laboratory medicine. By building on its existing quality management expertise, quantitative scientific background and familiarity with information technology, the clinical laboratory is well suited to play a greater role in reducing errors and improving patient safety outside the confines of the laboratory.",
"title": ""
},
{
"docid": "3e63c8a5499966f30bd3e6b73494ff82",
"text": "Events can be understood in terms of their temporal structure. The authors first draw on several bodies of research to construct an analysis of how people use event structure in perception, understanding, planning, and action. Philosophy provides a grounding for the basic units of events and actions. Perceptual psychology provides an analogy to object perception: Like objects, events belong to categories, and, like objects, events have parts. These relationships generate 2 hierarchical organizations for events: taxonomies and partonomies. Event partonomies have been studied by looking at how people segment activity as it happens. Structured representations of events can relate partonomy to goal relationships and causal structure; such representations have been shown to drive narrative comprehension, memory, and planning. Computational models provide insight into how mental representations might be organized and transformed. These different approaches to event structure converge on an explanation of how multiple sources of information interact in event perception and conception.",
"title": ""
}
] |
scidocsrr
|
aeda9eaa4612e415ab71fee3c89a3959
|
Object Tracking / Pose Estimation / Object Detection / Action Recognition / Autonomous Navigation / 3D Reconstruction / Crowd
|
[
{
"docid": "79d6aa27e761b25348481ffed15a8bd9",
"text": "Correlation filter (CF) based trackers have recently gained a lot of popularity due to their impressive performance on benchmark datasets, while maintaining high frame rates. A significant amount of recent research focuses on the incorporation of stronger features for a richer representation of the tracking target. However, this only helps to discriminate the target from background within a small neighborhood. In this paper, we present a framework that allows the explicit incorporation of global context within CF trackers. We reformulate the original optimization problem and provide a closed form solution for single and multi-dimensional features in the primal and dual domain. Extensive experiments demonstrate that this framework significantly improves the performance of many CF trackers with only a modest impact on frame rate.",
"title": ""
}
] |
[
{
"docid": "f11db33a0eb2ab985189866e2a57c7e2",
"text": "Age estimation based on the human face remains a significant problem in computer vision and pattern recognition. In order to estimate an accurate age or age group of a facial image, most of the existing algorithms require a huge face data set attached with age labels. This imposes a constraint on the utilization of the immensely unlabeled or weakly labeled training data, e.g., the huge amount of human photos in the social networks. These images may provide no age label, but it is easy to derive the age difference for an image pair of the same person. To improve the age estimation accuracy, we propose a novel learning scheme to take advantage of these weakly labeled data through the deep convolutional neural networks. For each image pair, Kullback–Leibler divergence is employed to embed the age difference information. The entropy loss and the cross entropy loss are adaptively applied on each image to make the distribution exhibit a single peak value. The combination of these losses is designed to drive the neural network to understand the age gradually from only the age difference information. We also contribute a data set, including more than 100 000 face images attached with their taken dates. Each image is both labeled with the timestamp and people identity. Experimental results on two aging face databases show the advantages of the proposed age difference learning system, and the state-of-the-art performance is gained.",
"title": ""
},
{
"docid": "99c0aca4e244fab49344723b95a913d7",
"text": "The functional identity of centromeres arises from a set of specific nucleoprotein particle subunits of the centromeric chromatin fibre. These include CENP-A and histone H3 nucleosomes and a novel nucleosome-like complex of CENPs -T, -W, -S and -X. Fluorescence cross-correlation spectroscopy and Förster resonance energy transfer (FRET) revealed that human CENP-S and -X exist principally in complex in soluble form and retain proximity when assembled at centromeres. Conditional labelling experiments show that they both assemble de novo during S phase and G2, increasing approximately three- to fourfold in abundance at centromeres. Fluorescence recovery after photobleaching (FRAP) measurements documented steady-state exchange between soluble and assembled pools, with CENP-X exchanging approximately 10 times faster than CENP-S (t1/2 ∼ 10 min versus 120 min). CENP-S binding to sites of DNA damage was quite distinct, with a FRAP half-time of approximately 160 s. Fluorescent two-hybrid analysis identified CENP-T as a uniquely strong CENP-S binding protein and this association was confirmed by FRET, revealing a centromere-bound complex containing CENP-S, CENP-X and CENP-T in proximity to histone H3 but not CENP-A. We propose that deposition of the CENP-T/W/S/X particle reveals a kinetochore-specific chromatin assembly pathway that functions to switch centromeric chromatin to a mitosis-competent state after DNA replication. Centromeres shuttle between CENP-A-rich, replication-competent and H3-CENP-T/W/S/X-rich mitosis-competent compositions in the cell cycle.",
"title": ""
},
{
"docid": "c446a98b6fd9fca75bb9255c6c3aadc7",
"text": "This paper describes the development of a video/game art project being produced by media artist Bill Viola in collaboration with a team from the USC Game Innovation Lab, which uses a combination of both video and game technologies to explore the universal experience of an individual's journey towards enlightenment. Here, we discuss both the creative and technical approaches to achieving the project's goals of evoking in the player the sense of undertaking a spiritual journey.",
"title": ""
},
{
"docid": "8655f49c0fe74d0a13bb312846f2c7df",
"text": "In this paper, we propose a novel two-stream framework based on combinational deep neural networks. The framework is mainly composed of two components: one is a parallel two-stream encoding component which learns video encoding from multiple sources using 3D convolutional neural networks and the other is a long-short-term-memory (LSTM)-based decoding language model which transfers the input encoded video representations to text descriptions. The merits of our proposed model are: 1) It extracts both temporal and spatial features by exploring the usage of 3D convolutional networks on both raw RGB frames and motion history images. 2) Our model can dynamically tune the weights of different feature channels since the network is trained end-to-end from learning combinational encoding of multiple features to LSTM-based language model. Our model is evaluated on three public video description datasets: one YouTube clips dataset (Microsoft Video Description Corpus) and two large movie description datasets (MPII Corpus and Montreal Video Annotation Dataset) and achieves comparable or better performance than the state-of-the-art approaches in video caption generation.",
"title": ""
},
{
"docid": "89eb311b9d901118dd89fa42e50fef2a",
"text": "Many studies have been conducted so far to build systems for recommending fashion items and outfits. Although they achieve good performances in their respective tasks, most of them cannot explain their judgments to the users, which compromises their usefulness. Toward explainable fashion recommendation, this study proposes a system that is able not only to provide a goodness score for an outfit but also to explain the score by providing reason behind it. For this purpose, we propose a method for quantifying how influential each feature of each item is to the score. Using this influence value, we can identify which item and what feature make the outfit good or bad. We represent the image of each item with a combination of human-interpretable features, and thereby the identification of the most influential item-feature pair gives useful explanation of the output score. To evaluate the performance of this approach, we design an experiment that can be performed without human annotation; we replace a single item-feature pair in an outfit so that the score will decrease, and then we test if the proposed method can detect the replaced item correctly using the above influence values. The experimental results show that the proposed method can accurately detect bad items in outfits lowering their scores.",
"title": ""
},
{
"docid": "e724d4405f50fd74a2184187dcc52401",
"text": "This paper presents security of Internet of things. In the Internet of Things vision, every physical object has a virtual component that can produce and consume services Such extreme interconnection will bring unprecedented convenience and economy, but it will also require novel approaches to ensure its safe and ethical use. The Internet and its users are already under continual attack, and a growing economy-replete with business models that undermine the Internet's ethical use-is fully focused on exploiting the current version's foundational weaknesses.",
"title": ""
},
{
"docid": "4b3576e6451fa78886ce440e55b04979",
"text": "In this paper, we model the document revision detection problem as a minimum cost branching problem that relies on computing document distances. Furthermore, we propose two new document distance measures, word vector-based Dynamic Time Warping (wDTW) and word vector-based Tree Edit Distance (wTED). Our revision detection system is designed for a large scale corpus and implemented in Apache Spark. We demonstrate that our system can more precisely detect revisions than state-of-the-art methods by utilizing the Wikipedia revision dumps 1 and simulated data sets.",
"title": ""
},
{
"docid": "ca5f251364ddf21e4cecf25cda5b575d",
"text": "This paper discusses \"bioink\", bioprintable materials used in three dimensional (3D) bioprinting processes, where cells and other biologics are deposited in a spatially controlled pattern to fabricate living tissues and organs. It presents the first comprehensive review of existing bioink types including hydrogels, cell aggregates, microcarriers and decellularized matrix components used in extrusion-, droplet- and laser-based bioprinting processes. A detailed comparison of these bioink materials is conducted in terms of supporting bioprinting modalities and bioprintability, cell viability and proliferation, biomimicry, resolution, affordability, scalability, practicality, mechanical and structural integrity, bioprinting and post-bioprinting maturation times, tissue fusion and formation post-implantation, degradation characteristics, commercial availability, immune-compatibility, and application areas. The paper then discusses current limitations of bioink materials and presents the future prospects to the reader.",
"title": ""
},
{
"docid": "7c98d4c1ab375526c426f8156650cb22",
"text": "Online privacy remains an ongoing source of debate in society. Sensitive to this, many web platforms are offering users greater, more granular control over how and when their information is revealed. However, recent research suggests that information control mechanisms of this sort are not necessarily of economic benefit to the parties involved. We examine the use of these mechanisms and their economic consequences, leveraging data from one of the world's largest global crowdfunding platforms, where contributors can conceal their identity or contribution amounts from public display. We find that information hiding is more likely when contributors are under greater scrutiny or exhibiting “undesirable” behavior. We also identify an anchoring effect from prior contributions, which is eliminated when earlier contributors conceal their amounts. Subsequent analyses indicate that a nuanced approach to the design and provision of information control mechanisms, such as varying default settings based on contribution amounts, can help promote larger contributions.",
"title": ""
},
{
"docid": "d401630481d725ae3d853b126710da31",
"text": "Combinatory Category Grammar (CCG) supertagging is a task to assign lexical categories to each word in a sentence. Almost all previous methods use fixed context window sizes to encode input tokens. However, it is obvious that different tags usually rely on different context window sizes. This motivates us to build a supertagger with a dynamic window approach, which can be treated as an attention mechanism on the local contexts. We find that applying dropout on the dynamic filters is superior to the regular dropout on word embeddings. We use this approach to demonstrate the state-ofthe-art CCG supertagging performance on the standard test set. Introduction Combinatory Category Grammar (CCG) provides a connection between syntax and semantics of natural language. The syntax can be specified by derivations of the lexicon based on the combinatory rules, and the semantics can be recovered from a set of predicate-argument relations. CCG provides an elegant solution for a wide range of semantic analysis, such as semantic parsing (Zettlemoyer and Collins 2007; Kwiatkowski et al. 2010; 2011; Artzi, Lee, and Zettlemoyer 2015), semantic representations (Bos et al. 2004; Bos 2005; 2008; Lewis and Steedman 2013), and semantic compositions, all of which heavily depend on the supertagging and parsing performance. All these motivate us to build a more accurate CCG supertagger. CCG supertagging is the task to predict the lexical categories for each word in a sentence. Existing algorithms on CCG supertagging range from point estimation (Clark and Curran 2007; Lewis and Steedman 2014) to sequential estimation (Xu, Auli, and Clark 2015; Lewis, Lee, and Zettlemoyer 2016; Vaswani et al. 2016), which predict the most probable supertag of the current word according to the context in a fixed size window. This fixed size window assumption is too strong to generalize. We argue this from two perspectives. One perspective comes from the inputs. For a particular word, the number of its categories may vary from 1 to 130 in CCGBank 02-21 (Hockenmaier and Steedman 2007). We ∗Corresponding author. Copyright c © 2017, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. on a warm autumn day ...",
"title": ""
},
{
"docid": "048081246f39fc80273d08493c770016",
"text": "Skin detection is used in many applications, such as face recognition, hand tracking, and human-computer interaction. There are many skin color detection algorithms that are used to extract human skin color regions that are based on the thresholding technique since it is simple and fast for computation. The efficiency of each color space depends on its robustness to the change in lighting and the ability to distinguish skin color pixels in images that have a complex background. For more accurate skin detection, we are proposing a new threshold based on RGB and YUV color spaces. The proposed approach starts by converting the RGB color space to the YUV color model. Then it separates the Y channel, which represents the intensity of the color model from the U and V channels to eliminate the effects of luminance. After that the threshold values are selected based on the testing of the boundary of skin colors with the help of the color histogram. Finally, the threshold was applied to the input image to extract skin parts. The detected skin regions were quantitatively compared to the actual skin parts in the input images to measure the accuracy and to compare the results of our threshold to the results of other’s thresholds to prove the efficiency of our approach. The results of the experiment show that the proposed threshold is more robust in terms of dealing with the complex background and light conditions than others. Keyword: Skin segmentation; Thresholding technique; Skin detection; Color space",
"title": ""
},
{
"docid": "5b675ea7554dc8bf1707ecb4c4055de7",
"text": "Researchers have highlighted that the main factors that contribute to IT service failure are the people, process and technology. However, relatively few empirical studies examine to what degree these factors contribute to service disruptions in the public sector organizations. This study explores the IT service management (ITSM) at eight (8) Front-end Agencies, four (4) Ministries and six (6) Departments in Malaysian public service to identify the level of contribution of each factor to the public IT service disruptions. This study was undertaken using questionnaires via stratified sampling. The empirical results reveal that human action, decision, management, error and failure are the major causes to the IT service disruptions followed by an improper process or procedures and technology failure. In addition, we can conclude that human is an important factor and need to give more attention by the management since human is the creator, who uses, manages and maintains the technology and process to enable the delivery of services as specified in the objectives, vision and mission of the organization. Although the literature states that human failure was due to knowledge, skill, attitude and behavior of an individual and the organization environment, but no literature was found studies on what characteristics of human and environmental organizations that make up the resilience service delivery and the creation of an organization that is resilient. Future research on what characteristics on human and organization environmental that contribute to organizational and business resilience is suggested at the end of the paper. However, this paper only covers literature that discussed in depth the type of human failure and the cause of failure. Nevertheless, it is believed that the findings provide a valuable understanding of the current situation in this research field.",
"title": ""
},
{
"docid": "5bf4a17592eca1881a93cd4930f4187d",
"text": "The problem of automatically generating a computer program from some specification has been studied since the early days of AI. Recently, two competing approaches for automatic program learning have received significant attention: (1) neural program synthesis, where a neural network is conditioned on input/output (I/O) examples and learns to generate a program, and (2) neural program induction, where a neural network generates new outputs directly using a latent program representation. Here, for the first time, we directly compare both approaches on a large-scale, real-world learning task and we additionally contrast to rule-based program synthesis, which uses hand-crafted semantics to guide the program generation. Our neural models use a modified attention RNN to allow encoding of variable-sized sets of I/O pairs, which achieve 92% accuracy on a real-world test set, compared to the 34% accuracy of the previous best neural synthesis approach. The synthesis model also outperforms a comparable induction model on this task, but we more importantly demonstrate that the strength of each approach is highly dependent on the evaluation metric and end-user application. Finally, we show that we can train our neural models to remain very robust to the type of noise expected in real-world data (e.g., typos), while a highly-engineered rule-based system fails entirely.",
"title": ""
},
{
"docid": "05edf6dc5d4b9726773f56dafc620619",
"text": "Software systems running continuously for a long time tend to show degrading performance and an increasing failure occurrence rate, due to error conditions that accrue over time and eventually lead the system to failure. This phenomenon is usually referred to as \\textit{Software Aging}. Several long-running mission and safety critical applications have been reported to experience catastrophic aging-related failures. Software aging sources (i.e., aging-related bugs) may be hidden in several layers of a complex software system, ranging from the Operating System (OS) to the user application level. This paper presents a software aging analysis at the Operating System level, investigating software aging sources inside the Linux kernel. Linux is increasingly being employed in critical scenarios; this analysis intends to shed light on its behaviour from the aging perspective. The study is based on an experimental campaign designed to investigate the kernel internal behaviour over long running executions. By means of a kernel tracing tool specifically developed for this study, we collected relevant parameters of several kernel subsystems. Statistical analysis of collected data allowed us to confirm the presence of aging sources in Linux and to relate the observed aging dynamics to the monitored subsystems behaviour. The analysis output allowed us to infer potential sources of aging in the kernel subsystems.",
"title": ""
},
{
"docid": "38cf4762ce867ff39a3e0f892758ddfd",
"text": "Quality control of food inventories in the warehouse is complex as well as challenging due to the fact that food can easily deteriorate. Currently, this difficult storage problem is managed mostly by using a human dependent quality assurance and decision making process. This has however, occasionally led to unimaginative, arduous and inconsistent decisions due to the injection of subjective human intervention into the process. Therefore, it could be said that current practice is not powerful enough to support high-quality inventory management. In this paper, the development of an integrative prototype decision support system, namely, Intelligent Food Quality Assurance System (IFQAS) is described which will assist the process by automating the human based decision making process in the quality control of food storage. The system, which is composed of a Case-based Reasoning (CBR) engine and a Fuzzy rule-based Reasoning (FBR) engine, starts with the receipt of incoming food inventory. With the CBR engine, certain quality assurance operations can be suggested based on the attributes of the food received. Further of this, the FBR engine can make suggestions on the optimal storage conditions of inventory by systematically evaluating the food conditions when the food is receiving. With the assistance of the system, a holistic monitoring in quality control of the receiving operations and the storage conditions of the food in the warehouse can be performed. It provides consistent and systematic Quality Assurance Guidelines for quality control which leads to improvement in the level of customer satisfaction and minimization of the defective rate. 2011 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "12b15731e6ad4798cca1d04c4217e0e0",
"text": "Bed surface particle size patchiness may play a central role in bedload and morphologic response to changes in sediment supply in gravel-bed rivers. Here we test a 1-D model (from Parker ebook) of bedload transport, surface grain size, and channel profile with two previously published flume studies that documented bed surface response, and specifically patch development, to reduced sediment supply. The model over predicts slope changes and under predicts average bed surface grain size changes because it does not account for patch dynamics. Field studies reported here using painted rocks as tracers show that fine patches and coarse patches may initiate transport at the same stage, but that much greater transport occurs in the finer patches. A theory for patch development should include grain interactions (similar size grains stopping each other, fine ones mobilizing coarse particles), effects of boundary shear stress divergence, and sorting due to cross-stream sloping bed surfaces.",
"title": ""
},
{
"docid": "bd5784e7f8565382dae997d6930cc3b1",
"text": "Multivariate time series (MTS) data are widely used in a very broad range of fields, including medicine, finance, multimedia and engineering. In this paper a new approach for MTS classification, using a parametric derivative dynamic time warping distance, is proposed. Our approach combines two distances: the DTW distance between MTS and the DTW distance between derivatives of MTS. The new distance is used in classification with the nearest neighbor rule. Experimental results performed on 18 data sets demonstrate the effectiveness of the proposed approach for MTS classification. 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "c70b485a1ba4c7bb1e3b5ab273cc9156",
"text": "Evidence has amassed from both animal intracranial recordings and human electrophysiology that neural oscillatory mechanisms play a critical role in a number of cognitive functions such as learning, memory, feature binding and sensory gating. The wide availability of high-density electrical and magnetic recordings (64-256 channels) over the past two decades has allowed for renewed efforts in the characterization and localization of these rhythms. A variety of cognitive effects that are associated with specific brain oscillations have been reported, which range in spectral, temporal, and spatial characteristics depending on the context. Our laboratory has focused on investigating the role of alpha-band oscillatory activity (8-14 Hz) as a potential attentional suppression mechanism, and this particular oscillatory attention mechanism will be the focus of the current review. We discuss findings in the context of intersensory selective attention as well as intrasensory spatial and feature-based attention in the visual, auditory, and tactile domains. The weight of evidence suggests that alpha-band oscillations can be actively invoked within cortical regions across multiple sensory systems, particularly when these regions are involved in processing irrelevant or distracting information. That is, a central role for alpha seems to be as an attentional suppression mechanism when objects or features need to be specifically ignored or selected against.",
"title": ""
},
{
"docid": "d1b6007cfb2f8d6227817ab482758bc5",
"text": "Patient Health Monitoring is the one of the field that is rapidly growing very fast nowadays with the advancement of technologies many researchers have come with differentdesigns for patient health monitoring systems as per the technological development. With the widespread of internet, Internet of things is among of the emerged field recently in which many have been able to incorporate it into different applications. In this paper we introduce the system called Iot based patient health monitoring system using LabVIEW and Wireless Sensor Network (WSN).The system will be able to take patients physiological parameters and transmit it wirelessly via Xbees, displays the sensor data onLabVIEW and publish on webserver to enable other health care givers from far distance to visualize, control and monitor continuously via internet connectivity.",
"title": ""
},
{
"docid": "88c1ab7e817118ee01fb28bf32ed2e23",
"text": "Field experiment was conducted on fodder maize to explore the potential of integrated use of chemical, organic and biofertilizers for improving maize growth, beneficial microflora in the rhizosphere and the economic returns. The treatments were designed to make comparison of NPK fertilizer with different combinations of half dose of NP with organic and biofertilizers viz. biological potassium fertilizer (BPF), Biopower, effective microorganisms (EM) and green force compost (GFC). Data reflected maximum crop growth in terms of plant height, leaf area and fresh biomass with the treatment of full NPK; and it was followed by BPF+full NP. The highest uptake of NPK nutrients by crop was recorded as: N under half NP+Biopower; P in BPF+full NP; and K from full NPK. The rhizosphere microflora enumeration revealed that Biopower+EM applied along with half dose of GFC soil conditioner (SC) or NP fertilizer gave the highest count of N-fixing bacteria (Azotobacter, Azospirillum, Azoarcus andZoogloea). Regarding the P-solubilizing bacteria,Bacillus was having maximum population with Biopower+BPF+half NP, andPseudomonas under Biopower+EM+half NP treatment. It was concluded that integration of half dose of NP fertilizer with Biopower+BPF / EM can give similar crop yield as with full rate of NP fertilizer; and through reduced use of fertilizers the production cost is minimized and the net return maximized. However, the integration of half dose of NP fertilizer with biofertilizers and compost did not give maize fodder growth and yield comparable to that from full dose of NPK fertilizers.",
"title": ""
}
] |
scidocsrr
|
c2b7219ee487c08205e9b6424260e0ec
|
T-Linkage: A Continuous Relaxation of J-Linkage for Multi-model Fitting
|
[
{
"docid": "4eaee8e140ccf216eba2eb60eb41d736",
"text": "In this paper, we study the problem of segmenting tracked feature point trajectories of multiple moving objects in an image sequence. Using the affine camera model, this problem can be cast as the problem of segmenting samples drawn from multiple linear subspaces. In practice, due to limitations of the tracker, occlusions, and the presence of nonrigid objects in the scene, the obtained motion trajectories may contain grossly mistracked features, missing entries, or corrupted entries. In this paper, we develop a robust subspace separation scheme that deals with these practical issues in a unified mathematical framework. Our methods draw strong connections between lossy compression, rank minimization, and sparse representation. We test our methods extensively on the Hopkins155 motion segmentation database and other motion sequences with outliers and missing data. We compare the performance of our methods to state-of-the-art motion segmentation methods based on expectation-maximization and spectral clustering. For data without outliers or missing information, the results of our methods are on par with the state-of-the-art results and, in many cases, exceed them. In addition, our methods give surprisingly good performance in the presence of the three types of pathological trajectories mentioned above. All code and results are publicly available at http://perception.csl.uiuc.edu/coding/motion/.",
"title": ""
}
] |
[
{
"docid": "452285eb334f8b4ecc17592e53d7080e",
"text": "Fathers are taking on more childcare and household responsibilities than they used to and many non-profit and government organizations have pushed for changes in policies to support fathers. Despite this effort, little research has explored how fathers go online related to their roles as fathers. Drawing on an interview study with 37 fathers, we find that they use social media to document and archive fatherhood, learn how to be a father, and access social support. They also go online to support diverse family needs, such as single fathers' use of Reddit instead of Facebook, fathers raised by single mothers' search for role models online, and stay-at-home fathers' use of father blogs. However, fathers are constrained by privacy concerns and perceptions of judgment relating to sharing content online about their children. Drawing on theories of fatherhood, we present theoretical and design ideas for designing online spaces to better support fathers and fatherhood. We conclude with a call for a research agenda to support fathers online.",
"title": ""
},
{
"docid": "1e3e52f584863903625a07aabd1517d3",
"text": "Most existing methods of semantic segmentation still suffer from two aspects of challenges: intra-class inconsistency and inter-class indistinction. To tackle these two problems, we propose a Discriminative Feature Network (DFN), which contains two sub-networks: Smooth Network and Border Network. Specifically, to handle the intra-class inconsistency problem, we specially design a Smooth Network with Channel Attention Block and global average pooling to select the more discriminative features. Furthermore, we propose a Border Network to make the bilateral features of boundary distinguishable with deep semantic boundary supervision. Based on our proposed DFN, we achieve state-of-the-art performance 86.2% mean IOU on PASCAL VOC 2012 and 80.3% mean IOU on Cityscapes dataset.",
"title": ""
},
{
"docid": "3fb39e30092858b84291a85a719f97f0",
"text": "A spherical wrist of the serial type is said to be isotropic if it can attain a posture whereby the singular values of its Jacobian matrix are all identical and nonzero. What isotropy brings about is robustness to manufacturing, assembly, and measurement errors, thereby guaranteeing a maximum orientation accuracy. In this paper we investigate the existence of redundant isotropic architectures, which should add to the dexterity of the wrist under design by virtue of its extra degree of freedom. The problem formulation leads to a system of eight quadratic equations with eight unknowns. The Bezout number of this system is thus 2 = 256, its BKK bound being 192. However, the actual number of solutions is shown to be 32. We list all solutions of the foregoing algebraic problem. All these solutions are real, but distinct solutions do not necessarily lead to distinct manipulators. Upon discarding those algebraic solutions that yield no new wrists, we end up with exactly eight distinct architectures, the eight corresponding manipulators being displayed at their isotropic posture.",
"title": ""
},
{
"docid": "4ed1c4f2fb1922acc9ee781eb1f9524e",
"text": "Across HCI and social computing platforms, mobile applications that support citizen science, empowering non-experts to explore, collect, and share data have emerged. While many of these efforts have been successful, it remains difficult to create citizen science applications without extensive programming expertise. To address this concern, we present Sensr, an authoring environment that enables people without programming skills to build mobile data collection and management tools for citizen science. We demonstrate how Sensr allows people without technical skills to create mobile applications. Findings from our case study demonstrate that our system successfully overcomes technical constraints and provides a simple way to create mobile data collection tools.",
"title": ""
},
{
"docid": "c91ce9eb908d5a0fccc980f306ec0931",
"text": "Text Mining has become an important research area. Text Mining is the discovery by computer of new, previously unknown information, by automatically extracting information from different written resources. In this paper, a Survey of Text Mining techniques and applications have been s presented.",
"title": ""
},
{
"docid": "7de050ef4260ad858a620f9aa773b5a7",
"text": "We present DBToaster, a novel query compilation framework for producing high performance compiled query executors that incrementally and continuously answer standing aggregate queries using in-memory views. DBToaster targets applications that require efficient main-memory processing of standing queries (views) fed by high-volume data streams, recursively compiling view maintenance (VM) queries into simple C++ functions for evaluating database updates (deltas). While today’s VM algorithms consider the impact of single deltas on view queries to produce maintenance queries, we recursively consider deltas of maintenance queries and compile to thoroughly transform queries into code. Recursive compilation successively elides certain scans and joins, and eliminates significant query plan interpreter overheads. In this demonstration, we walk through our compilation algorithm, and show the significant performance advantages of our compiled executors over other query processors. We are able to demonstrate 1-3 orders of magnitude improvements in processing times for a financial application and a data warehouse loading application, both implemented across a wide range of database systems, including PostgreSQL, HSQLDB, a commercial DBMS ’A’, the Stanford STREAM engine, and a commercial stream processor ’B’.",
"title": ""
},
{
"docid": "e1efeca0d73be6b09f5cf80437809bdb",
"text": "Deep convolutional neural networks have been shown to be vulnerable to arbitrary geometric transformations. However, there is no systematic method to measure the invariance properties of deep networks to such transformations. We propose ManiFool as a simple yet scalable algorithm to measure the invariance of deep networks. In particular, our algorithm measures the robustness of deep networks to geometric transformations in a worst-case regime as they can be problematic for sensitive applications. Our extensive experimental results show that ManiFool can be used to measure the invariance of fairly complex networks on high dimensional datasets and these values can be used for analyzing the reasons for it. Furthermore, we build on ManiFool to propose a new adversarial training scheme and we show its effectiveness on improving the invariance properties of deep neural networks.1",
"title": ""
},
{
"docid": "bdbd3d65c79e4f22d2e85ac4137ee67a",
"text": "With the advances in new-generation information technologies, especially big data and digital twin, smart manufacturing is becoming the focus of global manufacturing transformation and upgrading. Intelligence comes from data. Integrated analysis for the manufacturing big data is beneficial to all aspects of manufacturing. Besides, the digital twin paves a way for the cyber-physical integration of manufacturing, which is an important bottleneck to achieve smart manufacturing. In this paper, the big data and digital twin in manufacturing are reviewed, including their concept as well as their applications in product design, production planning, manufacturing, and predictive maintenance. On this basis, the similarities and differences between big data and digital twin are compared from the general and data perspectives. Since the big data and digital twin can be complementary, how they can be integrated to promote smart manufacturing are discussed.",
"title": ""
},
{
"docid": "1eab5897252dae2313210c666c3dce8c",
"text": "Bone marrow angiogenesis plays an important role in the pathogenesis and progression in multiple myeloma. Recent studies have shown that proteasome inhibitor bortezomib (Velcade, formerly PS-341) can overcome conventional drug resistance in vitro and in vivo; however, its antiangiogenic activity in the bone marrow milieu has not yet been defined. In the present study, we examined the effects of bortezomib on the angiogenic phenotype of multiple myeloma patient-derived endothelial cells (MMEC). At clinically achievable concentrations, bortezomib inhibited the proliferation of MMECs and human umbilical vein endothelial cells in a dose-dependent and time-dependent manner. In functional assays of angiogenesis, including chemotaxis, adhesion to fibronectin, capillary formation on Matrigel, and chick embryo chorioallantoic membrane assay, bortezomib induced a dose-dependent inhibition of angiogenesis. Importantly, binding of MM.1S cells to MMECs triggered multiple myeloma cell proliferation, which was also abrogated by bortezomib in a dose-dependent fashion. Bortezomib triggered a dose-dependent inhibition of vascular endothelial growth factor (VEGF) and interleukin-6 (IL-6) secretion by the MMECs, and reverse transcriptase-PCR confirmed drug-related down-regulation of VEGF, IL-6, insulin-like growth factor-I, Angiopoietin 1 (Ang1), and Ang2 transcription. These data, therefore, delineate the mechanisms of the antiangiogenic effects of bortezomib on multiple myeloma cells in the bone marrow milieu.",
"title": ""
},
{
"docid": "374d058c8986dd2ace4d99ecc60cbcc6",
"text": "Subscapularis (SSC) lesions are often underdiagnosed in the clinical routine. This study establishes and compares the diagnostic values of various clinical signs and diagnostic tests for lesions of the SSC tendon. Fifty consecutive patients who were scheduled for an arthroscopic subacromial or rotator cuff procedure were clinically evaluated using the lift-off test (LOT), the internal rotation lag sign (IRLS), the modified belly-press test (BPT) and the belly-off sign (BOS) preoperatively. A modified classification system according to Fox et al. (Type I–IV) was used to classify the SSC lesion during diagnostic arthroscopy. SSC tendon tears occured with a prevalence of 30% (15 of 50). Five type I, six type II, three type IIIa and one type IIIb tears according to the modified classification system were found. Fifteen percent of the SSC tears were not predicted preoperatively by using all of the tests. In six cases (12%), the LOT and the IRLS could not be performed due to a painful restricted range of motion. The modified BPT and the BOS showed the greatest sensitivity (88 and 87%) followed by the IRLS (71%) and the LOT (40%). The BOS had the greatest specificity (91%) followed by the LOT (79%), mod. BPT (68%) and IRLS (45%). The BOS had the highest overall accuracy (90%). With the BOS and the modified BPT in particular, upper SSC lesions (type I and II) could be diagnosed preoperatively. A detailed physical exam using the currently available SSC tests allows diagnosing SSC lesions in the majority of cases preoperatively. However, some tears could not be predicted by preoperative assessment using all the tests.",
"title": ""
},
{
"docid": "1223a45c3a2cebe4ce2e94d4468be946",
"text": "In this paper, we present an overview of energy storage in renewable energy systems. In fact, energy storage is a dominant factor. It can reduce power fluctuations, enhances the system flexibility, and enables the storage and dispatching of the electricity generated by variable renewable energy sources such as wind and solar. Different storage technologies are used in electric power systems. They can be chemical, electrochemical, mechanical, electromagnetic or thermal. Energy storage facility is comprised of a storage medium, a power conversion system and a balance of plant. In this work, an application to photovoltaic and wind electric power systems is made. The results obtained under Matlab/Simulink are presented.",
"title": ""
},
{
"docid": "122ed18a623510052664996c7ef4b4bb",
"text": "A number of sensor applications in recent years collect data which can be directly associated with human interactions. Some examples of such applications include GPS applications on mobile devices, accelerometers, or location sensors designed to track human and vehicular traffic. Such data lends itself to a variety of rich applications in which one can use the sensor data in order to model the underlying relationships and interactions. This requires the development of trajectory mining techniques, which can mine the GPS data for interesting social patterns. It also leads to a number of challenges, since such data may often be private, and it is important to be able to perform the mining process without violating the privacy of the users. Given the open nature of the information contributed by users in social sensing applications, this also leads to issues of trust in making inferences from the underlying data. In this chapter, we provide a broad survey of the work in this important and rapidly emerging field. We also discuss the key problems which arise in the context of this important field and the corresponding",
"title": ""
},
{
"docid": "47a484d75b1635139f899d2e1875d8f4",
"text": "This work presents the concept and methodology as well as the architecture and physical implementation of an integrated node for smart-city applications. The presented integrated node lies on active RFID technology whereas the use case illustrated, with results from a small-scale verification of the presented node, refers to common-type waste-bins. The sensing units deployed for the use case are ultrasonic sensors that provide ranging information which is translated to fill-level estimations; however the use of a versatile active RFID tag within the node is able to afford multiple sensors for a variety of smart-city applications. The most important benefits of the presented node are power minimization, utilization of low-cost components and accurate fill-level estimation with a tiny data-load fingerprint, regarding the specific use case on waste-bins, whereas the node has to be deployed on public means of transportation or similar standard route vehicles within an urban or suburban context.",
"title": ""
},
{
"docid": "81a1504505fa4630af771ccf6ed8404d",
"text": "A method for the simultaneous co-registration and georeferencing of multiple 3D pointclouds and associated intensity information is proposed. It is a generalization of the 3D surface matching problem. The simultaneous co-registration provides for a strict solution to the problem, as opposed to sequential pairwise registration. The problem is formulated as the Least Squares matching of overlapping 3D surfaces. The parameters of 3D transformations of multiple surfaces are simultaneously estimated, using the Generalized GaussMarkoff model, minimizing the sum of squares of the Euclidean distances among the surfaces. An observation equation is written for each surface-to-surface correspondence. Each overlapping surface pair contributes a group of observation equations to the design matrix. The parameters are introduced into the system as stochastic variables, as a second type of (fictitious) observations. This extension allows to control the estimated parameters. Intensity information is introduced into the system in the form of quasisurfaces as the third type of observations. Reference points, defining an external (object) coordinate system, which are imaged in additional intensity images, or can be located in the pointcloud, serve as the fourth type of observations. They transform the whole block of “models” to a unique reference system. Furthermore, the given coordinate values of the control points are treated as observations. This gives the fifth type of observations. The total system is solved by applying the Least Squares technique, provided that sufficiently good initial values for the transformation parameters are given. This method can be applied to data sets generated from aerial as well as terrestrial laser scanning or other pointcloud generating methods. * Corresponding author. www.photogrammetry.ethz.ch",
"title": ""
},
{
"docid": "12818095167dbf85d5d717121f00f533",
"text": "Sarmento, H, Figueiredo, A, Lago-Peñas, C, Milanovic, Z, Barbosa, A, Tadeu, P, and Bradley, PS. Influence of tactical and situational variables on offensive sequences during elite football matches. J Strength Cond Res 32(8): 2331-2339, 2018-This study examined the influence of tactical and situational variables on offensive sequences during elite football matches. A sample of 68 games and 1,694 offensive sequences from the Spanish La Liga, Italian Serie A, German Bundesliga, English Premier League, and Champions League were analyzed using χ and logistic regression analyses. Results revealed that counterattacks (odds ratio [OR] = 1.44; 95% confidence interval [CI]: 1.13-1.83; p < 0.01) and fast attacks (OR = 1.43; 95% CI: 1.11-1.85; p < 0.01) increased the success of an offensive sequence by 40% compared with positional attacks. The chance of an offensive sequence ending effectively in games from the Spanish, Italian, and English Leagues were higher than that in the Champions League. Offensive sequences that started in the preoffensive or offensive zones were more successful than those started in the defensive zones. An increase of 1 second in the offensive sequence duration and an extra pass resulted in a decrease of 2% (OR = 0.98; 95% CI: 0.98-0.99; p < 0.001) and 7% (OR = 0.93; 95% CI: 0.91-0.96; p < 0.001), respectively, in the probability of its success. These findings could assist coaches in designing specific training situations that improve the effectiveness of the offensive process.",
"title": ""
},
{
"docid": "70ef6e69e811e3c66f1e73b3ad8c97b3",
"text": "The turnstile junction exhibits very low cross-polarization leakage and is suitable for low-noise millimeter-wave receivers. For use in a cryogenic receiver, it is best if the orthomode transducer (OMT) is implemented in waveguide, contains no additional assembly features, and may be directly machined. However, machined OMTs are prone to sharp signal drop-outs that are costly to overall performance since they show up directly as spikes in receiver noise. We explore the various factors contributing to this degradation and discuss how the current design mitigates each cause. Final performance is demonstrated at cryogenic temperatures.",
"title": ""
},
{
"docid": "3b6e3884a9d3b09d221d06f3dea20683",
"text": "Convolutional neural networks (CNNs) work well on large datasets. But labelled data is hard to collect, and in some applications larger amounts of data are not available. The problem then is how to use CNNs with small data – as CNNs overfit quickly. We present an efficient Bayesian CNN, offering better robustness to over-fitting on small data than traditional approaches. This is by placing a probability distribution over the CNN’s kernels. We approximate our model’s intractable posterior with Bernoulli variational distributions, requiring no additional model parameters. On the theoretical side, we cast dropout network training as approximate inference in Bayesian neural networks. This allows us to implement our model using existing tools in deep learning with no increase in time complexity, while highlighting a negative result in the field. We show a considerable improvement in classification accuracy compared to standard techniques and improve on published state-of-theart results for CIFAR-10.",
"title": ""
},
{
"docid": "6d3e19c44f7af5023ef991b722b078c5",
"text": "Volatile substances are commonly misused with easy-to-obtain commercial products, such as glue, shoe polish, nail polish remover, butane lighter fluid, gasoline and computer duster spray. This report describes a case of sudden death of a 29-year-old woman after presumably inhaling gas cartridge butane from a plastic bag. Autopsy, pathological and toxicological analyses were performed in order to determine the cause of death. Pulmonary edema was observed pathologically, and the toxicological study revealed 2.1μL/mL of butane from the blood. The causes of death from inhalation of volatile substances have been explained by four mechanisms; cardiac arrhythmia, anoxia, respiratory depression, and vagal inhibition. In this case, the cause of death was determined to be asphyxia from anoxia. Additionally, we have gathered fatal butane inhalation cases with quantitative analyses of butane concentrations, and reviewed other reports describing volatile substance abuse worldwide.",
"title": ""
},
{
"docid": "c02d98d1cbda4447498c7d3e1993bae2",
"text": "We present MILABOT: a deep reinforcement learning chatbot developed by the Montreal Institute for Learning Algorithms (MILA) for the Amazon Alexa Prize competition. MILABOT is capable of conversing with humans on popular small talk topics through both speech and text. The system consists of an ensemble of natural language generation and retrieval models, including neural network and template-based models. By applying reinforcement learning to crowdsourced data and real-world user interactions, the system has been trained to select an appropriate response from the models in its ensemble. The system has been evaluated through A/B testing with realworld users, where it performed significantly better than other systems. The results highlight the potential of coupling ensemble systems with deep reinforcement learning as a fruitful path for developing real-world, open-domain conversational agents.",
"title": ""
},
{
"docid": "46ac5e994ca0bf0c3ea5dd110810b682",
"text": "The Geosciences and Geography are not just yet another application area for semantic technologies. The vast heterogeneity of the involved disciplines ranging from the natural sciences to the social sciences introduces new challenges in terms of interoperability. Moreover, the inherent spatial and temporal information components also require distinct semantic approaches. For these reasons, geospatial semantics, geo-ontologies, and semantic interoperability have been active research areas over the last 20 years. The geospatial semantics community has been among the early adopters of the Semantic Web, contributing methods, ontologies, use cases, and datasets. Today, geographic information is a crucial part of many central hubs on the Linked Data Web. In this editorial, we outline the research field of geospatial semantics, highlight major research directions and trends, and glance at future challenges. We hope that this text will be valuable for geoscientists interested in semantics research as well as knowledge engineers interested in spatiotemporal data. Introduction and Motivation While the Web has changed with the advent of the Social Web from mostly authoritative content towards increasing amounts of user generated information, it is essentially still about linked documents. These documents provide structure and context for the described data and easy their interpretation. In contrast, the evolving Data Web is about linking data, not documents. Such datasets are not bound to a specific document but can be easily combined and used outside of their original creation context. With a growth rate of millions of new facts encoded as RDF-triples per month, the Linked Data cloud allows users to answer complex queries spanning multiple, heterogeneous data sources from different scientific domains. However, this uncoupling of data from its creation context makes the interpretation of data challenging. Thus, research on semantic interoperability and ontologies is crucial to ensure consistency and meaningful results. Space and time are fundamental ordering principles to structure such data and provide an implicit context for their interpretation. Hence, it is not surprising that many linked datasets either contain spatiotemporal identifiers themselves or link out to such datasets, making them central hubs of the Linked Data cloud. Prominent examples include Geonames.org as well as the Linked Geo Data project, which provides a RDF serialization of Points Of Interest from Open Street Map [103]. Besides such Voluntary Geographic Information (VGI), governments 1570-0844/12/$27.50 c © 2012 – IOS Press and the authors. All rights reserved",
"title": ""
}
] |
scidocsrr
|
914d66c51630092e0ec3babd3a9a99d2
|
Improving network security monitoring for industrial control systems
|
[
{
"docid": "77302cf6a07ee1b6ffa27f8c12ab6ecf",
"text": "The increasing interconnectivity of SCADA (Supervisory Control and Data Acquisition) networks has exposed them to a wide range of network security problems. This paper provides an overview of all the crucial research issues that are involved in strengthening the cyber security of SCADA networks. The paper describes the general architecture of SCADA networks and the properties of some of the commonly used SCADA communication protocols. The general security threats and vulnerabilities in these networks are discussed followed by a survey of the research challenges facing SCADA networks. The paper discusses the ongoing work in several SCADA security areas such as improving access control, firewalls and intrusion detection systems, SCADA protocol analyses, cryptography and key management, device and operating system security. Many trade and research organizations are involved in trying to standardize SCADA security technologies. The paper concludes with an overview of these standardization efforts. a 2006 Elsevier Ltd. All rights reserved. Modern industrial facilities, such as oil refineries, chemical factories, electric power generation plants, and manufacturing facilities are large, distributed complexes. Plant operators must continuously monitor and control many different sections of the plant to ensure its proper operation. The development of networking technology has made this remote command and control feasible. The earliest control networks were simple point-to-point networks connecting a monitoring or command device to a remote sensor or actuator. These have since evolved into complex networks that support communication between a central control unit and multiple remote units on a common communication bus. The nodes on these networks are usually special purpose embedded computing devices such as sensors, actuators, and PLCs. These industrial command and control networks are commonly called SCADA (Supervisory Control and Data Acquisition) networks. In today’s competitive markets, it is essential for industries to modernize their digital SCADA networks to reduce costs and increase efficiency. Many of the current SCADA networks * Corresponding author. E-mail addresses: vmi5e@virginia.edu (V.M. Igure), sal4t@virginia 0167-4048/$ – see front matter a 2006 Elsevier Ltd. All rights reserve doi:10.1016/j.cose.2006.03.001 are also connected to the company’s corporate network and to the Internet. This improved connectivity can help to optimize manufacturing and distribution processes, but it also exposes the safety-critical industrial network to the myriad security problems of the Internet. If processes are monitored and controlled by devices connected over the SCADA network then a malicious attack over the SCADA network has the potential to cause significant damage to the plant. Apart from causing physical and economic loss to the company, an attack against a SCADA network might also adversely affect the environment and endanger public safety. Therefore, security of SCADA networks has become a prime concern. 1. SCADA network architecture A SCADA network provides an interconnection for field devices on the plant floor. These field devices, such as sensors and actuators, are monitored and controlled over the SCADA network by either a PC or a Programmable Logic Controller .edu (S.A. Laughter), rdw@virginia.edu (R.D. Williams). d. c o m p u t e r s & s e c u r i t y 2 5 ( 2 0 0 6 ) 4 9 8 – 5 0 6 499 (PLC). 
In many cases, the plants also have a dedicated control center to screen the entire plant. The control center is usually located in a separate physical part of the factory and typically has advanced computation and communication facilities. Modern control centers have data servers, Human–Machine Interface (HMI) stations and other servers to aid the operators in the overall management of the factory network. This SCADA network is usually connected to the outside corporate network and/or the Internet through specialized gateways (Sauter and Schwaiger, 2002; Schwaiger and Treytl, 2003). The gateways provide the interface between IP-based networks on the outside and the fieldbus protocol-based SCADA networks on the factory floor. The gateway provides the protocol conversion mechanisms to enable communication between the two different networks. It also provides cache mechanisms for data objects that are exchanged between the networks in order to improve the gateway performance (Sauter and Schwaiger, 2002). A typical example of SCADA network is shown in Fig. 1. Apart from performance considerations, the design requirements for a SCADA network are also shaped by the operating conditions of the network (Decotignie, 1996). These conditions influence the topology of the network and the network protocol. The resulting SCADA networks have certain unique characteristics. For example, most of the terminal devices in fieldbus networks are special purpose embedded computing systems with limited computing capability and functionality. Unlike highly populated corporate office networks, many utility industry applications of SCADA networks, such as electric power distribution, are usually sparse, yet geographically extensive. Similarly, the physical conditions of a factory floor are vastly different from that of a corporate office environment. Both the large utility and factory floor networks are often subjected to wide variations in temperature, electro-magnetic radiation, and even simple accumulation of large quantities of dust. All of these conditions increase the noise on the network and also reduce the lifetime of the wires. The specifications for the physical layer of the network must be able to withstand such harsh conditions and manage the noise on the network. Typical communications on a SCADA network include control messages exchanged between master and slave devices. A master device is one which can control the operation of another device. A PC or a PLC is an example of a master device. A slave device is usually a simple sensor or actuator which can send messages to the command device and carry out actions at the command of a master device. However, the network protocol should also provide features for communication between fieldbus devices that want to communicate as peers. To accommodate these requirements, protocols such as PROFIBUS have a hybrid communication model, which includes a peer-to-peer communication model between master devices and a client–server communication model between masters and slaves. The communication between devices can also be asymmetric (Carlson, 2002; Risley et al., 2003). For example, messages sent from the slave to the master are typically much larger than the messages sent from the master to the slave. Some devices may also communicate only through alarms and status messages. Since many devices share a common bus, the protocol must have features for assigning priorities to messages. This helps distinguish between critical and non-critical messages. 
For example, an alarm message about a possible safety violation should take precedence over a regular data update message. SCADA network protocols must also provide some degree of delivery assurance and stability. Many factory processes require realtime communication between field devices. The network protocol should have features that not only ensure that the critical messages are delivered but that they are delivered within the time constraints.",
"title": ""
},
{
"docid": "11ed7e0742ddb579efe6e1da258b0d3c",
"text": "Supervisory Control and Data Acquisition(SCADA) systems are deeply ingrained in the fabric of critical infrastructure sectors. These computerized real-time process control systems, over geographically dispersed continuous distribution operations, are increasingly subject to serious damage and disruption by cyber means due to their standardization and connectivity to other networks. However, SCADA systems generally have little protection from the escalating cyber threats. In order to understand the potential danger and to protect SCADA systems, in this paper, we highlight their difference from standard IT systems and present a set of security property goals. Furthermore, we focus on systematically identifying and classifying likely cyber attacks including cyber-induced cyber-physical attack son SCADA systems. Determined by the impact on control performance of SCADA systems, the attack categorization criteria highlights commonalities and important features of such attacks that define unique challenges posed to securing SCADA systems versus traditional Information Technology(IT) systems.",
"title": ""
},
{
"docid": "9f5024623c1366b4e3c997bcfb909707",
"text": "We needed data to help ourselves and our clients to decide when to expend the extra effort to use a real-time extension such as Xenomai; when it is sufficient to use mainline Linux with the PREEMPT RT patches applied; and when unpatched mainline Linux is sufficient. To gather this data, we set out to compare the performance of three kernels: a baseline Linux kernel; the same kernel with the PREEMPT RT patches; and the same kernel with the Xenomai patches. Xenomai is a set of patches to Linux that integrates real-time capabilities from the hardware interrupt level on up. The PREEMPT RT patches make sections of the Linux kernel preemptible that are ordinarily blocking. We measure the timing for performing two tasks. The first task is to toggle a General Purpose IO (GPIO) output at a fixed period. The second task is to respond to a changing input GPIO pin by causing an output GPIO pin’s value to follow it. For this task, rather than polling, we rely on an interrupt to notify us when the GPIO input changes. For each task, we have four distinct experiments: a Linux user-space process with real-time priority; a Linux kernel module; a Xenomai user-space process; and a Xenomai kernel module. The Linux experiments are run on both a stock Linux kernel and a PREEMPT RT-patched Linux kernel. The Xenomai experiments are run on a Xenomai-patched Linux kernel. To provide an objective metric, all timing measurements are taken with an external piece of hardware, running a small C program on bare metal. This paper documents our results. In particular, we begin with a detailed description of the set of tools we developed to test the kernel configurations. We then present details of a a specific hardware test platform, the BeagleBoard C4, an OMAP3 (Arm architecture) system, and the specific kernel configurations we built to test on that platform. We provide extensive numerical results from testing the BeagleBoard. For instance, the approximate highest external-stimulus frequency for which at least 95% of the time the latency does not exceed 1/2 the period is 31kHz. This frequency is achieved with a kernel module on stock Linux; the best that can be achieved with a userspace module is 8.4kHz, using a Xenomai userspace process. If the latency must not exceed 1/2 the frequency 100% of the time, then Xenomai is the best option for both kernelspace and userspace; a Xenomai kernel module can run at 13.5kHz, while a userspace process can hit 5.9kHz. In addition to the numerical results, we discuss the qualitative difficulties we experienced in trying to test these configurations on the BeagleBoard. Finally, we offer our recommendations for deciding when to use stock Linux vs. PREEMPT RTpatched Linux vs. Xenomai for real-time applications.",
"title": ""
}
] |
[
{
"docid": "e603b32746560887bdd6dbcfdc2e1c28",
"text": "A systematic review of self-report family assessment measures was conducted with reference to their psychometric properties, clinical utility and theoretical underpinnings. Eight instruments were reviewed: The McMaster Family Assessment Device (FAD); Circumplex Model Family Adaptability and Cohesion Evaluation Scales (FACES); Beavers Systems Model Self-Report Family Inventory (SFI); Family Assessment Measure III (FAM III); Family Environment Scale (FES); Family Relations Scale (FRS); and Systemic Therapy Inventory of Change (STIC); and the Systemic Clinical Outcome Routine Evaluation (SCORE). Results indicated that five family assessment measures are suitable for clinical use (FAD, FACES-IV, SFI, FAM III, SCORE), two are not (FES, FRS), and one is a new system currently under-going validation (STIC).",
"title": ""
},
{
"docid": "7f8ca7d8d2978bfc08ab259fba60148e",
"text": "Over the last few years, much online volunteered geographic information (VGI) has emerged and has been increasingly analyzed to understand places and cities, as well as human mobility and activity. However, there are concerns about the quality and usability of such VGI. In this study, we demonstrate a complete process that comprises the collection, unification, classification and validation of a type of VGI—online point-of-interest (POI) data—and develop methods to utilize such POI data to estimate disaggregated land use (i.e., employment size by category) at a very high spatial resolution (census block level) using part of the Boston metropolitan area as an example. With recent advances in activity-based land use, transportation, and environment (LUTE) models, such disaggregated land use data become important to allow LUTE models to analyze and simulate a person’s choices of work location and activity destinations and to understand policy impacts on future cities. These data can also be used as alternatives to explore economic activities at the local level, especially as government-published census-based disaggregated employment data have become less available in the recent decade. Our new approach provides opportunities for cities to estimate land use at high resolution with low cost by utilizing VGI while ensuring its quality with a certain accuracy threshold. The automatic classification of POI can also be utilized for other types of analyses on cities. 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "9377e5de9d7a440aa5e73db10aa630f4",
"text": ". Micro-finance programmes targeting women became a major plank of donor poverty alleviation and gender strategies in the 1990s. Increasing evidence of the centrality of gender equality to poverty reduction and women’s higher credit repayment rates led to a general consensus on the desirability of targeting women. Not only ‘reaching’ but also ‘empowering’ women became the second official goal of the Micro-credit Summit Campaign.",
"title": ""
},
{
"docid": "3ba9e91a4d2ff8cb1fe479f5dddc86c1",
"text": "Researchers have shown that program analyses that drive software development and maintenance tools supporting search, traceability and other tasks can benefit from leveraging the natural language information found in identifiers and comments. Accurate natural language information depends on correctly splitting the identifiers into their component words and abbreviations. While conventions such as camel-casing can ease this task, conventions are not well-defined in certain situations and may be modified to improve readability, thus making automatic splitting more challenging. This paper describes an empirical study of state-of-the-art identifier splitting techniques and the construction of a publicly available oracle to evaluate identifier splitting algorithms. In addition to comparing current approaches, the results help to guide future development and evaluation of improved identifier splitting approaches.",
"title": ""
},
{
"docid": "6247c827c6fdbc976b900e69a9eb275c",
"text": "Despite the fact that commercial computer systems have been in existence for almost three decades, many systems in the process of being implemented may be classed as failures. One of the factors frequently cited as important to successful system development is involving users in the design and implementation process. This paper reports the results of a field study, conducted on data from forty-two systems, that investigates the role of user involvement and factors affecting the employment of user involvement on the success of system development. Path analysis was used to investigate both the direct effects of the contingent variables on system success and the effect of user involvement as a mediating variable between the contingent variables and system success. The results show that high system complexity and constraints on the resources available for system development are associated with less successful systems.",
"title": ""
},
{
"docid": "93fcbdfe59015b67955246927d67a620",
"text": "The Emotion Recognition in the Wild (EmotiW) Challenge has been held for three years. Previous winner teams primarily focus on designing specific deep neural networks or fusing diverse hand-crafted and deep convolutional features. They all neglect to explore the significance of the latent relations among changing features resulted from facial muscle motions. In this paper, we study this recognition challenge from the perspective of analyzing the relations among expression-specific facial features in an explicit manner. Our method has three key components. First, we propose a pair-wise learning strategy to automatically seek a set of facial image patches which are important for discriminating two particular emotion categories. We found these learnt local patches are in part consistent with the locations of expression-specific Action Units (AUs), thus the features extracted from such kind of facial patches are named AU-aware facial features. Second, in each pair-wise task, we use an undirected graph structure, which takes learnt facial patches as individual vertices, to encode feature relations between any two learnt facial patches. Finally, a robust emotion representation is constructed by concatenating all task-specific graph-structured facial feature relations sequentially. Extensive experiments on the EmotiW 2015 Challenge testify the efficacy of the proposed approach. Without using additional data, our final submissions achieved competitive results on both sub-challenges including the image based static facial expression recognition (we got 55.38% recognition accuracy outperforming the baseline 39.13% with a margin of 16.25%) and the audio-video based emotion recognition (we got 53.80% recognition accuracy outperforming the baseline 39.33% and the 2014 winner team's final result 50.37% with the margins of 14.47% and 3.43%, respectively).",
"title": ""
},
{
"docid": "e5ec3cf10b6664642db6a27d7c76987c",
"text": "We present a protocol for payments across payment systems. It enables secure transfers between ledgers and allows anyone with accounts on two ledgers to create a connection between them. Ledger-provided escrow removes the need to trust these connectors. Connections can be composed to enable payments between any ledgers, creating a global graph of liquidity or Interledger. Unlike previous approaches, this protocol requires no global coordinating system or blockchain. Transfers are escrowed in series from the sender to the recipient and executed using one of two modes. In the Atomic mode, transfers are coordinated using an ad-hoc group of notaries selected by the participants. In the Universal mode, there is no external coordination. Instead, bounded execution windows, participant incentives and a “reverse” execution order enable secure payments between parties without shared trust in any system or institution.",
"title": ""
},
{
"docid": "da287113f7cdcb8abb709f1611c8d457",
"text": "The paper describes a completely new topology for a low-speed, high-torque permanent brushless magnet machine. Despite being naturally air-cooled, it has a significantly higher torque density than a liquid-cooled transverse-flux machine, whilst its power factor is similar to that of a conventional permanent magnet brushless machine. The high torque capability and low loss density are achieved by combining the actions of a speed reducing magnetic gear and a high speed PM brushless machine within a highly integrated magnetic circuit. In this way, the magnetic limit of the machine is reached before its thermal limit. The principle of operation of such a dasiapseudopsila direct-drive machine is described, and measured results from a prototype machine are presented.",
"title": ""
},
{
"docid": "f114e788557e8d734bd2a04a5b789208",
"text": "Adaptive content delivery is the state of the art in real-time multimedia streaming. Leading streaming approaches, e.g., MPEG-DASH and Apple HTTP Live Streaming (HLS), have been developed for classical IP-based networks, providing effective streaming by means of pure client-based control and adaptation. However, the research activities of the Future Internet community adopt a new course that is different from today's host-based communication model. So-called information-centric networks are of considerable interest and are advertised as enablers for intelligent networks, where effective content delivery is to be provided as an inherent network feature. This paper investigates the performance gap between pure client-driven adaptation and the theoretical optimum in the promising Future Internet architecture named data networking (NDN). The theoretical optimum is derived by modeling multimedia streaming in NDN as a fractional multi-commodity flow problem and by extending it taking caching into account. We investigate the multimedia streaming performance under different forwarding strategies, exposing the interplay of forwarding strategies and adaptation mechanisms. Furthermore, we examine the influence of network inherent caching on the streaming performance by varying the caching polices and the cache sizes.",
"title": ""
},
{
"docid": "48518bad41b1b422f698a1f09997960f",
"text": "Knowledge graph is powerful tool for knowledge based engineering. In this paper, a vertical knowledge graph is proposed for the non-traditional machining. Firstly, the definition and classification of the knowledge graph are proposed. Then, the construct flow and key techniques are discussed in details for the construction of vertical knowledge graph. Finally, a vertical knowledge graph of EDM (electrical discharge matching) is proposed as a case study to illustrate the feasibility of this method.",
"title": ""
},
{
"docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
},
{
"docid": "defc7f4420ad99d410fa18c24b46ab24",
"text": "To determine a reference range of fetal transverse cerebellar diameter in Brazilian population. This was a retrospective cross-sectional study with 3772 normal singleton pregnancies between 18 and 24 weeks of pregnancy. The transverse cerebellar diameter was measured on the axial plane of the fetal head at the level of the lateral ventricles, including the thalamus, cavum septum pellucidum, and third ventricle. To assess the correlation between transverse cerebellar diameter and gestational age, polynomial equations were calculated, with adjustments by the determination coefficient (R2). The mean of fetal transverse cerebellar diameter ranged from 18.49 ± 1.24 mm at 18 weeks to 25.86 ± 1.66 mm at 24 weeks of pregnancy. We observed a good correlation between transverse cerebellar diameter and gestational age, which was best represented by a linear equation: transverse cerebellar diameter: -6.21 + 1.307*gestational age (R2 = 0.707). We determined a reference range of fetal transverse cerebellar diameter for the second trimester of pregnancy in Brazilian population.",
"title": ""
},
{
"docid": "0b41e6fde6fb9a1f685ceec59fc5abc9",
"text": "Reflector antennas are widely used on satellites to communicate with ground stations. They simultaneously transmit and receive RF signals using separate downlink and uplink frequency bands. These antennas require compact and high-performance feed assemblies with small size, low mass, low passive intermodulation (PIM) products [1], low insertion loss, high power handling, and low cross-polar levels. The feeds must also be insensitive to large thermal variations, and must survive the launch environment. In order to achieve these desirable features without prototyping and/or bench tuning, Custom Microwave Inc. (CMI) has combined integrated RF design, precision CAD, and a precision manufacturing technique known as electroforming to closely integrate the various components of a feed or feed network, thereby achieving small size while maintaining high RF performance [2]. In addition to close integration, electroforming eliminates split joints and minimizes flanges by allowing several components to be realized in a single piece, making it the ideal manufacturing technique for ultra-low passive-intermodulation applications. This paper describes the use of precision design CAD tools along with electroforming to realize high-performance feed assemblies for various communication frequency bands for fixed satellite, broadcast satellite, and broadband satellite services.",
"title": ""
},
{
"docid": "878bdefc419be3da8d9e18111d26a74f",
"text": "PURPOSE\nTo estimate prevalence and chronicity of insomnia and the impact of chronic insomnia on health and functioning of adolescents.\n\n\nMETHODS\nData were collected from 4175 youths 11-17 at baseline and 3134 a year later sampled from managed care groups in a large metropolitan area. Insomnia was assessed by youth-reported DSM-IV symptom criteria. Outcomes are three measures of somatic health, three measures of mental health, two measures of substance use, three measures of interpersonal problems, and three of daily activities.\n\n\nRESULTS\nOver one-fourth reported one or more symptoms of insomnia at baseline and about 5% met diagnostic criteria for insomnia. Almost 46% of those who reported one or more symptoms of insomnia in Wave 1 continued to be cases at Wave 2 and 24% met DSM-IV symptom criteria for chronic insomnia (cases in Wave 1 were also cases in Wave 2). Multivariate analyses found chronic insomnia increased subsequent risk for somatic health problems, interpersonal problems, psychological problems, and daily activities. Significant odds (p < .05) ranged from 1.6 to 5.6 for poor outcomes. These results are the first reported on chronic insomnia among youths, and corroborate, using prospective data, previous findings on correlates of disturbed sleep based on cross-sectional studies.\n\n\nCONCLUSIONS\nInsomnia is both common and chronic among adolescents. The data indicate that the burden of insomnia is comparable to that of other psychiatric disorders such as mood, anxiety, disruptive, and substance use disorders. Chronic insomnia severely impacts future health and functioning of youths. Those with chronic insomnia are more likely to seek medical care. These data suggest primary care settings might provide a venue for screening and early intervention for adolescent insomnia.",
"title": ""
},
{
"docid": "df3c5a848c66dbd5e804242a93cdb998",
"text": "Handwritten character recognition has been one of the most fascinating research among the various researches in field of image processing. In Handwritten character recognition method the input is scanned from images, documents and real time devices like tablets, tabloids, digitizers etc. which are then interpreted into digital text. There are basically two approaches - Online Handwritten recognition which takes the input at run time and Offline Handwritten Recognition which works on scanned images. In this paper we have discussed the architecture, the steps involved, and the various proposed methodologies of offline and online character recognition along with their comparison and few applications.",
"title": ""
},
{
"docid": "a87ed525f9732e66e6c172867ef8b189",
"text": "We examine corporate financial and investment decisions made by female executives compared with male executives. Male executives undertake more acquisitions and issue debt more often than female executives. Further, acquisitions made by firms with male executives have announcement returns approximately 2% lower than those made by female executive firms, and debt issues also have lower announcement returns for firms with male executives. Female executives place wider bounds on earnings estimates and are more likely to exercise stock options early. This evidence suggests men exhibit relative overconfidence in significant corporate decision making compared with women. & 2013 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "87447383afe36c38a5f0a7066614336e",
"text": "The current study examined whether self-compassion, the tendency to treat oneself kindly during distress and disappointments, would attenuate the positive relationship between body mass index (BMI) and eating disorder pathology, and the negative relationship between BMI and body image flexibility. One-hundred and fifty-three female undergraduate students completed measures of self-compassion, self-esteem, eating disorder pathology, and body image flexibility, which refers to one's acceptance of negative body image experiences. Controlling for self-esteem, hierarchical regressions revealed that self-compassion moderated the relationships between BMI and the criteria. Specifically, the positive relationship between BMI and eating disorder pathology and the negative relationship between BMI and body image flexibility were weaker the higher women's levels of self-compassion. Among young women, self-compassion may help to protect against the greater eating disturbances that coincide with a higher BMI, and may facilitate the positive body image experiences that tend to be lower the higher one's BMI.",
"title": ""
},
{
"docid": "7da294f96055210548a1b9f33204c234",
"text": "ARGUS is a multi-agent visitor identification system distributed over several workstations. Human faces are extracted from security camera images by a neuralnetwork-based face detector, and identified as frequent visitors by ARENA, a memory-based face recognition system. ARGUS then uses a messaging system to notify hosts that their guests have arrived. An interface agent enables users to submit feedback, which is immediately incorporated by ARENA to improve its face recognition performance. The ARGUS components were rapidly developed using JGram, an agent framework that is also detailed in this paper. JGram automatically converts high-level agent specifications into Java source code, and assembles complex tasks by composing individual agent services into a JGram pipeline. ARGUS has been operating successfully in an outdoor environment for several months.",
"title": ""
},
{
"docid": "badb04b676d3dab31024e8033fc8aec4",
"text": "Review was undertaken from February 1969 to January 1998 at the State forensic science center (Forensic Science) in Adelaide, South Australia, of all cases of murder-suicide involving children <16 years of age. A total of 13 separate cases were identified involving 30 victims, all of whom were related to the perpetrators. There were 7 male and 6 female perpetrators (age range, 23-41 years; average, 31 years) consisting of 6 mothers, 6 father/husbands, and 1 uncle/son-in-law. The 30 victims consisted of 11 daughters, 11 sons, 1 niece, 1 mother-in-law, and 6 wives of the assailants. The 23 children were aged from 10 months to 15 years (average, 6.0 years). The 6 mothers murdered 9 children and no spouses, with 3 child survivors. The 6 fathers murdered 13 children and 6 wives, with 1 child survivor. This study has demonstrated a higher percentage of female perpetrators than other studies of murder-suicide. The methods of homicide and suicide used were generally less violent among the female perpetrators compared with male perpetrators. Fathers killed not only their children but also their wives, whereas mothers murdered only their children. These results suggest differences between murder-suicides that involve children and adult-only cases, and between cases in which the mother rather than the father is the perpetrator.",
"title": ""
},
{
"docid": "7ce79a08969af50c1712f0e291dd026c",
"text": "Collaborative filtering (CF) is valuable in e-commerce, and for direct recommendations for music, movies, news etc. But today's systems have several disadvantages, including privacy risks. As we move toward ubiquitous computing, there is a great potential for individuals to share all kinds of information about places and things to do, see and buy, but the privacy risks are severe. In this paper we describe a new method for collaborative filtering which protects the privacy of individual data. The method is based on a probabilistic factor analysis model. Privacy protection is provided by a peer-to-peer protocol which is described elsewhere, but outlined in this paper. The factor analysis approach handles missing data without requiring default values for them. We give several experiments that suggest that this is most accurate method for CF to date. The new algorithm has other advantages in speed and storage over previous algorithms. Finally, we suggest applications of the approach to other kinds of statistical analyses of survey or questionaire data.",
"title": ""
}
] |
scidocsrr
|
b9a0ae6d41b1a9488ec56cf104a0337a
|
Learning Spatially Regularized Correlation Filters for Visual Tracking
|
[
{
"docid": "f25dfc98473b09744d237d85d9aec0b5",
"text": "Object tracking has been one of the most important and active research areas in the field of computer vision. A large number of tracking algorithms have been proposed in recent years with demonstrated success. However, the set of sequences used for evaluation is often not sufficient or is sometimes biased for certain types of algorithms. Many datasets do not have common ground-truth object positions or extents, and this makes comparisons among the reported quantitative results difficult. In addition, the initial conditions or parameters of the evaluated tracking algorithms are not the same, and thus, the quantitative results reported in literature are incomparable or sometimes contradictory. To address these issues, we carry out an extensive evaluation of the state-of-the-art online object-tracking algorithms with various evaluation criteria to understand how these methods perform within the same framework. In this work, we first construct a large dataset with ground-truth object positions and extents for tracking and introduce the sequence attributes for the performance analysis. Second, we integrate most of the publicly available trackers into one code library with uniform input and output formats to facilitate large-scale performance evaluation. Third, we extensively evaluate the performance of 31 algorithms on 100 sequences with different initialization settings. By analyzing the quantitative results, we identify effective approaches for robust tracking and provide potential future research directions in this field.",
"title": ""
}
] |
[
{
"docid": "1afc103a3878d859ec15929433f49077",
"text": "Large-scale deep neural networks (DNNs) are both compute and memory intensive. As the size of DNNs continues to grow, it is critical to improve the energy efficiency and performance while maintaining accuracy. For DNNs, the model size is an important factor affecting performance, scalability and energy efficiency. Weight pruning achieves good compression ratios but suffers from three drawbacks: 1) the irregular network structure after pruning, which affects performance and throughput; 2) the increased training complexity; and 3) the lack of rigirous guarantee of compression ratio and inference accuracy.\n To overcome these limitations, this paper proposes CirCNN, a principled approach to represent weights and process neural networks using block-circulant matrices. CirCNN utilizes the Fast Fourier Transform (FFT)-based fast multiplication, simultaneously reducing the computational complexity (both in inference and training) from O(n2) to O(n log n) and the storage complexity from O(n2) to O(n), with negligible accuracy loss. Compared to other approaches, CirCNN is distinct due to its mathematical rigor: the DNNs based on CirCNN can converge to the same \"effectiveness\" as DNNs without compression. We propose the CirCNN architecture, a universal DNN inference engine that can be implemented in various hardware/software platforms with configurable network architecture (e.g., layer type, size, scales, etc.). In CirCNN architecture: 1) Due to the recursive property, FFT can be used as the key computing kernel, which ensures universal and small-footprint implementations. 2) The compressed but regular network structure avoids the pitfalls of the network pruning and facilitates high performance and throughput with highly pipelined and parallel design. To demonstrate the performance and energy efficiency, we test CirCNN in FPGA, ASIC and embedded processors. Our results show that CirCNN architecture achieves very high energy efficiency and performance with a small hardware footprint. Based on the FPGA implementation and ASIC synthesis results, CirCNN achieves 6 - 102X energy efficiency improvements compared with the best state-of-the-art results.",
"title": ""
},
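The FFT-based fast multiplication mentioned in the abstract above rests on the fact that a circulant matrix can be applied in O(n log n) using only its defining vector. The numpy snippet below is an illustrative toy sketch of that core identity (the function name and sanity check are ours); it is not CirCNN's block-partitioned hardware implementation.

```python
import numpy as np

def circulant_matvec(c, x):
    """Compute C @ x where C is the circulant matrix whose first column is c.

    Uses C @ x = IFFT(FFT(c) * FFT(x)), so only the defining vector c
    (O(n) storage) is materialised and the product costs O(n log n).
    """
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))

# Sanity check against an explicitly built circulant matrix.
n = 8
rng = np.random.default_rng(0)
c = rng.standard_normal(n)
x = rng.standard_normal(n)
C = np.column_stack([np.roll(c, k) for k in range(n)])  # column k is c shifted down by k
assert np.allclose(C @ x, circulant_matvec(c, x))
```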
{
"docid": "09e19f24675cb22638df3f82d07686ac",
"text": "This letter discusses a miniature low profile ultrawideband (UWB) spiral. The antenna is miniaturized using a combination of dielectric and inductive loading. In addition, a ferrite coated ground plane is adopted in place of the traditional metallic ground plane for profile reduction. Using full-wave simulations and measurements, it is shown that the miniaturized spiral can achieve similar performance to a traditional planar spiral twice its size.",
"title": ""
},
{
"docid": "47df1bd26f99313cfcf82430cb98d442",
"text": "To manage supply chain efficiently, e-business organizations need to understand their sales effectively. Previous research has shown that product review plays an important role in influencing sales performance, especially review volume and rating. However, limited attention has been paid to understand how other factors moderate the effect of product review on online sales. This study aims to confirm the importance of review volume and rating on improving sales performance, and further examine the moderating roles of product category, answered questions, discount and review usefulness in such relationships. By analyzing 2939 records of data extracted from Amazon.com using a big data architecture, it is found that review volume and rating have stronger influence on sales rank for search product than for experience product. Also, review usefulness significantly moderates the effects of review volume and rating on product sales rank. In addition, the relationship between review volume and sales rank is significantly moderated by both answered questions and discount. However, answered questions and discount do not have significant moderation effect on the relationship between review rating and sales rank. The findings expand previous literature by confirming important interactions between customer review features and other factors, and the findings provide practical guidelines to manage e-businesses. This study also explains a big data architecture and illustrates the use of big data technologies in testing theoretical",
"title": ""
},
{
"docid": "32a597647795a7333b82827b55c209c9",
"text": "This study investigates the relationship between the extent to which employees have opportunities to voice dissatisfaction and voluntary turnover in 111 short-term, general care hospitals. Results show that, whether or not a union is present, high numbers of mechanisms for employee voice are associated with high retention rates. Implications for theory and research as well as management practice are discussed.",
"title": ""
},
{
"docid": "677b0bc4a8bf55b69062fe790332e107",
"text": "ive Sentence Summarization with Attentive Deep Recurrent Neural Networks Alex Alifimoff aja2015@cs.stanford.edu Author’s note: I liberally use plural pronouns. Most of my projects are group projects so it is now naturally my writing style. Sadly, I was the only one who worked on this project.",
"title": ""
},
{
"docid": "2a60c90b8f36645b3370d12f673b1c62",
"text": "This paper proposes a general approach of 3D reconstruction, a major problem arising in computer vision and virtual reality, based on a combination of Pseudo-Linearization and Errors-in-Variables model. The proposed approach concerns a bunch of corrupted measurements under nonlinear constraints, and optimizes the estimation by taking errors into account. Furthermore, we set a synthetic projective model and adopt a standard deviation-expectation criterion to evaluate the performance or our method applied in 3D reconstruction. Also, some test images are picked from an image database to give this method a chance to demonstrate its performance in our experiments. Finally, as a successful application, this method is used in a calibration-free augmented reality system.",
"title": ""
},
{
"docid": "e8478d17694b39bd252175139a5ca14d",
"text": "Building a computationally creative system is a challenging undertaking. While such systems are beginning to proliferate, and a good number of them have been reasonably well-documented, it may seem, especially to newcomers to the field, that each system is a bespoke design that bears little chance of revealing any general knowledge about CC system building. This paper seeks to dispel this concern by presenting an abstract CC system description, or, in other words a practical, general approach for constructing CC systems.",
"title": ""
},
{
"docid": "3d47cbee5b76ea68a12f6e026fbc2abf",
"text": "This paper presents the first realtime 3D eye gaze capture method that simultaneously captures the coordinated movement of 3D eye gaze, head poses and facial expression deformation using a single RGB camera. Our key idea is to complement a realtime 3D facial performance capture system with an efficient 3D eye gaze tracker. We start the process by automatically detecting important 2D facial features for each frame. The detected facial features are then used to reconstruct 3D head poses and large-scale facial deformation using multi-linear expression deformation models. Next, we introduce a novel user-independent classification method for extracting iris and pupil pixels in each frame. We formulate the 3D eye gaze tracker in the Maximum A Posterior (MAP) framework, which sequentially infers the most probable state of 3D eye gaze at each frame. The eye gaze tracker could fail when eye blinking occurs. We further introduce an efficient eye close detector to improve the robustness and accuracy of the eye gaze tracker. We have tested our system on both live video streams and the Internet videos, demonstrating its accuracy and robustness under a variety of uncontrolled lighting conditions and overcoming significant differences of races, genders, shapes, poses and expressions across individuals.",
"title": ""
},
{
"docid": "c1f095252c6c64af9ceeb33e78318b82",
"text": "Augmented reality (AR) is a technology in which a user's view of the real world is enhanced or augmented with additional information generated from a computer model. To have a working AR system, the see-through display system must be calibrated so that the graphics are properly rendered. The optical see-through systems present an additional challenge because, unlike the video see-through systems, we do not have direct access to the image data to be used in various calibration procedures. This paper reports on a calibration method we developed for optical see-through headmounted displays. We first introduce a method for calibrating monocular optical seethrough displays (that is, a display for one eye only) and then extend it to stereo optical see-through displays in which the displays for both eyes are calibrated in a single procedure. The method integrates the measurements for the camera and a six-degrees-offreedom tracker that is attached to the camera to do the calibration. We have used both an off-the-shelf magnetic tracker as well as a vision-based infrared tracker we have built. In the monocular case, the calibration is based on the alignment of image points with a single 3D point in the world coordinate system from various viewpoints. In this method, the user interaction to perform the calibration is extremely easy compared to prior methods, and there is no requirement for keeping the head immobile while performing the calibration. In the stereo calibration case, the user aligns a stereoscopically fused 2D marker, which is perceived in depth, with a single target point in the world whose coordinates are known. As in the monocular case, there is no requirement that the user keep his or her head fixed.",
"title": ""
},
{
"docid": "a64600e570e7465124fe763c4658ddb5",
"text": "There are several applications in VLSI technology that require high-speed shortest-path computations. The shortest path is a path between two nodes (or points) in a graph such that the sum of the weights of its constituent edges is minimum. Floyd-Warshall algorithm provides fastest computation of shortest path between all pair of nodes present in the graph. With rapid advances in VLSI technology, Field Programmable Gate Arrays (FPGAs) are receiving the attention of the Parallel and High Performance Computing community. This paper gives implementation outcome of Floyd-Warshall algorithm to solve the all pairs shortest-paths problem for directed graph in Verilog.",
"title": ""
},
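For reference, the recurrence that the abstract above maps onto Verilog hardware is the standard Floyd-Warshall dynamic program. The plain-Python sketch below only illustrates the algorithm being accelerated; the FPGA design itself is not reproduced here.

```python
# Reference Floyd-Warshall: all-pairs shortest paths in O(V^3).
INF = float("inf")

def floyd_warshall(weights):
    """weights: adjacency matrix with weights[i][j] = edge weight, or INF if absent."""
    n = len(weights)
    dist = [row[:] for row in weights]          # copy so the input stays untouched
    for k in range(n):                          # allow node k as an intermediate hop
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

graph = [[0, 3, INF, 7],
         [8, 0, 2, INF],
         [5, INF, 0, 1],
         [2, INF, INF, 0]]
print(floyd_warshall(graph))
```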
{
"docid": "c59652c2166aefb00469517cd270dea2",
"text": "Intrusion detection systems have traditionally been based on the characterization of an attack and the tracking of the activity on the system to see if it matches that characterization. Recently, new intrusion detection systems based on data mining are making their appearance in the field. This paper describes the design and experiences with the ADAM (Audit Data Analysis and Mining) system, which we use as a testbed to study how useful data mining techniques can be in intrusion detection.",
"title": ""
},
{
"docid": "36484e10b0644f01e8adbb3268c20561",
"text": "Label propagation is a powerful and flexible semi-supervised learning technique on graphs. Neural networks, on the other hand, have proven track records in many supervised learning tasks. In this work, we propose a training framework with a graph-regularised objective, namely Neural Graph Machines, that can combine the power of neural networks and label propagation. This work generalises previous literature on graph-augmented training of neural networks, enabling it to be applied to multiple neural architectures (Feed-forward NNs, CNNs and LSTM RNNs) and a wide range of graphs. The new objective allows the neural networks to harness both labeled and unlabeled data by: (a)~allowing the network to train using labeled data as in the supervised setting, (b)~biasing the network to learn similar hidden representations for neighboring nodes on a graph, in the same vein as label propagation. Such architectures with the proposed objective can be trained efficiently using stochastic gradient descent and scaled to large graphs, with a runtime that is linear in the number of edges. The proposed joint training approach convincingly outperforms many existing methods on a wide range of tasks (multi-label classification on social graphs, news categorization, document classification and semantic intent classification), with multiple forms of graph inputs (including graphs with and without node-level features) and using different types of neural networks.",
"title": ""
},
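The graph-regularised objective described above combines a supervised loss on labeled nodes with a term that pulls the hidden representations of neighbouring nodes together. The numpy sketch below is only a schematic reading of that idea; the exact loss terms and edge weighting in the paper may differ.

```python
import numpy as np

def graph_regularised_loss(logits, labels, hidden, edges, edge_weights, alpha=0.1):
    """Supervised cross-entropy on labeled nodes plus a graph smoothness penalty.

    logits : (n_labeled, n_classes) network outputs for the labeled nodes
    labels : (n_labeled,) integer class ids
    hidden : (n_nodes, d) hidden representations for every node, labeled or not
    edges  : list of (u, v) node-index pairs taken from the graph
    """
    # cross-entropy over the labeled subset
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    supervised = -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

    # penalise distance between hidden states of adjacent nodes (labeled or unlabeled)
    graph_term = sum(w * np.sum((hidden[u] - hidden[v]) ** 2)
                     for (u, v), w in zip(edges, edge_weights))
    return supervised + alpha * graph_term
```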
{
"docid": "acca339b2437da35ca75aecd411c7b86",
"text": "form to demonstrate potential theoretical causes for qualitatively assessed real-world phenomena? Alternatively, can they be used to create wellparameterized empirical simulations appropriate for scenario and policy analysis? How can these models be empirically parameterized, verified, and validated? What are some remaining challenges and open questions in this research area? By providing answers to these questions, we hope to offer guidance to researchers considering the utility of this new modeling approach. We also hope to spark a healthy debate among researchers as to the potential advantages, limitations, andmajor research challenges ofMAS/LUCC modeling.AsMASmodeling studies are beingundertaken by geographers in other research fields—including transportation, integrated assessment, recreation, and resource management—many of the issues raised in this articlemay be relevant for other applications as well. The remainder of this article sequentially addresses the questions outlined above. Approaches to Modeling Land-Use/Cover Change This section examines myriad LUCC modeling approaches and offers MAS as a means of complementing other techniques. We briefly discuss the strengths and weaknesses of seven broad, partly overlapping categories of models: mathematical equation-based, system dynamics, statistical, expert system, evolutionary, cellular, and hybrid. This review is not exhaustive and only serves to highlight ways in which present techniques are complemented by MAS/LUCC models that combine cellular and agent-based models. More comprehensive overviews of LUCCmodeling techniques focus on tropical deforestation (Lambin 1994; Kaimowitz and Angelsen 1998), economic models of land use (Plantinga 1999), ecological landscapes (Baker 1989), urban and regional community planning (U.S. EPA 2000), and LUCC dynamics (Briassoulis 2000; Agarwal et al. 2002; Veldkamp and Lambin 2001; Verburg et al. forthcoming). Equation-Based Models Most models are mathematical in some way, but some are especially so, in that they rely on equations that seek a static or equilibrium solution. The most common mathematical models are sets of equations based on theories of population growth and diffusion that specify cumulative LUCC over time (Sklar and Costanza 1991). More complex models, often grounded in economic theory, employ simultaneous joint equations (Kaimowitz and Angelsen 1998). One variant of such models is based on linear programming (Weinberg, Kling, and Wilen 1993; Howitt 1995), potentially linked to GIS information on land parcels (Chuvieco 1993; Longley, Higgs, and Martin 1994; Cromley and Hanink 1999). A major drawback of such models is that a numerical or analytical solution to the system of equations must be obtained, limiting the level of complexity that may practically be built into such models. Simulation models that combine mathematical equationswith other data structures are considered below.",
"title": ""
},
{
"docid": "956e8e1b1408263d7841832e7f0a0885",
"text": "High quality user experience (UX) has become a central competitive factor of product development in mature consumer markets. Although the term UX is widely used, the methods and tools for evaluating UX are still inadequate. This SIG session collects information and experiences about UX evaluation methods used in both academia and industry, discusses the pros and cons of each method, and ideates on how to improve the methods.",
"title": ""
},
{
"docid": "6200d3c4435ae34e912fc8d2f92e904b",
"text": "The problem of cross-modal retrieval, e.g., using a text query to search for images and vice-versa, is considered in this paper. A novel model involving correspondence autoencoder (Corr-AE) is proposed here for solving this problem. The model is constructed by correlating hidden representations of two uni-modal autoencoders. A novel optimal objective, which minimizes a linear combination of representation learning errors for each modality and correlation learning error between hidden representations of two modalities, is used to train the model as a whole. Minimization of correlation learning error forces the model to learn hidden representations with only common information in different modalities, while minimization of representation learning error makes hidden representations are good enough to reconstruct input of each modality. A parameter $\\alpha$ is used to balance the representation learning error and the correlation learning error. Based on two different multi-modal autoencoders, Corr-AE is extended to other two correspondence models, here we called Corr-Cross-AE and Corr-Full-AE. The proposed models are evaluated on three publicly available data sets from real scenes. We demonstrate that the three correspondence autoencoders perform significantly better than three canonical correlation analysis based models and two popular multi-modal deep models on cross-modal retrieval tasks.",
"title": ""
},
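A minimal sketch of the kind of objective described above is given below: per-modality reconstruction errors combined with a correlation error between the two hidden codes, balanced by the parameter α. The (1 − α)/α split and the squared-error terms are one plausible reading, not the paper's exact formulation.

```python
import numpy as np

def corr_ae_objective(x_img, x_txt, rec_img, rec_txt, h_img, h_txt, alpha=0.8):
    """Loss = (1 - alpha) * reconstruction errors + alpha * correlation error.

    rec_* are the autoencoder reconstructions and h_* the hidden codes of the
    two modalities for the same paired samples.
    """
    rec_err = (np.mean(np.sum((x_img - rec_img) ** 2, axis=1))
               + np.mean(np.sum((x_txt - rec_txt) ** 2, axis=1)))
    corr_err = np.mean(np.sum((h_img - h_txt) ** 2, axis=1))
    return (1 - alpha) * rec_err + alpha * corr_err
```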
{
"docid": "097311a79871cb45051f6f3cf1b89f84",
"text": "Recommender systems typically leverage two types of signals to effectively recommend items to users: user activities and content matching between user and item profiles, and recommendation models in literature are usually categorized into collaborative filtering models, content-based models and hybrid models. In practice, when rich profiles about users and items are available, and user activities are sparse (cold-start), effective content matching signals become much more important in the relevance of the recommendation. The de-facto method to measure similarity between two pieces of text is computing the cosine similarity of the two bags of words, and each word is weighted by TF (term frequency within the document) × IDF (inverted document frequency of the word within the corpus). In general sense, TF can represent any local weighting scheme of the word within each document, and IDF can represent any global weighting scheme of the word across the corpus. In this paper, we focus on the latter, i.e., optimizing the global term weights, for a particular recommendation domain by leveraging supervised approaches. The intuition is that some frequent words (lower IDF, e.g. “database”) can be essential and predictive for relevant recommendation, while some rare words (higher IDF, e.g. the name of a small company) could have less predictive power. Given plenty of observed activities between users and items as training data, we should be able to learn better domain-specific global term weights, which can further improve the relevance of recommendation. We propose a unified method that can simultaneously learn the weights of multiple content matching signals, as well as global term weights for specific recommendation tasks. Our method is efficient to handle large-scale training data ∗This work was conducted during an internship at LinkedIn. Copyright is held by the International World Wide Web Conference Committee (IW3C2). IW3C2 reserves the right to provide a hyperlink to the author’s site if the Material is used in electronic media. WWW 2016, April 11–15, 2016, Montréal, Québec, Canada. ACM 978-1-4503-4143-1/16/04. http://dx.doi.org/10.1145/2872427.2883069 . generated by production recommender systems. And experiments on LinkedIn job recommendation data justify the effectiveness of our approach.",
"title": ""
},
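As background for the supervised global term weights discussed above, the de-facto TF × IDF cosine baseline the paper starts from can be sketched as follows. The tokenisation and the tiny corpus are toy examples for illustration only.

```python
import math
from collections import Counter

def idf(corpus):
    """Inverse document frequency for every term in a list of tokenised documents."""
    n = len(corpus)
    df = Counter(term for doc in corpus for term in set(doc))
    return {term: math.log(n / count) for term, count in df.items()}

def tfidf_vector(doc, idf_weights):
    tf = Counter(doc)
    return {term: freq * idf_weights.get(term, 0.0) for term, freq in tf.items()}

def cosine(u, v):
    dot = sum(u[t] * v[t] for t in u if t in v)
    norm = (math.sqrt(sum(x * x for x in u.values()))
            * math.sqrt(sum(x * x for x in v.values())))
    return dot / norm if norm else 0.0

corpus = [["database", "systems", "research"],
          ["recommender", "systems", "for", "jobs"],
          ["deep", "learning", "research"]]
w = idf(corpus)
print(cosine(tfidf_vector(corpus[0], w), tfidf_vector(corpus[1], w)))
```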
{
"docid": "8372f42c70b3790757f4f1d5535cebc1",
"text": "WiFi positioning system has been studying in many fields since the past. Recently, a lot of mobile companies are competing for smartphones. Accordingly, this paper proposes an indoor WiFi positioning system using Android-based smartphones.",
"title": ""
},
{
"docid": "3c27569524cba921edd79555ffc57c01",
"text": "Two low-profile hybrid antennas are presented. One of the structures provides linearly polarized (LP) wave and the other one produces circularly polarized (CP) radiated field. half-mode substrate integrated waveguide (HMSIW) technique is used to make a semi-circular cavity for reducing the antenna size. Using proximity effect, TM010 mode of a rectangular patch is excited by the HMSIW cavity. An inset microstrip line is used to excite the structure which leads to capability of planar circuit integration. The effects of the HMSIW cavity-backed antenna with and without patches are investigated. The simulated results show that adding the patch increases bandwidth and improves antenna gain. Both proposed hybrid antennas are made on a single-layer substrate using printed circuit board (PCB) process. Two prototypes of the proposed antennas are designed, simulated and fabricated. It is shown that broadband impedance bandwidth of 10% and maximum gain of 7.5 dBi is obtained for the LP antenna. In the case of CP antenna, axial ratio (AR) bandwidth is at least 1% with maximum gain of 6.8 dBi. The proposed hybrid antennas are low profile and offer attractive features such as high gain, low cross polarization level and high front-to-back ratio.",
"title": ""
},
{
"docid": "998fe25641f4f6dc6649b02226c5e86a",
"text": "We present the malicious administrator problem, in which one or more network administrators attempt to damage routing, forwarding, or network availability by misconfiguring controllers. While this threat vector has been acknowledged in previous work, most solutions have focused on enforcing specific policies for forwarding rules. We present a definition of this problem and a controller design called Fleet that makes a first step towards addressing this problem. We present two protocols that can be used with the Fleet controller, and argue that its lower layer deployed on top of switches eliminates many problems of using multiple controllers in SDNs. We then present a prototype simulation and show that as long as a majority of non-malicious administrators exists, we can usually recover from link failures within several seconds (a time dominated by failure detection speed and inter-administrator latency).",
"title": ""
},
{
"docid": "cee9b099f6ea087376b56067620e1c64",
"text": "This paper presents a set of techniques for predicting aggressive comments in social media. In a time when cyberbullying has, unfortunately, made its entrance into society and Internet, it becomes necessary to find ways for preventing and overcoming this phenomenon. One of these concerns the use of machine learning techniques for automatically detecting cases of cyberbullying; a primary task within this cyberbullying detection consists of aggressive text detection. We concretely explore different computational techniques for carrying out this task, either as a classification or as a regression problem, and our results suggest that a key feature is the identification of profane words.",
"title": ""
}
] |
scidocsrr
|
191acb49442f6505c839606b130fa5ff
|
A simulation as a service cloud middleware
|
[
{
"docid": "e740e5ff2989ce414836c422c45570a9",
"text": "Many organizations desired to operate their businesses, works and services in a mobile (i.e. just in time and anywhere), dynamic, and knowledge-oriented fashion. Activities like e-learning, environmental learning, remote inspection, health-care, home security and safety mechanisms etc. requires a special infrastructure that might provide continuous, secured, reliable and mobile data with proper information/ knowledge management system in context to their confined environment and its users. An indefinite number of sensor networks for numerous healthcare applications has been designed and implemented but they all lacking extensibility, fault-tolerance, mobility, reliability and openness. Thus, an open, flexible and rearrangeable infrastructure is proposed for healthcare monitoring applications. Where physical sensors are virtualized as virtual sensors on cloud computing by this infrastructure and virtual sensors are provisioned automatically to end users whenever they required. In this paper we reviewed some approaches to hasten the service creations in field of healthcare and other applications with Cloud-Sensor architecture. This architecture provides services to end users without being worried about its implementation details. The architecture allows the service requesters to use the virtual sensors by themselves or they may create other new services by extending virtual sensors.",
"title": ""
},
{
"docid": "9380bb09ffc970499931f063008c935f",
"text": "Cloud computing and virtualization technology have revolutionized general-purpose computing applications in the past decade. The cloud paradigm offers advantages through reduction of operation costs, server consolidation, flexible system configuration and elastic resource provisioning. However, despite the success of cloud computing for general-purpose computing, existing cloud computing and virtualization technology face tremendous challenges in supporting emerging soft real-time applications such as online video streaming, cloud-based gaming, and telecommunication management. These applications demand real-time performance in open, shared and virtualized computing environments. This paper identifies the technical challenges in supporting real-time applications in the cloud, surveys recent advancement in real-time virtualization and cloud computing technology, and offers research directions to enable cloud-based real-time applications in the future. 2014 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "7f06370a81e7749970cd0359c5b5f993",
"text": "The use of virtualization technologies in high performance computing (HPC) environments has traditionally been avoided due to their inherent performance overhead. However, with the rise of container-based virtualization implementations, such as Linux VServer, OpenVZ and Linux Containers (LXC), it is possible to obtain a very low overhead leading to near-native performance. In this work, we conducted a number of experiments in order to perform an in-depth performance evaluation of container-based virtualization for HPC. We also evaluated the trade-off between performance and isolation in container-based virtualization systems and compared them with Xen, which is a representative of the traditional hypervisor-based virtualization systems used today.",
"title": ""
}
] |
[
{
"docid": "234fcc911f6d94b6bbb0af237ad5f34f",
"text": "Contamination of samples with DNA is still a major problem in microbiology laboratories, despite the wide acceptance of PCR and other amplification techniques for the detection of frequently low amounts of target DNA. This review focuses on the implications of contamination in the diagnosis and research of infectious diseases, possible sources of contaminants, strategies for prevention and destruction, and quality control. Contamination of samples in diagnostic PCR can have far-reaching consequences for patients, as illustrated by several examples in this review. Furthermore, it appears that the (sometimes very unexpected) sources of contaminants are diverse (including water, reagents, disposables, sample carry over, and amplicon), and contaminants can also be introduced by unrelated activities in neighboring laboratories. Therefore, lack of communication between researchers using the same laboratory space can be considered a risk factor. Only a very limited number of multicenter quality control studies have been published so far, but these showed false-positive rates of 9–57%. The overall conclusion is that although nucleic acid amplification assays are basically useful both in research and in the clinic, their accuracy depends on awareness of risk factors and the proper use of procedures for the prevention of nucleic acid contamination. The discussion of prevention and destruction strategies included in this review may serve as a guide to help improve laboratory practices and reduce the number of false-positive amplification results.",
"title": ""
},
{
"docid": "cff3b4f6db26e66893a9db95fb068ef1",
"text": "In this paper, we consider the task of text categorization as a graph classification problem. By representing textual documents as graph-of-words instead of historical n-gram bag-of-words, we extract more discriminative features that correspond to long-distance n-grams through frequent subgraph mining. Moreover, by capitalizing on the concept of k-core, we reduce the graph representation to its densest part – its main core – speeding up the feature extraction step for little to no cost in prediction performances. Experiments on four standard text classification datasets show statistically significant higher accuracy and macro-averaged F1-score compared to baseline approaches.",
"title": ""
},
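One possible sketch of the graph-of-words plus main-core pipeline described above is shown below, using networkx. The sliding-window size and the use of unweighted edges are illustrative assumptions rather than the paper's exact settings.

```python
import networkx as nx

def graph_of_words(tokens, window=4):
    """Connect each token to the tokens that co-occur with it in a sliding window."""
    g = nx.Graph()
    g.add_nodes_from(tokens)
    for i, w1 in enumerate(tokens):
        for w2 in tokens[i + 1 : i + window]:
            if w1 != w2:                      # avoid self-loops from repeated words
                g.add_edge(w1, w2)
    return g

doc = "text categorization as a graph classification problem on a graph of words".split()
g = graph_of_words(doc)
main_core = nx.k_core(g)                      # densest part of the graph: its main core
print(sorted(main_core.nodes()))
```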
{
"docid": "417100b3384ec637b47846134bc6d1fd",
"text": "The electronic way of learning and communicating with students offers a lot of advantages that can be achieved through different solutions. Among them, the most popular approach is the use of a learning management system. Teachers and students do not have the possibility to use all of the available learning system tools and modules. Even for modules that are used it is necessary to find the most effective method of approach for any given situation. Therefore, in this paper we make a usability evaluation of standard modules in Moodle, one of the leading open source learning management systems. With this research, we obtain significant results and informationpsilas for administrators, teachers and students on how to improve effective usage of this system.",
"title": ""
},
{
"docid": "48317f6959b4a681e0ff001c7ce3e7ee",
"text": "We introduce the challenge of using machine learning effectively in space applications and motivate the domain for future researchers. Machine learning can be used to enable greater autonomy to improve the duration, reliability, cost-effectiveness, and science return of space missions. In addition to the challenges provided by the nature of space itself, the requirements of a space mission severely limit the use of many current machine learning approaches, and we encourage researchers to explore new ways to address these challenges.",
"title": ""
},
{
"docid": "6746032bbd302a8c873ac437fc79b3fe",
"text": "This article examines the development of profitor revenue-sharing contracts in the motion picture industry. Contrary to much popular belief, such contracts have been in use since the start of the studio era. However, early contracts differed from those seen today. The evolution of the current contract is traced, and evidence regarding the increased use of sharing contracts after 1948 is examined. I examine competing theories of the economic function served by these contracts. I suggest that it is unlikely that these contracts are the result of a standard principal-agent problem.",
"title": ""
},
{
"docid": "defb837e866948e5e092ab64476d33b5",
"text": "Recent multicoil polarised pads called Double D pads (DDP) and Bipolar Pads (BPP) show excellent promise when used in lumped charging due to having single sided fields and high native Q factors. However, improvements to field leakage are desired to enable higher power transfer while keeping the leakage flux within ICNIRP levels. This paper proposes a method to reduce the leakage flux which a lumped inductive power transfer (IPT) system exhibits by modifying the ferrite structure of its pads. The DDP and BPP pads ferrite structures are both modified by extending them past the ends of the coils in each pad with the intention of attracting only magnetic flux generated by the primary pad not coupled onto the secondary pad. Simulated improved ferrite structures are validated through practical measurements.",
"title": ""
},
{
"docid": "4b057d86825e346291d675e0c1285fad",
"text": "We describe theclipmap, a dynamic texture representation that efficiently caches textures of arbitrarily large size in a finite amount of physical memory for rendering at real-time rates. Further, we describe a software system for managing clipmaps that supports integration into demanding real-time applications. We show the scale and robustness of this integrated hardware/software architecture by reviewing an application virtualizing a 170 gigabyte texture at 60 Hertz. Finally, we suggest ways that other rendering systems may exploit the concepts underlying clipmaps to solve related problems. CR",
"title": ""
},
{
"docid": "6be97ac80738519792c02b033563efa7",
"text": "Title of Document: SPIN: LEXICAL SEMANTICS, TRANSITIVITY, AND THE IDENTIFICATION OF IMPLICIT SENTIMENT Stephan Charles Greene Doctor of Philosophy, 2007 Directed By: Professor Philip Resnik, Department of Linguistics and Institute for Advanced Computer Studies Current interest in automatic sentiment analysis i motivated by a variety of information requirements. The vast majority of work in sentiment analysis has been specifically targeted at detecting subjective state ments and mining opinions. This dissertation focuses on a different but related pro blem that to date has received relatively little attention in NLP research: detect ing implicit sentiment , or spin, in text. This text classification task is distinguished from ther sentiment analysis work in that there is no assumption that the documents to b e classified with respect to sentiment are necessarily overt expressions of opin ion. They rather are documents that might reveal a perspective . This dissertation describes a novel approach to t e identification of implicit sentiment, motivated by ideas drawn from the literature on lexical semantics and argument structure, supported and refined through psycholinguistic experimentation. A relationship pr edictive of sentiment is established for components of meaning that are thou g t to be drivers of verbal argument selection and linking and to be arbiters o f what is foregrounded or backgrounded in discourse. In computational experim nts employing targeted lexical selection for verbs and nouns, a set of features re flective of these components of meaning is extracted for the terms. As observable p roxies for the underlying semantic components, these features are exploited using mach ine learning methods for text classification with respect to perspective. After i nitial experimentation with manually selected lexical resources, the method is generaliz d to require no manual selection or hand tuning of any kind. The robustness of this lin gu stically motivated method is demonstrated by successfully applying it to three d istinct text domains under a number of different experimental conditions, obtain ing the best classification accuracies yet reported for several sentiment class ification tasks. A novel graph-based classifier combination method is introduced which f urther improves classification accuracy by integrating statistical classifiers wit h models of inter-document relationships. SPIN: LEXICAL SEMANTICS, TRANSITIVITY, AND THE IDENTIFICATION OF IMPLICIT SENTIMENT",
"title": ""
},
{
"docid": "0df2ca944dcdf79369ef5a7424bf3ffe",
"text": "This article first presents two theories representing distinct approaches to the field of stress research: Selye's theory of `systemic stress' based in physiology and psychobiology, and the `psychological stress' model developed by Lazarus. In the second part, the concept of coping is described. Coping theories may be classified according to two independent parameters: traitoriented versus state-oriented, and microanalytic versus macroanalytic approaches. The multitude of theoretical conceptions is based on the macroanalytic, trait-oriented approach. Examples of this approach that are presented in this article are `repression–sensitization,' `monitoringblunting,' and the `model of coping modes.' The article closes with a brief outline of future perspectives in stress and coping research.",
"title": ""
},
{
"docid": "6ce2991a68c7d4d6467ff2007badbaf0",
"text": "This paper investigates acoustic models for automatic speech recognition (ASR) using deep neural networks (DNNs) whose input is taken directly from windowed speech waveforms (WSW). After demonstrating the ability of these networks to automatically acquire internal representations that are similar to mel-scale filter-banks, an investigation into efficient DNN architectures for exploiting WSW features is performed. First, a modified bottleneck DNN architecture is investigated to capture dynamic spectrum information that is not well represented in the time domain signal. Second,the redundancies inherent in WSW based DNNs are considered. The performance of acoustic models defined over WSW features is compared to that obtained from acoustic models defined over mel frequency spectrum coefficient (MFSC) features on the Wall Street Journal (WSJ) speech corpus. It is shown that using WSW features results in a 3.0 percent increase in WER relative to that resulting from MFSC features on the WSJ corpus. However, when combined with MFSC features, a reduction in WER of 4.1 percent is obtained with respect to the best evaluated MFSC based DNN acoustic model.",
"title": ""
},
{
"docid": "e91310da7635df27b5c4056388cc6e52",
"text": "This paper presents a new metric for automated registration of multi-modal sensor data. The metric is based on the alignment of the orientation of gradients formed from the two candidate sensors. Data registration is performed by estimating the sensors’ extrinsic parameters that minimises the misalignment of the gradients. The metric can operate in a large range of applications working on both 2D and 3D sensor outputs and is suitable for both (i) single scan data registration and (ii) multi-sensor platform calibration using multiple scans. Unlike traditional calibration methods, it does not require markers or other registration aids to be placed in the scene. The effectiveness of the new method is demonstrated with experimental results on a variety of camera-lidar and camera-camera calibration problems. The novel metric is validated through comparisons with state of the art methods. Our approach is shown to give high quality registrations under all tested conditions. C © 2014 Wiley Periodicals, Inc.",
"title": ""
},
{
"docid": "35b286999957396e1f5cab6e2370ed88",
"text": "Text summarization condenses a text to a shorter version while retaining the important informations. Abstractive summarization is a recent development that generates new phrases, rather than simply copying or rephrasing sentences within the original text. Recently neural sequence-to-sequence models have achieved good results in the field of abstractive summarization, which opens new possibilities and applications for industrial purposes. However, most practitioners observe that these models still use large parts of the original text in the output summaries, making them often similar to extractive frameworks. To address this drawback, we first introduce a new metric to measure how much of a summary is extracted from the input text. Secondly, we present a novel method, that relies on a diversity factor in computing the neural network loss, to improve the diversity of the summaries generated by any neural abstractive model implementing beam search. Finally, we show that this method not only makes the system less extractive, but also improves the overall rouge score of state-of-the-art methods by at least 2 points.",
"title": ""
},
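The abstract above introduces a metric for how much of a summary is extracted from the input text. One simple proxy for that idea, shown purely as an illustration (the paper's actual definition may differ), is the fraction of summary n-grams that occur verbatim in the source.

```python
def copy_rate(summary_tokens, source_tokens, n=3):
    """Fraction of summary n-grams that appear verbatim in the source document."""
    def ngrams(tokens):
        return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}
    summary_ngrams = ngrams(summary_tokens)
    if not summary_ngrams:
        return 0.0
    return len(summary_ngrams & ngrams(source_tokens)) / len(summary_ngrams)

source = "the quick brown fox jumps over the lazy dog".split()
summary = "the quick brown fox sleeps".split()
print(copy_rate(summary, source))   # 2 of the 3 summary trigrams are copied -> 0.67
```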
{
"docid": "1d60437cbd2cec5058957af291ca7cde",
"text": "e behavior of users in certain services could be a clue that can be used to infer their preferences and may be used to make recommendations for other services they have never used. However, the cross-domain relationships between items and user consumption paerns are not simple, especially when there are few or no common users and items across domains. To address this problem, we propose a content-based cross-domain recommendation method for cold-start users that does not require userand itemoverlap. We formulate recommendation as extreme multi-class classication where labels (items) corresponding to the users are predicted. With this formulation, the problem is reduced to a domain adaptation seing, in which a classier trained in the source domain is adapted to the target domain. For this, we construct a neural network that combines an architecture for domain adaptation, Domain Separation Network, with a denoising autoencoder for item representation. We assess the performance of our approach in experiments on a pair of data sets collected from movie and news services of Yahoo! JAPAN and show that our approach outperforms several baseline methods including a cross-domain collaborative ltering method.",
"title": ""
},
{
"docid": "a89c53f4fbe47e7a5e49193f0786cd6d",
"text": "Although hundreds of studies have documented the association between family poverty and children's health, achievement, and behavior, few measure the effects of the timing, depth, and duration of poverty on children, and many fail to adjust for other family characteristics (for example, female headship, mother's age, and schooling) that may account for much of the observed correlation between poverty and child outcomes. This article focuses on a recent set of studies that explore the relationship between poverty and child outcomes in depth. By and large, this research supports the conclusion that family income has selective but, in some instances, quite substantial effects on child and adolescent well-being. Family income appears to be more strongly related to children's ability and achievement than to their emotional outcomes. Children who live in extreme poverty or who live below the poverty line for multiple years appear, all other things being equal, to suffer the worst outcomes. The timing of poverty also seems to be important for certain child outcomes. Children who experience poverty during their preschool and early school years have lower rates of school completion than children and adolescents who experience poverty only in later years. Although more research is needed on the significance of the timing of poverty on child outcomes, findings to date suggest that interventions during early childhood may be most important in reducing poverty's impact on children.",
"title": ""
},
{
"docid": "5e75a46c36e663791db0f8b45f685cb6",
"text": "This study provides one of very few experimental investigations into the impact of a musical soundtrack on the video gaming experience. Participants were randomly assigned to one of three experimental conditions: game-with-music, game-without-music, or music-only. After playing each of three segments of The Lord of the Rings: The Two Towers (Electronic Arts, 2002)--or, in the music-only condition, listening to the musical score that accompanies the scene--subjects responded on 21 verbal scales. Results revealed that some, but not all, of the verbal scales exhibited a statistically significant difference due to the presence of a musical score. In addition, both gender and age level were shown to be significant factors for some, but not all, of the verbal scales. Details of the specific ways in which music affects the gaming experience are provided in the body of the paper.",
"title": ""
},
{
"docid": "8a293b95b931f4f72fe644fdfe30564a",
"text": "Today, the concept of brain connectivity plays a central role in the neuroscience. While functional connectivity is defined as the temporal coherence between the activities of different brain areas, the effective connectivity is defined as the simplest brain circuit that would produce the same temporal relationship as observed experimentally between cortical sites. The most used method to estimate effective connectivity in neuroscience is the structural equation modeling (SEM), typically used on data related to the brain hemodynamic behavior. However, the use of hemodynamic measures limits the temporal resolution on which the brain process can be followed. The present research proposes the use of the SEM approach on the cortical waveforms estimated from the high-resolution EEG data, which exhibits a good spatial resolution and a higher temporal resolution than hemodynamic measures. We performed a simulation study, in which different main factors were systematically manipulated in the generation of test signals, and the errors in the estimated connectivity were evaluated by the analysis of variance (ANOVA). Such factors were the signal-to-noise ratio and the duration of the simulated cortical activity. Since SEM technique is based on the use of a model formulated on the basis of anatomical and physiological constraints, different experimental conditions were analyzed, in order to evaluate the effect of errors made in the a priori model formulation on its performances. The feasibility of the proposed approach has been shown in a human study using high-resolution EEG recordings related to finger tapping movements.",
"title": ""
},
{
"docid": "3476246809afe4e6b7cef9bbbed1926e",
"text": "The aim of this study was to investigate the efficacy of a proposed new implant mediated drug delivery system (IMDDS) in rabbits. The drug delivery system is applied through a modified titanium implant that is configured to be implanted into bone. The implant is hollow and has multiple microholes that can continuously deliver therapeutic agents into the systematic body. To examine the efficacy and feasibility of the IMDDS, we investigated the pharmacokinetic behavior of dexamethasone in plasma after a single dose was delivered via the modified implant placed in the rabbit tibia. After measuring the plasma concentration, the areas under the curve showed that the IMDDS provided a sustained release for a relatively long period. The result suggests that the IMDDS can deliver a sustained release of certain drug components with a high bioavailability. Accordingly, the IMDDS may provide the basis for a novel approach to treating patients with chronic diseases.",
"title": ""
},
{
"docid": "bd21815804115f2c413265660a78c203",
"text": "Outsourcing, internationalization, and complexity characterize today's aerospace supply chains, making aircraft manufacturers structurally dependent on each other. Despite several complexity-related supply chain issues reported in the literature, aerospace supply chain structure has not been studied due to a lack of empirical data and suitable analytical toolsets for studying system structure. In this paper, we assemble a large-scale empirical data set on the supply network of Airbus and apply the new science of networks to analyze how the industry is structured. Our results show that the system under study is a network, formed by communities connected by hub firms. Hub firms also tend to connect to each other, providing cohesiveness, yet making the network vulnerable to disruptions in them. We also show how network science can be used to identify firms that are operationally critical and that are key to disseminating information.",
"title": ""
},
{
"docid": "dc207fb8426f468dde2cb1d804b33539",
"text": "This paper presents a webcam-based spherical coordinate conversion system using OpenCL massive parallel computing for panorama video image stitching. With multi-core architecture and its high-bandwidth data transmission rate of memory accesses, modern programmable GPU makes it possible to process multiple video images in parallel for real-time interaction. To get a panorama view of 360 degrees, we use OpenCL to stitch multiple webcam video images into a panorama image and texture mapped it to a spherical object to compose a virtual reality immersive environment. The experimental results show that when we use NVIDIA 9600GT to process eight 640×480 images, OpenCL can achieve ninety times speedups.",
"title": ""
},
{
"docid": "161c79eeb01624c497446cb2c51f3893",
"text": "In this article, results of a German nationwide survey (KFN schools survey 2007/2008) are presented. The controlled sample of 44,610 male and female ninth-graders was carried out in 2007 and 2008 by the Criminological Research Institute of Lower Saxony (KFN). According to a newly developed screening instrument (KFN-CSAS-II), which was presented to every third juvenile participant (N = 15,168), 3% of the male and 0.3% of the female students are diagnosed as dependent on video games. The data indicate a clear dividing line between extensive gaming and video game dependency (VGD) as a clinically relevant phenomenon. VGD is accompanied by increased levels of psychological and social stress in the form of lower school achievement, increased truancy, reduced sleep time, limited leisure activities, and increased thoughts of committing suicide. In addition, it becomes evident that personal risk factors are crucial for VGD. The findings indicate the necessity of additional research as well as the respective measures in the field of health care policies.",
"title": ""
}
] |
scidocsrr
|
f9535352b316cfc03772935e7a0af264
|
Tour the world: Building a web-scale landmark recognition engine
|
[
{
"docid": "368a3dd36283257c5573a7e1ab94e930",
"text": "This paper develops the multidimensional binary search tree (or <italic>k</italic>-d tree, where <italic>k</italic> is the dimensionality of the search space) as a data structure for storage of information to be retrieved by associative searches. The <italic>k</italic>-d tree is defined and examples are given. It is shown to be quite efficient in its storage requirements. A significant advantage of this structure is that a single data structure can handle many types of queries very efficiently. Various utility algorithms are developed; their proven average running times in an <italic>n</italic> record file are: insertion, <italic>O</italic>(log <italic>n</italic>); deletion of the root, <italic>O</italic>(<italic>n</italic><supscrpt>(<italic>k</italic>-1)/<italic>k</italic></supscrpt>); deletion of a random node, <italic>O</italic>(log <italic>n</italic>); and optimization (guarantees logarithmic performance of searches), <italic>O</italic>(<italic>n</italic> log <italic>n</italic>). Search algorithms are given for partial match queries with <italic>t</italic> keys specified [proven maximum running time of <italic>O</italic>(<italic>n</italic><supscrpt>(<italic>k</italic>-<italic>t</italic>)/<italic>k</italic></supscrpt>)] and for nearest neighbor queries [empirically observed average running time of <italic>O</italic>(log <italic>n</italic>).] These performances far surpass the best currently known algorithms for these tasks. An algorithm is presented to handle any general intersection query. The main focus of this paper is theoretical. It is felt, however, that <italic>k</italic>-d trees could be quite useful in many applications, and examples of potential uses are given.",
"title": ""
}
] |
[
{
"docid": "2e0b2bc23117bbe8d41f400761410638",
"text": "Free radicals and other reactive species (RS) are thought to play an important role in many human diseases. Establishing their precise role requires the ability to measure them and the oxidative damage that they cause. This article first reviews what is meant by the terms free radical, RS, antioxidant, oxidative damage and oxidative stress. It then critically examines methods used to trap RS, including spin trapping and aromatic hydroxylation, with a particular emphasis on those methods applicable to human studies. Methods used to measure oxidative damage to DNA, lipids and proteins and methods used to detect RS in cell culture, especially the various fluorescent \"probes\" of RS, are also critically reviewed. The emphasis throughout is on the caution that is needed in applying these methods in view of possible errors and artifacts in interpreting the results.",
"title": ""
},
{
"docid": "5387c752db7b4335a125df91372099b3",
"text": "We examine how people’s different uses of the Internet predict their later scores on a standard measure of depression, and how their existing social resources moderate these effects. In a longitudinal US survey conducted in 2001 and 2002, almost all respondents reported using the Internet for information, and entertainment and escape; these uses of the Internet had no impact on changes in respondents’ level of depression. Almost all respondents also used the Internet for communicating with friends and family, and they showed lower depression scores six months later. Only about 20 percent of this sample reported using the Internet to meet new people and talk in online groups. Doing so changed their depression scores depending on their initial levels of social support. Those having high or medium levels of social support showed higher depression scores; those with low levels of social support did not experience these increases in depression. Our results suggest that individual differences in social resources and people’s choices of how they use the Internet may account for the different outcomes reported in the literature.",
"title": ""
},
{
"docid": "41dfc6647b8937b161c00a1372e986c2",
"text": "Deep unsupervised learning in stochastic recurrent neural networks with many layers of hidden units is a recent breakthrough in neural computation research. These networks build a hierarchy of progressively more complex distributed representations of the sensory data by fitting a hierarchical generative model. In this article we discuss the theoretical foundations of this approach and we review key issues related to training, testing and analysis of deep networks for modeling language and cognitive processing. The classic letter and word perception problem of McClelland and Rumelhart (1981) is used as a tutorial example to illustrate how structured and abstract representations may emerge from deep generative learning. We argue that the focus on deep architectures and generative (rather than discriminative) learning represents a crucial step forward for the connectionist modeling enterprise, because it offers a more plausible model of cortical learning as well as a way to bridge the gap between emergentist connectionist models and structured Bayesian models of cognition.",
"title": ""
},
{
"docid": "a1b7f74caf4daea70c06dbb04646b769",
"text": "Estimating the 6-DoF pose of a camera from a single image relative to a pre-computed 3D point-set is an important task for many computer vision applications. Perspective-n-Point (PnP) solvers are routinely used for camera pose estimation, provided that a good quality set of 2D-3D feature correspondences are known beforehand. However, finding optimal correspondences between 2D key-points and a 3D point-set is non-trivial, especially when only geometric (position) information is known. Existing approaches to the simultaneous pose and correspondence problem use local optimisation, and are therefore unlikely to find the optimal solution without a good pose initialisation, or introduce restrictive assumptions. Since a large proportion of outliers are common for this problem, we instead propose a globally-optimal inlier set cardinality maximisation approach which jointly estimates optimal camera pose and optimal correspondences. Our approach employs branch-and-bound to search the 6D space of camera poses, guaranteeing global optimality without requiring a pose prior. The geometry of SE(3) is used to find novel upper and lower bounds for the number of inliers and local optimisation is integrated to accelerate convergence. The evaluation empirically suppons the optimality proof and shows that the method performs much more robustly than existing approaches, including on a large-scale outdoor data-set.",
"title": ""
},
{
"docid": "9c562763cac968ce38359635d1826ff9",
"text": "This paper proposes a novel multi-layered gesture recognition method with Kinect. We explore the essential linguistic characters of gestures: the components concurrent character and the sequential organization character, in a multi-layered framework, which extracts features from both the segmented semantic units and the whole gesture sequence and then sequentially classifies the motion, location and shape components. In the first layer, an improved principle motion is applied to model the motion component. In the second layer, a particle-based descriptor and a weighted dynamic time warping are proposed for the location component classification. In the last layer, the spatial path warping is further proposed to classify the shape component represented by unclosed shape context. The proposed method can obtain relatively high performance for one-shot learning gesture recognition on the ChaLearn Gesture Dataset comprising more than 50, 000 gesture sequences recorded with Kinect.",
"title": ""
},
{
"docid": "fc6e5b83900d87fd5d6eec6d84d47939",
"text": "In this letter, we propose a low complexity linear precoding scheme for downlink multiuser MIMO precoding systems where there is no limit on the number of multiple antennas employed at both the base station and the users. In the proposed algorithm, we can achieve the precoder in two steps. In the first step, we balance the multiuser interference (MUI) and noise by carrying out a novel channel extension approach. In the second step, we further optimize the system performance assuming parallel SU MIMO channels. Simulation results show that the proposed algorithm can achieve elaborate performance while offering lower computational complexity.",
"title": ""
},
{
"docid": "d50b6e7c130080eba98bf4437c333f16",
"text": "In this paper we provide a brief review of how out-of-sample methods can be used to construct tests that evaluate a time-series model's ability to predict. We focus on the role that parameter estimation plays in constructing asymptotically valid tests of predictive ability. We illustrate why forecasts and forecast errors that depend upon estimated parameters may have statistical properties that differ from those of their population counterparts. We explain how to conduct asymptotic inference, taking due account of dependence on estimated parameters.",
"title": ""
},
{
"docid": "84ce7f45282ac6f17d57ddd6898d8695",
"text": "OBJECTIVE\nThe purpose of this case series was to retrospectively examine records of cases with uterine rupture in pregnancies following myomectomy and to describe the clinical features and pregnancy outcomes.\n\n\nMETHODS\nThis study was conducted as a multicenter case series. The patient databases at 7 tertiary hospitals were queried. Records of patients with a diagnosis of uterine rupture in the pregnancy following myomectomy between January 2012 and December 2014 were retrospectively collected. The uterine rupture cases enrolled in this study were defined as follows: through-and-through uterine rupture or tear of the uterine muscle and serosa, occurrence from 24+0 to 41+6 weeks' gestation, singleton pregnancy, and previous laparoscopic myomectomy (LSM) or laparotomic myomectomy (LTM) status.\n\n\nRESULTS\nFourteen pregnant women experienced uterine rupture during their pregnancy after LSM or LTM. Preterm delivery of less than 34 weeks' gestation occurred in 5 cases, while intrauterine fetal death occurred in 3, and 3 cases had fetal distress. Of the 14 uterine rupture cases, none occurred during labor. All mothers survived and had no sequelae, unlike the perinatal outcomes, although they were receiving blood transfusion or treatment for uterine artery embolization because of uterine atony or massive hemorrhage.\n\n\nCONCLUSION\nIn women of childbearing age who are scheduled to undergo LTM or LSM, the potential risk of uterine rupture on subsequent pregnancy should be explained before surgery. Pregnancy in women after myomectomy should be carefully observed, and they should be adequately counseled during this period.",
"title": ""
},
{
"docid": "8b70670fa152dbd5185e80136983ff12",
"text": "This letter proposes a novel converter topology that interfaces three power ports: a source, a bidirectional storage port, and an isolated load port. The proposed converter is based on a modified version of the isolated half-bridge converter topology that utilizes three basic modes of operation within a constant-frequency switching cycle to provide two independent control variables. This allows tight control over two of the converter ports, while the third port provides the power balance in the system. The switching sequence ensures a clamping path for the energy of the leakage inductance of the transformer at all times. This energy is further utilized to achieve zero-voltage switching for all primary switches for a wide range of source and load conditions. Basic steady-state analysis of the proposed converter is included, together with a suggested structure for feedback control. Key experimental results are presented that validate the converter operation and confirm its ability to achieve tight independent control over two power processing paths. This topology promises significant savings in component count and losses for power-harvesting systems. The proposed topology and control is particularly relevant to battery-backed power systems sourced by solar or fuel cells",
"title": ""
},
{
"docid": "339aa2d53be2cf1215caa142ad5c58d2",
"text": "A true random number generator (TRNG) is an important component in cryptographic systems. Designing a fast and secure TRNG in an FPGA is a challenging task. In this paper we analyze the TRNG designed by Sunar et al. based on XOR of the outputs of many oscillator rings. We propose an enhanced TRNG that does not require post-processing to pass statistical tests and with better randomness characteristics on the output. We have shown by experiment that the frequencies of the equal length oscillator rings in the TRNG are not identical but different due to the placement of the inverters in the FPGA. We have implemented our proposed TRNG in an Altera Cyclone II FPGA. Our implementation has passed the NIST and DIEHARD statistical tests with a throughput of 100 Mbps and with a usage of less than 100 logic elements in the FPGA.",
"title": ""
},
{
"docid": "dcf24411ffed0d5bf2709e005f6db753",
"text": "Dynamic Causal Modelling (DCM) is an approach first introduced for the analysis of functional magnetic resonance imaging (fMRI) to quantify effective connectivity between brain areas. Recently, this framework has been extended and established in the magneto/encephalography (M/EEG) domain. DCM for M/EEG entails the inversion a full spatiotemporal model of evoked responses, over multiple conditions. This model rests on a biophysical and neurobiological generative model for electrophysiological data. A generative model is a prescription of how data are generated. The inversion of a DCM provides conditional densities on the model parameters and, indeed on the model itself. These densities enable one to answer key questions about the underlying system. A DCM comprises two parts; one part describes the dynamics within and among neuronal sources, and the second describes how source dynamics generate data in the sensors, using the lead-field. The parameters of this spatiotemporal model are estimated using a single (iterative) Bayesian procedure. In this paper, we will motivate and describe the current DCM framework. Two examples show how the approach can be applied to M/EEG experiments.",
"title": ""
},
{
"docid": "225e7b608d06d218144853b900d40fd1",
"text": "Deep neural networks require a large amount of labeled training data during supervised learning. However, collecting and labeling so much data might be infeasible in many cases. In this paper, we introduce a deep transfer learning scheme, called selective joint fine-tuning, for improving the performance of deep learning tasks with insufficient training data. In this scheme, a target learning task with insufficient training data is carried out simultaneously with another source learning task with abundant training data. However, the source learning task does not use all existing training data. Our core idea is to identify and use a subset of training images from the original source learning task whose low-level characteristics are similar to those from the target learning task, and jointly fine-tune shared convolutional layers for both tasks. Specifically, we compute descriptors from linear or nonlinear filter bank responses on training images from both tasks, and use such descriptors to search for a desired subset of training samples for the source learning task. Experiments demonstrate that our deep transfer learning scheme achieves state-of-the-art performance on multiple visual classification tasks with insufficient training data for deep learning. Such tasks include Caltech 256, MIT Indoor 67, and fine-grained classification problems (Oxford Flowers 102 and Stanford Dogs 120). In comparison to fine-tuning without a source domain, the proposed method can improve the classification accuracy by 2% - 10% using a single model. Codes and models are available at https://github.com/ZYYSzj/Selective-Joint-Fine-tuning.",
"title": ""
},
{
"docid": "fc8976f3cf91f104b1ebed1698f152b8",
"text": "In this paper we address the problem of predicting SPARQL query performance. We use machine learning techniques to learn SPARQL query performance from previously executed queries. Traditional approaches for estimating SPARQL query cost are based on statistics about the underlying data. However, in many use-cases involving querying Linked Data, statistics about the underlying data are often missing. Our approach does not require any statistics about the underlying RDF data, which makes it ideal for the Linked Data scenario. We show how to model SPARQL queries as feature vectors, and use k-nearest neighbors regression and Support Vector Machine with the nu-SVR kernel to accurately predict SPARQL query execution time.",
"title": ""
},
{
"docid": "0406af2f32a077be7eb2c3e1db8715bd",
"text": "The task of Named Entity Linking is to link entity mentions in the document to their correct entries in a knowledge base and to cluster NIL mentions. Ambiguous, misspelled, and incomplete entity mention names are the main challenges in the linking process. We propose a novel approach that combines two state-of-the-art models — for entity disambiguation and for paraphrase detection — to overcome these challenges. We consider name variations as paraphrases of the same entity mention and adopt a paraphrase model for this task. Our approach utilizes a graph-based disambiguation model based on Personalized Page Rank, and then refines and clusters its output using the paraphrase similarity between entity mention strings. It achieves a competitive performance of 80.5% in B+F clustering score on diagnostic TAC EDL 2014 data.",
"title": ""
},
{
"docid": "8348a89e74707b8e42beb7589e2603b2",
"text": "Skin-lightening agents such as kojic acid, arbutin, ellagic acid, lucinol and 5,5′-dipropylbiphenyl-2,2′-diol are used in ‘anti-ageing’ cosmetics. Cases of allergic contact dermatitis caused by these skin-lightening agents have been reported (1, 2). Vitamin C and its derivatives have also been used in cosmetics as skin-lightening agents for a long time. Vitamin C in topical agents is poorly absorbed through the skin, and is easily oxidized after percutaneous absorption. Recently, ascorbic acid derivatives have been developed with enhanced properties. The ascorbic acid derivative 3-o-ethyl-l-ascorbic acid (CAS no. 86404-048, molecular weight 204.18; Fig. 1), also known as vitamin C ethyl, is chemically stable and is more easily absorbed through the skin than the other vitamin C derivatives. Moreover, 3-o-ethyl-l-ascorbic acid has skinlightening properties. Here, we report a case of allergic contact dermatitis caused by a skin-lightening lotion containing 3-o-ethyl-l-ascorbic acid.",
"title": ""
},
{
"docid": "d52f8428afcef8b7f612f78dd0bf0841",
"text": "We address the problem of activity detection in continuous, untrimmed video streams. This is a difficult task that requires extracting meaningful spatio-temporal features to capture activities, accurately localizing the start and end times of each activity. We introduce a new model, Region Convolutional 3D Network (R-C3D), which encodes the video streams using a three-dimensional fully convolutional network, then generates candidate temporal regions containing activities, and finally classifies selected regions into specific activities. Computation is saved due to the sharing of convolutional features between the proposal and the classification pipelines. The entire model is trained end-to-end with jointly optimized localization and classification losses. R-C3D is faster than existing methods (569 frames per second on a single Titan X Maxwell GPU) and achieves state-of-the-art results on THUMOS’14. We further demonstrate that our model is a general activity detection framework that does not rely on assumptions about particular dataset properties by evaluating our approach on ActivityNet and Charades. Our code is available at http://ai.bu.edu/r-c3d/",
"title": ""
},
{
"docid": "7ebbb9ebc94c72997895b4141de6f67a",
"text": "Purpose – The purpose of this paper is to highlight the potential role that the so-called “toxic triangle” (Padilla et al., 2007) can play in undermining the processes around effectiveness. It is the interaction between leaders, organisational members, and the environmental context in which those interactions occur that has the potential to generate dysfunctional behaviours and processes. The paper seeks to set out a set of issues that would seem to be worthy of further consideration within the Journal and which deal with the relationships between organisational effectiveness and the threats from insiders. Design/methodology/approach – The paper adopts a systems approach to the threats from insiders and the manner in which it impacts on organisation effectiveness. The ultimate goal of the paper is to stimulate further debate and discussion around the issues. Findings – The paper adds to the discussions around effectiveness by highlighting how senior managers can create the conditions in which failure can occur through the erosion of controls, poor decision making, and the creation of a culture that has the potential to generate failure. Within this setting, insiders can serve to trigger a series of failures by their actions and for which the controls in place are either ineffective or have been by-passed as a result of insider knowledge. Research limitations/implications – The issues raised in this paper need to be tested empirically as a means of providing a clear evidence base in support of their relationships with the generation of organisational ineffectiveness. Practical implications – The paper aims to raise awareness and stimulate thinking by practising managers around the role that the “toxic triangle” of issues can play in creating the conditions by which organisations can incubate the potential for crisis. Originality/value – The paper seeks to bring together a disparate body of published work within the context of “organisational effectiveness” and sets out a series of dark characteristics that organisations need to consider if they are to avoid failure. The paper argues the case that effectiveness can be a fragile construct and that the mechanisms that generate failure also need to be actively considered when discussing what effectiveness means in practice.",
"title": ""
},
{
"docid": "971692db73441f7c68a0cc32927ae0b2",
"text": "This letter presents a new lattice-form complex adaptive IIR notch filter to estimate and track the frequency of a complex sinusoid signal. The IIR filter is a cascade of a direct-form all-pole prefilter and an adaptive lattice-form all-zero filter. A complex domain exponentially weighted recursive least square algorithm is adopted instead of the widely used least mean square algorithm to increase the convergence rate. The convergence property of this algorithm is investigated, and an expression for the steady-state asymptotic bias is derived. Analysis results indicate that the frequency estimate for a single complex sinusoid is unbiased. Simulation results demonstrate that the proposed method achieves faster convergence and better tracking performance than all traditional algorithms.",
"title": ""
},
{
"docid": "f005ebceeac067ffae197fee603ed8c7",
"text": "The extended Kalman filter (EKF) is one of the most widely used methods for state estimation with communication and aerospace applications based on its apparent simplicity and tractability (Shi et al., 2002; Bolognani et al., 2003; Wu et al., 2004). However, for an EKF to guarantee satisfactory performance, the system model should be known exactly. Unknown external disturbances may result in the inaccuracy of the state estimate, even cause divergence. This difficulty has been recognized in the literature (Reif & Unbehauen, 1999; Reif et al., 2000), and several schemes have been developed to overcome it. A traditional approach to improve the performance of the filter is the 'covariance setting' technique, where a positive definite estimation error covariance matrix is chosen by the filter designer (Einicke et al., 2003; Bolognani et al., 2003). As it is difficult to manually tune the covariance matrix for dynamic system, adaptive extended Kalman filter (AEKF) approaches for online estimation of the covariance matrix have been adopted (Kim & ILTIS, 2004; Yu et al., 2005; Ahn & Won, 2006). However, only in some special cases, the optimal estimation of the covariance matrix can be obtained. And inaccurate approximation of the covariance matrix may blur the state estimate. Recently, the robust H∞ filter has received considerable attention (Theodor et al., 1994; Shen & Deng, 1999; Zhang et al., 2005; Tseng & Chen, 2001). The robust filters take different forms depending on what kind of disturbances are accounted for, while the general performance criterion of the filters is to guarantee a bounded energy gain from the worst possible disturbance to the estimation error. Although the robust extended Kalman filter (REKF) has been deeply investigated (Einicke & White, 1999; Reif et al., 1999; Seo et al., 2006), how to prescribe the level of disturbances attenuation is still an open problem. In general, the selection of the attenuation level can be seen as a tradeoff between the optimality and the robustness. In other words, the robustness of the REKF is obtained at the expense of optimality. This chapter reviews the adaptive robust extended Kalman filter (AREKF), an effective algorithm which will remain stable in the presence of unknown disturbances, and yield accurate estimates in the absence of disturbances (Xiong et al., 2008). The key idea of the AREKF is to design the estimator based on the stability analysis, and determine whether the error covariance matrix should be reset according to the magnitude of the innovation. O pe n A cc es s D at ab as e w w w .in te ch w eb .o rg",
"title": ""
},
{
"docid": "a5391753b4ac2b7cab9f58f28348ab8d",
"text": "We present a temporal map of key processes that occur during decision making, which consists of three stages: 1) formation of preferences among options, 2) selection and execution of an action, and 3) experience or evaluation of an outcome. This framework can be used to integrate findings of traditional choice psychology, neuropsychology, brain lesion studies, and functional neuroimaging. Decision making is distributed across various brain centers, which are differentially active across these stages of decision making. This approach can be used to follow developmental trajectories of the different stages of decision making and to identify unique deficits associated with distinct psychiatric disorders.",
"title": ""
}
] |
scidocsrr
|
ae3dbdad428b7cd12dadceef2f3ef261
|
Linguistic Reflections of Student Engagement in Massive Open Online Courses
|
[
{
"docid": "a7eff25c60f759f15b41c85ac5e3624f",
"text": "Connectivist massive open online courses (cMOOCs) represent an important new pedagogical approach ideally suited to the network age. However, little is known about how the learning experience afforded by cMOOCs is suited to learners with different skills, motivations, and dispositions. In this study, semi-structured interviews were conducted with 29 participants on the Change11 cMOOC. These accounts were analyzed to determine patterns of engagement and factors affecting engagement in the course. Three distinct types of engagement were recognized – active participation, passive participation, and lurking. In addition, a number of key factors that mediated engagement were identified including confidence, prior experience, and motivation. This study adds to the overall understanding of learning in cMOOCs and provides additional empirical data to a nascent research field. The findings provide an insight into how the learning experience afforded by cMOOCs suits the diverse range of learners that may coexist within a cMOOC. These insights can be used by designers of future cMOOCs to tailor the learning experience to suit the diverse range of learners that may choose to learn in this way.",
"title": ""
},
{
"docid": "2fbc75f848a0a3ae8228b5c6cbe76ec4",
"text": "The authors summarize 35 years of empirical research on goal-setting theory. They describe the core findings of the theory, the mechanisms by which goals operate, moderators of goal effects, the relation of goals and satisfaction, and the role of goals as mediators of incentives. The external validity and practical significance of goal-setting theory are explained, and new directions in goal-setting research are discussed. The relationships of goal setting to other theories are described as are the theory's limitations.",
"title": ""
}
] |
[
{
"docid": "4dca240e5073db9f09e6fdc3b022a29a",
"text": "We describe an evolutionary approach to the control problem of bipedal walking. Using a full rigid-body simulation of a biped, it was possible to evolve recurrent neural networks that controlled stable straight-line walking on a planar surface. No proprioceptive information was necessary to achieve this task. Furthermore, simple sensory input to locate a sound source was integrated to achieve directional walking. To our knowledge, this is the first work that demonstrates the application of evolutionary optimization to three-dimensional physically simulated biped locomotion.",
"title": ""
},
{
"docid": "cf0b49aabe042b93be0c382ad69e4093",
"text": "This paper shows a technique to enhance the resolution of a frequency modulated continuous wave (FMCW) radar system. The range resolution of an FMCW radar system is limited by the bandwidth of the transmitted signal. By using high resolution methods such as the Matrix Pencil Method (MPM) it is possible to enhance the resolution. In this paper a new method to obtain a better resolution for FMCW radar systems is used. This new method is based on the MPM and is enhanced to require less computing power. To evaluate this new technique, simulations and measurements are used. The result shows that this new method is able to improve the performance of FMCW radar systems.",
"title": ""
},
{
"docid": "a5b147f5b3da39fed9ed11026f5974a2",
"text": "The aperture coupled patch geometry has been extended to dual polarization by several authors. In Tsao et al. (1988) a cross-shaped slot is fed by a balanced feed network which allows for a high degree of isolation. However, the balanced feed calls for an air-bridge which complicates both the design process and the manufacture. An alleviation to this problem is to separate the two channels onto two different substrate layers separated by the ground plane. In this case the disadvantage is increased cost. Another solution with a single layer feed is presented in Brachat and Baracco (1995) where one channel feeds a single slot centered under the patch whereas the other channel feeds two separate slots placed near the edges of the patch. Our experience is that with this geometry it is hard to achieve a well-matched broadband design since the slots near the edge of the patch present very low coupling. All the above geometries maintain symmetry with respect to the two principal planes if we ignore the small spurious coupling from feed lines in the vicinity of the aperture. We propose to reduce the symmetry to only one principal plane which turns out to be sufficient for high isolation and low cross-polarization. The advantage is that only one layer of feed network is needed, with no air-bridges required. In addition the aperture position is centered under the patch. An important application for dual polarized antennas is base station antennas. We have therefore designed and measured an element for the PCS band (1.85-1.99 GHz).",
"title": ""
},
{
"docid": "db7a4ab8d233119806e7edf2a34fffd1",
"text": "Recent research has shown that the performance of search personalization depends on the richness of user profiles which normally represent the user’s topical interests. In this paper, we propose a new embedding approach to learning user profiles, where users are embedded on a topical interest space. We then directly utilize the user profiles for search personalization. Experiments on query logs from a major commercial web search engine demonstrate that our embedding approach improves the performance of the search engine and also achieves better search performance than other strong baselines.",
"title": ""
},
{
"docid": "d9a87325efbd29520c37ec46531c6062",
"text": "Predicting the risk of potential diseases from Electronic Health Records (EHR) has attracted considerable attention in recent years, especially with the development of deep learning techniques. Compared with traditional machine learning models, deep learning based approaches achieve superior performance on risk prediction task. However, none of existing work explicitly takes prior medical knowledge (such as the relationships between diseases and corresponding risk factors) into account. In medical domain, knowledge is usually represented by discrete and arbitrary rules. Thus, how to integrate such medical rules into existing risk prediction models to improve the performance is a challenge. To tackle this challenge, we propose a novel and general framework called PRIME for risk prediction task, which can successfully incorporate discrete prior medical knowledge into all of the state-of-the-art predictive models using posterior regularization technique. Different from traditional posterior regularization, we do not need to manually set a bound for each piece of prior medical knowledge when modeling desired distribution of the target disease on patients. Moreover, the proposed PRIME can automatically learn the importance of different prior knowledge with a log-linear model.Experimental results on three real medical datasets demonstrate the effectiveness of the proposed framework for the task of risk prediction",
"title": ""
},
{
"docid": "719c1b6ad0d945b68b34abceb1ed8e3b",
"text": "This editorial provides a behavioral science view on gamification and health behavior change, describes its principles and mechanisms, and reviews some of the evidence for its efficacy. Furthermore, this editorial explores the relation between gamification and behavior change frameworks used in the health sciences and shows how gamification principles are closely related to principles that have been proven to work in health behavior change technology. Finally, this editorial provides criteria that can be used to assess when gamification provides a potentially promising framework for digital health interventions.",
"title": ""
},
{
"docid": "927f2c68d709c7418ad76fd9d81b18c4",
"text": "With the growing deployment of host and network intrusion detection systems, managing reports from these systems becomes critically important. We present a probabilistic approach to alert correlation, extending ideas from multisensor data fusion. Features used for alert correlation are based on alert content that anticipates evolving IETF standards. The probabilistic approach provides a unified mathematical framework for correlating alerts that match closely but not perfectly, where the minimum degree of match required to fuse alerts is controlled by a single configurable parameter. Only features in common are considered in the fusion algorithm. For each feature we define an appropriate similarity function. The overall similarity is weighted by a specifiable expectation of similarity. In addition, a minimum similarity may be specified for some or all features. Features in this set must match at least as well as the minimum similarity specification in order to combine alerts, regardless of the goodness of match on the feature set as a whole. Our approach correlates attacks over time, correlates reports from heterogeneous sensors, and correlates multiple attack steps.",
"title": ""
},
{
"docid": "121f2bfd854b79a14e8171d875ba951f",
"text": "Arising from many applications at the intersection of decision-making and machine learning, Marginal Maximum A Posteriori (Marginal MAP) problems unify the two main classes of inference, namely maximization (optimization) and marginal inference (counting), and are believed to have higher complexity than both of them. We propose XOR_MMAP, a novel approach to solve the Marginal MAP problem, which represents the intractable counting subproblem with queries to NP oracles, subject to additional parity constraints. XOR_MMAP provides a constant factor approximation to the Marginal MAP problem, by encoding it as a single optimization in a polynomial size of the original problem. We evaluate our approach in several machine learning and decision-making applications, and show that our approach outperforms several state-of-the-art Marginal MAP solvers.",
"title": ""
},
{
"docid": "3bae971fce094c3ff6c34595bac60ef2",
"text": "In this work, we present a 3D 128Gb 2bit/cell vertical-NAND (V-NAND) Flash product. The use of barrier-engineered materials and gate all-around structure in the 3D V-NAND cell exhibits advantages over 1xnm planar NAND, such as small Vth shift due to small cell coupling and narrow natural Vth distribution. Also, a negative counter-pulse scheme realizes a tightly programmed cell distribution. In order to reduce the effect of a large WL coupling, a glitch-canceling discharge scheme and a pre-offset control scheme is implemented. Furthermore, an external high-voltage supply scheme along with the proper protection scheme for a high-voltage failure is used to achieve low power consumption. The chip accomplishes 50MB/s write throughput with 3K endurance for typical embedded applications. Also, extended endurance of 35K is achieved with 36MB/s of write throughput for data center and enterprise SSD applications. And 2nd generation of 3D V-NAND opens up a whole new world at SSD endurance, density and battery life for portables.",
"title": ""
},
{
"docid": "7a4bb28ae7c175a018b278653e32c3a1",
"text": "Additive manufacturing (AM) alias 3D printing translates computer-aided design (CAD) virtual 3D models into physical objects. By digital slicing of CAD, 3D scan, or tomography data, AM builds objects layer by layer without the need for molds or machining. AM enables decentralized fabrication of customized objects on demand by exploiting digital information storage and retrieval via the Internet. The ongoing transition from rapid prototyping to rapid manufacturing prompts new challenges for mechanical engineers and materials scientists alike. Because polymers are by far the most utilized class of materials for AM, this Review focuses on polymer processing and the development of polymers and advanced polymer systems specifically for AM. AM techniques covered include vat photopolymerization (stereolithography), powder bed fusion (SLS), material and binder jetting (inkjet and aerosol 3D printing), sheet lamination (LOM), extrusion (FDM, 3D dispensing, 3D fiber deposition, and 3D plotting), and 3D bioprinting. The range of polymers used in AM encompasses thermoplastics, thermosets, elastomers, hydrogels, functional polymers, polymer blends, composites, and biological systems. Aspects of polymer design, additives, and processing parameters as they relate to enhancing build speed and improving accuracy, functionality, surface finish, stability, mechanical properties, and porosity are addressed. Selected applications demonstrate how polymer-based AM is being exploited in lightweight engineering, architecture, food processing, optics, energy technology, dentistry, drug delivery, and personalized medicine. Unparalleled by metals and ceramics, polymer-based AM plays a key role in the emerging AM of advanced multifunctional and multimaterial systems including living biological systems as well as life-like synthetic systems.",
"title": ""
},
{
"docid": "f2a1e5d8e99977c53de9f2a82576db69",
"text": "During the last years, several masking schemes for AES have been proposed to secure hardware implementations against DPA attacks. In order to investigate the effectiveness of these countermeasures in practice, we have designed and manufactured an ASIC. The chip features an unmasked and two masked AES-128 encryption engines that can be attacked independently. In addition to conventional DPA attacks on the output of registers, we have also mounted attacks on the output of logic gates. Based on simulations and physical measurements we show that the unmasked and masked implementations leak side-channel information due to glitches at the output of logic gates. It turns out that masking the AES S-Boxes does not prevent DPA attacks, if glitches occur in the circuit.",
"title": ""
},
{
"docid": "d6d07f50778ba3d99f00938b69fe0081",
"text": "The use of metal casing is attractive to achieve robustness of modern slim tablet devices. The metal casing includes the metal back cover and the metal frame around the edges thereof. For such metal-casing tablet devices, the frame antenna that uses a part of the metal frame as an antenna's radiator is promising to achieve wide bandwidths for mobile communications. In this paper, the frame antenna based on the simple half-loop antenna structure to cover the long-term evolution 746-960 and 1710-2690 MHz bands is presented. The half-loop structure for the frame antenna is easy for manufacturing and increases the robustness of the metal casing. The dual-wideband operation of the half-loop frame antenna is obtained by using an elevated feed network supported by a thin feed substrate. The measured antenna efficiencies are, respectively, 45%-69% and 60%-83% in the low and high bands. By selecting different feed circuits, the antenna's low band can also be shifted from 746-960 MHz to lower frequencies such as 698-840 MHz, with the antenna's high-band coverage very slightly varied. The working principle of the antenna with the elevated feed network is discussed. The antenna is also fabricated and tested, and experimental results are presented.",
"title": ""
},
{
"docid": "f2fed9066ac945ae517aef8ec5bb5c61",
"text": "BACKGROUND\nThe aging of society is a global trend, and care of older adults with dementia is an urgent challenge. As dementia progresses, patients exhibit negative emotions, memory disorders, sleep disorders, and agitated behavior. Agitated behavior is one of the most difficult problems for family caregivers and healthcare providers to handle when caring for older adults with dementia.\n\n\nPURPOSE\nThe aim of this study was to investigate the effectiveness of white noise in improving agitated behavior, mental status, and activities of daily living in older adults with dementia.\n\n\nMETHODS\nAn experimental research design was used to study elderly participants two times (pretest and posttest). Six dementia care centers in central and southern Taiwan were targeted to recruit participants. There were 63 participants: 28 were in the experimental group, and 35 were in the comparison group. Experimental group participants received 20 minutes of white noise consisting of ocean, rain, wind, and running water sounds between 4 and 5 P.M. daily over a period of 4 weeks. The comparison group received routine care. Questionnaires were completed, and observations of agitated behaviors were collected before and after the intervention.\n\n\nRESULTS\nAgitated behavior in the experimental group improved significantly between pretest and posttest. Furthermore, posttest scores on the Mini-Mental Status Examination and Barthel Index were slightly better for this group than at pretest. However, the experimental group registered no significant difference in mental status or activities of daily living at posttest. For the comparison group, agitated behavior was unchanged between pretest and posttest.\n\n\nCONCLUSIONS\nThe results of this study support white noise as a simple, convenient, and noninvasive intervention that improves agitated behavior in older adults with dementia. These results may provide a reference for related healthcare providers, educators, and administrators who care for older adults with dementia.",
"title": ""
},
{
"docid": "e3d0a58ddcffabb26d5e059d3ae6b370",
"text": "HCI ( Human Computer Interaction ) studies the ways humans use digital or computational machines, systems or infrastructures. The study of the barriers encountered when users interact with the various interfaces is critical to improving their use, as well as their experience. Access and information processing is carried out today from multiple devices (computers, tablets, phones... ) which is essential to maintain a multichannel consistency. This complexity increases with environments in which we do not have much experience as users, where interaction with the machine is a challenge even in phases of research: virtual reality environments, augmented reality, or viewing and handling of large amounts of data, where the simplicity and ease of use are critical.",
"title": ""
},
{
"docid": "e8c9067f13c9a57be46823425deb783b",
"text": "In order to utilize the tremendous computing power of graphics hardware and to automatically adapt to the fast and frequent changes in its architecture and performance characteristics, this paper implements an automatic tuning system to generate high-performance matrix-multiplication implementation on graphics hardware. The automatic tuning system uses a parameterized code generator to generate multiple versions of matrix multiplication, whose performances are empirically evaluated by actual execution on the target platform. An ad-hoc search engine is employed to search over the implementation space for the version that yields the best performance. In contrast to similar systems on CPUs, which utilize cache blocking, register tiling, instruction scheduling tuning strategies, this paper identifies and exploits several tuning strategies that are unique for graphics hardware. These tuning strategies include optimizing for multiple-render-targets, SIMD instructions with data packing, overcoming limitations on instruction count and dynamic branch instruction. The generated implementations have comparable performance with expert manually tuned version in spite of the significant overhead incurred due to the use of the high-level BrookGPU language.",
"title": ""
},
{
"docid": "01f8616cafa72c473e33f149faff044a",
"text": "We show that the e-commerce domain can provide all the right ingredients for successful data mining and claim that it is a killer domain for data mining. We describe an integrated architecture, based on our experience at Blue Martini Software, for supporting this integration. The architecture can dramatically reduce the pre-processing, cleaning, and data understanding effort often documented to take 80% of the time in knowledge discovery projects. We emphasize the need for data collection at the application server layer (not the web server) in order to support logging of data and metadata that is essential to the discovery process. We describe the data transformation bridges required from the transaction processing systems and customer event streams (e.g., clickstreams) to the data warehouse. We detail the mining workbench, which needs to provide multiple views of the data through reporting, data mining algorithms, visualization, and OLAP. We conclude with a set of challenges.",
"title": ""
},
{
"docid": "fe41de4091692d1af643bf144ac1dcaa",
"text": "Introduction. This research addresses a primary issue that involves motivating academics to share knowledge. Adapting the theory of reasoned action, this study examines the role of motivation that consists of intrinsic motivators (commitment; enjoyment in helping others) and extrinsic motivators (reputation; organizational rewards) to determine and explain the behaviour of Malaysian academics in sharing knowledge. Method. A self-administered questionnaire was distributed using a non-probability sampling technique. A total of 373 completed responses were collected with a total response rate of 38.2%. Analysis. The partial least squares analysis was used to analyse the data. Results. The results indicated that all five of the hypotheses were supported. Analysis of data from the five higher learning institutions in Malaysia found that commitment and enjoyment in helping others (i.e., intrinsic motivators) and reputation and organizational rewards (i.e., extrinsic motivators) have a positive and significant relationship with attitude towards knowledge-sharing. In addition, the findings revealed that intrinsic motivators are more influential than extrinsic motivators. This suggests that academics are influenced more by intrinsic motivators than by extrinsic motivators. Conclusions. The findings provided an indication of the determinants in enhancing knowledgesharing intention among academics in higher education institutions through extrinsic and intrinsic motivators.",
"title": ""
},
{
"docid": "2da67ed8951caf3388ca952465d61b37",
"text": "As a significant supplier of labour migrants, Southeast Asia presents itself as an important site for the study of children in transnational families who are growing up separated from at least one migrant parent and sometimes cared for by 'other mothers'. Through the often-neglected voices of left-behind children, we investigate the impact of parental migration and the resulting reconfiguration of care arrangements on the subjective well-being of migrants' children in two Southeast Asian countries, Indonesia and the Philippines. We theorise the child's position in the transnational family nexus through the framework of the 'care triangle', representing interactions between three subject groups- 'left-behind' children, non-migrant parents/other carers; and migrant parent(s). Using both quantitative (from 1010 households) and qualitative (from 32 children) data from a study of child health and migrant parents in Southeast Asia, we examine relationships within the caring spaces both of home and of transnational spaces. The interrogation of different dimensions of care reveals the importance of contact with parents (both migrant and nonmigrant) to subjective child well-being, and the diversity of experiences and intimacies among children in the two study countries.",
"title": ""
},
{
"docid": "db0b55cd4064799b9d7c52c6f3da6aac",
"text": "Motion blur from camera shake is a major problem in videos captured by hand-held devices. Unlike single-image deblurring, video-based approaches can take advantage of the abundant information that exists across neighboring frames. As a result the best performing methods rely on aligning nearby frames. However, aligning images is a computationally expensive and fragile procedure, and methods that aggregate information must therefore be able to identify which regions have been accurately aligned and which have not, a task which requires high level scene understanding. In this work, we introduce a deep learning solution to video deblurring, where a CNN is trained end-toend to learn how to accumulate information across frames. To train this network, we collected a dataset of real videos recorded with a high framerate camera, which we use to generate synthetic motion blur for supervision. We show that the features learned from this dataset extend to deblurring motion blur that arises due to camera shake in a wide range of videos, and compare the quality of results to a number of other baselines.",
"title": ""
},
{
"docid": "4c21f108d05132ce00fe6d028c17c7ab",
"text": "In this work, a new predictive phase-locked loop (PLL) for encoderless control of a permanent-magnet synchronous generator (PMSG) in a variable-speed wind energy conversion system (WECS) is presented. The idea of the predictive PLL is derived from the direct-model predictive control (DMPC) principle. The predictive PLL uses a limited (discretized) number of rotor-angles for predicting/estimating the back-electromotive-force (BEMF) of the PMSG. subsequently, that predicted angle, which optimizes a pre-defined quality function, is chosen to become the best rotor-angle/position. Accordingly, the fixed gain proportional integral (FGPI) regulator that is normally used in PLLs is eliminated. The performance of the predictive PLL is validated experimentally and compared with that of the traditional one under various operating scenarios and under variations of the PMSG parameters.",
"title": ""
}
] |
scidocsrr
|
c0159657811c724b694af1cb60a2c215
|
How to increase and sustain positive emotion : The effects of expressing gratitude and visualizing best possible selves
|
[
{
"docid": "c03265e4a7d7cc14e6799c358a4af95a",
"text": "Three studies considered the consequences of writing, talking, and thinking about significant events. In Studies 1 and 2, students wrote, talked into a tape recorder, or thought privately about their worst (N = 96) or happiest experience (N = 111) for 15 min each during 3 consecutive days. In Study 3 (N = 112), students wrote or thought about their happiest day; half systematically analyzed, and half repetitively replayed this day. Well-being and health measures were administered before each study's manipulation and 4 weeks after. As predicted, in Study 1, participants who processed a negative experience through writing or talking reported improved life satisfaction and enhanced mental and physical health relative to those who thought about it. The reverse effect for life satisfaction was observed in Study 2, which focused on positive experiences. Study 3 examined possible mechanisms underlying these effects. Students who wrote about their happiest moments--especially when analyzing them--experienced reduced well-being and physical health relative to those who replayed these moments. Results are discussed in light of current understanding of the effects of processing life events.",
"title": ""
},
{
"docid": "f515695b3d404d29a12a5e8e58a91fc0",
"text": "One area of positive psychology analyzes subjective well-being (SWB), people's cognitive and affective evaluations of their lives. Progress has been made in understanding the components of SWB, the importance of adaptation and goals to feelings of well-being, the temperament underpinnings of SWB, and the cultural influences on well-being. Representative selection of respondents, naturalistic experience sampling measures, and other methodological refinements are now used to study SWB and could be used to produce national indicators of happiness.",
"title": ""
}
] |
[
{
"docid": "88804f285f4d608b81a1cd741dbf2b7e",
"text": "Predicting ad click-through rates (CTR) is a massive-scale learning problem that is central to the multi-billion dollar online advertising industry. We present a selection of case studies and topics drawn from recent experiments in the setting of a deployed CTR prediction system. These include improvements in the context of traditional supervised learning based on an FTRL-Proximal online learning algorithm (which has excellent sparsity and convergence properties) and the use of per-coordinate learning rates.\n We also explore some of the challenges that arise in a real-world system that may appear at first to be outside the domain of traditional machine learning research. These include useful tricks for memory savings, methods for assessing and visualizing performance, practical methods for providing confidence estimates for predicted probabilities, calibration methods, and methods for automated management of features. Finally, we also detail several directions that did not turn out to be beneficial for us, despite promising results elsewhere in the literature. The goal of this paper is to highlight the close relationship between theoretical advances and practical engineering in this industrial setting, and to show the depth of challenges that appear when applying traditional machine learning methods in a complex dynamic system.",
"title": ""
},
{
"docid": "9256277615e0016992d007b29a2bcf21",
"text": "Three experiments explored how words are learned from hearing them across contexts. Adults watched 40-s videotaped vignettes of parents uttering target words (in sentences) to their infants. Videos were muted except for a beep or nonsense word inserted where each \"mystery word\" was uttered. Participants were to identify the word. Exp. 1 demonstrated that most (90%) of these natural learning instances are quite uninformative, whereas a small minority (7%) are highly informative, as indexed by participants' identification accuracy. Preschoolers showed similar information sensitivity in a shorter experimental version. Two further experiments explored how cross-situational information helps, by manipulating the serial ordering of highly informative vignettes in five contexts. Response patterns revealed a learning procedure in which only a single meaning is hypothesized and retained across learning instances, unless disconfirmed. Neither alternative hypothesized meanings nor details of past learning situations were retained. These findings challenge current models of cross-situational learning which assert that multiple meaning hypotheses are stored and cross-tabulated via statistical procedures. Learners appear to use a one-trial \"fast-mapping\" procedure, even under conditions of referential uncertainty.",
"title": ""
},
{
"docid": "7c27bfa849ba0bd49f9ddaec9beb19b5",
"text": "Very High Spatial Resolution (VHSR) large-scale SAR image databases are still an unresolved issue in the Remote Sensing field. In this work, we propose such a dataset and use it to explore patch-based classification in urban and periurban areas, considering 7 distinct semantic classes. In this context, we investigate the accuracy of large CNN classification models and pre-trained networks for SAR imaging systems. Furthermore, we propose a Generative Adversarial Network (GAN) for SAR image generation and test, whether the synthetic data can actually improve classification accuracy.",
"title": ""
},
{
"docid": "205a38ac9f2df57a33481d36576e7d54",
"text": "Business process improvement initiatives typically employ various process analysis techniques, including evidence-based analysis techniques such as process mining, to identify new ways to streamline current business processes. While plenty of process mining techniques have been proposed to extract insights about the way in which activities within processes are conducted, techniques to understand resource behaviour are limited. At the same time, an understanding of resources behaviour is critical to enable intelligent and effective resource management an important factor which can significantly impact overall process performance. The presence of detailed records kept by today’s organisations, including data about who, how, what, and when various activities were carried out by resources, open up the possibility for real behaviours of resources to be studied. This paper proposes an approach to analyse one aspect of resource behaviour: the manner in which a resource prioritises his/her work. The proposed approach has been formalised, implemented, and evaluated using a number of synthetic and real datasets. © 2017 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "c625221e79bdc508c7c772f5be0458a1",
"text": "Word embeddings that can capture semantic and syntactic information from contexts have been extensively used for various natural language processing tasks. However, existing methods for learning contextbased word embeddings typically fail to capture sufficient sentiment information. This may result in words with similar vector representations having an opposite sentiment polarity (e.g., good and bad), thus degrading sentiment analysis performance. Therefore, this study proposes a word vector refinement model that can be applied to any pre-trained word vectors (e.g., Word2vec and GloVe). The refinement model is based on adjusting the vector representations of words such that they can be closer to both semantically and sentimentally similar words and further away from sentimentally dissimilar words. Experimental results show that the proposed method can improve conventional word embeddings and outperform previously proposed sentiment embeddings for both binary and fine-grained classification on Stanford Sentiment Treebank (SST).",
"title": ""
},
{
"docid": "1e176f66a29b6bd3dfce649da1a4db9d",
"text": "In just a few years, crowdsourcing markets like Mechanical Turk have become the dominant mechanism for for building \"gold standard\" datasets in areas of computer science ranging from natural language processing to audio transcription. The assumption behind this sea change - an assumption that is central to the approaches taken in hundreds of research projects - is that crowdsourced markets can accurately replicate the judgments of the general population for knowledge-oriented tasks. Focusing on the important domain of semantic relatedness algorithms and leveraging Clark's theory of common ground as a framework, we demonstrate that this assumption can be highly problematic. Using 7,921 semantic relatedness judgements from 72 scholars and 39 crowdworkers, we show that crowdworkers on Mechanical Turk produce significantly different semantic relatedness gold standard judgements than people from other communities. We also show that algorithms that perform well against Mechanical Turk gold standard datasets do significantly worse when evaluated against other communities' gold standards. Our results call into question the broad use of Mechanical Turk for the development of gold standard datasets and demonstrate the importance of understanding these datasets from a human-centered point-of-view. More generally, our findings problematize the notion that a universal gold standard dataset exists for all knowledge tasks.",
"title": ""
},
{
"docid": "e3bb490de9489a0c02f023d25f0a94d7",
"text": "During the past two decades, self-efficacy has emerged as a highly effective predictor of students' motivation and learning. As a performance-based measure of perceived capability, self-efficacy differs conceptually and psychometrically from related motivational constructs, such as outcome expectations, self-concept, or locus of control. Researchers have succeeded in verifying its discriminant validity as well as convergent validity in predicting common motivational outcomes, such as students' activity choices, effort, persistence, and emotional reactions. Self-efficacy beliefs have been found to be sensitive to subtle changes in students' performance context, to interact with self-regulated learning processes, and to mediate students' academic achievement. Copyright 2000 Academic Press.",
"title": ""
},
{
"docid": "bef64076bf62d9e8fbb6fbaf5534fdc6",
"text": "This paper presents an application of PageRank, a random-walk model originally devised for ranking Web search results, to ranking WordNet synsets in terms of how strongly they possess a given semantic property. The semantic properties we use for exemplifying the approach are positivity and negativity, two properties of central importance in sentiment analysis. The idea derives from the observation that WordNet may be seen as a graph in which synsets are connected through the binary relation “a term belonging to synset sk occurs in the gloss of synset si”, and on the hypothesis that this relation may be viewed as a transmitter of such semantic properties. The data for this relation can be obtained from eXtended WordNet, a publicly available sensedisambiguated version of WordNet. We argue that this relation is structurally akin to the relation between hyperlinked Web pages, and thus lends itself to PageRank analysis. We report experimental results supporting our intuitions.",
"title": ""
},
{
"docid": "574838d3fecf8e8dfc4254b41d446ad2",
"text": "This paper proposes a new procedure to detect Glottal Closure and Opening Instants (GCIs and GOIs) directly from speech waveforms. The procedure is divided into two successive steps. First a mean-based signal is computed, and intervals where speech events are expected to occur are extracted from it. Secondly, at each interval a precise position of the speech event is assigned by locating a discontinuity in the Linear Prediction residual. The proposed method is compared to the DYPSA algorithm on the CMU ARCTIC database. A significant improvement as well as a better noise robustness are reported. Besides, results of GOI identification accuracy are promising for the glottal source characterization.",
"title": ""
},
{
"docid": "cee0d7bac437a3a98fa7aba31969341b",
"text": "Throughout history, the educational process used different educational technologies which did not significantly alter the manner of learning in the classroom. By implementing e-learning technology to the educational process, new and completely different innovative learning scenarios are made possible, including more active student involvement outside the traditional classroom. The quality of the realization of the educational objective in any learning environment depends primarily on the teacher who creates the educational process, mentors and acts as a moderator in the communication within the educational process, but also relies on the student who acquires the educational content. The traditional classroom learning and e-learning environment enable different manners of adopting educational content, and this paper reveals their key characteristics with the purpose of better use of e-learning technology in the educational process.",
"title": ""
},
{
"docid": "e0c87b957faf9c14ce96ed09f968e8ee",
"text": "It is well-known that the power factor of Vernier machines is small compared to permanent magnet machines. However, the power factor equations already derived show a huge deviation to the finite-element analysis (FEA) when used for Vernier machines with concentrated windings. Therefore, this paper develops an analytic model to calculate the power factor of Vernier machines with concentrated windings and different numbers of flux modulating poles (FMPs) and stator slots. The established model bases on the winding function theory in combination with a magnetic equivalent circuit. Consequently, equations for the q-inductance and for the no-load back-EMF of the machine are derived, thus allowing the calculation of the power factor. Thereby, the model considers stator leakage effects, as they are crucial for a good power factor estimation. Comparing the results of the Vernier machine to those of a pm machine explains the decreased power factor of Vernier machines. In addition, a FEA confirms the results of the derived model.",
"title": ""
},
{
"docid": "1d724b07c232098e2a5e5af2bb1e7c83",
"text": "[2] Brown SJ, McLean WH. One remarkable molecule: filaggrin. J Invest Dermatol 2012;132:751–62. [3] Sandilands A, Terron-Kwiatkowski A, Hull PR, O’Regan GM, Clayton TH, Watson RM, et al. Comprehensive analysis of the gene encoding filaggrin uncovers prevalent and rare mutations in ichthyosis vulgaris and atopic eczema. Nat Genet 2007;39:650–4. [4] Margolis DJ, Apter AJ, Gupta J, Hoffstad O, Papadopoulos M, Campbell LE, et al. The persistence of atopic dermatitis and Filaggrin mutations in a US longitudinal cohort. J Allergy Clin Immunol 2012;130(4):912–7. [5] Smith FJ, Irvine AD, Terron-Kwiatkowski A, Sandilands A, Campbell LE, Zhao Y, et al. Loss-of-function mutations in the gene encoding filaggrin cause ichthyosis vulgaris. Nat Genet 2006;38:337–42. [6] Paternoster L, Standl M, Chen CM, Ramasamy A, Bonnelykke K, Duijts L, et al. Meta-analysis of genome-wide association studies identifies three new risk Table 1 Reliability and validity comparisons for FLG null mutations as assayed by TaqMan and beadchip methods.",
"title": ""
},
{
"docid": "7843fb4bbf2e94a30c18b359076899ab",
"text": "In the area of magnetic resonance imaging (MRI), an extensive range of non-linear reconstruction algorithms has been proposed which can be used with general Fourier subsampling patterns. However, the design of these subsampling patterns has typically been considered in isolation from the reconstruction rule and the anatomy under consideration. In this paper, we propose a learning-based framework for optimizing MRI subsampling patterns for a specific reconstruction rule and anatomy, considering both the noiseless and noisy settings. Our learning algorithm has access to a representative set of training signals, and searches for a sampling pattern that performs well on average for the signals in this set. We present a novel parameter-free greedy mask selection method and show it to be effective for a variety of reconstruction rules and performance metrics. Moreover, we also support our numerical findings by providing a rigorous justification of our framework via statistical learning theory.",
"title": ""
},
{
"docid": "47199e959f3b10c6fa6b4b8c68434b94",
"text": "The everyday use of smartphones with high quality built-in cameras has lead to an increase in museum visitors' use of these devices to document and share their museum experiences. In this paper, we investigate how one particular photo sharing application, Instagram, is used to communicate visitors' experiences while visiting a museum of natural history. Based on an analysis of 222 instagrams created in the museum, as well as 14 interviews with the visitors who created them, we unpack the compositional resources and concerns contributing to the creation of instagrams in this particular context. By re-categorizing and re-configuring the museum environment, instagrammers work to construct their own narratives from their visits. These findings are then used to discuss what emerging multimedia practices imply for the visitors' engagement with and documentation of museum exhibits. Drawing upon these practices, we discuss the connection between online social media dialogue and the museum site.",
"title": ""
},
{
"docid": "4dc38ae50a2c806321020de4a140ed5f",
"text": "Transcranial direct current stimulation (tDCS) is a promising technology to enhance cognitive and physical performance. One of the major areas of interest is the enhancement of memory function in healthy individuals. The early arrival of tDCS on the market for lifestyle uses and cognitive enhancement purposes lead to the voicing of some important ethical concerns, especially because, to date, there are no official guidelines or evaluation procedures to tackle these issues. The aim of this article is to review ethical issues related to uses of tDCS for memory enhancement found in the ethics and neuroscience literature and to evaluate how realistic and scientifically well-founded these concerns are? In order to evaluate how plausible or speculative each issue is, we applied the methodological framework described by Racine et al. (2014) for \"informed and reflective\" speculation in bioethics. This framework could be succinctly presented as requiring: (1) the explicit acknowledgment of factual assumptions and identification of the value attributed to them; (2) the validation of these assumptions with interdisciplinary literature; and (3) the adoption of a broad perspective to support more comprehensive reflection on normative issues. We identified four major considerations associated with the development of tDCS for memory enhancement: safety, autonomy, justice and authenticity. In order to assess the seriousness and likelihood of harm related to each of these concerns, we analyzed the assumptions underlying the ethical issues, and the level of evidence for each of them. We identified seven distinct assumptions: prevalence, social acceptance, efficacy, ideological stance (bioconservative vs. libertarian), potential for misuse, long term side effects, and the delivery of complete and clear information. We conclude that ethical discussion about memory enhancement via tDCS sometimes involves undue speculation, and closer attention to scientific and social facts would bring a more nuanced analysis. At this time, the most realistic concerns are related to safety and violation of users' autonomy by a breach of informed consent, as potential immediate and long-term health risks to private users remain unknown or not well defined. Clear and complete information about these risks must be provided to research participants and consumers of tDCS products or related services. Broader public education initiatives and warnings would also be worthwhile to reach those who are constructing their own tDCS devices.",
"title": ""
},
{
"docid": "65e320e250cbeb8942bf00f335be4cbd",
"text": "In this paper, we propose a deep progressive reinforcement learning (DPRL) method for action recognition in skeleton-based videos, which aims to distil the most informative frames and discard ambiguous frames in sequences for recognizing actions. Since the choices of selecting representative frames are multitudinous for each video, we model the frame selection as a progressive process through deep reinforcement learning, during which we progressively adjust the chosen frames by taking two important factors into account: (1) the quality of the selected frames and (2) the relationship between the selected frames to the whole video. Moreover, considering the topology of human body inherently lies in a graph-based structure, where the vertices and edges represent the hinged joints and rigid bones respectively, we employ the graph-based convolutional neural network to capture the dependency between the joints for action recognition. Our approach achieves very competitive performance on three widely used benchmarks.",
"title": ""
},
{
"docid": "bbf987eef74d76cf2916ae3080a2b174",
"text": "The facial system plays an important role in human-robot interaction. EveR-4 H33 is a head system for an android face controlled by thirty-three motors. It consists of three layers: a mechanical layer, an inner cover layer and an outer cover layer. Motors are attached under the skin and some motors are correlated with each other. Some expressions cannot be shown by moving just one motor. In addition, moving just one motor can cause damage to other motors or the skin. To solve these problems, a facial muscle control method that controls motors in a correlated manner is required. We designed a facial muscle control method and applied it to EveR-4 H33. We develop the actress robot EveR-4A by applying the EveR-4 H33 to the 24 degrees of freedom upper body and mannequin legs. EveR-4A shows various facial expressions with lip synchronization using our facial muscle control method.",
"title": ""
},
{
"docid": "81190a4c576f86444a95e75654bddf29",
"text": "Enforcing a variety of security measures (such as intrusion detection systems, and so on) can provide a certain level of protection to computer networks. However, such security practices often fall short in face of zero-day attacks. Due to the information asymmetry between attackers and defenders, detecting zero-day attacks remains a challenge. Instead of targeting individual zero-day exploits, revealing them on an attack path is a substantially more feasible strategy. Such attack paths that go through one or more zero-day exploits are called zero-day attack paths. In this paper, we propose a probabilistic approach and implement a prototype system ZePro for zero-day attack path identification. In our approach, a zero-day attack path is essentially a graph. To capture the zero-day attack, a dependency graph named object instance graph is first built as a supergraph by analyzing system calls. To further reveal the zero-day attack paths hidden in the supergraph, our system builds a Bayesian network based upon the instance graph. By taking intrusion evidence as input, the Bayesian network is able to compute the probabilities of object instances being infected. Connecting the high-probability-instances through dependency relations forms a path, which is the zero-day attack path. The experiment results demonstrate the effectiveness of ZePro for zero-day attack path identification.",
"title": ""
},
{
"docid": "4b90fefa981e091ac6a5d2fd83e98b66",
"text": "This paper explores an analysis-aware data cleaning architecture for a large class of SPJ SQL queries. In particular, we propose QuERy, a novel framework for integrating entity resolution (ER) with query processing. The aim of QuERy is to correctly and efficiently answer complex queries issued on top of dirty data. The comprehensive empirical evaluation of the proposed solution demonstrates its significant advantage in terms of efficiency over the traditional techniques for the given problem settings.",
"title": ""
},
{
"docid": "3630c575bf7b5250930c7c54d8a1c6d0",
"text": "The RCSB Protein Data Bank (RCSB PDB, http://www.rcsb.org) provides access to 3D structures of biological macromolecules and is one of the leading resources in biology and biomedicine worldwide. Our efforts over the past 2 years focused on enabling a deeper understanding of structural biology and providing new structural views of biology that support both basic and applied research and education. Herein, we describe recently introduced data annotations including integration with external biological resources, such as gene and drug databases, new visualization tools and improved support for the mobile web. We also describe access to data files, web services and open access software components to enable software developers to more effectively mine the PDB archive and related annotations. Our efforts are aimed at expanding the role of 3D structure in understanding biology and medicine.",
"title": ""
}
] |
scidocsrr
|
1057b1673d4ac7b30c702bff9c449e9e
|
Malware Detection with Deep Neural Network Using Process Behavior
|
[
{
"docid": "4ca5fec568185d3699c711cc86104854",
"text": "Attackers often create systems that automatically rewrite and reorder their malware to avoid detection. Typical machine learning approaches, which learn a classifier based on a handcrafted feature vector, are not sufficiently robust to such reorderings. We propose a different approach, which, similar to natural language modeling, learns the language of malware spoken through the executed instructions and extracts robust, time domain features. Echo state networks (ESNs) and recurrent neural networks (RNNs) are used for the projection stage that extracts the features. These models are trained in an unsupervised fashion. A standard classifier uses these features to detect malicious files. We explore a few variants of ESNs and RNNs for the projection stage, including Max-Pooling and Half-Frame models which we propose. The best performing hybrid model uses an ESN for the recurrent model, Max-Pooling for non-linear sampling, and logistic regression for the final classification. Compared to the standard trigram of events model, it improves the true positive rate by 98.3% at a false positive rate of 0.1%.",
"title": ""
}
] |
[
{
"docid": "45636bc97812ecfd949438c2e8ee9d52",
"text": "Single-image super-resolution is a fundamental task for vision applications to enhance the image quality with respect to spatial resolution. If the input image contains degraded pixels, the artifacts caused by the degradation could be amplified by superresolution methods. Image blur is a common degradation source. Images captured by moving or still cameras are inevitably affected by motion blur due to relative movements between sensors and objects. In this work, we focus on the super-resolution task with the presence of motion blur. We propose a deep gated fusion convolution neural network to generate a clear high-resolution frame from a single natural image with severe blur. By decomposing the feature extraction step into two task-independent streams, the dualbranch design can facilitate the training process by avoiding learning the mixed degradation all-in-one and thus enhance the final high-resolution prediction results. Extensive experiments demonstrate that our method generates sharper super-resolved images from low-resolution inputs with high computational efficiency.",
"title": ""
},
{
"docid": "d11c2dd512f680e79706f73d4cd3d0aa",
"text": "We describe the class of convexified convolutional neural networks (CCNNs), which capture the parameter sharing of convolutional neural networks in a convex manner. By representing the nonlinear convolutional filters as vectors in a reproducing kernel Hilbert space, the CNN parameters can be represented in terms of a lowrank matrix, and the rank constraint can be relaxed so as to obtain a convex optimization problem. For learning two-layer convolutional neural networks, we prove that the generalization error obtained by a convexified CNN converges to that of the best possible CNN. For learning deeper networks, we train CCNNs in a layerwise manner. Empirically, we find that CCNNs achieve competitive or better performance than CNNs trained by backpropagation, SVMs, fully-connected neural networks, stacked denoising auto-encoders, and other baseline methods.",
"title": ""
},
{
"docid": "096249a1b13cd994427eacddc8af3cf6",
"text": "Many factors influence the adoption of cloud computing. Organizations must systematically evaluate these factors before deciding to adopt cloud-based solutions. To assess the determinants that influence the adoption of cloud computing, we develop a research model based on the innovation characteristics from the diffusion of innovation (DOI) theory and the technology-organization-environment (TOE) framework. Data collected from 369 firms in Portugal are used to test the related hypotheses. The study also investigates the determinants of cloud-computing adoption in the manufacturing and services sectors. 2014 Elsevier B.V. All rights reserved. * Corresponding author. Tel.: +351 914 934 438. E-mail addresses: toliveira@isegi.unl.pt (T. Oliveira), mthomas@vcu.edu (M. Thomas), mariana.espadanal@gmail.com (M. Espadanal).",
"title": ""
},
{
"docid": "873a24a210aa57fc22895500530df2ba",
"text": "We describe the winning entry to the Amazon Picking Challenge. From the experience of building this system and competing in the Amazon Picking Challenge, we derive several conclusions: 1) We suggest to characterize robotic system building along four key aspects, each of them spanning a spectrum of solutions—modularity vs. integration, generality vs. assumptions, computation vs. embodiment, and planning vs. feedback. 2) To understand which region of each spectrum most adequately addresses which robotic problem, we must explore the full spectrum of possible approaches. To achieve this, our community should agree on key aspects that characterize the solution space of robotic systems. 3) For manipulation problems in unstructured environments, certain regions of each spectrum match the problem most adequately, and should be exploited further. This is supported by the fact that our solution deviated from the majority of the other challenge entries along each of the spectra.",
"title": ""
},
{
"docid": "7e51bffe62c16cdc517a7c1cbd4ac3fe",
"text": "Information is a perennially significant business asset in all organizations. Therefore, it must be protected as any other valuable asset. This is the objective of information security, and an information security program provides this kind of protection for a company’s information assets and for the company as a whole. One of the best ways to address information security problems in the corporate world is through a risk-based approach. In this paper, we present a taxonomy of security risk assessment drawn from 125 papers published from 1995 to May 2014. Organizations with different size may face problems in selecting suitable risk assessment methods that satisfy their needs. Although many risk-based approaches have been proposed, most of them are based on the old taxonomy, avoiding the need for considering and applying the important criteria in assessing risk raised by rapidly changing technologies and the attackers knowledge level. In this paper, we discuss the key features of risk assessment that should be included in an information security management system. We believe that our new risk assessment taxonomy helps organizations to not only understand the risk assessment better by comparing different new concepts but also select a suitable way to conduct the risk assessment properly. Moreover, this taxonomy will open up interesting avenues for future research in the growing field of security risk assessment.",
"title": ""
},
{
"docid": "82a4bac1745e2d5dd9e39c5a4bf5b3e9",
"text": "Meaning can be as important as usability in the design of technology.",
"title": ""
},
{
"docid": "c95e58c054855c60b16db4816c626ecb",
"text": "Markerless tracking of human pose is a hard yet relevant problem. In this paper, we derive an efficient filtering algorithm for tracking human pose using a stream of monocular depth images. The key idea is to combine an accurate generative model — which is achievable in this setting using programmable graphics hardware — with a discriminative model that provides data-driven evidence about body part locations. In each filter iteration, we apply a form of local model-based search that exploits the nature of the kinematic chain. As fast movements and occlusion can disrupt the local search, we utilize a set of discriminatively trained patch classifiers to detect body parts. We describe a novel algorithm for propagating this noisy evidence about body part locations up the kinematic chain using the un-scented transform. The resulting distribution of body configurations allows us to reinitialize the model-based search. We provide extensive experimental results on 28 real-world sequences using automatic ground-truth annotations from a commercial motion capture system.",
"title": ""
},
{
"docid": "90a9e56cc5a2f9c149dfb33d3446f095",
"text": "The author explores the viability of a comparative approach to personality research. A review of the diverse animal-personality literature suggests that (a) most research uses trait constructs, focuses on variation within (vs. across) species, and uses either behavioral codings or trait ratings; (b) ratings are generally reliable and show some validity (7 parameters that could influence reliability and 4 challenges to validation are discussed); and (c) some dimensions emerge across species, but summaries are hindered by a lack of standard descriptors. Arguments for and against cross-species comparisons are discussed, and research guidelines are suggested. Finally, a research agenda guided by evolutionary and ecological principles is proposed. It is concluded that animal studies provide unique opportunities to examine biological, genetic, and environmental bases of personality and to study personality change, personality-health links, and personality perception.",
"title": ""
},
{
"docid": "67d141b8e53e1398b6988e211d16719e",
"text": "the recent advancement of networking technology has enabled the streaming of video content over wired/wireless network to a great extent. Video streaming includes various types of video content, namely, IP television (IPTV), Video on demand (VOD), Peer-to-Peer (P2P) video sharing, Voice (and video) over IP (VoIP) etc. The consumption of the video contents has been increasing a lot these days and promises a huge potential for the network provider, content provider and device manufacturers. However, from the end user's perspective there is no universally accepted existing standard metric, which will ensure the quality of the application/utility to meet the user's desired experience. In order to fulfill this gap, a new metric, called Quality of Experience (QoE), has been proposed in numerous researches recently. Our aim in this paper is to research the evolution of the term QoE, find the influencing factors of QoE metric especially in video streaming and finally QoE modelling and methodologies in practice.",
"title": ""
},
{
"docid": "f393b6e00ef1e97f683a5dace33e40ff",
"text": "s on human factors in computing systems (pp. 815–828). ACM New York, NY, USA. Hudlicka, E. (1997). Summary of knowledge elicitation techniques for requirements analysis (Course material for human computer interaction). Worcester Polytechnic Institute. Kaptelinin, V., & Nardi, B. (2012). Affordances in HCI: Toward a mediated action perspective. In Proceedings of CHI '12 (pp. 967–976).",
"title": ""
},
{
"docid": "77ec1741e7a0876a0fe9fb85dd57f552",
"text": "Despite growing recognition that attention fluctuates from moment-to-moment during sustained performance, prevailing analysis strategies involve averaging data across multiple trials or time points, treating these fluctuations as noise. Here, using alternative approaches, we clarify the relationship between ongoing brain activity and performance fluctuations during sustained attention. We introduce a novel task (the gradual onset continuous performance task), along with innovative analysis procedures that probe the relationships between reaction time (RT) variability, attention lapses, and intrinsic brain activity. Our results highlight 2 attentional states-a stable, less error-prone state (\"in the zone\"), characterized by higher default mode network (DMN) activity but during which subjects are at risk of erring if DMN activity rises beyond intermediate levels, and a more effortful mode of processing (\"out of the zone\"), that is less optimal for sustained performance and relies on activity in dorsal attention network (DAN) regions. These findings motivate a new view of DMN and DAN functioning capable of integrating seemingly disparate reports of their role in goal-directed behavior. Further, they hold potential to reconcile conflicting theories of sustained attention, and represent an important step forward in linking intrinsic brain activity to behavioral phenomena.",
"title": ""
},
{
"docid": "229605eada4ca390d17c5ff168c6199a",
"text": "The sharing economy is a new online community that has important implications for offline behavior. This study evaluates whether engagement in the sharing economy is associated with an actor’s aversion to risk. Using a web-based survey and a field experiment, we apply an adaptation of Holt and Laury’s (2002) risk lottery game to a representative sample of sharing economy participants. We find that frequency of activity in the sharing economy predicts risk aversion, but only in interaction with satisfaction. While greater satisfaction with sharing economy websites is associated with a decrease in risk aversion, greater frequency of usage is associated with greater risk aversion. This analysis shows the limitations of a static perspective on how risk attitudes relate to participation in the sharing economy.",
"title": ""
},
{
"docid": "65a990303d1d6efd3aea5307e7db9248",
"text": "The presentation of news articles to meet research needs has traditionally been a document-centric process. Yet users often want to monitor developing news stories based on an event, rather than by examining an exhaustive list of retrieved documents. In this work, we illustrate a news retrieval system, eventNews, and an underlying algorithm which is event-centric. Through this system, news articles are clustered around a single news event or an event and its sub-events. The algorithm presented can leverage the creation of new Reuters stories and their compact labels as seed documents for the clustering process. The system is configured to generate top-level clusters for news events based on an editorially supplied topical label, known as a ‘slugline,’ and to generate sub-topic-focused clusters based on the algorithm. The system uses an agglomerative clustering algorithm to gather and structure documents into distinct result sets. Decisions on whether to merge related documents or clusters are made according to the similarity of evidence derived from two distinct sources, one, relying on a digital signature based on the unstructured text in the document, the other based on the presence of named entity tags that have been assigned to the document by a named entity tagger, in this case Thomson Reuters’ Calais engine. Copyright c © 2016 for the individual papers by the paper’s authors. Copying permitted for private and academic purposes. This volume is published and copyrighted by its editors. In: M. Martinez, U. Kruschwitz, G. Kazai, D. Corney, F. Hopfgartner, R. Campos and D. Albakour (eds.): Proceedings of the NewsIR’16 Workshop at ECIR, Padua, Italy, 20-March-2016, published at http://ceur-ws.org",
"title": ""
},
{
"docid": "5ed74b235edcbcb5aeb5b6b3680e2122",
"text": "Self-paced learning (SPL) mimics the cognitive mechanism o f humans and animals that gradually learns from easy to hard samples. One key issue in SPL is to obtain better weighting strategy that is determined by mini zer function. Existing methods usually pursue this by artificially designing th e explicit form of SPL regularizer. In this paper, we focus on the minimizer functi on, and study a group of new regularizer, named self-paced implicit regularizer th at is deduced from robust loss function. Based on the convex conjugacy theory, the min imizer function for self-paced implicit regularizer can be directly learned fr om the latent loss function, while the analytic form of the regularizer can be even known. A general framework (named SPL-IR) for SPL is developed accordingly. We dem onstrate that the learning procedure of SPL-IR is associated with latent robu st loss functions, thus can provide some theoretical inspirations for its working m echanism. We further analyze the relation between SPL-IR and half-quadratic opt imization. Finally, we implement SPL-IR to both supervised and unsupervised tasks , nd experimental results corroborate our ideas and demonstrate the correctn ess and effectiveness of implicit regularizers.",
"title": ""
},
{
"docid": "e141a1c5c221aa97db98534b339694cb",
"text": "Despite the tremendous popularity and great potential, the field of Enterprise Resource Planning (ERP) adoption and implementation is littered with remarkable failures. Though many contributing factors have been cited in the literature, we argue that the integrated nature of ERP systems, which generally requires an organization to adopt standardized business processes reflected in the design of the software, is a key factor contributing to these failures. We submit that the integration and standardization imposed by most ERP systems may not be suitable for all types of organizations and thus the ‘‘fit’’ between the characteristics of the adopting organization and the standardized business process designs embedded in the adopted ERP system affects the likelihood of implementation success or failure. In this paper, we use the structural contingency theory to identify a set of dimensions of organizational structure and ERP system characteristics that can be used to gauge the degree of fit, thus providing some insights into successful ERP implementations. Propositions are developed based on analyses regarding the success of ERP implementations in different types of organizations. These propositions also provide directions for future research that might lead to prescriptive guidelines for managers of organizations contemplating implementing ERP systems. r 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "2cc55b2cf34d363de50b220a5ced5676",
"text": "We report an imaging scheme, termed aperture-scanning Fourier ptychography, for 3D refocusing and super-resolution macroscopic imaging. The reported scheme scans an aperture at the Fourier plane of an optical system and acquires the corresponding intensity images of the object. The acquired images are then synthesized in the frequency domain to recover a high-resolution complex sample wavefront; no phase information is needed in the recovery process. We demonstrate two applications of the reported scheme. In the first example, we use an aperture-scanning Fourier ptychography platform to recover the complex hologram of extended objects. The recovered hologram is then digitally propagated into different planes along the optical axis to examine the 3D structure of the object. We also demonstrate a reconstruction resolution better than the detector pixel limit (i.e., pixel super-resolution). In the second example, we develop a camera-scanning Fourier ptychography platform for super-resolution macroscopic imaging. By simply scanning the camera over different positions, we bypass the diffraction limit of the photographic lens and recover a super-resolution image of an object placed at the far field. This platform's maximum achievable resolution is ultimately determined by the camera's traveling range, not the aperture size of the lens. The FP scheme reported in this work may find applications in 3D object tracking, synthetic aperture imaging, remote sensing, and optical/electron/X-ray microscopy.",
"title": ""
},
{
"docid": "a4790fdc5f6469b45fa4a22a871f3501",
"text": "NSGA ( [5]) is a popular non-domination based genetic algorithm for multiobjective optimization. It is a very effective algorithm but has been generally criticized for its computational complexity, lack of elitism and for choosing the optimal parameter value for sharing parameter σshare. A modified version, NSGAII ( [3]) was developed, which has a better sorting algorithm , incorporates elitism and no sharing parameter needs to be chosen a priori. NSGA-II is discussed in detail in this.",
"title": ""
},
{
"docid": "790895861cb5bba78513d26c1eb30e4c",
"text": "This paper develops an integrated approach, combining quality function deployment (QFD), fuzzy set theory, and analytic hierarchy process (AHP) approach, to evaluate and select the optimal third-party logistics service providers (3PLs). In the approach, multiple evaluating criteria are derived from the requirements of company stakeholders using a series of house of quality (HOQ). The importance of evaluating criteria is prioritized with respect to the degree of achieving the stakeholder requirements using fuzzy AHP. Based on the ranked criteria, alternative 3PLs are evaluated and compared with each other using fuzzy AHP again to make an optimal selection. The effectiveness of proposed approach is demonstrated by applying it to a Hong Kong based enterprise that supplies hard disk components. The proposed integrated approach outperforms the existing approaches because the outsourcing strategy and 3PLs selection are derived from the corporate/business strategy.",
"title": ""
},
{
"docid": "c6347c06d84051023baaab39e418fb65",
"text": "This paper presents a complete approach to a successful utilization of a high-performance extreme learning machines (ELMs) Toolbox for Big Data. It summarizes recent advantages in algorithmic performance; gives a fresh view on the ELM solution in relation to the traditional linear algebraic performance; and reaps the latest software and hardware performance achievements. The results are applicable to a wide range of machine learning problems and thus provide a solid ground for tackling numerous Big Data challenges. The included toolbox is targeted at enabling the full potential of ELMs to the widest range of users.",
"title": ""
},
{
"docid": "db570f8ff8d714dc2964a9d9b7032bf4",
"text": "Pain related to the osseous thoracolumbar spine is common in the equine athlete, with minimal information available regarding soft tissue pathology. The aims of this study were to describe the anatomy of the equine SSL and ISL (supraspinous and interspinous ligaments) in detail and to assess the innervation of the ligaments and their myofascial attachments including the thoracolumbar fascia. Ten equine thoracolumbar spines (T15-L1) were dissected to define structure and anatomy of the SSL, ISL and adjacent myofascial attachments. Morphological evaluation included histology, electron microscopy and immunohistochemistry (S100 and Substance P) of the SSL, ISL, adjacent fascial attachments, connective tissue and musculature. The anatomical study demonstrated that the SSL and ISL tissues merge with the adjacent myofascia. The ISL has a crossing fibre arrangement consisting of four ligamentous layers with adipose tissue axially. A high proportion of single nerve fibres were detected in the SSL (mean = 2.08 fibres/mm2 ) and ISL (mean = 0.75 fibres/mm2 ), with the larger nerves located between the ligamentous and muscular tissue. The oblique crossing arrangement of the fibres of the ISL likely functions to resist distractive and rotational forces, therefore stabilizing the equine thoracolumbar spine. The dense sensory innervation within the SSL and ISL could explain the severe pain experienced by some horses with impinging dorsal spinous processes. Documentation of the nervous supply of the soft tissues associated with the dorsal spinous processes is a key step towards improving our understanding of equine back pain.",
"title": ""
}
] |
scidocsrr
|
8627e2833f43092297a911400c8ece69
|
Video-based Framework for Safer and Smarter Computer Aided Surgery
|
[
{
"docid": "3e66d3e2674bdaa00787259ac99c3f68",
"text": "Dempster-Shafer theory offers an alternative to traditional probabilistic theory for the mathematical representation of uncertainty. The significant innovation of this framework is that it allows for the allocation of a probability mass to sets or intervals. DempsterShafer theory does not require an assumption regarding the probability of the individual constituents of the set or interval. This is a potentially valuable tool for the evaluation of risk and reliability in engineering applications when it is not possible to obtain a precise measurement from experiments, or when knowledge is obtained from expert elicitation. An important aspect of this theory is the combination of evidence obtained from multiple sources and the modeling of conflict between them. This report surveys a number of possible combination rules for Dempster-Shafer structures and provides examples of the implementation of these rules for discrete and interval-valued data.",
"title": ""
}
] |
[
{
"docid": "d09b4b59c30925bae0983c7e56c3386d",
"text": "We describe a system that automatically extracts 3D geometry of an indoor scene from a single 2D panorama. Our system recovers the spatial layout by finding the floor, walls, and ceiling; it also recovers shapes of typical indoor objects such as furniture. Using sampled perspective sub-views, we extract geometric cues (lines, vanishing points, orientation map, and surface normals) and semantic cues (saliency and object detection information). These cues are used for ground plane estimation and occlusion reasoning. The global spatial layout is inferred through a constraint graph on line segments and planar superpixels. The recovered layout is then used to guide shape estimation of the remaining objects using their normal information. Experiments on synthetic and real datasets show that our approach is state-of-the-art in both accuracy and efficiency. Our system can handle cluttered scenes with complex geometry that are challenging to existing techniques.",
"title": ""
},
{
"docid": "d4c8acbbee72b8a9e880e2bce6e2153a",
"text": "This paper presents a simple linear operator that accurately estimates the position and parameters of ellipse features. Based on the dual conic model, the operator avoids the intermediate stage of precisely extracting individual edge points by exploiting directly the raw gradient information in the neighborhood of an ellipse's boundary. Moreover, under the dual representation, the dual conic can easily be constrained to a dual ellipse when minimizing the algebraic distance. The new operator is assessed and compared to other estimation approaches in simulation as well as in real situation experiments and shows better accuracy than the best approaches, including those limited to the center position.",
"title": ""
},
{
"docid": "b19630c809608601948a7f16910396f7",
"text": "This paper presents a novel, smart and portable active knee rehabilitation orthotic device (AKROD) designed to train stroke patients to correct knee hyperextension during stance and stiff-legged gait (defined as reduced knee flexion during swing). The knee brace provides variable damping controlled in ways that foster motor recovery in stroke patients. A resistive, variable damper, electro-rheological fluid (ERF) based component is used to facilitate knee flexion during stance by providing resistance to knee buckling. Furthermore, the knee brace is used to assist in knee control during swing, i.e. to allow patients to achieve adequate knee flexion for toe clearance and adequate knee extension in preparation to heel strike. The detailed design of AKROD, the first prototype built, closed loop control results and initial human testing are presented here",
"title": ""
},
{
"docid": "a2f3b158f1ec7e6ecb68f5ddfeaf0502",
"text": "Facial landmark detection of face alignment has long been impeded by the problems of occlusion and pose variation. Instead of treating the detection task as a single and independent problem, we investigate the possibility of improving detection robustness through multitask learning. Specifically, we wish to optimize facial landmark detection together with heterogeneous but subtly correlated tasks, e.g. head pose estimation and facial attribute inference. This is non-trivial since different tasks have different learning difficulties and convergence rates. To address this problem, we formulate a novel tasks-constrained deep model, with task-wise early stopping to facilitate learning convergence. Extensive evaluations show that the proposed task-constrained learning (i) outperforms existing methods, especially in dealing with faces with severe occlusion and pose variation, and (ii) reduces model complexity drastically compared to the state-of-the-art method based on cascaded deep model [29]. In this technical report, we extend the method presented in our ECCV 2014 [39] paper to handle more landmark points (68 points instead of 5 major facial points) without either redesigning the deep model or involving significant increase in run time cost. This is made possible by transferring the learned 5-point model to the desired facial landmark configuration, through model fine-tuning with dense landmark annotations. Our new model achieves the state-of-the-art result on the 300-W benchmark dataset (mean error of 9.15% on the challenging IBUG subset).",
"title": ""
},
{
"docid": "ef1d28df2575c2c844ca2fa109893d92",
"text": "Measurement of the quantum-mechanical phase in quantum matter provides the most direct manifestation of the underlying abstract physics. We used resonant x-ray scattering to probe the relative phases of constituent atomic orbitals in an electronic wave function, which uncovers the unconventional Mott insulating state induced by relativistic spin-orbit coupling in the layered 5d transition metal oxide Sr2IrO4. A selection rule based on intra-atomic interference effects establishes a complex spin-orbital state represented by an effective total angular momentum = 1/2 quantum number, the phase of which can lead to a quantum topological state of matter.",
"title": ""
},
{
"docid": "758310a8bcfcdec01b11889617f5a2c7",
"text": "1 †This paper is an extended version of the ICSCA 2017 paper “Reference scope identification for citances by classification with text similarity measures” [55]. This work is supported by the Ministry of Science and Technology (MOST), Taiwan (Grant number: MOST 104-2221-E-178-001). *Corresponding author. Tel: +886 4 23226940728; fax: +886 4 23222621. On Identifying Cited Texts for Citances and Classifying Their Discourse Facets by Classification Techniques",
"title": ""
},
{
"docid": "49b7fa9ad8912c23a7e9e725307cf69c",
"text": "In recent years, with the development of social networks, sentiment analysis has become one of the most important research topics in the field of natural language processing. The deep neural network model combining attention mechanism has achieved remarkable success in the task of target-based sentiment analysis. In current research, however, the attention mechanism is more combined with LSTM networks, such neural network- based architectures generally rely on complex computation and only focus on the single target, thus it is difficult to effectively distinguish the different polarities of variant targets in the same sentence. To address this problem, we propose a deep neural network model combining convolutional neural network and regional long short-term memory (CNN-RLSTM) for the task of target-based sentiment analysis. The approach can reduce the training time of neural network model through a regional LSTM. At the same time, the CNN-RLSTM uses a sentence-level CNN to extract sentiment features of the whole sentence, and controls the transmission of information through different weight matrices, which can effectively infer the sentiment polarities of different targets in the same sentence. Finally, experimental results on multi-domain datasets of two languages from SemEval2016 and auto data show that, our approach yields better performance than SVM and several other neural network models.",
"title": ""
},
{
"docid": "6d41ec322f71c32195119807f35fde53",
"text": "Input current distortion in the vicinity of input voltage zero crossings of boost single-phase power factor corrected (PFC) ac-dc converters is studied in this paper. Previously known causes for the zero-crossing distortion are reviewed and are shown to be inadequate in explaining the observed input current distortion, especially under high ac line frequencies. A simple linear model is then presented which reveals two previously unknown causes for zero-crossing distortion, namely, the leading phase of the input current and the lack of critical damping in the current loop. Theoretical and practical limitations in reducing the phase lead and increasing the damping factor are discussed. A simple phase compensation technique to reduce the zero-crossing distortion is also presented. Numerical simulation and experimental results are presented to validate the theory.",
"title": ""
},
{
"docid": "84195c27330dad460b00494ead1654c8",
"text": "We present a unified framework for the computational implementation of syntactic, semantic, pragmatic and even \"stylistic\" constraints on anaphora. We build on our BUILDRS implementation of Discourse Representation (DR) Theory and Lexical Functional Grammar (LFG) discussed in Wada & Asher (1986). We develop and argue for a semantically based processing model for anaphora resolution that exploits a number of desirable features: (1) the partial semantics provided by the discourse representation structures (DRSs) of DR theory, (2) the use of syntactic and lexical features to filter out unacceptable potential anaphoric antecedents from the set of logically possible antecedents determined by the logical structure of the DRS, (3) the use of pragmatic or discourse constraints, noted by those working on focus, to impose a salience ordering on the set of grammatically acceptable potential antecedents. Only where there is a marked difference in the degree of salience among the possible antecedents does the salience ranking allow us to make predictions on preferred readings. In cases where the difference is extreme, we predict the discourse to be infelicitous if, because of other constraints, one of the markedly less salient antecedents must be linked with the pronoun. We also briefly consider the applications of our processing model to other definite noun phrases besides anaphoric pronouns.",
"title": ""
},
{
"docid": "9493fa9f3749088462c1af7b34d9cfc9",
"text": "Computer vision assisted diagnostic systems are gaining popularity in different healthcare applications. This paper presents a video analysis and pattern recognition framework for the automatic grading of vertical suspension tests on infants during the Hammersmith Infant Neurological Examination (HINE). The proposed vision-guided pipeline applies a color-based skin region segmentation procedure followed by the localization of body parts before feature extraction and classification. After constrained localization of lower body parts, a stick-diagram representation is used for extracting novel features that correspond to the motion dynamic characteristics of the infant's leg movements during HINE. This set of pose features generated from such a representation includes knee angles and distances between knees and hills. Finally, a time-series representation of the feature vector is used to train a Hidden Markov Model (HMM) for classifying the grades of the HINE tests into three predefined categories. Experiments are carried out by testing the proposed framework on a large number of vertical suspension test videos recorded at a Neuro-development clinic. The automatic grading results obtained from the proposed method matches the scores of experts at an accuracy of 74%.",
"title": ""
},
{
"docid": "05046c00903852a983bf194f4348799c",
"text": "This paper describes a temperature sensor realized in a 65nm CMOS process with a batch-calibrated inaccuracy of ±0.5°C (3s) and a trimmed inaccuracy of ±0.2°C (3s) from −70°C to 125°C. This represents a 10-fold improvement in accuracy compared to other deep-submicron temperature sensors [1,2], and is comparable with that of state-of-the-art sensors implemented in larger-feature-size processes [3,4]. The sensor draws 8.3µA from a 1.2V supply and occupies an area of 0.1mm2, which is 45 times less than that of sensors with comparable accuracy [3,4]. These advances are enabled by the use of NPN transistors as sensing elements, the use of dynamic techniques i.e. correlated double sampling (CDS) and dynamic element matching (DEM), and a single room-temperature trim.",
"title": ""
},
{
"docid": "563d5144c9e053bb4e3cf5a06b19f656",
"text": "After introductory remarks on the definition of marketing, the evolution of library and information services (LIS) marketing is explained. The authors then describe how marketing was applied to LIS over the years. Marketing is also related to other concepts used in the management of LIS. Finally the role of professional associations in diffusing marketing theory is portrayed and the importance of education addressed. The entry ends with a reflection on the future of marketing for LIS.",
"title": ""
},
{
"docid": "bd37aa47cf495c7ea327caf2247d28e4",
"text": "The purpose of this study is to identify the negative effects of social network sites such as Facebook among Asia Pacific University scholars. The researcher, distributed 152 surveys to students of the chosen university to examine and study the negative effects. Electronic communication is emotionally gratifying but how do such technological distraction impact on academic performance? Because of social media platform’s widespread adoption by university students, there is an interest in how Facebook is related to academic performance. This paper measure frequency of use, participation in activities and time spent preparing for class, in order to know if Facebook affects the performance of students. Moreover, the impact of social network site on academic performance also raised another major concern which is health. Today social network sites are running the future and carrier of students. Social network sites were only an electronic connection between users, but unfortunately it has become an addiction for students. This paper examines the relationship between social network sites and health threat. Lastly, the paper provides a comprehensive analysis of the law and privacy of Facebook. It shows how Facebook users socialize on the site, while they are not aware or misunderstand the risk involved and how their privacy suffers as a result.",
"title": ""
},
{
"docid": "b282d29318b44b56e5bfe07d28c00286",
"text": "Word2vec (Mikolov et al., 2013b) has proven to be successful in natural language processing by capturing the semantic relationships between different words. Built on top of single-word embeddings, paragraph vectors (Le and Mikolov, 2014) find fixed-length representations for pieces of text with arbitrary lengths, such as documents, paragraphs, and sentences. In this work, we propose a novel interpretation for neural-network-based paragraph vectors by developing an unsupervised generative model whose maximum likelihood solution corresponds to traditional paragraph vectors. This probabilistic formulation allows us to go beyond point estimates of parameters and to perform Bayesian posterior inference. We find that the entropy of paragraph vectors decreases with the length of documents, and that information about posterior uncertainty improves performance in supervised learning tasks such as sentiment analysis and paraphrase detection.",
"title": ""
},
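A minimal sketch of the baseline that the entry above generalizes: training ordinary (point-estimate) paragraph vectors with gensim's Doc2Vec. The toy corpus, vector size and epoch count are placeholder choices, not values from the paper; the paper's Bayesian treatment (a prior over paragraph vectors and posterior inference) is not part of gensim and is not reproduced here.

from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# Toy corpus; each document becomes a TaggedDocument with an integer tag.
corpus = ["the movie was wonderful", "terrible plot and acting", "a pleasant surprise overall"]
docs = [TaggedDocument(words=text.split(), tags=[i]) for i, text in enumerate(corpus)]

# Point estimates of paragraph vectors (hyperparameters are illustrative only).
model = Doc2Vec(vector_size=50, min_count=1, epochs=40)
model.build_vocab(docs)
model.train(docs, total_examples=model.corpus_count, epochs=model.epochs)

# Infer a fixed-length vector for an unseen document (a point estimate,
# not the posterior distribution discussed in the entry above).
vec = model.infer_vector("surprisingly good movie".split())
print(vec[:5])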
{
"docid": "15102e561d9640ee39952e4ad62ef896",
"text": "OBJECTIVE\nTo define the relative position of the maxilla and mandible in fetuses with trisomy 18 at 11 + 0 to 13 + 6 weeks of gestation.\n\n\nMETHODS\nA three-dimensional (3D) volume of the fetal head was obtained before karyotyping at 11 + 0 to 13 + 6 weeks of gestation in 36 fetuses subsequently found to have trisomy 18, and 200 chromosomally normal fetuses. The frontomaxillary facial (FMF) angle and the mandibulomaxillary facial (MMF) angle were measured in a mid-sagittal view of the fetal face.\n\n\nRESULTS\nIn the chromosomally normal group both the FMF and MMF angles decreased significantly with crown-rump length (CRL). In the trisomy 18 fetuses the FMF angle was significantly greater and the angle was above the 95(th) centile of the normal range in 21 (58.3%) cases. In contrast, in trisomy 18 fetuses the MMF angle was significantly smaller than that in normal fetuses and the angle was below the 5(th) centile of the normal range in 12 (33.3%) cases.\n\n\nCONCLUSIONS\nTrisomy 18 at 11 + 0 to 13 + 6 weeks of gestation is associated with both mid-facial hypoplasia and micrognathia or retrognathia that can be documented by measurement of the FMF angle and MMF angle, respectively.",
"title": ""
},
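For illustration only: the facial angles above are measured clinically on a mid-sagittal ultrasound view, but the underlying geometry reduces to the angle at a vertex defined by three landmark points. A small sketch of that computation (the landmark coordinates are made-up values, not fetal measurements):

import numpy as np

def angle_at_vertex(vertex, p1, p2):
    """Angle (degrees) at `vertex` between rays vertex->p1 and vertex->p2."""
    v1 = np.asarray(p1, float) - np.asarray(vertex, float)
    v2 = np.asarray(p2, float) - np.asarray(vertex, float)
    cos_theta = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# Hypothetical 3D landmark coordinates (mm), chosen only to demo the function.
frontal_bone = [0.0, 10.0, 35.0]
maxilla_tip  = [0.0, 12.0, 5.0]
palate_point = [0.0, 30.0, 8.0]
print(angle_at_vertex(maxilla_tip, frontal_bone, palate_point))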
{
"docid": "4e4bd38230dba0012227d8b40b01e867",
"text": "In this paper, we present a travel guidance system W2Go (Where to Go), which can automatically recognize and rank the landmarks for travellers. In this system, a novel Automatic Landmark Ranking (ALR) method is proposed by utilizing the tag and geo-tag information of photos in Flickr and user knowledge from Yahoo Travel Guide. ALR selects the popular tourist attractions (landmarks) based on not only the subjective opinion of the travel editors as is currently done on sites like WikiTravel and Yahoo Travel Guide, but also the ranking derived from popularity among tourists. Our approach utilizes geo-tag information to locate the positions of the tag-indicated places, and computes the probability of a tag being a landmark/site name. For potential landmarks, impact factors are calculated from the frequency of tags, user numbers in Flickr, and user knowledge in Yahoo Travel Guide. These tags are then ranked based on the impact factors. Several representative views for popular landmarks are generated from the crawled images with geo-tags to describe and present them in context of information derived from several relevant reference sources. The experimental comparisons to the other systems are conducted on eight famous cities over the world. User-based evaluation demonstrates the effectiveness of the proposed ALR method and the W2Go system.",
"title": ""
},
{
"docid": "4575b5c93aa86c150944597638402439",
"text": "Multilayer networks are networks where edges exist in multiple layers that encode different types or sources of interactions. As one of the most important problems in network science, discovering the underlying community structure in multilayer networks has received an increasing amount of attention in recent years. One of the challenging issues is to develop effective community structure quality functions for characterizing the structural or functional properties of the expected community structure. Although several quality functions have been developed for evaluating the detected community structure, little has been explored about how to explicitly bring our knowledge of the desired community structure into such quality functions, in particular for the multilayer networks. To address this issue, we propose the multilayer edge mixture model (MEMM), which is positioned as a general framework that enables us to design a quality function that reflects our knowledge about the desired community structure. The proposed model is based on a mixture of the edges, and the weights reflect their role in the detection process. By decomposing a community structure quality function into the form of MEMM, it becomes clear which type of community structure will be discovered by such quality function. Similarly, after such decomposition we can also modify the weights of the edges to find the desired community structure. In this paper, we apply the quality functions modified with the knowledge of MEMM to different multilayer benchmark networks as well as real-world multilayer networks and the detection results confirm the feasibility of MEMM.",
"title": ""
},
{
"docid": "a47d001dc8305885e42a44171c9a94b2",
"text": "Community detection in complex network has become a vital step to understand the structure and dynamics of networks in various fields. However, traditional node clustering and relatively new proposed link clustering methods have inherent drawbacks to discover overlapping communities. Node clustering is inadequate to capture the pervasive overlaps, while link clustering is often criticized due to the high computational cost and ambiguous definition of communities. So, overlapping community detection is still a formidable challenge. In this work, we propose a new overlapping community detection algorithm based on network decomposition, called NDOCD. Specifically, NDOCD iteratively splits the network by removing all links in derived link communities, which are identified by utilizing node clustering technique. The network decomposition contributes to reducing the computation time and noise link elimination conduces to improving the quality of obtained communities. Besides, we employ node clustering technique rather than link similarity measure to discover link communities, thus NDOCD avoids an ambiguous definition of community and becomes less time-consuming. We test our approach on both synthetic and real-world networks. Results demonstrate the superior performance of our approach both in computation time and accuracy compared to state-of-the-art algorithms.",
"title": ""
},
{
"docid": "558e6532f9a228c1ec41448a67214df2",
"text": "We consider the problem of shape recovery for real world scenes, where a variety of global illumination (inter-reflections, subsurface scattering, etc.) and illumination defocus effects are present. These effects introduce systematic and often significant errors in the recovered shape. We introduce a structured light technique called Micro Phase Shifting, which overcomes these problems. The key idea is to project sinusoidal patterns with frequencies limited to a narrow, high-frequency band. These patterns produce a set of images over which global illumination and defocus effects remain constant for each point in the scene. This enables high quality reconstructions of scenes which have traditionally been considered hard, using only a small number of images. We also derive theoretical lower bounds on the number of input images needed for phase shifting and show that Micro PS achieves the bound.",
"title": ""
},
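A sketch of the generic three-step phase-shifting decode that structured-light systems like the one above build on. It assumes three captured images of sinusoidal patterns shifted by ±120° and recovers the wrapped phase per pixel; the paper's specific contribution (restricting pattern frequencies to a narrow high-frequency band so global illumination and defocus stay constant) is not reproduced here.

import numpy as np

def wrapped_phase(i1, i2, i3):
    """Per-pixel wrapped phase from three shifted images.

    Assumes I1 = A + B*cos(phi - 2*pi/3), I2 = A + B*cos(phi), I3 = A + B*cos(phi + 2*pi/3),
    which gives phi = atan2(sqrt(3)*(I1 - I3), 2*I2 - I1 - I3).
    """
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

# Synthetic check: build the three images from a known phase map and recover it.
phi_true = np.linspace(-np.pi + 0.01, np.pi - 0.01, 256).reshape(16, 16)
A, B = 0.5, 0.4
i1 = A + B * np.cos(phi_true - 2 * np.pi / 3)
i2 = A + B * np.cos(phi_true)
i3 = A + B * np.cos(phi_true + 2 * np.pi / 3)
print(np.allclose(wrapped_phase(i1, i2, i3), phi_true, atol=1e-6))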
{
"docid": "1b17fd5250b50a750931660ac0e130fe",
"text": "MOS varactors are used extensively as tunable elements in the tank circuits of RF voltage-controlled oscillators (VCOs) based on submicrometer CMOS technologies. MOS varactor topologies include conventionalD=S=B connected, inversion-mode (I-MOS), and accumulation-mode (A-MOS) structures. When incorporated into the VCO tank circuit, the large-signal swing of the VCO output oscillation modulates the varactor capacitance in time, resulting in a VCO tuning curve that deviates from the dc tuning curve of the particular varactor structure. This paper presents a detailed analysis of this large-signal effect. Simulated results are compared to measurements for an example 2.5-GHz complementary LC VCO using I-MOS varactors implemented in 0.35m CMOS technology.",
"title": ""
}
] |
scidocsrr
|
445b8b02a94ce13049c6dc57d90c0234
|
Modeling Non-Functional Requirements
|
[
{
"docid": "221cd488d735c194e07722b1d9b3ee2a",
"text": "HURTS HELPS HURTS HELPS Data Type [Target System] Implicit HELPS HURTS HURTS BREAKS ? Invocation [Target System] Pipe & HELPS BREAKS BREAKS HELPS Filter WHEN [Target condl System] condl: size of data in domain is huge Figure 13.4. A generic Correlation Catalogue, based on [Garlan93]. Figure 13.3 shows a method which decomposes the topic on process, including algorithms as used in [Garlan93]. Decomposition methods for processes are also described in [Nixon93, 94a, 97a], drawing on implementations of processes [Chung84, 88]. These two method definitions are unparameterized. A fuller catalogue would include parameterized definitions too. Operationalization methods, which organize knowledge about satisficing NFR softgoals, are embedded in architectural designs when selected. For example, an ImplicitFunctionlnvocationRegime (based on [Garlan93]' architecture 3) can be used to hide implementation details in order to make an architectural 358 NON-FUNCTIONAL REQUIREMENTS IN SOFTWARE ENGINEERING design more extensible, thus contributing to one of the softgoals in the above decomposition. Argumentation methods and templates are used to organize principles and guidelines for making design rationale for or against design decisions (Cf. [J. Lee91]).",
"title": ""
}
] |
[
{
"docid": "42fb8651df19c1295cedd3d426089802",
"text": "Recent research has shown that mindfulness-based cognitive therapy (MBCT) could be a useful alternative approach to the treatment of health anxiety and deserves further investigation. In this paper, we outline the rationale for using MBCT in the treatment of this condition, namely its hypothesised impact on the underlying mechanisms which maintain health anxiety, such as rumination and avoidance, hypervigilance to body sensations and misinterpretation of such sensations. We also describe some of the adaptations which were made to the MBCT protocol for recurrent depression in this trial and discuss the rationale for these adaptations. We use a case example from the trial to illustrate how MBCT was implemented and outline the experience of one of the participants who took part in an 8-week MBCT course. Finally, we detail some of the more general experiences of participants and discuss the advantages and possible limitations of this approach for this population, as well as considering what might be useful avenues to explore in future research.",
"title": ""
},
{
"docid": "3b988fe1c91096f67461dc9fc7bb6fae",
"text": "The paper analyzes the test setup required by the International Electrotechnical Commission (IEC) 61000-4-4 to evaluate the immunity of electronic equipment to electrical fast transients (EFTs), and proposes an electrical model of the capacitive coupling clamp, which is employed to add disturbances to nominal signals. The study points out limits on accuracy of this model, and shows how it can be fruitfully employed to predict the interference waveform affecting nominal system signals through computer simulations.",
"title": ""
},
{
"docid": "460accc688fa58684ae1a00fd5d5ddfa",
"text": "3D computer animation often struggles to compete with the flexibility and expressiveness commonly found in traditional animation, particularly when rendered non-photorealistically. We present an animation tool that takes skeleton-driven 3D computer animations and generates expressive deformations to the character geometry. The technique is based upon the cartooning and animation concepts of 'lines of action' and 'lines of motion' and automatically infuses computer animations with some of the expressiveness displayed by traditional animation. Motion and pose-based expressive deformations are generated from the motion data and the character geometry is warped along each limb's individual line of motion. The effect of this subtle, yet significant, warping is twofold: geometric inter-frame consistency is increased which helps create visually smoother animated sequences, and the warped geometry provides a novel solution to the problem of implied motion in non-photorealistic still images.",
"title": ""
},
{
"docid": "d3c91b43a4ac5b50f2faa02811616e72",
"text": "BACKGROUND\nSleep disturbance is common among disaster survivors with posttraumatic stress symptoms but is rarely addressed as a primary therapeutic target. Sleep Dynamic Therapy (SDT), an integrated program of primarily evidence-based, nonpharmacologic sleep medicine therapies coupled with standard clinical sleep medicine instructions, was administered to a large group of fire evacuees to treat posttraumatic insomnia and nightmares and determine effects on posttraumatic stress severity.\n\n\nMETHOD\nThe trial was an uncontrolled, prospective pilot study of SDT for 66 adult men and women, 10 months after exposure to the Cerro Grande Fire. SDT was provided to the entire group in 6, weekly, 2-hour sessions. Primary and secondary outcomes included validated scales for insomnia, nightmares, posttraumatic stress, anxiety, and depression, assessed at 2 pretreatment baselines on average 8 weeks apart, weekly during treatment, posttreatment, and 12-week follow-up.\n\n\nRESULTS\nSixty-nine participants completed both pretreatment assessment, demonstrating small improvement in symptoms prior to starting SDT. Treatment and posttreatment assessments were completed by 66 participants, and 12-week follow-up was completed by 59 participants. From immediate pretreatment (second baseline) to posttreatment, all primary and secondary scales decreased significantly (all p values < .0001) with consistent medium-sized effects (Cohen's d = 0.29 to 1.09), and improvements were maintained at follow-up. Posttraumatic stress disorder subscales demonstrated similar changes: intrusion (d = 0.56), avoidance (d = 0.45), and arousal (d = 0.69). Fifty-three patients improved, 10 worsened, and 3 reported no change in posttraumatic stress.\n\n\nCONCLUSION\nIn an uncontrolled pilot study, chronic sleep symptoms in fire disaster evacuees were treated with SDT, which was associated with substantive and stable improvements in sleep disturbance, posttraumatic stress, anxiety, and depression 12 weeks after initiating treatment.",
"title": ""
},
{
"docid": "bb0b9b679444291bceecd68153f6f480",
"text": "Path planning is one of the most significant and challenging subjects in robot control field. In this paper, a path planning method based on an improved shuffled frog leaping algorithm is proposed. In the proposed approach, a novel updating mechanism based on the median strategy is used to avoid local optimal solution problem in the general shuffled frog leaping algorithm. Furthermore, the fitness function is modified to make the path generated by the shuffled frog leaping algorithm smoother. In each iteration, the globally best frog is obtained and its position is used to lead the movement of the robot. Finally, some simulation experiments are carried out. The experimental results show the feasibility and effectiveness of the proposed algorithm in path planning for mobile robots.",
"title": ""
},
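A rough sketch of the core shuffled-frog-leaping update that the entry above modifies. Within each iteration the worst frog moves toward the best one; the paper's median-based update rule and its smoothness-aware fitness term are specific to that work and are only hinted at in the comments, not faithfully reproduced.

import numpy as np

rng = np.random.default_rng(0)

def fitness(pos, goal):
    # Placeholder fitness: distance to the goal (a real planner would add obstacle and smoothness terms).
    return np.linalg.norm(pos - goal)

goal = np.array([9.0, 9.0])
frogs = rng.uniform(0.0, 10.0, size=(20, 2))   # candidate waypoints in a 10x10 workspace
max_step = 2.0

for _ in range(100):
    order = np.argsort([fitness(f, goal) for f in frogs])
    best, worst = frogs[order[0]], frogs[order[-1]]
    # Standard SFLA move: pull the worst frog toward the best one.
    # (The paper replaces the leader with a median-based position to avoid local optima.)
    step = np.clip(rng.uniform() * (best - worst), -max_step, max_step)
    candidate = worst + step
    if fitness(candidate, goal) < fitness(worst, goal):
        frogs[order[-1]] = candidate
    else:
        frogs[order[-1]] = rng.uniform(0.0, 10.0, size=2)   # random reset, as in basic SFLA

print(frogs[np.argmin([fitness(f, goal) for f in frogs])])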
{
"docid": "123f5d93d0b7c483a50d73ba04762550",
"text": "Chemistry and biology are intimately connected sciences yet the chemistry-biology interface remains problematic and central issues regarding the very essence of living systems remain unresolved. In this essay we build on a kinetic theory of replicating systems that encompasses the idea that there are two distinct kinds of stability in nature-thermodynamic stability, associated with \"regular\" chemical systems, and dynamic kinetic stability, associated with replicating systems. That fundamental distinction is utilized to bridge between chemistry and biology by demonstrating that within the parallel world of replicating systems there is a second law analogue to the second law of thermodynamics, and that Darwinian theory may, through scientific reductionism, be related to that second law analogue. Possible implications of these ideas to the origin of life problem and the relationship between chemical emergence and biological evolution are discussed.",
"title": ""
},
{
"docid": "01683120a2199b55d8f4aaca27098a47",
"text": "As the microblogging service (such as Weibo) is becoming popular, spam becomes a serious problem of affecting the credibility and readability of Online Social Networks. Most existing studies took use of a set of features to identify spam, but without the consideration of the overlap and dependency among different features. In this study, we investigate the problem of spam detection by analyzing real spam dataset collections of Weibo and propose a novel hybrid model of spammer detection, called SDHM, which utilizing significant features, i.e. user behavior information, online social network attributes and text content characteristics, in an organic way. Experiments on real Weibo dataset demonstrate the power of the proposed hybrid model and the promising performance.",
"title": ""
},
{
"docid": "d9888d448df6329e9a9b4fb5c1385ee3",
"text": "Designing and developing a comfortable and convenient EEG system for daily usage that can provide reliable and robust EEG signal, encompasses a number of challenges. Among them, the most ambitious is the reduction of artifacts due to body movements. This paper studies the effect of head movement artifacts on the EEG signal and on the dry electrode-tissue impedance (ETI), monitored continuously using the imec's wireless EEG headset. We have shown that motion artifacts have huge impact on the EEG spectral content in the frequency range lower than 20Hz. Coherence and spectral analysis revealed that ETI is not capable of describing disturbances at very low frequencies (below 2Hz). Therefore, we devised a motion artifact reduction (MAR) method that uses a combination of a band-pass filtering and multi-channel adaptive filtering (AF), suitable for real-time MAR. This method was capable of substantially reducing artifacts produced by head movements.",
"title": ""
},
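A toy single-channel illustration of the adaptive-filtering idea in the entry above: a reference signal correlated with the motion artifact (in the paper, the electrode-tissue impedance and other channels; here, a synthetic reference) drives an LMS filter that estimates and subtracts the artifact from the contaminated EEG. Signal shapes, filter length and step size are arbitrary demo values.

import numpy as np

rng = np.random.default_rng(1)
n = 2000
t = np.arange(n) / 250.0                       # 250 Hz "EEG" time base
eeg = 0.5 * np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(n)
reference = np.sin(2 * np.pi * 1.3 * t)        # measured motion-related reference
artifact = 2.0 * reference + 0.3 * np.roll(reference, 3)
contaminated = eeg + artifact

# LMS adaptive filter: predict the artifact from the reference, subtract the prediction.
taps, mu = 8, 0.01
w = np.zeros(taps)
cleaned = np.zeros(n)
for i in range(taps, n):
    x = reference[i - taps + 1:i + 1][::-1]    # most recent reference samples
    y_hat = w @ x                              # estimated artifact sample
    e = contaminated[i] - y_hat                # error = cleaned EEG sample
    w += 2 * mu * e * x                        # LMS weight update
    cleaned[i] = e

print(np.std(contaminated - eeg), np.std(cleaned[taps:] - eeg[taps:]))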
{
"docid": "39bfd705fb71e9ba4a503246408c6820",
"text": "We develop a theoretical model to describe and explain variation in corporate governance among advanced capitalist economies, identifying the social relations and institutional arrangements that shape who controls corporations, what interests corporations serve, and the allocation of rights and responsibilities among corporate stakeholders. Our “actor-centered” institutional approach explains firm-level corporate governance practices in terms of institutional factors that shape how actors’ interests are defined (“socially constructed”) and represented. Our model has strong implications for studying issues of international convergence.",
"title": ""
},
{
"docid": "e2c4f9cfce1db6282fe3a23fd5d6f3a4",
"text": "In semi-structured case-oriented business processes, the sequence of process steps is determined by case workers based on available document content associated with a case. Transitions between process execution steps are therefore case specific and depend on independent judgment of case workers. In this paper, we propose an instance-specific probabilistic process model (PPM) whose transition probabilities are customized to the semi-structured business process instance it represents. An instance-specific PPM serves as a powerful representation to predict the likelihood of different outcomes. We also show that certain instance-specific PPMs can be transformed into a Markov chain under some non-restrictive assumptions. For instance-specific PPMs that contain parallel execution of tasks, we provide an algorithm to map them to an extended space Markov chain. This way existing Markov techniques can be leveraged to make predictions about the likelihood of executing future tasks. Predictions provided by our technique could generate early alerts for case workers about the likelihood of important or undesired outcomes in an executing case instance. We have implemented and validated our approach on a simulated automobile insurance claims handling semi-structured business process. Results indicate that an instance-specific PPM provides more accurate predictions than other methods such as conditional probability. We also show that as more document data become available, the prediction accuracy of an instance-specific PPM increases.",
"title": ""
},
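To make the Markov-chain view above concrete, here is a small sketch: given an instance-specific transition matrix over process steps, the probability of reaching a particular outcome state within k steps can be read off by propagating the state distribution. The states and probabilities below are invented for illustration and are not from the insurance-claims case study.

import numpy as np

states = ["receive", "assess", "investigate", "approve", "reject"]   # hypothetical steps
P = np.array([
    [0.0, 1.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.5, 0.4, 0.1],
    [0.0, 0.3, 0.0, 0.4, 0.3],
    [0.0, 0.0, 0.0, 1.0, 0.0],   # absorbing outcome: approve
    [0.0, 0.0, 0.0, 0.0, 1.0],   # absorbing outcome: reject
])

def outcome_probability(P, start, outcome, k):
    """Probability of being in `outcome` after at most k transitions from `start`."""
    dist = np.zeros(len(P))
    dist[start] = 1.0
    for _ in range(k):
        dist = dist @ P
    return dist[outcome]

print(outcome_probability(P, states.index("receive"), states.index("reject"), k=10))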
{
"docid": "fc509e8f8c0076ad80df5ff6ee6b6f1e",
"text": "The purposes of this study are to construct an instrument to evaluate service quality of mobile value-added services and have a further discussion of the relationships among service quality, perceived value, customer satisfaction, and post-purchase intention. Structural equation modeling and multiple regression analysis were used to analyze the data collected from college and graduate students of fifteen major universities in Taiwan. The main findings are as follows: (1) service quality positively influences both perceived value and customer satisfaction; (2) perceived value positively influences on both customer satisfaction and post-purchase intention; (3) customer satisfaction positively influences post-purchase intention; (4) service quality has an indirect positive influence on post-purchase intention through customer satisfaction or perceived value; (5) among the dimensions of service quality, “customer service and system reliability” is most influential on perceived value and customer satisfaction, and the influence of “content quality” ranks second; (6) the proposed model is proven with the effectiveness in explaining the relationships among service quality, perceived value, customer satisfaction, and post-purchase intention in mobile added-value services.",
"title": ""
},
{
"docid": "450a0ffcd35400f586e766d68b75cc98",
"text": "While there has been a success in 2D human pose estimation with convolutional neural networks (CNNs), 3D human pose estimation has not been thoroughly studied. In this paper, we tackle the 3D human pose estimation task with end-to-end learning using CNNs. Relative 3D positions between one joint and the other joints are learned via CNNs. The proposed method improves the performance of CNN with two novel ideas. First, we added 2D pose information to estimate a 3D pose from an image by concatenating 2D pose estimation result with the features from an image. Second, we have found that more accurate 3D poses are obtained by combining information on relative positions with respect to multiple joints, instead of just one root joint. Experimental results show that the proposed method achieves comparable performance to the state-of-the-art methods on Human 3.6m dataset.",
"title": ""
},
{
"docid": "81384d801ba37feaca150eca5621afbb",
"text": "Next-generation sequencing technologies have had a dramatic impact in the field of genomic research through the provision of a low cost, high-throughput alternative to traditional capillary sequencers. These new sequencing methods have surpassed their original scope and now provide a range of utility-based applications, which allow for a more comprehensive analysis of the structure and content of microbial genomes than was previously possible. With the commercialization of a third generation of sequencing technologies imminent, we discuss the applications of current next-generation sequencing methods and explore their impact on and contribution to microbial genome research.",
"title": ""
},
{
"docid": "e2a605f5c22592bd5ca828d4893984be",
"text": "Deep neural networks are complex and opaque. As they enter application in a variety of important and safety critical domains, users seek methods to explain their output predictions. We develop an approach to explaining deep neural networks by constructing causal models on salient concepts contained in a CNN. We develop methods to extract salient concepts throughout a target network by using autoencoders trained to extract humanunderstandable representations of network activations. We then build a bayesian causal model using these extracted concepts as variables in order to explain image classification. Finally, we use this causal model to identify and visualize features with significant causal influence on final classification.",
"title": ""
},
{
"docid": "1d949b64320fce803048b981ae32ce38",
"text": "In the field of voice therapy, perceptual evaluation is widely used by expert listeners as a way to evaluate pathological and normal voice quality. This approach is understandably subjective as it is subject to listeners’ bias which high interand intra-listeners variability can be found. As such, research on automatic assessment of pathological voices using a combination of subjective and objective analyses emerged. The present study aimed to develop a complementary automatic assessment system for voice quality based on the well-known GRBAS scale by using a battery of multidimensional acoustical measures through Deep Neural Networks. A total of 44 dimensionality parameters including Mel-frequency Cepstral Coefficients, Smoothed Cepstral Peak Prominence and Long-Term Average Spectrum was adopted. In addition, the state-of-the-art automatic assessment system based on Modulation Spectrum (MS) features and GMM classifiers was used as comparison system. The classification results using the proposed method revealed a moderate correlation with subjective GRBAS scores of dysphonic severity, and yielded a better performance than MS-GMM system, with the best accuracy around 81.53%. The findings indicate that such assessment system can be used as an appropriate evaluation tool in determining the presence and severity of voice disorders.",
"title": ""
},
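A minimal feature-extraction sketch in the spirit of the entry above, using librosa to compute MFCCs and summarizing them per recording. The smoothed cepstral peak prominence and long-term average spectrum features, the DNN architecture and the GRBAS labels are not included, and the file name is a placeholder.

import numpy as np
import librosa

def mfcc_summary(path, n_mfcc=13):
    """Mean and standard deviation of MFCCs over time -> one fixed-length vector per recording."""
    y, sr = librosa.load(path, sr=None)                       # keep the native sampling rate
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)    # shape (n_mfcc, frames)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# A classifier (the paper uses a deep neural network) would then map these vectors
# to the perceptual G/R/B/A/S severity ratings.
features = mfcc_summary("sustained_vowel.wav")                # placeholder filename
print(features.shape)                                         # (26,) for 13 MFCCs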
{
"docid": "ac1cf73b0f59279d02611239781af7c1",
"text": "This paper presents V3, an unsupervised system for aspect-based Sentiment Analysis when evaluated on the SemEval 2014 Task 4. V3 focuses on generating a list of aspect terms for a new domain using a collection of raw texts from the domain. We also implement a very basic approach to classify the aspect terms into categories and assign polarities to them.",
"title": ""
},
{
"docid": "b43bcd460924f0b5a7366f23bf0d8fe7",
"text": "Historically, it has been difficult to define paraphilias in a consistent manner or distinguish paraphilias from non-paraphilic or normophilic sexual interests (see Blanchard, 2009a; Moser & Kleinplatz, 2005). As part of the American Psychiatric Association’s (APA) process of revising the Diagnostic and Statistical Manual of Mental Disorders (DSM), Blanchard (2010a), the chair of the DSM-5 Paraphilias subworkgroup (PSWG), has proposed a new paraphilia definition: ‘‘A paraphilia is any powerful and persistent sexual interest other than sexual interest in copulatory or precopulatory behavior with phenotypicallynormal, consentingadulthumanpartners’’ (p. 367). Blanchard (2009a) acknowledges that his paraphilia ‘‘definition is not watertight’’and it already has attracted serious criticism (see Haeberle, 2010; Hinderliter, 2010; Singy, 2010). The current analysis will critique three components of Blanchard’s proposed definition (sexual interest in copulatory or precopulatory behavior, phenotypically normal, and consenting adult human partners) to determine if the definition is internally consistent andreliably distinguishes individualswith a paraphilia from individuals with normophilia. Blanchard (2009a) believes his definition ‘‘is better than no real definition,’’but that remains to be seen. According to Blanchard (2009a), the current DSM paraphilia definition (APA, 2000) is a definition by concatenation (a list of things that are paraphilias), but he believes a definition by exclusion (everything that is not normophilic) is preferable. The change is not substantive as normophilia (formerly a definitionofexclusion)nowbecomesadefinitionofconcatenation (a list of acceptable activities). Nevertheless, it seems odd to define a paraphilia on the basis of what it is not, rather than by the commonalities among the different paraphilias. Most definitions are statements of what things are, not what things are excluded or lists of things to be included. Blanchard (2009a) purposefully left ‘‘intact the distinction betweennormativeandnon-normativesexualbehavior,’’implying that these categories are meaningful. Blanchard (2010b; see alsoBlanchardetal.,2009)definesaparaphiliabyrelativeascertainment (the interest in paraphilic stimuli is greater than the interest in normophilic stimuli) rather than absolute ascertainment (the interest is intense). Using relative ascertainment confirms that one cannot be both paraphilic and normophilic; the greater interest would classify the individual as paraphilic or normophilic. Blanchard (2010a) then contradicts himself when he asserts that once ascertained with a paraphilia, the individual should retain that label, even if the powerful and persistent paraphilic sexual interest dissipates. Logically, the relative dissipation of the paraphilic and augmentation of the normophilic interests should re-categorize the individual as normophilic. The first aspect of Blanchard’s paraphilia definition is the ‘‘sexual interest incopulatoryorprecopulatorybehavior.’’Obviously, most normophilic individuals do not desire or respond sexually to all adults. Ascertaining if someone is more aroused by the coitus or their partner’s physique, attitude, attributes, etc. seems fruitless and hopelessly convoluted. I can see no other way to interpret sexual interest in copulatory or precopulatory behavior, except to conclude that coitus (between phenotypically normal consenting adults) is normophilic. 
Otherwise, a powerful and persistent preference for blonde (or Asian or petite) coital partners is a paraphilia. If a relative lack of sexual interest in brunettes as potential coital partners indicates a … Another version of this definition exists (Blanchard, 2009a, 2009b), but I do not believe the changes substantially alter any of my comments.",
"title": ""
},
{
"docid": "c30d53cd8c350615f20d5baef55de6d0",
"text": "The Internet of Things (IoT) is everywhere around us. Smart communicating objects offer the digitalization of lives. Thus, IoT opens new opportunities in criminal investigations such as a protagonist or a witness to the event. Any investigation process involves four phases: firstly the identification of an incident and its evidence, secondly device collection and preservation, thirdly data examination and extraction and then finally data analysis and formalization.\n In recent years, the scientific community sought to develop a common digital framework and methodology adapted to IoT-based infrastructure. However, the difficulty of IoT lies in the heterogeneous nature of the device, lack of standards and the complex architecture. Although digital forensics are considered and adopted in IoT investigations, this work only focuses on collection. Indeed the identification phase is relatively unexplored. It addresses challenges of finding the best evidence and locating hidden devices. So, the traditional method of digital forensics does not fully fit the IoT environment.\n In this paperwork, we investigate the mobility in the context of IoT at the crime scene. This paper discusses the data identification and the classification methodology from IoT to looking for the best evidences. We propose tools and techniques to identify and locate IoT devices. We develop the recent concept of \"digital footprint\" in the crime area based on frequencies and interactions mapping between devices. We propose technical and data criteria to efficiently select IoT devices. Finally, the paper introduces a generalist classification table as well as the limits of such an approach.",
"title": ""
},
{
"docid": "0a557bbd59817ceb5ae34699c72d79ee",
"text": "In this paper, we propose a PTS-based approach to solve the high peak-to-average power ratio (PAPR) problem in filter bank multicarrier (FBMC) system with the consider of the prototype filter and the overlap feature of the symbols in time domain. In this approach, we improve the performance of the traditional PTS approach by modifying the choice of the best weighting factors with the consideration of the overlap between the present symbol and the past symbols. The simulation result shows this approach performs better than traditional PTS approach in the reduction of PAPR in FBMC system.",
"title": ""
},
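A bare-bones sketch of the conventional partial transmit sequence (PTS) search that the entry above builds on, written for a plain OFDM symbol: subcarriers are split into blocks, each block is rotated by a candidate phase factor, and the combination with the lowest PAPR is kept. The FBMC prototype filter and the overlap-aware selection of weighting factors, which are the paper's actual contribution, are not modeled; block count and phase set are common textbook choices.

import itertools
import numpy as np

rng = np.random.default_rng(2)

def papr_db(x):
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

N, V = 64, 4                                   # subcarriers, PTS sub-blocks
phases = [1, -1, 1j, -1j]                      # candidate phase factors
symbols = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=N)   # QPSK symbols

# Disjoint (interleaved) partition into V sub-blocks, each transformed separately.
sub_signals = []
for v in range(V):
    block = np.zeros(N, complex)
    block[v::V] = symbols[v::V]
    sub_signals.append(np.fft.ifft(block))

best = None
for b in itertools.product(phases, repeat=V):  # exhaustive search over phase combinations
    candidate = sum(bv * s for bv, s in zip(b, sub_signals))
    value = papr_db(candidate)
    if best is None or value < best[0]:
        best = (value, b)

print("original PAPR:", papr_db(np.fft.ifft(symbols)), "dB; after PTS:", best[0], "dB")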
{
"docid": "8705415b41d8b3c2e7cb4f7523e0f958",
"text": "Research in the field of Computer Supported Collaborative Learning (CSCL) is based on a wide variety of methodologies. In this paper, we focus upon content analysis, which is a technique often used to analyze transcripts of asynchronous, computer mediated discussion groups in formal educational settings. Although this research technique is often used, standards are not yet established. The applied instruments reflect a wide variety of approaches and differ in their level of detail and the type of analysis categories used. Further differences are related to a diversity in their theoretical base, the amount of information about validity and reliability, and the choice for the unit of analysis. This article presents an overview of different content analysis instruments, building on a sample of models commonly used in the CSCL-literature. The discussion of 15 instruments results in a number of critical conclusions. There are questions about the coherence between the theoretical base and the operational translation of the theory in the instruments. Instruments are hardly compared or contrasted with one another. As a consequence the empirical base of the validity of the instruments is limited. The analysis is rather critical when it comes to the issue of reliability. The authors put forward the need to improve the theoretical and empirical base of the existing instruments in order to promote the overall quality of CSCL-research. 2005 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
99051e983f91eea7b2e3c66f305c2d63
|
Machine Recognition of Music Emotion: A Review
|
[
{
"docid": "c692dd35605c4af62429edef6b80c121",
"text": "As one of the most important mid-level features of music, chord contains rich information of harmonic structure that is useful for music information retrieval. In this paper, we present a chord recognition system based on the N-gram model. The system is time-efficient, and its accuracy is comparable to existing systems. We further propose a new method to construct chord features for music emotion classification and evaluate its performance on commercial song recordings. Experimental results demonstrate the advantage of using chord features for music classification and retrieval.",
"title": ""
}
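To illustrate the two ideas in the entry above with a toy example: an N-gram (here bigram) model over a chord sequence, and a simple chord-based feature vector that a downstream emotion classifier could consume. The chord vocabulary and progression are invented; real systems estimate chords from audio first, and the paper's exact feature construction may differ.

from collections import Counter, defaultdict

chords = ["C", "Am", "F", "G", "C", "Am", "F", "G", "C", "F", "G", "C"]   # toy progression
vocab = sorted(set(chords))

# Bigram chord model with add-one smoothing: P(next | current).
bigrams = defaultdict(Counter)
for prev, nxt in zip(chords, chords[1:]):
    bigrams[prev][nxt] += 1

def bigram_prob(prev, nxt):
    counts = bigrams[prev]
    return (counts[nxt] + 1) / (sum(counts.values()) + len(vocab))

print("P(F | Am) =", bigram_prob("Am", "F"))

# One simple chord feature vector for classification: a normalized chord histogram
# plus the probabilities of the observed transitions.
hist = Counter(chords)
histogram_feature = [hist[c] / len(chords) for c in vocab]
transition_feature = [bigram_prob(p, n) for p, n in zip(chords, chords[1:])]
print(histogram_feature)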
] |
[
{
"docid": "502d31f5f473f3e93ee86bdfd79e0d75",
"text": "The call-by-need lambda calculus provides an equational framework for reasoning syntactically about lazy evaluation. This paper examines its operational characteristics.\n By a series of reasoning steps, we systematically unpack the standard-order reduction relation of the calculus and discover a novel abstract machine definition which, like the calculus, goes \"under lambdas.\" We prove that machine evaluation is equivalent to standard-order evaluation.\n Unlike traditional abstract machines, delimited control plays a significant role in the machine's behavior. In particular, the machine replaces the manipulation of a heap using store-based effects with disciplined management of the evaluation stack using control-based effects. In short, state is replaced with control.\n To further articulate this observation, we present a simulation of call-by-need in a call-by-value language using delimited control operations.",
"title": ""
},
{
"docid": "5666b1a6289f4eac05531b8ff78755cb",
"text": "Neural text generation models are often autoregressive language models or seq2seq models. These models generate text by sampling words sequentially, with each word conditioned on the previous word, and are state-of-the-art for several machine translation and summarization benchmarks. These benchmarks are often defined by validation perplexity even though this is not a direct measure of the quality of the generated text. Additionally, these models are typically trained via maximum likelihood and teacher forcing. These methods are well-suited to optimizing perplexity but can result in poor sample quality since generating text requires conditioning on sequences of words that may have never been observed at training time. We propose to improve sample quality using Generative Adversarial Networks (GANs), which explicitly train the generator to produce high quality samples and have shown a lot of success in image generation. GANs were originally designed to output differentiable values, so discrete language generation is challenging for them. We claim that validation perplexity alone is not indicative of the quality of text generated by a model. We introduce an actor-critic conditional GAN that fills in missing text conditioned on the surrounding context. We show qualitatively and quantitatively, evidence that this produces more realistic conditional and unconditional text samples compared to a maximum likelihood trained model.",
"title": ""
},
{
"docid": "8c3ecd27a695fef2d009bbf627820a0d",
"text": "This paper presents a novel attention mechanism to improve stereo-vision based object recognition systems in terms of recognition performance and computational efficiency at the same time. We utilize the Stixel World, a compact medium-level 3D representation of the local environment, as an early focus-of-attention stage for subsequent system modules. In particular, the search space of computationally expensive pattern classifiers is significantly narrowed down. We explicitly couple the 3D Stixel representation with prior knowledge about the object class of interest, i.e. 3D geometry and symmetry, to precisely focus processing on well-defined local regions that are consistent with the environment model. Experiments are conducted on large real-world datasets captured from a moving vehicle in urban traffic. In case of vehicle recognition as an experimental testbed, we demonstrate that the proposed Stixel-based attention mechanism significantly reduces false positive rates at constant sensitivity levels by up to a factor of 8 over state-of-the-art. At the same time, computational costs are reduced by more than an order of magnitude.",
"title": ""
},
{
"docid": "b825426604420620e1bba43c0f45115e",
"text": "Taxonomies are the backbone of many structured, semantic knowledge resources. Recent works for extracting taxonomic relations from text focused on collecting lexical-syntactic patterns to extract the taxonomic relations by matching the patterns to text. These approaches, however, often show low coverage due to the lack of contextual analysis across sentences. To address this issue, we propose a novel approach that collectively utilizes contextual information of terms in syntactic structures such that if the set of contexts of a term includes most of contexts of another term, a subsumption relation between the two terms is inferred. We apply this method to the task of taxonomy construction from scratch, where we introduce another novel graph-based algorithm for taxonomic structure induction. Our experiment results show that the proposed method is well complementary with previous methods of linguistic pattern matching and significantly improves recall and thus F-measure.",
"title": ""
},
{
"docid": "44ff9580f0ad6321827cf3f391a61151",
"text": "This paper aims to evaluate the aesthetic visual quality of a special type of visual media: digital images of paintings. Assessing the aesthetic visual quality of paintings can be considered a highly subjective task. However, to some extent, certain paintings are believed, by consensus, to have higher aesthetic quality than others. In this paper, we treat this challenge as a machine learning problem, in order to evaluate the aesthetic quality of paintings based on their visual content. We design a group of methods to extract features to represent both the global characteristics and local characteristics of a painting. Inspiration for these features comes from our prior knowledge in art and a questionnaire survey we conducted to study factors that affect human's judgments. We collect painting images and ask human subjects to score them. These paintings are then used for both training and testing in our experiments. Experimental results show that the proposed work can classify high-quality and low-quality paintings with performance comparable to humans. This work provides a machine learning scheme for the research of exploring the relationship between aesthetic perceptions of human and the computational visual features extracted from paintings.",
"title": ""
},
{
"docid": "616bd9a0599c2039ca6d32fd855b43da",
"text": "A new software-based liveness detection approach using a novel fingerprint parameterization based on quality related features is proposed. The system is tested on a highly challenging database comprising over 10,500 real and fake images acquired with five sensors of different technologies and covering a wide range of direct attack scenarios in terms of materials and procedures followed to generate the gummy fingers. The proposed solution proves to be robust to the multi-scenario dataset, and presents an overall rate of 90% correctly classified samples. Furthermore, the liveness detection method presented has the added advantage over previously studied techniques of needing just one image from a finger to decide whether it is real or fake. This last characteristic provides the method with very valuable features as it makes it less intrusive, more user friendly, faster and reduces its implementation costs.",
"title": ""
},
{
"docid": "4d11eca5601f5128801a8159a154593a",
"text": "Polymorphic malware belong to the class of host based threats which defy signature based detection mechanisms. Threat actors use various code obfuscation methods to hide the code details of the polymorphic malware and each dynamic iteration of the malware bears different and new signatures therefore makes its detection harder by signature based antimalware programs. Sandbox based detection systems perform syntactic analysis of the binary files to find known patterns from the un-encrypted segment of the malware file. Anomaly based detection systems can detect polymorphic threats but generate enormous false alarms. In this work, authors present a novel cognitive framework using semantic features to detect the presence of polymorphic malware inside a Microsoft Windows host using a process tree based temporal directed graph. Fractal analysis is performed to find cognitively distinguishable patterns of the malicious processes containing polymorphic malware executables. The main contributions of this paper are; the presentation of a graph theoretic approach for semantic characterization of polymorphism in the operating system's process tree, and the cognitive feature extraction of the polymorphic behavior for detection over a temporal process space.",
"title": ""
},
{
"docid": "f23316e66118193da4c6f166edfae6c0",
"text": "We present the first unsupervised approach for semantic parsing that rivals the accuracy of supervised approaches in translating natural-language questions to database queries. Our GUSP system produces a semantic parse by annotating the dependency-tree nodes and edges with latent states, and learns a probabilistic grammar using EM. To compensate for the lack of example annotations or question-answer pairs, GUSP adopts a novel grounded-learning approach to leverage database for indirect supervision. On the challenging ATIS dataset, GUSP attained an accuracy of 84%, effectively tying with the best published results by supervised approaches.",
"title": ""
},
{
"docid": "5ee21318b1601a1d42162273a7c9026c",
"text": "We used a knock-in strategy to generate two lines of mice expressing Cre recombinase under the transcriptional control of the dopamine transporter promoter (DAT-cre mice) or the serotonin transporter promoter (SERT-cre mice). In DAT-cre mice, immunocytochemical staining of adult brains for the dopamine-synthetic enzyme tyrosine hydroxylase and for Cre recombinase revealed that virtually all dopaminergic neurons in the ventral midbrain expressed Cre. Crossing DAT-cre mice with ROSA26-stop-lacZ or ROSA26-stop-YFP reporter mice revealed a near perfect correlation between staining for tyrosine hydroxylase and beta-galactosidase or YFP. YFP-labeled fluorescent dopaminergic neurons could be readily identified in live slices. Crossing SERT-cre mice with the ROSA26-stop-lacZ or ROSA26-stop-YFP reporter mice similarly revealed a near perfect correlation between staining for serotonin-synthetic enzyme tryptophan hydroxylase and beta-galactosidase or YFP. Additional Cre expression in the thalamus and cortex was observed, reflecting the known pattern of transient SERT expression during early postnatal development. These findings suggest a general strategy of using neurotransmitter transporter promoters to drive selective Cre expression and thus control mutations in specific neurotransmitter systems. Crossed with fluorescent-gene reporters, this strategy tags neurons by neurotransmitter status, providing new tools for electrophysiology and imaging.",
"title": ""
},
{
"docid": "b73f0b44786330a363bbbcbb71c63219",
"text": "In the third shared task of the Computational Approaches to Linguistic CodeSwitching (CALCS) workshop, we focus on Named Entity Recognition (NER) on code-switched social-media data. We divide the shared task into two competitions based on the English-Spanish (ENG-SPA) and Modern Standard ArabicEgyptian (MSA-EGY) language pairs. We use Twitter data and 9 entity types to establish a new dataset for code-switched NER benchmarks. In addition to the CS phenomenon, the diversity of the entities and the social media challenges make the task considerably hard to process. As a result, the best scores of the competitions are 63.76% and 71.61% for ENG-SPA and MSA-EGY, respectively. We present the scores of 9 participants and discuss the most common challenges among submissions.",
"title": ""
},
{
"docid": "1128977e3831283b900f7d1c344f6713",
"text": "In this work, we present a framework to capture 3D models of faces in high resolutions with low computational load. The system captures only two pictures of the face, one illuminated with a colored stripe pattern and one with regular white light. The former is needed for the depth calculation, the latter is used as texture. Having these two images a combination of specialized algorithms is applied to generate a 3D model. The results are shown in different views: simple surface, wire grid respective polygon mesh or textured 3D surface.",
"title": ""
},
{
"docid": "79ad27cffbbcbe3a49124abd82c6e477",
"text": "In this paper we address the following problem in web document and information retrieval (IR): How can we use long-term context information to gain better IR performance? Unlike common IR methods that use bag of words representation for queries and documents, we treat them as a sequence of words and use long short term memory (LSTM) to capture contextual dependencies. To the best of our knowledge, this is the first time that LSTM is applied to information retrieval tasks. Unlike training traditional LSTMs, the training strategy is different due to the special nature of information retrieval problem. Experimental evaluation on an IR task derived from the Bing web search demonstrates the ability of the proposed method in addressing both lexical mismatch and long-term context modelling issues, thereby, significantly outperforming existing state of the art methods for web document retrieval task.",
"title": ""
},
{
"docid": "67808f54305bc2bb2b3dd666f8b4ef42",
"text": "Sensing devices are becoming the source of a large portion of the Web data. To facilitate the integration of sensed data with data from other sources, both sensor stream sources and data are being enriched with semantic descriptions, creating Linked Stream Data. Despite its enormous potential, little has been done to explore Linked Stream Data. One of the main characteristics of such data is its “live” nature, which prohibits existing Linked Data technologies to be applied directly. Moreover, there is currently a lack of tools to facilitate publishing Linked Stream Data and making it available to other applications. To address these issues we have developed the Linked Stream Middleware (LSM), a platform that brings together the live real world sensed data and the Semantic Web. A LSM deployment is available at http://lsm.deri.ie/. It provides many functionalities such as: i) wrappers for real time data collection and publishing; ii) a web interface for data annotation and visualisation; and iii) a SPARQL endpoint for querying unified Linked Stream Data and Linked Data. In this paper we describe the system architecture behind LSM, provide details how Linked Stream Data is generated, and demonstrate the benefits of the platform by showcasing its interface.",
"title": ""
},
{
"docid": "b6da971f13c1075ce1b4aca303e7393f",
"text": "In this paper, we evaluate the generalization power of deep features (ConvNets) in two new scenarios: aerial and remote sensing image classification. We evaluate experimentally ConvNets trained for recognizing everyday objects for the classification of aerial and remote sensing images. ConvNets obtained the best results for aerial images, while for remote sensing, they performed well but were outperformed by low-level color descriptors, such as BIC. We also present a correlation analysis, showing the potential for combining/fusing different ConvNets with other descriptors or even for combining multiple ConvNets. A preliminary set of experiments fusing ConvNets obtains state-of-the-art results for the well-known UCMerced dataset.",
"title": ""
},
{
"docid": "9bff76e87f4bfa3629e38621060050f7",
"text": "Non-textual components such as charts, diagrams and tables provide key information in many scientific documents, but the lack of large labeled datasets has impeded the development of data-driven methods for scientific figure extraction. In this paper, we induce high-quality training labels for the task of figure extraction in a large number of scientific documents, with no human intervention. To accomplish this we leverage the auxiliary data provided in two large web collections of scientific documents (arXiv and PubMed) to locate figures and their associated captions in the rasterized PDF. We share the resulting dataset of over 5.5 million induced labels---4,000 times larger than the previous largest figure extraction dataset---with an average precision of 96.8%, to enable the development of modern data-driven methods for this task. We use this dataset to train a deep neural network for end-to-end figure detection, yielding a model that can be more easily extended to new domains compared to previous work. The model was successfully deployed in Semantic Scholar,\\footnote\\urlhttps://www.semanticscholar.org/ a large-scale academic search engine, and used to extract figures in 13 million scientific documents.\\footnoteA demo of our system is available at \\urlhttp://labs.semanticscholar.org/deepfigures/,and our dataset of induced labels can be downloaded at \\urlhttps://s3-us-west-2.amazonaws.com/ai2-s2-research-public/deepfigures/jcdl-deepfigures-labels.tar.gz. Code to run our system locally can be found at \\urlhttps://github.com/allenai/deepfigures-open.",
"title": ""
},
{
"docid": "9dad87b0134d9f165b0208baf40c7f0f",
"text": "Frequent Itemset Mining (FIM) is the most important and time-consuming step of association rules mining. With the increment of data scale, many efficient single-machine algorithms of FIM, such as FP-growth and Apriori, cannot accomplish the computing tasks within reasonable time. As a result of the limitation of single-machine methods, researchers presented some distributed algorithms based on MapReduce and Spark, such as PFP and YAFIM. Nevertheless, the heavy disk I/O cost at each MapReduce operation makes PFP not efficient enough. YAFIM needs to generate candidate frequent itemsets in each iterative step. It makes YAFIM time-consuming. And if the scale of data is large enough, YAFIM algorithm will not work due to the limitation of memory since the candidate frequent itemsets need to be stored in the memory. And the size of candidate itemsets is very large especially facing the massive data. In this work, we propose a distributed FP-growth algorithm based on Spark, we call it DFPS. DFPS partitions computing tasks in such a way that each computing node builds the conditional FP-tree and adopts a pattern fragment growth method to mine the frequent itemsets independently. DFPS doesn't need to pass messages between nodes during mining frequent itemsets. Our performance study shows that DFPS algorithm is more excellent than YAFIM, especially when the length of transactions is long, the number of items is large and the data is massive. And DFPS has an excellent scalability. The experimental results show that DFPS is more than 10 times faster than YAFIM for T10I4D100K dataset and Pumsb_star dataset.",
"title": ""
},
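For comparison with the entry above, Spark ships a parallel FP-growth implementation that can be driven from a few lines of PySpark. This is the stock API, not the DFPS algorithm described in the paper, and the tiny transaction table and thresholds are illustrative only.

from pyspark.sql import SparkSession
from pyspark.ml.fpm import FPGrowth

spark = SparkSession.builder.appName("fpgrowth-demo").getOrCreate()

# Toy transactions; real workloads would load T10I4D100K-sized data from a distributed store.
df = spark.createDataFrame(
    [(0, ["bread", "milk"]),
     (1, ["bread", "butter", "milk"]),
     (2, ["beer", "bread"]),
     (3, ["milk", "butter"])],
    ["id", "items"],
)

fp = FPGrowth(itemsCol="items", minSupport=0.5, minConfidence=0.6)
model = fp.fit(df)
model.freqItemsets.show()          # frequent itemsets with their counts
model.associationRules.show()      # rules derived from the itemsets

spark.stop()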
{
"docid": "3c8d59590b328e0b4ab6b856721009aa",
"text": "Mobile augmented reality (MAR) enabled devices have the capability to present a large amount of information in real time, based on sensors that determine proximity, visual reference, maps, and detailed information on the environment. Location and proximity technologies combined with detailed mapping allow effective navigation. Visual analysis software and growing image databases enable object recognition. Advanced graphics capabilities bring sophisticated presentation of the user interface. These capabilities together allow for real-time melding of the physical and the virtual worlds and can be used for information overlay of the user’s environment for various purposes such as entertainment, tourist assistance, navigation assistance, and education [ 1 ] . In designing for MAR applications it is very important to understand the context in which the information has to be presented. Past research on information presentation on small form factor computing has highlighted the importance of presenting the right information in the right way to effectively engage the user [ 2– 4 ] . The screen space that is available on a small form factor is limited, and having augmented information presented as an overlay poses very interesting challenges. MAR usages involve devices that are able to perceive the context of the user based on the location and other sensor based information. In their paper on “ContextAware Pervasive Systems: Architectures for a New Breed of Applications”, Loke [ 5 ] ,",
"title": ""
},
{
"docid": "87a14f9cfdec433672095c2b0d9b9dde",
"text": "This paper discusses a comprehensive suite of experiments that analyze the performance of the random forest (RF) learner implemented in Weka. RF is a relatively new learner, and to the best of our knowledge, only preliminary experimentation on the construction of random forest classifiers in the context of imbalanced data has been reported in previous work. Therefore, the contribution of this study is to provide an extensive empirical evaluation of RF learners built from imbalanced data. What should be the recommended default number of trees in the ensemble? What should the recommended value be for the number of attributes? How does the RF learner perform on imbalanced data when compared with other commonly-used learners? We address these and other related issues in this work.",
"title": ""
},
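A small sketch of the kind of experiment the entry above reports, using scikit-learn rather than Weka: a random forest trained on a synthetic imbalanced dataset, with the ensemble size, the number of candidate attributes per split and class weighting exposed as the knobs being studied. Parameter values are defaults/examples, not the paper's recommendations.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic 5%-minority dataset standing in for real imbalanced data.
X, y = make_classification(n_samples=5000, n_features=20, weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

rf = RandomForestClassifier(
    n_estimators=100,        # "number of trees in the ensemble"
    max_features="sqrt",     # "number of attributes" considered at each split
    class_weight="balanced", # one common way to compensate for class imbalance
    random_state=0,
)
rf.fit(X_tr, y_tr)
scores = rf.predict_proba(X_te)[:, 1]
print("AUC:", roc_auc_score(y_te, scores))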
{
"docid": "34dfcc1e7744afb236f14b5804214c40",
"text": "This paper presents a vision-based real-time gaze zone estimator based on a driver's head orientation composed of yaw and pitch. Generally, vision-based methods are vulnerable to the wearing of eyeglasses and image variations between day and night. The proposed method is novel in the following four ways: First, the proposed method can work under both day and night conditions and is robust to facial image variation caused by eyeglasses because it only requires simple facial features and not specific features such as eyes, lip corners, and facial contours. Second, an ellipsoidal face model is proposed instead of a cylindrical face model to exactly determine a driver's yaw. Third, we propose new features-the normalized mean and the standard deviation of the horizontal edge projection histogram-to reliably and rapidly estimate a driver's pitch. Fourth, the proposed method obtains an accurate gaze zone by using a support vector machine. Experimental results from 200 000 images showed that the root mean square errors of the estimated yaw and pitch angles are below 7 under both daylight and nighttime conditions. Equivalent results were obtained for drivers with glasses or sunglasses, and 18 gaze zones were accurately estimated using the proposed gaze estimation method.",
"title": ""
},
{
"docid": "d507fc48f5d2500251b72cb2ebc94d40",
"text": "We investigate the extent to which social ties between people can be inferred from co-occurrence in time and space: Given that two people have been in approximately the same geographic locale at approximately the same time, on multiple occasions, how likely are they to know each other? Furthermore, how does this likelihood depend on the spatial and temporal proximity of the co-occurrences? Such issues arise in data originating in both online and offline domains as well as settings that capture interfaces between online and offline behavior. Here we develop a framework for quantifying the answers to such questions, and we apply this framework to publicly available data from a social media site, finding that even a very small number of co-occurrences can result in a high empirical likelihood of a social tie. We then present probabilistic models showing how such large probabilities can arise from a natural model of proximity and co-occurrence in the presence of social ties. In addition to providing a method for establishing some of the first quantifiable estimates of these measures, our findings have potential privacy implications, particularly for the ways in which social structures can be inferred from public online records that capture individuals' physical locations over time.",
"title": ""
}
] |
scidocsrr
|
7083fcf39daecbb9e4e1ef55b25e9f16
|
Big data on cloud for government agencies: benefits, challenges, and solutions
|
[
{
"docid": "72944a6ad81c2802d0401f9e0c2d8bb5",
"text": "Available online 10 August 2016 Big Data (BD), with their potential to ascertain valued insights for enhanced decision-making process, have recently attracted substantial interest from both academics and practitioners. Big Data Analytics (BDA) is increasingly becoming a trending practice that many organizations are adopting with the purpose of constructing valuable information from BD. The analytics process, including the deployment and use of BDA tools, is seen by organizations as a tool to improve operational efficiency though it has strategic potential, drive new revenue streams and gain competitive advantages over business rivals. However, there are different types of analytic applications to consider. Therefore, prior to hasty use and buying costly BD tools, there is a need for organizations to first understand the BDA landscape. Given the significant nature of theBDandBDA, this paper presents a state-ofthe-art review that presents a holistic view of the BD challenges and BDA methods theorized/proposed/ employed by organizations to help others understand this landscape with the objective of making robust investment decisions. In doing so, systematically analysing and synthesizing the extant research published on BD and BDA area. More specifically, the authors seek to answer the following two principal questions: Q1 –What are the different types of BD challenges theorized/proposed/confronted by organizations? and Q2 – What are the different types of BDA methods theorized/proposed/employed to overcome BD challenges?. This systematic literature review (SLR) is carried out through observing and understanding the past trends and extant patterns/themes in the BDA research area, evaluating contributions, summarizing knowledge, thereby identifying limitations, implications and potential further research avenues to support the academic community in exploring research themes/patterns. Thus, to trace the implementation of BD strategies, a profiling method is employed to analyze articles (published in English-speaking peer-reviewed journals between 1996 and 2015) extracted from the Scopus database. The analysis presented in this paper has identified relevant BD research studies that have contributed both conceptually and empirically to the expansion and accrual of intellectual wealth to the BDA in technology and organizational resource management discipline. © 2016 The Authors. Published by Elsevier Inc. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).",
"title": ""
},
{
"docid": "03d5c8627ec09e4332edfa6842b6fe44",
"text": "In the same way businesses use big data to pursue profits, governments use it to promote the public good.",
"title": ""
}
] |
[
{
"docid": "f4d514a95cc4444dc1cbfdc04737ec75",
"text": "Ultra-high speed data links such as 400GbE continuously push transceivers to achieve better performance and lower power consumption. This paper presents a highly parallelized TRX at 56Gb/s with integrated serializer/deserializer, FFE/CTLE/DFE, CDR, and eye-monitoring circuits. It achieves BER<10−12 under 24dB loss at 14GHz while dissipating 602mW of power.",
"title": ""
},
{
"docid": "d57072f4ffa05618ebf055824e7ae058",
"text": "Online social networks such as Friendster, MySpace, or the Facebook have experienced exponential growth in membership in recent years. These networks offer attractive means for interaction and communication, but also raise privacy and security concerns. In this study we survey a representative sample of the members of the Facebook (a social network for colleges and high schools) at a US academic institution, and compare the survey data to information retrieved from the network itself. We look for underlying demographic or behavioral differences between the communities of the network’s members and non-members; we analyze the impact of privacy concerns on members’ behavior; we compare members’ stated attitudes with actual behavior; and we document the changes in behavior subsequent to privacy-related information exposure. We find that an individual’s privacy concerns are only a weak predictor of his membership to the network. Also privacy concerned individuals join the network and reveal great amounts of personal information. Some manage their privacy concerns by trusting their ability to control the information they provide and the external access to it. However, we also find evidence of members’ misconceptions about the online community’s actual size and composition, and about the visibility of members’ profiles.",
"title": ""
},
{
"docid": "b074ba4ae329ffad0da3216dc84b22b9",
"text": "A recent research trend in Artificial Intelligence (AI) is the combination of several programs into one single, stronger, program; this is termed portfolio methods. We here investigate the application of such methods to Game Playing Programs (GPPs). In addition, we consider the case in which only one GPP is available by decomposing this single GPP into several ones through the use of parameters or even simply random seeds. These portfolio methods are trained in a learning phase. We propose two different offline approaches. The simplest one, BestArm, is a straightforward optimization of seeds or parameters; it performs quite well against the original GPP, but performs poorly against an opponent which repeats games and learns. The second one, namely Nash-portfolio, performs similarly in a “one game” test, and is much more robust against an opponent who learns. We also propose an online learning portfolio, which tests several of the GPP repeatedly and progressively switches to the best one using a bandit algorithm.",
"title": ""
},
{
"docid": "b99efb63e8016c7f5ab09e868ae894da",
"text": "The popular bag of words approach for action recognition is based on the classifying quantized local features density. This approach focuses excessively on the local features but discards all information about the interactions among them. Local features themselves may not be discriminative enough, but combined with their contexts, they can be very useful for the recognition of some actions. In this paper, we present a novel representation that captures contextual interactions between interest points, based on the density of all features observed in each interest point's mutliscale spatio-temporal contextual domain. We demonstrate that augmenting local features with our contextual feature significantly improves the recognition performance.",
"title": ""
},
{
"docid": "3bb1065dfc4e06fa35ef91e2c89d50d2",
"text": "Portable, accurate, and relatively inexpensive high-frequency vector network analyzers (VNAs) have great utility for a wide range of applications, encompassing microwave circuit characterization, reflectometry, imaging, material characterization, and nondestructive testing to name a few. To meet the rising demand for VNAs possessing the aforementioned attributes, we present a novel and simple VNA design based on a standing-wave probing device and an electronically controllable phase shifter. The phase shifter is inserted between a device under test (DUT) and a standing-wave probing device. The complex reflection coefficient of the DUT is then obtained from multiple standing-wave voltage measurements taken for several different values of the phase shift. The proposed VNA design eliminates the need for expensive heterodyne detection schemes required for tuned-receiver-based VNA designs. Compared with previously developed VNAs that operate based on performing multiple power measurements, the proposed VNA utilizes a single power detector without the need for multiport hybrid couplers. In this paper, the efficacy of the proposed VNA is demonstrated via numerical simulations and experimental measurements. For this purpose, measurements of various DUTs obtained using an X-band (8.2-12.4 GHz) prototype VNA are presented and compared with results obtained using an Agilent HP8510C VNA. The results show that the proposed VNA provides highly accurate vector measurements with typical errors on the order of 0.02 and 1° for magnitude and phase, respectively.",
"title": ""
},
{
"docid": "e37a93ff39840e1d6df589b415848a85",
"text": "In this paper we propose a stacked generalization (or stacking) model for event extraction in bio-medical text. Event extraction deals with the process of extracting detailed biological phenomenon, which is more challenging compared to the traditional binary relation extraction such as protein-protein interaction. The overall process consists of mainly three steps: event trigger detection, argument extraction by edge detection and finding correct combination of arguments. In stacking, we use Linear Support Vector Classification (Linear SVC), Logistic Regression (LR) and Stochastic Gradient Descent (SGD) as base-level learning algorithms. As meta-level learner we use Linear SVC. In edge detection step, we find out the arguments of triggers detected in trigger detection step using a SVM classifier. To find correct combination of arguments, we use rules generated by studying the properties of bio-molecular event expressions, and form an event expression consisting of event trigger, its class and arguments. The output of trigger detection is fed to edge detection for argument extraction. Experiments on benchmark datasets of BioNLP2011 show the recall, precision and Fscore of 48.96%, 66.46% and 56.38%, respectively. Comparisons with the existing systems show that our proposed model attains state-of-the-art performance.",
"title": ""
},
{
"docid": "4d585dd4d56dda31c2fb929a61aba5f8",
"text": "Growing greenhouse vegetables is one of the most exacting and intense forms of all agricultural enterprises. In combination with greenhouses, hydroponics is becoming increasingly popular, especially in the United States, Canada, western Europe, and Japan. It is high technology and capital intensive. It is highly productive, conservative of water and land and protective of the environment. For production of leafy vegetables and herbs, deep flow hydroponics is common. For growing row crops such as tomato, cucumber, and pepper, the two most popular artificial growing media are rockwool and perlite. Computers today operate hundreds of devices within a greenhouse by utilizing dozens of input parameters, to maintain the most desired growing environment. The technology of greenhouse food production is changing rapidly with systems today producing yields never before realized. The future for hydroponic/soilless cultured systems appears more positive today than any time over the last 50 years.",
"title": ""
},
{
"docid": "11c4d318abb6d2e838f74d2a6ae61415",
"text": "We propose a new framework for entity and event extraction based on generative adversarial imitation learning – an inverse reinforcement learning method using generative adversarial network (GAN). We assume that instances and labels yield to various extents of difficulty and the gains and penalties (rewards) are expected to be diverse. We utilize discriminators to estimate proper rewards according to the difference between the labels committed by ground-truth (expert) and the extractor (agent). Experiments also demonstrate that the proposed framework outperforms state-of-the-art methods.",
"title": ""
},
{
"docid": "5a3f65509a2acd678563cd495fe287de",
"text": "Auditory menus have the potential to make devices that use visual menus accessible to a wide range of users. Visually impaired users could especially benefit from the auditory feedback received during menu navigation. However, auditory menus are a relatively new concept, and there are very few guidelines that describe how to design them. This paper details how visual menu concepts may be applied to auditory menus in order to help develop design guidelines. Specifically, this set of studies examined possible ways of designing an auditory scrollbar for an auditory menu. The following different auditory scrollbar designs were evaluated: single-tone, double-tone, alphabetical grouping, and proportional grouping. Three different evaluations were conducted to determine the best design. The first two evaluations were conducted with sighted users, and the last evaluation was conducted with visually impaired users. The results suggest that pitch polarity does not matter, and proportional grouping is the best of the auditory scrollbar designs evaluated here.",
"title": ""
},
{
"docid": "4749d4153d09082d81b2b64f7954b9cd",
"text": " Background. Punctate or stippled cartilaginous calcifications are associated with many conditions, including chromosomal, infectious, endocrine, and teratogenic etiologies. Some of these conditions are clinically mild, while others are lethal. Accurate diagnosis can prove instrumental in clinical management and in genetic counseling. Objective. To describe the diagnostic radiographic features seen in Pacman dysplasia, a distinct autosomal recessive, lethal skeletal dysplasia. Materials and methods. We present the fourth reported case of Pacman dysplasia and compare the findings seen in our patient with the three previously described patients. Results. Invariable and variable radiographic findings were seen in all four cases of histologically proven Pacman dysplasia. Conclusion. Pacman dysplasia presents both constant and variable diagnostic radiographic features.",
"title": ""
},
{
"docid": "8a679c93185332398c5261ddcfe81e84",
"text": "We discuss the temporal-difference learning algorithm, as applied to approximating the cost-to-go function of an infinite-horizon discounted Markov chain, using a function approximator involving linear combinations of fixed basis functions. The algorithm we analyze performs on-line updating of a parameter vector during a single endless trajectory of an ergodic Markov chain with a finite or infinite state space. We present a proof of convergence (with probability 1), a characterization of the limit of convergence, and a bound on the resulting approximation error. In addition to proving new and stronger results than those previously available, our analysis is based on a new line of reasoning that provides new intuition about the dynamics of temporal-difference learning. Finally, we prove that on-line updates, based on entire trajectories of the Markov chain, are in a certain sense necessary for convergence. This fact reconciles positive and negative results that have been discussed in the literature, regarding the soundness of temporal-difference learning.",
"title": ""
},
{
"docid": "4248fb006221fbb74d565705dcbc5a7a",
"text": "Shot boundary detection (SBD) is an important and fundamental step in video content analysis such as content-based video indexing, browsing, and retrieval. In this paper, a hybrid SBD method is presented by integrating a high-level fuzzy Petri net (HLFPN) model with keypoint matching. The HLFPN model with histogram difference is executed as a predetection. Next, the speeded-up robust features (SURF) algorithm that is reliably robust to image affine transformation and illumination variation is used to figure out all possible false shots and the gradual transition based on the assumption from the HLFPN model. The top-down design can effectively lower down the computational complexity of SURF algorithm. The proposed approach has increased the precision of SBD and can be applied in different types of videos.",
"title": ""
},
{
"docid": "b9717a3ce0ed7245621314ba3e1ce251",
"text": "Analog beamforming with phased arrays is a promising technique for 5G wireless communication at millimeter wave frequencies. Using a discrete codebook consisting of multiple analog beams, each beam focuses on a certain range of angles of arrival or departure and corresponds to a set of fixed phase shifts across frequency due to practical hardware considerations. However, for sufficiently large bandwidth, the gain provided by the phased array is actually frequency dependent, which is an effect called beam squint, and this effect occurs even if the radiation pattern of the antenna elements is frequency independent. This paper examines the nature of beam squint for a uniform linear array (ULA) and analyzes its impact on codebook design as a function of the number of antennas and system bandwidth normalized by the carrier frequency. The criterion for codebook design is to guarantee that each beam's minimum gain for a range of angles and for all frequencies in the wideband system exceeds a target threshold, for example 3 dB below the array's maximum gain. Analysis and numerical examples suggest that a denser codebook is required to compensate for beam squint. For example, 54% more beams are needed compared to a codebook design that ignores beam squint for a ULA with 32 antennas operating at a carrier frequency of 73 GHz and bandwidth of 2.5 GHz. Furthermore, beam squint with this design criterion limits the bandwidth or the number of antennas of the array if the other one is fixed.",
"title": ""
},
{
"docid": "2d6d5c8b1ac843687db99ccf50a0baff",
"text": "This paper presents algorithms for fast segmentation of 3D point clouds and subsequent classification of the obtained 3D segments. The method jointly determines the ground surface and segments individual objects in 3D, including overhanging structures. When compared to six other terrain modelling techniques, this approach has minimal error between the sensed data and the representation; and is fast (processing a Velodyne scan in approximately 2 seconds). Applications include improved alignment of successive scans by enabling operations in sections (Velodyne scans are aligned 7% sharper compared to an approach using raw points) and more informed decision-making (paths move around overhangs). The use of segmentation to aid classification through 3D features, such as the Spin Image or the Spherical Harmonic Descriptor, is discussed and experimentally compared. Moreover, the segmentation facilitates a novel approach to 3D classification that bypasses feature extraction and directly compares 3D shapes via the ICP algorithm. This technique is shown to achieve accuracy on par with the best feature based classifier (92.1%) while being significantly faster and allowing a clearer understanding of the classifier’s behaviour.",
"title": ""
},
{
"docid": "61ba52f205c8b497062995498816b60f",
"text": "The past century experienced a proliferation of retail formats in the marketplace. However, as a new century begins, these retail formats are being threatened by the emergence of a new kind of store, the online or Internet store. From being almost a novelty in 1995, online retailing sales were expected to reach $7 billion by 2000 [9]. In this increasngly timeconstrained world, Internet stores allow consumers to shop from the convenience of remote locations. Yet most of these Internet stores are losing money [6]. Why is such counterintuitive phenomena prevailing? The explanation may lie in the risks associated with Internet shopping. These risks may arise because consumers are concerned about the security of transmitting credit card information over the Internet. Consumers may also be apprehensive about buying something without touching or feeling it and being unable to return it if it fails to meet their approval. Having said this, however, we must point out that consumers are buying goods on the Internet. This is reflected in the fact that total sales on the Internet are on the increase [8, 11]. Who are the consumers that are patronizing the Internet? Evidently, for them the perception of the risk associated with shopping on the Internet is low or is overshadowed by its relative convenience. This article attempts to determine why certain consumers are drawn to the Internet and why others are not. Since the pioneering research done by Becker [3], it has been accepted that the consumer maximizes his utility subject to not only income constraints but also time constraints. A consumer seeks out his best decision given that he has a limited budget of time and money. While purchasing a product from a store, a consumer has to expend both money and time. Therefore, the consumer patronizes the retail store where his total costs or the money and time spent in the entire process are the least. Since the util-",
"title": ""
},
{
"docid": "28cf177349095e7db4cdaf6c9c4a6cb1",
"text": "Neural Architecture Search aims at automatically finding neural architectures that are competitive with architectures designed by human experts. While recent approaches have achieved state-of-the-art predictive performance for image recognition, they are problematic under resource constraints for two reasons: (1) the neural architectures found are solely optimized for high predictive performance, without penalizing excessive resource consumption; (2) most architecture search methods require vast computational resources. We address the first shortcoming by proposing LEMONADE, an evolutionary algorithm for multi-objective architecture search that allows approximating the entire Pareto-front of architectures under multiple objectives, such as predictive performance and number of parameters, in a single run of the method. We address the second shortcoming by proposing a Lamarckian inheritance mechanism for LEMONADE which generates children networks that are warmstarted with the predictive performance of their trained parents. This is accomplished by using (approximate) network morphism operators for generating children. The combination of these two contributions allows finding models that are on par or even outperform both hand-crafted as well as automatically-designed networks.",
"title": ""
},
{
"docid": "8bb5a38908446ca4e6acb4d65c4c817c",
"text": "Column-oriented database systems have been a real game changer for the industry in recent years. Highly tuned and performant systems have evolved that provide users with the possibility of answering ad hoc queries over large datasets in an interactive manner. In this paper we present the column-oriented datastore developed as one of the central components of PowerDrill. It combines the advantages of columnar data layout with other known techniques (such as using composite range partitions) and extensive algorithmic engineering on key data structures. The main goal of the latter being to reduce the main memory footprint and to increase the efficiency in processing typical user queries. In this combination we achieve large speed-ups. These enable a highly interactive Web UI where it is common that a single mouse click leads to processing a trillion values in the underlying dataset.",
"title": ""
},
{
"docid": "4775bf71a5eea05b77cafa53daefcff9",
"text": "There is mounting empirical evidence that interacting with nature delivers measurable benefits to people. Reviews of this topic have generally focused on a specific type of benefit, been limited to a single discipline, or covered the benefits delivered from a particular type of interaction. Here we construct novel typologies of the settings, interactions and potential benefits of people-nature experiences, and use these to organise an assessment of the benefits of interacting with nature. We discover that evidence for the benefits of interacting with nature is geographically biased towards high latitudes and Western societies, potentially contributing to a focus on certain types of settings and benefits. Social scientists have been the most active researchers in this field. Contributions from ecologists are few in number, perhaps hindering the identification of key ecological features of the natural environment that deliver human benefits. Although many types of benefits have been studied, benefits to physical health, cognitive performance and psychological well-being have received much more attention than the social or spiritual benefits of interacting with nature, despite the potential for important consequences arising from the latter. The evidence for most benefits is correlational, and although there are several experimental studies, little as yet is known about the mechanisms that are important for delivering these benefits. For example, we do not know which characteristics of natural settings (e.g., biodiversity, level of disturbance, proximity, accessibility) are most important for triggering a beneficial interaction, and how these characteristics vary in importance among cultures, geographic regions and socio-economic groups. These are key directions for future research if we are to design landscapes that promote high quality interactions between people and nature in a rapidly urbanising world.",
"title": ""
},
{
"docid": "db158f806e56a1aae74aae15252703d2",
"text": "Despite achieving impressive performance, state-of-the-art classifiers remain highly vulnerable to small, imperceptible, adversarial perturbations. This vulnerability has proven empirically to be very intricate to address. In this paper, we study the phenomenon of adversarial perturbations under the assumption that the data is generated with a smooth generative model. We derive fundamental upper bounds on the robustness to perturbations of any classification function, and prove the existence of adversarial perturbations that transfer well across different classifiers with small risk. Our analysis of the robustness also provides insights onto key properties of generative models, such as their smoothness and dimensionality of latent space. We conclude with numerical experimental results showing that our bounds provide informative baselines to the maximal achievable robustness on several datasets.",
"title": ""
},
{
"docid": "4f7fbc3f313e68456e57a2d6d3c90cd0",
"text": "This survey paper describes a focused literature survey of machine learning (ML) and data mining (DM) methods for cyber analytics in support of intrusion detection. Short tutorial descriptions of each ML/DM method are provided. Based on the number of citations or the relevance of an emerging method, papers representing each method were identified, read, and summarized. Because data are so important in ML/DM approaches, some well-known cyber data sets used in ML/DM are described. The complexity of ML/DM algorithms is addressed, discussion of challenges for using ML/DM for cyber security is presented, and some recommendations on when to use a given method are provided.",
"title": ""
}
] |
scidocsrr
|
3a1cadc1ff0328e393819c2c150fdd8e
|
THE LANGLANDS-KOTTWITZ APPROACH FOR SOME SIMPLE SHIMURA VARIETIES
|
[
{
"docid": "134ecc62958fa9bb930ff934c5fad7a3",
"text": "We extend our methods from [24] to reprove the Local Langlands Correspondence for GLn over p-adic fields as well as the existence of `-adic Galois representations attached to (most) regular algebraic conjugate self-dual cuspidal automorphic representations, for which we prove a local-global compatibility statement as in the book of Harris-Taylor, [10]. In contrast to the proofs of the Local Langlands Correspondence given by Henniart, [13], and Harris-Taylor, [10], our proof completely by-passes the numerical Local Langlands Correspondence of Henniart, [11]. Instead, we make use of a previous result from [24] describing the inertia-invariant nearby cycles in certain regular situations.",
"title": ""
}
] |
[
{
"docid": "d337f149d3e52079c56731f4f3d8ea3e",
"text": "Contextual word representations derived from pre-trained bidirectional language models (biLMs) have recently been shown to provide significant improvements to the state of the art for a wide range of NLP tasks. However, many questions remain as to how and why these models are so effective. In this paper, we present a detailed empirical study of how the choice of neural architecture (e.g. LSTM, CNN, or self attention) influences both end task accuracy and qualitative properties of the representations that are learned. We show there is a tradeoff between speed and accuracy, but all architectures learn high quality contextual representations that outperform word embeddings for four challenging NLP tasks. Additionally, all architectures learn representations that vary with network depth, from exclusively morphological based at the word embedding layer through local syntax based in the lower contextual layers to longer range semantics such coreference at the upper layers. Together, these results suggest that unsupervised biLMs, independent of architecture, are learning much more about the structure of language than previously appreciated.",
"title": ""
},
{
"docid": "38aba50fc1512bc48773df729c8305cf",
"text": "In this study, we explore various natural language processing (NLP) methods to perform sentiment analysis. We look at two different datasets, one with binary labels, and one with multi-class labels. For the binary classification we applied the bag of words, and skip-gram word2vec models followed by various classifiers, including random forest, SVM, and logistic regression. For the multi-class case, we implemented the recursive neural tensor networks (RNTN). To overcome the high computational cost of training the standard RNTN we introduce the lowrank RNTN, in which the matrices involved in the quadratic term of RNTN are substituted by symmetric low-rank matrices. We show that the low-rank RNTN leads to significant saving in computational cost, while having similar a accuracy as that of RNTN.",
"title": ""
},
{
"docid": "f006b6e0768e001d9593b14c8800cfde",
"text": "Do learning and retrieval of a memory activate the same neurons? Does the number of reactivated neurons correlate with memory strength? We developed a transgenic mouse that enables the long-lasting genetic tagging of c-fos-active neurons. We found neurons in the basolateral amygdala that are activated during Pavlovian fear conditioning and are reactivated during memory retrieval. The number of reactivated neurons correlated positively with the behavioral expression of the fear memory, indicating a stable neural correlate of associative memory. The ability to manipulate these neurons genetically should allow a more precise dissection of the molecular mechanisms of memory encoding within a distributed neuronal network.",
"title": ""
},
{
"docid": "805f445952a94a0e068966998b486db4",
"text": "Narcissistic personality disorder (NPD) is a trait-based disorder that can be understood as a pathological amplification of narcissistic traits. While temperamental vulnerability and psychological adversity are risk factors for NPD, sociocultural factors are also important. This review hypothesizes that increases in narcissistic traits and cultural narcissism could be associated with changes in the prevalence of NPD. These shifts seem to be a relatively recent phenomenon, driven by social changes associated with modernity. While the main treatment for NPD remains psychotherapy, that form of treatment is itself a product of modernity and individualism. The hypothesis is presented that psychological treatment, unless modified to address the specific problems associated with NPD, could run the risk of supporting narcissism.",
"title": ""
},
{
"docid": "2e4ac47cdc063d76089c17f30a379765",
"text": "Determination of the type and origin of the body fluids found at a crime scene can give important insights into crime scene reconstruction by supporting a link between sample donors and actual criminal acts. For more than a century, numerous types of body fluid identification methods have been developed, such as chemical tests, immunological tests, protein catalytic activity tests, spectroscopic methods and microscopy. However, these conventional body fluid identification methods are mostly presumptive, and are carried out for only one body fluid at a time. Therefore, the use of a molecular genetics-based approach using RNA profiling or DNA methylation detection has been recently proposed to supplant conventional body fluid identification methods. Several RNA markers and tDMRs (tissue-specific differentially methylated regions) which are specific to forensically relevant body fluids have been identified, and their specificities and sensitivities have been tested using various samples. In this review, we provide an overview of the present knowledge and the most recent developments in forensic body fluid identification and discuss its possible practical application to forensic casework.",
"title": ""
},
{
"docid": "e5bb10773d74dfe745176f1c4d7046b2",
"text": "An integral enabler of the smart city vision is the ability to effectively model collective population behaviour. The realisation of sustainable smart mobility is underpinned by the effective modelling of the spatial movements of the population. Furthermore, it is also crucial to identify significant deviations in collective behaviour over time. For example, a change in urban mobility patterns would subsequently impact traffic management systems. This paper focuses on the issue of modelling the collective behaviour of a population by utilizing mobile phone data and investigates the ability to identify significant deviations in behaviour over time. Mobile phone data facilitates the inference of real social networks from their call data records (CDR). We use this data to model collective behaviour and apply change-point detection algorithms, a category of anomaly detection, in order to identify statistically significant changes in collective behaviour over time. The result off the empirical analysis demonstrate that modern change point detection can accurately identify change points with an R2 value of 0.9633.",
"title": ""
},
{
"docid": "5ac0e1b30f3aeeb4e1f7ddae656f7dd5",
"text": "The present paper describes an implementation of fast running motions involving a humanoid robot. Two important technologies are described: a motion generation and a balance control. The motion generation is a unified way to design both walking and running and can generate the trajectory with the vertical conditions of the Center Of Mass (COM) in short calculation time. The balance control enables a robot to maintain balance by changing the positions of the contact foot dynamically when the robot is disturbed. This control consists of 1) compliance control without force sensors, in which the joints are made compliant by feed-forward torques and adjustment of gains of position control, and 2) feedback control, which uses the measured orientation of the robot's torso used in the motion generation as an initial condition to decide the foot positions. Finally, a human-sized humanoid robot that can run forward at 7.0 [km/h] is presented.",
"title": ""
},
{
"docid": "db3523bc1e3616b9fe262e5f6cab7ad8",
"text": "Traditional models for question answering optimize using cross entropy loss, which encourages exact answers at the cost of penalizing nearby or overlapping answers that are sometimes equally accurate. We propose a mixed objective that combines cross entropy loss with self-critical policy learning. The objective uses rewards derived from word overlap to solve the misalignment between evaluation metric and optimization objective. In addition to the mixed objective, we improve dynamic coattention networks (DCN) with a deep residual coattention encoder that is inspired by recent work in deep self-attention and residual networks. Our proposals improve model performance across question types and input lengths, especially for long questions that requires the ability to capture long-term dependencies. On the Stanford Question Answering Dataset, our model achieves state-of-the-art results with 75.1% exact match accuracy and 83.1% F1, while the ensemble obtains 78.9% exact match accuracy and 86.0% F1.",
"title": ""
},
{
"docid": "b1856943f53a15df08d0ccf1a20b0251",
"text": "In this paper we introduce Graffoo, i.e., a graphical notation to develop OWL ontologies by means of yEd, a free editor for diagrams.",
"title": ""
},
{
"docid": "d00436e6248a3c7382e0e4df33205bb7",
"text": "BACKGROUND\nIt is uncertain whether male circumcision reduces the risks of penile human papillomavirus (HPV) infection in the man and of cervical cancer in his female partner.\n\n\nMETHODS\nWe pooled data on 1913 couples enrolled in one of seven case-control studies of cervical carcinoma in situ and cervical cancer in five countries. Circumcision status was self-reported, and the accuracy of the data was confirmed by physical examination at three study sites. The presence or absence of penile HPV DNA was assessed by a polymerase-chain-reaction assay in 1520 men and yielded a valid result in the case of 1139 men (74.9 percent).\n\n\nRESULTS\nPenile HPV was detected in 166 of the 847 uncircumcised men (19.6 percent) and in 16 of the 292 circumcised men (5.5 percent). After adjustment for age at first intercourse, lifetime number of sexual partners, and other potential confounders, circumcised men were less likely than uncircumcised men to have HPV infection (odds ratio, 0.37; 95 percent confidence interval, 0.16 to 0.85). Monogamous women whose male partners had six or more sexual partners and were circumcised had a lower risk of cervical cancer than women whose partners were uncircumcised (adjusted odds ratio, 0.42; 95 percent confidence interval, 0.23 to 0.79). Results were similar in the subgroup of men in whom circumcision was confirmed by medical examination.\n\n\nCONCLUSIONS\nMale circumcision is associated with a reduced risk of penile HPV infection and, in the case of men with a history of multiple sexual partners, a reduced risk of cervical cancer in their current female partners.",
"title": ""
},
{
"docid": "4cc52c8b6065d66472955dff9200b71f",
"text": "Over the past few years there has been an increasing focus on the development of features for resource management within the Linux kernel. The addition of the fair group scheduler has enabled the provisioning of proportional CPU time through the specification of group weights. Since the scheduler is inherently workconserving in nature, a task or a group can consume excess CPU share in an otherwise idle system. There are many scenarios where this extra CPU share can cause unacceptable utilization or latency. CPU bandwidth provisioning or limiting approaches this problem by providing an explicit upper bound on usage in addition to the lower bound already provided by shares. There are many enterprise scenarios where this functionality is useful. In particular are the cases of payper-use environments, and latency provisioning within non-homogeneous environments. This paper details the requirements behind this feature, the challenges involved in incorporating into CFS (Completely Fair Scheduler), and the future development road map for this feature. 1 CPU as a manageable resource Before considering the aspect of bandwidth provisioning let us first review some of the basic existing concepts currently arbitrating entity management within the scheduler. There are two major scheduling classes within the Linux CPU scheduler, SCHED_RT and SCHED_NORMAL. When runnable, entities from the former, the real-time scheduling class, will always be elected to run over those from the normal scheduling class. Prior to v2.6.24, the scheduler had no notion of any entity larger than that of single task1. The available management APIs reflected this and the primary control of bandwidth available was nice(2). In v2.6.24, the completely fair scheduler (CFS) was merged, replacing the existing SCHED_NORMAL scheduling class. This new design delivered weight based scheduling of CPU bandwidth, enabling arbitrary partitioning. This allowed support for group scheduling to be added, managed using cgroups through the CPU controller sub-system. This support allows for the flexible creation of scheduling groups, allowing the fraction of CPU resources received by a group of tasks to be arbitrated as a whole. The addition of this support has been a major step in scheduler development, enabling Linux to align more closely with enterprise requirements for managing this resouce. The hierarchies supported by this model are flexible, and groups may be nested within groups. Each group entity’s bandwidth is provisioned using a corresponding shares attribute which defines its weight. Similarly, the nice(2) API was subsumed to control the weight of an individual task entity. Figure 1 shows the hierarchical groups that might be created in a typical university server to differentiate CPU bandwidth between users such as professors, students, and different departments. One way to think about shares is that it provides lowerbound provisioning. When CPU bandwidth is scheduled at capacity, all runnable entities will receive bandwidth in accordance with the ratio of their share weight. It’s key to observe here that not all entities may be runnable 1Recall that under Linux any kernel-backed thread is considered individual task entity, there is no typical notion of a process in scheduling context.",
"title": ""
},
{
"docid": "ea9c8ee7d22c0abc34fcf3ad073e20ac",
"text": "Job performance is the most researched concept studied in industrial and organizational psychology, with the emphasis being on organizational citizenship behavior (OCB) and counterproductive work behavior (CWB) as two dimensions of it. The relationship between these two dimensions of job performance are unclear, hence the objective of the current study was to examine the relationship between organizational citizenship behavior and counterproductive work behavior. A total of 267 students studying psychology were given a questionnaire that measured organizational citizenship behavior and counterproductive work behavior (most have had part-time work experience). Correlational analysis found OCB and CWB to have only a moderate negative correlation which suggests OCB and CWB are two separate but related constructs. It was also found that females and longer-tenured individuals tend to show more OCB but no difference was found for CWB. The findings showed that individuals can engage in OCB and CWB at the same time, which necessitates organizations to find a way to encourage their employees to engage in OCB and not in CWB.",
"title": ""
},
{
"docid": "84e471482dd64e3be90e6ab884fc4481",
"text": "Game theory is a set of tools developed to model interactions between agents with conflicting interests [5]. It is a field of applied mathematics that defines and evaluates interactive decision situations. It provides analytical tools to predict the outcome of complicated interactions between rational entities, where rationality demands strict adherence to a strategy based on observed or measured results [13]. Originally developed to model problems in the field of economics, game theory has recently been applied to network problems, in most cases to solve the resource allocation problems in a competitive environment. The reason that game theory is an adapted choice for studying cooperative communications is various. Nodes in the network are independent agents, making decisions only for their own interests. Game theory provides us sufficient theoretical tools to analyze the network users’ behaviors and actions. Game theory, also primarily deals with distributed optimization, which often requires local information only. Thus it enables us to design distributed algorithms. [14]. This article surveys the literature on game theory as they apply to wireless networks. First, a brief overview of classifications of games, important definitions used in games (Nash Equilibrium, Pareto efficiency, Pure, Mixed and Fully mixed strategies) and game models are presented. Then, we identified five areas of application of game theory in wireless networks; therefore, we discuss related work to game theory in communication networks, cognitive radio networks, wireless sensor networks, resource allocation and power control. Finally, we discuss the limitations of the application of game theory in wireless networks.",
"title": ""
},
{
"docid": "6058813ab7c5a2504faea224b9f32bba",
"text": "LinkedIn, with over 1.5 million Groups, has become a popular place for business employees to create private groups to exchange information and communicate. Recent research on social networking sites (SNSs) has widely explored the phenomenon and its positive effects on firms. However, social networking’s negative effects on information security were not adequately addressed. Supported by the credibility, persuasion and motivation theories, we conducted 1) a field experiment, demonstrating how sensitive organizational data can be exploited, followed by 2) a qualitative study of employees engaged in SNSs activities; and 3) interviews with Chief Information Security Officers (CISOs). Our research has resulted in four main findings: 1) employees are easily deceived and susceptible to victimization on SNSs where contextual elements provide psychological triggers to attackers; 2) organizations lack mechanisms to control SNS online security threats, 3) companies need to strengthen their information security policies related to SNSs, where stronger employee identification and authentication is needed, and 4) SNSs have become important security holes where, with the use of social engineering techniques, malicious attacks are easily facilitated.",
"title": ""
},
{
"docid": "f03a96d81f7eeaf8b9befa73c2b6fbd5",
"text": "This research provided the first empirical investigation of how approach and avoidance motives for sacrifice in intimate relationships are associated with personal well-being and relationship quality. In Study 1, the nature of everyday sacrifices made by dating partners was examined, and a measure of approach and avoidance motives for sacrifice was developed. In Study 2, which was a 2-week daily experience study of college students in dating relationships, specific predictions from the theoretical model were tested and both longitudinal and dyadic components were included. Whereas approach motives for sacrifice were positively associated with personal well-being and relationship quality, avoidance motives for sacrifice were negatively associated with personal well-being and relationship quality. Sacrificing for avoidance motives was particularly detrimental to the maintenance of relationships over time. Perceptions of a partner's motives for sacrifice were also associated with well-being and relationship quality. Implications for the conceptualization of relationship maintenance processes along these 2 dimensions are discussed.",
"title": ""
},
{
"docid": "e5eda75a8adcd01ca98b8fa28768e1ca",
"text": "As demand grows for mobile phone applications, research in optical character recognition, a technology well developed for scanned documents, is shifting focus to the recognition of text embedded in digital photographs. In this paper, we present OCRdroid, a generic framework for developing OCR-based applications on mobile phones. OCRdroid combines a light-weight image preprocessing suite installed inside the mobile phone and an OCR engine connected to a backend server. We demonstrate the power and functionality of this framework by implementing two applications called PocketPal and PocketReader based on OCRdroid on HTC Android G1 mobile phone. Initial evaluations of these pilot experiments demonstrate the potential of using OCRdroid framework for realworld OCR-based mobile applications.",
"title": ""
},
{
"docid": "de3f2ad88e3a99388975cc3da73e5039",
"text": "Machine-learning techniques have recently been proved to be successful in various domains, especially in emerging commercial applications. As a set of machine-learning techniques, artificial neural networks (ANNs), requiring considerable amount of computation and memory, are one of the most popular algorithms and have been applied in a broad range of applications such as speech recognition, face identification, natural language processing, ect. Conventionally, as a straightforward way, conventional CPUs and GPUs are energy-inefficient due to their excessive effort for flexibility. According to the aforementioned situation, in recent years, many researchers have proposed a number of neural network accelerators to achieve high performance and low power consumption. Thus, the main purpose of this literature is to briefly review recent related works, as well as the DianNao-family accelerators. In summary, this review can serve as a reference for hardware researchers in the area of neural networks.",
"title": ""
},
{
"docid": "273a3f15e374d85921904cf40a77fa63",
"text": "Human activity recognition is a significant component of many innovative and human-behavior based systems. The ability to recognize various human activities enables the developing of intelligent control system. Usually the task of human activity recognition is mapped to the classification task of images representing person’s actions. This paper addresses the problem of human activities’ classification using various machine learning methods such as Convolutional Neural Networks, Bag of Features model, Support Vector Machine and K-Nearest Neighbors. This paper provides the comparison study on these methods applied for human activity recognition task using the set of images representing five different categories of daily life activities. The usage of wearable sensors that could improve classification results of human activity recognition is beyond the scope of this research. Keywords–activity recognition; machine learning; CNN; BoF; KNN; SVM",
"title": ""
},
{
"docid": "2c3bfdb36a691434ece6b9f3e7e281e9",
"text": "Heterogeneous cloud radio access networks (H-CRAN) is a new trend of SC that aims to leverage the heterogeneous and cloud radio access networks advantages. Low power remote radio heads (RRHs) are exploited to provide high data rates for users with high quality of service requirements (QoS), while high power macro base stations (BSs) are deployed for coverage maintenance and low QoS users support. However, the inter-tier interference between the macro BS and RRHs and energy efficiency are critical challenges that accompany resource allocation in H-CRAN. Therefore, we propose a centralized resource allocation scheme using online learning, which guarantees interference mitigation and maximizes energy efficiency while maintaining QoS requirements for all users. To foster the performance of such scheme with a model-free learning, we consider users' priority in resource blocks (RBs) allocation and compact state representation based learning methodology to enhance the learning process. Simulation results confirm that the proposed resource allocation solution can mitigate interference, increase energy and spectral efficiencies significantly, and maintain users' QoS requirements.",
"title": ""
},
{
"docid": "d3afbb88f0575bd18365c85c6faea868",
"text": "The present paper examines the causal linkage between foreign direct investment (FDI), financial development, and economic growth in a panel of 4 countries of North Africa (Tunisia, Morocco, Algeria and Egypt) over the period 1980-2011. The study moves away from the traditional cross-sectional analysis, and focuses on more direct evidence of the channels through which FDI inflows can promote economic growth of the host country. Using Generalized Method of Moment (GMM) panel data analysis, we find strong evidence of a positive relationship between FDI and economic growth. We also find evidence that the development of the domestic financial system is an important prerequisite for FDI to have a positive effect on economic growth. The policy implications of this study appeared clear. Improvement efforts need to be driven by local-level reforms to ensure the development of domestic financial system in order to maximize the benefits of the presence of FDI.",
"title": ""
}
] |
scidocsrr
|
d3e0cc84199f9795bfe1f2001d87685e
|
Aromatase inhibitors versus tamoxifen in early breast cancer: patient-level meta-analysis of the randomised trials
|
[
{
"docid": "f2b291fd6dacf53ed88168d7e1e4ecce",
"text": "BACKGROUND\nAs trials of 5 years of tamoxifen in early breast cancer mature, the relevance of hormone receptor measurements (and other patient characteristics) to long-term outcome can be assessed increasingly reliably. We report updated meta-analyses of the trials of 5 years of adjuvant tamoxifen.\n\n\nMETHODS\nWe undertook a collaborative meta-analysis of individual patient data from 20 trials (n=21,457) in early breast cancer of about 5 years of tamoxifen versus no adjuvant tamoxifen, with about 80% compliance. Recurrence and death rate ratios (RRs) were from log-rank analyses by allocated treatment.\n\n\nFINDINGS\nIn oestrogen receptor (ER)-positive disease (n=10,645), allocation to about 5 years of tamoxifen substantially reduced recurrence rates throughout the first 10 years (RR 0·53 [SE 0·03] during years 0-4 and RR 0·68 [0·06] during years 5-9 [both 2p<0·00001]; but RR 0·97 [0·10] during years 10-14, suggesting no further gain or loss after year 10). Even in marginally ER-positive disease (10-19 fmol/mg cytosol protein) the recurrence reduction was substantial (RR 0·67 [0·08]). In ER-positive disease, the RR was approximately independent of progesterone receptor status (or level), age, nodal status, or use of chemotherapy. Breast cancer mortality was reduced by about a third throughout the first 15 years (RR 0·71 [0·05] during years 0-4, 0·66 [0·05] during years 5-9, and 0·68 [0·08] during years 10-14; p<0·0001 for extra mortality reduction during each separate time period). Overall non-breast-cancer mortality was little affected, despite small absolute increases in thromboembolic and uterine cancer mortality (both only in women older than 55 years), so all-cause mortality was substantially reduced. In ER-negative disease, tamoxifen had little or no effect on breast cancer recurrence or mortality.\n\n\nINTERPRETATION\n5 years of adjuvant tamoxifen safely reduces 15-year risks of breast cancer recurrence and death. ER status was the only recorded factor importantly predictive of the proportional reductions. Hence, the absolute risk reductions produced by tamoxifen depend on the absolute breast cancer risks (after any chemotherapy) without tamoxifen.\n\n\nFUNDING\nCancer Research UK, British Heart Foundation, and Medical Research Council.",
"title": ""
}
] |
[
{
"docid": "e6704cac805b39fe7f321f095a92ebf4",
"text": "Crowd counting is a challenging task, mainly due to the severe occlusions among dense crowds. This paper aims to take a broader view to address crowd counting from the perspective of semantic modeling. In essence, crowd counting is a task of pedestrian semantic analysis involving three key factors: pedestrians, heads, and their context structure. The information of different body parts is an important cue to help us judge whether there exists a person at a certain position. Existing methods usually perform crowd counting from the perspective of directly modeling the visual properties of either the whole body or the heads only, without explicitly capturing the composite body-part semantic structure information that is crucial for crowd counting. In our approach, we first formulate the key factors of crowd counting as semantic scene models. Then, we convert the crowd counting problem into a multi-task learning problem, such that the semantic scene models are turned into different sub-tasks. Finally, the deep convolutional neural networks are used to learn the sub-tasks in a unified scheme. Our approach encodes the semantic nature of crowd counting and provides a novel solution in terms of pedestrian semantic analysis. In experiments, our approach outperforms the state-of-the-art methods on four benchmark crowd counting data sets. The semantic structure information is demonstrated to be an effective cue in scene of crowd counting.",
"title": ""
},
{
"docid": "c61f68104b2d058acb0d16c89e4b1454",
"text": "Recently, training with adversarial examples, which are generated by adding a small but worst-case perturbation on input examples, has improved the generalization performance of neural networks. In contrast to the biased individual inputs to enhance the generality, this paper introduces adversarial dropout, which is a minimal set of dropouts that maximize the divergence between 1) the training supervision and 2) the outputs from the network with the dropouts. The identified adversarial dropouts are used to automatically reconfigure the neural network in the training process, and we demonstrated that the simultaneous training on the original and the reconfigured network improves the generalization performance of supervised and semi-supervised learning tasks on MNIST, SVHN, and CIFAR-10. We analyzed the trained model to find the performance improvement reasons. We found that adversarial dropout increases the sparsity of neural networks more than the standard dropout. Finally, we also proved that adversarial dropout is a regularization term with a rank-valued hyper parameter that is different from a continuous-valued parameter to specify the strength of the regularization.",
"title": ""
},
{
"docid": "ab47dbcafba637ae6e3b474642439bd3",
"text": "Ear detection from a profile face image is an important step in many applications including biometric recognition. But accurate and rapid detection of the ear for real-time applications is a challenging task, particularly in the presence of occlusions. In this work, a cascaded AdaBoost based ear detection approach is proposed. In an experiment with a test set of 203 profile face images, all the ears were accurately detected by the proposed detector with a very low (5 x 10-6) false positive rate. It is also very fast and relatively robust to the presence of occlusions and degradation of the ear images (e.g. motion blur). The detection process is fully automatic and does not require any manual intervention.",
"title": ""
},
{
"docid": "fef45863bc531960dbf2a7783995bfdb",
"text": "The main goal of facial attribute recognition is to determine various attributes of human faces, e.g. facial expressions, shapes of mouth and nose, headwears, age and race, by extracting features from the images of human faces. Facial attribute recognition has a wide range of potential application, including security surveillance and social networking. The available approaches, however, fail to consider the correlations and heterogeneities between different attributes. This paper proposes that by utilizing these correlations properly, an improvement can be achieved on the recognition of different attributes. Therefore, we propose a facial attribute recognition approach based on the grouping of different facial attribute tasks and a multi-task CNN structure. Our approach can fully utilize the correlations between attributes, and achieve a satisfactory recognition result on a large number of attributes with limited amount of parameters. Several modifications to the traditional architecture have been tested in the paper, and experiments have been conducted to examine the effectiveness of our approach.",
"title": ""
},
{
"docid": "2f9b8ee2f7578c7820eced92fb98c696",
"text": "The Tic tac toe is very popular game having a 3 × 3 grid board and 2 players. A Special Symbol (X or O) is assigned to each player to indicate the slot is covered by the respective player. The winner of the game is the player who first cover a horizontal, vertical and diagonal row of the board having only player's own symbols. This paper presents the design model of Tic tac toe Game using Multi-Tape Turing Machine in which both player choose input randomly and result of the game is declared. The computational Model of Tic tac toe is used to describe it in a formal manner.",
"title": ""
},
{
"docid": "3c203c55c925fb3f78506d46b8b453a8",
"text": "In this paper, we provide combinatorial interpretations for some determinantal identities involving Fibonacci numbers. We use the method due to Lindström-Gessel-Viennot in which we count nonintersecting n-routes in carefully chosen digraphs in order to gain insight into the nature of some well-known determinantal identities while allowing room to generalize and discover new ones.",
"title": ""
},
{
"docid": "5705022b0a08ca99d4419485f3c03eaa",
"text": "In this paper, we propose a wireless sensor network paradigm for real-time forest fire detection. The wireless sensor network can detect and forecast forest fire more promptly than the traditional satellite-based detection approach. This paper mainly describes the data collecting and processing in wireless sensor networks for real-time forest fire detection. A neural network method is applied to in-network data processing. We evaluate the performance of our approach by simulations.",
"title": ""
},
{
"docid": "673674dd11047747db79e5614daa4974",
"text": "Distracted driving is one of the main causes of vehicle collisions in the United States. Passively monitoring a driver's activities constitutes the basis of an automobile safety system that can potentially reduce the number of accidents by estimating the driver's focus of attention. This paper proposes an inexpensive vision-based system to accurately detect Eyes Off the Road (EOR). The system has three main components: 1) robust facial feature tracking; 2) head pose and gaze estimation; and 3) 3-D geometric reasoning to detect EOR. From the video stream of a camera installed on the steering wheel column, our system tracks facial features from the driver's face. Using the tracked landmarks and a 3-D face model, the system computes head pose and gaze direction. The head pose estimation algorithm is robust to nonrigid face deformations due to changes in expressions. Finally, using a 3-D geometric analysis, the system reliably detects EOR.",
"title": ""
},
{
"docid": "c281538d7aa7bd8727ce4718de82c7c8",
"text": "More than 15 years after model predictive control (MPC) appeared in industry as an effective means to deal with multivariable constrained control problems, a theoretical basis for this technique has started to emerge. The issues of feasibility of the on-line optimization, stability and performance are largely understood for systems described by linear models. Much progress has been made on these issues for non-linear systems but for practical applications many questions remain, including the reliability and efficiency of the on-line computation scheme. To deal with model uncertainty ‘rigorously’ an involved dynamic programming problem must be solved. The approximation techniques proposed for this purpose are largely at a conceptual stage. Among the broader research needs the following areas are identified: multivariable system identification, performance monitoring and diagnostics, non-linear state estimation, and batch system control. Many practical problems like control objective prioritization and symptom-aided diagnosis can be integrated systematically and effectively into the MPC framework by expanding the problem formulation to include integer variables yielding a mixed-integer quadratic or linear program. Efficient techniques for solving these problems are becoming available. © 1999 Elsevier Science Ltd. All rights reserved.",
"title": ""
},
{
"docid": "fa2ba8897c9dcd087ea01de2caaed9e4",
"text": "This paper aims to investigate the relationship between library anxiety and emotional intelligence of Bushehr University of Medical Sciences’ students and Persian Gulf University’s students in Bushehr municipality. In this descriptive study which is of correlation type, 700 students of Bushehr University of Medical Sciences and the Persian Gulf University selected through stratified random sampling. Required data has been collected using normalized Siberia Shrink’s emotional intelligence questionnaire and localized Bostick’s library anxiety scale. The results show that the rate of library anxiety among students is less than average (91.73%) except “mechanical factors”. There is not a significant difference in all factors of library anxiety except “interaction with librarian” between male and female. The findings also indicate that there is a negative significant relationship between library anxiety and emotional intelligence (r= -0.41). According to the results, it seems that by improving the emotional intelligence we can decrease the rate of library anxiety among students during their search in a library. Emotional intelligence can optimize academic library’s productivity.",
"title": ""
},
{
"docid": "dcee61dad66f59b2450a3e154726d6b1",
"text": "Mussels are marine organisms that have been mimicked due to their exceptional adhesive properties to all kind of surfaces, including rocks, under wet conditions. The proteins present on the mussel's foot contain 3,4-dihydroxy-l-alanine (DOPA), an amino acid from the catechol family that has been reported by their adhesive character. Therefore, we synthesized a mussel-inspired conjugated polymer, modifying the backbone of hyaluronic acid with dopamine by carbodiimide chemistry. Ultraviolet-visible (UV-vis) spectroscopy and nuclear magnetic resonance (NMR) techniques confirmed the success of this modification. Different techniques have been reported to produce two-dimensional (2D) or three-dimensional (3D) systems capable to support cells and tissue regeneration; among others, multilayer systems allow the construction of hierarchical structures from nano- to macroscales. In this study, the layer-by-layer (LbL) technique was used to produce freestanding multilayer membranes made uniquely of chitosan and dopamine-modified hyaluronic acid (HA-DN). The electrostatic interactions were found to be the main forces involved in the film construction. The surface morphology, chemistry, and mechanical properties of the freestanding membranes were characterized, confirming the enhancement of the adhesive properties in the presence of HA-DN. The MC3T3-E1 cell line was cultured on the surface of the membranes, demonstrating the potential of these freestanding multilayer systems to be used for bone tissue engineering.",
"title": ""
},
{
"docid": "31dbf3fcd1a70ad7fb32fb6e69ef88e3",
"text": "OBJECTIVE\nHealth care researchers have not taken full advantage of the potential to effectively convey meaning in their multivariate data through graphical presentation. The aim of this paper is to translate knowledge from the fields of analytical chemistry, toxicology, and marketing research to the field of medicine by introducing the radar plot, a useful graphical display method for multivariate data.\n\n\nSTUDY DESIGN AND SETTING\nDescriptive study based on literature review.\n\n\nRESULTS\nThe radar plotting technique is described, and examples are used to illustrate not only its programming language, but also the differences in tabular and bar chart approaches compared to radar-graphed data displays.\n\n\nCONCLUSION\nRadar graphing, a form of radial graphing, could have great utility in the presentation of health-related research, especially in situations in which there are large numbers of independent variables, possibly with different measurement scales. This technique has particular relevance for researchers who wish to illustrate the degree of multiple-group similarity/consensus, or group differences on multiple variables in a single graphical display.",
"title": ""
},
{
"docid": "206263c06b0d41725aeec7844f3b3a01",
"text": "Basic properties of the operational transconductance amplifier (OTA) are discussed. Applications of the OTA in voltage-controlled amplifiers, filters, and impedances are presented. A versatile family of voltage-controlled filter sections suitable for systematic design requirements is described. The total number of components used in these circuits is small, and the design equations and voltage-control characteristics are attractive. Limitations as well as practical considerations of OTA-based filters using commercially available bipolar OTAs are discussed. Applications of OTAs in continuous-time monolithic filters are considered.",
"title": ""
},
{
"docid": "9b3a39ddeadd14ea5a50be8ac2057a26",
"text": "0 7 4 0 7 4 5 9 / 0 0 / $ 1 0 . 0 0 © 2 0 0 0 I E E E J u l y / A u g u s t 2 0 0 0 I E E E S O F T W A R E 19 design, algorithm, code, or test—does indeed improve software quality and reduce time to market. Additionally, student and professional programmers consistently find pair programming more enjoyable than working alone. Yet most who have not tried and tested pair programming reject the idea as a redundant, wasteful use of programming resources: “Why would I put two people on a job that just one can do? I can’t afford to do that!” But we have found, as Larry Constantine wrote, that “Two programmers in tandem is not redundancy; it’s a direct route to greater efficiency and better quality.”1 Our supportive evidence comes from professional programmers and from advanced undergraduate students who participated in a structured experiment. The experimental results show that programming pairs develop better code faster with only a minimal increase in prerelease programmer hours. These results apply to all levels of programming skill from novice to expert.",
"title": ""
},
{
"docid": "d76b7b25bce29cdac24015f8fa8ee5bb",
"text": "A circularly polarized magnetoelectric dipole antenna with high efficiency based on printed ridge gap waveguide is presented. The antenna gain is improved by using a wideband lens in front of the antennas. The lens consists of three layers dual-polarized mu-near zero (MNZ) inclusions. Each layer consists of a <inline-formula> <tex-math notation=\"LaTeX\">$3\\times4$ </tex-math></inline-formula> MNZ unit cell. The measured results indicate that the magnitude of <inline-formula> <tex-math notation=\"LaTeX\">$S_{11}$ </tex-math></inline-formula> is below −10 dB in the frequency range of 29.5–37 GHz. The resulting 3-dB axial ratio is over a frequency range of 32.5–35 GHz. The measured realized gain of the antenna is more than 10 dBi over a frequency band of 31–35 GHz achieving a radiation efficiency of 94% at 34 GHz.",
"title": ""
},
{
"docid": "e30df718ca1981175e888755cce3ce90",
"text": "Human identification at distance by analysis of gait patterns extracted from video has recently become very popular research in biometrics. This paper presents multi-projections based approach to extract gait patterns for human recognition. Binarized silhouette of a motion object is represented by 1-D signals which are the basic image features called the distance vectors. The distance vectors are differences between the bounding box and silhouette, and extracted using four projections to silhouette. Eigenspace transformation is applied to time-varying distance vectors and the statistical distance based supervised pattern classification is then performed in the lower-dimensional eigenspace for human identification. A fusion strategy developed is finally executed to produce final decision. Based on normalized correlation on the distance vectors, gait cycle estimation is also performed to extract the gait cycle. Experimental results on four databases demonstrate that the right person in top two matches 100% of the times for the cases where training and testing sets corresponds to the same walking styles, and in top three-four matches 100% of the times for training and testing sets corresponds to the different walking styles.",
"title": ""
},
{
"docid": "c5eb252d17c2bec8ab168ca79ec11321",
"text": "Personalization is pervasive in the online space as it leads to higher efficiency and revenue by allowing the most relevant content to be served to each user. However, recent studies suggest that personalization methods can propagate societal or systemic biases and polarize opinions; this has led to calls for regulatory mechanisms and algorithms to combat bias and inequality. Algorithmically, bandit optimization has enjoyed great success in learning user preferences and personalizing content or feeds accordingly. We propose an algorithmic framework that allows for the possibility to control bias or discrimination in such bandit-based personalization. Our model allows for the specification of general fairness constraints on the sensitive types of the content that can be displayed to a user. The challenge, however, is to come up with a scalable and low regret algorithm for the constrained optimization problem that arises. Our main technical contribution is a provably fast and low-regret algorithm for the fairness-constrained bandit optimization problem. Our proofs crucially leverage the special structure of our problem. Experiments on synthetic and real-world data sets show that our algorithmic framework can control bias with only a minor loss to revenue. ∗A short version of this paper appeared in the FAT/ML 2017 workshop (https://arxiv.org/abs/1707.02260) 1 ar X iv :1 80 2. 08 67 4v 1 [ cs .L G ] 2 3 Fe b 20 18",
"title": ""
},
{
"docid": "3db3308b3f98563390e8f21e565798b7",
"text": "RDF question/answering (Q/A) allows users to ask questions in natural languages over a knowledge base represented by RDF. To answer a natural language question, the existing work takes a two-stage approach: question understanding and query evaluation. Their focus is on question understanding to deal with the disambiguation of the natural language phrases. The most common technique is the joint disambiguation, which has the exponential search space. In this paper, we propose a systematic framework to answer natural language questions over RDF repository (RDF Q/A) from a graph data-driven perspective. We propose a semantic query graph to model the query intention in the natural language question in a structural way, based on which, RDF Q/A is reduced to subgraph matching problem. More importantly, we resolve the ambiguity of natural language questions at the time when matches of query are found. The cost of disambiguation is saved if there are no matching found. More specifically, we propose two different frameworks to build the semantic query graph, one is relation (edge)-first and the other one is node-first. We compare our method with some state-of-the-art RDF Q/A systems in the benchmark dataset. Extensive experiments confirm that our method not only improves the precision but also speeds up query performance greatly.",
"title": ""
},
{
"docid": "23ff0b54dcef99754549275eb6714a9a",
"text": "The HCI community has developed guidelines and recommendations for improving the usability system that are usually applied at the last stages of the software development process. On the other hand, the SE community has developed sound methods to elicit functional requirements in the early stages, but usability has been relegated to the last stages together with other nonfunctional requirements. Therefore, there are no methods of usability requirements elicitation to develop software within both communities. An example of this problem arises if we focus on the Model-Driven Development paradigm, where the methods and tools that are used to develop software do not support usability requirements elicitation. In order to study the existing publications that deal with usability requirements from the first steps of the software development process, this work presents a mapping study. Our aim is to compare usability requirements methods and to identify the strong points of each one.",
"title": ""
},
{
"docid": "a6d550a64dc633e50ee2b21255344e7b",
"text": "Sentiment classification is a much-researched field that identifies positive or negative emotions in a large number of texts. Most existing studies focus on document-based approaches and documents are represented as bag-of word. Therefore, this feature representation fails to obtain the relation or associative information between words and it can't distinguish different opinions of a sentiment word with different targets. In this paper, we present a dependency tree-based sentence-level sentiment classification approach. In contrast to a document, a sentence just contains little information and a small set of features which can be used for the sentiment classification. So we not only capture flat features (bag-of-word), but also extract structured features from the dependency tree of a sentence. We propose a method to add more information to the dependency tree and provide an algorithm to prune dependency tree to reduce the noisy, and then introduce a convolution tree kernel-based approach to the sentence-level sentiment classification. The experimental results show that our dependency tree-based approach achieved significant improvement, particularly for implicit sentiment classification.",
"title": ""
}
] |
scidocsrr
|
eaae75ea41536abc581cd11693810975
|
Estimating or Propagating Gradients Through Stochastic Neurons for Conditional Computation
|
[
{
"docid": "3223563162967868075a43ca86c1d31a",
"text": "Deep learning research aims at discovering learning algorithms that discover multiple levels of distributed representations, with higher levels representing more abstract concepts. Although the study of deep learning has already led to impressive theoretical results, learning algorithms and breakthrough experiments, several challenges lie ahead. This paper proposes to examine some of these challenges, centering on the questions of scaling deep learning algorithms to much larger models and datasets, reducing optimization difficulties due to ill-conditioning or local minima, designing more efficient and powerful inference and sampling procedures, and learning to disentangle the factors of variation underlying the observed data. It also proposes a few forward-looking research directions aimed at overcoming these",
"title": ""
},
{
"docid": "cff671af6a7a170fac2daf6acd9d1e3e",
"text": "We show how to learn a deep graphical model of the word-count vectors obtained from a large set of documents. The values of the latent variables in the deepest layer are easy to infer and gi ve a much better representation of each document than Latent Sem antic Analysis. When the deepest layer is forced to use a small numb er of binary variables (e.g. 32), the graphical model performs “semantic hashing”: Documents are mapped to memory addresses in such a way that semantically similar documents are located at near by ddresses. Documents similar to a query document can then be fo und by simply accessing all the addresses that differ by only a fe w bits from the address of the query document. This way of extending the efficiency of hash-coding to approximate matching is much fa ster than locality sensitive hashing, which is the fastest curre nt method. By using semantic hashing to filter the documents given to TFID , we achieve higher accuracy than applying TF-IDF to the entir document set.",
"title": ""
}
] |
[
{
"docid": "df7a68ebb9bc03d8a73a54ab3474373f",
"text": "We report on the implementation of a color-capable sub-pixel resolving optofluidic microscope based on the pixel super-resolution algorithm and sequential RGB illumination, for low-cost on-chip color imaging of biological samples with sub-cellular resolution.",
"title": ""
},
{
"docid": "a522072914b33af2611896cac9613cb4",
"text": "Relation Extraction refers to the task of populating a database with tuples of the form r(e1, e2), where r is a relation and e1, e2 are entities. Distant supervision is one such technique which tries to automatically generate training examples based on an existing KB such as Freebase. This paper is a survey of some of the techniques in distant supervision which primarily rely on Probabilistic Graphical Models (PGMs).",
"title": ""
},
{
"docid": "296602c0884ea9c330a6fc8e33a7b722",
"text": "The skin is a major exposure route for many potentially toxic chemicals. It is, therefore, important to be able to predict the permeability of compounds through skin under a variety of conditions. Available skin permeability databases are often limited in scope and not conducive to developing effective models. This sparseness and ambiguity of available data prompted the use of fuzzy set theory to model and predict skin permeability. Using a previously published database containing 140 compounds, a rule-based Takagi–Sugeno fuzzy model is shown to predict skin permeability of compounds using octanol-water partition coefficient, molecular weight, and temperature as inputs. Model performance was estimated using a cross-validation approach. In addition, 10 data points were removed prior to model development for additional testing with new data. The fuzzy model is compared to a regression model for the same inputs using both R2 and root mean square error measures. The quality of the fuzzy model is also compared with previously published models. The statistical analysis demonstrates that the fuzzy model performs better than the regression model with identical data and validation protocols. The prediction quality for this model is similar to others that were published. The fuzzy model provides insights on the relationships between lipophilicity, molecular weight, and temperature on percutaneous penetration. This model can be used as a tool for rapid determination of initial estimates of skin permeability.",
"title": ""
},
{
"docid": "0153774b49121d8735cc3d33df69fc00",
"text": "A common requirement of many empirical software engineering studies is the acquisition and curation of data from software repositories. During the last few years, GitHub has emerged as a popular project hosting, mirroring and collaboration platform. GitHub provides an extensive rest api, which enables researchers to retrieve both the commits to the projects' repositories and events generated through user actions on project resources. GHTorrent aims to create a scalable off line mirror of GitHub's event streams and persistent data, and offer it to the research community as a service. In this paper, we present the project's design and initial implementation and demonstrate how the provided datasets can be queried and processed.",
"title": ""
},
{
"docid": "2450ccfdff4503fc642550a876976f10",
"text": "The purpose of this paper is to introduce sequential investment strategies that guarantee an optimal rate of growth of the capital, under minimal assumptions on the behavior of the market. The new strategies are analyzed both theoretically and empirically. The theoretical results show that the asymptotic rate of growth matches the optimal one that one could achieve with a full knowledge of the statistical properties of the underlying process generating the market, under the only assumption that the market is stationary and ergodic. The empirical results show that the performance of the proposed investment strategies measured on past NYSE and currency exchange data is solid, and sometimes even spectacular.",
"title": ""
},
{
"docid": "8ffb63dcee3bc0f541e3ec0df0d46be5",
"text": "In this paper, we show the existence of small coresets for the problems of computing k-median and kmeans clustering for points in low dimension. In other words, we show that given a point set P in <, one can compute a weighted set S ⊆ P , of size O(kε−d log n), such that one can compute the k-median/means clustering on S instead of on P , and get an (1 + ε)-approximation. As a result, we improve the fastest known algorithms for (1+ε)-approximate k-means and k-median. Our algorithms have linear running time for a fixed k and ε. In addition, we can maintain the (1+ε)-approximate k-median or k-means clustering of a stream when points are being only inserted, using polylogarithmic space and update time.",
"title": ""
},
{
"docid": "84e926e7b255a3c45e0cb515804250c3",
"text": "User-driven access control improves the coarse-grained access control of current operating systems (particularly in the mobile space) that provide only all-or-nothing access to a resource such as the camera or the current location. By granting appropriate permissions only in response to explicit user actions (for example, pressing a camera button), user-driven access control better aligns application actions with user expectations. Prior work on user-driven access control has relied in essential ways on operating system (OS) modifications to provide applications with uncompromisable access control gadgets, distinguished user interface (UI) elements that can grant access permissions. This work presents a design, implementation, and evaluation of user-driven access control that works with no OS modifications, thus making deployability and incremental adoption of the model more feasible. We develop (1) a user-level trusted library for access control gadgets, (2) static analyses to prevent malicious creation of UI events, illegal flows of sensitive information, and circumvention of our library, and (3) dynamic analyses to ensure users are not tricked into granting permissions. In addition to providing the original user-driven access control guarantees, we use static information flow to limit where results derived from sensitive sources may flow in an application.\n Our implementation targets Android applications. We port open-source applications that need interesting resource permissions to use our system. We determine in what ways user-driven access control in general and our implementation in particular are good matches for real applications. We demonstrate that our system is secure against a variety of attacks that malware on Android could otherwise mount.",
"title": ""
},
{
"docid": "2fbcd34468edf53ee08e0a76a048c275",
"text": "Recently, the introduction of the generative adversarial network (GAN) and its variants has enabled the generation of realistic synthetic samples, which has been used for enlarging training sets. Previous work primarily focused on data augmentation for semi-supervised and supervised tasks. In this paper, we instead focus on unsupervised anomaly detection and propose a novel generative data augmentation framework optimized for this task. In particular, we propose to oversample infrequent normal samples - normal samples that occur with small probability, e.g., rare normal events. We show that these samples are responsible for false positives in anomaly detection. However, oversampling of infrequent normal samples is challenging for real-world high-dimensional data with multimodal distributions. To address this challenge, we propose to use a GAN variant known as the adversarial autoencoder (AAE) to transform the high-dimensional multimodal data distributions into low-dimensional unimodal latent distributions with well-defined tail probability. Then, we systematically oversample at the 'edge' of the latent distributions to increase the density of infrequent normal samples. We show that our oversampling pipeline is a unified one: it is generally applicable to datasets with different complex data distributions. To the best of our knowledge, our method is the first data augmentation technique focused on improving performance in unsupervised anomaly detection. We validate our method by demonstrating consistent improvements across several real-world datasets.",
"title": ""
},
{
"docid": "3b7c0a822c5937ac9e4d702bb23e3432",
"text": "In a video surveillance system with static cameras, object segmentation often fails when part of the object has similar color with the background, resulting in poor performance of the subsequent object tracking. Multiple kernels have been utilized in object tracking to deal with occlusion, but the performance still highly depends on segmentation. This paper presents an innovative system, named Multiple-kernel Adaptive Segmentation and Tracking (MAST), which dynamically controls the decision thresholds of background subtraction and shadow removal around the adaptive kernel regions based on the preliminary tracking results. Then the objects are tracked for the second time according to the adaptively segmented foreground. Evaluations of both segmentation and tracking on benchmark datasets and our own recorded video sequences demonstrate that the proposed method can successfully track objects in similar-color background and/or shadow areas with favorable segmentation performance.",
"title": ""
},
{
"docid": "9e766871b172f7a752c8af629bd10856",
"text": "A fundamental computational limit on automated reasoning and its effect on Knowledge Representation is examined. Basically, the problem is that it can be more difficult to reason correctly ;Nith one representationallanguage than with another and, moreover, that this difficulty increases dramatically as the expressive power of the language increases. This leads to a tradeoff between the expressiveness of a representational language and its computational tractability. Here we show that this tradeoff can be seen to underlie the differences among a number of existing representational formalisms, in addition to motivating many of the current research issues in Knowledge Representation.",
"title": ""
},
{
"docid": "37edb948f37baa14aff4843d3f83e69b",
"text": "This article concerns the manner in which group interaction during focus groups impacted upon the data generated in a study of adolescent sexual health. Twenty-nine group interviews were conducted with secondary school pupils in Ireland, and data were subjected to a qualitative analysis. In exploring the relationship between method and theory generation, we begin by focusing on the ethnographic potential within group interviews. We propose that at times during the interviews, episodes of acting-out, or presenting a particular image in the presence of others, can be highly revealing in attempting to understand the normative rules embedded in the culture from which participants are drawn. However, we highlight a specific problem with distinguishing which parts of the group interview are a valid representation of group processes and which parts accurately reflect individuals' retrospective experiences of reality. We also note that at various points in the interview, focus groups have the potential to reveal participants' vulnerabilities. In addition, group members themselves can challenge one another on how aspects of their sub-culture are represented within the focus group, in a way that is normally beyond reach within individual interviews. The formation and composition of focus groups, particularly through the clustering of like-minded individuals, can affect the dominant views being expressed within specific groups. While focus groups have been noted to have an educational and transformative potential, we caution that they may also be a source of inaccurate information, placing participants at risk. Finally, the opportunities that focus groups offer in enabling researchers to cross-check the trustworthiness of data using a post-interview questionnaire are considered. We conclude by arguing that although far from flawless, focus groups are a valuable method for gathering data about health issues.",
"title": ""
},
{
"docid": "1796b8d91de88303571cc6f3f66b580b",
"text": "In this paper it is shown that bifilar of a Quadrifilar Helix Antenna (QHA) when designed in side-fed configuration at a given diameter and length of helical arm, effectively becomes equivalent to combination of a loop and a dipole antenna. The vertical and horizontal electric fields caused by these equivalent antennas can be made to vary by changing the turn angle of the bifilar. It is shown how the variation in horizontal and vertical electric field dominance is seen until perfect circular polarization is achieved when two fields are equal at a certain turn angle where area of the loop equals product of pitch of helix and radian length i.e. equivalent dipole length. The antenna is low profile and does not require ground plane and thus can be used in high speed aerodynamic and platform bodies made of composite material where metallic ground is unavailable. Additionally not requiring ground plane increases the isolation between the antennas with stable radiation pattern and hence can be used in MIMO systems.",
"title": ""
},
{
"docid": "0bc68769c263973309b7f19a8bc7d06d",
"text": "The publication of a scholarly book is always the conjunction of an author’s desire (or need) to disseminate their experience and knowledge and the interest or expectations of a potential community of readers to gain benefit from the publication itself. Michael Piotrowski has indeed managed to optimize this relation by bringing to the public a compendium of information that I think has been heavily awaited by many scholars having to deal with corpora of historical texts. The book covers most topics related to the acquisition, encoding, and annotation of historical textual data, seen from the point of view of their linguistic content. As such, it does not address issues related, for instance, to scholarly editions of these texts, but conveys a wealth of information on the various aspects where recent developments in language technology may help digital humanities projects to be aware of the current state of the art in the field.",
"title": ""
},
{
"docid": "52d4f95b6dc6da7d5dd54003b0bc5fbf",
"text": "Leadership is a process directing to a target of which followers, the participators are shared. For this reason leadership has an important effect on succeeding organizational targets. More importance is given to the leadership studies in order to increase organizational success each day. One of the leadership researches that attracts attention recently is spiritual leadership. Spiritual leadership (SL) is important for imposing ideal to the followers and giving meaning to the works they do. Focusing on SL that has recently taken its place in leadership literature, this study looks into what extend faculty members teaching at Faculty of Education display SL qualities. The study is in descriptive scanning model. 1819 students studying at Kocaeli University Faculty of Education in 2009-2010 academic year constitute the universe of the study. Observing leadership qualities takes long time. Therefore, the sample of the study is determined by deliberate sampling method and includes 432 students studying at the last year of the faculty. Data regarding faculty members' SL qualities were collected using a questionnaire adapted from Fry's (2003) 'Spiritual Leadership Scale'. Consequently, university students think that academic stuff shows the features of SL and its sub dimensions in a medium level. According to students, academicians show attitudes related to altruistic love rather than faith and vision. It is found that faculty members couldn't display leadership qualities enough according to the students at the end of the study. © 2011 Published by Elsevier Ltd.",
"title": ""
},
{
"docid": "3b5e584b95ae31ff94be85d7dbea1ccb",
"text": "Due to the fact that no NP-complete problem can be solved in polynomial time (unless P=NP), many approximability results (both positive and negative) of NP-hard optimization problems have appeared in the technical literature. In this compendium, we collect a large number of these results. ● Introduction ❍ NPO Problems: Definitions and Preliminaries ❍ Approximate Algorithms and Approximation Classes ❍ Completeness in Approximation Classes ❍ A list of NPO problems ❍ Improving the compendium ● Graph Theory ❍ Covering and Partitioning ❍ Subgraphs and Supergraphs ❍ Vertex Ordering file:///E|/COMPEND/COMPED19/COMPENDI.HTM (1 of 2) [19/1/2003 1:36:58] A compendium of NP optimization problems ❍ Isoand Other Morphisms ❍ Miscellaneous ● Network Design ❍ Spanning Trees ❍ Cuts and Connectivity ❍ Routing Problems ❍ Flow Problems ❍ Miscellaneous ● Sets and Partitions ❍ Covering, Hitting, and Splitting ❍ Weighted Set Problems ● Storage and Retrieval ❍ Data Storage ❍ Compression and Representation ❍ Miscellaneous ● Sequencing and Scheduling ❍ Sequencing on One Processor ❍ Multiprocessor Scheduling ❍ Shop Scheduling ❍ Miscellaneous ● Mathematical Programming ● Algebra and Number Theory ● Games and Puzzles ● Logic ● Program Optimization ● Miscellaneous ● References ● Index ● About this document ... Viggo Kann Mon Apr 21 13:07:14 MET DST 1997 file:///E|/COMPEND/COMPED19/COMPENDI.HTM (2 of 2) [19/1/2003 1:36:58]",
"title": ""
},
{
"docid": "0b7142ade987ca6f2683fc3fe6179fcb",
"text": "The Psychophysics Toolbox is a software package that supports visual psychophysics. Its routines provide an interface between a high-level interpreted language (MATLAB on the Macintosh) and the video display hardware. A set of example programs is included with the Toolbox distribution.",
"title": ""
},
{
"docid": "53c836280ad99b28c892ef85f31a5985",
"text": "This paper focuses on the design of 1 bit full adder circuit using Gate Diffusion Input Logic. The proposed adder schematics are developed using DSCH2 CAD tool, and their layouts are generated with Microwind 3 VLSI CAD tool. A 1 bit adder circuits are analyzed using standard CMOS 120nm features with corresponding voltage of 1.2V. The Simulated results of the proposed adder is compared with those of Pass transistor, Transmission Function, and CMOS based adder circuits. The proposed adder dissipates low power and responds faster.",
"title": ""
},
{
"docid": "55bdb8b6f4dd3dc836e9751ae8d721e3",
"text": "Influence maximization in social networks has been widely studied motivated by applications like spread of ideas or innovations in a network and viral marketing of products. Current studies focus almost exclusively on unsigned social networks containing only positive relationships (e.g. friend or trust) between users. Influence maximization in signed social networks containing both positive relationships and negative relationships (e.g. foe or distrust) between users is still a challenging problem that has not been studied. Thus, in this paper, we propose the polarity-related influence maximization (PRIM) problem which aims to find the seed node set with maximum positive influence or maximum negative influence in signed social networks. To address the PRIM problem, we first extend the standard Independent Cascade (IC) model to the signed social networks and propose a Polarity-related Independent Cascade (named IC-P) diffusion model. We prove that the influence function of the PRIM problem under the IC-P model is monotonic and submodular Thus, a greedy algorithm can be used to achieve an approximation ratio of 1-1/e for solving the PRIM problem in signed social networks. Experimental results on two signed social network datasets, Epinions and Slashdot, validate that our approximation algorithm for solving the PRIM problem outperforms state-of-the-art methods.",
"title": ""
},
{
"docid": "6bc31257bfbcc9531a3acf1ec738c790",
"text": "BACKGROUND\nThe interaction of depression and anesthesia and surgery may result in significant increases in morbidity and mortality of patients. Major depressive disorder is a frequent complication of surgery, which may lead to further morbidity and mortality.\n\n\nLITERATURE SEARCH\nSeveral electronic data bases, including PubMed, were searched pairing \"depression\" with surgery, postoperative complications, postoperative cognitive impairment, cognition disorder, intensive care unit, mild cognitive impairment and Alzheimer's disease.\n\n\nREVIEW OF THE LITERATURE\nThe suppression of the immune system in depressive disorders may expose the patients to increased rates of postoperative infections and increased mortality from cancer. Depression is commonly associated with cognitive impairment, which may be exacerbated postoperatively. There is evidence that acute postoperative pain causes depression and depression lowers the threshold for pain. Depression is also a strong predictor and correlate of chronic post-surgical pain. Many studies have identified depression as an independent risk factor for development of postoperative delirium, which may be a cause for a long and incomplete recovery after surgery. Depression is also frequent in intensive care unit patients and is associated with a lower health-related quality of life and increased mortality. Depression and anxiety have been widely reported soon after coronary artery bypass surgery and remain evident one year after surgery. They may increase the likelihood for new coronary artery events, further hospitalizations and increased mortality. Morbidly obese patients who undergo bariatric surgery have an increased risk of depression. Postoperative depression may also be associated with less weight loss at one year and longer. The extent of preoperative depression in patients scheduled for lumbar discectomy is a predictor of functional outcome and patient's dissatisfaction, especially after revision surgery. General postoperative mortality is increased.\n\n\nCONCLUSIONS\nDepression is a frequent cause of morbidity in surgery patients suffering from a wide range of conditions. Depression may be identified through the use of Patient Health Questionnaire-9 or similar instruments. Counseling interventions may be useful in ameliorating depression, but should be subject to clinical trials.",
"title": ""
},
{
"docid": "4b5d5d4da56ad916afdad73cc0180cb5",
"text": "This work proposes a substrate integrated waveguide (SIW) power divider employing the Wilkinson configuration for improving the isolation performance of conventional T-junction SIW power dividers. Measurement results at 15GHz show that the isolation (S23, S32) between output ports is about 17 dB and the output return losses (S22, S33) are about 14.5 dB, respectively. The Wilkinson-type performance has been greatly improved from those (7.0 dB ∼ 8.0 dB) of conventional T-junction SIW power dividers. The measured input return loss (23 dB) and average insertion loss (3.9 dB) are also improved from those of conventional ones. The proposed Wilkinson SIW divider will play an important role in high performance SIW circuits involving power divisions.",
"title": ""
}
] |
scidocsrr
|
797ec259cf5128e687eb9748f3e338f9
|
Chronic insomnia and its negative consequences for health and functioning of adolescents: a 12-month prospective study.
|
[
{
"docid": "b6bf6c87040bc4996315fee62acb911b",
"text": "The influence of the sleep patterns of 2,259 students, aged 11 to 14 years, on trajectories of depressive symptoms, self-esteem, and grades was longitudinally examined using latent growth cross-domain models. Consistent with previous research, sleep decreased over time. Students who obtained less sleep in sixth grade exhibited lower initial self-esteem and grades and higher initial levels of depressive symptoms. Similarly, students who obtained less sleep over time reported heightened levels of depressive symptoms and decreased self-esteem. Sex of the student played a strong role as a predictor of hours of sleep, self-esteem, and grades. This study underscores the role of sleep in predicting adolescents' psychosocial outcomes and highlights the importance of using idiographic methodologies in the study of developmental processes.",
"title": ""
}
] |
[
{
"docid": "e510140bfc93089e69cb762b968de5e9",
"text": "Owing to the popularity of the PDF format and the continued exploitation of Adobe Reader, the detection of malicious PDFs remains a concern. All existing detection techniques rely on the PDF parser to a certain extent, while the complexity of the PDF format leaves an abundant space for parser confusion. To quantify the difference between these parsers and Adobe Reader, we create a reference JavaScript extractor by directly tapping into Adobe Reader at locations identified through a mostly automatic binary analysis technique. By comparing the output of this reference extractor against that of several opensource JavaScript extractors on a large data set obtained from VirusTotal, we are able to identify hundreds of samples which existing extractors fail to extract JavaScript from. By analyzing these samples we are able to identify several weaknesses in each of these extractors. Based on these lessons, we apply several obfuscations on a malicious PDF sample, which can successfully evade all the malware detectors tested. We call this evasion technique a PDF parser confusion attack. Lastly, we demonstrate that the reference JavaScript extractor improves the accuracy of existing JavaScript-based classifiers and how it can be used to mitigate these parser limitations in a real-world setting.",
"title": ""
},
{
"docid": "e9838d3c33d19bdd20a001864a878757",
"text": "FPGAs are increasingly popular as application-specific accelerators because they lead to a good balance between flexibility and energy efficiency, compared to CPUs and ASICs. However, the long routing time imposes a barrier on FPGA computing, which significantly hinders the design productivity. Existing attempts of parallelizing the FPGA routing either do not fully exploit the parallelism or suffer from an excessive quality loss. Massive parallelism using GPUs has the potential to solve this issue but faces non-trivial challenges.\n To cope with these challenges, this work presents Corolla, a GPU-accelerated FPGA routing method. Corolla enables applying the GPU-friendly shortest path algorithm in FPGA routing, leveraging the idea of problem size reduction by limiting the search in routing subgraphs. We maintain the convergence after problem size reduction using the dynamic expansion of the routing resource subgraphs. In addition, Corolla explores the fine-grained single-net parallelism and proposes a hybrid approach to combine the static and dynamic parallelism on GPU. To explore the coarse-grained multi-net parallelism, Corolla proposes an effective method to parallelize mutli-net routing while preserving the equivalent routing results as the original single-net routing. Experimental results show that Corolla achieves an average of 18.72x speedup on GPU with a tolerable loss in the routing quality and sustains a scalable speedup on large-scale routing graphs. To our knowledge, this is the first work to demonstrate the effectiveness of GPU-accelerated FPGA routing.",
"title": ""
},
{
"docid": "77d11e0b66f3543fadf91d0de4c928c9",
"text": "In the United States, the number of people over 65 will double between ow and 2030 to 69.4 million. Providing care for this increasing population becomes increasingly difficult as the cognitive and physical health of elders deteriorates. This survey article describes ome of the factors that contribute to the institutionalization of elders, and then presents some of the work done towards providing technological support for this vulnerable community.",
"title": ""
},
{
"docid": "02855c493744435d868d669a6ddedd1c",
"text": "Recurrent neural networks (RNNs), particularly long short-term memory (LSTM), have gained much attention in automatic speech recognition (ASR). Although some successful stories have been reported, training RNNs remains highly challenging, especially with limited training data. Recent research found that a well-trained model can be used as a teacher to train other child models, by using the predictions generated by the teacher model as supervision. This knowledge transfer learning has been employed to train simple neural nets with a complex one, so that the final performance can reach a level that is infeasible to obtain by regular training. In this paper, we employ the knowledge transfer learning approach to train RNNs (precisely LSTM) using a deep neural network (DNN) model as the teacher. This is different from most of the existing research on knowledge transfer learning, since the teacher (DNN) is assumed to be weaker than the child (RNN); however, our experiments on an ASR task showed that it works fairly well: without applying any tricks on the learning scheme, this approach can train RNNs successfully even with limited training data.",
"title": ""
},
{
"docid": "c6160b8ad36bc4f297bfb1f6b04c79e0",
"text": "Despite their incentive structure flaws, mining pools account for more than 95% of Bitcoin’s computation power. This paper introduces an attack against mining pools in which a malicious party pays pool members to withhold their solutions from their pool operator. We show that an adversary with a tiny amount of computing power and capital can execute this attack. Smart contracts enforce the malicious party’s payments, and therefore miners need neither trust the attacker’s intentions nor his ability to pay. Assuming pool members are rational, an adversary with a single mining ASIC can, in theory, destroy all big mining pools without losing any money (and even make some profit).",
"title": ""
},
{
"docid": "62c6050db8e42b1de54f8d1d54fd861f",
"text": "In this paper we present our approach of solving the PAN 2016 Author Profiling Task. It involves classifying users’ gender and age using social media posts. We used SVM classifiers and neural networks on TF-IDF and verbosity features. Results showed that SVM classifiers are better for English datasets and neural networks perform better for Dutch and Spanish datasets.",
"title": ""
},
{
"docid": "3e9f54363d930c703dfe20941b2568b0",
"text": "Organizations are looking to new graduate nurses to fill expected staffing shortages over the next decade. Creative and effective onboarding programs will determine the success or failure of these graduates as they transition from student to professional nurse. This longitudinal quantitative study with repeated measures used the Casey-Fink Graduate Nurse Experience Survey to investigate the effects of offering a prelicensure extern program and postlicensure residency program on new graduate nurses and organizational outcomes versus a residency program alone. Compared with the nurse residency program alone, the combination of extern program and nurse residency program improved neither the transition factors most important to new nurse graduates during their first year of practice nor a measure important to organizations, retention rates. The additional cost of providing an extern program should be closely evaluated when making financially responsible decisions.",
"title": ""
},
{
"docid": "68971b7efc9663c37113749206b5382b",
"text": "Trehalose 6-phosphate (Tre6P), the intermediate of trehalose biosynthesis, has a profound influence on plant metabolism, growth, and development. It has been proposed that Tre6P acts as a signal of sugar availability and is possibly specific for sucrose status. Short-term sugar-feeding experiments were carried out with carbon-starved Arabidopsis thaliana seedlings grown in axenic shaking liquid cultures. Tre6P increased when seedlings were exogenously supplied with sucrose, or with hexoses that can be metabolized to sucrose, such as glucose and fructose. Conditional correlation analysis and inhibitor experiments indicated that the hexose-induced increase in Tre6P was an indirect response dependent on conversion of the hexose sugars to sucrose. Tre6P content was affected by changes in nitrogen status, but this response was also attributable to parallel changes in sucrose. The sucrose-induced rise in Tre6P was unaffected by cordycepin but almost completely blocked by cycloheximide, indicating that de novo protein synthesis is necessary for the response. There was a strong correlation between Tre6P and sucrose even in lines that constitutively express heterologous trehalose-phosphate synthase or trehalose-phosphate phosphatase, although the Tre6P:sucrose ratio was shifted higher or lower, respectively. It is proposed that the Tre6P:sucrose ratio is a critical parameter for the plant and forms part of a homeostatic mechanism to maintain sucrose levels within a range that is appropriate for the cell type and developmental stage of the plant.",
"title": ""
},
{
"docid": "555e3bbc504c7309981559a66c584097",
"text": "The hippocampus has been implicated in the regulation of anxiety and memory processes. Nevertheless, the precise contribution of its ventral (VH) and dorsal (DH) division in these issues still remains a matter of debate. The Trial 1/2 protocol in the elevated plus-maze (EPM) is a suitable approach to assess features associated with anxiety and memory. Information about the spatial environment on initial (Trial 1) exploration leads to a subsequent increase in open-arm avoidance during retesting (Trial 2). The objective of the present study was to investigate whether transient VH or DH deactivation by lidocaine microinfusion would differently interfere with the performance of EPM-naive and EPM-experienced rats. Male Wistar rats were bilaterally-implanted with guide cannulas aimed at the VH or the DH. One-week after surgery, they received vehicle or lidocaine 2.0% in 1.0 microL (0.5 microL per side) at pre-Trial 1, post-Trial 1 or pre-Trial 2. There was an increase in open-arm exploration after the intra-VH lidocaine injection on Trial 1. Intra-DH pre-Trial 2 administration of lidocaine also reduced the open-arm avoidance. No significant changes were observed in enclosed-arm entries, an EPM index of general exploratory activity. The cautious exploration of potentially dangerous environment requires VH functional integrity, suggesting a specific role for this region in modulating anxiety-related behaviors. With regard to the DH, it may be preferentially involved in learning and memory since the acquired response of inhibitory avoidance was no longer observed when lidocaine was injected pre-Trial 2.",
"title": ""
},
{
"docid": "4ec266df91a40330b704c4e10eacb820",
"text": "Recently many cases of missing children between ages 14 and 17 years are reported. Parents always worry about the possibility of kidnapping of their children. This paper proposes an Android based solution to aid parents to track their children in real time. Nowadays, most mobile phones are equipped with location services capabilities allowing us to get the device’s geographic position in real time. The proposed solution takes the advantage of the location services provided by mobile phone since most of kids carry mobile phones. The mobile application use the GPS and SMS services found in Android mobile phones. It allows the parent to get their child’s location on a real time map. The system consists of two sides, child side and parent side. A parent’s device main duty is to send a request location SMS to the child’s device to get the location of the child. On the other hand, the child’s device main responsibility is to reply the GPS position to the parent’s device upon request. Keywords—Child Tracking System, Global Positioning System (GPS), SMS-based Mobile Application.",
"title": ""
},
{
"docid": "065b0af0f1ed195ac90fa3ad041fa4c4",
"text": "We present CapWidgets, passive tangible controls for capacitive touch screens. CapWidgets bring back physical controls to off-the-shelf multi-touch surfaces as found in mobile phones and tablet computers. While the user touches the widget, the surface detects the capacitive marker on the widget's underside. We study the relative performance of this tangible interaction with direct multi-touch interaction and our experimental results show that user performance and preferences are not automatically in favor of tangible widgets and careful design is necessary to validate their properties.",
"title": ""
},
{
"docid": "16d417e6d2c75edbdf2adbed8ec8d072",
"text": "Network middleboxes are difficult to manage and troubleshoot, due to their proprietary monolithic design. Moving towards Network Functions Virtualization (NFV), virtualized middlebox appliances can be more flexibly instantiated and dynamically chained, making troubleshooting even more difficult. To guarantee carrier-grade availability and minimize outages, operators need ways to automatically verify that the deployed network and middlebox configurations obey higher level network policies. In this paper, we first define and identify the key challenges for checking the correct forwarding behavior of Service Function Chains (SFC). We then design and develop a network diagnosis framework that aids network administrators in verifying the correctness of SFC policy enforcement. Our prototype - SFC-Checker can verify stateful service chains efficiently, by analyzing the switches' forwarding rules and the middleboxes' stateful forwarding behavior. Built on top of the network function models we proposed, we develop a diagnosis algorithm that is able to check the stateful forwarding behavior of a chain of network service functions.",
"title": ""
},
{
"docid": "1a2fe54f7456c5e726f87a401a4628f3",
"text": "Starting from a neurobiological standpoint, I will propose that our capacity to understand others as intentional agents, far from being exclusively dependent upon mentalistic/linguistic abilities, be deeply grounded in the relational nature of our interactions with the world. According to this hypothesis, an implicit, prereflexive form of understanding of other individuals is based on the strong sense of identity binding us to them. We share with our conspecifics a multiplicity of states that include actions, sensations and emotions. A new conceptual tool able to capture the richness of the experiences we share with others will be introduced: the shared manifold of intersubjectivity. I will posit that it is through this shared manifold that it is possible for us to recognize other human beings as similar to us. It is just because of this shared manifold that intersubjective communication and ascription of intentionality become possible. It will be argued that the same neural structures that are involved in processing and controlling executed actions, felt sensations and emotions are also active when the same actions, sensations and emotions are to be detected in others. It therefore appears that a whole range of different \"mirror matching mechanisms\" may be present in our brain. This matching mechanism, constituted by mirror neurons originally discovered and described in the domain of action, could well be a basic organizational feature of our brain, enabling our rich and diversified intersubjective experiences. This perspective is in a position to offer a global approach to the understanding of the vulnerability to major psychoses such as schizophrenia.",
"title": ""
},
{
"docid": "7c9d35fb9cec2affbe451aed78541cef",
"text": "Dental caries, also known as dental cavities, is the most widespread pathology in the world. Up to a very recent period, almost all individuals had the experience of this pathology at least once in their life. Early detection of dental caries can help in a sharp decrease in the dental disease rate. Thanks to the growing accessibility to medical imaging, the clinical applications now have better impact on patient care. Recently, there has been interest in the application of machine learning strategies for classification and analysis of image data. In this paper, we propose a new method to detect and identify dental caries using X-ray images as dataset and deep neural network as technique. This technique is based on stacked sparse auto-encoder and a softmax classifier. Those techniques, sparse auto-encoder and softmax, are used to train a deep neural network. The novelty here is to apply deep neural network to diagnosis of dental caries. This approach was tested on a real dataset and has demonstrated a good performance of detection. Keywords-dental X-ray; classification; Deep Neural Networks; Stacked sparse auto-encoder; Softmax.",
"title": ""
},
{
"docid": "e98aefff2ab776efcc13c1d9534ec9fb",
"text": "Many software providers operate crash reporting services to automatically collect crashes from millions of customers and file bug reports. Precisely triaging crashes is necessary and important for software providers because the millions of crashes that may be reported every day are critical in identifying high impact bugs. However, the triaging accuracy of existing systems is limited, as they rely only on the syntactic information of the stack trace at the moment of a crash without analyzing program semantics.\n In this paper, we present RETracer, the first system to triage software crashes based on program semantics reconstructed from memory dumps. RETracer was designed to meet the requirements of large-scale crash reporting services. RETracer performs binary-level backward taint analysis without a recorded execution trace to understand how functions on the stack contribute to the crash. The main challenge is that the machine state at an earlier time cannot be recovered completely from a memory dump, since most instructions are information destroying.\n We have implemented RETracer for x86 and x86-64 native code, and compared it with the existing crash triaging tool used by Microsoft. We found that RETracer eliminates two thirds of triage errors based on a manual analysis of 140 bugs fixed in Microsoft Windows and Office. RETracer has been deployed as the main crash triaging system on Microsoft's crash reporting service.",
"title": ""
},
{
"docid": "ed3a859e2cea465a6d34c556fec860d9",
"text": "Multi-word expressions constitute a significant portion of the lexicon of every natural language, and handling them correctly is mandatory for various NLP applications. Yet such entities are notoriously hard to define, and are consequently missing from standard lexicons and dictionaries. Multi-word expressions exhibit idiosyncratic behavior on various levels: orthographic, morphological, syntactic and semantic. In this work we take advantage of the morphological and syntactic idiosyncrasy of Hebrew noun compounds and employ it to extract such expressions from text corpora. We show that relying on linguistic information dramatically improves the accuracy of compound extraction, reducing over one third of the errors compared with the best baseline.",
"title": ""
},
{
"docid": "c80dbfc2e1f676a7ffe4a6a4f7460d36",
"text": "Coarse-grained semantic categories such as supersenses have proven useful for a range of downstream tasks such as question answering or machine translation. To date, no effort has been put into integrating the supersenses into distributional word representations. We present a novel joint embedding model of words and supersenses, providing insights into the relationship between words and supersenses in the same vector space. Using these embeddings in a deep neural network model, we demonstrate that the supersense enrichment leads to a significant improvement in a range of downstream classification tasks.",
"title": ""
},
{
"docid": "e1e1005788a0133025f9f3951b9a5372",
"text": "Despite the recent success of neural networks in tasks involving natural language understanding (NLU) there has only been limited progress in some of the fundamental challenges of NLU, such as the disambiguation of the meaning and function of words in context. This work approaches this problem by incorporating contextual information into word representations prior to processing the task at hand. To this end we propose a general-purpose reading architecture that is employed prior to a task-specific NLU model. It is responsible for refining context-agnostic word representations with contextual information and lends itself to the introduction of additional, context-relevant information from external knowledge sources. We demonstrate that previously non-competitive models benefit dramatically from employing contextual representations, closing the gap between general-purpose reading architectures and the state-of-the-art performance obtained with fine-tuned, task-specific architectures. Apart from our empirical results we present a comprehensive analysis of the computed representations which gives insights into the kind of information added during the refinement process.",
"title": ""
},
{
"docid": "a3cb839b4299a50c475b2bb1b608ee91",
"text": "In this work, we present an event detection method in Twitter based on clustering of hashtags and introduce an enhancement technique by using the semantic similarities between the hashtags. To this aim, we devised two methods for tweet vector generation and evaluated their effect on clustering and event detection performance in comparison to word-based vector generation methods. By analyzing the contexts of hashtags and their co-occurrence statistics with other words, we identify their paradigmatic relationships and similarities. We make use of this information while applying a lexico-semantic expansion on tweet contents before clustering the tweets based on their similarities. Our aim is to tolerate spelling errors and capture statements which actually refer to the same concepts. We evaluate our enhancement solution on a three-day dataset of tweets with Turkish content. In our evaluations, we observe clearer clusters, improvements in accuracy, and earlier event detection times.",
"title": ""
},
{
"docid": "de2527840267fbc3bf5412498323933b",
"text": "In time series classification, signals are typically mapped into some intermediate representation which is used to construct models. We introduce the joint time-frequency scattering transform, a locally time-shift invariant representation which characterizes the multiscale energy distribution of a signal in time and frequency. It is computed through wavelet convolutions and modulus non-linearities and may therefore be implemented as a deep convolutional neural network whose filters are not learned but calculated from wavelets. We consider the progression from mel-spectrograms to time scattering and joint time-frequency scattering transforms, illustrating the relationship between increased discriminability and refinements of convolutional network architectures. The suitability of the joint time-frequency scattering transform for characterizing time series is demonstrated through applications to chirp signals and audio synthesis experiments. The proposed transform also obtains state-of-the-art results on several audio classification tasks, outperforming time scattering transforms and achieving accuracies comparable to those of fully learned networks.",
"title": ""
}
] |
scidocsrr
|
693a7765d3d98364d7d8eb154de2f31d
|
Towards a Unified Natural Language Inference Framework to Evaluate Sentence Representations
|
[
{
"docid": "e925f5fa3f6a2bdcce3712e2f8e79fe3",
"text": "Events are communicated in natural language with varying degrees of certainty. For example, if you are “hoping for a raise,” it may be somewhat less likely than if you are “expecting” one. To study these distinctions, we present scalable, highquality annotation schemes for event detection and fine-grained factuality assessment. We find that non-experts, with very little training, can reliably provide judgments about what events are mentioned and the extent to which the author thinks they actually happened. We also show how such data enables the development of regression models for fine-grained scalar factuality predictions that outperform strong baselines.",
"title": ""
}
] |
[
{
"docid": "61a2b0e51b27f46124a8042d59c0f022",
"text": "We address the highly challenging problem of real-time 3D hand tracking based on a monocular RGB-only sequence. Our tracking method combines a convolutional neural network with a kinematic 3D hand model, such that it generalizes well to unseen data, is robust to occlusions and varying camera viewpoints, and leads to anatomically plausible as well as temporally smooth hand motions. For training our CNN we propose a novel approach for the synthetic generation of training data that is based on a geometrically consistent image-to-image translation network. To be more specific, we use a neural network that translates synthetic images to \"real\" images, such that the so-generated images follow the same statistical distribution as real-world hand images. For training this translation network we combine an adversarial loss and a cycle-consistency loss with a geometric consistency loss in order to preserve geometric properties (such as hand pose) during translation. We demonstrate that our hand tracking system outperforms the current state-of-the-art on challenging RGB-only footage.",
"title": ""
},
{
"docid": "a922051835f239db76be1dbb8edead3e",
"text": "Among the simplest and most intuitively appealing classes of nonprobabilistic classification procedures are those that weight the evidence of nearby sample observations most heavily. More specifically, one might wish to weight the evidence of a neighbor close to an unclassified observation more heavily than the evidence of another neighbor which is at a greater distance from the unclassified observation. One such classification rule is described which makes use of a neighbor weighting function for the purpose of assigning a class to an unclassified sample. The admissibility of such a rule is also considered.",
"title": ""
},
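The record above describes a distance-weighted nearest-neighbor classification rule. As an illustration only (the exact weighting function used in the paper may differ), a minimal inverse-distance-weighted k-NN classifier in Python could look like this:

```python
import numpy as np
from collections import defaultdict

def weighted_knn_predict(X_train, y_train, x, k=5, eps=1e-12):
    # Distance-weighted k-NN: each of the k nearest neighbors votes for its
    # class with weight 1/distance, so closer samples count more heavily.
    dists = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(dists)[:k]
    votes = defaultdict(float)
    for i in nearest:
        votes[y_train[i]] += 1.0 / (dists[i] + eps)
    return max(votes, key=votes.get)

# Toy usage with two classes.
X = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.1], [0.9, 1.0]])
y = np.array([0, 0, 1, 1])
print(weighted_knn_predict(X, y, np.array([0.95, 1.05]), k=3))  # expected: 1
```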
{
"docid": "579536fe3f52f4ed244f06210a9c2cd1",
"text": "OBJECTIVE\nThis review integrates recent advances in attachment theory, affective neuroscience, developmental stress research, and infant psychiatry in order to delineate the developmental precursors of posttraumatic stress disorder.\n\n\nMETHOD\nExisting attachment, stress physiology, trauma, and neuroscience literatures were collected using Index Medicus/Medline and Psychological Abstracts. This converging interdisciplinary data was used as a theoretical base for modelling the effects of early relational trauma on the developing central and autonomic nervous system activities that drive attachment functions.\n\n\nRESULTS\nCurrent trends that integrate neuropsychiatry, infant psychiatry, and clinical psychiatry are generating more powerful models of the early genesis of a predisposition to psychiatric disorders, including PTSD. Data are presented which suggest that traumatic attachments, expressed in episodes of hyperarousal and dissociation, are imprinted into the developing limbic and autonomic nervous systems of the early maturing right brain. These enduring structural changes lead to the inefficient stress coping mechanisms that lie at the core of infant, child, and adult posttraumatic stress disorders.\n\n\nCONCLUSIONS\nDisorganised-disoriented insecure attachment, a pattern common in infants abused in the first 2 years of life, is psychologically manifest as an inability to generate a coherent strategy for coping with relational stress. Early abuse negatively impacts the developmental trajectory of the right brain, dominant for attachment, affect regulation, and stress modulation, thereby setting a template for the coping deficits of both mind and body that characterise PTSD symptomatology. These data suggest that early intervention programs can significantly alter the intergenerational transmission of posttraumatic stress disorders.",
"title": ""
},
{
"docid": "553ec50cb948fb96d96b5481ada71399",
"text": "Enormous amount of online information, available in legal domain, has made legal text processing an important area of research. In this paper, we attempt to survey different text summarization techniques that have taken place in the recent past. We put special emphasis on the issue of legal text summarization, as it is one of the most important areas in legal domain. We start with general introduction to text summarization, briefly touch the recent advances in single and multi-document summarization, and then delve into extraction based legal text summarization. We discuss different datasets and metrics used in summarization and compare performances of different approaches, first in general and then focused to legal text. we also mention highlights of different summarization techniques. We briefly cover a few software tools used in legal text summarization. We finally conclude with some future research directions.",
"title": ""
},
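The survey above concerns extraction-based summarization. As a hedged, generic illustration (not any specific system from the survey), a naive frequency-based extractive summarizer can be sketched in a few lines of Python:

```python
import re
from collections import Counter

def extractive_summary(text, n_sentences=2):
    # Naive frequency-based extractive summarizer: score each sentence by the
    # summed corpus frequency of its words, then keep the top-n in original order.
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    freq = Counter(re.findall(r'[a-z]+', text.lower()))
    def score(s):
        return sum(freq[w] for w in re.findall(r'[a-z]+', s.lower()))
    ranked = sorted(range(len(sentences)), key=lambda i: -score(sentences[i]))
    keep = sorted(ranked[:n_sentences])
    return ' '.join(sentences[i] for i in keep)

doc = ("The court held that the contract was void. The claimant appealed. "
       "The appeal court affirmed that the contract was void for uncertainty.")
print(extractive_summary(doc, n_sentences=1))
```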
{
"docid": "ec1228f8ddf271e8ec5e7018e45b0e77",
"text": "The present work is focused on the systematization of a process of knowledge acquisition for its use in intelligent management systems. The result was the construction of a computational structure for use inside the institutions (Intranet) as well as out of them (Internet). This structure was called Knowledge Engineering Suite an ontological engineering tool to support the construction of antologies in a collaborative setting and was based on observations made at Semantic Web, UNL (Universal Networking Language) and WorldNet. We use a knowledge representation technique called DCKR to organize knowledge and psychoanalytic studies, focused mainly on Lacan and his language theory to develop a methodology called Engineering of Minds to improve the synchronicity between knowledge engineers and specialist in a particular knowledge domain.",
"title": ""
},
{
"docid": "82af5212b43e8dfe6d54582de621d96c",
"text": "The use of multiple radar configurations can overcome some of the geometrical limitations that exist when obtaining radar images of a target using inverse synthetic aperture radar (ISAR) techniques. It is shown here how a particular bistatic configuration can produce three view angles and three ISAR images simultaneously. A new ISAR signal model is proposed and the applicability of employing existing monostatic ISAR techniques to bistatic configurations is analytically demonstrated. An analysis of the distortion introduced by the bistatic geometry to the ISAR image point spread function (PSF) is then carried out and the limits of the applicability of ISAR techniques (without the introduction of additional signal processing) are found and discussed. Simulations and proof of concept experimental data are also provided that support the theory.",
"title": ""
},
{
"docid": "70789bc929ef7d36f9bb4a02793f38f5",
"text": "Lock managers are among the most studied components in concurrency control and transactional systems. However, one question seems to have been generally overlooked: “When there are multiple lock requests on the same object, which one(s) should be granted first?” Nearly all existing systems rely on a FIFO (first in, first out) strategy to decide which transaction(s) to grant the lock to. In this paper, however, we show that the lock scheduling choices have significant ramifications on the overall performance of a transactional system. Despite the large body of research on job scheduling outside the database context, lock scheduling presents subtle but challenging requirements that render existing results on scheduling inapt for a transactional database. By carefully studying this problem, we present the concept of contention-aware scheduling, show the hardness of the problem, and propose novel lock scheduling algorithms (LDSF and bLDSF), which guarantee a constant factor approximation of the best scheduling. We conduct extensive experiments using a popular database on both TPC-C and a microbenchmark. Compared to FIFO— the default scheduler in most database systems—our bLDSF algorithm yields up to 300x speedup in overall transaction latency. Alternatively, our LDSF algorithm, which is simpler and achieves comparable performance to bLDSF, has already been adopted by open-source community, and was chosen as the default scheduling strategy in MySQL 8.0.3+. PVLDB Reference Format: Boyu Tian, Jiamin Huang, Barzan Mozafari, Grant Schoenebeck. Contention-Aware Lock Scheduling for Transactional Databases. PVLDB, 11 (5): xxxx-yyyy, 2018. DOI: 10.1145/3177732.3177740",
"title": ""
},
{
"docid": "1e69c1aef1b194a27d150e45607abd5a",
"text": "Methods of semantic relatedness are essential for wide range of tasks such as information retrieval and text mining. This paper, concerned with these automated methods, attempts to improve Gloss Vector semantic relatedness measure for more reliable estimation of relatedness between two input concepts. Generally, this measure by considering frequency cut-off for big rams tries to remove low and high frequency words which usually do not end up being significant features. However, this naive cutting approach can lead to loss of valuable information. By employing point wise mutual information (PMI) as a measure of association between features, we will try to enforce the foregoing elimination step in a statistical fashion. Applying both approaches to the biomedical domain, using MEDLINE as corpus, MeSH as thesaurus, and available reference standard of 311 concept pairs manually rated for semantic relatedness, we will show that PMI for removing insignificant features is more effective approach than frequency cut-off.",
"title": ""
},
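The abstract above proposes pointwise mutual information as a statistically grounded alternative to a raw bigram frequency cut-off. A minimal sketch of PMI scoring for adjacent word pairs (illustrative only; the paper's full Gloss Vector pipeline involves much more) is:

```python
import math
from collections import Counter

def pmi_scores(tokens):
    # Pointwise mutual information for adjacent word pairs:
    # PMI(x, y) = log2( p(x, y) / (p(x) * p(y)) ).
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    n_uni = sum(unigrams.values())
    n_bi = sum(bigrams.values())
    scores = {}
    for (x, y), c in bigrams.items():
        p_xy = c / n_bi
        p_x = unigrams[x] / n_uni
        p_y = unigrams[y] / n_uni
        scores[(x, y)] = math.log2(p_xy / (p_x * p_y))
    return scores

tokens = "the cell membrane surrounds the cell body".split()
for pair, s in sorted(pmi_scores(tokens).items(), key=lambda kv: -kv[1]):
    print(pair, round(s, 2))
```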
{
"docid": "68c1aa2e3d476f1f24064ed6f0f07fb7",
"text": "Granuloma annulare is a benign, asymptomatic, self-limited papular eruption found in patients of all ages. The primary skin lesion usually is grouped papules in an enlarging annular shape, with color ranging from flesh-colored to erythematous. The two most common types of granuloma annulare are localized, which typically is found on the lateral or dorsal surfaces of the hands and feet; and disseminated, which is widespread. Localized disease generally is self-limited and resolves within one to two years, whereas disseminated disease lasts longer. Because localized granuloma annulare is self-limited, no treatment other than reassurance may be necessary. There are no well-designed randomized controlled trials of the treatment of granuloma annulare. Treatment recommendations are based on the pathophysiology of the disease, expert opinion, and case reports only. Liquid nitrogen, injected steroids, or topical steroids under occlusion have been recommended for treatment of localized disease. Disseminated granuloma annulare may be treated with one of several systemic therapies such as dapsone, retinoids, niacinamide, antimalarials, psoralen plus ultraviolet A therapy, fumaric acid esters, tacrolimus, and pimecrolimus. Consultation with a dermatologist is recommended because of the possible toxicities of these agents.",
"title": ""
},
{
"docid": "c04171e96f62493fd75cdf379de1c2ab",
"text": "Alarming epidemiological features of Alzheimer's disease impose curative treatment rather than symptomatic relief. Drug repurposing, that is reappraisal of a substance's indications against other diseases, offers time, cost and efficiency benefits in drug development, especially when in silico techniques are used. In this study, we have used gene signatures, where up- and down-regulated gene lists summarize a cell's gene expression perturbation from a drug or disease. To cope with the inherent biological and computational noise, we used an integrative approach on five disease-related microarray data sets of hippocampal origin with three different methods of evaluating differential gene expression and four drug repurposing tools. We found a list of 27 potential anti-Alzheimer agents that were additionally processed with regard to molecular similarity, pathway/ontology enrichment and network analysis. Protein kinase C, histone deacetylase, glycogen synthase kinase 3 and arginase inhibitors appear consistently in the resultant drug list and may exert their pharmacologic action in an epidermal growth factor receptor-mediated subpathway of Alzheimer's disease.",
"title": ""
},
{
"docid": "f0a5d33084588ed4b7fc4905995f91e2",
"text": "A new microstrip dual-band polarization reconfigurable antenna is presented for wireless local area network (WLAN) systems operating at 2.4 and 5.8 GHz. The antenna consists of a square microstrip patch that is aperture coupled to a microstrip line located along the diagonal line of the patch. The dual-band operation is realized by employing the TM10 and TM30 modes of the patch antenna. Four shorting posts are inserted into the patch to adjust the frequency ratio of the two modes. The center of each edge of the patch is connected to ground via a PIN diode for polarization switching. By switching between the different states of PIN diodes, the proposed antenna can radiate either horizontal, vertical, or 45° linear polarization in the two frequency bands. Measured results on reflection coefficients and radiation patterns agree well with numerical simulations.",
"title": ""
},
{
"docid": "a4e92e4dc5d93aec4414bc650436c522",
"text": "Where you can find the compiling with continuations easily? Is it in the book store? On-line book store? are you sure? Keep in mind that you will find the book in this site. This book is very referred for you because it gives not only the experience but also lesson. The lessons are very valuable to serve for you, that's not about who are reading this compiling with continuations book. It is about this book that will give wellness for all people from many societies.",
"title": ""
},
{
"docid": "948e65673f679fe37027f4dc496397f8",
"text": "Online courses are growing at a tremendous rate, and although we have discovered a great deal about teaching and learning in the online environment, there is much left to learn. One variable that needs to be explored further is procrastination in online coursework. In this mixed methods study, quantitative methods were utilized to evaluate the influence of online graduate students’ attributions for academic outcomes to ability, effort, context, and luck on their tendency to procrastinate. Additionally, qualitative methods were utilized to explore students’ attributional beliefs about their tendency to procrastinate in their online coursework. Collectively, results indicated that ability, effort, context, and luck influenced procrastination in this sample of graduate students. A discussion of these findings, implications for instructors, and recommendations for future research ensues. Online course offerings and degree programs have recently increased at a rapid rate and have gained in popularity among students (Allen & Seaman, 2010, 2011). Garrett (2007) reported that half of prospective students surveyed about postsecondary programs expressed a preference for online and hybrid programs, typically because of the flexibility and convenience (Daymont, Blau, & Campbell, 2011). Advances in learning management systems such as Blackboard have facilitated the dramatic increase in asynchronous programs. Although the research literature concerning online learning has blossomed over the past decade, much is left to learn about important variables that impact student learning and achievement. The purpose of this mixed methods study was to better understand the relationship between online graduate students’ attributional beliefs and their tendency to procrastinate. The approach to this objective was twofold. First, quantitative methods were utilized to evaluate the influence of students’ attributions for academic outcomes to ability, effort, context, and luck on their tendency to procrastinate. Second, qualitative methods were utilized to explore students’ attributional beliefs about their tendency to procrastinate in their online coursework. Journal of Interactive Online Learning Rakes, Dunn, and Rakes",
"title": ""
},
{
"docid": "9d1455f0c26812ae7bbf6d0cebd190c2",
"text": "This paper describes the design, construction, and performance analysis of an adjustable Scotch yoke mechanism mimicking the dorsoventral movement for dolphin-like robots. Since dolphins propel themselves by vertical oscillations following a sinusoidal path with alterable amplitudes, a two- motor-driven Scotch yoke mechanism is adopted as the main propulsor to generate sinusoidal oscillations, where leading screw mechanism and rack and pinion mechanism actuated by the minor motor are incorporated to independently change the length of the crank actuated by the major motor. Meanwhile, the output of the Scotch yoke, i.e., reciprocating motion, is converted into the up-and-down oscillation via rack and gear transmission. A motion control scheme based on the novel Scotch yoke is then formed and applied to achieve desired propulsion. Preliminary tests in a robotics context finally confirm the feasibility of the developed mechanism in mechanics and propulsion.",
"title": ""
},
{
"docid": "57233e0b2c7ef60cc505cd23492a2e03",
"text": "In nature, the eastern North American monarch population is known for its southward migration during the late summer/autumn from the northern USA and southern Canada to Mexico, covering thousands of miles. By simplifying and idealizing the migration of monarch butterflies, a new kind of nature-inspired metaheuristic algorithm, called monarch butterfly optimization (MBO), a first of its kind, is proposed in this paper. In MBO, all the monarch butterfly individuals are located in two distinct lands, viz. southern Canada and the northern USA (Land 1) and Mexico (Land 2). Accordingly, the positions of the monarch butterflies are updated in two ways. Firstly, the offsprings are generated (position updating) by migration operator, which can be adjusted by the migration ratio. It is followed by tuning the positions for other butterflies by means of butterfly adjusting operator. In order to keep the population unchanged and minimize fitness evaluations, the sum of the newly generated butterflies in these two ways remains equal to the original population. In order to demonstrate the superior performance of the MBO algorithm, a comparative study with five other metaheuristic algorithms through thirty-eight benchmark problems is carried out. The results clearly exhibit the capability of the MBO method toward finding the enhanced function values on most of the benchmark problems with respect to the other five algorithms. Note that the source codes of the proposed MBO algorithm are publicly available at GitHub ( https://github.com/ggw0122/Monarch-Butterfly-Optimization , C++/MATLAB) and MATLAB Central ( http://www.mathworks.com/matlabcentral/fileexchange/50828-monarch-butterfly-optimization , MATLAB).",
"title": ""
},
{
"docid": "f39d7a353e289e7aa13f060c93a81acd",
"text": "Functional magnetic resonance imaging was used to study brain regions implicated in retrieval of memories that are decades old. To probe autobiographical memory, family photographs were selected by confederates without the participant's involvement, thereby eliminating many of the variables that potentially confounded previous neuroimaging studies. We found that context-rich memories were associated with activity in lingual and precuneus gyri independently of their age. By contrast, retrosplenial cortex was more active for recent events regardless of memory vividness. Hippocampal activation was related to the richness of re-experiencing (vividness) rather than the age of the memory per se. Remote memories were associated with distributed activation along the rostrocaudal axis of the hippocampus whereas activation associated with recent memories was clustered in the anterior portion. This may explain why circumscribed lesions to the hippocampus disproportionately affect recent memories. These findings are incompatible with theories of long-term memory consolidation, and are more easily accommodated by multiple-trace theory, which posits that detailed memories are always dependent on the hippocampus.",
"title": ""
},
{
"docid": "a08e91040414d6bbec156a5ee90d854d",
"text": "MapReduce has emerged as an important paradigm for processing data in large data centers. MapReduce is a three phase algorithm comprising of Map, Shuffle and Reduce phases. Due to its widespread deployment, there have been several recent papers outlining practical schemes to improve the performance of MapReduce systems. All these efforts focus on one of the three phases to obtain performance improvement. In this paper, we consider the problem of jointly scheduling all three phases of the MapReduce process with a view of understanding the theoretical complexity of the joint scheduling and working towards practical heuristics for scheduling the tasks. We give guaranteed approximation algorithms and outline several heuristics to solve the joint scheduling problem.",
"title": ""
},
{
"docid": "dad7dbbb31f0d9d6268bfdc8303d1c9c",
"text": "This letter proposes a reconfigurable microstrip patch antenna with polarization states being switched among linear polarization (LP), left-hand (LH) and right-hand (RH) circular polarizations (CP). The CP waves are excited by two perturbation elements of loop slots in the ground plane. A p-i-n diode is placed on every slot to alter the current direction, which determines the polarization state. The influences of the slots and p-i-n diodes on antenna performance are minimized because the slots and diodes are not on the patch. The simulated and measured results verified the effectiveness of the proposed antenna configuration. The experimental bandwidths of the -10-dB reflection coefficient for LHCP and RHCP are about 60 MHz, while for LP is about 30 MHz. The bandwidths of the 3-dB axial ratio for both CP states are 20 MHz with best value of 0.5 dB at the center frequency on the broadside direction. Gains for two CP operations are 6.4 dB, and that for the LP one is 5.83 dB. This reconfigurable patch antenna with agile polarization has good performance and concise structure, which can be used for 2.4 GHz wireless communication systems.",
"title": ""
},
{
"docid": "4b4ff17023cf54fe552697ef83c83926",
"text": "Artificial intelligence has been an active branch of research for computer scientists and psychologists for 50 years. The concept of mimicking human intelligence in a computer fuels the public imagination and has led to countless academic papers, news articles and fictional works. However, public expectations remain largely unfulfilled, owing to the incredible complexity of everyday human behavior. A wide range of tools and techniques have emerged from the field of artificial intelligence, many of which are reviewed here. They include rules, frames, model-based reasoning, case-based reasoning, Bayesian updating, fuzzy logic, multiagent systems, swarm intelligence, genetic algorithms, neural networks, and hybrids such as blackboard systems. These are all ingenious, practical, and useful in various contexts. Some approaches are pre-specified and structured, while others specify only low-level behavior, leaving the intelligence to emerge through complex interactions. Some approaches are based on the use of knowledge expressed in words and symbols, whereas others use only mathematical and numerical constructions. It is proposed that there exists a spectrum of intelligent behaviors from low-level reactive systems through to high-level systems that encapsulate specialist expertise. Separate branches of research have made strides at both ends of the spectrum, but difficulties remain in devising a system that spans the full spectrum of intelligent behavior, including the difficult areas in the middle that include common sense and perception. Artificial intelligence is increasingly appearing in situated systems that interact with their physical environment. As these systems become more compact they are likely to become embedded into everyday equipment. As the 50th anniversary approaches of the Dartmouth conference where the term ‘artificial intelligence’ was first published, it is concluded that the field is in good shape and has delivered some great results. Yet human thought processes are incredibly complex, and mimicking them convincingly remains an elusive challenge. ADVANCES IN COMPUTERS, VOL. 65 1 Copyright © 2005 Elsevier Inc. ISSN: 0065-2458/DOI 10.1016/S0065-2458(05)65001-2 All rights reserved.",
"title": ""
}
] |
scidocsrr
|
29a0a7020da68a5c1dc9988ca8da05f8
|
A Review of Experiences with Reliable Multicast
|
[
{
"docid": "383cfad43187d0cca06b4211548e4f5c",
"text": "Research can rarely be performed on large-scale, distributed systems at the level of thousands of workstations. In this paper, we describe the motivating constraints, design principles, and architecture for an extensible, distributed system operating in such an environment. The constraints include continuous operation, dynamic system evolution, and integration with extant systems. The Information Bus, our solution, is a novel synthesis of four design principles: core communication protocols have minimal semantics, objects are self-describing, types can be dynamically defined, and communication is anonymous. The current implementation provides both flexibility and high performance, and has been proven in several commercial environments, including integrated circuit fabrication plants and brokerage/trading floors.",
"title": ""
}
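The Information Bus described above relies on anonymous, subject-based communication between self-describing objects. A toy sketch of that communication pattern (not the actual system's implementation) in Python:

```python
from collections import defaultdict

class InformationBus:
    # Toy subject-based bus: publishers and subscribers never address each
    # other directly (anonymous communication), only named subjects.
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, subject, callback):
        self.subscribers[subject].append(callback)

    def publish(self, subject, message):
        for cb in self.subscribers[subject]:
            cb(message)

bus = InformationBus()
bus.subscribe("equities.IBM", lambda m: print("trader saw:", m))
bus.publish("equities.IBM", {"price": 101.25, "type": "Quote"})  # self-describing payload
```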
] |
[
{
"docid": "4f59e141ffc88aaed620ca58522e8f03",
"text": "Undergraduate volunteers rated a series of words for pleasantness while hearing a particular background music. The subjects in Experiment 1 received, immediately or after a 48-h delay, an unexpected word-recall test in one of the following musical cue contexts: same cue (S), different cue (D), or no cue (N). For immediate recall, context dependency (S-D) was significant but same-cue facilitation (S-N) was not. No cue effects at all were found for delayed recall, and there was a significant interaction between cue and retention interval. A similar interaction was also found in Experiment 3, which was designed to rule out an alternative explanation with respect to distraction. When the different musical selection was changed specifically in either tempo or form (genre), only pieces having an altered tempo produced significantly lower immediate recall compared with the same pieces (Experiment 2). The results support a stimulus generalization view of music-dependent memory.",
"title": ""
},
{
"docid": "dfe502f728d76f9b4294f725eca78413",
"text": "SUMMARY This paper reports work being carried out under the AMODEUS project (BRA 3066). The goal of the project is to develop interdisciplinary approaches to studying human-computer interaction and to move towards applying the results to the practicalities of design. This paper describes one of the approaches the project is taking to represent design-Design Space Analysis. One of its goals is help us bridge from relatively theoretical concerns to the practicalities of design. Design Space Analysis is a central component of a framework for representing the design rationale for designed artifacts. Our current work focusses more specifically on the design of user interfaces. A Design Space Analysis is represented using the QOC notation, which consists of Questions identifying key design issues, Options providing possible answers to the Questions, and Criteria for assessing and comparing the Options. In this paper we give an overview of our approach, some examples of the research issues we are currently tackling and an illustration of its role in helping to integrate the work of some of our project partners with design considerations.",
"title": ""
},
{
"docid": "1384f95f0f66e64af28e91f8c99a12e8",
"text": "Nature-inspired computing has been a hot topic in scientific and engineering fields in recent years. Inspired by the shallow water wave theory, the paper presents a novel metaheuristic method, named water wave optimization (WWO), for global optimization problems. We show how the beautiful phenomena of water waves, such as propagation, refraction, and breaking, can be used to derive effective mechanisms for searching in a high-dimensional solution space. In general, the algorithmic framework of WWO is simple, and easy to implement with a small-size population and only a few control parameters. We have tested WWO on a diverse set of benchmark problems, and applied WWO to a real-world high-speed train scheduling problem in China. The computational results demonstrate that WWO is very competitive with state-of-the-art evolutionary algorithms including invasive weed optimization (IWO), biogeography-based optimization (BBO), bat algorithm (BA), etc. The new metaheuristic is expected to have wide applications in real-world engineering optimization problems. & 2014 Elsevier Ltd. Published by Elsevier Ltd. This is an open access article under the CC BY-NC-SA license (http://creativecommons.org/licenses/by-nc-sa/3.0/).",
"title": ""
},
{
"docid": "ba72cbe165b4dc5855498f4dc5c0eb71",
"text": "Meta-heuristic algorithms prove to be competent in outperforming deterministic algorithms for real-world optimization problems. Firefly algorithm is one such recently developed algorithm inspired by the flashing behavior of fireflies. In this work, a detailed formulation and explanation of the Firefly algorithm implementation is given. Later Firefly algorithm is verified using six unimodal engineering optimization problems reported in the specialized literature.",
"title": ""
},
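For readers unfamiliar with the firefly algorithm mentioned above, here is a generic, simplified Python sketch of its core attraction-and-move loop; parameter values and search bounds are illustrative and not taken from the paper:

```python
import numpy as np

def firefly_minimize(f, dim, n=20, iters=200, alpha=0.2, beta0=1.0, gamma=1.0, seed=0):
    # Generic firefly algorithm: brighter (lower-cost) fireflies attract dimmer ones.
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5, 5, size=(n, dim))
    cost = np.array([f(x) for x in X])
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if cost[j] < cost[i]:  # j is brighter, so i moves towards j
                    r2 = np.sum((X[i] - X[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)          # attraction decays with distance
                    X[i] += beta * (X[j] - X[i]) + alpha * (rng.random(dim) - 0.5)
                    cost[i] = f(X[i])
    best = np.argmin(cost)
    return X[best], cost[best]

x_best, f_best = firefly_minimize(lambda x: np.sum(x ** 2), dim=2)
print(x_best, f_best)
```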
{
"docid": "1eee6741c5f303763a45fccf2aebe776",
"text": "This report summarizes a variety of the most useful and commonly applied methods for obtaining Dempster-Shafer structures, and their mathematical kin probability boxes, from empirical information or theoretical knowledge. The report includes a review of the aggregation methods for handling agreement and conflict when multiple such objects are obtained from different sources. * The work described in this report was performed for Sandia National Laboratories under Contract No. 19094",
"title": ""
},
{
"docid": "90acc4ae44da11db8fbcae5cfa70bf10",
"text": "Capsules as well as dynamic routing between them are most recently proposed structures for deep neural networks. A capsule groups data into vectors or matrices as poses rather than conventional scalars to represent specific properties of target instance. Besides of pose, a capsule should be attached with a probability (often denoted as activation) for its presence. The dynamic routing helps capsules achieve more generalization capacity with many fewer model parameters. However, the bottleneck that prevents widespread applications of capsule is the expense of computation during routing. To address this problem, we generalize existing routing methods within the framework of weighted kernel density estimation, and propose two fast routing methods with different optimization strategies. Our methods prompt the time efficiency of routing by nearly 40% with negligible performance degradation. By stacking a hybrid of convolutional layers and capsule layers, we construct a network architecture to handle inputs at a resolution of 64× 64 pixels. The proposed models achieve a parallel performance with other leading methods in multiple benchmarks.",
"title": ""
},
{
"docid": "71404f5500b0e173c91ac1abdf5d1c88",
"text": "Understanding the navigational behavior of website visitors is a significant factor of success in the emerging business models of electronic commerce and even mobile commerce. In this paper, we describe the different approaches of mining web navigation pattern.",
"title": ""
},
{
"docid": "15f2aca611a24b4932e70b472a8ec7e3",
"text": "Hashing is critical for high performance computer architecture. Hashing is used extensively in hardware applications, such as page tables, for address translation. Bit extraction and exclusive ORing hashing “methods” are two commonly used hashing functions for hardware applications. There is no study of the performance of these functions and no mention anywhere of the practical performance of the hashing functions in comparison with the theoretical performance prediction of hashing schemes. In this paper, we show that, by choosing hashing functions at random from a particular class, called H3, of hashing functions, the analytical performance of hashing can be achieved in practice on real-life data. Our results about the expected worst case performance of hashing are of special significance, as they provide evidence for earlier theoretical predictions. Index Terms —Hashing in hardware, high performance computer architecture, page address translation, signature functions, high speed information storage and retrieval.",
"title": ""
},
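The H3 class referenced above is simple to model in software: draw one random mask per key bit and XOR together the masks of the set bits (in hardware this becomes a tree of XOR gates). A small illustrative sketch:

```python
import random

def make_h3(key_bits, hash_bits, seed=0):
    # A random member of the H3 class: one random hash_bits-wide mask per key
    # bit; the hash value is the XOR of the masks for the key's set bits.
    rng = random.Random(seed)
    masks = [rng.getrandbits(hash_bits) for _ in range(key_bits)]
    def h(x):
        out = 0
        for i in range(key_bits):
            if (x >> i) & 1:
                out ^= masks[i]
        return out
    return h

h = make_h3(key_bits=32, hash_bits=10, seed=42)
print(h(0xDEADBEEF), h(0xDEADBEF0))  # 10-bit bucket indices for two nearby keys
```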
{
"docid": "736b98a5b6a86db837362ab2c7086484",
"text": "This is an in-vitro pilot study which established the effect of radiofrequency radiation (RFR) from 2.4 GHz laptop antenna on human semen. Ten samples of the semen, collected from donors between the ages of 20 and 30 years were exposed when the source of the RFR was in active mode. Sequel to the exposure, both the exposed samples and another ten unexposed samples from same donors were analysed for sperm concentration, motility and morphology grading. A test of significance between results of these semen parameters using Mann-Whitney Utest at 0.05 level of significance showed a significant effect of RFR exposure on the semen parameters considered.",
"title": ""
},
{
"docid": "a520bf66f1b54a7444f2cbe3f2da8000",
"text": "In this work we study the problem of Intrusion Detection is sensor networks and we propose a lightweight scheme that can be applied to such networks. Its basic characteristic is that nodes monitor their neighborhood and collaborate with their nearest neighbors to bring the network back to its normal operational condition. We emphasize in a distributed approach in which, even though nodes don’t have a global view, they can still detect an intrusion and produce an alert. We apply our design principles for the blackhole and selective forwarding attacks by defining appropriate rules that characterize malicious behavior. We also experimentally evaluate our scheme to demonstrate its effectiveness in detecting the afore-mentioned attacks.",
"title": ""
},
{
"docid": "95306b34302c35b3c38fd5141e472896",
"text": "We used the machine learning technique of Li et al. (PRL 114, 2015) for molecular dynamics simulations. Atomic configurations were described by feature matrix based on internal vectors, and linear regression was used as a learning technique. We implemented this approach in the LAMMPS code. The method was applied to crystalline and liquid aluminum and uranium at different temperatures and densities, and showed the highest accuracy among different published potentials. Phonon density of states, entropy and melting temperature of aluminum were calculated using this machine learning potential. The results are in excellent agreement with experimental data and results of full ab initio calculations.",
"title": ""
},
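The abstract above states that atomic configurations are encoded as feature vectors and that energies are fit by linear regression. The following schematic sketch uses placeholder descriptors and synthetic energies; the actual feature construction from internal vectors is not reproduced here:

```python
import numpy as np

# Hypothetical data: each row of X is a descriptor vector for one atomic
# configuration (standing in for the paper's internal-vector features),
# and E holds the corresponding reference energies.
rng = np.random.default_rng(0)
X = rng.random((100, 30))                          # placeholder descriptors
true_w = rng.random(30)
E = X @ true_w + 0.01 * rng.standard_normal(100)   # synthetic energies

# Linear-regression potential: least-squares fit of weights w so that X @ w ~ E.
w, *_ = np.linalg.lstsq(X, E, rcond=None)
rmse = np.sqrt(np.mean((X @ w - E) ** 2))
print("train RMSE:", rmse)
```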
{
"docid": "3e28cbfc53f6c42bb0de2baf5c1544aa",
"text": "Cloud computing is an emerging paradigm which allows the on-demand delivering of software, hardware, and data as services. As cloud-based services are more numerous and dynamic, the development of efficient service provisioning policies become increasingly challenging. Game theoretic approaches have shown to gain a thorough analytical understanding of the service provisioning problem.\n In this paper we take the perspective of Software as a Service (SaaS) providers which host their applications at an Infrastructure as a Service (IaaS) provider. Each SaaS needs to comply with quality of service requirements, specified in Service Level Agreement (SLA) contracts with the end-users, which determine the revenues and penalties on the basis of the achieved performance level. SaaS providers want to maximize their revenues from SLAs, while minimizing the cost of use of resources supplied by the IaaS provider. Moreover, SaaS providers compete and bid for the use of infrastructural resources. On the other hand, the IaaS wants to maximize the revenues obtained providing virtualized resources. In this paper we model the service provisioning problem as a Generalized Nash game, and we propose an efficient algorithm for the run time management and allocation of IaaS resources to competing SaaSs.",
"title": ""
},
{
"docid": "13a06fb1a1bdf0df0043fe10f74443e1",
"text": "Coping with the extreme growth of the number of users is one of the main challenges for the future IEEE 802.11 networks. The high interference level, along with the conventional standardized carrier sensing approaches, will degrade the network performance. To tackle these challenges, the Dynamic Sensitivity Control (DSC) and the BSS Color scheme are considered in IEEE 802.11ax and IEEE 802.11ah, respectively. The main purpose of these schemes is to enhance the network throughput and improve the spectrum efficiency in dense networks. In this paper, we evaluate the DSC and the BSS Color scheme along with the PARTIAL-AID (PAID) feature introduced in IEEE 802.11ac, in terms of throughput and fairness. We also, exploit the performance when the aforementioned techniques are combined. The simulations show a significant gain in total throughput when these techniques are applied.",
"title": ""
},
{
"docid": "63429f5eebc2434660b0073b802127c2",
"text": "Body Area Networks are unique in that the large-scale mobility of users allows the network itself to travel across a diverse range of operating domains or even to enter new and unknown environments. This network mobility is unlike node mobility in that sensed changes in inter-network interference level may be used to identify opportunities for intelligent inter-networking, for example, by merging or splitting from other networks, thus providing an extra degree of freedom. This paper introduces the concept of context-aware bodynets for interactive environments using inter-network interference sensing. New ideas are explored at both the physical and link layers with an investigation based on a 'smart' office environment. A series of carefully controlled measurements of the mesh interconnectivity both within and between an ambulatory body area network and a stationary desk-based network were performed using 2.45 GHz nodes. Received signal strength and carrier to interference ratio time series for selected node to node links are presented. The results provide an insight into the potential interference between the mobile and static networks and highlight the possibility for automatic identification of network merging and splitting opportunities.",
"title": ""
},
{
"docid": "b34eb302108ffd515ed9fc896fa7015f",
"text": "Recent magnetoencephalography (MEG) and functional magnetic resonance imaging studies of human auditory cortex are pointing to brain areas on lateral Heschl's gyrus as the 'pitch-processing center'. Here we describe results of a combined MEG-psychophysical study designed to investigate the timing of the formation of the percept of pitch and the generality of the hypothesized 'pitch-center'. We compared the cortical and behavioral responses to Huggins pitch (HP), a stimulus requiring binaural processing to elicit a pitch percept, with responses to tones embedded in noise (TN)-perceptually similar but physically very different signals. The stimuli were crafted to separate the electrophysiological responses to onset of the pitch percept from the onset of the initial stimulus. Our results demonstrate that responses to monaural pitch stimuli are affected by cross-correlational processes in the binaural pathway. Additionally, we show that MEG illuminates processes not simply observable in behavior. Crucially, the MEG data show that, although physically disparate, both HP and TN are mapped onto similar representations by 150 ms post-onset, and provide critical new evidence that the 'pitch onset response' reflects central pitch mechanisms, in agreement with models postulating a single, central pitch extractor.",
"title": ""
},
{
"docid": "e4f31c3e7da3ad547db5fed522774f0e",
"text": "Surface reconstruction from oriented points can be cast as a spatial Poisson problem. This Poisson formulation considers all the points at once, without resorting to heuristic spatial partitioning or blending, and is therefore highly resilient to data noise. Unlike radial basis function schemes, the Poisson approach allows a hierarchy of locally supported basis functions, and therefore the solution reduces to a well conditioned sparse linear system. To reconstruct detailed models in limited memory, we solve this Poisson formulation efficiently using a streaming framework. Specifically, we introduce a multilevel streaming representation, which enables efficient traversal of a sparse octree by concurrently advancing through multiple streams, one per octree level. Remarkably, for our reconstruction application, a sufficiently accurate solution to the global linear system is obtained using a single iteration of cascadic multigrid, which can be evaluated within a single multi-stream pass. Finally, we explore the application of Poisson reconstruction to the setting of multi-view stereo, to reconstruct detailed 3D models of outdoor scenes from collections of Internet images.\n This is joint work with Michael Kazhdan, Matthew Bolitho, and Randal Burns (Johns Hopkins University), and Michael Goesele, Noah Snavely, Brian Curless, and Steve Seitz (University of Washington).",
"title": ""
},
{
"docid": "30aaf753d3ec72f07d4838de391524ca",
"text": "The present study was aimed to determine the effect on liver, associated oxidative stress, trace element and vitamin alteration in dogs with sarcoptic mange. A total of 24 dogs with clinically established diagnosis of sarcoptic mange, divided into two groups, severely infested group (n=9) and mild/moderately infested group (n=15), according to the extent of skin lesions caused by sarcoptic mange and 6 dogs as control group were included in the present study. In comparison to healthy control hemoglobin, PCV, and TEC were significantly (P<0.05) decreased in dogs with sarcoptic mange however, significant increase in TLC along with neutrophilia and lymphopenia was observed only in severely infested dogs. The albumin, glucose and cholesterol were significantly (P<0.05) decreased and globulin, ALT, AST and bilirubin were significantly (P<0.05) increased in severely infested dogs when compared to other two groups. Malondialdehyde (MDA) levels were significantly (P<0.01) higher in dogs with sarcoptic mange, with levels highest in severely infested groups. Activity of superoxide dismutase (SOD) (P<0.05) and catalase were significantly (P<0.01) lower in sarcoptic infested dogs when compared with the healthy control group. Zinc and copper levels in dogs with sarcoptic mange were significantly (P<0.05) lower when compared with healthy control group with the levels lowest in severely infested group. Vitamin A and vitamin C levels were significantly (P<0.05) lower in sarcoptic infested dogs when compared to healthy control. From the present study, it was concluded that sarcoptic mange in dogs affects the liver and the infestation is associated with oxidant/anti-oxidant imbalance, significant alteration in trace elements and vitamins.",
"title": ""
},
{
"docid": "41da3bc399664e62b4e07006893cdd50",
"text": "Cloud storage service is one of cloud services where cloud service provider can provide storage space to customers. Because cloud storage service has many advantages which include convenience, high computation and capacity, it attracts the user to outsource data in the cloud. However, the user outsources data directly in cloud storage service that is unsafe when outsourcing data is sensitive for the user. Therefore, ciphertext-policy attribute-based encryption is a promising cryptographic solution in cloud environment, which can be drawn up for access control by the data owner to define access policy. Unfortunately, an outsourced architecture applied with the attribute-based encryption introduces many challenges in which one of the challenges is revocation. The issue is a threat to data security in the data owner. In this paper, we survey related studies in cloud data storage with revocation and define their requirements. Then we explain and analyze four representative approaches. Finally, we provide some topics for future research",
"title": ""
},
{
"docid": "eba25ae59603328f3ef84c0994d46472",
"text": "We address the problem of how to personalize educational content to students in order to maximize their learning gains over time. We present a new computational approach to this problem called MAPLE (Multi-Armed Bandits based Personalization for Learning Environments) that combines difficulty ranking with multi-armed bandits. Given a set of target questions MAPLE estimates the expected learning gains for each question and uses an exploration-exploitation strategy to choose the next question to pose to the student. It maintains a personalized ranking over the difficulties of question in the target set and updates it in real-time according to students’ progress. We show in simulations that MAPLE was able to improve students’ learning gains compared to approaches that sequence questions in increasing level of difficulty, or rely on content experts. When implemented in a live e-learning system in the wild, MAPLE showed promising initial results.",
"title": ""
},
{
"docid": "dd9ff422ede7f5df297fa29fdef49db3",
"text": "Courts have articulated a number of legal tests to distinguish corporate transactions that have a legitimate business or economic purpose from those carried out largely, if not solely, for favorable tax treatment. We outline an approach to analyzing the economic substance of corporate transactions based on the property rights theory of the firm and describe its application in two recent tax cases.",
"title": ""
}
] |
scidocsrr
|
59c00b34ca8e8fa0b9345fff33a7d05d
|
3D modelling of leaves from color and ToF data for robotized plant measuring
|
[
{
"docid": "f9da4bfe6dba0a6ec886758b164cd10b",
"text": "Physically based deformable models have been widely embraced by the Computer Graphics community. Many problems outlined in a previous survey by Gibson and Mirtich [GM97] have been addressed, thereby making these models interesting and useful for both offline and real-time applications, such as motion pictures and video games. In this paper, we present the most significant contributions of the past decade, which produce such impressive and perceivably realistic animations and simulations: finite element/difference/volume methods, mass-spring systems, meshfree methods, coupled particle systems and reduced deformable models based on modal analysis. For completeness, we also make a connection to the simulation of other continua, such as fluids, gases and melting objects. Since time integration is inherent to all simulated phenomena, the general notion of time discretization is treated separately, while specifics are left to the respective models. Finally, we discuss areas of application, such as elastoplastic deformation and fracture, cloth and hair animation, virtual surgery simulation, interactive entertainment and fluid/smoke animation, and also suggest areas for future research.",
"title": ""
}
] |
[
{
"docid": "4d389e4f6e33d9f5498e3071bf116a49",
"text": "This paper reviews the origins and definitions of social capital in the writings of Bourdieu, Loury, and Coleman, among other authors. It distinguishes four sources of social capital and examines their dynamics. Applications of the concept in the sociological literature emphasize its role in social control, in family support, and in benefits mediated by extrafamilial networks. I provide examples of each of these positive functions. Negative consequences of the same processes also deserve attention for a balanced picture of the forces at play. I review four such consequences and illustrate them with relevant examples. Recent writings on social capital have extended the concept from an individual asset to a feature of communities and even nations. The final sections describe this conceptual stretch and examine its limitations. I argue that, as shorthand for the positive consequences of sociability, social capital has a definite place in sociological theory. However, excessive extensions of the concept may jeopardize its heuristic value. Alejandro Portes: Biographical Sketch Alejandro Portes is professor of sociology at Princeton University and faculty associate of the Woodrow Wilson School of Public Affairs. He formerly taught at Johns Hopkins where he held the John Dewey Chair in Arts and Sciences, Duke University, and the University of Texas-Austin. In 1997 he held the Emilio Bacardi distinguished professorship at the University of Miami. In the same year he was elected president of the American Sociological Association. Born in Havana, Cuba, he came to the United States in 1960. He was educated at the University of Havana, Catholic University of Argentina, and Creighton University. He received his MA and PhD from the University of Wisconsin-Madison. 0360-0572/98/0815-0001$08.00 1 A nn u. R ev . S oc io l. 19 98 .2 4: 124 . D ow nl oa de d fr om w w w .a nn ua lr ev ie w s. or g A cc es s pr ov id ed b y St an fo rd U ni ve rs ity M ai n C am pu s R ob er t C ro w n L aw L ib ra ry o n 03 /1 0/ 17 . F or p er so na l u se o nl y. Portes is the author of some 200 articles and chapters on national development, international migration, Latin American and Caribbean urbanization, and economic sociology. His most recent books include City on the Edge, the Transformation of Miami (winner of the Robert Park award for best book in urban sociology and of the Anthony Leeds award for best book in urban anthropology in 1995); The New Second Generation (Russell Sage Foundation 1996); Caribbean Cities (Johns Hopkins University Press); and Immigrant America, a Portrait. The latter book was designated as a centennial publication by the University of California Press. It was originally published in 1990; the second edition, updated and containing new chapters on American immigration policy and the new second generation, was published in 1996.",
"title": ""
},
{
"docid": "45d808ef2824bb57e4c1dd8d75960e63",
"text": "The use of game-based learning in the classroom has turned out to be a trend nowadays. Most game-based learning tools and platforms are based on a quiz concept where the students can score points if they can choose the correct answer among multiple answers. Based on our experience in Faculty of Electrical Engineering, Universiti Teknologi MARA, most undergraduate students have difficulty to appreciate the Computer Programming course thus demotivating them in learning any programming related courses. Game-based learning approach using a Game-based Classroom Response System (GCRS) tool known as Kahoot is used to address this issue. This paper presents students' perceptions on Kahoot activity that they experienced in the classroom. The study was carried out by distributing a survey form to 120 students. Based on the feedback, majority of students enjoyed the activity and able to attract their interest in computer programming.",
"title": ""
},
{
"docid": "88a8ea1de5ad5cb8883890c1e30b3491",
"text": "Service robots will have to accomplish more and more complex, open-ended tasks and regularly acquire new skills. In this work, we propose a new approach to the problem of generating plans for such household robots. Instead composing them from atomic actions — the common approach in robot planning — we propose to transform task descriptions on web sites like ehow.com into executable robot plans. We present methods for automatically converting the instructions from natural language into a formal, logic-based representation, for resolving the word senses using the WordNet database and the Cyc ontology, and for exporting the generated plans into the mobile robot's plan language RPL. We discuss the problem of inferring information that is missing in these descriptions and the problem of grounding the abstract task descriptions in the perception and action system, and we propose techniques for solving them. The whole system works autonomously without human interaction. It has successfully been tested with a set of about 150 natural language directives, of which up to 80% could be correctly transformed.",
"title": ""
},
{
"docid": "870674d3ab86ad52116e9f0dd4e9605c",
"text": "Due to the global need for oil production and distribution, surrounding ecosystems have been negatively affected by oil spill externalities in individual health and community diversity. Conventional land remediation techniques run the risk of leaving chemical residues, and interacting with metals in the soil. The objective of this study was to test worm compost tea, also known as vermitea, as a bioremediation method to replace current techniques used on oil contaminated soils. To test the conditions that contributed to the efficacy of the teas, I examined different teas that looked into the mode and length of pollutant exposure. I examined oil emulsification activity, presence of biosurfactant-producing bacteria colonies, microbial diversity and abundance, and applicability of the teas to artificially contaminated soils. Overall, I found that the long-term direct oil tea had a 7.42% significant increase in biosurfactant producing microbes in comparison to the control tea. However, the long-term crude soil vermitea was found to be the best type of pollutant degrading tea in terms of emulsifying activity and general applicability towards reducing oil concentrations in the soil. These results will help broaden the scientific understanding towards stimulated microbial degradation of pollution, and broaden the approaches that can be taken in restoring polluted ecosystems.",
"title": ""
},
{
"docid": "288f32db8af5789e6e6049fa4cec0334",
"text": "Trusted execution environments, and particularly the Software Guard eXtensions (SGX) included in recent Intel x86 processors, gained significant traction in recent years. A long track of research papers, and increasingly also realworld industry applications, take advantage of the strong hardware-enforced confidentiality and integrity guarantees provided by Intel SGX. Ultimately, enclaved execution holds the compelling potential of securely offloading sensitive computations to untrusted remote platforms. We present Foreshadow, a practical software-only microarchitectural attack that decisively dismantles the security objectives of current SGX implementations. Crucially, unlike previous SGX attacks, we do not make any assumptions on the victim enclave’s code and do not necessarily require kernel-level access. At its core, Foreshadow abuses a speculative execution bug in modern Intel processors, on top of which we develop a novel exploitation methodology to reliably leak plaintext enclave secrets from the CPU cache. We demonstrate our attacks by extracting full cryptographic keys from Intel’s vetted architectural enclaves, and validate their correctness by launching rogue production enclaves and forging arbitrary local and remote attestation responses. The extracted remote attestation keys affect millions of devices.",
"title": ""
},
{
"docid": "3f96a3cd2e3f795072567a3f3c8ccc46",
"text": "Good corporate reputations are critical because of their potential for value creation, but also because their intangible character makes replication by competing firms considerably more difficult. Existing empirical research confirms that there is a positive relationship between reputation and financial performance. This paper complements these findings by showing that firms with relatively good reputations are better able to sustain superior profit outcomes over time. In particular, we undertake an analysis of the relationship between corporate reputation and the dynamics of financial performance using two complementary dynamic models. We also decompose overall reputation into a component that is predicted by previous financial performance, and that which is ‘left over’, and find that each (orthogonal) element supports the persistence of above-average profits over time. Copyright 2002 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "a27b626618e225b03bec1eea8327be4d",
"text": "As a fundamental preprocessing of various multimedia applications, object proposal aims to detect the candidate windows possibly containing arbitrary objects in images with two typical strategies, window scoring and grouping. In this paper, we first analyze the feasibility of improving object proposal performance by integrating window scoring and grouping strategies. Then, we propose a novel object proposal method for RGB-D images, named elastic edge boxes. The initial bounding boxes of candidate object regions are efficiently generated by edge boxes, and further adjusted by grouping the super-pixels within elastic range to obtain more accurate candidate windows. To validate the proposed method, we construct the largest RGB-D image data set NJU1800 for object proposal with balanced object number distribution. The experimental results show that our method can effectively and efficiently generate the candidate windows of object regions and it outperforms the state-of-the-art methods considering both accuracy and efficiency.",
"title": ""
},
{
"docid": "63115b12e4a8192fdce26eb7e2f8989a",
"text": "Theorems and techniques to form different types of transformationally invariant processing and to produce the same output quantitatively based on either transformationally invariant operators or symmetric operations have recently been introduced by the authors. In this study, we further propose to compose a geared rotationally identical CNN system (GRI-CNN) with a small angle increment by connecting networks of participated processes at the first flatten layer. Using an ordinary CNN structure as a base, requirements for constructing a GRI-CNN include the use of either symmetric input vector or kernels with an angle increment that can form a complete cycle as a \"gearwheel\". Four basic GRI-CNN structures were studied. Each of them can produce quantitatively identical output results when a rotation angle of the input vector is evenly divisible by the increment angle of the gear. Our study showed when a rotated input vector does not match to a gear angle, the GRI-CNN can also produce a highly consistent result. With an ultrafine increment angle (e.g., 1 or 0.1), a virtually isotropic CNN system can be constructed.",
"title": ""
},
{
"docid": "5d377a17d3444d6137be582cbbc6c1db",
"text": "Next generation malware will by be characterized by the intense use of polymorphic and metamorphic techniques aimed at circumventing the current malware detectors, based on pattern matching. In order to deal with this new kind of threat novel techniques have to be devised for the realization of malware detectors. Recent papers started to address such issue and this paper represents a further contribution in such a field. More precisely in this paper we propose a strategy for the detection of malicious codes that adopt the most evolved self-mutation techniques; we also provide experimental data supporting the validity of",
"title": ""
},
{
"docid": "91fbf465741c6a033a00a4aa982630b4",
"text": "This paper presents an integrated functional link interval type-2 fuzzy neural system (FLIT2FNS) for predicting the stock market indices. The hybrid model uses a TSK (Takagi–Sugano–Kang) type fuzzy rule base that employs type-2 fuzzy sets in the antecedent parts and the outputs from the Functional Link Artificial Neural Network (FLANN) in the consequent parts. Two other approaches, namely the integrated FLANN and type-1 fuzzy logic system and Local Linear Wavelet Neural Network (LLWNN) are also presented for a comparative study. Backpropagation and particle swarm optimization (PSO) learning algorithms have been used independently to optimize the parameters of all the forecasting models. To test the model performance, three well known stock market indices like the Standard’s & Poor’s 500 (S&P 500), Bombay stock exchange (BSE), and Dow Jones industrial average (DJIA) are used. The mean absolute percentage error (MAPE) and root mean square error (RMSE) are used to find out the performance of all the three models. Finally, it is observed that out of three methods, FLIT2FNS performs the best irrespective of the time horizons spanning from 1 day to 1 month. © 2011 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "e4dd63dcf11cac8ebff7b53bae58a05a",
"text": "Earth orbiting satellites come in a wide range of shapes and sizes to meet a diverse variety of uses and applications. Large satellites with masses over 1000 kg support high-resolution remote sensing of the Earth, high bandwidth communications services, and world-class scientific studies but take lengthy developments and are costly to build and launch. The advent of commercially available, high-volume, and hence low-cost microelectronics has enabled a different approach through miniaturization. This results in physically far smaller satellites that dramatically reduce timescales and costs and that are able to provide operational and commercially viable services. This paper charts the evolution and rise of small satellites from being an early curiosity with limited utility through to the present where small satellites are a key element of modern space capabilities.",
"title": ""
},
{
"docid": "4247314290ffa50098775e2bbc41b002",
"text": "Heterogeneous integration enables the construction of silicon (Si) photonic systems, which are fully integrated with a range of passive and active elements including lasers and detectors. Numerous advancements in recent years have shown that heterogeneous Si platforms can be extended beyond near-infrared telecommunication wavelengths to the mid-infrared (MIR) (2–20 μm) regime. These wavelengths hold potential for an extensive range of sensing applications and the necessary components for fully integrated heterogeneous MIR Si photonic technologies have now been demonstrated. However, due to the broad wavelength range and the diverse assortment of MIR technologies, the optimal platform for each specific application is unclear. Here, we overview Si photonic waveguide platforms and lasers at the MIR, including quantum cascade lasers on Si. We also discuss progress toward building an integrated multispectral source, which can be constructed by wavelength beam combining the outputs from multiple lasers with arrayed waveguide gratings and duplexing adiabatic couplers.",
"title": ""
},
{
"docid": "c4b037a8818cd2c335cd88daa07f70c9",
"text": "This paper presents the findings of an outdoor thermal comfort study conducted in Hong Kong using longitudinal experiments--an alternative approach to conventional transverse surveys. In a longitudinal experiment, the thermal sensations of a relatively small number of subjects over different environmental conditions are followed and evaluated. This allows an exploration of the effects of changing climatic conditions on thermal sensation, and thus can provide information that is not possible to acquire through the conventional transverse survey. The paper addresses the effects of changing wind and solar radiation conditions on thermal sensation. It examines the use of predicted mean vote (PMV) in the outdoor context and illustrates the use of an alternative thermal index--physiological equivalent temperature (PET). The paper supports the conventional assumption that thermal neutrality corresponds to thermal comfort. Finally, predictive formulas for estimating outdoor thermal sensation are presented as functions of air temperature, wind speed, solar radiation intensity and absolute humidity. According to the formulas, for a person in light clothing sitting under shade on a typical summer day in Hong Kong where the air temperature is about 28°C and relative humidity about 80%, a wind speed of about 1.6 m/s is needed to achieve neutral thermal sensation.",
"title": ""
},
{
"docid": "55032007199b5126480d432b1c45db4a",
"text": "Concern about national security has increased after the 26/11 Mumbai attack. In this paper we look at the use of missing value and clustering algorithm for a data mining approach to help predict the crimes patterns and fast up the process of solving crime. We will concentrate on MV algorithm and Apriori algorithm with some enhancements to aid in the process of filling the missing value and identification of crime patterns. We applied these techniques to real crime data. We also use semisupervised learning technique in this paper for knowledge discovery from the crime records and to help increase the predictive accuracy. General Terms Crime data mining, MV Algorithm, Apriori Algorithm",
"title": ""
},
{
"docid": "fa471f49367e03e57e7739d253385eaf",
"text": "■ Abstract The literature on effects of habitat fragmentation on biodiversity is huge. It is also very diverse, with different authors measuring fragmentation in different ways and, as a consequence, drawing different conclusions regarding both the magnitude and direction of its effects. Habitat fragmentation is usually defined as a landscape-scale process involving both habitat loss and the breaking apart of habitat. Results of empirical studies of habitat fragmentation are often difficult to interpret because ( a) many researchers measure fragmentation at the patch scale, not the landscape scale and ( b) most researchers measure fragmentation in ways that do not distinguish between habitat loss and habitat fragmentation per se, i.e., the breaking apart of habitat after controlling for habitat loss. Empirical studies to date suggest that habitat loss has large, consistently negative effects on biodiversity. Habitat fragmentation per se has much weaker effects on biodiversity that are at least as likely to be positive as negative. Therefore, to correctly interpret the influence of habitat fragmentation on biodiversity, the effects of these two components of fragmentation must be measured independently. More studies of the independent effects of habitat loss and fragmentation per se are needed to determine the factors that lead to positive versus negative effects of fragmentation per se. I suggest that the term “fragmentation” should be reserved for the breaking apart of habitat, independent of habitat loss.",
"title": ""
},
{
"docid": "a87e49bd4a49f35099171b89d278c4d9",
"text": "Due to its versatility, copositive optimization receives increasing interest in the Operational Research community, and is a rapidly expanding and fertile field of research. It is a special case of conic optimization, which consists of minimizing a linear function over a cone subject to linear constraints. The diversity of copositive formulations in different domains of optimization is impressive, since problem classes both in the continuous and discrete world, as well as both deterministic and stochastic models are covered. Copositivity appears in local and global optimality conditions for quadratic optimization, but can also yield tighter bounds for NP-hard combinatorial optimization problems. Here some of the recent success stories are told, along with principles, algorithms and applications.",
"title": ""
},
{
"docid": "e077a3c57b1df490d418a2b06cf14b2c",
"text": "Inductive power transfer (IPT) is widely discussed for the automated opportunity charging of plug-in hybrid and electric public transport buses without moving mechanical components and reduced maintenance requirements. In this paper, the design of an on-board active rectifier and dc–dc converter for interfacing the receiver coil of a 50 kW/85 kHz IPT system is designed. Both conversion stages employ 1.2 kV SiC MOSFET devices for their low switching losses. For the dc–dc conversion, a modular, nonisolated buck+boost-type topology with coupled magnetic devices is used for increasing the power density. For the presented hardware prototype, a power density of 9.5 kW/dm3 (or 156 W/in3) is achieved, while the ac–dc efficiency from the IPT receiver coil to the vehicle battery is 98.6%. Comprehensive experimental results are presented throughout this paper to support the theoretical analysis.",
"title": ""
},
{
"docid": "cc5516333c3ed4773eec4dab874b31e9",
"text": "Communities, whose reliance on critical cyber infrastructures is growing, are threatened by a wide range of cyber events that can adversely affect these systems and networks. The development of computer security taxonomies to classify computer and network vulnerabilities and attacks has led to a greater insight into the causes, effects, mitigation, and remediation of cyber attacks. In developing these taxonomies researchers are better able to understand and address the many different attacks that can occur. No current taxonomy, however, has been developed that takes into account the community aspects of cyber attacks or other cyber events affecting communities. We present a new taxonomy that considers the motivation, methodology, and effects of cyber events that can affect communities. We include a discussion on how our taxonomy is useful to e-government, industry, and security researchers.",
"title": ""
},
{
"docid": "ddb51863430250a28f37c5f12c13c910",
"text": "Much of our understanding of human thinking is based on probabilistic models. This innovative book by Jerome R. Busemeyer and Peter D. Bruza argues that, actually, the underlying mathematical structures from quantum theory provide a much better account of human thinking than traditional models. They introduce the foundations for modelling probabilistic-dynamic systems using two aspects of quantum theory. The first, “contextuality,” is a way to understand interference effects found with inferences and decisions under conditions of uncertainty. The second, “quantum entanglement,” allows cognitive phenomena to be modelled in non-reductionist ways. Employing these principles drawn from quantum theory allows us to view human cognition and decision in a totally new light. Introducing the basic principles in an easy-to-follow way, this book does not assume a physics background or a quantum brain and comes complete with a tutorial and fully worked-out applications in important areas of cognition and decision.",
"title": ""
},
{
"docid": "b3d49bd191e0432e4306ee08b49e4c7c",
"text": "ConceptNet is a knowledge representation project, providing a large semantic graph that describes general human knowledge and how it is expressed in natural language. This paper presents the latest iteration, ConceptNet 5, including its fundamental design decisions, ways to use it, and evaluations of its coverage and accuracy.",
"title": ""
}
] |
scidocsrr
|
26dc83d69e9e8265c2416735b7b1b543
|
Detecting cheaters for multiplayer games: theory, design and implementation[1]
|
[
{
"docid": "84b4228c5fdeb8df274bf2d60651b3ac",
"text": "THE multiplayer game (MPG) market is segmented into a handful of readily identifiable genres, the most popular being first-person shooters, realtime strategy games, and role-playing games. First-person shooters (FPS) such as Quake [11], Half-Life [17], and Unreal Tournament [9] are fast-paced conflicts between up to thirty heavily armed players. Players in realtime strategy (RTS) games like Command & Conquer [19], StarCraft [8], and Age of Empires [18] or role-playing game (RPG) such as Diablo II [7] command tens or hundreds of units in battle against up to seven other players. Persistent virtual worlds such as Ultima Online [2], Everquest [12], and Lineage [14] encompass hundreds of thousands of players at a time (typically served by multiple servers). Cheating has always been a problem in computer games, and when prizes are involved can become a contractual issue for the game service provider. Here we examine a cheat where players lie about their network latency (and therefore the amount of time they have to react to their opponents) to see into the future and stay",
"title": ""
}
] |
[
{
"docid": "248adf4ee726dce737b7d0cbe3334ea3",
"text": "People can often find themselves out of their depth when they face knowledge-based problems, such as faulty technology, or medical concerns. This can also happen in everyday domains that users are simply inexperienced with, like cooking. These are common exploratory search conditions, where users don’t quite know enough about the domain to know if they are submitting a good query, nor if the results directly resolve their need or can be translated to do so. In such situations, people turn to their friends for help, or to forums like StackOverflow, so that someone can explain things to them and translate information to their specific need. This short paper describes work-in-progress within a Google-funded project focusing on Search Literacy in these situations, where improved search skills will help users to learn as they search, to search better, and to better comprehend the results. Focusing on the technology-problem domain, we present initial results from a qualitative study of questions asked and answers given in StackOverflow, and present plans for designing search engine support to help searchers learn as they search.",
"title": ""
},
{
"docid": "715d63ebb1316f7c35fd98871297b7d9",
"text": "1. Associate Professor of Oncology of the State University of Ceará; Clinical Director of the Cancer Hospital of Ceará 2. Resident in Urology of Urology Department of the Federal University of Ceará 3. Associate Professor of Urology of the State University of Ceará; Assistant of the Division of Uro-Oncology, Cancer Hospital of Ceará 4. Professor of Urology Department of the Federal University of Ceará; Chief of Division of Uro-Oncology, Cancer Hospital of Ceará",
"title": ""
},
{
"docid": "e4d1a0be0889aba00b80a2d6cdc2335b",
"text": "This study uses a multi-period structural model developed by Chen and Yeh (2006), which extends the Geske-Johnson (1987) compound option model to evaluate the performance of capital structure arbitrage under a multi-period debt structure. Previous studies exploring capital structure arbitrage have typically employed single-period structural models, which have very limited empirical scopes. In this paper, we predict the default situations of a firm using the multi-period Geske-Johnson model that assumes endogenous default barriers. The Geske-Johnson model is the only model that accounts for the entire debt structure and imputes the default barrier to the asset value of the firm. This study also establishes trading strategies and analyzes the arbitrage performance of 369 North American obligators from 2004 to 2008. Comparing the performance of capital structure arbitrage between the Geske-Johnson and CreditGrades models, we find that the extended Geske-Johnson model is more suitable than the CreditGrades model for exploiting the mispricing between equity prices and credit default swap spreads.",
"title": ""
},
{
"docid": "a258c6b5abf18cb3880e4bc7a436c887",
"text": "We propose a reactive controller framework for robust quadrupedal locomotion, designed to cope with terrain irregularities, trajectory tracking errors and poor state estimation. The framework comprises two main modules: One related to the generation of elliptic trajectories for the feet and the other for control of the stability of the whole robot. We propose a task space CPG-based trajectory generation that can be modulated according to terrain irregularities and the posture of the robot trunk. To improve the robot's stability, we implemented a null space based attitude control for the trunk and a push recovery algorithm based on the concept of capture points. Simulations and experimental results on the hydraulically actuated quadruped robot HyQ will be presented to demonstrate the effectiveness of our framework.",
"title": ""
},
{
"docid": "7464cc07f32de5b9ed2465e4f89c019e",
"text": "It is completely amazing! Fake news and click-baits have totally invaded the cyber space. Let us face it: everybody hates them for three simple reasons. Reason #2 will absolutely amaze you. What these can achieve at the time of election will completely blow your mind! Now, we all agree, this cannot go on, you know, somebody has to stop it. So, we did this research on fake news/click-bait detection and trust us, it is totally great research, it really is! Make no mistake. This is the best research ever! Seriously, come have a look, we have it all: neural networks, attention mechanism, sentiment lexicons, author profiling, you name it. Lexical features, semantic features, we absolutely have it all. And we have totally tested it, trust us! We have results, and numbers, really big numbers. The best numbers ever! Oh, and analysis, absolutely top notch analysis. Interested? Come read the shocking truth about fake news and click-bait in the Bulgarian cyber space. You won’t believe what we have found!",
"title": ""
},
{
"docid": "957b3e0cbf7d275739cb411f0c5a1505",
"text": "Multi-task neural network architectures provide a mechanism that jointly integrates information from distinct sources. It is ideal in the context of MR-only radiotherapy planning as it can jointly regress a synthetic CT (synCT) scan and segment organs-at-risk (OAR) from MRI. We propose a probabilistic multi-task network that estimates: 1) intrinsic uncertainty through a heteroscedastic noise model for spatiallyadaptive task loss weighting and 2) parameter uncertainty through approximate Bayesian inference. This allows sampling of multiple segmentations and synCTs that share their network representation. We test our model on prostate cancer scans and show that it produces more accurate and consistent synCTs with a better estimation in the variance of the errors, state of the art results in OAR segmentation and a methodology for quality assurance in radiotherapy treatment planning.",
"title": ""
},
{
"docid": "a5f7a243e68212e211d9d89da06ceae1",
"text": "A new technique to achieve a circularly polarized probe-fed single-layer microstrip-patch antenna with a wideband axial ratio is proposed. The antenna is a modified form of the conventional E-shaped patch, used to broaden the impedance bandwidth of a basic patch antenna. By letting the two parallel slots of the E patch be unequal, asymmetry is introduced. This leads to two orthogonal currents on the patch and, hence, circularly polarized fields are excited. The proposed technique exhibits the advantage of the simplicity of the E-shaped patch design, which requires only the slot lengths, widths, and position parameters to be determined. Investigations of the effect of various dimensions of the antenna have been carried out via parametric analysis. Based on these investigations, a design procedure for a circularly polarized E-shaped patch was developed. A prototype has been designed, following the suggested procedure for the IEEE 802.11big WLAN band. The performance of the fabricated antenna was measured and compared with simulation results. Various examples with different substrate thicknesses and material types are presented and compared with the recently proposed circularly polarized U-slot patch antennas.",
"title": ""
},
{
"docid": "15dbd6af7840bdfe54609873dd1a0ad9",
"text": "As software systems become increasingly complex to build developers are turning more and more to integrating pre-built components from third party developers into their systems. This use of Commercial Off-The-Shelf (COTS) software components in system construction presents new challenges to system architects and designers. This paper is an experience report that describes issues raised when integrating COTS components, outlines strategies for integration, and presents some informal rules we have developed that ease the development and maintenance of such systems.",
"title": ""
},
{
"docid": "64de73be55c4b594934b0d1bd6f47183",
"text": "Smart grid has emerged as the next-generation power grid via the convergence of power system engineering and information and communication technology. In this article, we describe smart grid goals and tactics, and present a threelayer smart grid network architecture. Following a brief discussion about major challenges in smart grid development, we elaborate on smart grid cyber security issues. We define a taxonomy of basic cyber attacks, upon which sophisticated attack behaviors may be built. We then introduce fundamental security techniques, whose integration is essential for achieving full protection against existing and future sophisticated security attacks. By discussing some interesting open problems, we finally expect to trigger more research efforts in this emerging area.",
"title": ""
},
{
"docid": "3754cc254eed0901e6b0010170a20745",
"text": "Chronic diseases, such as Alzheimer's Disease, Diabetes, and Chronic Obstructive Pulmonary Disease, usually progress slowly over a long period of time, causing increasing burden to the patients, their families, and the healthcare system. A better understanding of their progression is instrumental in early diagnosis and personalized care. Modeling disease progression based on real-world evidence is a very challenging task due to the incompleteness and irregularity of the observations, as well as the heterogeneity of the patient conditions. In this paper, we propose a probabilistic disease progression model that address these challenges. As compared to existing disease progression models, the advantage of our model is three-fold: 1) it learns a continuous-time progression model from discrete-time observations with non-equal intervals; 2) it learns the full progression trajectory from a set of incomplete records that only cover short segments of the progression; 3) it learns a compact set of medical concepts as the bridge between the hidden progression process and the observed medical evidence, which are usually extremely sparse and noisy. We demonstrate the capabilities of our model by applying it to a real-world COPD patient cohort and deriving some interesting clinical insights.",
"title": ""
},
{
"docid": "f9d33c91e71a3e84f3b06af83fcdbb6c",
"text": "OBJECTIVES\nTo estimate the magnitude of small meaningful and substantial individual change in physical performance measures and evaluate their responsiveness.\n\n\nDESIGN\nSecondary data analyses using distribution- and anchor-based methods to determine meaningful change.\n\n\nSETTING\nSecondary analysis of data from an observational study and clinical trials of community-dwelling older people and subacute stroke survivors.\n\n\nPARTICIPANTS\nOlder adults with mobility disabilities in a strength training trial (n=100), subacute stroke survivors in an intervention trial (n=100), and a prospective cohort of community-dwelling older people (n=492).\n\n\nMEASUREMENTS\nGait speed, Short Physical Performance Battery (SPPB), 6-minute-walk distance (6MWD), and self-reported mobility.\n\n\nRESULTS\nMost small meaningful change estimates ranged from 0.04 to 0.06 m/s for gait speed, 0.27 to 0.55 points for SPPB, and 19 to 22 m for 6MWD. Most substantial change estimates ranged from 0.08 to 0.14 m/s for gait speed, 0.99 to 1.34 points for SPPB, and 47 to 49 m for 6MWD. Based on responsiveness indices, per-group sample sizes for clinical trials ranged from 13 to 42 for substantial change and 71 to 161 for small meaningful change.\n\n\nCONCLUSION\nBest initial estimates of small meaningful change are near 0.05 m/s for gait speed, 0.5 points for SPPB, and 20 m for 6MWD and of substantial change are near 0.10 m/s for gait speed, 1.0 point for SPPB, and 50 m for 6MWD. For clinical use, substantial change in these measures and small change in gait speed and 6MWD, but not SPPB, are detectable. For research use, these measures yield feasible sample sizes for detecting meaningful change.",
"title": ""
},
{
"docid": "caaca962473382e40a08f90240cc88b6",
"text": "Lysergic acid diethylamide (LSD) was synthesized in 1938 and its psychoactive effects discovered in 1943. It was used during the 1950s and 1960s as an experimental drug in psychiatric research for producing so-called \"experimental psychosis\" by altering neurotransmitter system and in psychotherapeutic procedures (\"psycholytic\" and \"psychedelic\" therapy). From the mid 1960s, it became an illegal drug of abuse with widespread use that continues today. With the entry of new methods of research and better study oversight, scientific interest in LSD has resumed for brain research and experimental treatments. Due to the lack of any comprehensive review since the 1950s and the widely dispersed experimental literature, the present review focuses on all aspects of the pharmacology and psychopharmacology of LSD. A thorough search of the experimental literature regarding the pharmacology of LSD was performed and the extracted results are given in this review. (Psycho-) pharmacological research on LSD was extensive and produced nearly 10,000 scientific papers. The pharmacology of LSD is complex and its mechanisms of action are still not completely understood. LSD is physiologically well tolerated and psychological reactions can be controlled in a medically supervised setting, but complications may easily result from uncontrolled use by layman. Actually there is new interest in LSD as an experimental tool for elucidating neural mechanisms of (states of) consciousness and there are recently discovered treatment options with LSD in cluster headache and with the terminally ill.",
"title": ""
},
{
"docid": "20af5209de71897158820f935018d877",
"text": "This paper presents a new bag-of-entities representation for document ranking, with the help of modern knowledge bases and automatic entity linking. Our system represents query and documents by bag-of-entities vectors constructed from their entity annotations, and ranks documents by their matches with the query in the entity space. Our experiments with Freebase on TREC Web Track datasets demonstrate that current entity linking systems can provide sufficient coverage of the general domain search task, and that bag-of-entities representations outperform bag-of-words by as much as 18% in standard document ranking tasks.",
"title": ""
},
{
"docid": "1645b4f6a8c54fcc3921a2518aeb64d0",
"text": "Intestinal schistosomiasis, caused by digenetic trematodes of the genus Schistosoma, is the most prevalent water related disease that causes considerable morbidity and mortality. Although prevalence of Schistosoma mansoni infection has been reported for the present study area, earlier studies have not estimated intensity of infections in relation to periportal fibrosis, which would have been crucial for epidemiological and clinical evaluations. Hence, a community based cross sectional study was conducted from December 2011 to March 2012 to assess prevalence of infection and schistosomal periportal fibrosis in Waja-Timuga, northern Ethiopia. In a cross sectional study involving 371 randomly selected individuals, fresh stool samples were collected and processed by the Kato-Katz method and examined microscopically. Ultrasonography was used to determine status of schistosomal periportal fibrosis and to detect hepatomegaly and/or splenomegaly. Serum was collected for assay of hepatic activity. Statistical analysis was performed using STATA 11 statistical soft ware. P-value <0.05 was reported as statistically significant. The prevalence of S.mansoni infection was 73.9%, while the prevalence of schistosomal periportal fibrosis was 12.3% and mean intensity of infection was 234 eggs per gram of stool. Peak prevalence and intensity of S.mansoni infection was documented in the age range of 10–20 years. Among the study individuals, hepatomegaly was recorded in 3.7% and splenomegaly was recorded in 7.4% of the study individuals. Similarly, among the study individuals who had definite periportal fibrosis, 5.9% had elevated liver enzyme levels. The high prevalence of Schistosoma mansoni infection and schistosomal periportal fibrosis observed in the study area calls for a periodic deworming program to reduce disease, morbidity and transmission. Preventive chemotherapy complemented with other control measures is highly required for sustainable control of schistosomiasis in the study area.",
"title": ""
},
{
"docid": "d4ca93d0aeabda1b90bb3f0f16df9ee8",
"text": "Smart card technology has evolved over the last few years following notable improvements in the underlying hardware and software platforms. Advanced smart card microprocessors, along with robust smart card operating systems and platforms, contribute towards a broader acceptance of the technology. These improvements have eliminated some of the traditional smart card security concerns. However, researchers and hackers are constantly looking for new issues and vulnerabilities. In this article we provide a brief overview of the main smart card attack categories and their corresponding countermeasures. We also provide examples of well-documented attacks on systems that use smart card technology (e.g. satellite TV, EMV, proximity identification) in an attempt to highlight the importance of the security of the overall system rather than just the smart card. a 2009 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "23684698f11bf7328420e334a4fc1096",
"text": "Periodontal disease results in attachment loss and damage to the supporting alveolar bone leading to tooth mobility. In majority of cases, the mandibular incisors are the teeth showing the first signs of mobility. The clinical management of periodontally involved teeth remains a challenge to the clinician. Splinting may be indicated for individual mobile tooth as well as for the entire dentition. The main objectives of splinting include decreasing patient discomfort, increasing occlusal and masticatory function, enhancing esthetics and improving the periodontal prognosis of mobile teeth. Fiber-reinforced composites provide one of the better alternatives for splinting of teeth. This clinical report describes a technique of splinting of periodontally involved mandibular anterior teeth using glass fiber-reinforced composite resin with a follow-up period of five years.",
"title": ""
},
{
"docid": "3d337d14e8f8cd2ded2c9f00cd3f2274",
"text": "BACKGROUND AND PURPOSE\nThe primary objective of this study is to establish the validity and reliability of a perceived medication knowledge and confidence survey instrument (Okere-Renier Survey).\n\n\nMETHODS\nTwo-stage psychometric analyses were conducted to assess reliability (Cronbach's alpha > .70) of the associated knowledge scale. To evaluate the construct validity, exploratory and confirmatory factor analyses were performed.\n\n\nRESULTS\nExploratory factor analysis (EFA) revealed three subscale measures and confirmatory factor analysis (CFA) indicated an acceptable fit to the data (goodness-of-fit index [GFI = 0.962], adjusted goodness-of-fit index [AGFI = 0.919], root mean square residual [RMR = 0.065], root mean square error of approximation [RMSEA] = 0.073). A high internal consistency with Cronbach's a of .833 and .744 were observed in study Stages 1 and 2, respectively.\n\n\nCONCLUSIONS\nThe Okere-Renier Survey is a reliable instrument for predicting patient-perceived level of medication knowledge and confidence.",
"title": ""
},
{
"docid": "0b0fac5bf220e2bb8a545e988fa5123f",
"text": "Graphite nanoplatelets have recently attracted considerable attention as a viable and inexpensive filler substitute for carbon nanotubes in nanocomposites, given the predicted excellent in-plane mechanical, structural, thermal, and electrical properties of graphite. As with carbon nanotubes, full utilization of graphite nanoplatelets in polymer nanocomposite applications will inevitably depend on the ability to achieve complete dispersion of the nano-filler component in the polymer matrix of choice. In this communication, we describe a method for preparing watersoluble polymer-coated graphitic nanoplatelets. We prepare graphite nanoplatelets via the chemical reduction of exfoliated graphite oxide nanoplatelets. Graphite oxide is produced by the oxidative treatment of graphite. It still possesses a layered structure, but is much lighter in color than graphite due to the loss of electronic conjugation brought about during the oxidation. The basal planes of the graphene sheets in graphite oxide are decorated mostly with epoxide and hydroxyl groups, in addition to carbonyl and carboxyl groups, which are located at the edges. These oxygen functionalities alter the van der Waals interactions between the layers of graphite oxide and render them hydrophilic, thus facilitating their hydration and exfoliation in aqueous media. As a result, graphite oxide readily forms stable colloidal dispersions of thin graphite oxide sheets in water. From these stable dispersions, thin ‘‘graphitic’’ nanoplatelets can be obtained by chemical deoxygenation, e.g., removal of the oxygen functionalities with partial restoration of the aromatic graphene network. It is possible that even single graphite sheets (i.e., finite-sized graphene sheets) can be accessed via graphite oxide exfoliation and a subsequent solution-based chemical reduction. In practice, reduction of water-dispersed graphite oxide nanoplatelets results in a gradual decrease in their hydrophilic character, which eventually leads to their irreversible agglomeration and precipitation. However, stable aqueous dispersions of reduced graphite oxide nanoplatelets can be prepared if the reduction is carried out in the presence of an anionic polymer. A stable water dispersion of graphite oxide nanoplatelets, prepared by exfoliation of the graphite oxide (1 mg mL) via ultrasonic treatment (Fisher Scientific FS60, 1 h), was reduced with hydrazine hydrate at 100 uC for 24 h. As the reduction proceeds, the brown-colored dispersion of exfoliated graphite oxide turns black and the reduced nanoplatelets agglomerate and eventually precipitate. This precipitated material could not be re-suspended even after prolonged ultrasonic treatment in water in the presence of surfactants such as sodium dodecylsulfate (SDS) and TRITON X-100, which have been found to successfully solubilize carbon nanotubes. Elemental analyses, coupled with Karl Fisher titration (Galbraith Laboratories), of both graphite oxide and the reduced material indicate that there is a considerable increase in C/O atomic ratio in the reduced material (10.3) compared to that in the starting graphite oxide (2.7). Hence, the reduced material can be described as consisting of partially oxidized graphitic nanoplatelets, given that a fair amount of oxygen is retained even after reduction. The black color of the reduced materials suggests a partial re-graphitization of the exfoliated graphite oxide, as observed by others. 
In addition to the decrease in the oxygen level, reduction of graphite oxide is accompanied by nitrogen incorporation from the reducing agent (C/N = 16.1). Attempts to reduce graphite oxide in the presence of SDS and TRITON X-100 also failed to produce a stable aqueous dispersion of graphitic nanoplatelets. However, when the reduction was carried out in the presence of poly(sodium 4-styrenesulfonate) (PSS) (Mw = 70000, Sigma-Aldrich, 10 mg/mL, 10/1 w/w vs. graphite oxide), a stable black dispersion was obtained. This dispersion can be filtered through a PVDF membrane (0.2 μm pore size, Fisher Scientific) to yield PSS-coated graphitic nanoplatelets that can be re-dispersed readily in water upon mild sonication, forming black suspensions (Fig. 1). At concentrations lower than 0.1 mg/mL, the dispersions obtained after a 30-minute ultrasonic treatment appear to be stable indefinitely; samples prepared over a year ago are still homogeneous to date. More concentrated dispersions would develop a small amount of precipitate after several days. However, they never fully settle, even upon months of standing. Elemental analysis of the PSS-coated platelets indicates that it contains ~40% polymer as judged by its sulfur content (graphite oxide reduced without any PSS contains no sulfur at all). Its comparatively high oxygen and hydrogen…",
"title": ""
},
{
"docid": "675b2fc25618650dab047f6d7e63ca19",
"text": "Cortical processing of visual information requires that information be exchanged between neurons coding for distant regions in the visual field. It is argued that feedback connections are the best candidates for such rapid long-distance interconnections. In the integrated model, information arriving in the cortex from the magnocellular layers of the lateral geniculate nucleus is first sent and processed in the parietal cortex that is very rapidly activated by a visual stimulus. Results from this first-pass computation are then sent back by feedback connections to areas V1 and V2 that act as 'active black-boards' for the rest of the visual cortical areas: information retroinjected from the parietal cortex is used to guide further processing of parvocellular and koniocellular information in the inferotemporal cortex.",
"title": ""
},
{
"docid": "1720517b913ce3974ab92239ff8a177e",
"text": "Honeypot is a closely monitored computer resource that emulates behaviors of production host within a network in order to lure and attract the attackers. The workability and effectiveness of a deployed honeypot depends on its technical configuration. Since honeypot is a resource that is intentionally made attractive to the attackers, it is crucial to make it intelligent and self-manageable. This research reviews at artificial intelligence techniques such as expert system and case-based reasoning, in order to build an intelligent honeypot.",
"title": ""
}
] |
scidocsrr
|
5a4098d72885cbcbcffd0f1fb7eb6091
|
The beliefs behind the teacher that influences their ICT practices
|
[
{
"docid": "ecddd4f80f417dcec49021065394c89a",
"text": "Research in the area of educational technology has often been critiqued for a lack of theoretical grounding. In this article we propose a conceptual framework for educational technology by building on Shulman’s formulation of ‘‘pedagogical content knowledge’’ and extend it to the phenomenon of teachers integrating technology into their pedagogy. This framework is the result of 5 years of work on a program of research focused on teacher professional development and faculty development in higher education. It attempts to capture some of the essential qualities of teacher knowledge required for technology integration in teaching, while addressing the complex, multifaceted, and situated nature of this knowledge. We argue, briefly, that thoughtful pedagogical uses of technology require the development of a complex, situated form of knowledge that we call Technological Pedagogical Content Knowledge (TPCK). In doing so, we posit the complex roles of, and interplay among, three main components of learning environments: content, pedagogy, and technology. We argue that this model has much to offer to discussions of technology integration at multiple levels: theoretical, pedagogical, and methodological. In this article, we describe the theory behind our framework, provide examples of our teaching approach based upon the framework, and illustrate the methodological contributions that have resulted from this work.",
"title": ""
},
{
"docid": "c17e6363762e0e9683b51c0704d43fa7",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
}
] |
[
{
"docid": "2683c65d587e8febe45296f1c124e04d",
"text": "We present a new autoencoder-type architecture, that is trainable in an unsupervised mode, sustains both generation and inference, and has the quality of conditional and unconditional samples boosted by adversarial learning. Unlike previous hybrids of autoencoders and adversarial networks, the adversarial game in our approach is set up directly between the encoder and the generator, and no external mappings are trained in the process of learning. The game objective compares the divergences of each of the real and the generated data distributions with the canonical distribution in the latent space. We show that direct generator-vs-encoder game leads to a tight coupling of the two components, resulting in samples and reconstructions of a comparable quality to some recently-proposed more complex architectures.",
"title": ""
},
{
"docid": "4f096ba7fc6164cdbf5d37676d943fa8",
"text": "This work presents an intelligent clothes search system based on domain knowledge, targeted at creating a virtual assistant to search clothes matched to fashion and userpsila expectation using all what have already been in real closet. All what garment essentials and fashion knowledge are from visual images. Users can simply submit the desired image keywords, such as elegant, sporty, casual, and so on, and occasion type, such as formal meeting, outdoor dating, and so on, to the system. And then the fashion style recognition module is activated to search the desired clothes within the personal garment database. Category learning with supervised neural networking is applied to cluster garments into different impression groups. The input stimuli of the neural network are three sensations, warmness, loudness, and softness, which are transformed from the physical garment essentials like major color tone, print type, and fabric material. The system aims to provide such an intelligent user-centric services system functions as a personal fashion advisor.",
"title": ""
},
{
"docid": "1a9e2481abf23501274e67575b1c9be6",
"text": "The multiple criteria decision making (MCDM) methods VIKOR and TOPSIS are based on an aggregating function representing “closeness to the idealâ€, which originated in the compromise programming method. In VIKOR linear normalization and in TOPSIS vector normalization is used to eliminate the units of criterion functions. The VIKOR method of compromise ranking determines a compromise solution, providing a maximum “group utility†for the “majority†and a minimum of an individual regret for the “opponentâ€. The TOPSIS method determines a solution with the shortest distance to the ideal solution and the greatest distance from the negative-ideal solution, but it does not consider the relative importance of these distances. A comparative analysis of these two methods is illustrated with a numerical example, showing their similarity and some differences. a, 1 b Purchase Export Previous article Next article Check if you have access through your login credentials or your institution.",
"title": ""
},
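The passage above describes both methods at the level of normalization and distance measures; below is a minimal sketch of the TOPSIS side (vector normalization, ideal and negative-ideal points, relative closeness). The weights, criteria, and example matrix are illustrative choices, not taken from the paper.

```python
import numpy as np

def topsis(decision_matrix, weights, benefit_criteria):
    """Rank alternatives with TOPSIS using vector normalization.

    decision_matrix : (m alternatives x n criteria) array of raw scores
    weights         : length-n array of criterion weights (summing to 1)
    benefit_criteria: length-n booleans, True where larger values are better
    """
    X = np.asarray(decision_matrix, dtype=float)
    w = np.asarray(weights, dtype=float)

    # Vector normalization (the normalization TOPSIS uses, per the passage)
    R = X / np.linalg.norm(X, axis=0)
    V = R * w

    # Ideal and negative-ideal solutions per criterion
    ideal = np.where(benefit_criteria, V.max(axis=0), V.min(axis=0))
    anti_ideal = np.where(benefit_criteria, V.min(axis=0), V.max(axis=0))

    # Euclidean distances to both reference points
    d_plus = np.linalg.norm(V - ideal, axis=1)
    d_minus = np.linalg.norm(V - anti_ideal, axis=1)

    # Relative closeness: higher means closer to the ideal solution
    return d_minus / (d_plus + d_minus)

# Example: 3 alternatives, 2 benefit criteria and 1 cost criterion
scores = topsis([[7, 9, 9], [8, 7, 8], [9, 6, 8]],
                weights=[0.4, 0.3, 0.3],
                benefit_criteria=[True, True, False])
print(scores.argsort()[::-1])  # alternatives ranked best-first
```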
{
"docid": "71aae4cbccf6d3451d35528ceca8b8a9",
"text": "We propose Hierarchical Space-Time Segments as a new representation for action recognition and localization. This representation has a two-level hierarchy. The first level comprises the root space-time segments that may contain a human body. The second level comprises multi-grained space-time segments that contain parts of the root. We present an unsupervised method to generate this representation from video, which extracts both static and non-static relevant space-time segments, and also preserves their hierarchical and temporal relationships. Using simple linear SVM on the resultant bag of hierarchical space-time segments representation, we attain better than, or comparable to, state-of-the-art action recognition performance on two challenging benchmark datasets and at the same time produce good action localization results.",
"title": ""
},
{
"docid": "372c5918e55e79c0a03c14105eb50fad",
"text": "Boosting is one of the most significant advances in machine learning for classification and regression. In its original and computationally flexible version, boosting seeks to minimize empirically a loss function in a greedy fashion. The resulted estimator takes an additive function form and is built iteratively by applying a base estimator (or learner) to updated samples depending on the previous iterations. An unusual regularization technique, early stopping, is employed based on CV or a test set. This paper studies numerical convergence, consistency, and statistical rates of convergence of boosting with early stopping, when it is carried out over the linear span of a family of basis functions. For general loss functions, we prove the convergence of boosting’s greedy optimization to the infinimum of the loss function over the linear span. Using the numerical convergence result, we find early stopping strategies under which boosting is shown to be consistent based on iid samples, and we obtain bounds on the rates of convergence for boosting estimators. Simulation studies are also presented to illustrate the relevance of our theoretical results for providing insights to practical aspects of boosting. As a side product, these results also reveal the importance of restricting the greedy search step sizes, as known in practice through the works of Friedman and others. Moreover, our results lead to a rigorous proof that for a linearly separable problem, AdaBoost with ǫ → 0 stepsize becomes an L-margin maximizer when left to run to convergence.",
"title": ""
},
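The practical point of the abstract above, stopping boosting early on a held-out set rather than running to convergence, can be illustrated with scikit-learn's staged predictions. The data set, step size, and number of rounds below are arbitrary choices for illustration, not the paper's setup.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=2000, n_features=20, noise=10.0, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.25, random_state=0)

# Small step size (learning_rate), echoing the passage's point about restricted
# greedy step sizes; run many rounds, then stop where validation error bottoms out.
gbr = GradientBoostingRegressor(n_estimators=500, learning_rate=0.05,
                                max_depth=2, random_state=0)
gbr.fit(X_train, y_train)

val_errors = [mean_squared_error(y_val, pred)
              for pred in gbr.staged_predict(X_val)]
best_round = int(np.argmin(val_errors)) + 1
print(f"early-stopping round: {best_round}, val MSE: {val_errors[best_round - 1]:.2f}")
```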
{
"docid": "efc11b77182119202190f97d705b3bb7",
"text": "In many E-commerce recommender systems, a special class of recommendation involves recommending items to users in a life cycle. For example, customers who have babies will shop on Diapers.com within a relatively long period, and purchase different products for babies within different growth stages. Traditional recommendation algorithms produce recommendation lists similar to items that the target user has accessed before (content filtering), or compute recommendation by analyzing the items purchased by the users who are similar to the target user (collaborative filtering). Such recommendation paradigms cannot effectively resolve the situation with a life cycle, i.e., the need of customers within different stages might vary significantly. In this paper, we model users’ behavior with life cycles by employing handcrafted item taxonomies, of which the background knowledge can be tailored for the computation of personalized recommendation. In particular, our method first formalizes a user’s long-term behavior using the item taxonomy, and then identifies the exact stage of the user. By incorporating collaborative filtering into recommendation, we can easily provide a personalized item list to the user through other similar users within the same stage. An empirical evaluation conducted on a purchasing data collection obtained from Diapers.com demonstrates the efficacy of our proposed method. 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "b3e9c251b2da6c704da6285602773afe",
"text": "It has been well established that most operating system crashes are due to bugs in device drivers. Because drivers are normally linked into the kernel address space, a buggy driver can wipe out kernel tables and bring the system crashing to a halt. We have greatly mitigated this problem by reducing the kernel to an absolute minimum and running each driver as a separate, unprivileged process in user space. In addition, we implemented a POSIX-conformant operating system as multiple user-mode processes. In this design, all that is left in kernel mode is a tiny kernel of under 3800 lines of executable code for catching interrupts, starting and stopping processes, and doing IPC. By moving nearly the entire operating system to multiple, protected user-mode processes we reduce the consequences of faults, since a driver failure no longer is fatal and does not require rebooting the computer. In fact, our system incorporates a reincarnation server that is designed to deal with such errors and often allows for full recovery, transparent to the application and without loss of data. To achieve maximum reliability, our design was guided by simplicity, modularity, least authorization, and fault tolerance. This paper discusses our lightweight approach and reports on its performance and reliability. It also compares our design to other proposals for protecting drivers using kernel wrapping and virtual machines.",
"title": ""
},
{
"docid": "7e5d83af3c6496e41c19b36b2392f076",
"text": "JavaScript is an interpreted programming language most often used for enhancing webpage interactivity and functionality. It has powerful capabilities to interact with webpage documents and browser windows, however, it has also opened the door for many browser-based security attacks. Insecure engineering practices of using JavaScript may not directly lead to security breaches, but they can create new attack vectors and greatly increase the risks of browser-based attacks. In this article, we present the first measurement study on insecure practices of using JavaScript on the Web. Our focus is on the insecure practices of JavaScript inclusion and dynamic generation, and we examine their severity and nature on 6,805 unique websites. Our measurement results reveal that insecure JavaScript practices are common at various websites: (1) at least 66.4% of the measured websites manifest the insecure practices of including JavaScript files from external domains into the top-level documents of their webpages; (2) over 44.4% of the measured websites use the dangerous eval() function to dynamically generate and execute JavaScript code on their webpages; and (3) in JavaScript dynamic generation, using the document.write() method and the innerHTML property is much more popular than using the relatively secure technique of creating script elements via DOM methods. Our analysis indicates that safe alternatives to these insecure practices exist in common cases and ought to be adopted by website developers and administrators for reducing potential security risks.",
"title": ""
},
{
"docid": "54e5cd296371e7e058a00b1835251242",
"text": "In this paper, a quasi-millimeter-wave wideband bandpass filter (BPF) is designed by using a microstrip dual-mode ring resonator and two folded half-wavelength resonators. Based on the transmission line equivalent circuit of the filter, variations of the frequency response of the filter versus the circuit parameters are investigated first by using the derived formulas and circuit simulators. Then a BPF with a 3dB fractional bandwidth (FBW) of 20% at 25.5 GHz is designed, which realizes the desired wide passband, sharp skirt property, and very wide stopband. Finally, the designed BPF is fabricated, and its measured frequency response is found agree well with the simulated result.",
"title": ""
},
{
"docid": "93d06eafb15063a7d17ec9a7429075f0",
"text": "Non-orthogonal multiple access (NOMA) is emerging as a promising, yet challenging, multiple access technology to improve spectrum utilization for the fifth generation (5G) wireless networks. In this paper, the application of NOMA to multicast cognitive radio networks (termed as MCR-NOMA) is investigated. A dynamic cooperative MCR-NOMA scheme is proposed, where the multicast secondary users serve as relays to improve the performance of both primary and secondary networks. Based on the available channel state information (CSI), three different secondary user scheduling strategies for the cooperative MCR-NOMA scheme are presented. To evaluate the system performance, we derive the closed-form expressions of the outage probability and diversity order for both networks. Furthermore, we introduce a new metric, referred to as mutual outage probability to characterize the cooperation benefit compared to non-cooperative MCR-NOMA scheme. Simulation results demonstrate significant performance gains are obtained for both networks, thanks to the use of our proposed cooperative MCR-NOMA scheme. It is also demonstrated that higher spatial diversity order can be achieved by opportunistically utilizing the CSI available for the secondary user scheduling.",
"title": ""
},
{
"docid": "92386ee2988b6d7b6f2f0b3cdcbf44ba",
"text": "In the rst part of the paper we consider the problem of dynamically apportioning resources among a set of options in a worst-case on-line framework. The model we study can be interpreted as a broad, abstract extension of the well-studied on-line prediction model to a general decision-theoretic setting. We show that the multiplicative weightupdate rule of Littlestone and Warmuth [20] can be adapted to this model yielding bounds that are slightly weaker in some cases, but applicable to a considerably more general class of learning problems. We show how the resulting learning algorithm can be applied to a variety of problems, including gambling, multiple-outcome prediction, repeated games and prediction of points in R n . In the second part of the paper we apply the multiplicative weight-update technique to derive a new boosting algorithm. This boosting algorithm does not require any prior knowledge about the performance of the weak learning algorithm. We also study generalizations of the new boosting algorithm to the problem of learning functions whose range, rather than being binary, is an arbitrary nite set or a bounded segment of the real line.",
"title": ""
},
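A minimal sketch of the multiplicative weight-update (Hedge) rule discussed above, applied to the worst-case resource-allocation setting: the weight of each option decays exponentially with the loss it incurs. The learning rate and the random loss matrix are illustrative assumptions.

```python
import numpy as np

def hedge(loss_matrix, eta=0.5):
    """Multiplicative weight-update (Hedge) over a set of options.

    loss_matrix: (T rounds x N options) array of losses in [0, 1].
    Returns the cumulative expected loss of the randomized allocation.
    """
    T, N = loss_matrix.shape
    weights = np.ones(N)
    total_loss = 0.0
    for t in range(T):
        p = weights / weights.sum()               # allocation over options
        total_loss += p @ loss_matrix[t]          # expected loss this round
        weights *= np.exp(-eta * loss_matrix[t])  # downweight lossy options
    return total_loss

rng = np.random.default_rng(0)
losses = rng.random((100, 5))
# Compare the algorithm's loss with the loss of the single best option in hindsight.
print(hedge(losses), losses.sum(axis=0).min())
```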
{
"docid": "858a5ed092f02d057437885ad1387c9f",
"text": "The current state-of-the-art singledocument summarization method generates a summary by solving a Tree Knapsack Problem (TKP), which is the problem of finding the optimal rooted subtree of the dependency-based discourse tree (DEP-DT) of a document. We can obtain a gold DEP-DT by transforming a gold Rhetorical Structure Theory-based discourse tree (RST-DT). However, there is still a large difference between the ROUGE scores of a system with a gold DEP-DT and a system with a DEP-DT obtained from an automatically parsed RST-DT. To improve the ROUGE score, we propose a novel discourse parser that directly generates the DEP-DT. The evaluation results showed that the TKP with our parser outperformed that with the state-of-the-art RST-DT parser, and achieved almost equivalent ROUGE scores to the TKP with the gold DEP-DT.",
"title": ""
},
{
"docid": "49329aef5ac732cc87b3cc78520c7ff5",
"text": "This paper surveys the previous and ongoing research on surface electromyogram (sEMG) signal processing implementation through various hardware platforms. The development of system that incorporates sEMG analysis capability is essential in rehabilitation devices, prosthesis arm/limb and pervasive healthcare in general. Most advanced EMG signal processing algorithms rely heavily on computational resource of a PC that negates the elements of portability, size and power dissipation of a pervasive healthcare system. Signal processing techniques applicable to sEMG are discussed with aim for proper execution in platform other than full-fledge PC. Performance and design parameters issues in some hardware implementation are also being pointed up. The paper also outlines the trends and alternatives solutions in developing portable and efficient EMG signal processing hardware.",
"title": ""
},
{
"docid": "1785d1d7da87d1b6e5c41ea89e447bf9",
"text": "Web usage mining is the application of data mining techniques to discover usage patterns from Web data, in order to understand and better serve the needs of Web-based applications. Web usage mining consists of three phases, namely preprocessing, pattern discovery, and pattern analysis. This paper describes each of these phases in detail. Given its application potential, Web usage mining has seen a rapid increase in interest, from both the research and practice communities. This paper provides a detailed taxonomy of the work in this area, including research efforts as well as commercial offerings. An up-to-date survey of the existing work is also provided. Finally, a brief overview of the WebSIFT system as an example of a prototypical Web usage mining system is given.",
"title": ""
},
{
"docid": "18e1f1171844fa27905246b9246cc975",
"text": "Autonomous robots must be able to learn and maintain models of their environments. Research on mobile robot navigation has produced two major paradigms for mapping indoor environments: grid-based and topological. While grid-based methods produce accurate metric maps, their complexity often prohibits efficient planning and problem solving in large-scale indoor environments. Topological maps, on the other hand, can be used much more efficiently, yet accurate and consistent topological maps are often difficult to learn and maintain in large-scale environments, particularly if momentary sensor data is highly ambiguous. This paper describes an approach that integrates both paradigms: grid-based and topoIogica1. Grid-based maps are learned using artificial neural networks and naive Bayesian integration. Topological maps are generated on top of the grid-based maps, by partitioning the latter into coherent regions. By combining both paradigms, the approach presented here gains advantages from both worlds: accuracy/consistency and efficiency. The paper gives results for autonomous exploration, mapping and operation of a mobile robot in populated multi-room environments. @ 1998 Elsevier Science B.V.",
"title": ""
},
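One concrete piece of the pipeline described above, the naive Bayesian integration of repeated sensor interpretations into a grid map, can be sketched with a standard log-odds update. The per-cell occupancy probabilities (which the paper obtains from neural networks) are stand-in values here.

```python
import numpy as np

def update_grid(log_odds, occ_prob):
    """Naive Bayesian integration of one sensor sweep into an occupancy grid.

    log_odds : current per-cell log-odds of occupancy
    occ_prob : per-cell probability of occupancy implied by the new reading
               (supplied by the caller; a neural network in the paper)
    """
    occ_prob = np.clip(occ_prob, 1e-3, 1 - 1e-3)
    return log_odds + np.log(occ_prob / (1.0 - occ_prob))

def to_probability(log_odds):
    return 1.0 / (1.0 + np.exp(-log_odds))

grid = np.zeros((3, 3))             # log-odds 0 == probability 0.5 everywhere
reading = np.full((3, 3), 0.3)      # one sweep suggests mostly free space
reading[1, 1] = 0.9                 # except an obstacle in the middle cell
for _ in range(4):                  # integrate four consistent sweeps
    grid = update_grid(grid, reading)
print(np.round(to_probability(grid), 3))
```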
{
"docid": "60182038191a764fd7070e8958185718",
"text": "Shales of very low metamorphic grade from the 2.78 to 2.45 billion-year-old (Ga) Mount Bruce Supergroup, Pilbara Craton, Western Australia, were analyzed for solvent extractable hydrocarbons. Samples were collected from ten drill cores and two mines in a sampling area centered in the Hamersley Basin near Wittenoom and ranging 200 km to the southeast, 100 km to the southwest and 70 km to the northwest. Almost all analyzed kerogenous sedimentary rocks yielded solvent extractable organic matter. Concentrations of total saturated hydrocarbons were commonly in the range of 1 to 20 ppm ( g/g rock) but reached maximum values of 1000 ppm. The abundance of aromatic hydrocarbons was 1 to 30 ppm. Analysis of the extracts by gas chromatography-mass spectrometry (GC-MS) and GC-MS metastable reaction monitoring (MRM) revealed the presence of n-alkanes, midand end-branched monomethylalkanes, -cyclohexylalkanes, acyclic isoprenoids, diamondoids, trito pentacyclic terpanes, steranes, aromatic steroids and polyaromatic hydrocarbons. Neither plant biomarkers nor hydrocarbon distributions indicative of Phanerozoic contamination were detected. The host kerogens of the hydrocarbons were depleted in C by 2 to 21‰ relative ton-alkanes, a pattern typical of, although more extreme than, other Precambrian samples. Acyclic isoprenoids showed carbon isotopic depletion relative to n-alkanes and concentrations of 2 -methylhopanes were relatively high, features rarely observed in the Phanerozoic but characteristic of many other Precambrian bitumens. Molecular parameters, including sterane and hopane ratios at their apparent thermal maxima, condensate-like alkane profiles, high monoand triaromatic steroid maturity parameters, high methyladamantane and methyldiamantane indices and high methylphenanthrene maturity ratios, indicate thermal maturities in the wet-gas generation zone. Additionally, extracts from shales associated with iron ore deposits at Tom Price and Newman have unusual polyaromatic hydrocarbon patterns indicative of pyrolytic dealkylation. The saturated hydrocarbons and biomarkers in bitumens from the Fortescue and Hamersley Groups are characterized as ‘probably syngenetic with their Archean host rock’ based on their typical Precambrian molecular and isotopic composition, extreme maturities that appear consistent with the thermal history of the host sediments, the absence of biomarkers diagnostic of Phanerozoic age, the absence of younger petroleum source rocks in the basin and the wide geographic distribution of the samples. Aromatic hydrocarbons detected in shales associated with iron ore deposits at Mt Tom Price and Mt Whaleback are characterized as ‘clearly Archean’ based on their hypermature composition and covalent bonding to kerogen. Copyright © 2003 Elsevier Ltd",
"title": ""
},
{
"docid": "9b942a1342eb3c4fd2b528601fa42522",
"text": "Peer and self-assessment offer an opportunity to scale both assessment and learning to global classrooms. This article reports our experiences with two iterations of the first large online class to use peer and self-assessment. In this class, peer grades correlated highly with staff-assigned grades. The second iteration had 42.9% of students’ grades within 5% of the staff grade, and 65.5% within 10%. On average, students assessed their work 7% higher than staff did. Students also rated peers’ work from their own country 3.6% higher than those from elsewhere. We performed three experiments to improve grading accuracy. We found that giving students feedback about their grading bias increased subsequent accuracy. We introduce short, customizable feedback snippets that cover common issues with assignments, providing students more qualitative peer feedback. Finally, we introduce a data-driven approach that highlights high-variance items for improvement. We find that rubrics that use a parallel sentence structure, unambiguous wording, and well-specified dimensions have lower variance. After revising rubrics, median grading error decreased from 12.4% to 9.9%.",
"title": ""
},
{
"docid": "14bcbfcb6e7165e67247453944f37ac0",
"text": "This study investigated whether psychologists' confidence in their clinical decisions is really justified. It was hypothesized that as psychologists study information about a case (a) their confidence about the case increases markedly and steadily but (b) the accuracy of their conclusions about the case quickly reaches a ceiling. 32 judges, including 8 clinical psychologists, read background information about a published case, divided into 4 sections. After reading each section of the case, judges answered a set of 25 questions involving personality judgments about the case. Results strongly supported the hypotheses. Accuracy did not increase significantly with increasing information, but confidence increased steadily and significantly. All judges except 2 became overconfident, most of them markedly so. Clearly, increasing feelings of confidence are not a sure sign of increasing predictive accuracy about a case.",
"title": ""
},
{
"docid": "1d1ba5f131c9603fe3d919ad493a6dc1",
"text": "By its very nature, software development consists of many knowledge-intensive processes. One of the most difficult to model, however, is requirements elicitation. This paper presents a mathematical model of the requirements elicitation process that clearly shows the critical role of knowledge in its performance. One metaprocess of requirements elicitation, selection of an appropriate elicitation technique, is also captured in the model. The values of this model are: (1) improved understanding of what needs to be performed during elicitation helps analysts improve their elicitation efforts, (2) improved understanding of how elicitation techniques are selected helps less experienced analysts be as successful as more experienced analysts, and (3) as we improve our ability to perform elicitation, we improve the likelihood that the systems we create will meet their intended customers’ needs. Many papers have been written that promulgate specific elicitation methods. A few have been written that model elicitation in general. However, none have yet to model elicitation in a way that makes clear the critical role played by knowledge. This paper’s model captures the critical roles played by knowledge in both elicitation and elicitation technique selection.",
"title": ""
},
{
"docid": "632fc99930154b2caaa83254a0cc3c52",
"text": "Article history: Received 1 May 2012 Received in revised form 1 May 2014 Accepted 3 May 2014 Available online 10 May 2014",
"title": ""
}
] |
scidocsrr
|
66bbf76fd3ac85326535b5b30371d231
|
ASAP: A Self-Adaptive Prediction System for Instant Cloud Resource Demand Provisioning
|
[
{
"docid": "e06cc2a4291c800a76fd2a107d2230e4",
"text": "Surprisingly, console logs rarely help operators detect problems in large-scale datacenter services, for they often consist of the voluminous intermixing of messages from many software components written by independent developers. We propose a general methodology to mine this rich source of information to automatically detect system runtime problems. We first parse console logs by combining source code analysis with information retrieval to create composite features. We then analyze these features using machine learning to detect operational problems. We show that our method enables analyses that are impossible with previous methods because of its superior ability to create sophisticated features. We also show how to distill the results of our analysis to an operator-friendly one-page decision tree showing the critical messages associated with the detected problems. We validate our approach using the Darkstar online game server and the Hadoop File System, where we detect numerous real problems with high accuracy and few false positives. In the Hadoop case, we are able to analyze 24 million lines of console logs in 3 minutes. Our methodology works on textual console logs of any size and requires no changes to the service software, no human input, and no knowledge of the software's internals.",
"title": ""
}
] |
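The passage above combines source-code-aware log parsing with machine learning; the sketch below illustrates only the second half, flagging anomalous message-count vectors by their residual outside a low-dimensional "normal" subspace. The feature vectors, dimensions, and threshold are made up for illustration and are not the paper's pipeline.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical message-count features: each row counts how often each parsed
# log-message template appears within one request/trace window.
rng = np.random.default_rng(1)
normal = rng.poisson(lam=[5, 5, 2, 0.1], size=(500, 4))
anomalous = np.array([[5, 0, 2, 8]])          # one template missing, another in excess
X = np.vstack([normal, anomalous]).astype(float)

# Fit a low-dimensional "normal" subspace and flag windows whose residual
# outside that subspace is unusually large.
pca = PCA(n_components=2).fit(X)
residual = X - pca.inverse_transform(pca.transform(X))
score = np.linalg.norm(residual, axis=1)
threshold = np.percentile(score, 99.5)
print(np.nonzero(score > threshold)[0])       # indices of suspicious windows
```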
[
{
"docid": "312d05085096de7d4dfaaef815f35249",
"text": "Chemomechanical caries removal allies an atraumatic technique with antimicrobiotic characteristics, minimizing painful stimuli and maximally preserving healthy dental structures. The purpose of this study was to compare the cytotoxic effects of papain-based gel (Papacarie) and another caries-removing substance, Carisolv, to a nontreatment control on cultured fibroblasts in vitro and the biocompatibility in subcutaneous tissue in vivo. The cytotoxicity analysis was performed on fibroblast cultures (NIH-3T3) after 0-, 4-, 8-, and 12-hour exposure (cell viability assay) and after 1-, 3-, 5-, and 7-day exposure (survival assay). In the in vivo study, the 2 compounds were introduced into polyethylene tubes that were implanted into subcutaneous tissues of rats. After 1, 7, 14, 30, and 60 days, tissue samples were examined histologically. Cell viability did not differ between the 2 experimental groups. The control group, however, showed significantly higher percentage viability. There were no differences in cell survival between the control and experimental groups. The histological analysis revealed a moderate inflammatory response at 2 and 7 days and a mild response at 15 days, becoming almost imperceptible by 30 and 60 days in both experimental groups. The 2 tested substances exhibited acceptable biocompatibilities and demonstrated similar responses in the in vitro cytotoxicity and in vivo implantation assay.",
"title": ""
},
{
"docid": "26e60be4012b20575f3ddee16f046daa",
"text": "Natural scene character recognition is challenging due to the cluttered background, which is hard to separate from text. In this paper, we propose a novel method for robust scene character recognition. Specifically, we first use robust principal component analysis (PCA) to denoise character image by recovering the missing low-rank component and filtering out the sparse noise term, and then use a simple Histogram of oriented Gradient (HOG) to perform image feature extraction, and finally, use a sparse representation based classifier for recognition. In experiments on four public datasets, namely the Char74K dataset, ICADAR 2003 robust reading dataset, Street View Text (SVT) dataset and IIIT5K-word dataset, our method was demonstrated to be competitive with the state-of-the-art methods.",
"title": ""
},
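A rough sketch of the recognition pipeline described above, omitting the robust-PCA denoising step: HOG features feed a sparse-representation classifier that codes a test descriptor over the training dictionary and picks the class with the smallest reconstruction residual. The patch size, HOG parameters, and the synthetic "characters" are assumptions for illustration only.

```python
import numpy as np
from skimage.feature import hog
from sklearn.linear_model import OrthogonalMatchingPursuit

def hog_feature(img):
    # 32x32 grayscale character patch -> HOG descriptor
    return hog(img, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))

def src_predict(x, dictionary, labels, n_nonzero=10):
    """Sparse-representation classification: code x over the training
    dictionary, then pick the class whose atoms reconstruct x best."""
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero)
    omp.fit(dictionary, x)
    coef = omp.coef_
    best_class, best_residual = None, np.inf
    for c in np.unique(labels):
        mask = labels == c
        residual = np.linalg.norm(x - dictionary[:, mask] @ coef[mask])
        if residual < best_residual:
            best_class, best_residual = c, residual
    return best_class

# Toy example with synthetic "character" patches (two classes of strokes).
rng = np.random.default_rng(0)
def make_patch(cls):
    img = rng.random((32, 32)) * 0.1
    if cls == 0:
        img[:, 14:18] = 1.0        # vertical stroke
    else:
        img[14:18, :] = 1.0        # horizontal stroke
    return img

train_X = np.stack([hog_feature(make_patch(c)) for c in [0, 1] * 20], axis=1)
train_y = np.array([0, 1] * 20)
test = hog_feature(make_patch(1))
print(src_predict(test, train_X, train_y))   # expected: 1
```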
{
"docid": "0257589dc59f1ddd4ec19a2450e3156f",
"text": "Drawing upon the literatures on beliefs about magical contagion and property transmission, we examined people's belief in a novel mechanism of human-to-human contagion, emotional residue. This is the lay belief that people's emotions leave traces in the physical environment, which can later influence others or be sensed by others. Studies 1-4 demonstrated that Indians are more likely than Americans to endorse a lay theory of emotions as substances that move in and out of the body, and to claim that they can sense emotional residue. However, when the belief in emotional residue is measured implicitly, both Indians and American believe to a similar extent that emotional residue influences the moods and behaviors of those who come into contact with it (Studies 5-7). Both Indians and Americans also believe that closer relationships and a larger number of people yield more detectable residue (Study 8). Finally, Study 9 demonstrated that beliefs about emotional residue can influence people's behaviors. Together, these finding suggest that emotional residue is likely to be an intuitive concept, one that people in different cultures acquire even without explicit instruction.",
"title": ""
},
{
"docid": "e073e8ae88b49ef2d3636a4f7f15076d",
"text": "Human evaluation for natural language generation (NLG) often suffers from inconsistent user ratings. While previous research tends to attribute this problem to individual user preferences, we show that the quality of human judgements can also be improved by experimental design. We present a novel rank-based magnitude estimation method (RankME), which combines the use of continuous scales and relative assessments. We show that RankME significantly improves the reliability and consistency of human ratings compared to traditional evaluation methods. In addition, we show that it is possible to evaluate NLG systems according to multiple, distinct criteria, which is important for error analysis. Finally, we demonstrate that RankME, in combination with Bayesian estimation of system quality, is a cost-effective alternative for ranking multiple NLG systems.",
"title": ""
},
{
"docid": "45cbfbe0a0bcf70910a6d6486fb858f0",
"text": "Grid cells in the entorhinal cortex of freely moving rats provide a strikingly periodic representation of self-location which is indicative of very specific computational mechanisms. However, the existence of grid cells in humans and their distribution throughout the brain are unknown. Here we show that the preferred firing directions of directionally modulated grid cells in rat entorhinal cortex are aligned with the grids, and that the spatial organization of grid-cell firing is more strongly apparent at faster than slower running speeds. Because the grids are also aligned with each other, we predicted a macroscopic signal visible to functional magnetic resonance imaging (fMRI) in humans. We then looked for this signal as participants explored a virtual reality environment, mimicking the rats’ foraging task: fMRI activation and adaptation showing a speed-modulated six-fold rotational symmetry in running direction. The signal was found in a network of entorhinal/subicular, posterior and medial parietal, lateral temporal and medial prefrontal areas. The effect was strongest in right entorhinal cortex, and the coherence of the directional signal across entorhinal cortex correlated with spatial memory performance. Our study illustrates the potential power of combining single-unit electrophysiology with fMRI in systems neuroscience. Our results provide evidence for grid-cell-like representations in humans, and implicate a specific type of neural representation in a network of regions which supports spatial cognition and also autobiographical memory.",
"title": ""
},
{
"docid": "3963e1a10366748bf4e52d34cc15cc0f",
"text": "Surface electromyography (sEMG) is widely used in clinical diagnosis, rehabilitation engineering and humancomputer interaction and other fields. In this paper, we use Myo armband to collect sEMG signals. Myo armband can be worn above any elbow of any arm and it can capture the bioelectric signal generated when the arm muscles move. MYO can pass of signals through its low-power Blue-tooth, and its interference is small, which makes the signal quality really good. By collecting the sEMG signals of the upper limb forearm, we extract five eigenvalues in the time domain, and use the BP neural network classification algorithm to realize the recognition of six gestures in this paper. Experimental results show that the use of MYO for gesture recognition can get a very good recognition results, it can accurately identify the six hand movements with the average recognition rate of 93%.",
"title": ""
},
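A sketch of the kind of pipeline the abstract describes: a handful of time-domain features per window feeding a small feed-forward (BP-style) network. The five features chosen here (MAV, RMS, waveform length, zero crossings, slope sign changes), the window size, and the synthetic signals are assumptions; the passage does not list which five values were used.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def time_domain_features(window):
    """Five common time-domain sEMG features for one channel window."""
    w = np.asarray(window, dtype=float)
    diff = np.diff(w)
    return np.array([
        np.mean(np.abs(w)),                                   # mean absolute value
        np.sqrt(np.mean(w ** 2)),                             # root mean square
        np.sum(np.abs(diff)),                                 # waveform length
        np.sum(np.diff(np.signbit(w).astype(int)) != 0),      # zero crossings
        np.sum(np.diff(np.signbit(diff).astype(int)) != 0),   # slope sign changes
    ])

# Toy data: 6 gesture classes, 40 windows each, 200 samples per window.
rng = np.random.default_rng(0)
X, y = [], []
for gesture in range(6):
    for _ in range(40):
        window = rng.normal(scale=0.2 + 0.1 * gesture, size=200)
        X.append(time_domain_features(window))
        y.append(gesture)

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
clf.fit(np.array(X), np.array(y))
print(clf.score(np.array(X), np.array(y)))   # training accuracy on the toy data
```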
{
"docid": "3fd6d0ef0240b2fdd2a9c76a023ecab6",
"text": "In this work, an exponential spline method is developed and a nalyzed for approximating solutions of calculus of variati ons problems. The method uses a spline interpolant, which is con structed from exponential spline. It is proved to be secondrder convergent. Finally some illustrative examples are includ ed to demonstrate the applicability of the new technique. Nu merical results confirm the order of convergence predicted by the analysis.",
"title": ""
},
{
"docid": "490114176c31592da4cac2bcf75f31f3",
"text": "In this letter, we present a compact ultrawideband (UWB) antenna printed on a 50.8-μm Kapton polyimide substrate. The antenna is fed by a linearly tapered coplanar waveguide (CPW) that provides smooth transitional impedance for improved matching. The proposed design is tuned to cover the 2.2-14.3-GHz frequency range that encompasses both the 2.45-GHz Industrial, Scientific, Medical (ISM) band and the standard 3.1-10.6-GHz UWB band. Furthermore, the antenna is compared to a conventional CPW-fed antenna to demonstrate the significance of the proposed design. A parametric study is first performed on the feed of the proposed design to achieve the desired impedance matching. Next, a prototype is fabricated; measurement results show good agreement with the simulated model. Moreover, the antenna demonstrates a very low susceptibility to performance degradation due to bending effects in terms of impedance matching and far-field radiation patterns, which makes it suitable for integration within modern flexible electronic devices.",
"title": ""
},
{
"docid": "8de4182b607888e6c7cbe6d6ae8ee122",
"text": "In this article, we focus on isolated gesture recognition and explore different modalities by involving RGB stream, depth stream, and saliency stream for inspection. Our goal is to push the boundary of this realm even further by proposing a unified framework that exploits the advantages of multi-modality fusion. Specifically, a spatial-temporal network architecture based on consensus-voting has been proposed to explicitly model the long-term structure of the video sequence and to reduce estimation variance when confronted with comprehensive inter-class variations. In addition, a three-dimensional depth-saliency convolutional network is aggregated in parallel to capture subtle motion characteristics. Extensive experiments are done to analyze the performance of each component and our proposed approach achieves the best results on two public benchmarks, ChaLearn IsoGD and RGBD-HuDaAct, outperforming the closest competitor by a margin of over 10% and 15%, respectively. Our project and codes will be released at https://davidsonic.github.io/index/acm_tomm_2017.html.",
"title": ""
},
{
"docid": "f073981b6c7893dd904fb04707f5ebeb",
"text": "Plant growth-promoting rhizobacteria (PGPR) are the rhizosphere bacteria that can enhance plant growth by a wide variety of mechanisms like phosphate solubilization, siderophore production, biological nitrogen fixation, rhizosphere engineering, production of 1-Aminocyclopropane-1-carboxylate deaminase (ACC), quorum sensing (QS) signal interference and inhibition of biofilm formation, phytohormone production, exhibiting antifungal activity, production of volatile organic compounds (VOCs), induction of systemic resistance, promoting beneficial plant-microbe symbioses, interference with pathogen toxin production etc. The potentiality of PGPR in agriculture is steadily increased as it offers an attractive way to replace the use of chemical fertilizers, pesticides and other supplements. Growth promoting substances are likely to be produced in large quantities by these rhizosphere microorganisms that influence indirectly on the overall morphology of the plants. Recent progress in our understanding on the diversity of PGPR in the rhizosphere along with their colonization ability and mechanism of action should facilitate their application as a reliable component in the management of sustainable agricultural system. The progress to date in using the rhizosphere bacteria in a variety of applications related to agricultural improvement along with their mechanism of action with special reference to plant growth-promoting traits are summarized and discussed in this review.",
"title": ""
},
{
"docid": "39e9fe27f70f54424df1feec453afde3",
"text": "Ontology is a sub-field of Philosophy. It is the study of the nature of existence and a branch of metaphysics concerned with identifying the kinds of things that actually exists and how to describe them. It describes formally a domain of discourse. Ontology is used to capture knowledge about some domain of interest and to describe the concepts in the domain and also to express the relationships that hold between those concepts. Ontology consists of finite list of terms (or important concepts) and the relationships among the terms (or Classes of Objects). Relationships typically include hierarchies of classes. It is an explicit formal specification of conceptualization and the science of describing the kind of entities in the world and how they are related (W3C). Web Ontology Language (OWL) is a language for defining and instantiating web ontologies (a W3C Recommendation). OWL ontology includes description of classes, properties and their instances. OWL is used to explicitly represent the meaning of terms in vocabularies and the relationships between those terms. Such representation of terms and their interrelationships is called ontology. OWL has facilities for expressing meaning and semantics and the ability to represent machine interpretable content on the Web. OWL is designed for use by applications that need to process the content of information instead of just presenting information to humans. This is used for knowledge representation and also is useful to derive logical consequences from OWL formal semantics.",
"title": ""
},
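To make the notions of classes, properties, hierarchies, and instances above concrete, here is a small sketch that builds a toy OWL ontology with the rdflib Python library. The example.org vocabulary and the publication domain are invented for illustration.

```python
from rdflib import Graph, Namespace, RDF, RDFS, OWL, Literal

# Hypothetical vocabulary: a tiny ontology about publications.
EX = Namespace("http://example.org/ontology#")
g = Graph()
g.bind("ex", EX)

g.add((EX.Document, RDF.type, OWL.Class))
g.add((EX.Article, RDF.type, OWL.Class))
g.add((EX.Article, RDFS.subClassOf, EX.Document))    # class hierarchy

g.add((EX.Person, RDF.type, OWL.Class))
g.add((EX.hasAuthor, RDF.type, OWL.ObjectProperty))  # relationship between concepts
g.add((EX.hasAuthor, RDFS.domain, EX.Document))
g.add((EX.hasAuthor, RDFS.range, EX.Person))

g.add((EX.paper1, RDF.type, EX.Article))             # an instance of a class
g.add((EX.paper1, RDFS.label, Literal("An example article")))

print(g.serialize(format="turtle"))
```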
{
"docid": "fae3b6d1415e5f1d95aa2126c14e7a09",
"text": "This paper presents an active RF phase shifter with 10 bit control word targeted toward the upcoming 5G wireless systems. The circuit is designed and fabricated using 45 nm CMOS SOI technology. An IQ vector modulator (IQVM) topology is used which provides both amplitude and phase control. The design is programmable with exhaustive digital controls available for parameters like bias voltage, resonance frequency, and gain. The frequency of operation is tunable from 12.5 GHz to 15.7 GHz. The mean angular separation between phase points is 1.5 degree at optimum amplitude levels. The rms phase error over the operating band is as low as 0.8 degree. Active area occupied is 0.18 square millimeter. The total DC power consumed from 1 V supply is 75 mW.",
"title": ""
},
{
"docid": "e34873c21f9c0dd0705e0496886137df",
"text": "This paper examines two principal categories of manipulative behaviour. The term ‘macro-manipulation’ is used to describe the lobbying of regulators to persuade them to produce regulation that is more favourable to the interests of preparers. ‘Micromanipulation’ describes the management of accounting figures to produce a biased view at the entity level. Both categories of manipulation can be viewed as attempts at creativity by financial statement preparers. The paper analyses two cases of manipulation which are considered in an ethical context. The paper concludes that the manipulations described in it can be regarded as morally reprehensible. They are not fair to users, they involve an unjust exercise of power, and they tend to weaken the authority of accounting regulators.",
"title": ""
},
{
"docid": "c07c69bf5e2fce6f9944838ce80b5b8c",
"text": "Many image editing applications rely on the analysis of image patches. In this paper, we present a method to analyze patches by embedding them to a vector space, in which the Euclidean distance reflects patch similarity. Inspired by Word2Vec, we term our approach Patch2Vec. However, there is a significant difference between words and patches. Words have a fairly small and well defined dictionary. Image patches, on the other hand, have no such dictionary and the number of different patch types is not well defined. The problem is aggravated by the fact that each patch might contain several objects and textures. Moreover, Patch2Vec should be universal because it must be able to map never-seen-before texture to the vector space. The mapping is learned by analyzing the distribution of all natural patches. We use Convolutional Neural Networks (CNN) to learn Patch2Vec. In particular, we train a CNN on labeled images with a triplet-loss objective function. The trained network encodes a given patch to a 128D vector. Patch2Vec is evaluated visually, qualitatively, and quantitatively. We then use several variants of an interactive single-click image segmentation algorithm to demonstrate the power of our method.",
"title": ""
},
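The core training signal mentioned above, a triplet objective over patch embeddings, can be written down compactly. The margin, the 128-D dimensionality, and the synthetic embeddings below are illustrative; the actual system computes the embeddings with a CNN trained on labeled images.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Triplet objective for an embedding like the 128-D patch descriptor:
    pull the anchor toward a patch of similar content and push it away from
    a dissimilar one by at least `margin` (an illustrative value)."""
    d_pos = np.sum((anchor - positive) ** 2, axis=-1)
    d_neg = np.sum((anchor - negative) ** 2, axis=-1)
    return np.maximum(0.0, d_pos - d_neg + margin).mean()

rng = np.random.default_rng(0)
a = rng.normal(size=(8, 128)); a /= np.linalg.norm(a, axis=1, keepdims=True)
p = a + 0.05 * rng.normal(size=a.shape)      # similar patches -> nearby embeddings
n = rng.normal(size=a.shape); n /= np.linalg.norm(n, axis=1, keepdims=True)
print(triplet_loss(a, p, n))                 # small loss when triplets are well separated
```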
{
"docid": "df5ce1a194802b0f6dac28d1a05bb08e",
"text": "This paper presents a 77-GHz CMOS frequency-modulated continuous-wave (FMCW) frequency synthesizer with the capability of reconfigurable chirps. The frequency-sweep range and sweep time of the chirp signals can be reconfigured every cycle such that the frequency-hopping random chirp signal can be realized for an FMCW radar transceiver. The frequency synthesizer adopts the fractional-N phase-locked-loop technique and is fully integrated in TSMC 65-nm digital CMOS technology. The silicon area of the synthesizer is 0.65 mm × 0.45 mm and it consumes 51.3 mW of power. The measured output phase noise of the synthesizer is -85.1 dBc/Hz at 1-MHz offset and the root-mean-square modulation frequency error is smaller than 73 kHz.",
"title": ""
},
{
"docid": "6724f1e8a34a6d9f64a30061ce7f67c0",
"text": "Mental contrasting with implementation intentions (MCII) has been found to improve selfregulation across many life domains. The present research investigates whether MCII can benefit time management. In Study 1, we asked students to apply MCII to a pressing academic problem and assessed how they scheduled their time for the upcoming week. MCII participants scheduled more time than control participants who in their thoughts either reflected on similar contents using different cognitive procedures (content control group) or applied the same cognitive procedures on different contents (format control group). In Study 2, students were taught MCII as a metacognitive strategy to be used on any upcoming concerns of the subsequent week. As compared to the week prior to the training, students in the MCII (vs. format control) condition improved in self-reported time management. In Study 3, MCII (vs. format control) helped working mothers who enrolled in a vocational business program to attend classes more regularly. The findings suggest that performing MCII on one’s everyday concerns improves time management.",
"title": ""
},
{
"docid": "85fc78cc3f71b784063b8b564e6509a9",
"text": "Numerous research papers have listed different vectors of personally identifiable information leaking via tradition al and mobile Online Social Networks (OSNs) and highlighted the ongoing aggregation of data about users visiting popular We b sites. We argue that the landscape is worsening and existing proposals (including the recent U.S. Federal Trade Commission’s report) do not address several key issues. We examined over 100 popular non-OSN Web sites across a number of categories where tens of millions of users representing d iverse demographics have accounts, to see if these sites leak private information to prominent aggregators. Our results raise considerable concerns: we see leakage in sites for every category we examined; fully 56% of the sites directly leak pieces of private information with this result growing to 75% if we also include leakage of a site userid. Sensitive search strings sent to healthcare Web sites and travel itineraries on flight reservation sites are leaked in 9 of the top 10 sites studied for each category. The community needs a clear understanding of the shortcomings of existing privac y protection measures and the new proposals. The growing disconnect between the protection measures and increasing leakage and linkage suggests that we need to move beyond the losing battle with aggregators and examine what roles first-party sites can play in protecting privacy of their use rs.",
"title": ""
},
{
"docid": "43269c32b765b0f5d5d0772e0b1c5906",
"text": "Silver nanoparticles (AgNPs) have been synthesized by Lantana camara leaf extract through simple green route and evaluated their antibacterial and catalytic activities. The leaf extract (LE) itself acts as both reducing and stabilizing agent at once for desired nanoparticle synthesis. The colorless reaction mixture turns to yellowish brown attesting the AgNPs formation and displayed UV-Vis absorption spectra. Structural analysis confirms the crystalline nature and formation of fcc structured metallic silver with majority (111) facets. Morphological studies elicit the formation of almost spherical shaped nanoparticles and as AgNO3 concentration is increased, there is an increment in the particle size. The FTIR analysis evidences the presence of various functional groups of biomolecules of LE is responsible for stabilization of AgNPs. Zeta potential measurement attests the higher stability of synthesized AgNPs. The synthesized AgNPs exhibited good antibacterial activity when tested against Escherichia coli, Pseudomonas spp., Bacillus spp. and Staphylococcus spp. using standard Kirby-Bauer disc diffusion assay. Furthermore, they showed good catalytic activity on the reduction of methylene blue by L. camara extract which is monitored and confirmed by the UV-Vis spectrophotometer.",
"title": ""
},
{
"docid": "b9cdda89b24a8595481933e268319e18",
"text": "Wireless hotspots allow users to use Internet via Wi-Fi interface, and many shops, cafés, parks, and airports provide free wireless hotspot services to attract customers. However, there is no authentication mechanism of Wi-Fi access points (APs) available in such hotspots, which makes them vulnerable to evil twin AP attacks. Such attacks are harmful because they allow to steal sensitive data from users. Today, there is no client-side mechanism that can effectively detect an evil twin AP attack without additional infrastructure supports. In this paper, we propose a mechanism CETAD leveraging public servers to detect such attacks. CETAD only requires installing an app at the client device and does not require to change the hotspot APs. CETAD explores the similarities between the legitimate APs and discrepancies between evil twin APs, and legitimate ones to detect an evil twin AP attack. Through our implementation and evaluation, we show that CETAD can detect evil twin AP attacks in various scenarios effectively.",
"title": ""
},
{
"docid": "c26346ffb5d1ea7cc18514a92f3105b8",
"text": "Ontologies are one of the core foundations of the Semantic Web. To participate in Semantic Web projects, domain experts need to be able to understand the ontologies involved. Visual notations can provide an overview of the ontology and help users to understand the connections among entities. However, the users first need to learn the visual notation before they can interpret it correctly. Controlled natural language representation would be readable right away and might be preferred in case of complex axioms, however, the structure of the ontology would remain less apparent. We propose to combine ontology visualizations with contextual ontology verbalizations of selected ontology (diagram) elements, displaying controlled natural language (CNL) explanations of OWL axioms corresponding to the selected visual notation elements. Thus, the domain experts will benefit from both the high-level overview provided by the graphical notation and the detailed textual explanations of particular elements in the diagram.",
"title": ""
}
] |
scidocsrr
|
ca3c7d5557af8b229bdc140df64ad055
|
Enrichr: a comprehensive gene set enrichment analysis web server 2016 update
|
[
{
"docid": "246f56b1b5aa4f095c6dd281a670210f",
"text": "The Allen Brain Atlas (http://www.brain-map.org) provides a unique online public resource integrating extensive gene expression data, connectivity data and neuroanatomical information with powerful search and viewing tools for the adult and developing brain in mouse, human and non-human primate. Here, we review the resources available at the Allen Brain Atlas, describing each product and data type [such as in situ hybridization (ISH) and supporting histology, microarray, RNA sequencing, reference atlases, projection mapping and magnetic resonance imaging]. In addition, standardized and unique features in the web applications are described that enable users to search and mine the various data sets. Features include both simple and sophisticated methods for gene searches, colorimetric and fluorescent ISH image viewers, graphical displays of ISH, microarray and RNA sequencing data, Brain Explorer software for 3D navigation of anatomy and gene expression, and an interactive reference atlas viewer. In addition, cross data set searches enable users to query multiple Allen Brain Atlas data sets simultaneously. All of the Allen Brain Atlas resources can be accessed through the Allen Brain Atlas data portal.",
"title": ""
}
] |
[
{
"docid": "1eafc02a19766817536f3da89230b4cf",
"text": "Basically, Bayesian Belief Networks (BBNs) as probabilistic tools provide suitable facilities for modelling process under uncertainty. A BBN applies a Directed Acyclic Graph (DAG) for encoding relations between all variables in state of problem. Finding the beststructure (structure learning) ofthe DAG is a classic NP-Hard problem in BBNs. In recent years, several algorithms are proposed for this task such as Hill Climbing, Greedy Thick Thinning and K2 search. In this paper, we introduced Simulated Annealing algorithm with complete details as new method for BBNs structure learning. Finally, proposed algorithm compared with other structure learning algorithms based on classification accuracy and construction time on valuable databases. Experimental results of research show that the simulated annealing algorithmis the bestalgorithmfrom the point ofconstructiontime but needs to more attention for classification process.",
"title": ""
},
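A minimal sketch of score-based structure search by simulated annealing over DAGs, in the spirit of the passage: the network score (e.g. BIC or K2 computed from data) is assumed to be supplied by the caller, and the move set, cooling schedule, and toy usage are illustrative choices rather than the paper's exact procedure.

```python
import math
import random

def is_dag(nodes, edges):
    """Check acyclicity with a depth-first search over the directed edges."""
    children = {n: [c for p, c in edges if p == n] for n in nodes}
    state = {n: 0 for n in nodes}              # 0 unvisited, 1 on stack, 2 done
    def visit(n):
        if state[n] == 1:
            return False
        if state[n] == 2:
            return True
        state[n] = 1
        ok = all(visit(c) for c in children[n])
        state[n] = 2
        return ok
    return all(visit(n) for n in nodes)

def anneal_structure(nodes, score, steps=5000, t0=1.0, cooling=0.999):
    """Search for a high-scoring DAG: moves add, delete, or reverse one edge,
    keeping the graph acyclic; worse moves are accepted with Boltzmann probability."""
    edges, best = set(), set()
    current = best_score = score(edges)
    temp = t0
    for _ in range(steps):
        a, b = random.sample(nodes, 2)
        candidate = set(edges)
        if (a, b) in candidate:
            candidate.discard((a, b))
            if random.random() < 0.5:
                candidate.add((b, a))          # reversal move
        else:
            candidate.add((a, b))              # addition move
        if not is_dag(nodes, candidate):
            continue
        new = score(candidate)
        if new > current or random.random() < math.exp((new - current) / temp):
            edges, current = candidate, new
            if current > best_score:
                best, best_score = set(edges), current
        temp *= cooling
    return best, best_score

# Toy usage with a stand-in score that rewards recovering a target structure:
target = {("A", "B"), ("B", "C")}
print(anneal_structure(["A", "B", "C", "D"], lambda e: -len(e ^ target)))
```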
{
"docid": "30dcd32d4b94fa5b269b711b3b0e49cd",
"text": "The basic idea of process mining is to extract knowledge from event logs recorded by an information system. Until recently, the information in these event logs was rarely used to analyze the underlying processes. Process mining aims at improving this by providing techniques and tools for discovering process, organizational, social, and performance information from event logs. Fuelled by the omnipresence of event logs in transactional information systems (cf. WFM, ERP, CRM, SCM, and B2B systems), process mining has become a vivid research area [1, 2]. In this paper we introduce the challenging process mining domain and discuss a heuristics driven process mining algorithm; the so-called “HeuristicsMiner” in detail. HeuristicsMiner is a practical applicable mining algorithm that can deal with noise, and can be used to express the main behavior (i.e. not all details and exceptions) registered in an event log. In the experimental section of this paper we introduce benchmark material (12.000 different event logs) and measurements by which the performance of process mining algorithms can be measured.",
"title": ""
},
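The heart of the HeuristicsMiner approach referenced above is a frequency-based dependency measure between activities; a small sketch of that computation over a toy event log follows. The log itself is invented, and the full algorithm adds thresholds and handling of loops and long-distance dependencies.

```python
from collections import Counter

def dependency_measures(traces):
    """HeuristicsMiner-style dependency measure a => b from an event log,
    where |a>b| counts how often b directly follows a:

        dep(a, b) = (|a>b| - |b>a|) / (|a>b| + |b>a| + 1)

    Values close to 1 suggest a reliable causal dependency even with noise.
    """
    follows = Counter()
    for trace in traces:
        for a, b in zip(trace, trace[1:]):
            follows[(a, b)] += 1
    deps = {}
    for (a, b), ab in follows.items():
        ba = follows.get((b, a), 0)
        deps[(a, b)] = (ab - ba) / (ab + ba + 1)
    return deps

log = [list("ABCD"), list("ACBD"), list("ABCD"), list("ADCB")]  # toy event log
for pair, value in sorted(dependency_measures(log).items()):
    print(pair, round(value, 2))
```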
{
"docid": "045a56e333b1fe78677b8f4cc4c20ecc",
"text": "Swarm robotics is an approach to collective robotics that takes inspiration from the self-organized behaviors of social animals. Through simple rules and local interactions, swarm robotics aims at designing robust, scalable, and flexible collective behaviors for the coordination of large numbers of robots. In this paper, we analyze the literature from the point of view of swarm engineering: we focus mainly on ideas and concepts that contribute to the advancement of swarm robotics as an engineering field and that could be relevant to tackle real-world applications. Swarm engineering is an emerging discipline that aims at defining systematic and well founded procedures for modeling, designing, realizing, verifying, validating, operating, and maintaining a swarm robotics system. We propose two taxonomies: in the first taxonomy, we classify works that deal with design and analysis methods; in the second taxonomy, we classify works according to the collective behavior studied. We conclude with a discussion of the current limits of swarm robotics as an engineering discipline and with suggestions for future research directions.",
"title": ""
},
{
"docid": "dbe8e36bd7d1323ab4da0e1a3213f62e",
"text": "Problem: Parallels have been drawn between the rise of the internet in 1990s and the present rise of bitcoin (cryptocurrency) and underlying blockchain technology. This resulted in a widespread of media coverage due to extreme price fluctuations and increased supply and demand. Garcia et al. (2014) argues that this is driven by several social aspects including word-of-mouth communication on social media, indicating that this aspect of social media effects individual attitude formation and intention towards cryptocurrency. However, this combination of social media of antecedent of consumer acceptance is limited explored, especially in the context of technology acceptance. Purpose: The purpose of this thesis is to create further understanding in the Technology Acceptance Model with the additional construct: social influence, first suggested by Malhotra et al. (1999). Hereby, the additional construct of social media influence was added to advance the indirect effects of social media influence on attitude formation and behavioural intention towards cryptocurrency, through the processes of social influence (internalization; identification; compliance) by Kelman. Method: This study carries out a quantitative study where survey-research was used that included a total sample of 250 cases. This sample consists of individuals between 18-37 years old, where social media usage is part of the life. As a result of the data collection, analysis was conducted using multiple regression techniques. Conclusion: Analysis of the findings established theoretical validation of the appliance of the Technology Acceptance Model on digital innovation, like cryptocurrency. By adding the construct of social media, further understanding is created in the behaviour of millennials towards cryptocurrency. The evidence suggests that there are clear indirect effects of social media on attitude formation and intention towards engaging in cryptocurrency through the processes of social influence. This study should be seen as preliminary, where future research could be built upon. More specifically, in terms of consumer acceptance of cryptocurrency and the extent of influence by social media.",
"title": ""
},
{
"docid": "71759cdcf18dabecf1d002727eb9d8b8",
"text": "A commonly observed neural correlate of working memory is firing that persists after the triggering stimulus disappears. Substantial effort has been devoted to understanding the many potential mechanisms that may underlie memory-associated persistent activity. These rely either on the intrinsic properties of individual neurons or on the connectivity within neural circuits to maintain the persistent activity. Nevertheless, it remains unclear which mechanisms are at play in the many brain areas involved in working memory. Herein, we first summarize the palette of different mechanisms that can generate persistent activity. We then discuss recent work that asks which mechanisms underlie persistent activity in different brain areas. Finally, we discuss future studies that might tackle this question further. Our goal is to bridge between the communities of researchers who study either single-neuron biophysical, or neural circuit, mechanisms that can generate the persistent activity that underlies working memory.",
"title": ""
},
{
"docid": "53a55e8aa8b3108cdc8d015eabb3476d",
"text": "We investigate a family of poisoning attacks against Support Vector Machines (SVM). Such attacks inject specially crafted training data that increases the SVM’s test error. Central to the motivation for these attacks is the fact that most learning algorithms assume that their training data comes from a natural or well-behaved distribution. However, this assumption does not generally hold in security-sensitive settings. As we demonstrate, an intelligent adversary can, to some extent, predict the change of the SVM’s decision function due to malicious input and use this ability to construct malicious data. The proposed attack uses a gradient ascent strategy in which the gradient is computed based on properties of the SVM’s optimal solution. This method can be kernelized and enables the attack to be constructed in the input space even for non-linear kernels. We experimentally demonstrate that our gradient ascent procedure reliably identifies good local maxima of the non-convex validation error surface, which significantly increases the classifier’s test error.",
"title": ""
},
{
"docid": "124f40ccd178e6284cc66b88da98709d",
"text": "The tripeptide glutathione is the thiol compound present in the highest concentration in cells of all organs. Glutathione has many physiological functions including its involvement in the defense against reactive oxygen species. The cells of the human brain consume about 20% of the oxygen utilized by the body but constitute only 2% of the body weight. Consequently, reactive oxygen species which are continuously generated during oxidative metabolism will be generated in high rates within the brain. Therefore, the detoxification of reactive oxygen species is an essential task within the brain and the involvement of the antioxidant glutathione in such processes is very important. The main focus of this review article will be recent results on glutathione metabolism of different brain cell types in culture. The glutathione content of brain cells depends strongly on the availability of precursors for glutathione. Different types of brain cells prefer different extracellular glutathione precursors. Glutathione is involved in the disposal of peroxides by brain cells and in the protection against reactive oxygen species. In coculture astroglial cells protect other neural cell types against the toxicity of various compounds. One mechanism for this interaction is the supply by astroglial cells of glutathione precursors to neighboring cells. Recent results confirm the prominent role of astrocytes in glutathione metabolism and the defense against reactive oxygen species in brain. These results also suggest an involvement of a compromised astroglial glutathione system in the oxidative stress reported for neurological disorders.",
"title": ""
},
{
"docid": "9b3eabdc0101e067ad4dda88086fd68d",
"text": "Missing maxillary lateral incisors create an esthetic problem with specific orthodontic and prosthetic considerations. The aim of the present study is to evaluate the clinical success of the transmucosal flapless implant placement and immediate loading of the implants to restore the agenic lateral incisors after completing the orthodontic treatment and during the retention period.",
"title": ""
},
{
"docid": "4ac12c76112ff2085c4701130448f5d5",
"text": "A key point in the deployment of new wireless services is the cost-effective extension and enhancement of the network's radio coverage in indoor environments. Distributed Antenna Systems using Fiber-optics distribution (F-DAS) represent a suitable method of extending multiple-operator radio coverage into indoor premises, tunnels, etc. Another key point is the adoption of MIMO (Multiple Input — Multiple Output) transmission techniques which can exploit the multipath nature of the radio link to ensure reliable, high-speed wireless communication in hostile environments. In this paper novel indoor deployment solutions based on Radio over Fiber (RoF) and distributed-antenna MIMO techniques are presented and discussed, highlighting their potential in different cases.",
"title": ""
},
{
"docid": "348a5c33bde53e7f9a1593404c6589b4",
"text": "Few prior works study deep learning on point sets. PointNet [20] is a pioneer in this direction. However, by design PointNet does not capture local structures induced by the metric space points live in, limiting its ability to recognize fine-grained patterns and generalizability to complex scenes. In this work, we introduce a hierarchical neural network that applies PointNet recursively on a nested partitioning of the input point set. By exploiting metric space distances, our network is able to learn local features with increasing contextual scales. With further observation that point sets are usually sampled with varying densities, which results in greatly decreased performance for networks trained on uniform densities, we propose novel set learning layers to adaptively combine features from multiple scales. Experiments show that our network called PointNet++ is able to learn deep point set features efficiently and robustly. In particular, results significantly better than state-of-the-art have been obtained on challenging benchmarks of 3D point clouds.",
"title": ""
},
{
"docid": "72928de176eb35b0cfdc1fd78cca9994",
"text": "INTRODUCTION\nCongenital epulis, known as a congenital gingival granular cell tumor, is a benign tumor and very rare in newborns. Voluminous or multiple tumors can cause mechanical obstruction of the oral cavity and may result in postnatal feeding and respiratory problems.\n\n\nDISCUSSION\nWe report the clinical case of a female full-term newborn who presented a tumor on the upper gum obtruding into the oral cavity discovered at birth. The pregnancy was followed normally with three prenatal ultrasounds, which did not show abnormalities. The mass was excised under local anesthesia on the second day of life. The outcome was good after surgery and regular feedings were started on the second postoperative day. Histological examination confirmed the diagnosis of gingival tumor with granular cells and absence of signs of malignancy.\n\n\nCONCLUSION\nPrenatal diagnosis is fundamental in the therapeutic approach to this rare lesion but remains difficult because the findings are non specific and the generally late development of the tumor.",
"title": ""
},
{
"docid": "eb31d3d6264e3a6aba0753b5ba14f572",
"text": "Using aggregate product search data from Amazon.com, we jointly estimate consumer information search and online demand for consumer durable goods. To estimate the demand and search primitives, we introduce an optimal sequential search process into a model of choice and treat the observed marketlevel product search data as aggregations of individual-level optimal search sequences. The model builds on the dynamic programming framework by Weitzman (1979) and combines it with a choice model. It can accommodate highly complex demand patterns at the market level. At the individual level, the model has a number of attractive properties in estimation, including closed-form expressions for the probability distribution of alternative sets of searched goods and breaking the curse of dimensionality. Using numerical experiments, we verify the model's ability to identify the heterogeneous consumer tastes and search costs from product search data. Empirically, the model is applied to the online market for camcorders and is used to answer manufacturer questions about market structure and competition, and to address policy maker issues about the e ect of selectively lowered search costs on consumer surplus outcomes. We nd that consumer search for camcorders at Amazon.com is typically limited to little over 10 choice options, and that this a ects the estimates of own and cross elasticities. In a policy simulation, we also nd that the vast majority of the households bene t from the Amazon.com's product recommendations via lower search costs.",
"title": ""
},
{
"docid": "9d30cfbc7d254882e92cad01f5bd17c7",
"text": "Data from culture studies have revealed that Enterococcus faecalis is occasionally isolated from primary endodontic infections but frequently recovered from treatment failures. This molecular study was undertaken to investigate the prevalence of E. faecalis in endodontic infections and to determine whether this species is associated with particular forms of periradicular diseases. Samples were taken from cases of untreated teeth with asymptomatic chronic periradicular lesions, acute apical periodontitis, or acute periradicular abscesses, and from root-filled teeth associated with asymptomatic chronic periradicular lesions. DNA was extracted from the samples, and a 16S rDNA-based nested polymerase chain reaction assay was used to identify E. faecalis. This species occurred in seven of 21 root canals associated with asymptomatic chronic periradicular lesions, in one of 10 root canals associated with acute apical periodontitis, and in one of 19 pus samples aspirated from acute periradicular abscesses. Statistical analysis showed that E. faecalis was significantly more associated with asymptomatic cases than with symptomatic ones. E. faecalis was detected in 20 of 30 cases of persistent endodontic infections associated with root-filled teeth. When comparing the frequencies of this species in 30 cases of persistent infections with 50 cases of primary infections, statistical analysis demonstrated that E. faecalis was strongly associated with persistent infections. The average odds of detecting E. faecalis in cases of persistent infections associated with treatment failure were 9.1. The results of this study indicated that E. faecalis is significantly more associated with asymptomatic cases of primary endodontic infections than with symptomatic ones. Furthermore, E. faecalis was much more likely to be found in cases of failed endodontic therapy than in primary infections.",
"title": ""
},
{
"docid": "bd4316193b5cfa465dd2a5bdca990a86",
"text": "Electroporation is a fascinating cell membrane phenomenon with several existing biological applications and others likely. Although DNA introduction is the most common use, electroporation of isolated cells has also been used for: (1) introduction of enzymes, antibodies, and other biochemical reagents for intracellular assays; (2) selective biochemical loading of one size cell in the presence of many smaller cells; (3) introduction of virus and other particles; (4) cell killing under nontoxic conditions; and (5) insertion of membrane macromolecules into the cell membrane. More recently, tissue electroporation has begun to be explored, with potential applications including: (1) enhanced cancer tumor chemotherapy, (2) gene therapy, (3) transdermal drug delivery, and (4) noninvasive sampling for biochemical measurement. As presently understood, electroporation is an essentially universal membrane phenomenon that occurs in cell and artificial planar bilayer membranes. For short pulses (microsecond to ms), electroporation occurs if the transmembrane voltage, U(t), reaches 0.5-1.5 V. In the case of isolated cells, the pulse magnitude is 10(3)-10(4) V/cm. These pulses cause reversible electrical breakdown (REB), accompanied by a tremendous increase molecular transport across the membrane. REB results in a rapid membrane discharge, with the elevated U(t) returning to low values within a few microseconds of the pulse. However, membrane recovery can be orders of magnitude slower. An associated cell stress commonly occurs, probably because of chemical influxes and effluxes leading to chemical imbalances, which also contribute to eventual survival or death. Basic phenomena, present understanding of mechanism, and the existing and potential applications are briefly reviewed.",
"title": ""
},
{
"docid": "f483546c90f058aae692409449fbbabf",
"text": "In this paper, we analyze the performance limits of the slotted CSMA/CA mechanism of IEEE 802.15.4 in the beacon-enabled mode for broadcast transmissions in WSNs. The motivation for evaluating the beaconenabled mode is due to its flexibility for WSN applications as compared to the non-beacon enabled mode. Our analysis is based on an accurate simulation model of the slotted CSMA/CA mechanism on top of a realistic physical layer, with respect to the IEEE 802.15.4 standard specification. The performance of the slotted CSMA/CA is evaluated and analyzed for different network settings to understand the impact of the protocol attributes (superframe order, beacon order and backoff exponent) on the network performance, namely in terms of throughput (S), average delay (D) and probability of success (Ps). We introduce the concept of utility (U) as a combination of two or more metrics, to determine the best offered load range for an optimal behavior of the network. We show that the optimal network performance using slotted CSMA/CA occurs in the range of 35% to 60% with respect to an utility function proportional to the network throughput (S) divided by the average delay (D).",
"title": ""
},
{
"docid": "57502ae793808fded7d446a3bb82ca74",
"text": "Over the last decade, the “digitization” of the electron enterprise has grown at exponential rates. Utility, industrial, commercial, and even residential consumers are transforming all aspects of their lives into the digital domain. Moving forward, it is expected that every piece of equipment, every receptacle, every switch, and even every light bulb will possess some type of setting, monitoring and/or control. In order to be able to manage the large number of devices and to enable the various devices to communicate with one another, a new communication model was needed. That model has been developed and standardized as IEC61850 – Communication Networks and Systems in Substations. This paper looks at the needs of next generation communication systems and provides an overview of the IEC61850 protocol and how it meets these needs. I. Communication System Needs Communication has always played a critical role in the real-time operation of the power system. In the beginning, the telephone was used to communicate line loadings back to the control center as well as to dispatch operators to perform switching operations at substations. Telephoneswitching based remote control units were available as early as the 1930’s and were able to provide status and control for a few points. As digital communications became a viable option in the 1960’s, data acquisition systems (DAS) were installed to automatically collect measurement data from the substations. Since bandwidth was limited, DAS communication protocols were optimized to operate over low-bandwidth communication channels. The “cost” of this optimization was the time it took to configure, map, and document the location of the various data bits received by the protocol. As we move into the digital age, literally thousands of analog and digital data points are available in a single Intelligent Electronic Device (IED) and communication bandwidth is no longer a limiting factor. Substation to master communication data paths operating at 64,000 bits per second are becoming commonplace with an obvious migration path to much high rates. With this migration in technology, the “cost” component of a data acquisition system has now become the configuration and documentation component. Consequently, a key component of a communication system is the ability to describe themselves from both a data and services (communication functions that an IED performs) perspective. Other “key” requirements include: • High-speed IED to IED communication",
"title": ""
},
{
"docid": "b7a9e7afa7167fe9a22105bd88a8102d",
"text": "Gender information has many useful applications in computer vision systems, such as surveillance systems, counting the number of males and females in a shopping mall, accessing control systems in restricted areas, or any human-computer interaction system. In most previous studies, researchers attempted to recognize gender by using visible light images of the human face or body. However, shadow, illumination, and time of day greatly affect the performance of these methods. To overcome this problem, we propose a new gender recognition method based on the combination of visible light and thermal camera images of the human body. Experimental results, through various kinds of feature extraction and fusion methods, show that our approach is efficient for gender recognition through a comparison of recognition rates with conventional systems.",
"title": ""
},
{
"docid": "3ea021309fd2e729ffced7657e3a6038",
"text": "Physiological and pharmacological research undertaken on sloths during the past 30 years is comprehensively reviewed. This includes the numerous studies carried out upon the respiratory and cardiovascular systems, anesthesia, blood chemistry, neuromuscular responses, the brain and spinal cord, vision, sleeping and waking, water balance and kidney function and reproduction. Similarities and differences between the physiology of sloths and that of other mammals are discussed in detail.",
"title": ""
},
{
"docid": "3c46bd625fdaf144c8a571eafe76b945",
"text": "Neurocognitive model inspired by the putative processes in the brain has been applied to invention of novel words. This domain is proposed as the simplest way to understand creativity using experimental and computational means. Three factors are essential for creativity in this domain: knowledge of the statistical language properties, imagination constrained by this knowledge, and filtering of results that selects most interesting novel words. These principles are implemented using a simple correlation-based algorithm for auto-associative memory that learns the statistical properties of language. Results are surprisingly similar to those created by humans. Perspectives on computational models of creativity are discussed. Keywords—Creativity, brain, language processing, higher cognitive functions, neural modeling.",
"title": ""
},
{
"docid": "fc9ec90a7fb9c18a5209f462d21cf0e1",
"text": "The demand for accurate and reliable positioning in industrial applications, especially in robotics and high-precision machines, has led to the increased use of Harmonic Drives. The unique performance features of harmonic drives, such as high reduction ratio and high torque capacity in a compact geometry, justify their widespread application. However, nonlinear torsional compliance and friction are the most fundamental problems in these components and accurate modelling of the dynamic behaviour is expected to improve the performance of the system. This paper offers a model for torsional compliance of harmonic drives. A statistical measure of variation is defined, by which the reliability of the estimated parameters for different operating conditions, as well as the accuracy and integrity of the proposed model, are quantified. The model performance is assessed by simulation to verify the experimental results. Two test setups have been developed and built, which are employed to evaluate experimentally the behaviour of the system. Each setup comprises a different type of harmonic drive, namely the high load torque and the low load torque harmonic drive. The results show an accurate match between the simulation torque obtained from the identified model and the measured torque from the experiment, which indicates the reliability of the proposed model.",
"title": ""
}
] |
scidocsrr
|
b6cae67c818937f9541d78b6b8472b86
|
Parallel Selective Algorithms for Nonconvex Big Data Optimization
|
[
{
"docid": "e2a9bb49fd88071631986874ea197bc1",
"text": "We consider the class of iterative shrinkage-thresholding algorithms (ISTA) for solving linear inverse problems arising in signal/image processing. This class of methods, which can be viewed as an extension of the classical gradient algorithm, is attractive due to its simplicity and thus is adequate for solving large-scale problems even with dense matrix data. However, such methods are also known to converge quite slowly. In this paper we present a new fast iterative shrinkage-thresholding algorithm (FISTA) which preserves the computational simplicity of ISTA but with a global rate of convergence which is proven to be significantly better, both theoretically and practically. Initial promising numerical results for wavelet-based image deblurring demonstrate the capabilities of FISTA which is shown to be faster than ISTA by several orders of magnitude.",
"title": ""
}
] |
[
{
"docid": "dee76f07eb39e33e59608a2544215c0a",
"text": "We ask, and answer, the question of what’s computable by Turing machines equipped with time travel into the past: that is, closed timelike curves or CTCs (with no bound on their size). We focus on a model for CTCs due to Deutsch, which imposes a probabilistic consistency condition to avoid grandfather paradoxes. Our main result is that computers with CTCs can solve exactly the problems that are Turing-reducible to the halting problem, and that this is true whether we consider classical or quantum computers. Previous work, by Aaronson and Watrous, studied CTC computers with a polynomial size restriction, and showed that they solve exactly the problems in PSPACE, again in both the classical and quantum cases. Compared to the complexity setting, the main novelty of the computability setting is that not all CTCs have fixed-points, even probabilistically. Despite this, we show that the CTCs that do have fixed-points suffice to solve the halting problem, by considering fixed-point distributions involving infinite geometric series. The tricky part is to show that even quantum computers with CTCs can be simulated using a Halt oracle. For that, we need the Riesz representation theorem from functional analysis, among other tools. We also study an alternative model of CTCs, due to Lloyd et al., which uses postselection to “simulate” a consistency condition, and which yields BPPpath in the classical case or PP in the quantum case when subject to a polynomial size restriction. With no size limit, we show that postselected CTCs yield only the computable languages if we impose a certain finiteness condition, or all languages nonadaptively reducible to the halting problem if we don’t.",
"title": ""
},
{
"docid": "9164bd704cdb8ca76d0b5f7acda9d4ef",
"text": "In this paper we present a deep neural network topology that incorporates a simple to implement transformationinvariant pooling operator (TI-POOLING). This operator is able to efficiently handle prior knowledge on nuisance variations in the data, such as rotation or scale changes. Most current methods usually make use of dataset augmentation to address this issue, but this requires larger number of model parameters and more training data, and results in significantly increased training time and larger chance of under-or overfitting. The main reason for these drawbacks is that that the learned model needs to capture adequate features for all the possible transformations of the input. On the other hand, we formulate features in convolutional neural networks to be transformation-invariant. We achieve that using parallel siamese architectures for the considered transformation set and applying the TI-POOLING operator on their outputs before the fully-connected layers. We show that this topology internally finds the most optimal \"canonical\" instance of the input image for training and therefore limits the redundancy in learned features. This more efficient use of training data results in better performance on popular benchmark datasets with smaller number of parameters when comparing to standard convolutional neural networks with dataset augmentation and to other baselines.",
"title": ""
},
{
"docid": "9043a5aae40471cb9f671a33725b0072",
"text": "In a software development group of IBM Retail Store Solutions, we built a non-trivial software system based on a stable standard specification using a disciplined, rigorous unit testing and build approach based on the test- driven development (TDD) practice. Using this practice, we reduced our defect rate by about 50 percent compared to a similar system that was built using an ad-hoc unit testing approach. The project completed on time with minimal development productivity impact. Additionally, the suite of automated unit test cases created via TDD is a reusable and extendable asset that will continue to improve quality over the lifetime of the software system. The test suite will be the basis for quality checks and will serve as a quality contract between all members of the team.",
"title": ""
},
{
"docid": "08f7c7d3bc473e929b4a224636f2a887",
"text": "Some existing CNN-based methods for single-view 3D object reconstruction represent a 3D object as either a 3D voxel occupancy grid or multiple depth-mask image pairs. However, these representations are inefficient since empty voxels or background pixels are wasteful. We propose a novel approach that addresses this limitation by replacing masks with “deformation-fields”. Given a single image at an arbitrary viewpoint, a CNN predicts multiple surfaces, each in a canonical location relative to the object. Each surface comprises a depth-map and corresponding deformation-field that ensures every pixel-depth pair in the depth-map lies on the object surface. These surfaces are then fused to form the full 3D shape. During training we use a combination of perview loss and multi-view losses. The novel multi-view loss encourages the 3D points back-projected from a particular view to be consistent across views. Extensive experiments demonstrate the efficiency and efficacy of our method on single-view 3D object reconstruction.",
"title": ""
},
{
"docid": "4ddfa45a585704edcca612f188cc6b78",
"text": "This paper presents a case study of using distributed word representations, word2vec in particular, for improving performance of Named Entity Recognition for the eCommerce domain. We also demonstrate that distributed word representations trained on a smaller amount of in-domain data are more effective than word vectors trained on very large amount of out-of-domain data, and that their combination gives the best results.",
"title": ""
},
{
"docid": "5cc26542d0f4602b2b257e19443839b3",
"text": "Accurate performance evaluation of cloud computing resources is a necessary prerequisite for ensuring that quality of service parameters remain within agreed limits. In this paper, we employ both the analytical and simulation modeling to addresses the complexity of cloud computing systems. Analytical model is comprised of distinct functional submodels, the results of which are combined in an iterative manner to obtain the solution with required accuracy. Our models incorporate the important features of cloud centers such as batch arrival of user requests, resource virtualization, and realistic servicing steps, to obtain important performance metrics such as task blocking probability and total waiting time incurred on user requests. Also, our results reveal important insights for capacity planning to control delay of servicing users requests.",
"title": ""
},
{
"docid": "15ccdecd20bbd9c4b93c57717cbfb787",
"text": "As a crucial challenge for video understanding, exploiting the spatial-temporal structure of video has attracted much attention recently, especially on video captioning. Inspired by the insight that people always focus on certain interested regions of video content, we propose a novel approach which will automatically focus on regions-of-interest and catch their temporal structures. In our approach, we utilize a specific attention model to adaptively select regions-of-interest for each video frame. Then a Dual Memory Recurrent Model (DMRM) is introduced to incorporate temporal structure of global features and regions-of-interest features in parallel, which will obtain rough understanding of video content and particular information of regions-of-interest. Since the attention model could not always catch the right interests, we additionally adopt semantic supervision to attend to interested regions more correctly. We evaluate our method for video captioning on two public benchmarks: the Microsoft Video Description Corpus (MSVD) and the Montreal Video Annotation Dataset (M-VAD). The experiments demonstrate that catching temporal regions-of-interest information really enhances the representation of input videos and our approach obtains the state-of-the-art results on popular evaluation metrics like BLEU-4, CIDEr, and METEOR.",
"title": ""
},
{
"docid": "a6815743923b1f46aee28534597611a9",
"text": "Prognostics focuses on predicting the future performance of a system, specifically the time at which the system no long performs its desired functionality, its time to failure. As an important aspect of prognostics, remaining useful life (RUL) prediction estimates the remaining usable life of a system, which is essential for maintenance decision making and contingency mitigation. A significant amount of research has been reported in the literature to develop prognostics models that are able to predict a system's RUL. These models can be broadly categorized into experience-based models, date-driven models, and physics-based models. However, due to system complexity, data availability, and application constraints, there is no universally accepted best model to estimate RUL. The review part of this paper specifically focused on the development of hybrid prognostics approaches, attempting to leverage the advantages of combining the prognostics models in the aforementioned different categories for RUL prediction. The hybrid approaches reported in the literature were systematically classified by the combination and interfaces of various types of prognostics models. In the case study part, a hybrid prognostics method was proposed and applied to a battery degradation case to show the potential benefit of the hybrid prognostics approach.",
"title": ""
},
{
"docid": "1dc41e5c43fc048bc1f1451eaa1ff764",
"text": "249 words) + Body (6178 words) + 4 Figures = 7,427 Total Words Luis Fernando Molina molinac1@illinois.edu (217) 244-6063 Esther Resendiz eresendi@illinois.edu (217) 244-4174 J. Riley Edwards jedward2@illinois.edu (217) 244-7417 John M. Hart j-hart3@illinois.edu (217) 244-4174 Christopher P. L. Barkan cbarkan@illinois.edu (217) 244-6338 Narendra Ahuja ahuja@illinois.edu (217) 333-1837 3 Corresponding author Molina et al. 11-1442 2 ABSTRACT Individual railroad track maintenance standards and the Federal Railroad Administration (FRA)Individual railroad track maintenance standards and the Federal Railroad Administration (FRA) Track Safety Standards require periodic inspection of railway infrastructure to ensure safe and efficient operation. This inspection is a critical, but labor-intensive task that results in large annual operating expenditures and has limitations in speed, quality, objectivity, and scope. To improve the cost-effectiveness of the current inspection process, machine vision technology can be developed and used as a robust supplement to manual inspections. This paper focuses on the development and performance of machine vision algorithms designed to recognize turnout components, as well as the performance of algorithms designed to recognize and detect defects in other track components. In order to prioritize which components are the most critical for the safe operation of trains, a risk-based analysis of the FRA Accident Database was performed. Additionally, an overview of current technologies for track and turnout component condition assessment is presented. The machine vision system consists of a video acquisition system for recording digital images of track and customized algorithms to identify defects and symptomatic conditions within the images. A prototype machine vision system has been developed for automated inspection of rail anchors and cut spikes, as well as tie recognition. Experimental test results from the system have shown good reliability for recognizing ties, anchors, and cut spikes. This machine vision system, in conjunction with defect analysis and trending of historical data, will enhance the ability for longer-term predictive assessment of the health of the track system and its components. Molina et al. 11-1442 3 INTRODUCTION Railroads conduct regular inspections of their track in order to maintain safe and efficient operation. In addition to internal railroad inspection procedures, periodic track inspections are required under the Federal Railroad Administration (FRA) Track Safety Standards. The objective of this research is to investigate the feasibility of developing a machine vision system to make track inspection more efficient, effective, and objective. In addition, interim approaches to automated track inspection are possible, which will potentially lead to greater inspection effectiveness and efficiency prior to full machine vision system development and implementation. Interim solutions include video capture using vehicle-mounted cameras, image enhancement using image-processing software, and assisted automation using machine vision algorithms (1). The primary focus of this research is inspection of North American Class I railroad mainline and siding tracks, as these generally experience the highest traffic densities. High traffic densities necessitate frequent inspection and more stringent maintenance requirements, and leave railroads less time to accomplish it. 
This makes them the most likely locations for cost-effective investment in new, more efficient, but potentially more capital-intensive inspection technology. The algorithms currently under development will also be adaptable to many types of infrastructure and usage, including transit and some components of high-speed rail (HSR) infrastructure. The machine vision system described in this paper was developed through an interdisciplinary research collaboration at the University of Illinois at Urbana-Champaign (UIUC) between the Computer Vision and Robotics Laboratory (CVRL) at the Beckman Institute for Advanced Science and Technology and the Railroad Engineering Program in the Department of Civil and Environmental Engineering. CURRENT TRACK INSPECTION TECHNOLOGIES USING MACHINE VISION The international railroad community has undertaken significant research to develop innovative applications for advanced technologies with the objective of improving the process of visual track inspection. The development of machine vision, one such inspection technology which uses video cameras, optical sensors, and custom designed algorithms, began in the early 1990’s with work analyzing rail surface defects (2). Machine vision systems are currently in use or under development for a variety of railroad inspection tasks, both wayside and mobile, including inspection of joint bars, surface defects in the rail, rail profile, ballast profile, track gauge, intermodal loading efficiency, railcar structural components, and railcar safety appliances (1, 3-21, 23). The University of Illinois at Urbana-Champaign (UIUC) has been involved in multiple railroad machine-vision research projects sponsored by the Association of American Railroads (AAR), BNSF Railway, NEXTRANS Region V Transportation Center, and the Transportation Research Board (TRB) High-Speed Rail IDEA Program (6-11). In this section, we provide a brief overview of machine vision condition monitoring applications currently in use or under development for inspection of railway infrastructure. Railway applications of machine vision technology have three main elements: the image acquisition system, the image analysis system, and the data analysis system (1). The attributes and performance of each of these individual components determines the overall performance of a machine vision system. Therefore, the following review includes a discussion of the overall Molina et al. 11-1442 4 machine vision system, as well as approaches to image acquisition, algorithm development techniques, lighting methodologies, and experimental results. Rail Surface Defects The Institute of Digital Image Processing (IDIP) in Austria has developed a machine vision system for rail surface inspection during the rail manufacturing process (12). Currently, rail inspection is carried out by humans and complemented with eddy current systems. The objective of this machine vision system is to replace visual inspections on rail production lines. The machine vision system uses spectral image differencing procedure (SIDP) to generate threedimensional (3D) images and detect surface defects in the rails. Additionally, the cameras can capture images at speeds up to 37 miles per hour (mph) (60 kilometers per hour (kph)). Although the system is currently being used only in rail production lines, it can also be attached to an inspection vehicle for field inspection of rail. 
Additionally, the Institute of Intelligent Systems for Automation (ISSIA) in Italy has been researching and developing a system for detecting rail corrugation (13). The system uses images of 512x2048 pixels in resolution, artificial light, and classification of texture to identify surface defects. The system is capable of acquiring images at speeds of up to 125 mph (200 kph). Three image-processing methods have been proposed and evaluated by IISA: Gabor, wavelet, and Gabor wavelet. Gabor was selected as the preferred processing technique. Currently, the technology has been implemented through the patented system known as Visual Inspection System for Railways (VISyR). Rail Wear The Moscow Metro and the State of Common Means of Moscow developed photonic system to measure railhead wear (14). The system consists of 4 CCD cameras and 4 laser lights mounted on an inspection vehicle. The cameras are connected to a central computer that receives images every 20 nanoseconds (ns). The system extracts the profile of the rail using two methods (cut-off and tangent) and the results are ultimately compared with pre-established rail wear templates. Tie Condition The Georgetown Rail Equipment Company (GREX) has developed and commercialized a crosstie inspection system called AURORA (15). The objective of the system is to inspect and classify the condition of timber and concrete crossties. Additionally, the system can be adapted to measure rail seat abrasion (RSA) and detect defects in fastening systems. AURORA uses high-definition cameras and high-voltage lasers as part of the lighting arrangement and is capable of inspecting 70,000 ties per hour at a speed of 30-45 mph (48-72 kph). The system has been shown to replicate results obtained by track inspectors with an accuracy of 88%. Since 2008, Napier University in Sweden has been researching the use of machine vision technology for inspection of timber crossties (16). Their system evaluates the condition of the ends of the ties and classifies them into one of two categories: good or bad. This classification is performed by evaluating quantitative parameters such as the number, length, and depth of cracks, as well as the condition of the tie plate. Experimental results showed that the system has an accuracy of 90% with respect to the correct classification of ties. Future research work includes evaluation of the center portion of the ties and integration with other non-destructive testing (NDT) applications. Molina et al. 11-1442 5 In 2003, the University of Zaragoza in Spain began research on the development of machine vision techniques to inspect concrete crossties using a stereo-metric system to measure different surface shapes (17). The system is used to estimate the deviation from the required dimensional tolerances of the concrete ties in production lines. Two CCD cameras with a resolution of 768x512 pixels are used for image capture and lasers are used for artificial lighting. The system has been shown to produce reliable results, but quantifiable results were not found in the available literature. Ballast The ISS",
"title": ""
},
{
"docid": "2b3de55ff1733fac5ee8c22af210658a",
"text": "With faster connection speed, Internet users are now making social network a huge reservoir of texts, images and video clips (GIF). Sentiment analysis for such online platform can be used to predict political elections, evaluates economic indicators and so on. However, GIF sentiment analysis is quite challenging, not only because it hinges on spatio-temporal visual contentabstraction, but also for the relationship between such abstraction and final sentiment remains unknown.In this paper, we dedicated to find outsuch relationship.We proposed a SentiPairSequence basedspatiotemporal visual sentiment ontology, which forms the midlevel representations for GIFsentiment. The establishment process of SentiPair contains two steps. First, we construct the Synset Forest to define the semantic tree structure of visual sentiment label elements. Then, through theSynset Forest, we organically select and combine sentiment label elements to form a mid-level visual sentiment representation. Our experiments indicate that SentiPair outperforms other competing mid-level attributes. Using SentiPair, our analysis frameworkcan achieve satisfying prediction accuracy (72.6%). We also opened ourdataset (GSO-2015) to the research community. GSO-2015 contains more than 6,000 manually annotated GIFs out of more than 40,000 candidates. Each is labeled with both sentiment and SentiPair Sequence.",
"title": ""
},
{
"docid": "5883597258387e83c4c5b9c1e896c818",
"text": "Techniques making use of Deep Neural Networks (DNN) have recently been seen to bring large improvements in textindependent speaker recognition. In this paper, we verify that the DNN based methods result in excellent performances in the context of text-dependent speaker verification as well. We build our system on the previously introduced HMM based ivector approach, where phone models are used to obtain frame level alignment in order to collect sufficient statistics for ivector extraction. For comparison, we experiment with an alternative alignment obtained directly from the output of DNN trained for phone classification. We also experiment with DNN based bottleneck features and their combinations with standard cepstral features. Although the i-vector approach is generally considered not suitable for text-dependent speaker verification, we show that our HMM based approach combined with bottleneck features provides truly state-of-the-art performance on RSR2015 data.",
"title": ""
},
{
"docid": "be19dab37fdd4b6170816defbc550e2e",
"text": "A new continuous transverse stub (CTS) antenna array is presented in this paper. It is built using the substrate integrated waveguide (SIW) technology and designed for beam steering applications in the millimeter waveband. The proposed CTS antenna array consists of 18 stubs that are arranged in the SIW perpendicular to the wave propagation. The performance of the proposed CTS antenna array is demonstrated through simulation and measurement results. From the experimental results, the peak gain of 11.63-16.87 dBi and maximum radiation power of 96.8% are achieved in the frequency range 27.06-36 GHz with low cross-polarization level. In addition, beam steering capability is achieved in the maximum radiation angle range varying from -43° to 3 ° depending on frequency.",
"title": ""
},
{
"docid": "87e2d691570403ae36e0a9a87099ad71",
"text": "Audiovisual translation is one of several overlapping umbrella terms that include ‘media translation’, ‘multimedia translation’, ‘multimodal translation’ and ‘screen translation’. These different terms all set out to cover the interlingual transfer of verbal language when it is transmitted and accessed both visually and acoustically, usually, but not necessarily, through some kind of electronic device. Theatrical plays and opera, for example, are clearly audiovisual yet, until recently, audiences required no technological devices to access their translations; actors and singers simply acted and sang the translated versions. Nowadays, however, opera is frequently performed in the original language with surtitles in the target language projected on to the stage. Furthermore, electronic librettos placed on the back of each seat containing translations are now becoming widely available. However, to date most research in audiovisual translation has been dedicated to the field of screen translation, which, while being both audiovisual and multimedial in nature, is specifically understood to refer to the translation of films and other products for cinema, TV, video and DVD. After the introduction of the first talking pictures in the 1920s a solution needed to be found to allow films to circulate despite language barriers. How to translate film dialogues and make movie-going accessible to speakers of all languages was to become a major concern for both North American and European film directors. Today, of course, screens are no longer restricted to cinema theatres alone. Television screens, computer screens and a series of devices such as DVD players, video game consoles, GPS navigation devices and mobile phones are also able to send out audiovisual products to be translated into scores of languages. Hence, strictly speaking, screen translation includes translations for any electronic appliance with a screen; however, for the purposes of this chapter, the term will be used mainly to refer to translations for the most popular products, namely for cinema, TV, video and DVD, and videogames. The two most widespread modalities adopted for translating products for the screen are dubbing and subtitling.1 Dubbing is a process which uses the acoustic channel for translational purposes, while subtitling is visual and involves a written translation that is superimposed on to the",
"title": ""
},
{
"docid": "7cbe504e03ab802389c48109ed1f1802",
"text": "Despite recent breakthroughs in the applications of deep neural networks, one setting that presents a persistent challenge is that of “one-shot learning.” Traditional gradient-based networks require a lot of data to learn, often through extensive iterative training. When new data is encountered, the models must inefficiently relearn their parameters to adequately incorporate the new information without catastrophic interference. Architectures with augmented memory capacities, such as Neural Turing Machines (NTMs), offer the ability to quickly encode and retrieve new information, and hence can potentially obviate the downsides of conventional models. Here, we demonstrate the ability of a memory-augmented neural network to rapidly assimilate new data, and leverage this data to make accurate predictions after only a few samples. We also introduce a new method for accessing an external memory that focuses on memory content, unlike previous methods that additionally use memory locationbased focusing mechanisms.",
"title": ""
},
{
"docid": "ce2f8135fe123e09b777bd147bec4bb3",
"text": "Supervised learning, e.g., classification, plays an important role in processing and organizing microblogging data. In microblogging, it is easy to mass vast quantities of unlabeled data, but would be costly to obtain labels, which are essential for supervised learning algorithms. In order to reduce the labeling cost, active learning is an effective way to select representative and informative instances to query for labels for improving the learned model. Different from traditional data in which the instances are assumed to be independent and identically distributed (i.i.d.), instances in microblogging are networked with each other. This presents both opportunities and challenges for applying active learning to microblogging data. Inspired by social correlation theories, we investigate whether social relations can help perform effective active learning on networked data. In this paper, we propose a novel Active learning framework for the classification of Networked Texts in microblogging (ActNeT). In particular, we study how to incorporate network information into text content modeling, and design strategies to select the most representative and informative instances from microblogging for labeling by taking advantage of social network structure. Experimental results on Twitter datasets show the benefit of incorporating network information in active learning and that the proposed framework outperforms existing state-of-the-art methods.",
"title": ""
},
{
"docid": "d9b261d1ed01f40ca22e7955c015d72c",
"text": "A series of experiments has investigated the relationship between the playing of background music during the performance of repetitive work and efficiency in performing such a task. The results give strong support to the contention that economic benefits can accure from the use of music in industry. The studies show that music is effective in raising efficiency in this type of work even when in competition with the unfavourable conditions produced by machine noise.",
"title": ""
},
{
"docid": "8a9191c256f62b7efce93033752059e6",
"text": "Food products fermented by lactic acid bacteria have long been used for their proposed health promoting properties. In recent years, selected probiotic strains have been thoroughly investigated for specific health effects. Properties like relief of lactose intolerance symptoms and shortening of rotavirus diarrhoea are now widely accepted for selected probiotics. Some areas, such as the treatment and prevention of atopy hold great promise. However, many proposed health effects still need additional investigation. In particular the potential benefits for the healthy consumer, the main market for probiotic products, requires more attention. Also, the potential use of probiotics outside the gastrointestinal tract deserves to be explored further. Results from well conducted clinical studies will expand and increase the acceptance of probiotics for the treatment and prevention of selected diseases.",
"title": ""
},
{
"docid": "77e5724ff3b8984a1296731848396701",
"text": "Temporal networks, i.e., networks in which the interactions among a set of elementary units change over time, can be modelled in terms of timevarying graphs, which are time-ordered sequences of graphs over a set of nodes. In such graphs, the concepts of node adjacency and reachability crucially depend on the exact temporal ordering of the links. Consequently, all the concepts and metrics proposed and used for the characterisation of static complex networks have to be redefined or appropriately extended to time-varying graphs, in order to take into account the effects of time ordering on causality. In this chapter we V. Nicosia ( ) Computer Laboratory, University of Cambridge, 15 JJ Thomson Avenue, Cambridge CB3 0FD, UK e-mail: V.Nicosia@qmul.ac.uk Laboratorio sui Sistemi Complessi, Scuola Superiore di Catania, Via Valdisavoia 9, 95123 Catania, Italy J. Tang C. Mascolo Computer Laboratory, University of Cambridge, 15 JJ Thomson Avenue, Cambridge CB3 0FD, UK M. Musolesi ( ) School of Computer Science, University of Birmingham, Edgbaston, Birmingham B15 2TT, UK e-mail: m.musolesi@cs.bham.ac.uk G. Russo Dipartimento di Matematica e Informatica, Universitá di Catania, Via S. Sofia 64, 95123 Catania, Italy V. Latora Laboratorio sui Sistemi Complessi, Scuola Superiore di Catania, Via Valdisavoia 9, 95123 Catania, Italy School of Mathematical Sciences, Queen Mary, University of London, E1 4NS London, UK Dipartimento di Fisica e Astronomia and INFN, Universitá di Catania, Via S. Sofia 64, 95123 Catania, Italy P. Holme and J. Saramäki (eds.), Temporal Networks, Understanding Complex Systems, DOI 10.1007/978-3-642-36461-7 2, © Springer-Verlag Berlin Heidelberg 2013 15 16 V. Nicosia et al. discuss how to represent temporal networks and we review the definitions of walks, paths, connectedness and connected components valid for graphs in which the links fluctuate over time. We then focus on temporal node–node distance, and we discuss how to characterise link persistence and the temporal small-world behaviour in this class of networks. Finally, we discuss the extension of classic centrality measures, including closeness, betweenness and spectral centrality, to the case of time-varying graphs, and we review the work on temporal motifs analysis and the definition of modularity for temporal graphs.",
"title": ""
},
{
"docid": "85a01086e72befaccff9b8741b920fdf",
"text": "While search engines are the major sources of content discovery on online content providers and e-commerce sites, their capability is limited since textual descriptions cannot fully describe the semantic of content such as videos. Recommendation systems are now widely used in online content providers and e-commerce sites and play an important role in discovering content. In this paper, we describe how one can boost the popularity of a video through the recommendation system in YouTube. We present a model that captures the view propagation between videos through the recommendation linkage and quantifies the influence that a video has on the popularity of another video. Furthermore, we identify that the similarity in titles and tags is an important factor in forming the recommendation linkage between videos. This suggests that one can manipulate the metadata of a video to boost its popularity.",
"title": ""
},
{
"docid": "9d87c71c136264a03a74139417bd7a1e",
"text": "Deep reinforcement learning (deep RL) has been successful in learning sophisticated behaviors automatically; however, the learning process requires a huge number of trials. In contrast, animals can learn new tasks in just a few trials, benefiting from their prior knowledge about the world. This paper seeks to bridge this gap. Rather than designing a “fast” reinforcement learning algorithm, we propose to represent it as a recurrent neural network (RNN) and learn it from data. In our proposed method, RL, the algorithm is encoded in the weights of the RNN, which are learned slowly through a general-purpose (“slow”) RL algorithm. The RNN receives all information a typical RL algorithm would receive, including observations, actions, rewards, and termination flags; and it retains its state across episodes in a given Markov Decision Process (MDP). The activations of the RNN store the state of the “fast” RL algorithm on the current (previously unseen) MDP. We evaluate RL experimentally on both small-scale and large-scale problems. On the small-scale side, we train it to solve randomly generated multi-armed bandit problems and finite MDPs. After RL is trained, its performance on new MDPs is close to human-designed algorithms with optimality guarantees. On the largescale side, we test RL on a vision-based navigation task and show that it scales up to high-dimensional problems.",
"title": ""
}
] |
scidocsrr
|
e630c9b7f1d11be7fe813c9371489332
|
Sparsity-based DOA estimation using co-prime arrays
|
[
{
"docid": "4bc74a746ef958a50bb8c542aa25860f",
"text": "A new approach to super resolution line spectrum estimation in both temporal and spatial domain using a coprime pair of samplers is proposed. Two uniform samplers with sample spacings MT and NT are used where M and N are coprime and T has the dimension of space or time. By considering the difference set of this pair of sample spacings (which arise naturally in computation of second order moments), sample locations which are O(MN) consecutive multiples of T can be generated using only O(M + N) physical samples. In order to efficiently use these O(MN) virtual samples for super resolution spectral estimation, a novel algorithm based on the idea of spatial smoothing is proposed, which can be used for estimating frequencies of sinusoids buried in noise as well as for estimating Directions-of-Arrival (DOA) of impinging signals on a sensor array. This technique allows us to construct a suitable positive semidefinite matrix on which subspace based algorithms like MUSIC can be applied to detect O(MN) spectral lines using only O(M + N) physical samples.",
"title": ""
}
] |
[
{
"docid": "6200e3a50d2e578d56ef9015149dd5fb",
"text": "This study investigated the frequency of college students' procrastination on academic tasks and the reasons for procrastination behavior. A high percentage of students reported problems with procrastination on several specific academic tasks. Self-reported procrastination was positively correlated with the number of self-paced quizzes students took late in the semester and with participation in an experimental session offered late in the semester. A factor analysis of the reasons for procrastination indicated that the factors Fear of Failure and Aversiveness of the Task accounted for most of the variance. A small but very homogeneous group of subjects endorsed items on the Fear of Failure factor that correlated significantly with self-report measures of depression, irrational cognitions, low self-esteem, delayed study behavior, anxiety, and lack of assertion. A larger and relatively heterogeneous group of subjects reported procrastinating as a result of aversiveness of the task. The Aversiveness of the Task factor did not correlate significantly with anxiety or assertion, but it did correlate significantly with'depression, irrational cognitions, low self-esteem, and delayed study behavior. These results indicate that procrastination is not solely a deficit in study habits or time management, but involves a complex interaction of behavioral, cognitive, and affective components;",
"title": ""
},
{
"docid": "23ef781d3230124360f24cc6e38fb15f",
"text": "Exploration of ANNs for the economic purposes is described and empirically examined with the foreign exchange market data. For the experiments, panel data of the exchange rates (USD/EUR, JPN/USD, USD/ GBP) are examined and optimized to be used for time-series predictions with neural networks. In this stage the input selection, in which the processing steps to prepare the raw data to a suitable input for the models are investigated. The best neural network is found with the best forecasting abilities, based on a certain performance measure. A visual graphs on the experiments data set is presented after processing steps, to illustrate that particular results. The out-of-sample results are compared with training ones. & 2015 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "f72150d92ff4e0422ae44c3c21e8345e",
"text": "There has been a recent paradigm shift in robotics to data-driven learning for planning and control. Due to large number of experiences required for training, most of these approaches use a self-supervised paradigm: using sensors to measure success/failure. However, in most cases, these sensors provide weak supervision at best. In this work, we propose an adversarial learning framework that pits an adversary against the robot learning the task. In an effort to defeat the adversary, the original robot learns to perform the task with more robustness leading to overall improved performance. We show that this adversarial framework forces the robot to learn a better grasping model in order to overcome the adversary. By grasping 82% of presented novel objects compared to 68% without an adversary, we demonstrate the utility of creating adversaries. We also demonstrate via experiments that having robots in adversarial setting might be a better learning strategy as compared to having collaborative multiple robots. For supplementary video see: youtu.be/QfK3Bqhc6Sk",
"title": ""
},
{
"docid": "7b526ab92e31c2677fd20022a8b46189",
"text": "Close physical interaction between robots and humans is a particularly challenging aspect of robot development. For successful interaction and cooperation, the robot must have the ability to adapt its behavior to the human counterpart. Based on our earlier work, we present and evaluate a computationally efficient machine learning algorithm that is well suited for such close-contact interaction scenarios. We show that this algorithm helps to improve the quality of the interaction between a robot and a human caregiver. To this end, we present two human-in-the-loop learning scenarios that are inspired by human parenting behavior, namely, an assisted standing-up task and an assisted walking task.",
"title": ""
},
{
"docid": "60ad412d0d6557d2a06e9914bbf3c680",
"text": "Helpfulness of online reviews is a multi-faceted concept that can be driven by several types of factors. This study was designed to extend existing research on online review helpfulness by looking at not just the quantitative factors (such as word count), but also qualitative aspects of reviewers (including reviewer experience, reviewer impact, reviewer cumulative helpfulness). This integrated view uncovers some insights that were not available before. Our findings suggest that word count has a threshold in its effects on review helpfulness. Beyond this threshold, its effect diminishes significantly or becomes near non-existent. Reviewer experience and their impact were not statistically significant predictors of helpfulness, but past helpfulness records tended to predict future helpfulness ratings. Review framing was also a strong predictor of helpfulness. As a result, characteristics of reviewers and review messages have a varying degree of impact on review helpfulness. Theoretical and practical implications are discussed. 2015 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "7e7739bfddbae8cfa628d67eb582c121",
"text": "When firms implement enterprise resource planning, they need to redesign their business processes to make information flow smooth within organizations. ERP thus results in changes in processes and responsibilities. Firms cannot realize expected returns from ERP investments unless these changes are effectively managed after ERP systems are put into operation. This research proposes a conceptual framework to highlight the importance of the change management after firms implement ERP systems. Our research model is empirically tested using data collected from over 170 firms that had used ERP systems for more than one year. Our analysis reveals that the eventual success of ERP systems depends on effective change management after ERP implementation, supporting the existence of the valley of despair.",
"title": ""
},
{
"docid": "21b6eabf98a24614375cd0192126ef12",
"text": "Interior permanent magnet motors equipped with a squirrel-cage rotor are receiving lately an increased interest. Defined as line-start, line-fed or hybrid synchronous-induction motors, such machines combine the advantage of the brushless permanent magnet motors, i.e. high efficiency, constant torque for variable speed, with the high starting capability of the induction motors connected directly to the supply system. This paper proposes a unified analysis of these motors, with an emphasis on how any possible configuration may be described by using symmetrical components and two equivalent fictitious machines: positive and negative sequences. The analysis is validated on a single-phase unbalanced and on a three-phase balanced line-fed interior permanent magnet motors.",
"title": ""
},
{
"docid": "1d11b3ddedc72cdcb3002c149ea41316",
"text": "The \\emph{wavelet tree} data structure is a space-efficient technique for rank and select queries that generalizes from binary characters to an arbitrary multicharacter alphabet. It has become a key tool in modern full-text indexing and data compression because of its capabilities in compressing, indexing, and searching. We present a comparative study of its practical performance regarding a wide range of options on the dimensions of different coding schemes and tree shapes. Our results are both theoretical and experimental: (1)~We show that the run-length $\\delta$ coding size of wavelet trees achieves the 0-order empirical entropy size of the original string with leading constant 1, when the string's 0-order empirical entropy is asymptotically less than the logarithm of the alphabet size. This result complements the previous works that are dedicated to analyzing run-length $\\gamma$-encoded wavelet trees. It also reveals the scenarios when run-length $\\delta$ encoding becomes practical. (2)~We introduce a full generic package of wavelet trees for a wide range of options on the dimensions of coding schemes and tree shapes. Our experimental study reveals the practical performance of the various modifications.",
"title": ""
},
{
"docid": "25ff7c7f05c7c640447ab077efb8c84b",
"text": "Most propeller injuries occur at water recreational facilities such as those with provision for water skiing, boat racing, skin and scuba diving. Propeller injuries resulting from nautical accidents can be fatal. The sharp blades of propellers rotating at high speeds cause multiple and serious injuries such as deep laceration, chop wounds, bone fractures and mutilation of extremities. We present the autopsy reports of three people who died after colliding with boat propellers.",
"title": ""
},
{
"docid": "bf9d706685f76877a56d323423b32a5c",
"text": "BACKGROUND\nFine particulate air pollution has been linked to cardiovascular disease, but previous studies have assessed only mortality and differences in exposure between cities. We examined the association of long-term exposure to particulate matter of less than 2.5 microm in aerodynamic diameter (PM2.5) with cardiovascular events.\n\n\nMETHODS\nWe studied 65,893 postmenopausal women without previous cardiovascular disease in 36 U.S. metropolitan areas from 1994 to 1998, with a median follow-up of 6 years. We assessed the women's exposure to air pollutants using the monitor located nearest to each woman's residence. Hazard ratios were estimated for the first cardiovascular event, adjusting for age, race or ethnic group, smoking status, educational level, household income, body-mass index, and presence or absence of diabetes, hypertension, or hypercholesterolemia.\n\n\nRESULTS\nA total of 1816 women had one or more fatal or nonfatal cardiovascular events, as confirmed by a review of medical records, including death from coronary heart disease or cerebrovascular disease, coronary revascularization, myocardial infarction, and stroke. In 2000, levels of PM2.5 exposure varied from 3.4 to 28.3 microg per cubic meter (mean, 13.5). Each increase of 10 microg per cubic meter was associated with a 24% increase in the risk of a cardiovascular event (hazard ratio, 1.24; 95% confidence interval [CI], 1.09 to 1.41) and a 76% increase in the risk of death from cardiovascular disease (hazard ratio, 1.76; 95% CI, 1.25 to 2.47). For cardiovascular events, the between-city effect appeared to be smaller than the within-city effect. The risk of cerebrovascular events was also associated with increased levels of PM2.5 (hazard ratio, 1.35; 95% CI, 1.08 to 1.68).\n\n\nCONCLUSIONS\nLong-term exposure to fine particulate air pollution is associated with the incidence of cardiovascular disease and death among postmenopausal women. Exposure differences within cities are associated with the risk of cardiovascular disease.",
"title": ""
},
{
"docid": "70612623517870632503f321977af4c9",
"text": "Today, the Open Standard for Authorization (OAuth) is widely used by many service providers such as Google, Github, and Facebook. The OAuth-WebView implementation is the most widely used approach despite explicit warnings to the developers of its security and privacy risks. Previous researches have discussed these risks and proposed solutions that mandate numerous implementation's changes and/or do not assume strong attacking assumptions. In this work, we introduce SecureOAuth, a whitelist access control protection framework for the Android platform. SecureOAuth is composed of: Android library modifications, service creation, and system app creation. We have implemented a prototype of the SecureOAuth framework and evaluated it on performance and memory overhead. We also showcase examples of security threats that this framework counters. The framework hardens the OAuth-WebView implementation with bounded overhead while keeping the user's involvement to minimum. Moreover, the framework requires no implementations' changes and it assumes attackers with advanced and expert skill levels.",
"title": ""
},
{
"docid": "993d20256b3fee12e46df15e72302139",
"text": "Searching for and making decisions about information is becoming increasingly difficult as the amount of information and number of choices increases. Recommendation systems help users find items of interest of a particular type, such as movies or restaurants, but are still somewhat awkward to use. Our solution is to take advantage of the complementary strengths of personalized recommendation systems and dialogue systems, creating personalized aides. We present a system – the Adaptive Place Advisor – that treats item selection as an interactive, conversational process, with the program inquiring about item attributes and the user responding. Individual, long-term user preferences are unobtrusively obtained in the course of normal recommendation dialogues and used to direct future conversations with the same user. We present a novel user model that influences both item search and the questions asked during a conversation. We demonstrate the effectiveness of our system in significantly reducing the time and number of interactions required to find a satisfactory item, as compared to a control group of users interacting with a non-adaptive version of the system.",
"title": ""
},
{
"docid": "490fe197e7ed6c658160c8a04ee1fc82",
"text": "Automatic concept learning from large scale imbalanced data sets is a key issue in video semantic analysis and retrieval, which means the number of negative examples is far more than that of positive examples for each concept in the training data. The existing methods adopt generally under-sampling for the majority negative examples or over-sampling for the minority positive examples to balance the class distribution on training data. The main drawbacks of these methods are: (1) As a key factor that affects greatly the performance, in most existing methods, the degree of re-sampling needs to be pre-fixed, which is not generally the optimal choice; (2) Many useful negative samples may be discarded in under-sampling. In addition, some works only focus on the improvement of the computational speed, rather than the accuracy. To address the above issues, we propose a new approach and algorithm named AdaOUBoost (Adaptive Over-sampling and Under-sampling Boost). The novelty of AdaOUBoost mainly lies in: adaptively over-sample the minority positive examples and under-sample the majority negative examples to form different sub-classifiers. And combine these sub-classifiers according to their accuracy to create a strong classifier, which aims to use fully the whole training data and improve the performance of the class-imbalance learning classifier. In AdaOUBoost, first, our clustering-based under-sampling method is employed to divide the majority negative examples into some disjoint subsets. Then, for each subset of negative examples, we utilize the borderline-SMOTE (synthetic minority over-sampling technique) algorithm to over-sample the positive examples with different size, train each sub-classifier using each of them, and get the classifier by fusing these sub-classifiers with different weights. Finally, we combine these classifiers in each subset of negative examples to create a strong classifier. We compare the performance between AdaOUBoost and the state-of-the-art methods on TRECVID 2008 benchmark with all 20 concepts, and the results show the AdaOUBoost can achieve the superior performance in large scale imbalanced data sets.",
"title": ""
},
{
"docid": "a9c4f01cfdbdde6245d99a9c5056f83f",
"text": "Brachyolmia (BO) is a heterogeneous group of skeletal dysplasias with skeletal changes limited to the spine or with minimal extraspinal features. BO is currently classified into types 1, 2, 3, and 4. BO types 1 and 4 are autosomal recessive conditions caused by PAPSS2 mutations, which may be merged together as an autosomal recessive BO (AR-BO). The clinical and radiological signs of AR-BO in late childhood have already been reported; however, the early manifestations and their age-dependent evolution have not been well documented. We report an affected boy with AR-BO, whose skeletal abnormalities were detected in utero and who was followed until 10 years of age. Prenatal ultrasound showed bowing of the legs. In infancy, radiographs showed moderate platyspondyly and dumbbell deformity of the tubular bones. Gradually, the platyspondyly became more pronounced, while the bowing of the legs and dumbbell deformities of the tubular bones diminished with age. In late childhood, the overall findings were consistent with known features of AR-BO. Genetic testing confirmed the diagnosis. Being aware of the initial skeletal changes may facilitate early diagnosis of PAPSS2-related skeletal dysplasias.",
"title": ""
},
{
"docid": "4eaa8c1af7a4f6f6c9de1e6de3f2495f",
"text": "Technologies to support the Internet of Things are becoming more important as the need to better understand our environments and make them smart increases. As a result it is predicted that intelligent devices and networks, such as WSNs, will not be isolated, but connected and integrated, composing computer networks. So far, the IP-based Internet is the largest network in the world; therefore, there are great strides to connect WSNs with the Internet. To this end, the IETF has developed a suite of protocols and open standards for accessing applications and services for wireless resource constrained networks. However, many open challenges remain, mostly due to the complex deployment characteristics of such systems and the stringent requirements imposed by various services wishing to make use of such complex systems. Thus, it becomes critically important to study how the current approaches to standardization in this area can be improved, and at the same time better understand the opportunities for the research community to contribute to the IoT field. To this end, this article presents an overview of current standards and research activities in both industry and academia.",
"title": ""
},
{
"docid": "e22f9516948725be20d8e331d5bafa56",
"text": "Spatial information captured from optical remote sensors on board unmanned aerial vehicles (UAVs) has great potential in automatic surveillance of electrical infrastructure. For an automatic vision-based power line inspection system, detecting power lines from a cluttered background is one of the most important and challenging tasks. In this paper, a novel method is proposed, specifically for power line detection from aerial images. A pulse coupled neural filter is developed to remove background noise and generate an edge map prior to the Hough transform being employed to detect straight lines. An improved Hough transform is used by performing knowledge-based line clustering in Hough space to refine the detection results. The experiment on real image data captured from a UAV platform demonstrates that the proposed approach is effective for automatic power line detection.",
"title": ""
},
{
"docid": "680d755a3a6d8fcd926eb441fad5aa57",
"text": "DNA hybridization arrays simultaneously measure the expression level for thousands of genes. These measurements provide a “snapshot” of transcription levels within the cell. A major challenge in computational biology is to uncover, from such measurements, gene/protein interactions and key biological features of cellular systems.\nIn this paper, we propose a new framework for discovering interactions between genes based on multiple expression measurements This framework builds on the use of Bayesian networks for representing statistical dependencies. A Bayesian network is a graph-based model of joint multi-variate probability distributions that captures properties of conditional independence between variables. Such models are attractive for their ability to describe complex stochastic processes, and for providing clear methodologies for learning from (noisy) observations.\nWe start by showing how Bayesian networks can describe interactions between genes. We then present an efficient algorithm capable of learning such networks and statistical method to assess our confidence in their features. Finally, we apply this method to the S. cerevisiae cell-cycle measurements of Spellman et al. [35] to uncover biological features",
"title": ""
},
{
"docid": "224ec7b58d17f4ffb9753ac85bf29456",
"text": "This paper presents Venus, a service for securing user interaction with untrusted cloud storage. Specifically, Venus guarantees integrity and consistency for applications accessing a key-based object store service, without requiring trusted components or changes to the storage provider. Venus completes all operations optimistically, guaranteeing data integrity. It then verifies operation consistency and notifies the application. Whenever either integrity or consistency is violated, Venus alerts the application. We implemented Venus and evaluated it with Amazon S3 commodity storage service. The evaluation shows that it adds no noticeable overhead to storage operations.",
"title": ""
},
{
"docid": "2f012c2941f8434b9d52ae1942b64aff",
"text": "Classification of plants based on a multi-organ approach is very challenging. Although additional data provide more information that might help to disambiguate between species, the variability in shape and appearance in plant organs also raises the degree of complexity of the problem. Despite promising solutions built using deep learning enable representative features to be learned for plant images, the existing approaches focus mainly on generic features for species classification, disregarding the features representing plant organs. In fact, plants are complex living organisms sustained by a number of organ systems. In our approach, we introduce a hybrid generic-organ convolutional neural network (HGO-CNN), which takes into account both organ and generic information, combining them using a new feature fusion scheme for species classification. Next, instead of using a CNN-based method to operate on one image with a single organ, we extend our approach. We propose a new framework for plant structural learning using the recurrent neural network-based method. This novel approach supports classification based on a varying number of plant views, capturing one or more organs of a plant, by optimizing the contextual dependencies between them. We also present the qualitative results of our proposed models based on feature visualization techniques and show that the outcomes of visualizations depict our hypothesis and expectation. Finally, we show that by leveraging and combining the aforementioned techniques, our best network outperforms the state of the art on the PlantClef2015 benchmark. The source code and models are available at https://github.com/cs-chan/Deep-Plant.",
"title": ""
}
] |
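One passage in the list above concerns wavelet trees for rank/select over a multicharacter alphabet. As a minimal sketch of the core idea, the class below builds a balanced wavelet tree and answers rank(c, i) queries; the naive linear-time bitmap counting stands in for the succinct rank structures (and the run-length γ/δ compressed bitmaps) that a real implementation would use.

```python
class WaveletTree:
    """Balanced wavelet tree supporting rank(c, i) = occurrences of symbol c in seq[:i]."""
    def __init__(self, seq, alphabet=None):
        self.alphabet = sorted(set(seq)) if alphabet is None else alphabet
        if len(self.alphabet) <= 1:
            self.bits = None               # leaf: all remaining symbols are identical
            return
        mid = len(self.alphabet) // 2
        self.left_syms = set(self.alphabet[:mid])
        # bitmap: 0 -> symbol routed to the left child, 1 -> routed to the right child
        self.bits = [0 if c in self.left_syms else 1 for c in seq]
        self.left = WaveletTree([c for c in seq if c in self.left_syms], self.alphabet[:mid])
        self.right = WaveletTree([c for c in seq if c not in self.left_syms], self.alphabet[mid:])

    def _rank_bit(self, b, i):
        # Naive O(i) counting; a production wavelet tree uses an o(n)-space rank structure here.
        return sum(1 for x in self.bits[:i] if x == b)

    def rank(self, c, i):
        if self.bits is None:
            return i
        if c in self.left_syms:
            return self.left.rank(c, self._rank_bit(0, i))
        return self.right.rank(c, self._rank_bit(1, i))


wt = WaveletTree("abracadabra")
print(wt.rank("a", 11))   # 5 occurrences of 'a' in the whole string
print(wt.rank("b", 9))    # 2 occurrences of 'b' in the first 9 characters
```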
scidocsrr
|
6de0e8c4f027268cff896f11872c785f
|
Dynamic topic detection and tracking: A comparison of HDP, C-word, and cocitation methods
|
[
{
"docid": "2855a1f420ed782317c1598c9d9c185e",
"text": "Ranking authors is vital for identifying a researcher’s impact and his standing within a scientific field. There are many different ranking methods (e.g., citations, publications, h-index, PageRank, and weighted PageRank), but most of them are topic-independent. This paper proposes topic-dependent ranks based on the combination of a topic model and a weighted PageRank algorithm. The Author-Conference-Topic (ACT) model was used to extract topic distribution of individual authors. Two ways for combining the ACT model with the PageRank algorithm are proposed: simple combination (I_PR) or using a topic distribution as a weighted vector for PageRank (PR_t). Information retrieval was chosen as the test field and representative authors for different topics at different time phases were identified. Principal Component Analysis (PCA) was applied to analyze the ranking difference between I_PR and PR_t.",
"title": ""
}
] |
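The positive passage above biases PageRank with an author-topic distribution. The sketch below shows the generic mechanism this relies on: personalized PageRank by power iteration, where the teleport vector is a per-author topic weight instead of the uniform vector. The toy graph, topic weights, and damping factor are illustrative assumptions, not the paper's ACT-derived values.

```python
import numpy as np

def topic_pagerank(adj, topic_weight, d=0.85, iters=100):
    """Power iteration for PageRank with a topic-derived teleport (personalization) vector."""
    n = adj.shape[0]
    v = topic_weight / topic_weight.sum()   # teleport vector from the topic distribution
    out_deg = adj.sum(axis=1)
    P = np.zeros((n, n))
    for j in range(n):
        # Column j is the distribution over destinations from node j; dangling nodes teleport via v.
        P[:, j] = adj[j] / out_deg[j] if out_deg[j] > 0 else v
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = d * (P @ r) + (1 - d) * v
    return r

# Assumed toy graph: 4 authors, adj[i, j] = 1 means author i links to (e.g. cites) author j.
adj = np.array([[0, 1, 1, 0],
                [0, 0, 1, 0],
                [1, 0, 0, 1],
                [0, 0, 1, 0]], dtype=float)

# Assumed per-author weight for one topic, e.g. P(topic | author) from an author-topic model.
topic_weight = np.array([0.10, 0.50, 0.30, 0.10])

print("topic-dependent ranks:", np.round(topic_pagerank(adj, topic_weight), 3))
```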
[
{
"docid": "9cb682049f4a4d1291189b7cfccafb1e",
"text": "The sequencing by hybridization (SBH) of determining the order in which nucleotides should occur on a DNA string is still under discussion for enhancements on computational intelligence although the next generation of DNA sequencing has come into existence. In the last decade, many works related to graph theory-based DNA sequencing have been carried out in the literature. This paper proposes a method for SBH by integrating hypergraph with genetic algorithm (HGGA) for designing a novel analytic technique to obtain DNA sequence from its spectrum. The paper represents elements of the spectrum and its relation as hypergraph and applies the unimodular property to ensure the compatibility of relations between l-mers. The hypergraph representation and unimodular property are bound with the genetic algorithm that has been customized with a novel selection and crossover operator reducing the computational complexity with accelerated convergence. Subsequently, upon determining the primary strand, an anti-homomorphism is invoked to find the reverse complement of the sequence. The proposed algorithm is implemented in the GenBank BioServer datasets, and the results are found to prove the efficiency of the algorithm. The HGGA is a non-classical algorithm with significant advantages and computationally attractive complexity reductions ranging to $$O(n^{2} )$$ O ( n 2 ) with improved accuracy that makes it prominent for applications other than DNA sequencing like image processing, task scheduling and big data processing.",
"title": ""
},
{
"docid": "2e9d0bf42b8bb6eb8752e89eb46f2fc5",
"text": "What is the growth pattern of social networks, like Facebook and WeChat? Does it truly exhibit exponential early growth, as predicted by textbook models like the Bass model, SI, or the Branching Process? How about the count of links, over time, for which there are few published models?\n We examine the growth of several real networks, including one of the world's largest online social network, ``WeChat'', with 300 million nodes and 4.75 billion links by 2013; and we observe power law growth for both nodes and links, a fact that completely breaks the sigmoid models (like SI, and Bass). In its place, we propose NETTIDE, along with differential equations for the growth of the count of nodes, as well as links. Our model accurately fits the growth patterns of real graphs; it is general, encompassing as special cases all the known, traditional models (including Bass, SI, log-logistic growth); while still remaining parsimonious, requiring only a handful of parameters. Moreover, our NETTIDE for link growth is the first one of its kind, accurately fitting real data, and naturally leading to the densification phenomenon. We validate our model with four real, time-evolving social networks, where NETTIDE gives good fitting accuracy, and, more importantly, applied on the WeChat data, our NETTIDE forecasted more than 730 days into the future, with 3% error.",
"title": ""
},
{
"docid": "380dc2289f621b06f0085a1d8e178638",
"text": "Feature modeling is an important approach to capture the commonalities and variabilities in system families and product lines. Cardinality-based feature modeling integrates a number of existing extensions of the original feature-modeling notation from Feature-Oriented Domain Analysis. Staged configuration is a process that allows the incremental configuration of cardinality-based feature models. It can be achieved by performing a step-wise specialization of the feature model. In this paper, we argue that cardinality-based feature models can be interpreted as a special class of context-free grammars. We make this precise by specifying a translation from a feature model into a context-free grammar. Consequently, we provide a semantic interpretation for cardinalitybased feature models by assigning an appropriate semantics to the language recognized by the corresponding grammar. Finally, we give an account on how feature model specialization can be formalized as transformations on the grammar equivalent of feature models.",
"title": ""
},
{
"docid": "5e6a2439641793594087d0543fcaec99",
"text": "Background: Virtual Machine (VM) consolidation is an effective technique to improve resource utilization and reduce energy footprint in cloud data centers. It can be implemented in a centralized or a distributed fashion. Distributed VM consolidation approaches are currently gaining popularity because they are often more scalable than their centralized counterparts and they avoid a single point of failure. Objective: To present a comprehensive, unbiased overview of the state-of-the-art on distributed VM consolidation approaches. Method: A Systematic Mapping Study (SMS) of the existing distributed VM consolidation approaches. Results: 19 papers on distributed VM consolidation categorized in a variety of ways. The results show that the existing distributed VM consolidation approaches use four types of algorithms, optimize a number of different objectives, and are often evaluated with experiments involving simulations. Conclusion: There is currently an increasing amount of interest on developing and evaluating novel distributed VM consolidation approaches. A number of research gaps exist where the focus of future research may be directed.",
"title": ""
},
{
"docid": "018d05daa52fb79c17519f29f31026d7",
"text": "The aim of this paper is to review conceptual and empirical literature on the concept of distributed leadership (DL) in order to identify its origins, key arguments and areas for further work. Consideration is given to the similarities and differences between DL and related concepts, including ‘shared’, ‘collective’, ‘collaborative’, ‘emergent’, ‘co-’ and ‘democratic’ leadership. Findings indicate that, while there are some common theoretical bases, the relative usage of these concepts varies over time, between countries and between sectors. In particular, DL is a notion that has seen a rapid growth in interest since the year 2000, but research remains largely restricted to the field of school education and of proportionally more interest to UK than US-based academics. Several scholars are increasingly going to great lengths to indicate that, in order to be ‘distributed’, leadership need not necessarily be widely ‘shared’ or ‘democratic’ and, in order to be effective, there is a need to balance different ‘hybrid configurations’ of practice. The paper highlights a number of areas for further attention, including three factors relating to the context of much work on DL (power and influence; organizational boundaries and context; and ethics and diversity), and three methodological and developmental challenges (ontology; research methods; and leadership development, reward and recognition). It is concluded that descriptive and normative perspectives which dominate the literature should be supplemented by more critical accounts which recognize the rhetorical and discursive significance of DL in (re)constructing leader– follower identities, mobilizing collective engagement and challenging or reinforcing traditional forms of organization.",
"title": ""
},
{
"docid": "a2223d57a866b0a0ef138e52fb515b84",
"text": "This paper is concerned with paraphrase detection, i.e., identifying sentences that are semantically identical. The ability to detect similar sentences written in natural language is crucial for several applications, such as text mining, text summarization, plagiarism detection, authorship authentication and question answering. Recognizing this importance, we study in particular how to address the challenges with detecting paraphrases in user generated short texts, such as Twitter, which often contain language irregularity and noise, and do not necessarily contain as much semantic information as longer clean texts. We propose a novel deep neural network-based approach that relies on coarse-grained sentence modelling using a convolutional neural network (CNN) and a recurrent neural network (RNN) model, combined with a specific fine-grained word-level similarity matching model. More specifically, we develop a new architecture, called DeepParaphrase, which enables to create an informative semantic representation of each sentence by (1) using CNN to extract the local region information in form of important n-grams from the sentence, and (2) applying RNN to capture the long-term dependency information. In addition, we perform a comparative study on stateof-the-art approaches within paraphrase detection. An important insight from this study is that existing paraphrase approaches perform well when applied on clean texts, but they do not necessarily deliver good performance against noisy texts, and vice versa. In contrast, our evaluation has shown that the proposed DeepParaphrase-based approach achieves good results in both types of texts, thus making it more robust and generic than the existing approaches.",
"title": ""
},
{
"docid": "43cd94df4a686b89ab6ca5e2782f5a54",
"text": "Relational databases scattered over the web are generally opaque to regular web crawling tools. To address this concern, many RDB-to-RDF approaches have been proposed over the last years. In this paper, we propose a detailed review of seventeen RDB-to-RDF initiatives, considering end-to-end projects that delivered operational tools. The different tools are classified along three major axes: mapping description language, mapping implementation and data retrieval method. We analyse the motivations, commonalities and differences between existing approaches. The expressiveness of existing mapping languages is not always sufficient to produce semantically rich data and make it usable, interoperable and linkable. We therefore briefly present various strategies investigated in the literature to produce additional knowledge. Finally, we show that R2RML, the W3C recommendation for describing RDB to RDF mappings, may not apply to all needs in the wide scope of RDB to RDF translation applications, leaving space for future extensions.",
"title": ""
},
{
"docid": "ffd84e3418a6d1d793f36bfc2efed6be",
"text": "Anterior cingulate cortex (ACC) is a part of the brain's limbic system. Classically, this region has been related to affect, on the basis of lesion studies in humans and in animals. In the late 1980s, neuroimaging research indicated that ACC was active in many studies of cognition. The findings from EEG studies of a focal area of negativity in scalp electrodes following an error response led to the idea that ACC might be the brain's error detection and correction device. In this article, these various findings are reviewed in relation to the idea that ACC is a part of a circuit involved in a form of attention that serves to regulate both cognitive and emotional processing. Neuroimaging studies showing that separate areas of ACC are involved in cognition and emotion are discussed and related to results showing that the error negativity is influenced by affect and motivation. In addition, the development of the emotional and cognitive roles of ACC are discussed, and how the success of this regulation in controlling responses might be correlated with cingulate size. Finally, some theories are considered about how the different subdivisions of ACC might interact with other cortical structures as a part of the circuits involved in the regulation of mental and emotional activity.",
"title": ""
},
{
"docid": "ed83ce40780419961a8c5eeca780636e",
"text": "Some objects in our environment are strongly tied to motor actions, a phenomenon called object affordance. A cup, for example, affords us to reach out to it and grasp it by its handle. Studies indicate that merely viewing an affording object triggers motor activations in the brain. The present study investigated whether object affordance would also result in an attention bias, that is, whether observers would rather attend to graspable objects within reach compared to non-graspable but reachable objects or to graspable objects out of reach. To this end, we conducted a combined reaction time and motion tracking study with a table in a virtual three-dimensional space. Two objects were positioned on the table, one near, the other one far from the observer. In each trial, two graspable objects, two non-graspable objects, or a combination of both was presented. Participants were instructed to detect a probe appearing on one of the objects as quickly as possible. Detection times served as indirect measure of attention allocation. The motor association with the graspable object was additionally enhanced by having participants grasp a real object in some of the trials. We hypothesized that visual attention would be preferentially allocated to the near graspable object, which should be reflected in reduced reaction times in this condition. Our results confirm this assumption: probe detection was fastest at the graspable object at the near position compared to the far position or to a non-graspable object. A follow-up experiment revealed that in addition to object affordance per se, immediate graspability of an affording object may also influence this near-space advantage. Our results suggest that visuospatial attention is preferentially allocated to affording objects which are immediately graspable, and thus establish a strong link between an object' s motor affordance and visual attention.",
"title": ""
},
{
"docid": "7bd3f6b7b2f79f08534b70c16be91c02",
"text": "This paper describes a dual-loop delay-locked loop (DLL) which overcomes the problem of a limited delay range by using multiple voltage-controlled delay lines (VCDLs). A reference loop generates quadrature clocks, which are then delayed with controllable amounts by four VCDLs and multiplexed to generate the output clock in a main loop. This architecture enables the DLL to emulate the infinite-length VCDL with multiple finite-length VCDLs. The DLL incorporates a replica biasing circuit for low-jitter characteristics and a duty cycle corrector immune to prevalent process mismatches. A test chip has been fabricated using a 0.25m CMOS process. At 400 MHz, the peak-to-peak jitter with a quiet 2.5-V supply is 54 ps, and the supply-noise sensitivity is 0.32 ps/mV.",
"title": ""
},
{
"docid": "65da855a28cff9bf67c9f5e42aae9b02",
"text": "Barnacles are a persistent fouling problem in the marine environment, although their effects (eg reduced fuel efficiency, increased corrosion) can be reduced through the application of antifouling or fouling-release coatings to marine structures. However, the developments of fouling-resistant coatings that are cost-effective and that are not deleterious to the marine environment are continually being sought. The incorporation of proteolytic enzymes into coatings has been suggested as one potential option. In this study, the efficacy of a commercially available serine endopeptidase, Alcalase as an antifoulant is assessed and its mode of action on barnacle cypris larvae investigated. In situ atomic force microscopy (AFM) of barnacle cyprid adhesives during exposure to Alcalase supported the hypothesis that Alcalase reduces the effectiveness of the cyprid adhesives, rather than deterring the organisms from settling. Quantitative behavioural tracking of cyprids, using Ethovision 3.1, further supported this observation. Alcalase removed cyprid 'footprint' deposits from glass surfaces within 26 min, but cyprid permanent cement became resistant to attack by Alcalase within 15 h of expression, acquiring a crystalline appearance in its cured state. It is concluded that Alcalase has antifouling potential on the basis of its effects on cyprid footprints, un-cured permanent cement and its non-toxic mode of action, providing that it can be successfully incorporated into a coating.",
"title": ""
},
{
"docid": "50b316a52bdfacd5fe319818d0b22962",
"text": "Artificial neural networks (ANN) are used to predict 1) degree program completion, 2) earned hours, and 3) GPA for college students. The feed forward neural net architecture is used with the back propagation learning function and the logistic activation function. The database used for training and validation consisted of 17,476 student transcripts from Fall 1983 through Fall 1994. It is shown that age, gender, race, ACT scores, and reading level are significant in predicting the degree program completion, earned hours, and GPA. Of the three, earned hours proved the most difficult to predict.",
"title": ""
},
{
"docid": "e027a08aaf7d67e77cb637a449ee99f1",
"text": "Cross-client data deduplication has been widely used to eliminate redundant storage overhead in cloud storage system. Recently, Abadi et al. introduced the primitive of MLE2 with nice security properties for secure and efficient data deduplication. However, besides the computationally expensive noninteractive zero-knowledge proofs, their fully randomized scheme (R-MLE2) requires the inefficient equality-testing algorithm to identify all duplicate ciphertexts. Thus, an interesting challenging problem is how to reduce the overhead of R-MLE2 and propose an efficient construction for R-MLE2. In this paper, we introduce a new primitive called μR-MLE2, which gives a partial positive answer for this challenging problem. We propose two schemes: static scheme and dynamic scheme, where the latter one allows tree adjustment by increasing some computation cost. Our main trick is to use the interactive protocol based on static or dynamic decision trees. The advantage gained from it is, by interacting with clients, the server will reduce the time complexity of deduplication equality test from linear time to efficient logarithmic time over the whole data items in the database. The security analysis and the performance evaluation show that our schemes are Path-PRV-CDA2 secure and achieve several orders of magnitude higher performance for data equality test than R-MLE2 scheme when the number of data items is relatively large.",
"title": ""
},
{
"docid": "4fd4828e4845d22d54ebe7b936402d48",
"text": "Agriculture is the mainstay of the Indian economy. Almost 70% people depend on it & shares major part of the GDP. Diseases in crops mostly on the leaves affects on the reduction of both quality and quantity of agricultural products. Perception of human eye is not so much stronger so as to observe minute variation in the infected part of leaf. In this paper, we are providing software solution to automatically detect and classify plant leaf diseases. In this we are using image processing techniques to classify diseases & quickly diagnosis can be carried out as per disease. This approach will enhance productivity of crops. It includes several steps viz. image acquisition, image pre-processing, segmentation, features extraction and neural network based classification.",
"title": ""
},
{
"docid": "ac0a6e663caa3cb8cdcb1a144561e624",
"text": "A two-stage process is performed by human operator for cleaning windows. The first being the application of cleaning fluid, which is usually achieved by using a wetted applicator. The aim of this task being to cover the whole window area in the shortest possible time. This depends on two parameters: the size of the applicator and the path which the applicator travels without significantly overlapping previously wetted area. The second is the removal of cleaning fluid by a squeegee blade without spillage on to other areas of the facade or previously cleaned areas of glass. This is particularly difficult for example if the window is located on the roof of a building and cleaning is performed from inside by the human window cleaner.",
"title": ""
},
{
"docid": "a80e4f3282646e406ce37964cf6bb933",
"text": "In this paper we present a new semantic smoothing vector space kernel (S-VSM) for text documents clustering. In the suggested approach semantic relatedness between words is used to smooth the similarity and the representation of text documents. The basic hypothesis examined is that considering semantic relatedness between two text documents may improve the performance of the text document clustering task. For our experimental evaluation we analyze the performance of several semantic relatedness measures when embedded in the proposed (S-VSM) and present results with respect to different experimental conditions, such as: (i) the datasets used, (ii) the underlying knowledge sources of the utilized measures, and (iii) the clustering algorithms employed. To the best of our knowledge, the current study is the first to systematically compare, analyze and evaluate the impact of semantic smoothing in text clustering based on ‘wisdom of linguists’, e.g., WordNets, ‘wisdom of crowds’, e.g., Wikipedia, and ‘wisdom of corpora’, e.g., large text corpora represented with the traditional Bag of Words (BoW) model. Three semantic relatedness measures for text are considered; two knowledge-based (Omiotis [1] that uses WordNet, and WLM [2] that uses Wikipedia), and one corpus-based (PMI [3] trained on a semantically tagged SemCor version). For the comparison of different experimental conditions we use the BCubed F-Measure evaluation metric which satisfies all formal constraints of good quality cluster. The experimental results show that the clustering performance based on the S-VSM is better compared to the traditional VSM model and compares favorably against the standard GVSM kernel which uses word co-occurrences to compute the latent similarities between document terms. 2013 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "de4e2e131a0ceaa47934f4e9209b1cdd",
"text": "With the popularity of mobile devices, spatial crowdsourcing is rising as a new framework that enables human workers to solve tasks in the physical world. With spatial crowdsourcing, the goal is to crowdsource a set of spatiotemporal tasks (i.e., tasks related to time and location) to a set of workers, which requires the workers to physically travel to those locations in order to perform the tasks. In this article, we focus on one class of spatial crowdsourcing, in which the workers send their locations to the server and thereafter the server assigns to every worker tasks in proximity to the worker’s location with the aim of maximizing the overall number of assigned tasks. We formally define this maximum task assignment (MTA) problem in spatial crowdsourcing, and identify its challenges. We propose alternative solutions to address these challenges by exploiting the spatial properties of the problem space, including the spatial distribution and the travel cost of the workers. MTA is based on the assumptions that all tasks are of the same type and all workers are equally qualified in performing the tasks. Meanwhile, different types of tasks may require workers with various skill sets or expertise. Subsequently, we extend MTA by taking the expertise of the workers into consideration. We refer to this problem as the maximum score assignment (MSA) problem and show its practicality and generality. Extensive experiments with various synthetic and two real-world datasets show the applicability of our proposed framework.",
"title": ""
},
{
"docid": "8d581aef7779713f3cb9f236fb83d7ff",
"text": "Sandro Botticelli was one of the most esteemed painters and draughtsmen among Renaissance artists. Under the patronage of the De' Medici family, he was active in Florence during the flourishing of the Renaissance trend towards the reclamation of lost medical and anatomical knowledge of ancient times through the dissection of corpses. Combining the typical attributes of the elegant courtly style with hallmarks derived from the investigation and analysis of classical templates, he left us immortal masterpieces, the excellence of which incomprehensibly waned and was rediscovered only in the 1890s. Few know that it has already been reported that Botticelli concealed the image of a pair of lungs in his masterpiece, The Primavera. The present investigation provides evidence that Botticelli embedded anatomic imagery of the lung in another of his major paintings, namely, The Birth of Venus. Both canvases were most probably influenced and enlightened by the neoplatonic philosophy of the humanist teachings in the De' Medici's circle, and they represent an allegorical celebration of the cycle of life originally generated by the Divine Wind or Breath. This paper supports the theory that because of the anatomical knowledge to which he was exposed, Botticelli aimed to enhance the iconographical meaning of both the masterpieces by concealing images of the lung anatomy within them.",
"title": ""
},
{
"docid": "11ed7e0742ddb579efe6e1da258b0d3c",
"text": "Supervisory Control and Data Acquisition(SCADA) systems are deeply ingrained in the fabric of critical infrastructure sectors. These computerized real-time process control systems, over geographically dispersed continuous distribution operations, are increasingly subject to serious damage and disruption by cyber means due to their standardization and connectivity to other networks. However, SCADA systems generally have little protection from the escalating cyber threats. In order to understand the potential danger and to protect SCADA systems, in this paper, we highlight their difference from standard IT systems and present a set of security property goals. Furthermore, we focus on systematically identifying and classifying likely cyber attacks including cyber-induced cyber-physical attack son SCADA systems. Determined by the impact on control performance of SCADA systems, the attack categorization criteria highlights commonalities and important features of such attacks that define unique challenges posed to securing SCADA systems versus traditional Information Technology(IT) systems.",
"title": ""
},
{
"docid": "ab662b1dd07a7ae868f70784408e1ce1",
"text": "We use autoencoders to create low-dimensional embeddings of underlying patient phenotypes that we hypothesize are a governing factor in determining how different patients will react to different interventions. We compare the performance of autoencoders that take fixed length sequences of concatenated timesteps as input with a recurrent sequence-to-sequence autoencoder. We evaluate our methods on around 35,500 patients from the latest MIMIC III dataset from Beth Israel Deaconess Hospital.",
"title": ""
}
] |
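Among the passages above, the semantic smoothing vector space kernel (S-VSM) reduces to a compact computation: document similarity is the bilinear form d1' S d2, where S holds word-to-word relatedness, so related but distinct terms still contribute to the score. The vocabulary, relatedness values, and normalization below are assumptions for illustration and do not reproduce the paper's Omiotis/WLM/PMI measures.

```python
import numpy as np

# Assumed toy vocabulary and word-relatedness matrix S (1.0 on the diagonal,
# off-diagonal entries stand in for WordNet/Wikipedia/PMI relatedness scores).
vocab = ["car", "automobile", "driver", "banana"]
S = np.array([[1.0, 0.9, 0.4, 0.0],
              [0.9, 1.0, 0.4, 0.0],
              [0.4, 0.4, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])

def bow(tokens):
    """Plain bag-of-words term-frequency vector over the assumed vocabulary."""
    v = np.zeros(len(vocab))
    for t in tokens:
        if t in vocab:
            v[vocab.index(t)] += 1
    return v

def svsm_sim(d1, d2):
    """Smoothed cosine: d1' S d2 normalized by the smoothed norm of each document."""
    num = d1 @ S @ d2
    den = np.sqrt(d1 @ S @ d1) * np.sqrt(d2 @ S @ d2)
    return num / den if den > 0 else 0.0

a = bow(["car", "driver"])
b = bow(["automobile", "driver"])
print("plain cosine :", (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b)))
print("S-VSM cosine :", svsm_sim(a, b))   # higher, since "car" and "automobile" are related
```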
scidocsrr
|
9fb6202d6d18b99a484bb9a3b41b1132
|
Car Number Plate Recognition (CNPR) system using multiple template matching
|
[
{
"docid": "9185a7823e699c758dde3a81f7d6d86d",
"text": "Reading text from photographs is a challenging problem that has received a significant amount of attention. Two key components of most systems are (i) text detection from images and (ii) character recognition, and many recent methods have been proposed to design better feature representations and models for both. In this paper, we apply methods recently developed in machine learning -- specifically, large-scale algorithms for learning the features automatically from unlabeled data -- and show that they allow us to construct highly effective classifiers for both detection and recognition to be used in a high accuracy end-to-end system.",
"title": ""
},
{
"docid": "a81c87374e7ea9a3066f643ac89bfd2b",
"text": "Image edge detection is a process of locating the e dg of an image which is important in finding the approximate absolute gradient magnitude at each point I of an input grayscale image. The problem of getting an appropriate absolute gradient magnitude for edges lies in the method used. The Sobel operator performs a 2-D spatial gradient measurement on images. Transferri ng a 2-D pixel array into statistically uncorrelated data se t enhances the removal of redundant data, as a result, reduction of the amount of data is required to represent a digital image. The Sobel edge detector uses a pair of 3 x 3 convolution masks, one estimating gradient in the x-direction and the other estimating gradient in y–direction. The Sobel detector is incredibly sensit ive o noise in pictures, it effectively highlight them as edges. Henc e, Sobel operator is recommended in massive data communication found in data transfer.",
"title": ""
}
] |
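The second passage above spells out the Sobel operator's pair of 3 x 3 convolution masks for the x and y gradients. The sketch below applies both masks to a small grayscale array and combines them into a gradient magnitude; the test image and the L2 magnitude combination are illustrative choices.

```python
import numpy as np

# The standard Sobel 3x3 masks for horizontal (x) and vertical (y) gradients.
KX = np.array([[-1, 0, 1],
               [-2, 0, 2],
               [-1, 0, 1]], dtype=float)
KY = np.array([[-1, -2, -1],
               [ 0,  0,  0],
               [ 1,  2,  1]], dtype=float)

def convolve2d(img, kernel):
    """Naive 'valid' 2-D correlation, adequate for a sketch (no padding, no kernel flipping)."""
    h, w = img.shape
    kh, kw = kernel.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def sobel_magnitude(img):
    gx = convolve2d(img, KX)
    gy = convolve2d(img, KY)
    return np.sqrt(gx ** 2 + gy ** 2)

# Assumed test image: a dark-to-bright vertical step edge.
img = np.zeros((6, 6))
img[:, 3:] = 255.0
print(sobel_magnitude(img))   # large values where the 3x3 window straddles the step edge
```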
[
{
"docid": "275afb5836acf741593f6bac90e5ffce",
"text": "We propose algorithms to address the spectrum efficiency and fairness issues of multi band multiuser Multiple-Input and Multiple-Output (MIMO) cognitive ad-hoc networks. To improve the transmission efficiency of the MIMO system, a cross layer antenna selection algorithm is proposed. Using the transmission efficiency results, user data rate of the cognitive ad-hoc network is determined. Objective function for the average data rate of the multi band multiuser cognitive MIMO ad-hoc network is also defined. For the average data rate objective function, primary users interference is considered as performance constraint. Furthermore, using the user data rate results, a learning-based channel allocation algorithm is proposed. Finally, numerical results are presented for performance evaluation of the proposed antenna selection and channel allocation algorithms.",
"title": ""
},
{
"docid": "97353be7c54dd2ded69815bf93545793",
"text": "In recent years, with the rapid development of deep learning, it has achieved great success in the field of image recognition. In this paper, we applied the convolution neural network (CNN) on supermarket commodity identification, contributing to the study of supermarket commodity identification. Different from the QR code identification of supermarket commodity, our work applied the CNN using the collected images of commodity as input. This method has the characteristics of fast and non-contact. In this paper, we mainly did the following works: 1. Collected a small dataset of supermarket goods. 2. Built Different convolutional neural network frameworks in caffe and trained the dataset using the built networks. 3. Improved train methods by finetuning the trained model.",
"title": ""
},
{
"docid": "39debcb0aa41eec73ff63a4e774f36fd",
"text": "Automatically segmenting unstructured text strings into structured records is necessary for importing the information contained in legacy sources and text collections into a data warehouse for subsequent querying, analysis, mining and integration. In this paper, we mine tables present in data warehouses and relational databases to develop an automatic segmentation system. Thus, we overcome limitations of existing supervised text segmentation approaches, which require comprehensive manually labeled training data. Our segmentation system is robust, accurate, and efficient, and requires no additional manual effort. Thorough evaluation on real datasets demonstrates the robustness and accuracy of our system, with segmentation accuracy exceeding state of the art supervised approaches.",
"title": ""
},
{
"docid": "625b96d21cb9ff05785aa34c98c567ff",
"text": "The number of mitoses per tissue area gives an important aggressiveness indication of the invasive breast carcinoma. However, automatic mitosis detection in histology images remains a challenging problem. Traditional methods either employ hand-crafted features to discriminate mitoses from other cells or construct a pixel-wise classifier to label every pixel in a sliding window way. While the former suffers from the large shape variation of mitoses and the existence of many mimics with similar appearance, the slow speed of the later prohibits its use in clinical practice. In order to overcome these shortcomings, we propose a fast and accurate method to detect mitosis by designing a novel deep cascaded convolutional neural network, which is composed of two components. First, by leveraging the fully convolutional neural network, we propose a coarse retrieval model to identify and locate the candidates of mitosis while preserving a high sensitivity. Based on these candidates, a fine discrimination model utilizing knowledge transferred from cross-domain is developed to further single out mitoses from hard mimics. Our approach outperformed other methods by a large margin in 2014 ICPR MITOS-ATYPIA challenge in terms of detection accuracy. When compared with the state-of-the-art methods on the 2012 ICPR MITOSIS data (a smaller and less challenging dataset), our method achieved comparable or better results with a roughly 60 times faster speed.",
"title": ""
},
{
"docid": "8bc418be099f14d677d3fdfbfa516248",
"text": "The present study examines the influence of social context on the use of emoticons in Internet communication. Secondary school students (N = 158) responded to short internet chats. Social context (task-oriented vs. socio-emotional) and valence of the context (positive vs. negative) were manipulated in these chats. Participants were permitted to respond with text, emoticon or a combination of both. Results showed that participants used more emoticons in socio-emotional than in task-oriented social contexts. Furthermore, students used more positive emoticons in positive contexts and more negative emoticons in negative contexts. An interaction was found between valence and kind of context; in negative, task-oriented contexts subjects used the least emoticons. Results are related to research about the expression of emotions in face-to-face interaction. 2004 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "813a45c7cae19fcd548a8b95a670d65a",
"text": "In this paper, conical monopole type UWB antenna which suppress dual bands is proposed. The SSRs were arranged in such a way that the interaction of the magnetic field with them enables the UWB antenna to reject the dual bands using the resonance of SRRs. The proposed conical monopole antenna has a return loss less than -10dB and antenna gain greater than 5dB at 2GHz~11GHz frequency band, except the suppressed bands. The return loss and gain at WiMAX and WLAN bands is greater than -3dB and less than 0dB respectively.",
"title": ""
},
{
"docid": "ca3c3dec83821747896d44261ba2f9ad",
"text": "Building discriminative representations for 3D data has been an important task in computer graphics and computer vision research. Convolutional Neural Networks (CNNs) have shown to operate on 2D images with great success for a variety of tasks. Lifting convolution operators to 3D (3DCNNs) seems like a plausible and promising next step. Unfortunately, the computational complexity of 3D CNNs grows cubically with respect to voxel resolution. Moreover, since most 3D geometry representations are boundary based, occupied regions do not increase proportionately with the size of the discretization, resulting in wasted computation. In this work, we represent 3D spaces as volumetric fields, and propose a novel design that employs field probing filters to efficiently extract features from them. Each field probing filter is a set of probing points — sensors that perceive the space. Our learning algorithm optimizes not only the weights associated with the probing points, but also their locations, which deforms the shape of the probing filters and adaptively distributes them in 3D space. The optimized probing points sense the 3D space “intelligently”, rather than operating blindly over the entire domain. We show that field probing is significantly more efficient than 3DCNNs, while providing state-of-the-art performance, on classification tasks for 3D object recognition benchmark datasets.",
"title": ""
},
{
"docid": "62b345b0aa68a909fbbded8ba18ea75c",
"text": "The transmission of malaria is highly variable and depends on a range of climatic and anthropogenic factors. In addition, the dispersal of Anopheles mosquitoes is a key determinant that affects the persistence and dynamics of malaria. Simple, lumped-population models of malaria prevalence have been insufficient for predicting the complex responses of malaria to environmental changes. A stochastic lattice-based model that couples a mosquito dispersal and a susceptible-exposed-infected-recovered epidemics model was developed for predicting the dynamics of malaria in heterogeneous environments. The It$$\\hat{o}$$ o^ approximation of stochastic integrals with respect to Brownian motion was used to derive a model of stochastic differential equations. The results show that stochastic equations that capture uncertainties in the life cycle of mosquitoes and interactions among vectors, parasites, and hosts provide a mechanism for the disruptions of malaria. Finally, model simulations for a case study in the rural area of Kilifi county, Kenya are presented. A stochastic lattice-based integrated malaria model has been developed. The applicability of the model for capturing the climate-driven hydrologic factors and demographic variability on malaria transmission has been demonstrated.",
"title": ""
},
{
"docid": "dfaa6e183e70cbacc5c9de501993b7af",
"text": "Traditional buildings consume more of the energy resources than necessary and generate a variety of emissions and waste. The solution to overcoming these problems will be to build them green and smart. One of the significant components in the concept of smart green buildings is using renewable energy. Solar energy and wind energy are intermittent sources of energy, so these sources have to be combined with other sources of energy or storage devices. While batteries and/or supercapacitors are an ideal choice for short-term energy storage, regenerative hydrogen-oxygen fuel cells are a promising candidate for long-term energy storage. This paper is to design and test a green building energy system that consists of renewable energy, energy storage, and energy management. The paper presents the architecture of the proposed green building energy system and a simulation model that allows for the study of advanced control strategies for the green building energy system. An example green building energy system is tested and simulation results show that the variety of energy source and storage devices can be managed very well.",
"title": ""
},
{
"docid": "a827f7ceabd844453dcf81cf7f87c7db",
"text": "Steganography means hiding the secret message within an ordinary message and extraction of it as its destination. In the texture synthesis process here re-samples smaller texture image which gives a new texture image with a similar local appearance. In the existing system, work is done for the texture synthesis process but the embedding capacity of those systems is very low. In the project introduced the method SURTDS (steganography using reversible texture synthesis) for enhancing the embedding capacity of the system by using the difference expansion method with texture synthesis. Initially, this system evaluates the binary value of the secret image and converts this value into a decimal value. The process of embedding is performed by using the difference expansion techniques. Difference expansion computes the average and difference in a patch and embedded the value one by one. This system improves the embedding capacity of the stego image. The experimental result has verified that this system improves the embedding capacity of the SURTDS is better than the existing system.",
"title": ""
},
{
"docid": "61a782f8797b76d6d5ce581729c3cfc0",
"text": "Wordnets are lexico-semantic resources essential in many NLP tasks. Princeton WordNet is the most widely known, and the most influential, among them. Wordnets for languages other than English tend to adopt unquestioningly WordNet’s structure and its net of lexicalised concepts. We discuss a large wordnet constructed independently of WordNet, upon a model with a small yet significant difference. A mapping onto WordNet is under way; the large portions already linked open up a unique perspective on the comparison of similar but not fully compatible lexical resources. We also try to characterise numerically a wordnet’s aptitude for NLP applications.",
"title": ""
},
{
"docid": "336d83fd5628d9325fed0d88c56bc617",
"text": "Influence of fruit development and ripening on the changes in physico-chemical properties, antiradical activity and the accumulation of polyphenolic compounds were investigated in Maoluang fruits. Total phenolics content (TP) was assayed according to the Folin-Ciocalteu method, and accounted for 19.60-8.66 mg GAE/g f.w. The TP gradually decreased from the immature to the over ripe stages. However, the total anthocyanin content (TA) showed the highest content at the over ripe stage, with an average value of 141.94 mg/100 g f.w. The antiradical activity (AA) of methanolic extracts from Maoluang fruits during development and ripening were determined with DPPH (2,2-diphenyl-1-picrylhydrazyl) radical scavenging. The highest AA was observed at the immature stage accompanied by the highest content of gallic acid and TP. Polyphenols were quantified by HPLC. The level of procyanidin B2, procyanidin B1, (+)-catechin, (–)-epicatechin, rutin and tran-resveratrol as the main polyphenol compounds, increased during fruit development and ripening. Other phenolic acids such as gallic, caffeic, and ellagic acids significantly decreased (p < 0.05) during fruit development and ripening. At over ripe stage, Maoluang possess the highest antioxidants. Thus, the over ripe stage would be the appropriate time to harvest when taking nutrition into consideration. This existing published information provides a helpful daily diet guide and useful guidance for industrial utilization of Maoluang fruits.",
"title": ""
},
{
"docid": "105f34c3fa2d4edbe83d184b7cf039aa",
"text": "Software development methodologies are constantly evolving due to changing technologies and new demands from users. Today's dynamic business environment has given rise to emergent organizations that continuously adapt their structures, strategies, and policies to suit the new environment [12]. Such organizations need information systems that constantly evolve to meet their changing requirements---but the traditional, plan-driven software development methodologies lack the flexibility to dynamically adjust the development process.",
"title": ""
},
{
"docid": "82917c4e6fb56587cc395078c14f3bb7",
"text": "We can leverage data and complex systems science to better understand society and human nature on a population scale through language — utilizing tools that include sentiment analysis, machine learning, and data visualization. Data-driven science and the sociotechnical systems that we use every day are enabling a transformation from hypothesis-driven, reductionist methodology to complex systems sciences. Namely, the emergence and global adoption of social media has rendered possible the real-time estimation of population-scale sentiment, with profound implications for our understanding of human behavior. Advances in computing power, natural language processing, and digitization of text now make it possible to study a culture’s evolution through its texts using a “big data” lens. Given the growing assortment of sentiment measuring instruments, it is imperative to understand which aspects of sentiment dictionaries contribute to both their classification accuracy and their ability to provide richer understanding of texts. Here, we perform detailed, quantitative tests and qualitative assessments of 6 dictionary-based methods applied to 4 different corpora, and briefly examine a further 20 methods. We show that while inappropriate for sentences, dictionary-based methods are generally robust in their classification accuracy for longer texts. Most importantly they can aid understanding of texts with reliable and meaningful word shift graphs if (1) the dictionary covers a sufficiently large enough portion of a given text’s lexicon when weighted by word usage frequency; and (2) words are scored on a continuous scale. Our ability to communicate relies in part upon a shared emotional experience, with stories often following distinct emotional trajectories, forming patterns that are meaningful to us. By classifying the emotional arcs for a filtered subset of 4,803 stories from Project Gutenberg’s fiction collection, we find a set of six core trajectories which form the building blocks of complex narratives. We strengthen our findings by separately applying optimization, linear decomposition, supervised learning, and unsupervised learning. For each of these six core emotional arcs, we examine the closest characteristic stories in publication today and find that particular emotional arcs enjoy greater success, as measured by downloads. Within stories lie the core values of social behavior, rich with both strategies and proper protocol, which we can begin to study more broadly and systematically as a true reflection of culture. Of profound scientific interest will be the degree to which we can eventually understand the full landscape of human stories, and data driven approaches will play a crucial role. Finally, we utilize web-scale data from Twitter to study the limits of what social data can tell us about public health, mental illness, discourse around the protest movement of #BlackLivesMatter, discourse around climate change, and hidden networks. We conclude with a review of published works in complex systems that separately analyze charitable donations, the happiness of words in 10 languages, 100 years of daily temperature data across the United States, and Australian Rules Football games.",
"title": ""
},
{
"docid": "8d7af01e003961cbf2a473abe32d8b7e",
"text": "This paper presents a series of control strategies for soft compliant manipulators. We provide a novel approach to control multi-fingered tendon-driven foam hands using a CyberGlove and a simple ridge regression model. The results achieved include complex posing, dexterous grasping and in-hand manipulations. To enable efficient data sampling and a more intuitive design process of foam robots, we implement and evaluate a finite element based simulation. The accuracy of this model is evaluated using a Vicon motion capture system. We then use this simulation to solve inverse kinematics and compare the performance of supervised learning, reinforcement learning, nearest neighbor and linear ridge regression methods in terms of their accuracy and sample efficiency.",
"title": ""
},
{
"docid": "96973058d3ca943f3621dfe843baf631",
"text": "Many organizations are gradually catching up with the tide of adopting agile practices at workplace, but they seem to be struggling with how to choose the agile practices and mix them into their IT software project development and management. These organizations have already had their own development styles, many of which have adhered to the traditional plan-driven methods such as waterfall. The inherent corporate culture of resisting to change or hesitation to abandon what they have established for a whole new methodology hampers the process change. In this paper, we will review the current state of agile adoption in business organizations and propose a new approach to IT project development and management by blending Scrum, an agile method, into traditional plan-driven project development and management. The management activity involved in Scrum is discussed, the team and meeting composing of Scrum are investigated, the challenges and benefits of applying Scrum in traditional IT project development and management are analyzed, the blending structure is illustrated and discussed, and the iterative process with Scrum and planned process without Scrum are compared.",
"title": ""
},
{
"docid": "c31ffcb1514f437313c2f3f0abaf3a88",
"text": "Identifying temporal relations between events is an essential step towards natural language understanding. However, the temporal relation between two events in a story depends on, and is often dictated by, relations among other events. Consequently, effectively identifying temporal relations between events is a challenging problem even for human annotators. This paper suggests that it is important to take these dependencies into account while learning to identify these relations and proposes a structured learning approach to address this challenge. As a byproduct, this provides a new perspective on handling missing relations, a known issue that hurts existing methods. As we show, the proposed approach results in significant improvements on the two commonly used data sets for this problem.",
"title": ""
},
{
"docid": "245a31291c7d8fbaac249f9e4585c652",
"text": "A recent advancement of the mobile web has enabled features previously only found in natively developed apps. Thus, arduous development for several platforms or using cross-platform approaches was required. The novel approach, coined Progressive Web Apps, can be implemented through a set of concepts and technologies on any web site that meets certain requirements. In this paper, we argue for progressive web apps as a possibly unifying technology for web apps and native apps. After an introduction of features, we scrutinize the performance. Two cross-platform mobile apps and one Progressive Web App have been developed for comparison purposes, and provided in an open source repository for results’ validity verification. We aim to spark interest in the academic community, as a lack of academic involvement was identified as part of the literature search.",
"title": ""
},
{
"docid": "c0db1cd3688a18c853331772dbdfdedc",
"text": "In this review we describe the challenges and opportunities for creating magnetically active metamaterials in the optical part of the spectrum. The emphasis is on the sub-wavelength periodic metamaterials whose unit cell is much smaller than the optical wavelength. The conceptual differences between microwave and optical metamaterials are demonstrated. We also describe several theoretical techniques used for calculating the effective parameters of plasmonic metamaterials: the effective dielectric permittivity eff(ω) and magnetic permeability μeff(ω). Several examples of negative permittivity and negative permeability plasmonic metamaterials are used to illustrate the theory. c © 2008 Elsevier Ltd. All rights reserved. PACS: 42.70.-a; 41.20.Gz; 78.67.Bf",
"title": ""
},
{
"docid": "2e7b03f13b1c33a42b3ff77886e0683e",
"text": "Internet of Things (IoT), cloud computing and integrated deployment are becoming central topics in Internet development. In this paper, a cloud containerization solution, Docker, has been introduced to the original \"Bluetooth Based Software Defined Function\" (BT-SDF), a framework designed to simplify the IoT function redefining process. With the assistance of Docker and its clustering, Docker Swarm, the BT-SDF will be transformed to a scalable, extensible and flexible IoT function redefining framework named \"Cloud and Bluetooth based software defined function\".",
"title": ""
}
] |
scidocsrr
|
6c9ea1fc40a5839741027f008ab7a70e
|
Link Prediction via Matrix Completion
|
[
{
"docid": "228f2487760407daf669676ce3677609",
"text": "The limitation of using low electron doses in non-destructive cryo-electron tomography of biological specimens can be partially offset via averaging of aligned and structurally homogeneous subsets present in tomograms. This type of sub-volume averaging is especially challenging when multiple species are present. Here, we tackle the problem of conformational separation and alignment with a \"collaborative\" approach designed to reduce the effect of the \"curse of dimensionality\" encountered in standard pair-wise comparisons. Our new approach is based on using the nuclear norm as a collaborative similarity measure for alignment of sub-volumes, and by exploiting the presence of symmetry early in the processing. We provide a strict validation of this method by analyzing mixtures of intact simian immunodeficiency viruses SIV mac239 and SIV CP-MAC. Electron microscopic images of these two virus preparations are indistinguishable except for subtle differences in conformation of the envelope glycoproteins displayed on the surface of each virus particle. By using the nuclear norm-based, collaborative alignment method presented here, we demonstrate that the genetic identity of each virus particle present in the mixture can be assigned based solely on the structural information derived from single envelope glycoproteins displayed on the virus surface.",
"title": ""
}
] |
[
{
"docid": "0cec4473828bf542d97b20b64071a890",
"text": "The effectiveness of knowledge transfer using classification algorithms depends on the difference between the distribution that generates the training examples and the one from which test examples are to be drawn. The task can be especially difficult when the training examples are from one or several domains different from the test domain. In this paper, we propose a locally weighted ensemble framework to combine multiple models for transfer learning, where the weights are dynamically assigned according to a model's predictive power on each test example. It can integrate the advantages of various learning algorithms and the labeled information from multiple training domains into one unified classification model, which can then be applied on a different domain. Importantly, different from many previously proposed methods, none of the base learning method is required to be specifically designed for transfer learning. We show the optimality of a locally weighted ensemble framework as a general approach to combine multiple models for domain transfer. We then propose an implementation of the local weight assignments by mapping the structures of a model onto the structures of the test domain, and then weighting each model locally according to its consistency with the neighborhood structure around the test example. Experimental results on text classification, spam filtering and intrusion detection data sets demonstrate significant improvements in classification accuracy gained by the framework. On a transfer learning task of newsgroup message categorization, the proposed locally weighted ensemble framework achieves 97% accuracy when the best single model predicts correctly only on 73% of the test examples. In summary, the improvement in accuracy is over 10% and up to 30% across different problems.",
"title": ""
},
{
"docid": "6e051906ec3deac14acb249ea4982d2e",
"text": "Recent attempts to fabricate surfaces with custom reflectance functions boast impressive angular resolution, yet their spatial resolution is limited. In this paper we present a method to construct spatially varying reflectance at a high resolution of up to 220dpi, orders of magnitude greater than previous attempts, albeit with a lower angular resolution. The resolution of previous approaches is limited by the machining, but more fundamentally, by the geometric optics model on which they are built. Beyond a certain scale geometric optics models break down and wave effects must be taken into account. We present an analysis of incoherent reflectance based on wave optics and gain important insights into reflectance design. We further suggest and demonstrate a practical method, which takes into account the limitations of existing micro-fabrication techniques such as photolithography to design and fabricate a range of reflection effects, based on wave interference.",
"title": ""
},
{
"docid": "565a8ea886a586dc8894f314fa21484a",
"text": "BACKGROUND\nThe Entity Linking (EL) task links entity mentions from an unstructured document to entities in a knowledge base. Although this problem is well-studied in news and social media, this problem has not received much attention in the life science domain. One outcome of tackling the EL problem in the life sciences domain is to enable scientists to build computational models of biological processes with more efficiency. However, simply applying a news-trained entity linker produces inadequate results.\n\n\nMETHODS\nSince existing supervised approaches require a large amount of manually-labeled training data, which is currently unavailable for the life science domain, we propose a novel unsupervised collective inference approach to link entities from unstructured full texts of biomedical literature to 300 ontologies. The approach leverages the rich semantic information and structures in ontologies for similarity computation and entity ranking.\n\n\nRESULTS\nWithout using any manual annotation, our approach significantly outperforms state-of-the-art supervised EL method (9% absolute gain in linking accuracy). Furthermore, the state-of-the-art supervised EL method requires 15,000 manually annotated entity mentions for training. These promising results establish a benchmark for the EL task in the life science domain. We also provide in depth analysis and discussion on both challenges and opportunities on automatic knowledge enrichment for scientific literature.\n\n\nCONCLUSIONS\nIn this paper, we propose a novel unsupervised collective inference approach to address the EL problem in a new domain. We show that our unsupervised approach is able to outperform a current state-of-the-art supervised approach that has been trained with a large amount of manually labeled data. Life science presents an underrepresented domain for applying EL techniques. By providing a small benchmark data set and identifying opportunities, we hope to stimulate discussions across natural language processing and bioinformatics and motivate others to develop techniques for this largely untapped domain.",
"title": ""
},
{
"docid": "530cb20db77c76d229fd90e73b3a65ca",
"text": "While automatic response generation for building chatbot s ys ems has drawn a lot of attention recently, there is limited understanding on when we need to c onsider the linguistic context of an input text in the generation process. The task is challeng ing, as messages in a conversational environment are short and informal, and evidence that can in dicate a message is context dependent is scarce. After a study of social conversation data cra wled from the web, we observed that some characteristics estimated from the responses of messa ges are discriminative for identifying context dependent messages. With the characteristics as we ak supervision, we propose using a Long Short Term Memory (LSTM) network to learn a classifier. O ur method carries out text representation and classifier learning in a unified framewor k. Experimental results show that the proposed method can significantly outperform baseline meth ods on accuracy of classification.",
"title": ""
},
{
"docid": "95ac40af0bc68a69a1f56fdb358c149e",
"text": "This paper presents an approach to the study of cognitive activities in collaborative software development. This approach has been developed by a multidisciplinary team made up of software engineers and cognitive psychologists. The basis of this approach is to improve our understanding of software development by observing professionals at work. The goal is to derive lines of conduct or good practices based on observations and analyses of the processes that are naturally used by software engineers. The strategy involved is derived from a standard approach in cognitive science. It is based on the videotaping of the activities of software engineers, transcription of the videos, coding of the transcription, defining categories from the coded episodes and defining cognitive behaviors or dialogs from the categories. This project presents two original contributions that make this approach generic in software engineering. The first contribution is the introduction of a formal hierarchical coding scheme, which will enable comparison of various types of observations. The second is the merging of psychological and statistical analysis approaches to build a cognitive model. The details of this new approach are illustrated with the initial data obtained from the analysis of technical review meetings.",
"title": ""
},
{
"docid": "df6b7ae7be0721be558a4a65074f1b78",
"text": "Despite the increasing importance and popularity of association football forecasting systems there is no agreed method of evaluating their accuracy. We have classified the evaluators used into two broad categories: those which consider only the prediction for the observed outcome; and those which consider the predictions for the unobserved as well as observed outcome. We highlight fundamental inconsistencies between them and demonstrate that they produce wildly different conclusions about the accuracy of four different forecasting systems (Fink Tank/Castrol Predictor, Bet365, Odds Wizard, and pi-football) based on recent Premier league data. None of the existing evaluators satisfy a set of simple theoretical benchmark criteria. Hence, it is dangerous to assume that any existing evaluator can adequately assess the performance of football forecasting systems and, until evaluators are developed that address all the benchmark criteria, it is best to use multiple types of predictive evaluators (preferably based on posterior validation).",
"title": ""
},
{
"docid": "c139f6b162c5dd9a849a28ece14ea097",
"text": "Digital documents are vulnerable to being copied. Most existing copy detection prototypes employ an exhaustive sentence-based comparison method in comparing a potential plagiarized document against a repository of legal or original documents to identify plagiarism activities. This approach is not scalable due to the potentially large number of original documents and the large number of sentences in each document . Furthermore, the security level of existing mechanisms is quite weak; a plagiarized document could simply by-pass the detection mechanisms by performing a minor modification on each sentence. In this paper, we propose a copy detection mechanism that will el iminate unnecessary comparisons. This is based on the observation that comparisons between two documents addressing different subjects are not necessary. We describe the design and implementation of our exper imental proto type called CHECK. The results of some exploratory experiments will be illust rated and the security level of our mechanism will be discussed.",
"title": ""
},
{
"docid": "a10a51d1070396e1e8a8b186af18f87d",
"text": "An upcoming trend for automobile manufacturers is to provide firmware updates over the air (FOTA) as a service. Since the firmware controls the functionality of a vehicle, security is important. To this end, several secure FOTA protocols have been developed. However, the secure FOTA protocols only solve the security for the transmission of the firmware binary. Once the firmware is downloaded, an attacker could potentially modify its contents before it is flashed to the corresponding ECU'S ROM. Thus, there is a need to extend the flashing procedure to also verify that the correct firmware has been flashed to the ECU. We present a framework for self-verification of firmware updates over the air. We include a verification code in the transmission to the vehicle, and after the firmware has been flashed, the integrity of the memory contents can be verified using the verification code. The verification procedure entails only simple hash functions and is thus suitable for the limited resources in the vehicle. Virtualization techniques are employed to establish a trusted computing base in the ECU, which is then used to perform the verification. The proposed framework allows the ECU itself to perform self-verification and can thus ensure the successful flashing of the firmware.",
"title": ""
},
{
"docid": "2ea626f0e1c4dfa3d5a23c80d8fbf70c",
"text": "Although research studies in education show that use of technology can help student learning, its use is generally affected by certain barriers. In this paper, we first identify the general barriers typically faced by K-12 schools, both in the United States as well as other countries, when integrating technology into the curriculum for instructional purposes, namely: (a) resources, (b) institution, (c) subject culture, (d) attitudes and beliefs, (e) knowledge and skills, and (f) assessment. We then describe the strategies to overcome such barriers: (a) having a shared vision and technology integration plan, (b) overcoming the scarcity of resources, (c) changing attitudes and beliefs, (d) conducting professional development, and (e) reconsidering assessments. Finally, we identify several current knowledge gaps pertaining to the barriers and strategies of technology integration, and offer pertinent recommendations for future research.",
"title": ""
},
{
"docid": "42ec2c6dfab766d76c71507827ba2275",
"text": "In this era of internet transfer of information is in digital form using multimedia files such as image, video, audio, etc. which relies on secure communication techniques to convey information safely. Due to the frequent transfer of videos over the internet nowadays they have become a good cover media for secure and covert communication in the form of video steganography. For efficient video steganography, it must fulfill its basic requirements such as capacity, imperceptibility, and robustness. In order to make a balance between imperceptibility and robustness, an efficient video steganography scheme is proposed for Standard Definition (SD) and High Definition (HD) videos. This scheme employs DWT (discrete wavelet transforms) for embedding the secret message inside the video frames utilizing only luminance (Y) component, and the security of the proposed scheme is strengthen by pre-processing the secret message with encryption before embedding. The embedding process is done by utilizing the middle-frequency sub-bands after applying second level 2-D DWT to the video frames to decompose it into 16 sub-bands. The performance of the proposed scheme is tested on different videos with quality metrics including peak signal to noise ratio (PSNR), structural similarity (SSIM) index, bit error rate (BER) and also by applying Gaussian and salt & pepper noise attacks. Moreover, the scheme is tested for the different level of compression on stego-video and also compared with U and V components used while embedding. Experimental results show that for both types of videos (HD and SD) the proposed scheme is able to achieve high imperceptibility. Further, it also provides robustness against different types of noise attacks and different compression levels which makes the proposed scheme evident for secure data transmission.",
"title": ""
},
{
"docid": "8a399beb6c89bbd2e8de9a4fd135c74f",
"text": "This paper presents findings emerging from the EU-Funded Games and Learning Alliance (GALA) Network of Excellence on serious games (SGs) that has a focus upon pedagogy-driven design of SGs. The overall framework presented in this paper concerns some elements that we consider key for the design of a new generation of more pedagogically-effective and easily author-able SGs: pedagogical perspectives, learning goals mapping to game mechanics and cognition-based SGs models. The paper also includes two corresponding illustrative case studies, developed by the network, on (1) the analysis of the Re-mission game based on the developed analytical framework of learning and game mechanics and (2) the Sandbox SGs cognition-based model in the Travel in Europe project. Practitioner notes: What is already known about this topic: Early studies have demonstrated the valuable contributions of game-based approaches in education and training. Early work has revealed particular strengths in some entertainment game mechanics, some of which look suited for serious and educational games. There are however no established practices, framework, models in serious games design and development (mechanics, development framework, etc.). Games and learning have separate mechanics and Serious Games Mechanics have yet to be defined. What this paper adds: An overall and integrated view of pedagogy-driven considerations related to SG design. The introduction to an analytical model that maps learning mechanics to game mechanics (LMGM model) towards defining Serious Games mechanics essential for the design and development of games for learning. The discussion on a cognition-based SG design model, the Sandbox SG model (SBSG), that seems to have the potential to contextualise learning in a meaningful virtual environment. Implications for practice and/or policy: The two models will support the design of a game-based learning environment that encapsulates various learning theories and approaches (in particular considering education at schools and universities) be they objectivist, associative, cognitive or situative, and combine contents with mechanics that sustain interest and engagement. The definition of Serious Games Mechanics will help bridge the gap between the expected learning outcomes and the desired positive engagement with the game-based learning environment. Formal deployment of game-based intervention for training and education can be further encouraged if pedagogy plays a key role in the design, development and deployment of serious games. The role of educators/practitioners in the development process is also key to encouraging the uptake of game-based intervention. Introduction The increasing use of pervasive and ubiquitous digital gaming technologies gives a significant opportunity for enhancing education methods, in particular thanks to the games’ ability to appeal a wide population. Yet, despite the digital games’ potential in terms of interactivity, immersion and engagement, much work must still be done to understand how to better design, administrate and evaluate digital games across different learning contexts and targets (Alvarez and Michaud, 2008; Ulicsac, 2010; de Freitas and Liarokapis, 2011). Also, games have now evolved exploiting a variety of modules ranging from social networking and multiplayer online facilities to advanced natural interaction modalities (ISFE, 2010). 
A new typology of digital games has emerged, namely serious games (SG) for education and training, that are designed with a primary pedagogical goal. Samples such as America’s Army or Code of Everand have become increasingly popular, reaching large numbers of players and engaging them for long periods of time. Early studies undertaken in the US and Europe attest to the valuable contributions of game-based approaches in education (e.g. Kato et al., 2008; Knight et al., 2010). And improvements can be further achieved by better understanding the target audience, pedagogic perspectives, so to make learning experiences more immersive, relevant and engaging. A recent survey by the International Software Federation of Europe (ISFE, 2010) revealed that 74% of those aged 16-19 considered themselves as gamers (n=3000), while 60% of those 20-24, 56% 25-29 and 38% 30-44 considered themselves regular players of games. And the projected growth figures for SGs currently stand at 47% per year until 2015 (Alvarez and Michaud, 2008). The importance of pedagogy at the heart of game development is alien to digital game development for entertainment. The absence of game mechanics and dynamics specifically designed and dedicated for learning purposes is an issue, which makes such intervention unsuited for educational purposes. Certainly, the SGs’ educational potential and actual effectiveness may vary appreciably as a consequence of the pedagogical choices made a priori by the game designer (Squire, 2005). Thus, a more thought-out design is key to meet the end-user and stakeholder requirements that are twofold, on the entertainment and education sides. On the one hand, it is undeniable that a fine-tuned pedagogy plays a major role in sustaining learning effectiveness (Bellotti et al., 2010). On the other hand, one of the biggest problems of educational games to date is the inadequate integration of educational and game design principles (e.g. Kiili, 2005; 2008; Lim et al., 2011) and this is also due to the fact that digital game designers and educational experts do not usually share a common vocabulary (Bellotti et al., 2011). In this paper we report the working experience of the Games and Learning Alliance (GALA, www.galanoe.eu) Network of Excellence, funded by the European 7 th Research Framework Programme, which brings together both the research and industry SG development communities, especially from the context of Technology Enhanced Learning in order to give the nascent industry opportunities to develop, find new markets and business models and utilise IP and scientific and pedagogic outcomes. This paper presents the GALA reflections on these topics that rely on a systematic review methodology (eg. Connolly et al., 2012) and the study of models and frameworks that have been developed by the GALA partners. The paper’s main added value consists in providing an overall and integrated view of pedagogically driven design of SGs. The paper begins with an examination of the pedagogical perspectives of SGs and highlights an analytical view of the importance of mapping game mechanics to pedagogical goals. A promising cognition-based model for SG development is discussed, demonstrating some specific development strategies, which are opening up new possibilities for efficient SG production. 
This paper highlights illustrative case studies on the Remission and Travel in Europe games, developed by the network and associate partners Pedagogical perspective of SGs Pedagogy lies at the heart of the distinction of what is considered as games for learning compared to other entertainment games. From a pedagogical perspective, SGs are not designed mainly for entertainment purposes (Michael and Chen, 2006), but to exploit the game appeal and the consequent player motivation to support development of knowledge and skills (Doughty et al., 2009). SGs offer an effective and engaging experience (Westera et al, 2008) and careful balancing to achieve symbiosis between pedagogy and game-play is needed (Dror, 2008a). Naively transcribing existing material and instructional methods into a SG domain is detrimental (Bruckman, 1999; Dror, 2008b). SGs should have knowledge transference as a core part of their game mechanics (Shute et al, 2009; Baek, 2010). Thus, understanding how game mechanics can relate to relevant educational strategies is needed. Pedagogy is the practice of learning theory, and applying learning theory in practice is a craft that has been developed in traditional education and training contexts for many hundreds of years. In SGs, however, the standard approach has been to take established theories of learning such as associative, cognitive or situative (de Freitas and Jameson, 2012), and to seek to extend these theories within virtual and game environments. Given the many theories of learning available as candidates for application, this approach is arbitrary and possibly ineffectual. However, it is fair to say that, in general, games have to date largely implemented task-centred and cognitive theories; in particular, experiential learning and scaffolded learning approaches have been tested in game environments. In a few cases game use has led to the development of new learning theories such as the exploratory learning model (de Freitas and Neumann, 2009); however well established theories mainly prevail. A key issue for SG design is to match the desired learning outcomes with the typical game characteristics. Games are quite varied in terms of features and can potentially offer different kinds of learning experience. So, it is urgent to understand how different game elements can contribute to an effective facilitation of learning and appropriate measures supporting effectiveness assessment are needed. Measures should include both learning outcomes (knowledge transfer including cognitive and skill-based abilities) and engagement (affective learning experience). Schiphorst (2007) stated that technology should be designed “as” experience and not only “for” experience. The reason why games are good learning environments is because they allow the learner to live through experiences, interact with learning objects and have social interactions with others including teachers and peers. Real value exists in designing learning experiences to support an exploratory and open-ended model of learning to encourage learners to make their own reflections and summations and to come to an understanding in their own way (de Freitas and Neumann, 2009). These two aspects of SGs (engagement",
"title": ""
},
{
"docid": "33f3f6ca25b8abec09d961a4ed72770a",
"text": "We develop a formal, type-theoretic account of the basic mechanisms of object-oriented programming: encapsulation, message passing, subtyping, and inheritance. By modeling object encapsulation in terms of existential types instead of the recursive records used in other recent studies, we obtain a substantial simpliication both in the model of objects and in the underlying typed-calculus.",
"title": ""
},
{
"docid": "1eb4805e6874ea1882a995d0f1861b80",
"text": "The Asian-Pacific Association for the Study of the Liver (APASL) convened an international working party on the \"APASL consensus statements and recommendation on management of hepatitis C\" in March, 2015, in order to revise \"APASL consensus statements and management algorithms for hepatitis C virus infection (Hepatol Int 6:409-435, 2012)\". The working party consisted of expert hepatologists from the Asian-Pacific region gathered at Istanbul Congress Center, Istanbul, Turkey on 13 March 2015. New data were presented, discussed and debated to draft a revision. Participants of the consensus meeting assessed the quality of cited studies. Finalized recommendations on treatment of hepatitis C are presented in this review.",
"title": ""
},
{
"docid": "d7c84c8282526c46d63e93091861e04d",
"text": "We propose a sketch-based two-step neural model for generating structured queries (SQL) based on a user’s request in natural language. The sketch is obtained by using placeholders for specific entities in the SQL query, such as column names, table names, aliases and variables, in a process similar to semantic parsing. The first step is to apply a sequence-to-sequence (SEQ2SEQ) model to determine the most probable SQL sketch based on the request in natural language. Then, a second network designed as a dual-encoder SEQ2SEQ model using both the text query and the previously obtained sketch is employed to generate the final SQL query. Our approach shows improvements over previous approaches on two recent large datasets (WikiSQL and SENLIDB) suitable for data-driven solutions for natural language interfaces for databases.",
"title": ""
},
{
"docid": "5214f391d5b152f9809bec1f6f069d21",
"text": "Abstract—Magnetic resonance imaging (MRI) is an important diagnostic imaging technique for the early detection of brain cancer. Brain cancer is one of the most dangerous diseases occurring commonly among human beings. The chances of survival can be increased if the cancer is detected at its early stage. MRI brain image plays a vital role in assisting radiologists to access patients for diagnosis and treatment. Studying of medical image by the Radiologist is not only a tedious and time consuming process but also accuracy depends upon their experience. So, the use of computer aided systems becomes very necessary to overcome these limitations. Even though several automated methods are available, still segmentation of MRI brain image remains as a challenging problem due to its complexity and there is no standard algorithm that can produce satisfactory results. In this review paper, various current methodologies of brain image segmentation using automated algorithms that are accurate and requires little user interaction are reviewed and their advantages, disadvantages are discussed. This review paper guides in combining two or more methods together to produce accurate results.",
"title": ""
},
{
"docid": "841ae5c6d2dcdbafcbec534fa46dfa1e",
"text": "In a recent paper, Levy and Goldberg [2] pointed out an interesting connection between prediction-based word embedding models and count models based on pointwise mutual information. Under certain conditions, they showed that both models end up optimizing equivalent objective functions. This paper explores this connection in more detail and lays out the factors leading to differences between these models. We find that the most relevant differences from an optimization perspective are (i) predict models work in a low dimensional space where embedding vectors can interact heavily; (ii) since predict models have fewer parameters, they are less prone to overfitting. Motivated by the insight of our analysis, we show how count models can be regularized in a principled manner and provide closed-form solutions for L1 and L2 regularization. Finally, we propose a new embedding model with a convex objective and the additional benefit of being intelligible.",
"title": ""
},
{
"docid": "232d020c8b006063151050f3c5a67a3d",
"text": "An experimental approach to cut-mark investigation has proved particularly successful and should arguably be a prerequisite for individuals interested in developing standard methods to study butchery data. This paper offers a brief review of the criteria used to investigate cut marks and subsequently outlines recent research that has integrated results from replication studies of archaeological tools and cut marks with written resources to study historic butchery practices. The case is made for a degree of standardization to be incorporated into the recording of butchery data and for the integration of evidence from the analysis of cut marks and tool signatures. While the call for standardization is not without precedent the process would benefit from a suitable model: one is proposed herein based in large part on experimental replication and personal vocational experience gained in the modern butchery trade. Furthermore, the paper identifies issues that need to be kept at the forefront of an experimental approach to butchery investigation and places emphasis on the use of modern analogy and cultural theory as a means of improving our interpretation of cut-mark data.",
"title": ""
},
{
"docid": "ee16956200a7950366c7204a1d0e94c9",
"text": "Hierarchical clustering is a common method used to determine clusters of similar data points in multidimensional spaces. O(n*) algorithms are known for this problem [3,4,11,19]. This paper reviews important results for sequential algorithms and describes previous work on parallel algorithms for hierarchical clustering. Parallel algorithms to perform hierarchical clustering using several distance metrics are then described. Optimal PRAM algorithms using n/log n processors are given for the average link, complete link, centroid, median, and minimum variance metrics. Optimal butterfly and tree algorithms using n/log n processors are given for the centroid, median, and minimum variance metrics. Optimal asymptotic speedups are achieved for the best practical algorithm to perform clustering using the single link metric on a n/log n processor PRAh4, butterfly, or tree.",
"title": ""
},
{
"docid": "6b2860d90c6e5f3200b2d5fd3bd5da37",
"text": "BACKGROUND\nFrontal fibrosing alopecia (FFA) is an acquired scarring alopecia currently considered a clinical variant of lichen planopilaris (LPP). Our purpose was to examine the clinicopathological features of FFA. In addition, we investigated the similarities and differences between FFA and LPP.\n\n\nMETHODS\nBiopsies from the scalp lesions of eight patients with FFA and eight patients with LPP were microscopically analyzed. Two cases of FFA and four cases of LPP were studied using direct immunofluorescence.\n\n\nRESULTS\nIn spite of the completely different clinical characteristics of FFA and LPP patients, the histopathological findings for the two entities were similar. Common microscopic findings for both FFA and LPP included an inflammatory lymphocytic infiltrate involving the isthmus and infundibulum of the hair follicles, the presence of apoptotic cells in the external root sheath, and a concentric fibrosis surrounding the hair follicles that resulted in their destruction with subsequent scarring alopecia. Biopsies taken from FFA patients showed less follicular inflammation and more apoptotic cells than those from LPP patients. In some cases of LPP, the inflammatory infiltrate involved the interfollicular epidermis, a finding never present in our FFA cases. Direct immunofluorescence was negative in the two cases of FFA studied and showed deposits of immunoglobulins and/or complement in two of the four LPP cases examined.\n\n\nCONCLUSIONS\nThe characteristic findings for FFA were more prominent apoptosis and less inflammation than found in LPP, along with spared interfollicular epidermis. FFA cases showed a rather characteristic histopathological pattern, although we could not find any clear-cut histological differences between FFA and LPP.",
"title": ""
},
{
"docid": "84a01029714dfef5d14bc4e2be78921e",
"text": "Integrating frequent pattern mining with interactive visualization for temporal event sequence analysis poses many interesting research questions and challenges. We review and reflect on some of these challenges based on our experiences working on event sequence data from two domains: web analytics and application logs. These challenges can be organized using a three-stage framework: pattern mining, pattern pruning and interactive visualization.",
"title": ""
}
] |
scidocsrr
|
89d1b9e0fc2d35058a88b50565026c6c
|
Inductorless DC-AC Cascaded H-Bridge Multilevel Boost Inverter for Electric/Hybrid Electric Vehicle Applications
|
[
{
"docid": "913709f4fe05ba2783c3176ed00015fe",
"text": "A generalization of the PWM (pulse width modulation) subharmonic method for controlling single-phase or three-phase multilevel voltage source inverters (VSIs) is considered. Three multilevel PWM techniques for VSI inverters are presented. An analytical expression of the spectral components of the output waveforms covering all the operating conditions is derived. The analysis is based on an extension of Bennet's method. The improvements in harmonic spectrum are pointed out, and several examples are presented which prove the validity of the multilevel modulation. Improvements in the harmonic contents were achieved due to the increased number of levels.<<ETX>>",
"title": ""
}
] |
[
{
"docid": "7e7d4a3ab8fe57c6168835fa1ab3b413",
"text": "Massively parallel architectures such as the GPU are becoming increasingly important due to the recent proliferation of data. In this paper, we propose a key class of hybrid parallel graphlet algorithms that leverages multiple CPUs and GPUs simultaneously for computing k-vertex induced subgraph statistics (called graphlets). In addition to the hybrid multi-core CPU-GPU framework, we also investigate single GPU methods (using multiple cores) and multi-GPU methods that leverage all available GPUs simultaneously for computing induced subgraph statistics. Both methods leverage GPU devices only, whereas the hybrid multicore CPU-GPU framework leverages all available multi-core CPUs and multiple GPUs for computing graphlets in large networks. Compared to recent approaches, our methods are orders of magnitude faster, while also more cost effective enjoying superior performance per capita and per watt. In particular, the methods are up to 300 times faster than a recent state-of-the-art method. To the best of our knowledge, this is the first work to leverage multiple CPUs and GPUs simultaneously for computing induced subgraph statistics.",
"title": ""
},
{
"docid": "a2cbc2b95b1988dae97d501c141e161d",
"text": "We present a fast and simple method to compute bundled layouts of general graphs. For this, we first transform a given graph drawing into a density map using kernel density estimation. Next, we apply an image sharpening technique which progressively merges local height maxima by moving the convolved graph edges into the height gradient flow. Our technique can be easily and efficiently implemented using standard graphics acceleration techniques and produces graph bundlings of similar appearance and quality to state-of-the-art methods at a fraction of the cost. Additionally, we show how to create bundled layouts constrained by obstacles and use shading to convey information on the bundling quality. We demonstrate our method on several large graphs.",
"title": ""
},
{
"docid": "180a271a86f9d9dc71cc140096d08b2f",
"text": "This communication demonstrates for the first time the capability to independently control the real and imaginary parts of the complex propagation constant in planar, printed circuit board compatible leaky-wave antennas. The structure is based on a half-mode microstrip line which is loaded with an additional row of periodic metallic posts, resulting in a substrate integrated waveguide SIW with one of its lateral electric walls replaced by a partially reflective wall. The radiation mechanism is similar to the conventional microstrip leaky-wave antenna operating in its first higher-order mode, with the novelty that the leaky-mode leakage rate can be controlled by virtue of a sparse row of metallic vias. For this topology it is demonstrated that it is possible to independently control the antenna pointing angle and main lobe beamwidth while achieving high radiation efficiencies, thus providing low-cost, low-profile, simply fed, and easily integrable leaky-wave solutions for high-gain frequency beam-scanning applications. Several prototypes operating at 15 GHz have been designed, simulated, manufactured and tested, to show the operation principle and design flexibility of this one dimensional leaky-wave antenna.",
"title": ""
},
{
"docid": "64a14e3dfc292fb4d1dc16160e89dedf",
"text": "Approaches to climate change impact, adaptation and vulnerability assessment: towards a classification framework to serve decision-making.",
"title": ""
},
{
"docid": "9cab244eeb45f9553fc25ecca2c37bbd",
"text": "BACKGROUND\nPeriorbital skin hyperpigmentation, so-called dark circles, is of major concern for many people. However, only a few reports refer to the morbidity and treatment, and as far as the authors know, there are no reports of the condition in Asians.\n\n\nMETHODS\nA total of 18 Japanese patients underwent combined therapy using Q-switched ruby laser to eliminate dermal pigmentation following topical bleaching treatment with tretinoin aqueous gel and hydroquinone ointment performed initially (6 weeks) to reduce epidermal melanin. Both steps were repeated two to four times until physical clearance of the pigmentation was confirmed and patient satisfaction was achieved. Skin biopsy was performed at baseline in each patient and at the end of treatment in three patients, all with informed consent. Clinical and histologic appearances of periorbital hyperpigmentation were evaluated and rated as excellent, good, fair, poor, or default.\n\n\nRESULTS\nSeven of 18 patients (38.9 percent) showed excellent clearing after treatment and eight (44.4 percent) were rated good. Only one (5.6 percent) was rated fair and none was rated poor. Postinflammatory hyperpigmentation was observed in only two patients (11.1 percent). Histologic examination showed obvious epidermal hyperpigmentation in 10 specimens. Dermal pigmentation was observed in all specimens but was not considered to be melanocytosis. Remarkable reduction of dermal pigmentation was observed in the biopsy specimens of three patients after treatment.\n\n\nCONCLUSION\nThe new treatment protocol combining Q-switched ruby laser and topical bleaching treatment using tretinoin and hydroquinone is considered effective for improvement of periorbital skin hyperpigmentation, with a low incidence of postinflammatory hyperpigmentation.",
"title": ""
},
{
"docid": "3a2740b7f65841f7eb4f74a1fb3c9b65",
"text": "Getting a better understanding of user behavior is important for advancing information retrieval systems. Existing work focuses on modeling and predicting single interaction events, such as clicks. In this paper, we for the first time focus on modeling and predicting sequences of interaction events. And in particular, sequences of clicks. We formulate the problem of click sequence prediction and propose a click sequence model (CSM) that aims to predict the order in which a user will interact with search engine results. CSM is based on a neural network that follows the encoder-decoder architecture. The encoder computes contextual embeddings of the results. The decoder predicts the sequence of positions of the clicked results. It uses an attentionmechanism to extract necessary information about the results at each timestep. We optimize the parameters of CSM by maximizing the likelihood of observed click sequences. We test the effectiveness ofCSMon three new tasks: (i) predicting click sequences, (ii) predicting the number of clicks, and (iii) predicting whether or not a user will interact with the results in the order these results are presented on a search engine result page (SERP). Also, we show that CSM achieves state-of-the-art results on a standard click prediction task, where the goal is to predict an unordered set of results a user will click on.",
"title": ""
},
{
"docid": "9a6a724f8aa0ae4fa9de1367f8661583",
"text": "In this paper, we develop a simple algorithm to determine the required number of generating units of wind-turbine generator and photovoltaic array, and the associated storage capacity for stand-alone hybrid microgrid. The algorithm is based on the observation that the state of charge of battery should be periodically invariant. The optimal sizing of hybrid microgrid is given in the sense that the life cycle cost of system is minimized while the given load power demand can be satisfied without load rejection. We also report a case study to show the efficacy of the developed algorithm.",
"title": ""
},
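The passage above describes a sizing procedure built around the requirement that the battery state of charge (SOC) be periodically invariant. The snippet below is only a rough illustrative sketch of that idea, not the authors' algorithm: the hourly wind, PV and load profiles, the cost figures, and the simple enumeration over unit counts are all invented for demonstration.

```python
# Sketch: enumerate unit counts, simulate the battery SOC over one representative
# period, require the SOC to be periodically invariant (end >= start), and keep
# the cheapest feasible combination. All profiles and costs are placeholders.
import numpy as np

rng = np.random.default_rng(0)
hours = 24
p_wind_unit = rng.uniform(0.0, 2.0, hours)                                # kW per wind turbine (assumed)
p_pv_unit = np.clip(np.sin(np.linspace(0, np.pi, hours)), 0, None) * 1.5  # kW per PV array (assumed)
p_load = np.full(hours, 3.0)                                              # kW load demand (assumed)

COST_WIND, COST_PV, COST_BATT = 4000.0, 2500.0, 300.0                     # assumed life-cycle costs

def size_microgrid(max_units=10):
    best = None
    for n_w in range(max_units + 1):
        for n_pv in range(max_units + 1):
            net = n_w * p_wind_unit + n_pv * p_pv_unit - p_load   # hourly surplus (+) / deficit (-), kW
            soc = np.concatenate(([0.0], np.cumsum(net)))         # battery energy relative to start, kWh
            if soc[-1] < 0.0:       # net energy over the period is negative: the SOC cannot be
                continue            # periodically invariant, so load rejection would occur
            batt_kwh = soc.max() - soc.min()                      # capacity needed to cover the swing
            cost = n_w * COST_WIND + n_pv * COST_PV + batt_kwh * COST_BATT
            if best is None or cost < best[0]:
                best = (cost, n_w, n_pv, batt_kwh)
    return best   # (life-cycle cost, wind units, PV units, battery kWh)

print(size_microgrid())
```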
{
"docid": "7efa3543711bc1bb6e3a893ed424b75d",
"text": "This dissertation is concerned with the creation of training data and the development of probability models for statistical parsing of English with Combinatory Categorial Grammar (CCG). Parsing, or syntactic analysis, is a prerequisite for semantic interpretation, and forms therefore an integral part of any system which requires natural language understanding. Since almost all naturally occurring sentences are ambiguous, it is not sufficient (and often impossible) to generate all possible syntactic analyses. Instead, the parser needs to rank competing analyses and select only the most likely ones. A statistical parser uses a probability model to perform this task. I propose a number of ways in which such probability models can be defined for CCG. The kinds of models developed in this dissertation, generative models over normal-form derivation trees, are particularly simple, and have the further property of restricting the set of syntactic analyses to those corresponding to a canonical derivation structure. This is important to guarantee that parsing can be done efficiently. In order to achieve high parsing accuracy, a large corpus of annotated data is required to estimate the parameters of the probability models. Most existing wide-coverage statistical parsers use models of phrase-structure trees estimated from the Penn Treebank, a 1-million-word corpus of manually annotated sentences from the Wall Street Journal. This dissertation presents an algorithm which translates the phrase-structure analyses of the Penn Treebank to CCG derivations. The resulting corpus, CCGbank, is used to train and test the models proposed in this dissertation. Experimental results indicate that parsing accuracy (when evaluated according to a comparable metric, the recovery of unlabelled word-word dependency relations), is as high as that of standard Penn Treebank parsers which use similar modelling techniques. Most existing wide-coverage statistical parsers use simple phrase-structure grammars whose syntactic analyses fail to capture long-range dependencies, and therefore do not correspond to directly interpretable semantic representations. By contrast, CCG is a grammar formalism in which semantic representations that include long-range dependencies can be built directly during the derivation of syntactic structure. These dependencies define the predicate-argument structure of a sentence, and are used for two purposes in this dissertation: First, the performance of the parser can be evaluated according to how well it recovers these dependencies. In contrast to purely syntactic evaluations, this yields a direct measure of how accurate the semantic interpretations returned by the parser are. Second, I propose a generative model that captures the local and non-local dependencies in the predicate-argument structure, and investigate the impact of modelling non-local in addition to local dependencies.",
"title": ""
},
{
"docid": "56205e79e706e05957cb5081d6a8348a",
"text": "Corpus-based set expansion (i.e., finding the “complete” set of entities belonging to the same semantic class, based on a given corpus and a tiny set of seeds) is a critical task in knowledge discovery. It may facilitate numerous downstream applications, such as information extraction, taxonomy induction, question answering, and web search. To discover new entities in an expanded set, previous approaches either make one-time entity ranking based on distributional similarity, or resort to iterative pattern-based bootstrapping. The core challenge for these methods is how to deal with noisy context features derived from free-text corpora, which may lead to entity intrusion and semantic drifting. In this study, we propose a novel framework, SetExpan, which tackles this problem, with two techniques: (1) a context feature selection method that selects clean context features for calculating entity-entity distributional similarity, and (2) a ranking-based unsupervised ensemble method for expanding entity set based on denoised context features. Experiments on three datasets show that SetExpan is robust and outperforms previous state-of-the-art methods in terms of mean average precision.",
"title": ""
},
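As a rough illustration of the iterative loop sketched in the passage above (context feature selection followed by ranking of candidate entities), here is a toy version in Python. The entity list, context features, co-occurrence counts, and the simple cosine scoring are made-up placeholders and do not reproduce the SetExpan ranking ensemble.

```python
# Toy sketch: score context features by how strongly they co-occur with the current
# seed set, keep only the top features ("denoising"), then rank candidate entities by
# similarity to the seeds over those features and absorb the best ones.
import numpy as np

entities = ["paris", "berlin", "rome", "madrid", "apple", "google"]
contexts = ["capital of", "city in", "company from", "visited"]
# assumed entity x context co-occurrence counts
M = np.array([[9, 7, 0, 3],
              [8, 6, 0, 2],
              [7, 8, 0, 4],
              [6, 7, 0, 3],
              [0, 1, 9, 5],
              [0, 0, 8, 4]], dtype=float)
M = M / (M.sum(axis=1, keepdims=True) + 1e-9)        # row-normalised context profiles

def expand(seeds, rounds=2, top_features=2, grow=1):
    current = list(seeds)
    for _ in range(rounds):
        idx = [entities.index(e) for e in current]
        feat_score = M[idx].sum(axis=0)               # how much the seed set uses each feature
        keep = np.argsort(feat_score)[::-1][:top_features]
        seed_profile = M[idx][:, keep].mean(axis=0)
        scores = []
        for j, e in enumerate(entities):
            if e in current:
                continue
            v = M[j, keep]
            sim = v @ seed_profile / (np.linalg.norm(v) * np.linalg.norm(seed_profile) + 1e-9)
            scores.append((sim, e))
        for _, e in sorted(scores, reverse=True)[:grow]:
            current.append(e)
    return current

print(expand(["paris", "berlin"]))   # expected to pull in the other city names
```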
{
"docid": "fcca051539729b005271e4f96563538d",
"text": "!is paper presents a novel methodological approach of how to design, conduct and analyse robot-assisted play. !is approach is inspired by non-directive play therapy. !e experimenter participates in the experiments, but the child remains the main leader for play. Besides, beyond inspiration from non-directive play therapy, this approach enables the experimenter to regulate the interaction under speci\"c conditions in order to guide the child or ask her questions about reasoning or a#ect related to the robot. !is approach has been tested in a longterm study with six children with autism in a school setting. An autonomous robot with zoomorphic, dog-like appearance was used in the studies. !e children’s progress was analyzed according to three dimensions, namely, Play, Reasoning and A#ect. Results from the case-study evaluations have shown the capability of the method to meet each child’s needs and abilities. Children who mainly played solitarily progressively experienced basic imitation games with the experimenter. Children who proactively played socially progressively experienced higher levels of play and constructed more reasoning related to the robot. !ey also expressed some interest in the robot, including, on occasion, a#ect.",
"title": ""
},
{
"docid": "9ae435f5169e867dc9d4dc0da56ec9fb",
"text": "Renewable energy is currently the main direction of development of electric power. Because of its own characteristics, the reliability of renewable energy generation is low. Renewable energy generation system needs lots of energy conversion devices which are made of power electronic devices. Too much power electronic components can damage power quality in microgrid. High Frequency AC (HFAC) microgrid is an effective way to solve the problems of renewable energy generation system. Transmitting electricity by means of HFAC is a novel idea in microgrid. Although the HFAC will cause more loss of power, it can improve the power quality in microgrid. HFAC can also reduce the impact of fluctuations of renewable energy in microgrid. This paper mainly simulates the HFAC with Matlab/Simulink and analyzes the feasibility of HFAC in microgrid.",
"title": ""
},
{
"docid": "2e167507f8b44e783d60312c0d71576d",
"text": "The goal of this paper is to study different techniques to predict stock price movement using the sentiment analysis from social media, data mining. In this paper we will find efficient method which can predict stock movement more accurately. Social media offers a powerful outlet for people’s thoughts and feelings it is an enormous ever-growing source of texts ranging from everyday observations to involved discussions. This paper contributes to the field of sentiment analysis, which aims to extract emotions and opinions from text. A basic goal is to classify text as expressing either positive or negative emotion. Sentiment classifiers have been built for social media text such as product reviews, blog posts, and even twitter messages. With increasing complexity of text sources and topics, it is time to re-examine the standard sentiment extraction approaches, and possibly to redefine and enrich the definition of sentiment. Next, unlike sentiment analysis research to date, we examine sentiment expression and polarity classification within and across various social media streams by building topical datasets within each stream. Different data mining methods are used to predict market more efficiently along with various hybrid approaches. We conclude that stock prediction is very complex task and various factors should be considered for forecasting the market more accurately and efficiently.",
"title": ""
},
{
"docid": "bfae60b46b97cf2491d6b1136c60f6a6",
"text": "Educational data mining concerns with developing methods for discovering knowledge from data that come from educational domain. In this paper we used educational data mining to improve graduate students’ performance, and overcome the problem of low grades of graduate students. In our case study we try to extract useful knowledge from graduate students data collected from the college of Science and Technology – Khanyounis. The data include fifteen years period [1993-2007]. After preprocessing the data, we applied data mining techniques to discover association, classification, clustering and outlier detection rules. In each of these four tasks, we present the extracted knowledge and describe its importance in educational domain.",
"title": ""
},
{
"docid": "73bee7e59be3d6a044965a512abdf115",
"text": "The underlaying equations for the models we consider are hyperbolic systems of conservation laws in one dimension: ut + f(u)x = 0, where x ∈ R, u ∈ R and Df(u) is assumed to have real distinct eigenvalues. The main mathematical novelty is to describe the dynamics on a network, represented by a directed topological graph, instead of a real line. The more advanced results are available for the scalar case, i.e. n = 1.",
"title": ""
},
{
"docid": "6bda457a005dbb2ff6abf84392d7b197",
"text": "One of the major problems in developing media mix models is that the data that is generally available to the modeler lacks sufficient quantity and information content to reliably estimate the parameters in a model of even moderate complexity. Pooling data from different brands within the same product category provides more observations and greater variability in media spend patterns. We either directly use the results from a hierarchical Bayesian model built on the category dataset, or pass the information learned from the category model to a brand-specific media mix model via informative priors within a Bayesian framework, depending on the data sharing restriction across brands. We demonstrate using both simulation and real case studies that our category analysis can improve parameter estimation and reduce uncertainty of model prediction and extrapolation.",
"title": ""
},
{
"docid": "b7e78ca489cdfb8efad03961247e12f2",
"text": "ASR short for Automatic Speech Recognition is the process of converting a spoken speech into text that can be manipulated by a computer. Although ASR has several applications, it is still erroneous and imprecise especially if used in a harsh surrounding wherein the input speech is of low quality. This paper proposes a post-editing ASR error correction method and algorithm based on Bing’s online spelling suggestion. In this approach, the ASR recognized output text is spell-checked using Bing’s spelling suggestion technology to detect and correct misrecognized words. More specifically, the proposed algorithm breaks down the ASR output text into several word-tokens that are submitted as search queries to Bing search engine. A returned spelling suggestion implies that a query is misspelled; and thus it is replaced by the suggested correction; otherwise, no correction is performed and the algorithm continues with the next token until all tokens get validated. Experiments carried out on various speeches in different languages indicated a successful decrease in the number of ASR errors and an improvement in the overall error correction rate. Future research can improve upon the proposed algorithm so much so that it can be parallelized to take advantage of multiprocessor computers. KeywordsSpeech Recognition; Error Correction; Bing Spelling",
"title": ""
},
{
"docid": "c82cecc94eadfa9a916d89a9ee3fac21",
"text": "In this paper, we develop a supply chain network model consisting of manufacturers and retailers in which the demands associated with the retail outlets are random. We model the optimizing behavior of the various decision-makers, derive the equilibrium conditions, and establish the finite-dimensional variational inequality formulation. We provide qualitative properties of the equilibrium pattern in terms of existence and uniqueness results and also establish conditions under which the proposed computational procedure is guaranteed to converge. Finally, we illustrate the model through several numerical examples for which the equilibrium prices and product shipments are computed. This is the first supply chain network equilibrium model with random demands for which modeling, qualitative analysis, and computational results have been obtained.",
"title": ""
},
{
"docid": "6de71e8106d991d2c3d2b845a9e0a67e",
"text": "XML repositories are now a widespread means for storing and exchanging information on the Web. As these repositories become increasingly used in dynamic applications such as e-commerce, there is a rapidly growing need for a mechanism to incorporate reactive functionality in an XML setting. Event-condition-action (ECA) rules are a technology from active databases and are a natural method for supporting suchfunctionality. ECA rules can be used for activities such as automatically enforcing document constraints, maintaining repository statistics, and facilitating publish/subscribe applications. An important question associated with the use of a ECA rules is how to statically predict their run-time behaviour. In this paper, we define a language for ECA rules on XML repositories. We then investigate methods for analysing the behaviour of a set of ECA rules, a task which has added complexity in this XML setting compared with conventional active databases.",
"title": ""
},
{
"docid": "dd01a74456f7163e3240ebde99cad89e",
"text": "Features of consciousness difficult to understand in terms of conventional neuroscience have evoked application of quantum theory, which describes the fundamental behavior of matter and energy. In this paper we propose that aspects of quantum theory (e.g. quantum coherence) and of a newly proposed physical phenomenon of quantum wave function \"self-collapse\"(objective reduction: OR Penrose, 1994) are essential for consciousness, and occur in cytoskeletal microtubules and other structures within each of the brain's neurons. The particular characteristics of microtubules suitable for quantum effects include their crystal-like lattice structure, hollow inner core, organization of cell function and capacity for information processing. We envisage that conformational states of microtubule subunits (tubulins) are coupled to internal quantum events, and cooperatively interact (compute) with other tubulins. We further assume that macroscopic coherent superposition of quantum-coupled tubulin conformational states occurs throughout significant brain volumes and provides the global binding essential to consciousness. We equate the emergence of the microtubule quantum coherence with pre-conscious processing which grows (for up to 500 milliseconds) until the mass-energy difference among the separated states of tubulins reaches a threshold related to quantum gravity. According to the arguments for OR put forth in Penrose (1994), superpositioned states each have their own space-time geometries. When the degree of coherent mass-energy difference leads to sufficient separation of space-time geometry, the system must choose and decay (reduce, collapse) to a single universe state. In this way, a transient superposition of slightly differing space-time geometries persists until an abrupt quantum classical reduction occurs. Unlike the random, \"subjective reduction\"(SR, or R) of standard quantum theory caused by observation or environmental entanglement, the OR we propose in microtubules is a self-collapse and it results in particular patterns of microtubule-tubulin conformational states that regulate neuronal activities including synaptic functions. Possibilities and probabilities for postreduction tubulin states are influenced by factors including attachments of microtubule-associated proteins (MAPs) acting as \"nodes\"which tune and \"orchestrate\"the quantum oscillations. We thus term the self-tuning OR process in microtubules \"orchestrated objective reduction\"(\"B>Orch OR\", and calculate an estimate for the number of tubulins (and neurons) whose coherence for relevant time periods (e.g. 500 milliseconds) will elicit Orch OR. In providing a connection among 1) pre-conscious to conscious transition, 2) fundamental space-time notions, 3) noncomputability, and 4) binding of various (time scale and spatial) reductions into an instantaneous event (\"conscious now\", we believe Orch OR in brain microtubules is the most specific and plausible model for consciousness yet proposed.",
"title": ""
},
{
"docid": "018b25742275dd628c58208e5bd5a532",
"text": "Multivariate time series (MTS) datasets broadly exist in numerous fields, including health care, multimedia, finance, and biometrics. How to classify MTS accurately has become a hot research topic since it is an important element in many computer vision and pattern recognition applications. In this paper, we propose a Mahalanobis distance-based dynamic time warping (DTW) measure for MTS classification. The Mahalanobis distance builds an accurate relationship between each variable and its corresponding category. It is utilized to calculate the local distance between vectors in MTS. Then we use DTW to align those MTS which are out of synchronization or with different lengths. After that, how to learn an accurate Mahalanobis distance function becomes another key problem. This paper establishes a LogDet divergence-based metric learning with triplet constraint model which can learn Mahalanobis matrix with high precision and robustness. Furthermore, the proposed method is applied on nine MTS datasets selected from the University of California, Irvine machine learning repository and Robert T. Olszewski's homepage, and the results demonstrate the improved performance of the proposed approach.",
"title": ""
}
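The passage above combines dynamic time warping (DTW) with a Mahalanobis local distance. The following is a minimal sketch of that combination; for simplicity the matrix M is taken as the inverse covariance of the pooled data rather than being learned with the LogDet-divergence triplet model described in the paper, and the test series are random placeholders.

```python
# Sketch: DTW between two multivariate series where the local cost is a Mahalanobis
# distance parameterised by a positive semi-definite matrix M.
import numpy as np

def mahalanobis(x, y, M):
    d = x - y
    return float(np.sqrt(max(d @ M @ d, 0.0)))

def dtw_mahalanobis(A, B, M):
    """A: (n, p) series, B: (m, p) series, M: (p, p) matrix."""
    n, m = len(A), len(B)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = mahalanobis(A[i - 1], B[j - 1], M)
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

rng = np.random.default_rng(0)
A = rng.normal(size=(30, 3))          # placeholder multivariate time series
B = rng.normal(size=(25, 3))
M = np.linalg.inv(np.cov(np.vstack([A, B]).T) + 1e-6 * np.eye(3))  # stand-in for a learned metric
print(dtw_mahalanobis(A, B, M))
```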
] |
scidocsrr
|
7ab28f756e9158baafba01b918080a8b
|
Modeling and Simulation of 5 DOF educational robot arm
|
[
{
"docid": "ed35d80dd3af3acbe75e5122b2378756",
"text": "We present a system whereby the human voice may specify continuous control signals to manipulate a simulated 2D robotic arm and a real 3D robotic arm. Our goal is to move towards making accessible the manipulation of everyday objects to individuals with motor impairments. Using our system, we performed several studies using control style variants for both the 2D and 3D arms. Results show that it is indeed possible for a user to learn to effectively manipulate real-world objects with a robotic arm using only non-verbal voice as a control mechanism. Our results provide strong evidence that the further development of non-verbal voice controlled robotics and prosthetic limbs will be successful.",
"title": ""
}
] |
[
{
"docid": "bc167cd7a932b1dbd6f7ac9981e863e2",
"text": "An 8-bit successive approximation (SA) analog-to- digital converter (ADC) in 0.18 mum CMOS dedicated for energy-limited applications is presented. The SA ADC achieves a wide effective resolution bandwidth (ERBW) by applying only one bootstrapped switch, thereby preserving the desired low power characteristic. Measurement results show that at a supply voltage of 0.9 V and an output rate of 200 kS/s, the SA ADC performs a peak signal-to-noise-and-distortion ratio of 47.4 dB and an ERBW up to its Nyquist bandwidth (100 kHz). It consumes 2.47 muW in the test, corresponding to a figure of merit of 65 f J/conversion-step.",
"title": ""
},
{
"docid": "df22aa6321c86b0aec44778c7293daca",
"text": "BACKGROUND\nAtopic dermatitis (AD) is characterized by dry skin and a hyperactive immune response to allergens, 2 cardinal features that are caused in part by epidermal barrier defects. Tight junctions (TJs) reside immediately below the stratum corneum and regulate the selective permeability of the paracellular pathway.\n\n\nOBJECTIVE\nWe evaluated the expression/function of the TJ protein claudin-1 in epithelium from AD and nonatopic subjects and screened 2 American populations for single nucleotide polymorphisms in the claudin-1 gene (CLDN1).\n\n\nMETHODS\nExpression profiles of nonlesional epithelium from patients with extrinsic AD, nonatopic subjects, and patients with psoriasis were generated using Illumina's BeadChips. Dysregulated intercellular proteins were validated by means of tissue staining and quantitative PCR. Bioelectric properties of epithelium were measured in Ussing chambers. Functional relevance of claudin-1 was assessed by using a knockdown approach in primary human keratinocytes. Twenty-seven haplotype-tagging SNPs in CLDN1 were screened in 2 independent populations with AD.\n\n\nRESULTS\nWe observed strikingly reduced expression of the TJ proteins claudin-1 and claudin-23 only in patients with AD, which were validated at the mRNA and protein levels. Claudin-1 expression inversely correlated with T(H)2 biomarkers. We observed a remarkable impairment of the bioelectric barrier function in AD epidermis. In vitro we confirmed that silencing claudin-1 expression in human keratinocytes diminishes TJ function while enhancing keratinocyte proliferation. Finally, CLDN1 haplotype-tagging SNPs revealed associations with AD in 2 North American populations.\n\n\nCONCLUSION\nCollectively, these data suggest that an impairment in tight junctions contributes to the barrier dysfunction and immune dysregulation observed in AD subjects and that this may be mediated in part by reductions in claudin-1.",
"title": ""
},
{
"docid": "262be71d64eef2534fab547ec3db6b9a",
"text": "In the past few decades, the rise in attacks on communication devices in networks has resulted in a reduction of network functionality, throughput, and performance. To detect and mitigate these network attacks, researchers, academicians, and practitioners developed Intrusion Detection Systems (IDSs) with automatic response systems. The response system is considered an important component of IDS, since without a timely response IDSs may not function properly in countering various attacks, especially on a real-time basis. To respond appropriately, IDSs should select the optimal response option according to the type of network attack. This research study provides a complete survey of IDSs and Intrusion Response Systems (IRSs) on the basis of our in-depth understanding of the response option for different types of network attacks. Knowledge of the path from IDS to IRS can assist network administrators and network staffs in understanding how to tackle different attacks with state-of-the-art technologies.",
"title": ""
},
{
"docid": "3851a77360fb2d6df454c1ee19c59037",
"text": "Plantar fasciitis affects nearly 1 million persons in the United States at any one time. Conservative therapies have been reported to successfully treat 90% of plantar fasciitis cases; however, for the remaining cases, only invasive therapeutic solutions remain. This investigation studied newly emerging technology, low-level laser therapy. From September 2011 to June 2013, 69 subjects were enrolled in a placebo-controlled, randomized, double-blind, multicenter study that evaluated the clinical utility of low-level laser therapy for the treatment of unilateral chronic fasciitis. The volunteer participants were treated twice a week for 3 weeks for a total of 6 treatments and were evaluated at 5 separate time points: before the procedure and at weeks 1, 2, 3, 6, and 8. The pain rating was recorded using a visual analog scale, with 0 representing \"no pain\" and 100 representing \"worst pain.\" Additionally, Doppler ultrasonography was performed on the plantar fascia to measure the fascial thickness before and after treatment. Study participants also completed the Foot Function Index. At the final follow-up visit, the group participants demonstrated a mean improvement in heel pain with a visual analog scale score of 29.6 ± 24.9 compared with the placebo subjects, who reported a mean improvement of 5.4 ± 16.0, a statistically significant difference (p < .001). Although additional studies are warranted, these data have demonstrated that low-level laser therapy is a promising treatment of plantar fasciitis.",
"title": ""
},
{
"docid": "61f434d5c0b693dd779659106ea35cd4",
"text": "Malignant melanoma has one of the most rapidly increasing incidences in the world and has a considerable mortality rate. Early diagnosis is particularly important since melanoma can be cured with prompt excision. Dermoscopy images play an important role in the non-invasive early detection of melanoma [1]. However, melanoma detection using human vision alone can be subjective, inaccurate and poorly reproducible even among experienced dermatologists. This is attributed to the challenges in interpreting images with diverse characteristics including lesions of varying sizes and shapes, lesions that may have fuzzy boundaries, different skin colors and the presence of hair [2]. Therefore, the automatic analysis of dermoscopy images is a valuable aid for clinical decision making and for image-based diagnosis to identify diseases such as melanoma [1-4].",
"title": ""
},
{
"docid": "9faec965b145160ee7f74b80a6c2d291",
"text": "Several skin substitutes are available that can be used in the management of hand burns; some are intended as temporary covers to expedite healing of shallow burns and others are intended to be used in the surgical management of deep burns. An understanding of skin biology and the relative benefits of each product are needed to determine the optimal role of these products in hand burn management.",
"title": ""
},
{
"docid": "f6ae855fbb4dee8f98c55aafae28a762",
"text": "Air pollution has significant influence on the concentration of constituents in the atmosphere leading to effects like global warming and acid rains. To avoid such adverse imbalances in the nature, an air pollution monitoring system is utmost important. This paper attempts to develop an effective solution for pollution monitoring using wireless sensor networks (WSN) on a real time basis namely real time wireless air pollution monitoring system. Commercially available discrete gas sensors for sensing concentration of gases like CO2, NO2, CO and O2 are calibrated using appropriate calibration technologies. These pre-calibrated gas sensors are then integrated with the wireless sensor motes for field deployment at the campus and the Hyderabad city using multi hop data aggregation algorithm. A light weight middleware and a web interface to view the live pollution data in the form of numbers and charts from the test beds was developed and made available from anywhere on the internet. Other parameters like temperature and humidity were also sensed along with gas concentrations to enable data analysis through data fusion techniques. Experimentation carried out using the developed wireless air pollution monitoring system under different physical conditions show that the system collects reliable source of real time fine-grain pollution data.",
"title": ""
},
{
"docid": "b401c0a7209d98aea517cf0e28101689",
"text": "This paper introduces a deep-learning approach to photographic style transfer that handles a large variety of image content while faithfully transferring the reference style. Our approach builds upon the recent work on painterly transfer that separates style from the content of an image by considering different layers of a neural network. However, as is, this approach is not suitable for photorealistic style transfer. Even when both the input and reference images are photographs, the output still exhibits distortions reminiscent of a painting. Our contribution is to constrain the transformation from the input to the output to be locally affine in colorspace, and to express this constraint as a custom fully differentiable energy term. We show that this approach successfully suppresses distortion and yields satisfying photorealistic style transfers in a broad variety of scenarios, including transfer of the time of day, weather, season, and artistic edits.",
"title": ""
},
{
"docid": "5150c2218353b3ce0aeed2230df82c73",
"text": "Myalgic encephalomyelitis/chronic fatigue syndrome (ME/CFS) is a disease characterized by intense and debilitating fatigue not due to physical activity that has persisted for at least 6 months, post-exertional malaise, unrefreshing sleep, and accompanied by a number of secondary symptoms, including sore throat, memory and concentration impairment, headache, and muscle/joint pain. In patients with post-exertional malaise, significant worsening of symptoms occurs following physical exertion and exercise challenge serves as a useful method for identifying biomarkers for exertion intolerance. Evidence suggests that intestinal dysbiosis and systemic responses to gut microorganisms may play a role in the symptomology of ME/CFS. As such, we hypothesized that post-exertion worsening of ME/CFS symptoms could be due to increased bacterial translocation from the intestine into the systemic circulation. To test this hypothesis, we collected symptom reports and blood and stool samples from ten clinically characterized ME/CFS patients and ten matched healthy controls before and 15 minutes, 48 hours, and 72 hours after a maximal exercise challenge. Microbiomes of blood and stool samples were examined. Stool sample microbiomes differed between ME/CFS patients and healthy controls in the abundance of several major bacterial phyla. Following maximal exercise challenge, there was an increase in relative abundance of 6 of the 9 major bacterial phyla/genera in ME/CFS patients from baseline to 72 hours post-exercise compared to only 2 of the 9 phyla/genera in controls (p = 0.005). There was also a significant difference in clearance of specific bacterial phyla from blood following exercise with high levels of bacterial sequences maintained at 72 hours post-exercise in ME/CFS patients versus clearance in the controls. These results provide evidence for a systemic effect of an altered gut microbiome in ME/CFS patients compared to controls. Upon exercise challenge, there were significant changes in the abundance of major bacterial phyla in the gut in ME/CFS patients not observed in healthy controls. In addition, compared to controls clearance of bacteria from the blood was delayed in ME/CFS patients following exercise. These findings suggest a role for an altered gut microbiome and increased bacterial translocation following exercise in ME/CFS patients that may account for the profound post-exertional malaise experienced by ME/CFS patients.",
"title": ""
},
{
"docid": "18153ed3c2141500e0f245e3846df173",
"text": "This paper presents the modeling and simulation of a 25 kV 50 Hz AC traction system using power system block set (PSB) / SIMULINK software package. The three-phase system with substations, track section with rectifier-fed DC locomotives and a detailed traction load are included in the model. The model has been used to study the effect of loading and fault conditions in 25 kV AC traction. The relay characteristic proposed is a combination of two quadrilaterals in the X-R plane. A brief summary of the hardware set-up used to implement and test the relay characteristic using a Texas Instruments TMS320C50 digital signal processor (DSP) has also been presented.",
"title": ""
},
{
"docid": "1690778a3ccfa6d0bf93a848a19e57e3",
"text": "F a l l 1. Frau H., Hausfrau, 45 Jahre. Mutter yon 4 Kindern. Lungentuberkulose I. Grades; Tabes dorsalis. S t a t u s beim Eintr i t t : Kleine Frau yon mittlerem Ernii, hrungszustand. Der Thorax ist schleeht entwickelt; Supraund Infraklavikulargruben sind beiderseits tier eingesunken. Die Briiste sind klein, schlaff und h~ngen herunter. Die Mammillen sind sehr stark entwickelt. I. R i i n t g e n b i l d vom 15. 6. 1921. Dorso-ventrale Aufnahme. Es zeigt uns einen schmalen, schlecht entwickelten Thorax. Die I. C. R. sind auf der 1. Seite schm~ler als r. L i n k s : ])er Hilus zeigt einige kleine Schatten, yon denen aus feine Strange nach oben und nach unten verlaufen. Abw~rts neben dem Herzschatten zieht ein derberer Strang. Auf der V. Rippe vorn, ziemlich genau in der Mitre zwischen Wirbelsi~ule und lateraler Thoraxwand findet sich ein fast kreisAbb. 1. runder Schatten yon 1,1 cm Durchmesser. Der Schatten iiberragt die R/~nder der V. Rippe nicht. Um diesen Schatten herum verl~uft ein ca. 1 mm breiter hellerer ringfSrmiger Streifen, auf den nach aul~en der Rippenschatten folgt. Zwei Querfinger unterhalb dieses kleinen Schattens ist der untere Rand der Mamma deutlich sichtbar. ]:)as H e r z ist nach beiden Seiten verbreitert . R e c h t s : Die Spitze ist leicht abgeschattet, der YIilus ausgepr~gter als 1. Naeh unten ziehen einige feine Str/~nge. Im Schatten der V. Rippe vorn finder sieh wie 1. ungef~hr in dvr Mitre zwischen Wirbelsiiule und lateraler Thoraxwand ein dem linksseitigen Schatten entsprechender vollkommen kreisrunder Fleck mit dem ])urchmesser 1,2 cm, der die Rippenr/s nicht iiberragt. Um ihn herum zieht sich ein hellerer Ring, auf den nach aullen der Rippenschatten folgt. Der untere Rand der r. Mamma ist deutlich. W~hrend der 1. Schatten gleichm~flig erscheint, findet sich im Schatten r. in der Mitte eine etwas hellere Partie (Abb. 1).",
"title": ""
},
{
"docid": "7d11d25dc6cd2822d7f914b11b7fe640",
"text": "The authors analyze three critical components in training word embeddings: model, corpus, and training parameters. They systematize existing neural-network-based word embedding methods and experimentally compare them using the same corpus. They then evaluate each word embedding in three ways: analyzing its semantic properties, using it as a feature for supervised tasks, and using it to initialize neural networks. They also provide several simple guidelines for training good word embeddings.",
"title": ""
},
{
"docid": "d18181640e98086732e5f32682e12127",
"text": "This paper proposes a novel context-aware joint entity and word-level relation extraction approach through semantic composition of words, introducing a Table Filling Multi-Task Recurrent Neural Network (TF-MTRNN) model that reduces the entity recognition and relation classification tasks to a table-filling problem and models their interdependencies. The proposed neural network architecture is capable of modeling multiple relation instances without knowing the corresponding relation arguments in a sentence. The experimental results show that a simple approach of piggybacking candidate entities to model the label dependencies from relations to entities improves performance. We present state-of-the-art results with improvements of 2.0% and 2.7% for entity recognition and relation classification, respectively on CoNLL04 dataset.",
"title": ""
},
{
"docid": "f16d93249254118060ce81b2f92faca5",
"text": "Radiologists are critically interested in promoting best practices in medical imaging, and to that end, they are actively developing tools that will optimize terminology and reporting practices in radiology. The RadLex® vocabulary, developed by the Radiological Society of North America (RSNA), is intended to create a unifying source for the terminology that is used to describe medical imaging. The RSNA Reporting Initiative has developed a library of reporting templates to integrate reusable knowledge, or meaning, into the clinical reporting process. This report presents the initial analysis of the intersection of these two major efforts. From 70 published radiology reporting templates, we extracted the names of 6,489 reporting elements. These terms were reviewed in conjunction with the RadLex vocabulary and classified as an exact match, a partial match, or unmatched. Of 2,509 unique terms, 1,017 terms (41%) matched exactly to RadLex terms, 660 (26%) were partial matches, and 832 reporting terms (33%) were unmatched to RadLex. There is significant overlap between the terms used in the structured reporting templates and RadLex. The unmatched terms were analyzed using the multidimensional scaling (MDS) visualization technique to reveal semantic relationships among them. The co-occurrence analysis with the MDS visualization technique provided a semantic overview of the investigated reporting terms and gave a metric to determine the strength of association among these terms.",
"title": ""
},
{
"docid": "7256d6c5bebac110734275d2f985ab31",
"text": "The location-based social networks (LBSN) enable users to check in their current location and share it with other users. The accumulated check-in data can be employed for the benefit of users by providing personalized recommendations. In this paper, we propose a context-aware location recommendation system for LBSNs using a random walk approach. Our proposed approach considers the current context (i.e., current social relations, personal preferences and current location) of the user to provide personalized recommendations. We build a graph model of LBSNs for performing a random walk approach with restart. Random walk is performed to calculate the recommendation probabilities of the nodes. A list of locations are recommended to users after ordering the nodes according to the estimated probabilities. We compare our algorithm, CLoRW, with popularity-based, friend-based and expert-based baselines, user-based collaborative filtering approach and a similar work in the literature. According to experimental results, our algorithm outperforms these approaches in all of the test cases.",
"title": ""
},
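A compact sketch of the random-walk-with-restart (RWR) step that underlies the ranking described above. The toy graph, its edge weights, and the restart probability are assumptions for illustration; the paper constructs the graph from users, social relations, preferences and check-in locations.

```python
# Sketch: run a random walk with restart from the querying user's node and rank
# location nodes by their stationary visiting probabilities.
import numpy as np

nodes = ["user", "friend", "cafe", "museum", "park"]
# assumed symmetric adjacency (edge weights) of a tiny LBSN graph
A = np.array([[0, 1, 1, 0, 1],
              [1, 0, 1, 1, 0],
              [1, 1, 0, 0, 1],
              [0, 1, 0, 0, 1],
              [1, 0, 1, 1, 0]], dtype=float)
P = A / A.sum(axis=0, keepdims=True)        # column-stochastic transition matrix

def rwr(start_idx, alpha=0.15, iters=100):
    r = np.zeros(len(nodes))
    r[start_idx] = 1.0                      # restart distribution encodes the current context
    p = r.copy()
    for _ in range(iters):
        p = (1 - alpha) * P @ p + alpha * r # walk step with restart probability alpha
    return p

scores = rwr(nodes.index("user"))
locations = ["cafe", "museum", "park"]
print(sorted(locations, key=lambda l: -scores[nodes.index(l)]))  # recommendation order
```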
{
"docid": "27195c18451d0e7c7d0ed73bd5af5d44",
"text": "Clustering is a method by which nodes are hierarchically organized on the basis of their relative proximity to one another. Routes can be recorded hierarchically, across clusters, to increase routing flexibility. Hierarchical routing greatly increases the scalability of routing in ad hoc networks by increasing the robustness of routes. This paper presents the Adaptive Routing using Clusters (ARC) protocol, a protocol that creates a cluster hierarchy composed of cluster leaders and gateway nodes to interconnect clusters. ARC introduces a new algorithm for cluster leader revocation that eliminates the ripple effect caused by leadership changes. Further, ARC utilizes a limited broadcast algorithm for reducing the impact of network floods. The performance of ARC is evaluated by comparing it both with other clustering schemes and with an on-demand ad hoc routing protocol. It is shown that the cluster topology created by ARC is more stable than that created by other clustering algorithms and that the use of ARC can result in throughput increases of over 100%. Copyright 2002 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "4795eb084a07f619ebdede2fb6915f71",
"text": "This paper reports on a Gallium Nitride High Electron Mobility Transistor (GaN HEMT) Monolithic Microwave Integrated Circuit (MMIC) high power amplifier (HPA), which features high power and high gain over C-Ku band 115 % relative bandwidth. A C-Ku band (6 ∼ 18 GHz) GaN HEMT MMIC amplifier was manufactured and measured. The circuit dimension is 4.8 mm by 4 mm. The fabricated MMIC HPA derived an averaged output power of 20 W with averaged power gain of 9.6 dB over C-Ku band. The output power is state-of-the-art output power for GaN HEMT MMIC amplifiers with more than 100 % relative bandwidth and up to Ku band operation frequency.",
"title": ""
},
{
"docid": "f827c29bb9dd6073e626b7457775000c",
"text": "Inter vehicular communication is a technology where vehicles act as different nodes to form a network. In a vehicular network different vehicles communicate among each other via wireless access .Authentication is very crucial security service for inter vehicular communication (IVC) in Vehicular Information Network. It is because, protecting vehicles from any attempt to cause damage (misuse) to their private data and the attacks on their privacy. In this survey paper, we investigate the authentication issues for vehicular information network architecture based on the communication principle of named data networking (NDN). This paper surveys the most emerging paradigm of NDN in vehicular information network. So, we aims this survey paper helps to improve content naming, addressing, data aggregation and mobility for IVC in the vehicular information network.",
"title": ""
}
] |
scidocsrr
|
2aca62beb39f20ebd65fd85820d46ab5
|
Does the Directivity of a Virtual Agent's Speech Influence the Perceived Social Presence?
|
[
{
"docid": "18a985c7960ee6c94f3f8bde503c07ce",
"text": "Computer-controlled, human-like virtual agents (VAs), are often embedded into immersive virtual environments (IVEs) in order to enliven a scene or to assist users. Certain constraints need to be fulfilled, e.g., a collision avoidance strategy allowing users to maintain their personal space. Violating this flexible protective zone causes discomfort in real-world situations and in IVEs. However, no studies on collision avoidance for small-scale IVEs have been conducted yet. Our goal is to close this gap by presenting the results of a controlled user study in a CAVE. 27 participants were immersed in a small-scale office with the task of reaching the office door. Their way was blocked either by a male or female VA, representing their co-worker. The VA showed different behavioral patterns regarding gaze and locomotion. Our results indicate that participants preferred collaborative collision avoidance: they expect the VA to step aside in order to get more space to pass while being willing to adapt their own walking paths.",
"title": ""
}
] |
[
{
"docid": "6021388395ddd784422a22d30dac8797",
"text": "Introduction: The European Directive 2013/59/EURATOM requires patient radiation dose information to be included in the medical report of radiological procedures. To provide effective communication to the patient, it is necessary to first assess the patient's level of knowledge regarding medical exposure. The goal of this work is to survey patients’ current knowledge level of both medical exposure to ionizing radiation and professional disciplines and communication means used by patients to garner information. Material and Methods: A questionnaire was designed comprised of thirteen questions: 737 patients participated in the survey. The data were analysed based on population age, education, and number of radiological procedures received in the three years prior to survey. Results: A majority of respondents (56.4%) did not know which modality uses ionizing radiation. 74.7% had never discussed with healthcare professionals the risk concerning their medical radiological procedures. 70.1% were not aware of the professionals that have expertise to discuss the use of ionizing radiation for medical purposes, and 84.7% believe it is important to have the radiation dose information stated in the medical report. Conclusion: Patients agree with new regulations that it is important to know the radiation level related to the medical exposure, but there is little awareness in terms of which modalities use X-Rays and the professionals and channels that can help them to better understand the exposure information. To plan effective communication, it is essential to devise methods and adequate resources for key professionals (medical physicists, radiologists, referring physicians) to convey correct and effective information.",
"title": ""
},
{
"docid": "8e9c75f7971d75ed72b97756356e3c2c",
"text": "We present the results from the third shared task on multimodal machine translation. In this task a source sentence in English is supplemented by an image and participating systems are required to generate a translation for such a sentence into German, French or Czech. The image can be used in addition to (or instead of) the source sentence. This year the task was extended with a third target language (Czech) and a new test set. In addition, a variant of this task was introduced with its own test set where the source sentence is given in multiple languages: English, French and German, and participating systems are required to generate a translation in Czech. Seven teams submitted 45 different systems to the two variants of the task. Compared to last year, the performance of the multimodal submissions improved, but text-only systems remain competitive.",
"title": ""
},
{
"docid": "ffb65e7e1964b9741109c335f37ff607",
"text": "To build a redundant medium-voltage converter, the semiconductors must be able to turn OFF different short circuits. The most challenging one is a hard turn OFF of a diode which is called short-circuit type IV. Without any protection measures this short circuit destroys the high-voltage diode. Therefore, a novel three-level converter with an increased short-circuit inductance is used. In this paper several short-circuit measurements on a 6.5 kV diode are presented which explain the effect of the protection measures. Moreover, the limits of the protection scheme are presented.",
"title": ""
},
{
"docid": "479e962b8ed5d1b8f03280b209c27249",
"text": "A feedforward network is proposed which lends itself to cost-effective implementations in digital hardware and has a fast forward-pass capability. It differs from the conventional model in restricting its synapses to the set {−1, 0, 1} while allowing unrestricted offsets. Simulation results on the ‘onset of diabetes’ data set and a handwritten numeral recognition database indicate that the new network, despite having strong constraints on its synapses, has a generalization performance similar to that of its conventional counterpart. I. Hardware Implementation Ease of hardware implementation is the key feature that distinguishes the feedforward network from competing statistical and machine learning techniques. The most distinctive characteristic of the graph of that network is its homogeneous modularity. Because of its modular architecture, the natural implementation of this network is a parallel one, whether in software or in hardware. The digital, electronic implementation holds considerable interest – the modular architecture of the feedforward network is well matched with VLSI design tools and therefore lends itself to cost-effective mass production. There is, however, a hitch which makes this union between the feedforward network and digital hardware far from ideal: the network parameters (weights) and its internal functions (dot product, activation functions) are inherently analog. It is too much to expect a network trained in an analog (or high-resolution digital) environment to behave satisfactorily when transplanted into typically low-resolution hardware. Use of the digital approximation of a continuous activation function, and/or range-limiting of weights should, in general, lead to an unsatisfactory approximation. The solution to this problem may lie in a bottom-up approach – instead of trying to fit a trained, but inherently analog network in digital hardware, train the network in such a way that it is suitable for direct digital implementation after training. This approach is the basis of the network proposed here. This network, with synapses from {−1, 0, 1} and continuous offsets, can be formed without using a conventional multiplier. This reduction in complexity, plus the fact that all synapses require no more than a single bit each for storage, makes these networks very attractive. It is possible that the severity of the {−1, 0, 1} restric1Offsets are also known as thresholds as well as biases. 2A zero-valued synapse indicates the absence of a synapse! tion may weaken the approximation capability of this network, however our experiments on classification tasks indicate otherwise. Comfort is also provided by a result on approximation in C(R) [4]. That result, the Multiplier-Free Network (MFN) existence theorem, guarantees that networks with input-layer synapses from the set {−1, 1}, no output-layer synapses, unrestricted offsets, and a single hidden layer of neurons requiring only sign adjustment, addition, and hyperbolic tangent activation functions, can approximate all functions of one variable with any desired accuracy. The constraints placed upon the network weights may result in an increase in the necessary number of hidden neurons required to achieve a given degree of accuracy on most learning tasks. It should also be noted that the hardware implementation benefits are valid only when the MFN has been trained, as the learning task still requires high-resolution arithmetic. This makes the MFN unsuitable for in-situ learning. 
Moreover, high-resolution offsets and activation function are required during training and for the trained network. II. Approximation in C(R) Consider the function f̂ :",
"title": ""
},
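To make the multiplier-free idea in the passage above concrete, here is a small sketch of a forward pass in which every synapse is −1, 0 or +1, so the usual dot product collapses into signed additions. The layer sizes, weights and offsets are arbitrary examples, not the trained networks or the exact architecture of the existence theorem reported in the paper.

```python
# Sketch: a multiplier-free forward pass. Each synapse in {-1, 0, +1} turns the
# multiply-accumulate into add / skip / subtract; offsets remain continuous.
import numpy as np

def neuron(x, w, offset):
    acc = offset
    for xi, wi in zip(x, w):
        if wi == 1:
            acc += xi      # +1 synapse: add the input
        elif wi == -1:
            acc -= xi      # -1 synapse: subtract the input
        # wi == 0: no synapse, input is skipped
    return np.tanh(acc)

def mfn_forward(x, hidden_w, hidden_b, out_signs, out_b):
    h = [neuron(x, w, b) for w, b in zip(hidden_w, hidden_b)]
    acc = out_b
    for hi, si in zip(h, out_signs):     # output stage: sign adjustment and addition only
        acc += hi if si == 1 else -hi
    return acc

rng = np.random.default_rng(2)
hidden_w = rng.choice([-1, 0, 1], size=(4, 3))   # 4 hidden neurons, 3 inputs (assumed sizes)
hidden_b = rng.normal(size=4)                    # continuous offsets
out_signs = rng.choice([-1, 1], size=4)
print(mfn_forward([0.2, -0.5, 0.9], hidden_w, hidden_b, out_signs, 0.1))
```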
{
"docid": "6767096adc28681387c77a68a3468b10",
"text": "This study investigates fifty small and medium enterprises by using a survey approach to find out the key factors that are determinants to EDI adoption. Based upon the existing model, the study uses six factors grouped into three categories, namely organizational, environmental and technological aspects. The findings indicate that factors such as perceived benefits government support and management support are significant determinants of EDI adoption. The remaining factors like organizational culture, motivation to use EDI and task variety remain insignificant. Based upon the analysis of data, recommendations are made.",
"title": ""
},
{
"docid": "1ada0fc6b22bba07d9baf4ccab437671",
"text": "Tree-based path planners have been shown to be well suited to solve various high dimensional motion planning problems. Here we present a variant of the Rapidly-Exploring Random Tree (RRT) path planning algorithm that is able to explore narrow passages or difficult areas more effectively. We show that both workspace obstacle information and C-space information can be used when deciding which direction to grow. The method includes many ways to grow the tree, some taking into account the obstacles in the environment. This planner works best in difficult areas when planning for free flying rigid or articulated robots. Indeed, whereas the standard RRT can face difficulties planning in a narrow passage, the tree based planner presented here works best in these areas",
"title": ""
},
{
"docid": "835309dca26f0c3fc5a750f9957092da",
"text": "Offline training and testing are playing an essential role in design and evaluation of intelligent vehicle vision algorithms. Nevertheless, long-term inconvenience concerning traditional image datasets is that manually collecting and annotating datasets from real scenes lack testing tasks and diverse environmental conditions. For that virtual datasets can make up for these regrets. In this paper, we propose to construct artificial scenes for evaluating the visual intelligence of intelligent vehicles and generate a new virtual dataset called “ParallelEye-CS”. First of all, the actual track map data is used to build 3D scene model of Chinese Flagship Intelligent Vehicle Proving Center Area, Changshu. Then, the computer graphics and virtual reality technologies are utilized to simulate the virtual testing tasks according to the Chinese Intelligent Vehicles Future Challenge (IVFC) tasks. Furthermore, the Unity3D platform is used to generate accurate ground-truth labels and change environmental conditions. As a result, we present a viable implementation method for constructing artificial scenes for traffic vision research. The experimental results show that our method is able to generate photorealistic virtual datasets with diverse testing tasks.",
"title": ""
},
{
"docid": "7ba4375393aac729b8f549c1e4109ec2",
"text": "Due to their capacity-achieving property, polar codes have become one of the most attractive channel codes. To date, the successive-cancellation list (SCL) decoding algorithm is the primary approach that can guarantee outstanding error-correcting performance of polar codes. However, the hardware designs of the original SCL decoder have a large silicon area and a long decoding latency. Although some recent efforts can reduce either the area or latency of SCL decoders, these two metrics still cannot be optimized at the same time. This brief, for the first time, proposes a general log-likelihood-ratio (LLR) based SCL decoding algorithm with multibit decision. This new algorithm, referred to as LLR - 2K b-SCL, can determine 2K bits simultaneously for arbitrary K with the use of LLR messages. In addition, a reduced-data-width scheme is presented to reduce the critical path of the sorting block. Then, based on the proposed algorithm, a VLSI architecture of the new SCL decoder is developed. Synthesis results show that, for an example (1024, 512) polar code with list size 4, the proposed LLR - 2K b - SCL decoders achieve a significant reduction in both area and latency as compared to prior works. As a result, the hardware efficiencies of the proposed designs with K = 2 and 3 are 2.33 times and 3.32 times of that of the state-of-the-art works, respectively.",
"title": ""
},
{
"docid": "25c25864ac5584b99aacbda88bda6203",
"text": "Our goal is to be able to build a generative model from a deep neural network architecture to try to create music that has both harmony and melody and is passable as music composed by humans. Previous work in music generation has mainly been focused on creating a single melody. More recent work on polyphonic music modeling, centered around time series probability density estimation, has met some partial success. In particular, there has been a lot of work based off of Recurrent Neural Networks combined with Restricted Boltzmann Machines (RNNRBM) and other similar recurrent energy based models. Our approach, however, is to perform end-to-end learning and generation with deep neural nets alone.",
"title": ""
},
{
"docid": "ac8cef535e5038231cdad324325eaa37",
"text": "There are mainly two types of Emergent Self-Organizing Maps (ESOM) grid structures in use: hexgrid (honeycomb like) and quadgrid (trellis like) maps. In addition to that, the shape of the maps may be square or rectangular. This work investigates the effects of these different map layouts. Hexgrids were found to have no convincing advantage over quadgrids. Rectangular maps, however, are distinctively superior to square maps. Most surprisingly, rectangular maps outperform square maps for isotropic data, i.e. data sets with no particular primary direction.",
"title": ""
},
{
"docid": "38024169edcf1272efc7013b68d1c5cb",
"text": "Fractal dimension measures the geometrical complexity of images. Lacunarity being a measure of spatial heterogeneity can be used to differentiate between images that have similar fractal dimensions but different appearances. This paper presents a method to combine fractal dimension (FD) and lacunarity for better texture recognition. For the estimation of the fractal dimension an improved algorithm is presented. This algorithm uses new box-counting measure based on the statistical distribution of the gray levels of the ‘‘boxes’’. Also for the lacunarity estimation, new and faster gliding-box method is proposed, which utilizes summed area tables and Levenberg–Marquardt method. Methods are tested using Brodatz texture database (complete set), a subset of the Oulu rotation invariant texture database (Brodatz subset), and UIUC texture database (partial). Results from the tests showed that combining fractal dimension and lacunarity can improve recognition of textures. & 2011 Elsevier B.V. All rights reserved.",
"title": ""
},
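For orientation, the snippet below sketches the classic box-counting estimate of fractal dimension on a binary image; the paper's contribution is an improved gray-level box measure and a faster gliding-box lacunarity estimator, neither of which is reproduced here. The test image and the box sizes are placeholders.

```python
# Sketch: plain box counting. Count occupied boxes at several box sizes and take the
# slope of log(count) against log(1/size) as the fractal dimension estimate.
import numpy as np

def box_count_dimension(img, sizes=(2, 4, 8, 16, 32)):
    counts = []
    h, w = img.shape
    for s in sizes:
        occupied = 0
        for i in range(0, h, s):
            for j in range(0, w, s):
                if img[i:i + s, j:j + s].any():   # box contains at least one foreground pixel
                    occupied += 1
        counts.append(occupied)
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

rng = np.random.default_rng(1)
img = rng.random((64, 64)) > 0.7                  # assumed binary test pattern
print(box_count_dimension(img))
```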
{
"docid": "779280c897c09ce0017dfc7848f803b7",
"text": "With increasing storage capacities on current PCs, searching the World Wide Web has ironically become more efficient than searching one’s own personal computer. The recently introduced desktop search engines are a first step towards coping with this problem, but not yet a satisfying solution. The reason for that is that desktop search is actually quite different from its web counterpart. Documents on the desktop are not linked to each other in a way comparable to the web, which means that result ranking is poor or even inexistent, because algorithms like PageRank cannot be used for desktop search. On the other hand, desktop search could potentially profit from a lot of implicit and explicit semantic information available in emails, folder hierarchies, browser cache contexts and others. This paper investigates how to extract and store these activity based context information explicitly as RDF metadata and how to use them, as well as additional background information and ontologies, to enhance desktop search.",
"title": ""
},
{
"docid": "347e7b80b2b0b5cd5f0736d62fa022ae",
"text": "This article presents the results of an interview study on how people perceive and play social network games on Facebook. During recent years, social games have become the biggest genre of games if measured by the number of registered users. These games are designed to cater for large audiences in their design principles and values, a free-to-play revenue model and social network integration that make them easily approachable and playable with friends. Although these games have made the headlines and have been seen to revolutionize the game industry, we still lack an understanding of how people perceive and play them. For this article, we interviewed 18 Finnish Facebook users from a larger questionnaire respondent pool of 134 people. This study focuses on a user-centric approach, highlighting the emergent experiences and the meaning-making of social games players. Our findings reveal that social games are usually regarded as single player games with a social twist, and as suffering partly from their design characteristics, while still providing a wide spectrum of playful experiences for different needs. The free-to-play revenue model provides an easy access to social games, but people disagreed with paying for additional content for several reasons.",
"title": ""
},
{
"docid": "c2845a8a4f6c2467c7cd3a1a95a0ca37",
"text": "In this report I introduce ReSuMe a new supervised learning method for Spiking Neural Networks. The research on ReSuMe has been primarily motivated by the need of inventing an efficient learni ng method for control of movement for the physically disabled. Howeve r, thorough analysis of the ReSuMe method reveals its suitability not on ly to the task of movement control, but also to other real-life applicatio ns including modeling, identification and control of diverse non-statio nary, nonlinear objects. ReSuMe integrates the idea of learning windows, known from t he spikebased Hebbian rules, with a novel concept of remote supervis ion. General overview of the method, the basic definitions, the netwo rk architecture and the details of the learning algorithm are presented . The properties of ReSuMe such as locality, computational simplicity a nd the online processing suitability are discussed. ReSuMe learning abi lities are illustrated in a verification experiment.",
"title": ""
},
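A rough, discrete-time sketch of the remote-supervision idea described above: weights move toward making the output neuron spike when the remote teacher signal spikes and away from spiking otherwise, with each update gated by an exponentially decaying trace of presynaptic activity. The window shape, the constants `a`, `A`, `tau` and the 0/1 spike-train encoding are illustrative assumptions, not the exact rule from the report.

```python
import numpy as np

def resume_like_update(w, pre_spikes, teacher_spikes, out_spikes,
                       a=0.01, A=0.05, tau=5.0):
    """One pass of a simplified ReSuMe-style weight update.

    pre_spikes: (T, n_in) 0/1 array; teacher_spikes, out_spikes: (T,) 0/1 arrays.
    """
    T, n_in = pre_spikes.shape
    trace = np.zeros(n_in)                     # decaying trace of presynaptic spikes
    for t in range(T):
        trace = trace * np.exp(-1.0 / tau) + pre_spikes[t]
        err = teacher_spikes[t] - out_spikes[t]   # remote supervision signal
        if err != 0:
            # non-Hebbian term `a` plus a Hebbian term through the learning window
            w += err * (a + A * trace)
    return w

rng = np.random.default_rng(1)
w = rng.normal(0, 0.1, size=20)
w = resume_like_update(w, rng.random((100, 20)) < 0.1,
                       (rng.random(100) < 0.05).astype(int),
                       (rng.random(100) < 0.05).astype(int))
```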
{
"docid": "9a75902f8e91aaabaca6e235a91c33f3",
"text": "This article presents and discusses the implementation of a direct volume rendering system for the Web, which articulates a large portion of the rendering task in the client machine. By placing the rendering emphasis in the local client, our system takes advantage of its power, while at the same time eliminates processing from unreliable bottlenecks (e.g. network). The system developed articulates in efficient manner the capabilities of the recently released WebGL standard, which makes available the accelerated graphic pipeline (formerly unusable). The dependency on specially customized hardware is eliminated, and yet efficient rendering rates are achieved. The Web increasingly competes against desktop applications in many scenarios, but the graphical demands of some of the applications (e.g. interactive scientific visualization by volume rendering), have impeded their successful settlement in Web scenarios. Performance, scalability, accuracy, security are some of the many challenges that must be solved before visual Web applications popularize. In this publication we discuss both performance and scalability of the volume rendering by WebGL ray-casting in two different but challenging application domains: medical imaging and radar meteorology.",
"title": ""
},
{
"docid": "a4fb1919a1bf92608a55bc3feedf897d",
"text": "We develop an algebraic framework, Logic Programming Doctrines, for the syntax, proof theory, operational semantics and model theory of Horn Clause logic programming based on indexed premonoidal categories. Our aim is to provide a uniform framework for logic programming and its extensions capable of incorporating constraints, abstract data types, features imported from other programming language paradigms and a mathematical description of the state space in a declarative manner. We define a new way to embed information about data into logic programming derivations by building a sketch-like description of data structures directly into an indexed category of proofs. We give an algebraic axiomatization of bottom-up semantics in this general setting, describing categorical models as fixed points of a continuous operator. © 2009 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "dc81f63623020220eba19f4f6ae545e0",
"text": "In this paper, a new technique for human identification task based on heart sound signals has been proposed. It utilizes a feature level fusion technique based on canonical correlation analysis. For this purpose a robust pre-processing scheme based on the wavelet analysis of the heart sounds is introduced. Then, three feature vectors are extracted depending on the cepstral coefficients of different frequency scale representation of the heart sound namely; the mel, bark, and linear scales. Among the investigated feature extraction methods, experimental results show that the mel-scale is the best with 94.4% correct identification rate. Using a hybrid technique combining MFCC and DWT, a new feature vector is extracted improving the system's performance up to 95.12%. Finally, canonical correlation analysis is applied for feature fusion. This improves the performance of the proposed system up to 99.5%. The experimental results show significant improvements in the performance of the proposed system over methods adopting single feature extraction.",
"title": ""
},
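Feature-level fusion with canonical correlation analysis, as used in that system, can be prototyped with scikit-learn: project two feature sets (for instance MFCC-based and wavelet-based vectors) onto maximally correlated components and then concatenate or sum the projections before classification. Array names, dimensions and the number of components below are placeholders, not the paper's configuration.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
X_mfcc = rng.normal(size=(200, 13))   # hypothetical MFCC features per heart sound
X_dwt = rng.normal(size=(200, 24))    # hypothetical wavelet-based features

cca = CCA(n_components=8)
U, V = cca.fit_transform(X_mfcc, X_dwt)   # canonical variates of each view

# Two common fusion rules: concatenation or summation of the variates.
fused_concat = np.hstack([U, V])
fused_sum = U + V
print(fused_concat.shape, fused_sum.shape)   # (200, 16) (200, 8)
```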
{
"docid": "e5c625ceaf78c66c2bfb9562970c09ec",
"text": "A continuing question in neural net research is the size of network needed to solve a particular problem. If training is started with too small a network for the problem no learning can occur. The researcher must then go through a slow process of deciding that no learning is taking place, increasing the size of the network and training again. If a network that is larger than required is used, then processing is slowed, particularly on a conventional von Neumann computer. An approach to this problem is discussed that is based on learning with a net which is larger than the minimum size network required to solve the problem and then pruning the solution network. The result is a small, efficient network that performs as well or better than the original which does not give a complete answer to the question, since the size of the initial network is still largely based on guesswork but it gives a very useful partial answer and sheds some light on the workings of a neural network in the process.<<ETX>>",
"title": ""
},
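The train-large-then-prune strategy described there is most often approximated today by magnitude pruning. The sketch below zeroes the smallest-magnitude weights of a trained dense layer; it illustrates the general idea only and is not the specific pruning criterion of that early work. The layer shape and keep fraction are made up.

```python
import numpy as np

def magnitude_prune(weights, keep_fraction=0.3):
    """Zero out all but the largest-magnitude weights of a layer."""
    flat = np.abs(weights).ravel()
    k = max(1, int(keep_fraction * flat.size))
    threshold = np.partition(flat, -k)[-k]        # k-th largest magnitude
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 32))                     # hypothetical trained layer
W_pruned, mask = magnitude_prune(W, keep_fraction=0.3)
print(mask.mean())                                # fraction of weights kept
```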
{
"docid": "bf15c81fc0cdf463dd2ef81f54f097f8",
"text": "We present an approach to multi-target tracking that has expressive potential beyond the capabilities of chain-shaped hidden Markov models, yet has significantly reduced complexity. Our framework, which we call tracking-by-selection, is similar to tracking-by-detection in that it separates the tasks of detection and tracking, but it shifts temporal reasoning from the tracking stage to the detection stage. The core feature of tracking-by-selection is that it reasons about path hypotheses that traverse the entire video instead of a chain of single-frame object hypotheses. A traditional chain-shaped tracking-by-detection model is only able to promote consistency between one frame and the next. In tracking-by-selection, path hypotheses exist across time, and encouraging long-term temporal consistency is as simple as rewarding path hypotheses with consistent image features. One additional advantage of tracking-by-selection is that it results in a dramatically simplified model that can be solved exactly. We adapt an existing tracking-by-detection model to the tracking-by-selection framework, and show improved performance on a challenging dataset.",
"title": ""
}
] |
scidocsrr
|
a5c96ee7a17e998288e8735bc7bcc63f
|
Human-Intent Detection and Physically Interactive Control of a Robot Without Force Sensors
|
[
{
"docid": "56316a77e260d8122c4812d684f4d223",
"text": "Manipulation fundamentally requires a manipulator to be mechanically coupled to the object being manipulated. A consideration of the physical constraints imposed by dynamic interaction shows that control of a vector quantity such as position or force is inadequate and that control of the manipulator impedance is also necessary. Techniques for control of manipulator behaviour are presented which result in a unified approach to kinematically constrained motion, dynamic interaction, target acquisition and obstacle avoidance.",
"title": ""
},
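The core of the impedance-control idea in that passage is a target spring–damper relation between motion error and commanded force, rather than pure position or force control. Below is a one-degree-of-freedom numeric sketch with invented stiffness, damping and contact values; a real manipulator implementation would work in joint or task space with the full dynamics.

```python
def impedance_force(x, xd, x_ref, xd_ref, K=400.0, B=40.0):
    """Commanded force of a 1-DOF impedance controller: F = K*(x_ref - x) + B*(xd_ref - xd)."""
    return K * (x_ref - x) + B * (xd_ref - xd)

# Simulate a 1 kg end-effector pushed toward x_ref = 0.1 m into a stiff wall at 0.08 m.
dt, m, k_env = 0.001, 1.0, 2000.0
x, xd = 0.0, 0.0
for _ in range(2000):
    f_env = -k_env * (x - 0.08) if x > 0.08 else 0.0   # hypothetical contact model
    f_cmd = impedance_force(x, xd, 0.1, 0.0)
    xd += (f_cmd + f_env) / m * dt                     # semi-implicit Euler step
    x += xd * dt
print(round(x, 4))   # settles near 0.083, where controller and contact stiffness balance
```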
{
"docid": "9b1d9cc24177c040d165bdf1fee1459e",
"text": "This paper addresses the field of humanoid and personal robotics—its objectives, motivations, and technical problems. The approach described in the paper is based on the analysis of humanoid and personal robots as an evolution from industrial to advanced and service robotics driven by the need for helpful machines, as well as a synthesis of the dream of replicating humans. The first part of the paper describes the development of anthropomorphic components for humanoid robots, with particular regard to anthropomorphic sensors for vision and touch, an eight-d.o.f. arm, a three-fingered hand with sensorized fingertips, and control schemes for grasping. Then, the authors propose a user-oriented designmethodology for personal robots, anddescribe their experience in the design, development, and validation of a real personal robot composed of a mobile unit integrating some of the anthropomorphic components introduced previously and aimed at operating in a distributedworking environment. Based on the analysis of experimental results, the authors conclude that humanoid robotics is a tremendous and attractive technical and scientific challenge for robotics research. The real utility of humanoids has still to be demonstrated, but personal assistance can be envisaged as a promising application domain. Personal robotics also poses difficult technical problems, especially related to the need for achieving adequate safety, proper human–robot interaction, useful performance, and affordable cost. When these problems are solved, personal robots will have an excellent chance for significant application opportunities, especially if integrated into future home automation systems, and if supported by the availability of humanoid robots. © 2001 John Wiley & Sons, Inc.",
"title": ""
}
] |
[
{
"docid": "4c48aa985223ae9317c5f73361b5e7a3",
"text": "Low-dropout voltage regulators (LDOs) have been extensively used on-chip to supply voltage for various circuit blocks. Digital LDOs (DLDO) have recently attracted circuit designers for their low voltage operating capability and load current scalability. Existing DLDO techniques suffer from either poor transient performance due to slow digital control loop or poor DC load regulation due to low loop gain. A dual-loop architecture to improve the DC load regulation and transient performance is proposed in this work. The proposed regulator uses a fast control loop for improved transient response and an analog assisted dynamic reference correction loop for an improved DC load regulation. The design achieved a DC load regulation of 0.005mV/mA and a settling time of 139ns while regulating loads up to 200mA. The proposed DLDO is designed in 28nm FD-SOI technology with a 0.027mm2 active area.",
"title": ""
},
{
"docid": "b252aea38a537a22ab34fdf44e9443d2",
"text": "The objective of this study is to describe the case of a patient presenting advanced epidermoid carcinoma of the penis associated to myiasis. A 41-year-old patient presenting with a necrotic lesion of the distal third of the penis infested with myiasis was attended in the emergency room of our hospital and was submitted to an urgent penectomy. This is the first case of penile cancer associated to myiasis described in the literature. This case reinforces the need for educative campaigns to reduce the incidence of this disease in developing countries.",
"title": ""
},
{
"docid": "8c3ecd27a695fef2d009bbf627820a0d",
"text": "This paper presents a novel attention mechanism to improve stereo-vision based object recognition systems in terms of recognition performance and computational efficiency at the same time. We utilize the Stixel World, a compact medium-level 3D representation of the local environment, as an early focus-of-attention stage for subsequent system modules. In particular, the search space of computationally expensive pattern classifiers is significantly narrowed down. We explicitly couple the 3D Stixel representation with prior knowledge about the object class of interest, i.e. 3D geometry and symmetry, to precisely focus processing on well-defined local regions that are consistent with the environment model. Experiments are conducted on large real-world datasets captured from a moving vehicle in urban traffic. In case of vehicle recognition as an experimental testbed, we demonstrate that the proposed Stixel-based attention mechanism significantly reduces false positive rates at constant sensitivity levels by up to a factor of 8 over state-of-the-art. At the same time, computational costs are reduced by more than an order of magnitude.",
"title": ""
},
{
"docid": "d166f4cd01d22d7143487b691138023c",
"text": "Although Bitcoin is often perceived to be an anonymous currency, research has shown that a user’s Bitcoin transactions can be linked to compromise the user’s anonymity. We present solutions to the anonymity problem for both transactions on Bitcoin’s blockchain and off the blockchain (in so called micropayment channel networks). We use an untrusted third party to issue anonymous vouchers which users redeem for Bitcoin. Blind signatures and Bitcoin transaction contracts (aka smart contracts) ensure the anonymity and fairness during the bitcoin ↔ voucher exchange. Our schemes are practical, secure and anonymous.",
"title": ""
},
{
"docid": "2bd8a66a3e3cfafc9b13fd7ec47e86fc",
"text": "Psidium guajava Linn. (Guava) is used not only as food but also as folk medicine in subtropical areas around the world because of its pharmacologic activities. In particular, the leaf extract of guava has traditionally been used for the treatment of diabetes in East Asia and other countries. Many pharmacological studies have demonstrated the ability of this plant to exhibit antioxidant, hepatoprotective, anti-allergy, antimicrobial, antigenotoxic, antiplasmodial, cytotoxic, antispasmodic, cardioactive, anticough, antidiabetic, antiinflamatory and antinociceptive activities, supporting its traditional uses. Suggesting a wide range of clinical applications for the treatment of infantile rotaviral enteritis, diarrhoea and diabetes.",
"title": ""
},
{
"docid": "41aa05455471ecd660599f4ec285ff29",
"text": "The recent progress of human parsing techniques has been largely driven by the availability of rich data resources. In this work, we demonstrate some critical discrepancies between the current benchmark datasets and the real world human parsing scenarios. For instance, all the human parsing datasets only contain one person per image, while usually multiple persons appear simultaneously in a realistic scene. It is more practically demanded to simultaneously parse multiple persons, which presents a greater challenge to modern human parsing methods. Unfortunately, absence of relevant data resources severely impedes the development of multiple-human parsing methods. To facilitate future human parsing research, we introduce the Multiple-Human Parsing (MHP) dataset, which contains multiple persons in a real world scene per single image. The MHP dataset contains various numbers of persons (from 2 to 16) per image with 18 semantic classes for each parsing annotation. Persons appearing in the MHP images present sufficient variations in pose, occlusion and interaction. To tackle the multiple-human parsing problem, we also propose a novel Multiple-Human Parser (MH-Parser), which considers both the global context and local cues for each person in the parsing process. The model is demonstrated to outperform the naive “detect-and-parse” approach by a large margin, which will serve as a solid baseline and help drive the future research in real world human parsing.",
"title": ""
},
{
"docid": "6154efdd165c7323c1ba9ec48e63cfc6",
"text": "A RANSAC based procedure is described for detecting inliers corresponding to multiple models in a given set of data points. The algorithm we present in this paper (called multiRANSAC) on average performs better than traditional approaches based on the sequential application of a standard RANSAC algorithm followed by the removal of the detected set of inliers. We illustrate the effectiveness of our approach on a synthetic example and apply it to the problem of identifying multiple world planes in pairs of images containing dominant planar structures.",
"title": ""
},
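For contrast with the simultaneous multiRANSAC strategy described above, the sequential baseline it improves upon fits one model with standard RANSAC, removes its inliers, and repeats. A line-fitting sketch of that baseline is below; the thresholds, iteration counts and two-model setting are arbitrary choices for illustration.

```python
import numpy as np

def ransac_line(points, n_iter=500, thresh=0.05, rng=None):
    """Fit a line through 2-D points by vanilla RANSAC; return the inlier mask."""
    rng = rng or np.random.default_rng()
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iter):
        p, q = points[rng.choice(len(points), 2, replace=False)]
        d = q - p
        n = np.array([-d[1], d[0]])
        n = n / (np.linalg.norm(n) + 1e-12)          # unit normal of the candidate line
        dist = np.abs((points - p) @ n)
        inliers = dist < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers

def sequential_multi_ransac(points, n_models=2):
    """Greedy baseline: fit a model, remove its inliers, repeat."""
    remaining = np.ones(len(points), dtype=bool)
    models = []
    for _ in range(n_models):
        idx = np.flatnonzero(remaining)
        inl = ransac_line(points[idx])
        models.append(idx[inl])
        remaining[idx[inl]] = False
    return models
```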
{
"docid": "b27224825bb28b9b8d0eea37f8900d42",
"text": "The use of Convolutional Neural Networks (CNN) in natural im age classification systems has produced very impressive results. Combined wit h the inherent nature of medical images that make them ideal for deep-learning, fu rther application of such systems to medical image classification holds much prom ise. However, the usefulness and potential impact of such a system can be compl etely negated if it does not reach a target accuracy. In this paper, we present a s tudy on determining the optimum size of the training data set necessary to achiev e igh classification accuracy with low variance in medical image classification s ystems. The CNN was applied to classify axial Computed Tomography (CT) imag es into six anatomical classes. We trained the CNN using six different sizes of training data set ( 5, 10, 20, 50, 100, and200) and then tested the resulting system with a total of 6000 CT images. All images were acquired from the Massachusetts G eneral Hospital (MGH) Picture Archiving and Communication System (PACS). U sing this data, we employ the learning curve approach to predict classificat ion ccuracy at a given training sample size. Our research will present a general me thodology for determining the training data set size necessary to achieve a cert in target classification accuracy that can be easily applied to other problems within such systems.",
"title": ""
},
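The learning-curve extrapolation mentioned in that abstract is commonly modeled as an inverse power law, accuracy(n) ≈ a − b·n^(−c), fitted to accuracies measured at small training sizes and then evaluated at larger n. The snippet shows that generic recipe with invented accuracy numbers; it is not the exact fitting procedure used in the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(n, a, b, c):
    """Inverse power-law learning curve: accuracy as a function of training size."""
    return a - b * np.power(n, -c)

sizes = np.array([5, 10, 20, 50, 100, 200], dtype=float)
acc = np.array([0.55, 0.63, 0.72, 0.81, 0.86, 0.90])      # hypothetical measurements

params, _ = curve_fit(power_law, sizes, acc, p0=[0.95, 1.0, 0.5],
                      bounds=([0, 0, 0], [1, 10, 2]))
for n in (500, 1000):
    print(n, round(float(power_law(n, *params)), 3))       # extrapolated accuracy
```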
{
"docid": "061e10ca5d2b4e807878e5eec0827b28",
"text": "Uplift modeling is a machine learning technique that aims to model treatment effects heterogeneity. It has been used in business and health sectors to predict the effect of a specific action on a given individual. Despite its advantages, uplift models show high sensitivity to noise and disturbance, which leads to unreliable results. In this paper we show different approaches to address the problem of uplift modeling, we demonstrate how disturbance in data can affect uplift measurement. We propose a new approach, we call it Pessimistic Uplift Modeling, that minimizes disturbance effects. We compared our approach with the existing uplift methods, on simulated and real datasets. The experiments show that our approach outperforms the existing approaches, especially in the case of high noise data environment.",
"title": ""
},
{
"docid": "f292b8666eb78e4d881777fee35123f7",
"text": "Abstract. We propose an approach to address data uncertainty for discrete optimization and network flow problems that allows controlling the degree of conservatism of the solution, and is computationally tractable both practically and theoretically. In particular, when both the cost coefficients and the data in the constraints of an integer programming problem are subject to uncertainty, we propose a robust integer programming problem of moderately larger size that allows controlling the degree of conservatism of the solution in terms of probabilistic bounds on constraint violation. When only the cost coefficients are subject to uncertainty and the problem is a 0 − 1 discrete optimization problem on n variables, then we solve the robust counterpart by solving at most n + 1 instances of the original problem. Thus, the robust counterpart of a polynomially solvable 0 − 1 discrete optimization problem remains polynomially solvable. In particular, robust matching, spanning tree, shortest path, matroid intersection, etc. are polynomially solvable. We also show that the robust counterpart of an NP -hard α-approximable 0 − 1 discrete optimization problem, remains α-approximable. Finally, we propose an algorithm for robust network flows that solves the robust counterpart by solving a polynomial number of nominal minimum cost flow problems in a modified network.",
"title": ""
},
{
"docid": "dd52742343462b3106c18274c143928b",
"text": "This paper presents a descriptive account of the social practices surrounding the iTunes music sharing of 13 participants in one organizational setting. Specifically, we characterize adoption, critical mass, and privacy; impression management and access control; the musical impressions of others that are created as a result of music sharing; the ways in which participants attempted to make sense of the dynamic system; and implications of the overlaid technical, musical, and corporate topologies. We interleave design implications throughout our results and relate those results to broader themes in a music sharing design space.",
"title": ""
},
{
"docid": "72d51fd4b384f4a9c3f6fe70606ab120",
"text": "Cloud Computing is a flexible, cost-effective, and proven delivery platform for providing business or consumer IT services over the Internet. However, cloud Computing presents an added level of risk because essential services are often outsourced to a third party, which makes it harder to maintain data security and privacy, support data and service availability, and demonstrate compliance. Cloud Computing leverages many technologies (SOA, virtualization, Web 2.0); it also inherits their security issues, which we discuss here, identifying the main vulnerabilities in this kind of systems and the most important threats found in the literature related to Cloud Computing and its environment as well as to identify and relate vulnerabilities and threats with possible solutions.",
"title": ""
},
{
"docid": "471579f955f8b68a357c8780a7775cc9",
"text": "In addition to practitioners who care for male patients, with the increased use of high-resolution anoscopy, practitioners who care for women are seeing more men in their practices as well. Some diseases affecting the penis can impact on their sexual partners. Many of the lesions and neoplasms of the penis occur on the vulva as well. In addition, there are common and rare lesions unique to the penis. A review of the scope of penile lesions and neoplasms that may present in a primary care setting is presented to assist in developing a differential diagnosis if such a patient is encountered, as well as for practitioners who care for their sexual partners. A familiarity will assist with recognition, as well as when consultation is needed.",
"title": ""
},
{
"docid": "47e11b1d734b1dcacc182e55d378f2a2",
"text": "Experience replay plays an important role in the success of deep reinforcement learning (RL) by helping stabilize the neural networks. It has become a new norm in deep RL algorithms. In this paper, however, we showcase that varying the size of the experience replay buffer can hurt the performance even in very simple tasks. The size of the replay buffer is actually a hyper-parameter which needs careful tuning. Moreover, our study of experience replay leads to the formulation of the Combined DQN algorithm, which can significantly outperform primitive DQN in some tasks.",
"title": ""
},
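The "combined" strategy in that abstract is usually described as always including the most recent transition in every sampled mini-batch, which makes the agent less sensitive to an oversized buffer; the sketch below assumes that reading. How the batch is consumed by the DQN update is left out, and the transition tuple layout is a placeholder.

```python
import random
from collections import deque

class CombinedReplayBuffer:
    """Uniform replay that always appends the newest transition to the batch."""
    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)

    def push(self, transition):                 # (state, action, reward, next_state, done)
        self.buffer.append(transition)

    def sample(self, batch_size):
        batch = random.sample(list(self.buffer), min(batch_size - 1, len(self.buffer)))
        batch.append(self.buffer[-1])           # the combined-replay twist
        return batch

buf = CombinedReplayBuffer(capacity=10000)
for t in range(100):
    buf.push((t, 0, 0.0, t + 1, False))
print(len(buf.sample(32)))                      # 32 transitions, newest always included
```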
{
"docid": "efc341c0a3deb6604708b6db361bfba5",
"text": "In recent years, data analysis has become important with increasing data volume. Clustering, which groups objects according to their similarity, has an important role in data analysis. DBSCAN is one of the most effective and popular density-based clustering algorithm and has been successfully implemented in many areas. However, it is a challenging task to determine the input parameter values of DBSCAN algorithm which are neighborhood radius Eps and minimum number of points MinPts. The values of these parameters significantly affect clustering performance of the algorithm. In this study, we propose AE-DBSCAN algorithm which includes a new method to determine the value of neighborhood radius Eps automatically. The experimental evaluations showed that the proposed method outperformed the classical method.",
"title": ""
},
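One widely used way to pick Eps automatically, in the spirit of what that abstract describes, is the sorted k-distance plot: compute each point's distance to its MinPts-th neighbor and look for the knee of the sorted curve. The snippet uses a crude knee heuristic (largest second difference); the actual AE-DBSCAN procedure may differ, and the two-blob data is synthetic.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.neighbors import NearestNeighbors

def estimate_eps(X, min_pts=5):
    """Pick Eps from the sorted k-distance curve (largest second difference)."""
    nn = NearestNeighbors(n_neighbors=min_pts).fit(X)
    dist, _ = nn.kneighbors(X)
    kdist = np.sort(dist[:, -1])                 # distance to the MinPts-th neighbor
    knee = np.argmax(np.diff(kdist, 2)) + 1      # crude knee detection
    return kdist[knee]

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (100, 2)), rng.normal(3, 0.3, (100, 2))])
eps = estimate_eps(X, min_pts=5)
labels = DBSCAN(eps=eps, min_samples=5).fit_predict(X)
print(round(float(eps), 3), len(set(labels) - {-1}))     # chosen eps and cluster count
```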
{
"docid": "7aa5bf782622f2f0247dce09dcb23077",
"text": "In the wake of the digital revolution we will see a dramatic transformation of our economy and societal institutions. While the benefits of this transformation can be massive, there are also tremendous risks. The fundaments of autonomous decision-making, human dignity, and democracy are shaking. After the automation of production processes and vehicle operation, the automation of society is next. This is moving us to a crossroads: we must decide between a society in which the actions are determined in a top-down way and then implemented by coercion or manipulative technologies or a society, in which decisions are taken in a free and participatory way. Modern information and communication systems enable both, but the latter has economic and strategic benefits.",
"title": ""
},
{
"docid": "170e7a72a160951e880f18295d100430",
"text": "In this paper, we introduce an embedding model, named CapsE, exploring a capsule network to model relationship triples (subject, relation, object). Our CapsE represents each triple as a 3-column matrix where each column vector represents the embedding of an element in the triple. This 3-column matrix is then fed to a convolution layer where multiple filters are operated to generate different feature maps. These feature maps are used to construct capsules in the first capsule layer. Capsule layers are connected via dynamic routing mechanism. The last capsule layer consists of only one capsule to produce a vector output. The length of this vector output is used to measure the plausibility of the triple. Our proposed CapsE obtains state-of-the-art link prediction results for knowledge graph completion on two benchmark datasets: WN18RR and FB15k-237, and outperforms strong search personalization baselines on SEARCH17 dataset.",
"title": ""
},
{
"docid": "f96db30bac65af7c9315fae0f9bb7b7e",
"text": "Combining MIMO with OFDM, it is possible to significantly reduce receiver complexity as OFDM greatly simplifies equalization at the receiver. MIMO-OFDM is currently being considered for a number of developing wireless standards; consequently, the study of MIMO-OFDM in realistic environments is of great importance. This paper describes an approach for prototyping a MIMO-OFDM systems using a flexible software defined radio (SDR) system architecture in conjunction with commercially available hardware. An emphasis on software permits a focus on algorithm and system design issues rather than implementation and hardware configuration. The penalty of this flexibility, however, is that the ease of use comes at the expense of overall throughput. To illustrate the benefits of the proposed architecture, applications to MIMO-OFDM system prototyping and preliminary MIMO channel measurements are presented. A detailed description of the hardware is provided along with downloadable software to reproduce the system.",
"title": ""
},
{
"docid": "9e7ec69d26ead38692ee0059980538c8",
"text": "A dynamic control system design has been a great demand in the control engineering community, with many applications particularly in the field of flight control. This paper presents investigations into the development of a dynamic nonlinear inverse-model based control of a twin rotor multi-input multi-output system (TRMS). The TRMS is an aerodynamic test rig representing the control challenges of modern air vehicle. A model inversion control with the developed adaptive model is applied to the system. An adaptive neuro-fuzzy inference system (ANFIS) is augmented with the control system to improve the control response. To demonstrate the applicability of the methods, a simulated hovering motion of the TRMS, derived from experimental data is considered in order to evaluate the tracking properties and robustness capacities of the inverse- model control technique.",
"title": ""
},
{
"docid": "df67da08931ed6d0d100ff857c2b1ced",
"text": "Parallel computers with tens of thousands of processors are typically programmed in a data parallel style, as opposed to the control parallel style used in multiprocessing. The success of data parallel algorithms—even on problems that at first glance seem inherently serial—suggests that this style of programming has much wider applicability than was previously thought.",
"title": ""
}
] |
scidocsrr
|
5cffc236db2765f3925a57401d746f06
|
Quarterly Time-Series Forecasting With Neural Networks
|
[
{
"docid": "ab813ff20324600d5b765377588c9475",
"text": "Estimating the flows of rivers can have significant economic impact, as this can help in agricultural water management and in protection from water shortages and possible flood damage. The first goal of this paper is to apply neural networks to the problem of forecasting the flow of the River Nile in Egypt. The second goal of the paper is to utilize the time series as a benchmark to compare between several neural-network forecasting methods.We compare between four different methods to preprocess the inputs and outputs, including a novel method proposed here based on the discrete Fourier series. We also compare between three different methods for the multistep ahead forecast problem: the direct method, the recursive method, and the recursive method trained using a backpropagation through time scheme. We also include a theoretical comparison between these three methods. The final comparison is between different methods to perform longer horizon forecast, and that includes ways to partition the problem into the several subproblems of forecasting K steps ahead.",
"title": ""
}
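The direct-versus-recursive distinction compared in that abstract can be demonstrated with any regressor. Below, a single one-step model is applied recursively by feeding its own predictions back, while the direct approach trains a separate model per horizon. The lag order, horizon, synthetic series and linear model are placeholder choices, not the paper's setup.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def make_lagged(series, n_lags):
    """Build a lagged design matrix X and one-step-ahead targets y."""
    X = np.column_stack([series[i:len(series) - n_lags + i] for i in range(n_lags)])
    return X, series[n_lags:]

series = np.sin(np.arange(300) * 0.1) + 0.05 * np.random.default_rng(0).normal(size=300)
n_lags, horizon = 12, 5
X, y = make_lagged(series, n_lags)

# Recursive: one one-step model, fed back on its own predictions.
one_step = LinearRegression().fit(X, y)
window = list(series[-n_lags:])
recursive = []
for _ in range(horizon):
    pred = one_step.predict([window[-n_lags:]])[0]
    recursive.append(pred)
    window.append(pred)

# Direct: a separate model trained for each forecast horizon h.
direct = [LinearRegression().fit(X[:len(y) - h], y[h:]).predict([series[-n_lags:]])[0]
          for h in range(horizon)]
print(np.round(recursive, 3), np.round(direct, 3))
```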
] |
[
{
"docid": "72f6f6484499ccaa0188d2a795daa74c",
"text": "Road detection is one of the most important research areas in driver assistance and automated driving field. However, the performance of existing methods is still unsatisfactory, especially in severe shadow conditions. To overcome those difficulties, first we propose a novel shadow-free feature extractor based on the color distribution of road surface pixels. Then we present a road detection framework based on the extractor, whose performance is more accurate and robust than that of existing extractors. Also, the proposed framework has much low-complexity, which is suitable for usage in practical systems.",
"title": ""
},
{
"docid": "e148d17a78b3b8e144bf0db5a218bd97",
"text": "Novel synchronous machines with doubly salient structure and permanent magnets (PMs) in stator yoke have been developed in this paper. The stator is constituted by T-shaped lamination segments sandwiched with circumferentially magnetized PMs with alternate polarity, while the rotor is identical to that of switched reluctance machines (SRMs). The stator pole number is multiples of six, which is the number of stator poles in a unit machine. Similar to variable flux reluctance machines (VFRMs), the rotor pole numbers in the novel machines are not restricted to those in SRMs. When the stator and rotor pole numbers differ by one (or the number of multiples), the novel synchronous machines show sinusoidal bipolar phase flux linkage and back electromotive force (EMF), which make the machines suitable for brushless ac operation. Moreover, two prototype machines with six-pole stator and five-pole, seven-pole rotors are designed and optimized by 2-D finite element analysis. It shows that, compared with VFRMs, the novel machines can produce ~70 % higher torque density with the same copper loss and machine size. Meanwhile, the proposed machines have negligible reluctance torque due to very low saliency ratio. Experimental results of back EFM, cogging torque, and average torque on the prototypes are provided to validate the analysis.",
"title": ""
},
{
"docid": "a914d26b2086e20a7452f0634574820d",
"text": "In this paper, we provide a semantic foundation for role-related concepts in enterprise modelling. We use a conceptual modelling framework to provide a well-founded underpinning for these concepts. We review a number of enterprise modelling approaches in light of the concepts described. This allows us to understand the various approaches, to contrast them and to identify problems in the definition and/or usage of these concepts.",
"title": ""
},
{
"docid": "60c9355aba12e84461519f28b157c432",
"text": "Successful recurrent models such as long short-term memories (LSTMs) and gated recurrent units (GRUs) use ad hoc gating mechanisms. Empirically these models have been found to improve the learning of medium to long term temporal dependencies and to help with vanishing gradient issues. We prove that learnable gates in a recurrent model formally provide quasiinvariance to general time transformations in the input data. We recover part of the LSTM architecture from a simple axiomatic approach. This result leads to a new way of initializing gate biases in LSTMs and GRUs. Experimentally, this new chrono initialization is shown to greatly improve learning of long term dependencies, with minimal implementation effort. Recurrent neural networks (e.g. (Jaeger, 2002)) are a standard machine learning tool to model and represent temporal data; mathematically they amount to learning the parameters of a parameterized dynamical system so that its behavior optimizes some criterion, such as the prediction of the next data in a sequence. Handling long term dependencies in temporal data has been a classical issue in the learning of recurrent networks. Indeed, stability of a dynamical system comes at the price of exponential decay of the gradient signals used for learning, a dilemma known as the vanishing gradient problem (Pascanu et al., 2012; Hochreiter, 1991; Bengio et al., 1994). This has led to the introduction of recurrent models specifically engineered to help with such phenomena. Use of feedback connections (Hochreiter & Schmidhuber, 1997) and control of feedback weights through gating mechanisms (Gers et al., 1999) partly alleviate the vanishing gradient problem. The resulting architectures, namely long short-term memories (LSTMs (Hochreiter & Schmidhuber, 1997; Gers et al., 1999)) and gated recurrent units (GRUs (Chung et al., 2014)) have become a standard for treating sequential data. Using orthogonal weight matrices is another proposed solution to the vanishing gradient problem, thoroughly studied in (Saxe et al., 2013; Le et al., 2015; Arjovsky et al., 2016; Wisdom et al., 2016; Henaff et al., 2016). This comes with either computational overhead, or limitation in representational power. Furthermore, restricting the weight matrices to the set of orthogonal matrices makes forgetting of useless information difficult. The contribution of this paper is threefold: ∙ We show that postulating invariance to time transformations in the data (taking invariance to time warping as an axiom) necessarily leads to a gate-like mechanism in recurrent models (Section 1). This provides a clean derivation of part of the popular LSTM and GRU architectures from first principles. In this framework, gate values appear as time contraction or time dilation coefficients, similar in spirit to the notion of time constant introduced in (Mozer, 1992). ∙ From these insights, we provide precise prescriptions on how to initialize gate biases (Section 2) depending on the range of time dependencies to be captured. It has previously been advocated that setting the bias of the forget gate of LSTMs to 1 or 2 provides overall good performance (Gers & Schmidhuber, 2000; Jozefowicz et al., 2015). The viewpoint here 1 ar X iv :1 80 4. 11 18 8v 1 [ cs .L G ] 2 3 M ar 2 01 8 Published as a conference paper at ICLR 2018 explains why this is reasonable in most cases, when facing medium term dependencies, but fails when facing long to very long term dependencies. 
∙ We test the empirical benefits of the new initialization on both synthetic and real world data (Section 3). We observe substantial improvement with long-term dependencies, and slight gains or no change when short-term dependencies dominate. 1 FROM TIME WARPING INVARIANCE TO GATING When tackling sequential learning problems, being resilient to a change in time scale is crucial. Lack of resilience to time rescaling implies that we can make a problem arbitrarily difficult simply by changing the unit of measurement of time. Ordinary recurrent neural networks are highly nonresilient to time rescaling: a task can be rendered impossible for an ordinary recurrent neural network to learn, simply by inserting a fixed, small number of zeros or whitespaces between all elements of the input sequence. An explanation is that, with a given number of recurrent units, the class of functions representable by an ordinary recurrent network is not invariant to time rescaling. Ideally, one would like a recurrent model to be able to learn from time-warped input data x(c(t)) as easily as it learns from data x(t), at least if the time warping c(t) is not overly complex. The change of time c may represent not only time rescalings, but, for instance, accelerations or decelerations of the phenomena in the input data. We call a class of models invariant to time warping, if for any model in the class with input data x(t), and for any time warping c(t), there is another (or the same) model in the class that behaves on data x(c(t)) in the same way the original model behaves on x(t). (In practice, this will only be possible if the warping c is not too complex.) We will show that this is deeply linked to having gating mechanisms in the model. Invariance to time rescaling Let us first discuss the simpler case of a linear time rescaling. Formally, this is a linear transformation of time, that is c : R+ −→ R+ t ↦−→ αt (1) with α > 0. For instance, receiving a new input character every 10 time steps only, would correspond to α = 0.1. Studying time transformations is easier in the continuous-time setting. The discrete time equation of a basic recurrent network with hidden state ht, ht+1 = tanh (Wx xt +Wh ht + b) (2) can be seen as a time-discretized version of the continuous-time equation1 dh(t) dt = tanh (︀ Wx x(t) +Wh h(t) + b )︀ − h(t) (3) namely, (2) is the Taylor expansion h(t+ δt) ≈ h(t) + δt dh(t) dt with discretization step δt = 1. Now imagine that we want to describe time-rescaled data x(αt) with a model from the same class. Substituting t← c(t) = αt, x(t)← x(αt) and h(t)← h(αt) and rewriting (3) in terms of the new variables, the time-rescaled model satisfies2 dh(t) dt = α tanh (︀ Wx x(t) +Wh h(t) + b )︀ − αh(t). (4) However, when translated back to a discrete-time model, this no longer describes an ordinary RNN but a leaky RNN (Jaeger, 2002, §8.1). Indeed, taking the Taylor expansion of h(t+ δt) with δt = 1 in (4) yields the recurrent model ht+1 = α tanh (Wx xt +Wh ht + b) + (1− α)ht (5) We will use indices ht for discrete time and brackets h(t) for continuous time. More precisely, introduce a new time variable T and set the model and data with variable T to H(T ) := h(c(T )) and X(T ) := x(c(T )). Then compute dH(T ) dT . Then rename H to h, X to x and T to t to match the original notation.",
"title": ""
},
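The bias prescription this passage refers to (chrono initialization) is, in the published version of this work, to draw a characteristic timescale T per unit up to the maximum expected dependency length, set the forget-gate bias to log(T) and the input-gate bias to −log(T). The sketch below applies that recipe to a PyTorch LSTM; the value of T_max is an assumed hyperparameter.

```python
import torch
import torch.nn as nn

def chrono_init(lstm: nn.LSTM, t_max: float):
    """Chrono initialization of LSTM gate biases (forget = log T, input = -log T)."""
    hidden = lstm.hidden_size
    with torch.no_grad():
        for name, param in lstm.named_parameters():
            if "bias" in name:
                param.zero_()
        # PyTorch packs the biases as [input, forget, cell, output] blocks.
        t = torch.empty(hidden).uniform_(1.0, t_max - 1.0)
        lstm.bias_ih_l0[hidden:2 * hidden] = torch.log(t)    # forget gate
        lstm.bias_ih_l0[:hidden] = -torch.log(t)             # input gate

lstm = nn.LSTM(input_size=16, hidden_size=64, batch_first=True)
chrono_init(lstm, t_max=100.0)
```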
{
"docid": "bfc349d95143237cc1cf55f77cb2044f",
"text": "Additive manufacturing, commonly referred to as 3D printing, is a technology that builds three-dimensional structures and components layer by layer. Bioprinting is the use of 3D printing technology to fabricate tissue constructs for regenerative medicine from cell-laden bio-inks. 3D printing and bioprinting have huge potential in revolutionizing the field of tissue engineering and regenerative medicine. This paper reviews the application of 3D printing and bioprinting in the field of pediatrics.",
"title": ""
},
{
"docid": "44e3ca0f64566978c3e0d0baeaa93543",
"text": "Many applications of fast Fourier transforms (FFT’s), such as computer tomography, geophysical signal processing, high-resolution imaging radars, and prediction filters, require high-precision output. An error analysis reveals that the usual method of fixed-point computation of FFT’s of vectors of length2 leads to an average loss of/2 bits of precision. This phenomenon, often referred to as computational noise, causes major problems for arithmetic units with limited precision which are often used for real-time applications. Several researchers have noted that calculation of FFT’s with algebraic integers avoids computational noise entirely, see, e.g., [1]. We will combine a new algorithm for approximating complex numbers by cyclotomic integers with Chinese remaindering strategies to give an efficient algorithm to compute -bit precision FFT’s of length . More precisely, we will approximate complex numbers by cyclotomic integers in [ 2 2 ] whose coefficients, when expressed as polynomials in 2 2 , are bounded in absolute value by some integer . For fixed our algorithm runs in time (log( )), and produces an approximation with worst case error of (1 2 ). We will prove that this algorithm has optimal worst case error by proving a corresponding lower bound on the worst case error of any approximation algorithm for this task. The main tool for designing the algorithms is the use of the cyclotomic units, a subgroup of finite index in the unit group of the cyclotomic field. First implementations of our algorithms indicate that they are fast enough to be used for the design of low-cost high-speed/highprecision FFT chips.",
"title": ""
},
{
"docid": "aa1a97f8f6f9f1c2627f63e1ec13e8cf",
"text": "In this paper, we review recent emerging theoretical and technological advances of artificial intelligence (AI) in the big data settings. We conclude that integrating data-driven machine learning with human knowledge (common priors or implicit intuitions) can effectively lead to explainable, robust, and general AI, as follows: from shallow computation to deep neural reasoning; from merely data-driven model to data-driven with structured logic rules models; from task-oriented (domain-specific) intelligence (adherence to explicit instructions) to artificial general intelligence in a general context (the capability to learn from experience). Motivated by such endeavors, the next generation of AI, namely AI 2.0, is positioned to reinvent computing itself, to transform big data into structured knowledge, and to enable better decision-making for our society.",
"title": ""
},
{
"docid": "90ba548ae91dbd94ea547a372422181f",
"text": "The hypothesis that Attention-Deficit/Hyperactivity Disorder (ADHD) reflects a primary inhibitory executive function deficit has spurred a substantial literature. However, empirical findings and methodological issues challenge the etiologic primacy of inhibitory and executive deficits in ADHD. Based on accumulating evidence of increased intra-individual variability in ADHD, we reconsider executive dysfunction in light of distinctions between 'hot' and 'cool' executive function measures. We propose an integrative model that incorporates new neuroanatomical findings and emphasizes the interactions between parallel processing pathways as potential loci for dysfunction. Such a reconceptualization provides a means to transcend the limits of current models of executive dysfunction in ADHD and suggests a plan for future research on cognition grounded in neurophysiological and developmental considerations.",
"title": ""
},
{
"docid": "453d5d826e0292245f8fa12ec564c719",
"text": "Work with patient H.M., beginning in the 1950s, established key principles about the organization of memory that inspired decades of experimental work. Since H.M., the study of human memory and its disorders has continued to yield new insights and to improve understanding of the structure and organization of memory. Here we review this work with emphasis on the neuroanatomy of medial temporal lobe and diencephalic structures important for memory, multiple memory systems, visual perception, immediate memory, memory consolidation, the locus of long-term memory storage, the concepts of recollection and familiarity, and the question of how different medial temporal lobe structures may contribute differently to memory functions.",
"title": ""
},
{
"docid": "1ceb1718fe3200853204d795c80481ab",
"text": "Open-circuit-voltage (OCV) data is widely used for characterizing battery properties under different conditions. It contains important information that can help to identify battery state-of-charge (SOC) and state-of-health (SOH). While various OCV models have been developed for battery SOC estimation, few have been designed for SOH monitoring. In this paper, we propose a unified OCV model that can be applied for both SOC estimation and SOH monitoring. Improvements in SOC estimation using the new model compared to other existing models are demonstrated. Moreover, it is shown that the proposed OCV model can be used to perform battery SOH monitoring as it effectively captures aging information based on incremental capacity analysis (ICA). Parametric analysis and model complexity reduction are also addressed. Experimental data is used to illustrate the effectiveness of the model and its simplified version in the application context of SOC estimation and SOH monitoring. 2014 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "574b980883ffb73dd7cddf62f627c6b1",
"text": "Getting computers to understand and process audio recordings in terms of their musical content is a difficult challenge. We describe a method in which general, polyphonic audio recordings of music can be aligned to symbolic score information in standard MIDI files. Because of the difficulties of polyphonic transcription, we perform matching directly on acoustic features that we extract from MIDI and audio. Polyphonic audio matching can be used for polyphonic score following, building intelligent editors that understand the content of recorded audio, and the analysis of expressive performance.",
"title": ""
},
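The acoustic-feature matching described in that abstract is typically carried out with dynamic time warping over a frame-wise cost matrix, for example between chroma vectors computed from the audio and from a rendering of the MIDI file. The sketch below is a plain DTW on two hypothetical feature sequences; it is a generic illustration, not the specific features or matcher used in that work.

```python
import numpy as np

def dtw_path(cost):
    """Classic DTW over an (n, m) frame-to-frame cost matrix; returns the warping path."""
    n, m = cost.shape
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            acc[i, j] = cost[i - 1, j - 1] + min(acc[i - 1, j], acc[i, j - 1], acc[i - 1, j - 1])
    # Backtrack from the end to recover the alignment.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([acc[i - 1, j - 1], acc[i - 1, j], acc[i, j - 1]])
        i, j = (i - 1, j - 1) if step == 0 else (i - 1, j) if step == 1 else (i, j - 1)
    return path[::-1]

rng = np.random.default_rng(0)
audio_feats = rng.random((40, 12))      # hypothetical chroma frames from audio
midi_feats = rng.random((55, 12))       # hypothetical chroma frames rendered from MIDI
cost = np.linalg.norm(audio_feats[:, None, :] - midi_feats[None, :, :], axis=-1)
print(len(dtw_path(cost)))
```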
{
"docid": "170e2b0f15d9485bb3c00026c6c384a8",
"text": "Chatbots are a rapidly expanding application of dialogue systems with companies switching to bot services for customer support, and new applications for users interested in casual conversation. One style of casual conversation is argument; many people love nothing more than a good argument. Moreover, there are a number of existing corpora of argumentative dialogues, annotated for agreement and disagreement, stance, sarcasm and argument quality. This paper introduces Debbie, a novel arguing bot, that selects arguments from conversational corpora, and aims to use them appropriately in context. We present an initial working prototype of Debbie, with some preliminary evaluation and describe future work.",
"title": ""
},
{
"docid": "f8c4fd23f163c0a604569b5ecf4bdefd",
"text": "The goal of interactive machine learning is to help scientists and engineers exploit more specialized data from within their deployed environment in less time, with greater accuracy and fewer costs. A basic introduction to the main components is provided here, untangling the many ideas that must be combined to produce practical interactive learning systems. This article also describes recent developments in machine learning that have significantly advanced the theoretical and practical foundations for the next generation of interactive tools.",
"title": ""
},
{
"docid": "a3c011d846fed4f910cd3b112767ccc1",
"text": "Tooth morphometry is known to be influenced by cultural, environmental and racial factors. Tooth size standards can be used in age and sex determination. One hundred models (50 males & 50 females) of normal occlusion were evaluated and significant correlations (p<0.001) were found to exist between the combined maxillary incisor widths and the maxillary intermolar and interpremolar arch widths. The study establishes the morphometric criterion for premolar and molar indices and quantifies the existence of a statistically significant sexual dimorphism in arch widths (p<0.02). INTRODUCTION Teeth are an excellent material in living and non-living populations for anthropological, genetic, odontologic and forensic investigations 1 .Their morphometry is known to be influenced by cultural, environmental and racial factors. The variations in tooth form are a common occurrence & these can be studied by measurements. Out of the two proportionswidth and length, the former is considered to be more important 2 . Tooth size standards can be used in age and sex determination 3 . Whenever it is possible to predict the sex, identification is simplified because then only missing persons of one sex need to be considered. In this sense identification of sex takes precedence over age 4 . Various features like tooth morphology and crown size are characteristic for males and females 5 .The present study on the maxillary arch takes into account the premolar arch width, molar arch width and the combined width of the maxillary central incisors in both the sexes. Pont's established constant ratio's between tooth sizes and arch widths in French population which came to be known as premolar and molar indices 6 .In the ideal dental arch he concluded that the ratio of combined incisor width to transverse arch width was .80 in the premolar area and .64 in the molar area. There has been a recent resurgence of interest in the clinical use of premolar and molar indices for establishing dental arch development objectives 7 . The present study was conducted to ascertain whether or not Pont's Index can be used reliably on north Indians and to establish the norms for the same. MATERIAL AND METHODS SELECTION CRITERIA One hundred subjects, fifty males and fifty females in the age group of 17-21 years were selected for the study as attrition is considered to be minimal for this age group. The study was conducted on the students of Sudha Rustagi College of Dental Sciences & Research, Faridabad, Haryana. INCLUSION CRITERIA Healthy state of gingival and peridontium.",
"title": ""
},
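Pont's premolar and molar indices mentioned in that passage are simple ratios, so the expected arch widths for a given combined incisor width follow directly from the constants 0.80 and 0.64 quoted above. The helper below encodes that arithmetic; the example measurement is invented, and the function name is purely illustrative.

```python
def pont_predicted_widths(combined_incisor_width_mm):
    """Predicted maxillary arch widths from Pont's classical indices."""
    premolar_width = combined_incisor_width_mm / 0.80   # incisor width / premolar arch width = 0.80
    molar_width = combined_incisor_width_mm / 0.64      # incisor width / molar arch width = 0.64
    return premolar_width, molar_width

si = 31.2                                               # hypothetical sum of incisor widths (mm)
pre, mol = pont_predicted_widths(si)
print(round(pre, 1), round(mol, 1))                     # expected interpremolar / intermolar widths
```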
{
"docid": "bf85be55fefe866d6cf35161bfa08836",
"text": "Today, video distribution platforms use adaptive video streaming to deliver the maximum Quality of Experience to a wide range of devices connected to the Internet through different access networks. Among the techniques employed to implement video adaptivity, the stream-switching over HTTP is getting a wide acceptance due to its deployment and implementation simplicity. Recently it has been shown that the client-side algorithms proposed so far generate an on-off traffic pattern that may lead to unfairness and underutilization when many video flows share a bottleneck. In this paper we propose ELASTIC (fEedback Linearization Adaptive STreamIng Controller), a client-side controller designed using feedback control theory that does not generate an on-off traffic pattern. By employing a controlled testbed, allowing bandwidth capacity and delays to be set, we compare ELASTIC with other client-side controllers proposed in the literature. In particular, we have checked to what extent the considered algorithms are able to: 1) fully utilize the bottleneck, 2) fairly share the bottleneck, 3) obtain a fair share when TCP greedy flows share the bottleneck with video flows. The obtained results show that ELASTIC achieves a very high fairness and is able to get the fair share when coexisting with TCP greedy flows.",
"title": ""
},
{
"docid": "7490197babcd735c48e1c42af03c8473",
"text": "Clustering is one of the most fundamental tasks in data analysis and machine learning. It is central to many data-driven applications that aim to separate the data into groups with similar patterns. Moreover, clustering is a complex procedure that is affected significantly by the choice of the data representation method. Recent research has demonstrated encouraging clustering results by learning effectively these representations. In most of these works a deep auto-encoder is initially pre-trained to minimize a reconstruction loss, and then jointly optimized with clustering centroids in order to improve the clustering objective. Those works focus mainly on the clustering phase of the procedure, while not utilizing the potential benefit out of the initial phase. In this paper we propose to optimize an auto-encoder with respect to a discriminative pairwise loss function during the auto-encoder pre-training phase. We demonstrate the high accuracy obtained by the proposed method as well as its rapid convergence (e.g. reaching above 92% accuracy on MNIST during the pre-training phase, in less than 50 epochs), even with small networks.",
"title": ""
},
{
"docid": "ec3661f09e857568d32c6452bd8c4445",
"text": "User identification and differentiation have implications in many application domains, including security, personalization, and co-located multiuser systems. In response, dozens of approaches have been developed, from fingerprint and retinal scans, to hand gestures and RFID tags. In this work, we propose CapAuth, a technique that uses existing, low-level touchscreen data, combined with machine learning classifiers, to provide real-time authentication and even identification of users. As a proof-of-concept, we ran our software on an off-the-shelf Nexus 5 smartphone. Our user study demonstrates twenty-participant authentication accuracies of 99.6%. For twenty-user identification, our software achieved 94.0% accuracy and 98.2% on groups of four, simulating family use.",
"title": ""
},
{
"docid": "5f89aac70e93b9fcf4c37d119770f747",
"text": "Partial differential equations (PDEs) play a prominent role in many disciplines of science and engineering. PDEs are commonly derived based on empirical observations. However, with the rapid development of sensors, computational power, and data storage in the past decade, huge quantities of data can be easily collected and efficiently stored. Such vast quantity of data offers new opportunities for data-driven discovery of physical laws. Inspired by the latest development of neural network designs in deep learning, we propose a new feed-forward deep network, called PDENet, to fulfill two objectives at the same time: to accurately predict dynamics of complex systems and to uncover the underlying hidden PDE models. Comparing with existing approaches, our approach has the most flexibility by learning both differential operators and the nonlinear response function of the underlying PDE model. A special feature of the proposed PDE-Net is that all filters are properly constrained, which enables us to easily identify the governing PDE models while still maintaining the expressive and predictive power of the network. These constrains are carefully designed by fully exploiting the relation between the orders of differential operators and the orders of sum rules of filters (an important concept originated from wavelet theory). Numerical experiments show that the PDE-Net has the potential to uncover the hidden PDE of the observed dynamics, and predict the dynamical behavior for a relatively long time, even in a noisy environment. Equal contribution School of Mathematical Sciences, Peking University, Beijing, China Beijing Computational Science Research Center, Beijing, China Beijing International Center for Mathematical Research, Peking University, Beijing, China Center for Data Science, Peking University Laboratory for Biomedical Image Analysis, Beijing Institute of Big Data Research. Correspondence to: Bin Dong <dongbin@math.pku.edu.cn>. Proceedings of the 35 th International Conference on Machine Learning, Stockholm, Sweden, PMLR 80, 2018. Copyright 2018 by the author(s).",
"title": ""
},
{
"docid": "1168c9e6ce258851b15b7e689f60e218",
"text": "Modern deep learning architectures produce highly accurate results on many challenging semantic segmentation datasets. State-of-the-art methods are, however, not directly transferable to real-time applications or embedded devices, since naïve adaptation of such systems to reduce computational cost (speed, memory and energy) causes a significant drop in accuracy. We propose ContextNet, a new deep neural network architecture which builds on factorized convolution, network compression and pyramid representation to produce competitive semantic segmentation in real-time with low memory requirement. ContextNet combines a deep network branch at low resolution that captures global context information efficiently with a shallow branch that focuses on highresolution segmentation details. We analyse our network in a thorough ablation study and present results on the Cityscapes dataset, achieving 66.1% accuracy at 18.3 frames per second at full (1024× 2048) resolution (23.2 fps with pipelined computations for streamed data).",
"title": ""
},
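The two-branch design described above (a deep, low-resolution branch for global context fused with a shallow, full-resolution branch for detail) can be sketched compactly. The PyTorch toy model below illustrates the branch-and-fuse pattern under assumed channel widths and a simple additive fusion; it is not a reproduction of the actual ContextNet architecture.

```python
# Toy two-branch segmentation net: deep/low-res context branch + shallow/full-res detail branch.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoBranchSeg(nn.Module):
    def __init__(self, n_classes=19):
        super().__init__()
        # Shallow branch runs on the full-resolution input.
        self.detail = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=1, padding=1), nn.ReLU(inplace=True),
        )
        # Deep branch runs on a 1/4-resolution input for global context.
        self.context = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, stride=1, padding=1), nn.ReLU(inplace=True),
        )
        self.classifier = nn.Conv2d(64, n_classes, 1)

    def forward(self, x):
        detail = self.detail(x)                                   # H/2 x W/2
        small = F.interpolate(x, scale_factor=0.25, mode="bilinear", align_corners=False)
        context = self.context(small)                             # H/16 x W/16
        context = F.interpolate(context, size=detail.shape[2:], mode="bilinear", align_corners=False)
        fused = detail + context                                  # simple additive fusion
        logits = self.classifier(fused)
        return F.interpolate(logits, size=x.shape[2:], mode="bilinear", align_corners=False)

net = TwoBranchSeg()
out = net(torch.randn(1, 3, 256, 512))
print(out.shape)  # torch.Size([1, 19, 256, 512])
```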
{
"docid": "e0696dfe3d01003197516adbabeac67d",
"text": "The incidence of rectal foreign bodies is increasing by the day, though not as common as that of upper gastrointestinal foreign bodies. Various methods for removal of foreign bodies have been reported. Removal during endoscopy using endoscopic devices is simple and safe, but if the foreign body is too large to be removed by this method, other methods are required. We report two cases of rectal foreign body removal by a relatively simple and inexpensive technique. A 42-year-old man with a vibrator in the rectum was admitted due to inability to remove it by himself and various endoscopic methods failed. Finally, the vibrator was removed successfully by using tenaculum forceps under endoscopic assistance. Similarly, a 59-year-old man with a carrot in the rectum was admitted. The carrot was removed easily by using the same method as that in the previous case. The use of tenaculum forceps under endoscopic guidance may be a useful method for removal of rectal foreign bodies.",
"title": ""
}
] |
scidocsrr
|
8609b11d7280df07ab594d71f7496450
|
Weakly Supervised Learning for Whole Slide Lung Cancer Image Classification
|
[
{
"docid": "f1cd96ddd519f35cf3ddc19f84d232cf",
"text": "This paper presents a deep learning approach for automatic detection and visual analysis of invasive ductal carcinoma (IDC) tissue regions in whole slide images (WSI) of breast cancer (BCa). Deep learning approaches are learn-from-data methods involving computational modeling of the learning process. This approach is similar to how human brain works using different interpretation levels or layers of most representative and useful features resulting into a hierarchical learned representation. These methods have been shown to outpace traditional approaches of most challenging problems in several areas such as speech recognition and object detection. Invasive breast cancer detection is a time consuming and challenging task primarily because it involves a pathologist scanning large swathes of benign regions to ultimately identify the areas of malignancy. Precise delineation of IDC in WSI is crucial to the subsequent estimation of grading tumor aggressiveness and predicting patient outcome. DL approaches are particularly adept at handling these types of problems, especially if a large number of samples are available for training, which would also ensure the generalizability of the learned features and classifier. The DL framework in this paper extends a number of convolutional neural networks (CNN) for visual semantic analysis of tumor regions for diagnosis support. The CNN is trained over a large amount of image patches (tissue regions) from WSI to learn a hierarchical part-based representation. The method was evaluated over a WSI dataset from 162 patients diagnosed with IDC. 113 slides were selected for training and 49 slides were held out for independent testing. Ground truth for quantitative evaluation was provided via expert delineation of the region of cancer by an expert pathologist on the digitized slides. The experimental evaluation was designed to measure classifier accuracy in detecting IDC tissue regions in WSI. Our method yielded the best quantitative results for automatic detection of IDC regions in WSI in terms of F-measure and balanced accuracy (71.80%, 84.23%), in comparison with an approach using handcrafted image features (color, texture and edges, nuclear textural and architecture), and a machine learning classifier for invasive tumor classification using a Random Forest. The best performing handcrafted features were fuzzy color histogram (67.53%, 78.74%) and RGB histogram (66.64%, 77.24%). Our results also suggest that at least some of the tissue classification mistakes (false positives and false negatives) were less due to any fundamental problems associated with the approach, than the inherent limitations in obtaining a very highly granular annotation of the diseased area of interest by an expert pathologist.",
"title": ""
},
{
"docid": "ec90e30c0ae657f25600378721b82427",
"text": "We use deep max-pooling convolutional neural networks to detect mitosis in breast histology images. The networks are trained to classify each pixel in the images, using as context a patch centered on the pixel. Simple postprocessing is then applied to the network output. Our approach won the ICPR 2012 mitosis detection competition, outperforming other contestants by a significant margin.",
"title": ""
},
{
"docid": "625b96d21cb9ff05785aa34c98c567ff",
"text": "The number of mitoses per tissue area gives an important aggressiveness indication of the invasive breast carcinoma. However, automatic mitosis detection in histology images remains a challenging problem. Traditional methods either employ hand-crafted features to discriminate mitoses from other cells or construct a pixel-wise classifier to label every pixel in a sliding window way. While the former suffers from the large shape variation of mitoses and the existence of many mimics with similar appearance, the slow speed of the later prohibits its use in clinical practice. In order to overcome these shortcomings, we propose a fast and accurate method to detect mitosis by designing a novel deep cascaded convolutional neural network, which is composed of two components. First, by leveraging the fully convolutional neural network, we propose a coarse retrieval model to identify and locate the candidates of mitosis while preserving a high sensitivity. Based on these candidates, a fine discrimination model utilizing knowledge transferred from cross-domain is developed to further single out mitoses from hard mimics. Our approach outperformed other methods by a large margin in 2014 ICPR MITOS-ATYPIA challenge in terms of detection accuracy. When compared with the state-of-the-art methods on the 2012 ICPR MITOSIS data (a smaller and less challenging dataset), our method achieved comparable or better results with a roughly 60 times faster speed.",
"title": ""
}
] |
[
{
"docid": "a1d9fef7fda8a547df136565afd5a443",
"text": "The authors proposed a circular-polarized array antenna by using hexagonal radiating apertures in the 60 GHz-band. The hexagonal radiating aperture is designed, and the good axial ratio characteristics are achieved in the boresight. We analyze the full structure of the 16×16-element array that combines the 2×2-element subarrays and a 64-way divider. The reflection is less than −14dB over 4.9% bandwidth where the axial ratio is less than 2.5dB. High antenna efficiency of 88.7% is obtained at 61.5GHz with the antenna gain of 33.3dBi including losses. The 1dB-down gain bandwidth is 6.8%.",
"title": ""
},
{
"docid": "6a757e3bb48be08ce6a56a08a3fc84d4",
"text": "The completion of a high-quality, comprehensive sequence of the human genome, in this fiftieth anniversary year of the discovery of the double-helical structure of DNA, is a landmark event. The genomic era is now a reality. In contemplating a vision for the future of genomics research,it is appropriate to consider the remarkable path that has brought us here. The rollfold (Figure 1) shows a timeline of landmark accomplishments in genetics and genomics, beginning with Gregor Mendel’s discovery of the laws of heredity and their rediscovery in the early days of the twentieth century.Recognition of DNA as the hereditary material, determination of its structure, elucidation of the genetic code, development of recombinant DNA technologies, and establishment of increasingly automatable methods for DNA sequencing set the stage for the Human Genome Project (HGP) to begin in 1990 (see also www.nature.com/nature/DNA50). Thanks to the vision of the original planners, and the creativity and determination of a legion of talented scientists who decided to make this project their overarching focus, all of the initial objectives of the HGP have now been achieved at least two years ahead of expectation, and a revolution in biological research has begun. The project’s new research strategies and experimental technologies have generated a steady stream of ever-larger and more complex genomic data sets that have poured into public databases and have transformed the study of virtually all life processes. The genomic approach of technology development and large-scale generation of community resource data sets has introduced an important new dimension into biological and biomedical research. Interwoven advances in genetics, comparative genomics, highthroughput biochemistry and bioinformatics",
"title": ""
},
{
"docid": "39cf15285321c7d56904c8c59b3e1373",
"text": "J. Naidoo1*, D. B. Page2, B. T. Li3, L. C. Connell3, K. Schindler4, M. E. Lacouture5,6, M. A. Postow3,6 & J. D. Wolchok3,6 Sidney Kimmel Comprehensive Cancer Center, Johns Hopkins University, Baltimore; Providence Portland Medical Center and Earl A. Chiles Research Institute, Portland; Department of Medicine and Ludwig Center, Memorial Sloan Kettering Cancer Center, New York, USA; Department of Dermatology, Medical University of Vienna, Vienna, Austria; Dermatology Service, Memorial Sloan Kettering Cancer Center, New York; Department of Medicine, Weill Cornell Medical College, New York, USA",
"title": ""
},
{
"docid": "edfadb9f9812914e7a527022630ad015",
"text": "This department is about building software with security in mind. Since it began in 2004, it has focused on the kinds of activities that constitute a secure development life cycle. As of to day, we're broadening that charter to include all the essential ingredients of a sustained soft ware security initiative. Instead of focusing on one turn of the crank that yields one new piece of software, we'll consider the ongoing organizational commitments necessary to facilitate se cure software development.",
"title": ""
},
{
"docid": "199df544c19711fbee2dd49e60956243",
"text": "Languages vary strikingly in how they encode motion events. In some languages (e.g. English), manner of motion is typically encoded within the verb, while direction of motion information appears in modifiers. In other languages (e.g. Greek), the verb usually encodes the direction of motion, while the manner information is often omitted, or encoded in modifiers. We designed two studies to investigate whether these language-specific patterns affect speakers' reasoning about motion. We compared the performance of English and Greek children and adults (a) in nonlinguistic (memory and categorization) tasks involving motion events, and (b) in their linguistic descriptions of these same motion events. Even though the two linguistic groups differed significantly in terms of their linguistic preferences, their performance in the nonlinguistic tasks was identical. More surprisingly, the linguistic descriptions given by subjects within language also failed to correlate consistently with their memory and categorization performance in the relevant regards. For the domain studied, these results are consistent with the view that conceptual development and organization are largely independent of language-specific labeling practices. The discussion emphasizes that the necessarily sketchy nature of language use assures that it will be at best a crude index of thought.",
"title": ""
},
{
"docid": "71da7722f6ce892261134bd60ca93ab7",
"text": "Semantically annotated data, using markup languages like RDFa and Microdata, has become more and more publicly available in the Web, especially in the area of e-commerce. Thus, a large amount of structured product descriptions are freely available and can be used for various applications, such as product search or recommendation. However, little efforts have been made to analyze the categories of the available product descriptions. Although some products have an explicit category assigned, the categorization schemes vary a lot, as the products originate from thousands of different sites. This heterogeneity makes the use of supervised methods, which have been proposed by most previous works, hard to apply. Therefore, in this paper, we explain how distantly supervised approaches can be used to exploit the heterogeneous category information in order to map the products to set of target categories from an existing product catalogue. Our results show that, even though this task is by far not trivial, we can reach almost 56% accuracy for classifying products into 37 categories.",
"title": ""
},
{
"docid": "ee72a297c05a438a49e86a45b81db17f",
"text": "Screening for cyclodextrin glycosyltransferase (CGTase)-producing alkaliphilic bacteria from samples collected from hyper saline soda lakes (Wadi Natrun Valley, Egypt), resulted in isolation of potent CGTase producing alkaliphilic bacterium, termed NPST-10. 16S rDNA sequence analysis identified the isolate as Amphibacillus sp. CGTase was purified to homogeneity up to 22.1 fold by starch adsorption and anion exchange chromatography with a yield of 44.7%. The purified enzyme was a monomeric protein with an estimated molecular weight of 92 kDa using SDS-PAGE. Catalytic activities of the enzyme were found to be 88.8 U mg(-1) protein, 20.0 U mg(-1) protein and 11.0 U mg(-1) protein for cyclization, coupling and hydrolytic activities, respectively. The enzyme was stable over a wide pH range from pH 5.0 to 11.0, with a maximal activity at pH 8.0. CGTase exhibited activity over a wide temperature range from 45 °C to 70 °C, with maximal activity at 50 °C and was stable at 30 °C to 55 °C for at least 1 h. Thermal stability of the purified enzyme could be significantly improved in the presence of CaCl(2). K(m) and V(max) values were estimated using soluble starch as a substrate to be 1.7 ± 0.15 mg/mL and 100 ± 2.0 μmol/min, respectively. CGTase was significantly inhibited in the presence of Co(2+), Zn(2+), Cu(2+), Hg(2+), Ba(2+), Cd(2+), and 2-mercaptoethanol. To the best of our knowledge, this is the first report of CGTase production by Amphibacillus sp. The achieved high conversion of insoluble raw corn starch into cyclodextrins (67.2%) with production of mainly β-CD (86.4%), makes Amphibacillus sp. NPST-10 desirable for the cyclodextrin production industry.",
"title": ""
},
{
"docid": "a4154317f6bb6af635edb1b2ef012d09",
"text": "The pulp industry in Taiwan discharges tons of wood waste and pulp sludge (i.e., wastewater-derived secondary sludge) per year. The mixture of these two bio-wastes, denoted as wood waste with pulp sludge (WPS), has been commonly converted to organic fertilizers for agriculture application or to soil conditioners. However, due to energy demand, the WPS can be utilized in a beneficial way to mitigate an energy shortage. This study elucidated the performance of applying torrefaction, a bio-waste to energy method, to transform the WPS into solid bio-fuel. Two batches of the tested WPS (i.e., WPS1 and WPS2) were generated from a virgin pulp factory in eastern Taiwan. The WPS1 and WPS2 samples contained a large amount of organics and had high heating values (HHV) on a dry-basis (HHD) of 18.30 and 15.72 MJ/kg, respectively, exhibiting a potential for their use as a solid bio-fuel. However, the wet WPS as received bears high water and volatile matter content and required de-watering, drying, and upgrading. After a 20 min torrefaction time (tT), the HHD of torrefied WPS1 (WPST1) can be enhanced to 27.49 MJ/kg at a torrefaction temperature (TT) of 573 K, while that of torrefied WPS2 (WPST2) increased to 19.74 MJ/kg at a TT of 593 K. The corresponding values of the energy densification ratio of torrefied solid bio-fuels of WPST1 and WPST2 can respectively rise to 1.50 and 1.25 times that of the raw bio-waste. The HHD of WPST1 of 27.49 MJ/kg is within the range of 24–35 MJ/kg for bituminous coal. In addition, the wet-basis HHV of WPST1 with an equilibrium moisture content of 5.91 wt % is 25.87 MJ/kg, which satisfies the Quality D coal specification of the Taiwan Power Co., requiring a value of above 20.92 MJ/kg.",
"title": ""
},
{
"docid": "7dde3558e82d37109d6d4d9b84f14e74",
"text": "Background/Objectives:The contemporary American diet figures centrally in the pathogenesis of numerous chronic diseases– 'diseases of civilization'–such as obesity and diabetes. We investigated in type 2 diabetes whether a diet similar to that consumed by our pre-agricultural hunter-gatherer ancestors ('Paleolithic' type diet) confers health benefits.Subjects/Methods:We performed an outpatient, metabolically controlled diet study in type 2 diabetes patients. We compared the findings in 14 participants consuming a Paleo diet comprising lean meat, fruits, vegetables and nuts, and excluding added salt, and non-Paleolithic-type foods comprising cereal grains, dairy or legumes, with 10 participants on a diet based on recommendations by the American Diabetes Association (ADA) containing moderate salt intake, low-fat dairy, whole grains and legumes. There were three ramp-up diets for 7 days, then 14 days of the test diet. Outcomes included the following: mean arterial blood pressure; 24-h urine electrolytes; hemoglobin A1c and fructosamine levels; insulin resistance by euglycemic hyperinsulinemic clamp and lipid levels.Results:Both groups had improvements in metabolic measures, but the Paleo diet group had greater benefits on glucose control and lipid profiles. Also, on the Paleo diet, the most insulin-resistant subjects had a significant improvement in insulin sensitivity (r=0.40, P=0.02), but no such effect was seen in the most insulin-resistant subjects on the ADA diet (r= 0.39, P=0.3).Conclusions:Even short-term consumption of a Paleolithic-type diet improved glucose control and lipid profiles in people with type 2 diabetes compared with a conventional diet containing moderate salt intake, low-fat dairy, whole grains and legumes.",
"title": ""
},
{
"docid": "4930e09f069eee618e8e0ad100cd5505",
"text": "While constraint-induced movement therapy (CIMT) is one of the most promising techniques for upper limb rehabilitation after stroke, it requires high residual function to start with. Robotic device, on the other hand, can provide intention-driven assistance and is proven capable to complement conventional therapy. However, with many robotic devices focus on more proximal joints like shoulder and elbow, recovery of hand and fingers functions have become a challenge. Here we propose the use of robotic device to assist hand and fingers functions training and we aim to evaluate the potential efficacy of intention-driven robot-assisted fingers training. Participants (6 to 24 months post-stroke) were randomly assigned into two groups: robot-assisted (robot) and non-assisted (control) fingers training groups. Each participant underwent 20-session training. Action Research Arm Test (ARAT) was used as the primary outcome measure, while, Wolf Motor Function Test (WMFT) score, its functional tasks (WMFT-FT) sub-score, Fugl-Meyer Assessment (FMA), its shoulder and elbow (FMA-SE) sub-score, and finger individuation index (FII) served as secondary outcome measures. Nineteen patients completed the 20-session training (Trial Registration: HKClinicalTrials.com HKCTR-1554); eighteen of them came back for a 6-month follow-up. Significant improvements (p < 0.05) were found in the clinical scores for both robot and control group after training. However, only robot group maintained the significant difference in the ARAT and FMA-SE six months after the training. The WMFT-FT score and time post-training improvements of robot group were significantly better than those of the control group. This study showed the potential efficacy of robot-assisted fingers training for hand and fingers rehabilitation and its feasibility to facilitate early rehabilitation for a wider population of stroke survivors; and hence, can be used to complement CIMT.",
"title": ""
},
{
"docid": "7f82573c3fe0a7195ddd138b1425b80c",
"text": "Introduction: Holoprosencephaly with unfused thalami is a rare malformation involving the forebrain and the face. The epidemiology of the disease is poorly known due to paucity of population based studies. Case Report: A 32-year-old grand multipara at 27th week gestation found on routine ultrasound examination to have a single live fetus with the fetal head showing dilated single cerebral ventricle, with no evidence of anterior midline echo (falx, inter hemispheric cistern and septum pellucidum). The thalami appear relatively small but not fused with a thin midline linear echoic septum separating them. Two subsequent sonograms at 30th and 33rd weeks of pregnancy, including coronal sonograms of the fetal head, correctly identified a dilated single cerebral ventricle. There was no history of diabetes mellitus, hypertension or previously affected child. Pregnancy termination was done on the couple’s request, because of the poor fetal prognosis. Postmortem clinical examination revealed a female newborn with normal body structure. The couple declined consent for autopsy. Conclusion: Alobar holoprosencephaly with unfused thalami is a rare and severe variety of holoprosencephaly with poorly understood aetiology and poor prognosis. (This page in not part of the published article.) International Journal of Case Reports and Images, Vol. 5 No. 11, November 2014. ISSN – [0976-3198] Int J Case Rep Images 2014;5(11):756–760. www.ijcasereportsandimages.com Ibrahim et al. 756 CASE REPORT OPEN ACCESS Alobar holoprosencephaly with unfused thalami: A rare variety of holoprosencephaly Abubakar A., Sanusi Mohammed Ibrahim, Ahidjo A., Tahir A.",
"title": ""
},
{
"docid": "3d4707bd4b113569f07c0b3aa95364d3",
"text": "This paper presents a new solution for choosing the K parameter in the k-nearest neighbor (KNN) algorithm, the solution depending on the idea of ensemble learning, in which a weak KNN classifier is used each time with a different K, starting from one to the square root of the size of the training set. The results of the weak classifiers are combined using the weighted sum rule. The proposed solution was tested and compared to other solutions using a group of experiments in real life problems. The experimental results show that the proposed classifier outperforms the traditional KNN classifier that uses a different number of neighbors, is competitive with other classifiers, and is a promising classifier with strong potential for a wide range of applications. KeywordsKNN; supervised learning; machine learning; ensemble learning; nearest neighbor;",
"title": ""
},
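A minimal sketch of the idea in the abstract above, training weak k-NN classifiers for k = 1 … √n and combining their votes with a weighted sum rule, is given below in Python with scikit-learn. The inverse-rank weighting used here is one plausible choice; the paper's exact weighting scheme is not specified in the abstract.

```python
# Ensemble of weak k-NN classifiers with k = 1 .. sqrt(n_train),
# combined with a weighted sum (soft-vote) rule.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

k_max = int(np.sqrt(len(X_tr)))
classes = np.unique(y_tr)
scores = np.zeros((len(X_te), len(classes)))

for k in range(1, k_max + 1):
    clf = KNeighborsClassifier(n_neighbors=k).fit(X_tr, y_tr)
    # Assumed weighting: smaller k gets a larger weight (1/k);
    # the original paper's weighting may differ.
    scores += (1.0 / k) * clf.predict_proba(X_te)

y_pred = classes[np.argmax(scores, axis=1)]
print("ensemble accuracy:", accuracy_score(y_te, y_pred))
```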
{
"docid": "ddb0a3bc0a9367a592403d0fc0cec0a5",
"text": "Fluorescence microscopy is a powerful quantitative tool for exploring regulatory networks in single cells. However, the number of molecular species that can be measured simultaneously is limited by the spectral overlap between fluorophores. Here we demonstrate a simple but general strategy to drastically increase the capacity for multiplex detection of molecules in single cells by using optical super-resolution microscopy (SRM) and combinatorial labeling. As a proof of principle, we labeled mRNAs with unique combinations of fluorophores using fluorescence in situ hybridization (FISH), and resolved the sequences and combinations of fluorophores with SRM. We measured mRNA levels of 32 genes simultaneously in single Saccharomyces cerevisiae cells. These experiments demonstrate that combinatorial labeling and super-resolution imaging of single cells is a natural approach to bring systems biology into single cells.",
"title": ""
},
{
"docid": "a4b6b6a8ea8fc48d90576f641febf5fb",
"text": "The recent introduction of the diagnostic category developmental coordination disorder (DCD) (American Psychiatric Association [APA], 1987, 1994), has generated confusion among researchers and clinicians in many fields, including occupational therapy. Although the diagnostic criteria appear to be similar to those used to define clumsy children, children with developmental dyspraxia, or children with sensory integrative dysfunction, we are left with the question: Are children who receive the diagnosis of DCD the same as those who receive the other diagnoses, a subgroup, or an entirely distinct group of children? This article will examine the theoretical and empirical literature and use the results to support the thesis that these terms are not interchangeable and yet are not being used in the literature in a way that clearly defines each subgroup of children. Clear definitions and characteristic features need to be identified and associated with each term to guide occupational therapy assessment and intervention and clinical research.",
"title": ""
},
{
"docid": "19f1f1156ca9464759169dd2d4005bf6",
"text": "We first consider the problem of partitioning the edges of a graph ~ into bipartite cliques such that the total order of the cliques is minimized, where the order of a clique is the number of vertices in it. It is shown that the problem is NP-complete. We then prove the existence of a partition of small total order in a sufficiently dense graph and devise an efilcient algorithm to compute such a partition. It turns out that our algorithm exhibits a trade-off between the total order of the partition and the running time. Next, we define the notion of a compression of a graph ~ and use the result on graph partitioning to efficiently compute an optimal compression for graphs of a given size. An interesting application of the graph compression result arises from the fact that several graph algorithms can be adapted to work with the compressed rep~esentation of the input graph, thereby improving the bound on their running times particularly on dense graphs. This makes use of the trade-off result we obtain from our partitioning algorithm. The algorithms analyzed include those for matchings, vertex connectivity, edge connectivity and shortest paths. In each case, we improve upon the running times of the best-known algorithms for these problems.",
"title": ""
},
{
"docid": "3988a78bd37adaed9dc8af87c7e1266b",
"text": "Current R&D project was the development of a software platform designed to be an advanced research testbed for the prototyping of Haskell based novel technologies in Cryo-EM Methodologies. Focused upon software architecture concepts and frameworks involving Haskell image processing libraries. Cryo-EM is an important tool to probe nano-bio systems.A number of hi-tech firms are implementing BIG-DATA analysis using Haskell especially in the domains of Pharma,Bio-informatics etc. Hence current research paper is one of the pioneering attempts made by the author to encourage advanced data analysis in the Cryo-EM domain to probe important aspects of nano-bio applications.",
"title": ""
},
{
"docid": "d3b5569b268a78ad8a1ffb6c903758d3",
"text": "Can robots in classroom reshape K-12 STEM education, and foster new ways of learning? To sketch an answer, this article reviews, side-by-side, existing literature on robot-based learning activities featuring mathematics and physics (purposefully putting aside the well-studied field of “robots to teach robotics”) and existing robot platforms and toolkits suited for classroom environment (in terms of cost, ease of use, orchestration load for the teacher, etc.). Our survey suggests that the use of robots in classroom has indeed moved from purely technology to education, to encompass new didactic fields. We however identified several shortcomings, in terms of robotic platforms and teaching environments, that contribute to the limited presence of robotics in existing curricula; the lack of specific teacher training being likely pivotal. Finally, we propose an educational framework merging the tangibility of robots with the advanced visibility of augmented reality.",
"title": ""
},
{
"docid": "ecc01f7058fa872f67cb247e79a06e6b",
"text": "The collection and storage of fingerprint profiles and DNA samples in the field of forensic science for nonviolent crimes is highly controversial. While biometric techniques such as fingerprinting have been used in law enforcement since the early 1900s, DNA presents a more invasive and contentious technique as most sampling is of an intimate nature (e.g. buccal swab). A fingerprint is a pattern residing on the surface of the skin while a DNA sample needs to be extracted in the vast majority of cases (e.g. at times extraction even implying the breaking of the skin). This paper aims to balance the need to collect DNA samples where direct evidence is lacking in violent crimes, versus the systematic collection of DNA from citizens who have committed acts such as petty crimes. The legal, ethical and social issues surrounding the proliferation of DNA collection and storage are explored, with a view to outlining the threats that such a regime may pose to citizens in the not-to-distant future, especially persons belonging to ethnic minority groups.",
"title": ""
},
{
"docid": "945dea6576c6131fc33cd14e5a2a0be8",
"text": "■ This article recounts the development of radar signal processing at Lincoln Laboratory. The Laboratory’s significant efforts in this field were initially driven by the need to provide detected and processed signals for air and ballistic missile defense systems. The first processing work was on the Semi-Automatic Ground Environment (SAGE) air-defense system, which led to algorithms and techniques for detection of aircraft in the presence of clutter. This work was quickly followed by processing efforts in ballistic missile defense, first in surface-acoustic-wave technology, in concurrence with the initiation of radar measurements at the Kwajalein Missile Range, and then by exploitation of the newly evolving technology of digital signal processing, which led to important contributions for ballistic missile defense and Federal Aviation Administration applications. More recently, the Laboratory has pursued the computationally challenging application of adaptive processing for the suppression of jamming and clutter signals. This article discusses several important programs in these areas.",
"title": ""
},
{
"docid": "8a31704d12d042618dd9e69f0aebd813",
"text": "a r t i c l e i n f o Keywords: Antisocial personality disorder Psychopathy Amygdala Orbitofrontal cortex Monoamine oxidase SNAP proteins Psychopathy is perhaps one of the most misused terms in the American public, which is in no small part due to our obsession with those who have no conscience, and our boldness to try and profile others with this disorder. Here, I present how psychopathy is seen today, before discussing the classification of psychopathy. I also explore the neurological differences in the brains of those with psychopathy, before finally taking a look at genetic risk factors. I conclude by raising some questions about potential treatment.",
"title": ""
}
] |
scidocsrr
|
4cb70dbe54b21485773023fd942ae7de
|
Service-Dominant Strategic Sourcing: Value Creation Versus Cost Saving
|
[
{
"docid": "dd62fd669d40571cc11d64789314dba1",
"text": "It took the author 30 years to develop the Viable System Model, which sets out to explain how systems are viable – that is, capable of independent existence. He wanted to elucidate the laws of viability in order to facilitate the management task, and did so in a stream of papers and three (of his ten) books. Much misunderstanding about the VSM and its use seems to exist; especially its methodological foundations have been largely forgotten, while its major results have hardly been noted. This paper reflects on the history, nature and present status of the VSM, without seeking once again to expound the model in detail or to demonstrate its validity. It does, however, provide a synopsis, present the methodology and confront some highly contentious issues about both the managerial and scientific paradigms.",
"title": ""
}
] |
[
{
"docid": "9b0f286b03b3d81942747a98ac0e8817",
"text": "Automated recommendations for next tracks to listen to or to include in a playlist are a common feature on modern music platforms. Correspondingly, a variety of algorithmic approaches for determining tracks to recommend have been proposed in academic research. The most sophisticated among them are often based on conceptually complex learning techniques which can also require substantial computational resources or special-purpose hardware like GPUs. Recent research, however, showed that conceptually more simple techniques, e.g., based on nearest-neighbor schemes, can represent a viable alternative to such techniques in practice.\n In this paper, we describe a hybrid technique for next-track recommendation, which was evaluated in the context of the ACM RecSys 2018 Challenge. A combination of nearest-neighbor techniques, a standard matrix factorization algorithm, and a small set of heuristics led our team KAENEN to the 3rd place in the \"creative\" track and the 7th one in the \"main\" track, with accuracy results only a few percent below the winning teams. Given that offline prediction accuracy is only one of several possible quality factors in music recommendation, practitioners have to validate if slight accuracy improvements truly justify the use of highly complex algorithms in real-world applications.",
"title": ""
},
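As a rough illustration of the kind of hybrid described in the abstract above, the sketch below blends scores from an item-based nearest-neighbor model and a matrix-factorization model with fixed weights. All numbers, weights and helper names are assumptions for illustration; they do not reproduce team KAENEN's actual pipeline.

```python
# Sketch: blending item-kNN and matrix-factorization scores for next-track recommendation.
import numpy as np

rng = np.random.default_rng(0)
n_playlists, n_tracks, n_factors = 100, 500, 16

# Binary playlist-track interaction matrix (toy data).
R = (rng.random((n_playlists, n_tracks)) < 0.05).astype(float)

# Item-item cosine similarity for the kNN component.
norms = np.linalg.norm(R, axis=0) + 1e-9
S = (R.T @ R) / np.outer(norms, norms)
np.fill_diagonal(S, 0.0)
knn_scores = R @ S                       # playlist x track

# Crude matrix factorization via truncated SVD.
U, s, Vt = np.linalg.svd(R, full_matrices=False)
mf_scores = (U[:, :n_factors] * s[:n_factors]) @ Vt[:n_factors]

# Weighted hybrid; the 0.7/0.3 split is an arbitrary illustrative choice.
hybrid = 0.7 * knn_scores + 0.3 * mf_scores
hybrid[R > 0] = -np.inf                  # do not re-recommend seed tracks

top10 = np.argsort(-hybrid, axis=1)[:, :10]
print("recommendations for playlist 0:", top10[0])
```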
{
"docid": "4174c1d49ff8755c6b82c2b453918d29",
"text": "Top-k error is currently a popular performance measure on large scale image classification benchmarks such as ImageNet and Places. Despite its wide acceptance, our understanding of this metric is limited as most of the previous research is focused on its special case, the top-1 error. In this work, we explore two directions that shed more light on the top-k error. First, we provide an in-depth analysis of established and recently proposed single-label multiclass methods along with a detailed account of efficient optimization algorithms for them. Our results indicate that the softmax loss and the smooth multiclass SVM are surprisingly competitive in top-k error uniformly across all k, which can be explained by our analysis of multiclass top-k calibration. Further improvements for a specific k are possible with a number of proposed top-k loss functions. Second, we use the top-k methods to explore the transition from multiclass to multilabel learning. In particular, we find that it is possible to obtain effective multilabel classifiers on Pascal VOC using a single label per image for training, while the gap between multiclass and multilabel methods on MS COCO is more significant. Finally, our contribution of efficient algorithms for training with the considered top-k and multilabel loss functions is of independent interest.",
"title": ""
},
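For reference, the top-k error discussed above is straightforward to compute from class scores: an example counts as correct if its true label is among the k highest-scoring classes. A minimal NumPy sketch follows; it illustrates the metric only, not the top-k loss functions proposed in the paper.

```python
# Top-k error: fraction of examples whose true label is NOT among
# the k highest-scoring classes.
import numpy as np

def top_k_error(scores, labels, k):
    # scores: (n_samples, n_classes), labels: (n_samples,)
    topk = np.argsort(-scores, axis=1)[:, :k]
    correct = (topk == labels[:, None]).any(axis=1)
    return 1.0 - correct.mean()

rng = np.random.default_rng(0)
scores = rng.normal(size=(1000, 10))
labels = rng.integers(0, 10, size=1000)
for k in (1, 3, 5):
    print(f"top-{k} error: {top_k_error(scores, labels, k):.3f}")
```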
{
"docid": "e6dcc8f80b5b6528531b7f6e617cd633",
"text": "Over 2 million military and civilian personnel per year (over 1 million in the United States) are occupationally exposed, respectively, to jet propulsion fuel-8 (JP-8), JP-8 +100 or JP-5, or to the civil aviation equivalents Jet A or Jet A-1. Approximately 60 billion gallon of these kerosene-based jet fuels are annually consumed worldwide (26 billion gallon in the United States), including over 5 billion gallon of JP-8 by the militaries of the United States and other NATO countries. JP-8, for example, represents the largest single chemical exposure in the U.S. military (2.53 billion gallon in 2000), while Jet A and A-1 are among the most common sources of nonmilitary occupational chemical exposure. Although more recent figures were not available, approximately 4.06 billion gallon of kerosene per se were consumed in the United States in 1990 (IARC, 1992). These exposures may occur repeatedly to raw fuel, vapor phase, aerosol phase, or fuel combustion exhaust by dermal absorption, pulmonary inhalation, or oral ingestion routes. Additionally, the public may be repeatedly exposed to lower levels of jet fuel vapor/aerosol or to fuel combustion products through atmospheric contamination, or to raw fuel constituents by contact with contaminated groundwater or soil. Kerosene-based hydrocarbon fuels are complex mixtures of up to 260+ aliphatic and aromatic hydrocarbon compounds (C(6) -C(17+); possibly 2000+ isomeric forms), including varying concentrations of potential toxicants such as benzene, n-hexane, toluene, xylenes, trimethylpentane, methoxyethanol, naphthalenes (including polycyclic aromatic hydrocarbons [PAHs], and certain other C(9)-C(12) fractions (i.e., n-propylbenzene, trimethylbenzene isomers). While hydrocarbon fuel exposures occur typically at concentrations below current permissible exposure limits (PELs) for the parent fuel or its constituent chemicals, it is unknown whether additive or synergistic interactions among hydrocarbon constituents, up to six performance additives, and other environmental exposure factors may result in unpredicted toxicity. While there is little epidemiological evidence for fuel-induced death, cancer, or other serious organic disease in fuel-exposed workers, large numbers of self-reported health complaints in this cohort appear to justify study of more subtle health consequences. A number of recently published studies reported acute or persisting biological or health effects from acute, subchronic, or chronic exposure of humans or animals to kerosene-based hydrocarbon fuels, to constituent chemicals of these fuels, or to fuel combustion products. This review provides an in-depth summary of human, animal, and in vitro studies of biological or health effects from exposure to JP-8, JP-8 +100, JP-5, Jet A, Jet A-1, or kerosene.",
"title": ""
},
{
"docid": "79079ee1e352b997785dc0a85efed5e4",
"text": "Automatic recognition of the historical letters (XI-XVIII centuries) carved on the stoned walls of St.Sophia cathedral in Kyiv (Ukraine) was demonstrated by means of capsule deep learning neural network. It was applied to the image dataset of the carved Glagolitic and Cyrillic letters (CGCL), which was assembled and pre-processed recently for recognition and prediction by machine learning methods. CGCL dataset contains >4000 images for glyphs of 34 letters which are hardly recognized by experts even in contrast to notMNIST dataset with the better images of 10 letters taken from different fonts. The capsule network was applied for both datasets in three regimes: without data augmentation, with lossless data augmentation, and lossy data augmentation. Despite the much worse quality of CGCL dataset and extremely low number of samples (in comparison to notMNIST dataset) the capsule network model demonstrated much better results than the previously used convolutional neural network (CNN). The training rate for capsule network model was 5-6 times higher than for CNN. The validation accuracy (and validation loss) was higher (lower) for capsule network model than for CNN without data augmentation even. The area under curve (AUC) values for receiver operating characteristic (ROC) were also higher for the capsule network model than for CNN model: 0.88-0.93 (capsule network) and 0.50 (CNN) without data augmentation, 0.91-0.95 (capsule network) and 0.51 (CNN) with lossless data augmentation, and similar results of 0.91-0.93 (capsule network) and 0.9 (CNN) in the regime of lossless data augmentation only. The confusion matrixes were much better for capsule network than for CNN model and gave the much lower type I (false positive) and type II (false negative) values in all three regimes of data augmentation. These results supports the previous claims that capsule-like networks allow to reduce error rates not only on MNIST digit dataset, but on the other notMNIST letter dataset and the more complex CGCL handwriting graffiti letter dataset also. Moreover, capsule-like networks allow to reduce training set sizes to 180 images even like in this work, and they are considerably better than CNNs on the highly distorted and incomplete letters even like CGCL handwriting graffiti. Keywords— machine learning, deep learning, capsule neural network, stone carving dataset, notMNIST, data augmentation",
"title": ""
},
{
"docid": "5ec1cff52a55c5bd873b5d0d25e0456b",
"text": "This study presents a novel approach to the problem of system portability across different domains: a sentiment annotation system that integrates a corpus-based classifier trained on a small set of annotated in-domain data and a lexicon-based system trained on WordNet. The paper explores the challenges of system portability across domains and text genres (movie reviews, news, blogs, and product reviews), highlights the factors affecting system performance on out-of-domain and smallset in-domain data, and presents a new system consisting of the ensemble of two classifiers with precision-based vote weighting, that provides significant gains in accuracy and recall over the corpus-based classifier and the lexicon-based system taken individually.",
"title": ""
},
{
"docid": "0a58aa0c5dff94efa183fcf6fb7952f6",
"text": "When people explore new environments they often use landmarks as reference points to help navigate and orientate themselves. This research paper examines how spatial datasets can be used to build a system for use in an urban environment which functions as a city guide, announcing Features of Interest (FoI) as they become visible to the user (not just proximal), as the user moves freely around the city. Visibility calculations for the FoIs were pre-calculated based on a digital surface model derived from LIDAR (Light Detection and Ranging) data. The results were stored in a textbased relational database management system (RDBMS) for rapid retrieval. All interaction between the user and the system was via a speech-based interface, allowing the user to record and request further information on any of the announced FoI. A prototype system, called Edinburgh Augmented Reality System (EARS) , was designed, implemented and field tested in order to assess the effectiveness of these ideas. The application proved to be an innovating, ‘non-invasive’ approach to augmenting the user’s reality",
"title": ""
},
{
"docid": "4d1be9aebf7534cce625b95bde4696c6",
"text": "BlockChain (BC) has attracted tremendous attention due to its immutable nature and the associated security and privacy benefits. BC has the potential to overcome security and privacy challenges of Internet of Things (IoT). However, BC is computationally expensive, has limited scalability and incurs significant bandwidth overheads and delays which are not suited to the IoT context. We propose a tiered Lightweight Scalable BC (LSB) that is optimized for IoT requirements. We explore LSB in a smart home setting as a representative example for broader IoT applications. Low resource devices in a smart home benefit from a centralized manager that establishes shared keys for communication and processes all incoming and outgoing requests. LSB achieves decentralization by forming an overlay network where high resource devices jointly manage a public BC that ensures end-to-end privacy and security. The overlay is organized as distinct clusters to reduce overheads and the cluster heads are responsible for managing the public BC. LSB incorporates several optimizations which include algorithms for lightweight consensus, distributed trust and throughput management. Qualitative arguments demonstrate that LSB is resilient to several security attacks. Extensive simulations show that LSB decreases packet overhead and delay and increases BC scalability compared to relevant baselines.",
"title": ""
},
{
"docid": "5c29083624be58efa82b4315976f8dc2",
"text": "This paper presents a structured ordinal measure method for video-based face recognition that simultaneously lear ns ordinal filters and structured ordinal features. The problem is posed as a non-convex integer program problem that includes two parts. The first part learns stable ordinal filters to project video data into a large-margin ordinal space . The second seeks self-correcting and discrete codes by balancing the projected data and a rank-one ordinal matrix in a structured low-rank way. Unsupervised and supervised structures are considered for the ordinal matrix. In addition, as a complement to hierarchical structures, deep feature representations are integrated into our method to enhance coding stability. An alternating minimization metho d is employed to handle the discrete and low-rank constraints , yielding high-quality codes that capture prior structures well. Experimental results on three commonly used face video databases show that our method with a simple voting classifier can achieve state-of-the-art recognition ra tes using fewer features and samples.",
"title": ""
},
{
"docid": "471af6726ec78126fcf46f4e42b666aa",
"text": "A new thermal tuning circuit for optical ring modulators enables demonstration of an optical chip-to-chip link for the first time with monolithically integrated photonic devices in a commercial 45nm SOI process, without any process changes. The tuning circuit uses independent 1/0 level-tracking and 1/0 bit counting to remain resilient against laser self-heating transients caused by non-DC-balanced transmit data. A 30fJ/bit transmitter and 374fJ/bit receiver with 6μApk-pk photocurrent sensitivity complete the 5Gb/s link. The thermal tuner consumes 275fJ/bit and achieves a 600 GHz tuning range with a heater tuning efficiency of 3.8μW/GHz.",
"title": ""
},
{
"docid": "24a10176ec2367a6a0b5333d57b894b8",
"text": "Automated classification of biological cells according to their 3D morphology is highly desired in a flow cytometer setting. We have investigated this possibility experimentally and numerically using a diffraction imaging approach. A fast image analysis software based on the gray level co-occurrence matrix (GLCM) algorithm has been developed to extract feature parameters from measured diffraction images. The results of GLCM analysis and subsequent classification demonstrate the potential for rapid classification among six types of cultured cells. Combined with numerical results we show that the method of diffraction imaging flow cytometry has the capacity as a platform for high-throughput and label-free classification of biological cells.",
"title": ""
},
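The texture features at the heart of the analysis above come from the gray level co-occurrence matrix. The sketch below builds a GLCM for a single pixel offset with plain NumPy and derives two common Haralick-style features (contrast and homogeneity); the offset, gray-level count and feature choice are illustrative assumptions rather than the authors' exact configuration.

```python
# Sketch: gray level co-occurrence matrix (GLCM) features for one image.
import numpy as np

def glcm(image, levels=16, offset=(0, 1)):
    """Normalized co-occurrence matrix for a single (dy, dx) offset."""
    img = (image * (levels - 1)).astype(int)        # quantize to `levels` gray levels
    dy, dx = offset
    a = img[max(0, -dy):img.shape[0] - max(0, dy), max(0, -dx):img.shape[1] - max(0, dx)]
    b = img[max(0, dy):, max(0, dx):][:a.shape[0], :a.shape[1]]
    P = np.zeros((levels, levels))
    np.add.at(P, (a.ravel(), b.ravel()), 1)
    return P / P.sum()

def glcm_features(P):
    i, j = np.indices(P.shape)
    contrast = np.sum(P * (i - j) ** 2)
    homogeneity = np.sum(P / (1.0 + np.abs(i - j)))
    return contrast, homogeneity

rng = np.random.default_rng(0)
diffraction_image = rng.random((128, 128))          # stand-in for a measured diffraction image
P = glcm(diffraction_image)
print("contrast=%.3f homogeneity=%.3f" % glcm_features(P))
```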
{
"docid": "9edfedc5a1b17481ee8c16151cf42c88",
"text": "Nevus comedonicus is considered a genodermatosis characterized by the presence of multiple groups of dilated pilosebaceous orifices filled with black keratin plugs, with sharply unilateral distribution mostly on the face, neck, trunk, upper arms. Lesions can appear at any age, frequently before the age of 10 years, but they are usually present at birth. We present a 2.7-year-old girl with a very severe form of nevus comedonicus. She exhibited lesions located initially at the left side of the body with a linear characteristic, following Blascko lines T1/T2, T5, T7, S1 /S2, but progressively developed lesions on the right side of the scalp and left gluteal area.",
"title": ""
},
{
"docid": "bdbd3d65c79e4f22d2e85ac4137ee67a",
"text": "With the advances in new-generation information technologies, especially big data and digital twin, smart manufacturing is becoming the focus of global manufacturing transformation and upgrading. Intelligence comes from data. Integrated analysis for the manufacturing big data is beneficial to all aspects of manufacturing. Besides, the digital twin paves a way for the cyber-physical integration of manufacturing, which is an important bottleneck to achieve smart manufacturing. In this paper, the big data and digital twin in manufacturing are reviewed, including their concept as well as their applications in product design, production planning, manufacturing, and predictive maintenance. On this basis, the similarities and differences between big data and digital twin are compared from the general and data perspectives. Since the big data and digital twin can be complementary, how they can be integrated to promote smart manufacturing are discussed.",
"title": ""
},
{
"docid": "3e9a214856235ef36a4dd2e9684543b7",
"text": "Leaf area index (LAI) is a key biophysical variable that can be used to derive agronomic information for field management and yield prediction. In the context of applying broadband and high spatial resolution satellite sensor data to agricultural applications at the field scale, an improved method was developed to evaluate commonly used broadband vegetation indices (VIs) for the estimation of LAI with VI–LAI relationships. The evaluation was based on direct measurement of corn and potato canopies and on QuickBird multispectral images acquired in three growing seasons. The selected VIs were correlated strongly with LAI but with different efficiencies for LAI estimation as a result of the differences in the stabilities, the sensitivities, and the dynamic ranges. Analysis of error propagation showed that LAI noise inherent in each VI–LAI function generally increased with increasing LAI and the efficiency of most VIs was low at high LAI levels. Among selected VIs, the modified soil-adjusted vegetation index (MSAVI) was the best LAI estimator with the largest dynamic range and the highest sensitivity and overall efficiency for both crops. QuickBird image-estimated LAI with MSAVI–LAI relationships agreed well with ground-measured LAI with the root-mean-square-error of 0.63 and 0.79 for corn and potato canopies, respectively. LAI estimated from the high spatial resolution pixel data exhibited spatial variability similar to the ground plot measurements. For field scale agricultural applications, MSAVI–LAI relationships are easy-to-apply and reasonably accurate for estimating LAI. # 2007 Elsevier B.V. All rights reserved.",
"title": ""
},
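For readers who want to reproduce the index in the abstract above, the commonly used closed-form MSAVI (MSAVI2) and a simple empirical VI–LAI fit are sketched below. The exponential model form, the reflectance values and the coefficients are placeholders to illustrate the workflow; the paper's fitted VI–LAI functions are not restated here.

```python
# Sketch: MSAVI from red/NIR reflectance and a toy empirical VI-LAI fit.
import numpy as np

def msavi(nir, red):
    """Closed-form modified soil-adjusted vegetation index (MSAVI2)."""
    return (2 * nir + 1 - np.sqrt((2 * nir + 1) ** 2 - 8 * (nir - red))) / 2

# Toy ground data: reflectances and measured LAI (purely illustrative values).
red = np.array([0.08, 0.06, 0.05, 0.04, 0.03])
nir = np.array([0.30, 0.38, 0.45, 0.52, 0.58])
lai = np.array([0.8, 1.6, 2.5, 3.4, 4.2])

vi = msavi(nir, red)

# Fit LAI = a * exp(b * VI) by linear regression on log(LAI).
b, log_a = np.polyfit(vi, np.log(lai), 1)
a = np.exp(log_a)
print("fitted model: LAI = %.2f * exp(%.2f * VI)" % (a, b))
print("predicted LAI:", np.round(a * np.exp(b * vi), 2))
```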
{
"docid": "2848635e59cf2a41871d79748822c176",
"text": "The ventral pathway is involved in primate visual object recognition. In humans, a central stage in this pathway is an occipito–temporal region termed the lateral occipital complex (LOC), which is preferentially activated by visual objects compared to scrambled images or textures. However, objects have characteristic attributes (such as three-dimensional shape) that can be perceived both visually and haptically. Therefore, object-related brain areas may hold a representation of objects in both modalities. Using fMRI to map object-related brain regions, we found robust and consistent somatosensory activation in the occipito–temporal cortex. This region showed clear preference for objects compared to textures in both modalities. Most somatosensory object-selective voxels overlapped a part of the visual object-related region LOC. Thus, we suggest that neuronal populations in the occipito–temporal cortex may constitute a multimodal object-related network.",
"title": ""
},
{
"docid": "9960d17cb019350a279e4daccccb8e87",
"text": "Deep learning with neural networks is applied by an increasing number of people outside of classic research environments, due to the vast success of the methodology on a wide range of machine perception tasks. While this interest is fueled by beautiful success stories, practical work in deep learning on novel tasks without existing baselines remains challenging. This paper explores the specific challenges arising in the realm of real world tasks, based on case studies from research & development in conjunction with industry, and extracts lessons learned from them. It thus fills a gap between the publication of latest algorithmic and methodical developments, and the usually omitted nitty-gritty of how to make them work. Specifically, we give insight into deep learning projects on face matching, print media monitoring, industrial quality control, music scanning, strategy game playing, and automated machine learning, thereby providing best practices for deep learning in practice.",
"title": ""
},
{
"docid": "a2e0163aebb348d3bfab7ebac119e0c0",
"text": "Herein we report the first study of the oxygen reduction reaction (ORR) catalyzed by a cofacial porphyrin scaffold accessed in high yield (overall 53%) using coordination-driven self-assembly with no chromatographic purification steps. The ORR activity was investigated using chemical and electrochemical techniques on monomeric cobalt(II) tetra(meso-4-pyridyl)porphyrinate (CoTPyP) and its cofacial analogue [Ru8(η6-iPrC6H4Me)8(dhbq)4(CoTPyP)2][OTf]8 (Co Prism) (dhbq = 2,5-dihydroxy-1,4-benzoquinato, OTf = triflate) as homogeneous oxygen reduction catalysts. Co Prism is obtained in one self-assembly step that organizes six total building blocks, two CoTPyP units and four arene-Ru clips, into a cofacial motif previously demonstrated with free-base, Zn(II), and Ni(II) porphyrins. Turnover frequencies (TOFs) from chemical reduction (66 vs 6 h-1) and rate constants of overall homogeneous catalysis (kobs) determined from rotating ring-disk experiments (1.1 vs 0.05 h-1) establish a cofacial enhancement upon comparison of the activities of Co Prism and CoTPyP, respectively. Cyclic voltammetry was used to initially probe the electrochemical catalytic behavior. Rotating ring-disk electrode studies were completed to probe the Faradaic efficiency and obtain an estimate of the rate constant associated with the ORR.",
"title": ""
},
{
"docid": "c1632ead357d08c3e019bb12ff75e756",
"text": "Learning the representations of nodes in a network can benefit various analysis tasks such as node classification, link prediction, clustering, and anomaly detection. Such a representation learning problem is referred to as network embedding, and it has attracted significant attention in recent years. In this article, we briefly review the existing network embedding methods by two taxonomies. The technical taxonomy focuses on the specific techniques used and divides the existing network embedding methods into two stages, i.e., context construction and objective design. The non-technical taxonomy focuses on the problem setting aspect and categorizes existing work based on whether to preserve special network properties, to consider special network types, or to incorporate additional inputs. Finally, we summarize the main findings based on the two taxonomies, analyze their usefulness, and discuss future directions in this area.",
"title": ""
},
{
"docid": "a34825f20b645a146857c1544c08e66e",
"text": "1. The midterm will have about 5-6 long questions, and about 8-10 short questions. Space will be provided on the actual midterm for you to write your answers. 2. The midterm is meant to be educational, and as such some questions could be quite challenging. Use your time wisely to answer as much as you can! 3. For additional practice, please see CS 229 extra problem sets available at 1. [13 points] Generalized Linear Models Recall that generalized linear models assume that the response variable y (conditioned on x) is distributed according to a member of the exponential family: p(y; η) = b(y) exp(ηT (y) − a(η)), where η = θ T x. For this problem, we will assume η ∈ R. (a) [10 points] Given a training set {(x (i) , y (i))} m i=1 , the loglikelihood is given by (θ) = m i=1 log p(y (i) | x (i) ; θ). Give a set of conditions on b(y), T (y), and a(η) which ensure that the loglikelihood is a concave function of θ (and thus has a unique maximum). Your conditions must be reasonable, and should be as weak as possible. (E.g., the answer \" any b(y), T (y), and a(η) so that (θ) is concave \" is not reasonable. Similarly, overly narrow conditions, including ones that apply only to specific GLMs, are also not reasonable.) (b) [3 points] When the response variable is distributed according to a Normal distribution (with unit variance), we have b(y) = 1 √ 2π e −y 2 2 , T (y) = y, and a(η) = η 2 2. Verify that the condition(s) you gave in part (a) hold for this setting.",
"title": ""
},
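A sketch of the standard argument behind part (a) of the problem above, written out in LaTeX for reference; this is one sufficient set of conditions and not necessarily the only acceptable answer.

```latex
% Expanding the log-likelihood with \eta = \theta^\top x^{(i)}:
\ell(\theta) = \sum_{i=1}^{m} \Big[ \log b\big(y^{(i)}\big)
  + \theta^{\top} x^{(i)}\, T\big(y^{(i)}\big)
  - a\big(\theta^{\top} x^{(i)}\big) \Big]
% The first term is constant in \theta, the second is linear in \theta, and
% -a(\theta^\top x^{(i)}) is concave in \theta whenever a is convex
% (a convex function composed with an affine map is convex).
% Sufficient condition: a(\eta) convex, e.g. a''(\eta) \ge 0 for all \eta,
% with b(y) > 0 so that \log b(y) is defined; T may be arbitrary.
% Part (b): for the unit-variance Gaussian, a(\eta) = \eta^2/2,
% so a''(\eta) = 1 \ge 0 and the condition holds.
```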
{
"docid": "e3823047ccc723783cf05f24ca60d449",
"text": "Social science studies have acknowledged that the social influence of individuals is not identical. Social networks structure and shared text can reveal immense information about users, their interests, and topic-based influence. Although some studies have considered measuring user influence, less has been on measuring and estimating topic-based user influence. In this paper, we propose an approach that incorporates network structure, user-generated content for topic-based influence measurement, and user’s interactions in the network. We perform experimental analysis on Twitter data and show that our proposed approach can effectively measure topic-based user influence.",
"title": ""
},
{
"docid": "ec9eb309dd9d6f72bd7286580e75d36d",
"text": "This paper describes SONDY, a tool for analysis of trends and dynamics in online social network data. SONDY addresses two audiences: (i) end-users who want to explore social activity and (ii) researchers who want to experiment and compare mining techniques on social data. SONDY helps end-users like media analysts or journalists understand social network users interests and activity by providing emerging topics and events detection as well as network analysis functionalities. To this end, the application proposes visualizations such as interactive time-lines that summarize information and colored user graphs that reflect the structure of the network. SONDY also provides researchers an easy way to compare and evaluate recent techniques to mine social data, implement new algorithms and extend the application without being concerned with how to make it accessible. In the demo, participants will be invited to explore information from several datasets of various sizes and origins (such as a dataset consisting of 7,874,772 messages published by 1,697,759 Twitter users during a period of 7 days) and apply the different functionalities of the platform in real-time.",
"title": ""
}
] |
scidocsrr
|
a785601f50bad4ed3744ee1442d8116e
|
A Unit Selection Methodology for Music Generation Using Deep Neural Networks
|
[
{
"docid": "8f47dc7401999924dba5cb3003194071",
"text": "Few types of signal streams are as ubiquitous as music. Here we consider the problem of extracting essential ingredients of music signals, such as well-defined global temporal structure in the form of nested periodicities (or meter). Can we construct an adaptive signal processing device that learns by example how to generate new instances of a given musical style? Because recurrent neural networks can in principle learn the temporal structure of a signal, they are good candidates for such a task. Unfortunately, music composed by standard recurrent neural networks (RNNs) often lacks global coherence. The reason for this failure seems to be that RNNs cannot keep track of temporally distant events that indicate global music structure. Long Short-Term Memory (LSTM) has succeeded in similar domains where other RNNs have failed, such as timing & counting and learning of context sensitive languages. In the current study we show that LSTM is also a good mechanism for learning to compose music. We present experimental results showing that LSTM successfully learns a form of blues music and is able to compose novel (and we believe pleasing) melodies in that style. Remarkably, once the network has found the relevant structure it does not drift from it: LSTM is able to play the blues with good timing and proper structure as long as one is willing to listen.",
"title": ""
},
{
"docid": "67b5bd59689c325365ac765a17886169",
"text": "L-Systems have traditionally been used as a popular method for the modelling of spacefilling curves, biological systems and morphogenesis. In this paper, we adapt string rewriting grammars based on L-Systems into a system for music composition. Representation of pitch, duration and timbre are encoded as grammar symbols, upon which a series of re-writing rules are applied. Parametric extensions to the grammar allow the specification of continuous data for the purposes of modulation and control. Such continuous data is also under control of the grammar. Using non-deterministic grammars with context sensitivity allows the simulation of Nth-order Markov models with a more economical representation than transition matrices and greater flexibility than previous composition models based on finite state automata or Petri nets. Using symbols in the grammar to represent relationships between notes, (rather than absolute notes) in combination with a hierarchical grammar representation, permits the emergence of complex music compositions from a relatively simple grammars.",
"title": ""
},
{
"docid": "9198e035c77e8798462dd97426ed0e67",
"text": "In this paper, we propose a generic technique to model temporal dependencies and sequences using a combination of a recurrent neural network and a Deep Belief Network. Our technique, RNN-DBN, is an amalgamation of the memory state of the RNN that allows it to provide temporal information and a multi-layer DBN that helps in high level representation of the data. This makes RNN-DBNs ideal for sequence generation. Further, the use of a DBN in conjunction with the RNN makes this model capable of significantly more complex data representation than an RBM. We apply this technique to the task of polyphonic music generation.",
"title": ""
},
{
"docid": "b7ddc52ae897720f50d3f092d8cfbdab",
"text": "Markov chains are a well known tool to model temporal properties of many phenomena, from text structure to fluctuations in economics. Because they are easy to generate, Markovian sequences, i.e. temporal sequences having the Markov property, are also used for content generation applications such as text or music generation that imitate a given style. However, Markov sequences are traditionally generated using greedy, left-to-right algorithms. While this approach is computationally cheap, it is fundamentally unsuited for interactive control. This paper addresses the issue of generating steerable Markovian sequences. We target interactive applications such as games, in which users want to control, through simple input devices, the way the system generates a Markovian sequence, such as a text, a musical sequence or a drawing. To this aim, we propose to revisit Markov sequence generation as a branch and bound constraint satisfaction problem (CSP). We propose a CSP formulation of the basic Markovian hypothesis as elementary Markov Constraints (EMC). We propose algorithms that achieve domain-consistency for the propagators of EMCs, in an event-based implementation of CSP. We show how EMCs can be combined to estimate the global Markovian probability of a whole sequence, and accommodate for different species of Markov generation such as fixed order, variable-order, or smoothing. Such a formulation, although more costly than traditional greedy generation algorithms, yields the immense advantage of being naturally steerable, since control specifications can be represented by arbitrary additional constraints, without any modification of the generation algorithm. We illustrate our approach on simple yet combinatorial chord sequence and melody generation problems and give some performance results.",
"title": ""
}
] |
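As a minimal illustration of the Markov-chain baseline discussed in the passages above (an editorial sketch; the note set and transition table are invented, and real systems learn probabilities from data or use constraint solving), a greedy first-order generator looks like this:

```python
import random

# Hypothetical first-order transition table over a pentatonic scale (illustrative only).
transitions = {
    "C": ["D", "E", "G"],
    "D": ["C", "E", "A"],
    "E": ["D", "G", "C"],
    "G": ["E", "A", "C"],
    "A": ["G", "C", "D"],
}

def generate_melody(start="C", length=16, seed=0):
    """Greedy left-to-right sampling: each note depends only on the previous one."""
    rng = random.Random(seed)
    melody = [start]
    for _ in range(length - 1):
        melody.append(rng.choice(transitions[melody[-1]]))
    return melody

print(" ".join(generate_melody()))
```

The constraint-based approach in the last positive passage replaces this greedy left-to-right loop with a constraint-satisfaction search, which is what makes the generated sequence steerable by user-imposed constraints.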
[
{
"docid": "785a0d51c9d105532a2e571afccd957b",
"text": "Facial recognition, one of the basic topics in computer vision and pattern recognition, has received substantial attention in recent years. However, for those traditional facial recognition algorithms, the facial images are reshaped to a long vector, thereby losing part of the original spatial constraints of each pixel. In this paper, a new tensor-based feature extraction algorithm termed tensor rank preserving discriminant analysis (TRPDA) for facial image recognition is proposed; the proposed method involves two stages: in the first stage, the low-dimensional tensor subspace of the original input tensor samples was obtained; in the second stage, discriminative locality alignment was utilized to obtain the ultimate vector feature representation for subsequent facial recognition. On the one hand, the proposed TRPDA algorithm fully utilizes the natural structure of the input samples, and it applies an optimization criterion that can directly handle the tensor spectral analysis problem, thereby decreasing the computation cost compared those traditional tensor-based feature selection algorithms. On the other hand, the proposed TRPDA algorithm extracts feature by finding a tensor subspace that preserves most of the rank order information of the intra-class input samples. Experiments on the three facial databases are performed here to determine the effectiveness of the proposed TRPDA algorithm.",
"title": ""
},
{
"docid": "a7665a6c0955b5d4ca2c4c8cdc183974",
"text": "Deep learning has recently helped AI systems to achieve human-level performance in several domains, including speech recognition, object classification, and playing several types of games. The major benefit of deep learning is that it enables end-to-end learning of representations of the data on several levels of abstraction. However, the overall network architecture and the learning algorithms’ sensitive hyperparameters still need to be set manually by human experts. In this talk, I will discuss extensions of Bayesian optimization for handling this problem effectively, thereby paving the way to fully automated end-to-end learning. I will focus on speeding up Bayesian optimization by reasoning over data subsets and initial learning curves, sometimes resulting in 100-fold speedups in finding good hyperparameter settings. I will also show competition-winning practical systems for automated machine learning (AutoML) and briefly show related applications to the end-to-end optimization of algorithms for solving hard combinatorial problems. Bio. Frank Hutter is an Emmy Noether Research Group Lead (eq. Asst. Prof.) at the Computer Science Department of the University of Freiburg (Germany). He received his PhD from the University of British Columbia (2009). Frank’s main research interests span artificial intelligence, machine learning, combinatorial optimization, and automated algorithm design. He received a doctoral dissertation award from the Canadian Artificial Intelligence Association and, with his coauthors, several best paper awards (including from JAIR and IJCAI) and prizes in international competitions on machine learning, SAT solving, and AI planning. In 2016 he received an ERC Starting Grant for a project on automating deep learning based on Bayesian optimization, Bayesian neural networks, and deep reinforcement learning. Frontiers in Recurrent Neural Network Research",
"title": ""
},
{
"docid": "6db5f103fa479fc7c7c33ea67d7950f6",
"text": "Problem statement: To design, implement, and test an algorithm for so lving the square jigsaw puzzle problem, which has many applications in image processing, pattern recognition, and computer vision such as restoration of archeologica l artifacts and image descrambling. Approach: The algorithm used the gray level profiles of border pi xels for local matching of the puzzle pieces, which was performed using dynamic programming to facilita te non-rigid alignment of pixels of two gray level profiles. Unlike the classical best-first sea rch, the algorithm simultaneously located the neigh bors of a puzzle piece during the search using the wellknown Hungarian procedure, which is an optimal assignment procedure. To improve the search for a g lobal solution, every puzzle piece was considered as starting piece at various starting locations. Results: Experiments using four well-known images demonstrated the effectiveness of the proposed appr o ch over the classical piece-by-piece matching approach. The performance evaluation was based on a new precision performance measure. For all four test images, the proposed algorithm achieved 1 00% precision rate for puzzles up to 8×8. Conclusion: The proposed search mechanism based on simultaneou s all cation of puzzle pieces using the Hungarian procedure provided better performance than piece-by-piece used in classical methods.",
"title": ""
},
{
"docid": "0084d9c69d79a971e7139ab9720dd846",
"text": "ÐRetrieving images from large and varied collections using image content as a key is a challenging and important problem. We present a new image representation that provides a transformation from the raw pixel data to a small set of image regions that are coherent in color and texture. This aBlobworldo representation is created by clustering pixels in a joint color-texture-position feature space. The segmentation algorithm is fully automatic and has been run on a collection of 10,000 natural images. We describe a system that uses the Blobworld representation to retrieve images from this collection. An important aspect of the system is that the user is allowed to view the internal representation of the submitted image and the query results. Similar systems do not offer the user this view into the workings of the system; consequently, query results from these systems can be inexplicable, despite the availability of knobs for adjusting the similarity metrics. By finding image regions that roughly correspond to objects, we allow querying at the level of objects rather than global image properties. We present results indicating that querying for images using Blobworld produces higher precision than does querying using color and texture histograms of the entire image in cases where the image contains distinctive objects. Index TermsÐSegmentation and grouping, image retrieval, image querying, clustering, Expectation-Maximization.",
"title": ""
},
{
"docid": "29f6917a8eaf7958ffa3408a41e981a4",
"text": "Reconstruction and rehabilitation following rhinectomy remains controversial and presents a complex problem. Although reconstruction with local and microvascular flaps is a valid option, the aesthetic results may not always be satisfactory. The aesthetic results achieved with a nasal prosthesis are excellent; however patient acceptance relies on a secure method of retention. The technique used and results obtained in a large series of patients undergoing rhinectomy and receiving zygomatic implants for the retention of a nasal prosthesis are described here. A total of 56 zygomatic implants (28 patients) were placed, providing excellent retention and durability with the loss of only one implant in 15 years.",
"title": ""
},
{
"docid": "1b5bf2ef58a5f12e09f66e91d6472e56",
"text": "High quality upsampling of sparse 3D point clouds is critically useful for a wide range of geometric operations such as reconstruction, rendering, meshing, and analysis. In this paper, we propose a data-driven algorithm that enables an upsampling of 3D point clouds without the need for hard-coded rules. Our approach uses a deep network with Chamfer distance as the loss function, capable of learning the latent features in point clouds belonging to different object categories. We evaluate our algorithm across different amplification factors, with upsampling learned and performed on objects belonging to the same category as well as different categories. We also explore the desirable characteristics of input point clouds as a function of the distribution of the point samples. Finally, we demonstrate the performance of our algorithm in single-category training versus multi-category training scenarios. The final proposed model is compared against a baseline, optimization-based upsampling method. The results indicate that our algorithm is capable of generating more accurate upsamplings with less Chamfer loss.",
"title": ""
},
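The point-cloud upsampling passage above uses the Chamfer distance as its training loss; below is a rough numpy sketch of one common symmetric formulation (the paper may use a squared or otherwise weighted variant, and the point clouds here are random toy data):

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between point sets a (n, 3) and b (m, 3)."""
    # Pairwise squared distances, shape (n, m).
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    # Average nearest-neighbour distance in both directions.
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

a = np.random.rand(128, 3)   # sparse input cloud (toy data)
b = np.random.rand(512, 3)   # candidate upsampled cloud (toy data)
print(chamfer_distance(a, b))
```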
{
"docid": "4bf485a218fca405a4d8655bc2a2be86",
"text": "In today’s competitive business environment, companies are facing challenges in dealing with big data issues for rapid decision making for improved productivity. Many manufacturing systems are not ready to manage big data due to the lack of smart analytics tools. Germany is leading a transformation toward 4th Generation Industrial Revolution (Industry 4.0) based on Cyber-Physical System based manufacturing and service innovation. As more software and embedded intelligence are integrated in industrial products and systems, predictive technologies can further intertwine intelligent algorithms with electronics and tether-free intelligence to predict product performance degradation and autonomously manage and optimize product service needs. This article addresses the trends of industrial transformation in big data environment as well as the readiness of smart predictive informatics tools to manage big data to achieve transparency and productivity. Keywords—Industry 4.0; Cyber Physical Systems; Prognostics and Health Management; Big Data;",
"title": ""
},
{
"docid": "c843f4ba35aee9ef2ac7e852a1d489c4",
"text": "We investigate the effect of a corporate culture of sustainability on multiple facets of corporate behavior and performance outcomes. Using a matched sample of 180 companies, we find that corporations that voluntarily adopted environmental and social policies many years ago termed as High Sustainability companies exhibit fundamentally different characteristics from a matched sample of firms that adopted almost none of these policies termed as Low Sustainability companies. In particular, we find that the boards of directors of these companies are more likely to be responsible for sustainability and top executive incentives are more likely to be a function of sustainability metrics. Moreover, they are more likely to have organized procedures for stakeholder engagement, to be more long-term oriented, and to exhibit more measurement and disclosure of nonfinancial information. Finally, we provide evidence that High Sustainability companies significantly outperform their counterparts over the long-term, both in terms of stock market and accounting performance. The outperformance is stronger in sectors where the customers are individual consumers instead of companies, companies compete on the basis of brands and reputations, and products significantly depend upon extracting large amounts of natural resources. Robert G. Eccles is a Professor of Management Practice at Harvard Business School. Ioannis Ioannou is an Assistant Professor of Strategic and International Management at London Business School. George Serafeim is an Assistant Professor of Business Administration at Harvard Business School, contact email: gserafeim@hbs.edu. Robert Eccles and George Serafeim gratefully acknowledge financial support from the Division of Faculty Research and Development of the Harvard Business School. We would like to thank Christopher Greenwald for supplying us with the ASSET4 data. Moreover, we would like to thank Cecile Churet and Iordanis Chatziprodromou from Sustainable Asset Management for giving us access to their proprietary data. We are grateful to Chris Allen, Jeff Cronin, Christine Rivera, and James Zeitler for research assistance. We thank Ben Esty, Joshua Margolis, Costas Markides, Catherine Thomas and seminar participants at Boston College for helpful comments. We are solely responsible for any errors in this manuscript.",
"title": ""
},
{
"docid": "a1ef2bce061c11a2d29536d7685a56db",
"text": "This paper presents stacked attention networks (SANs) that learn to answer natural language questions from images. SANs use semantic representation of a question as query to search for the regions in an image that are related to the answer. We argue that image question answering (QA) often requires multiple steps of reasoning. Thus, we develop a multiple-layer SAN in which we query an image multiple times to infer the answer progressively. Experiments conducted on four image QA data sets demonstrate that the proposed SANs significantly outperform previous state-of-the-art approaches. The visualization of the attention layers illustrates the progress that the SAN locates the relevant visual clues that lead to the answer of the question layer-by-layer.",
"title": ""
},
{
"docid": "99e4a4619e20bf0612c0db4249952874",
"text": "Today, machine learning based on neural networks has become mainstream, in many application domains. A small subset of machine learning algorithms, called Convolutional Neural Networks (CNN), are considered as state-ofthe- art for many applications (e.g. video/audio classification). The main challenge in implementing the CNNs, in embedded systems, is their large computation, memory, and bandwidth requirements. To meet these demands, dedicated hardware accelerators have been proposed. Since memory is the major cost in CNNs, recent accelerators focus on reducing the memory accesses. In particular, they exploit data locality using either tiling, layer merging or intra/inter feature map parallelism to reduce the memory footprint. However, they lack the flexibility to interleave or cascade these optimizations. Moreover, most of the existing accelerators do not exploit compression that can simultaneously reduce memory requirements, increase the throughput, and enhance the energy efficiency. To tackle these limitations, we present a flexible accelerator called MOCHA. MOCHA has three features that differentiate it from the state-of-the-art: (i) the ability to compress input/ kernels, (ii) the flexibility to interleave various optimizations, and (iii) intelligence to automatically interleave and cascade the optimizations, depending on the dimension of a specific CNN layer and available resources. Post layout Synthesis results reveal that MOCHA provides up to 63% higher energy efficiency, up to 42% higher throughput, and up to 30% less storage, compared to the next best accelerator, at the cost of 26-35% additional area.",
"title": ""
},
{
"docid": "d361dd8eaea9c8fa8d0a74e8f2161f4b",
"text": "Gamification is commonly employed in designing interactive systems to enhance user engagement and motivations, or to trigger behavior change processes. Although some quantitative studies have been recently conducted aiming at measuring the effects of gamification on users’ behaviors and motivations, there is a shortage of qualitative studies able to capture the subjective experiences of users, when using gamified systems. The authors propose to investigate how users are engaged by the most common gamification techniques, by conducting a diary study followed by a series of six focus groups. From the findings gathered, they conclude the paper identifying some implications for the design of interactive systems that aim at supporting intrinsic motivations to engage their users. A Qualitative Investigation of Gamification: Motivational Factors in Online Gamified Services and Applications",
"title": ""
},
{
"docid": "81919bc432dd70ed3e48a0122d91b9e4",
"text": "Artemisinin resistance in Plasmodium falciparum has emerged as a major threat for malaria control and elimination worldwide. Mutations in the Kelch propeller domain of PfK13 are the only known molecular markers for artemisinin resistance in this parasite. Over 100 non-synonymous mutations have been identified in PfK13 from various malaria endemic regions. This study aimed to investigate the genetic diversity of PvK12, the Plasmodium vivax ortholog of PfK13, in parasite populations from Southeast Asia, where artemisinin resistance in P. falciparum has emerged. The PvK12 sequences in 120 P. vivax isolates collected from Thailand (22), Myanmar (32) and China (66) between 2004 and 2008 were obtained and 353 PvK12 sequences from worldwide populations were retrieved for further analysis. These PvK12 sequences revealed a very low level of genetic diversity (π = 0.00003) with only three single nucleotide polymorphisms (SNPs). Of these three SNPs, only G581R is nonsynonymous. The synonymous mutation S88S is present in 3% (1/32) of the Myanmar samples, while G704G and G581R are present in 1.5% (1/66) and 3% (2/66) of the samples from China, respectively. None of the mutations observed in the P. vivax samples were associated with artemisinin resistance in P. falciparum. Furthermore, analysis of 473 PvK12 sequences from twelve worldwide P. vivax populations confirmed the very limited polymorphism in this gene and detected only five distinct haplotypes. The PvK12 sequences from global P. vivax populations displayed very limited genetic diversity indicating low levels of baseline polymorphisms of PvK12 in these areas.",
"title": ""
},
{
"docid": "48b3ee93758294ffa7b24584c53cbda1",
"text": "Engineering design problems requiring the construction of a cheap-to-evaluate 'surrogate' model f that emulates the expensive response of some black box f come in a variety of forms, but they can generally be distilled down to the following template. Here ffx is some continuous quality, cost or performance metric of a product or process defined by a k-vector of design variables x ∈ D ⊂ R k. In what follows we shall refer to D as the design space or design domain. Beyond the assumption of continuity, the only insight we can gain into f is through discrete observations or samples x ii → y ii = ffx ii i = 1 n. These are expensive to obtain and therefore must be used sparingly. The task is to use this sparse set of samples to construct an approximation f , which can then be used to make a cheap performance prediction for any design x ∈ D. Much of this book is made up of recipes for constructing f , given a set of samples. Excepting a few pathological cases, the mathematical formulations of these modelling approaches are well-posed, regardless of how the sampling plan X = x 1 x 2 x nn determines the spatial arrangement of the observations we have built them upon. Some models do require a minimum number n of data points but, once we have passed this threshold, we can use them to build an unequivocally defined surrogate. However, a well-posed model does not necessarily generalize well, that is it may still be poor at predicting unseen data, and this feature does depend on the sampling plan X. For example, measuring the performance of a design at the extreme values of its parameters may leave a great deal of interesting behaviour undiscovered, say, in the centre of the design space. Equally, spraying points liberally in certain parts of the inside of the domain, forcing the surrogate model to make far-reaching extrapolations elsewhere, may lead us to (false) global conclusions based on patchy, local knowledge of the objective landscape. Of course, we do not always have a choice in the matter. We may be using data obtained by someone else for some other purpose or the available observations may come from a variety of external sources and we may not be able to add to them. The latter situation often occurs in conceptual design, where we …",
"title": ""
},
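The surrogate-modelling passage above builds a cheap approximation f̂ from a sparse sampling plan X; below is a hypothetical, minimal sketch of one such surrogate (a Gaussian radial-basis interpolant; the test function, kernel width and sample count are invented, and the book itself covers several other model types):

```python
import numpy as np

def fit_rbf(X, y, width=0.3):
    """Fit an interpolating Gaussian RBF surrogate to samples (X: (n, k), y: (n,))."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    Phi = np.exp(-d2 / (2 * width**2)) + 1e-10 * np.eye(len(y))  # jitter for stability
    weights = np.linalg.solve(Phi, y)
    return lambda x: np.exp(-np.sum((X - x) ** 2, axis=-1) / (2 * width**2)) @ weights

# Toy black box f and a small random sampling plan X (both invented).
f = lambda x: np.sin(6 * x[0]) + x[1] ** 2
X = np.random.rand(20, 2)
y = np.array([f(x) for x in X])
surrogate = fit_rbf(X, y)
print(surrogate(np.array([0.5, 0.5])), f(np.array([0.5, 0.5])))
```

How well such an f̂ predicts unseen designs depends on where the samples in X were placed, which is exactly the sampling-plan issue the passage raises.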
{
"docid": "472e9807c2f4ed6d1e763dd304f22c64",
"text": "Commercial analytical database systems suffer from a high \"time-to-first-analysis\": before data can be processed, it must be modeled and schematized (a human effort), transferred into the database's storage layer, and optionally clustered and indexed (a computational effort). For many types of structured data, this upfront effort is unjustifiable, so the data are processed directly over the file system using the Hadoop framework, despite the cumulative performance benefits of processing this data in an analytical database system. In this paper we describe a system that achieves the immediate gratification of running MapReduce jobs directly over a file system, while still making progress towards the long-term performance benefits of database systems. The basic idea is to piggyback on MapReduce jobs, leverage their parsing and tuple extraction operations to incrementally load and organize tuples into a database system, while simultaneously processing the file system data. We call this scheme Invisible Loading, as we load fractions of data at a time at almost no marginal cost in query latency, but still allow future queries to run much faster.",
"title": ""
},
{
"docid": "2e6e46a1224041ed2080395f82b7c49c",
"text": "The image processing techniques are very useful for many applications such as biology, security, satellite imagery, personal photo, medicine, etc. The procedures of image processing such as image enhancement, image segmentation and feature extraction are used for fracture detection system.This paper uses Canny edge detection method for segmentation.Canny method produces perfect information from the bone image. The main aim of this research is to detect human lower leg bone fracture from X-Ray images. The proposed system has three steps, namely, preprocessing, segmentation, and fracture detection. In feature extraction step, this paper uses Hough transform technique for line detection in the image. Feature extraction is the main task of the system. The results from various experiments show that the proposed system is very accurate and efficient.",
"title": ""
},
{
"docid": "c61107e9c5213ddb8c5e3b1b14dca661",
"text": "In advanced driving assistance systems, it is important to be able to detect the region covered by the road in the images. This paper proposes a method for estimating the road region in images captured by a vehicle-mounted monocular camera. Our proposed method first estimates all of relevant parameters for the camera motion and the 3D road plane from correspondence points between successive images. By calculating a homography matrix from the estimated camera motion and the estimated road plane parameters, and then warping the image at the previous frame, the road region can be determined. To achieve robustness in various road scenes, our method selects the threshold for determining the road region adaptively and incorporates the assumption of a simple road region boundary. In our experiments, it has been shown that the proposed method is able to estimate the road region in real road environments.",
"title": ""
},
{
"docid": "8a9076c9212442e3f52b828ad96f7fe7",
"text": "The building industry uses great quantities of raw materials that also involve high energy consumption. Choosing materials with high content in embodied energy entails an initial high level of energy consumption in the building production stage but also determines future energy consumption in order to fulfil heating, ventilation and air conditioning demands. This paper presents the results of an LCA study comparing the most commonly used building materials with some eco-materials using three different impact categories. The aim is to deepen the knowledge of energy and environmental specifications of building materials, analysing their possibilities for improvement and providing guidelines for materials selection in the eco-design of new buildings and rehabilitation of existing buildings. The study proves that the impact of construction products can be significantly reduced by promoting the use of the best techniques available and eco-innovation in production plants, substituting the use of finite natural resources for waste generated in other production processes, preferably available locally. This would stimulate competition between manufacturers to launch more eco-efficient products and encourage the use of the Environmental Product Declarations. This paper has been developed within the framework of the “LoRe-LCA Project” co-financed by the European Commission’s Intelligent Energy for Europe Program and the “PSE CICLOPE Project” co-financed by the Spanish Ministry of Science and Technology and the European Regional Development Fund. 2010 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "3d5bbe4dcdc3ad787e57583f7b621e36",
"text": "A miniaturized antenna employing a negative index metamaterial with modified split-ring resonator (SRR) and capacitance-loaded strip (CLS) unit cells is presented for Ultra wideband (UWB) microwave imaging applications. Four left-handed (LH) metamaterial (MTM) unit cells are located along one axis of the antenna as the radiating element. Each left-handed metamaterial unit cell combines a modified split-ring resonator (SRR) with a capacitance-loaded strip (CLS) to obtain a design architecture that simultaneously exhibits both negative permittivity and negative permeability, which ensures a stable negative refractive index to improve the antenna performance for microwave imaging. The antenna structure, with dimension of 16 × 21 × 1.6 mm³, is printed on a low dielectric FR4 material with a slotted ground plane and a microstrip feed. The measured reflection coefficient demonstrates that this antenna attains 114.5% bandwidth covering the frequency band of 3.4-12.5 GHz for a voltage standing wave ratio of less than 2 with a maximum gain of 5.16 dBi at 10.15 GHz. There is a stable harmony between the simulated and measured results that indicate improved nearly omni-directional radiation characteristics within the operational frequency band. The stable surface current distribution, negative refractive index characteristic, considerable gain and radiation properties make this proposed negative index metamaterial antenna optimal for UWB microwave imaging applications.",
"title": ""
},
{
"docid": "baf9e931df45d010c44083973d1281fd",
"text": "Error vector magnitude (EVM) is one of the widely accepted figure of merits used to evaluate the quality of communication systems. In the literature, EVM has been related to signal-to-noise ratio (SNR) for data-aided receivers, where preamble sequences or pilots are used to measure the EVM, or under the assumption of high SNR values. In this paper, this relation is examined for nondata-aided receivers and is shown to perform poorly, especially for low SNR values or high modulation orders. The EVM for nondata-aided receivers is then evaluated and its value is related to the SNR for quadrature amplitude modulation (QAM) and pulse amplitude modulation (PAM) signals over additive white Gaussian noise (AWGN) channels and Rayleigh fading channels, and for systems with IQ imbalances. The results show that derived equations can be used to reliably estimate SNR values using EVM measurements that are made based on detected data symbols. Thus, presented work can be quite useful for measurement devices such as vector signal analyzers (VSA), where EVM measurements are readily available.",
"title": ""
},
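The EVM passage above relates error vector magnitude to SNR; below is a small numpy sketch of the simplest data-aided AWGN case, where EVM_rms ≈ 1/√SNR (the QPSK constellation, symbol count and 20 dB SNR value are invented for illustration, and the paper's contribution concerns the harder nondata-aided case):

```python
import numpy as np

rng = np.random.default_rng(0)
snr_db = 20.0
snr = 10 ** (snr_db / 10)

# Unit-power QPSK reference symbols (toy data) plus complex AWGN with power 1/SNR.
symbols = (rng.choice([-1, 1], 10_000) + 1j * rng.choice([-1, 1], 10_000)) / np.sqrt(2)
noise = (rng.standard_normal(10_000) + 1j * rng.standard_normal(10_000)) * np.sqrt(1 / (2 * snr))
received = symbols + noise

# Data-aided EVM: error measured against the known transmitted symbols.
evm_rms = np.sqrt(np.mean(np.abs(received - symbols) ** 2) / np.mean(np.abs(symbols) ** 2))
print(evm_rms, 1 / np.sqrt(snr))   # the two values should roughly agree
```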
{
"docid": "082e747ab9f93771a71e2b6147d253b2",
"text": "Social networks are often grounded in spatial locality where individuals form relationships with those they meet nearby. However, the location of individuals in online social networking platforms is often unknown. Prior approaches have tried to infer individuals’ locations from the content they produce online or their online relations, but often are limited by the available location-related data. We propose a new method for social networks that accurately infers locations for nearly all of individuals by spatially propagating location assignments through the social network, using only a small number of initial locations. In five experiments, we demonstrate the effectiveness in multiple social networking platforms, using both precise and noisy data to start the inference, and present heuristics for improving performance. In one experiment, we demonstrate the ability to infer the locations of a group of users who generate over 74% of the daily Twitter message volume with an estimated median location error of 10km. Our results open the possibility of gathering large quantities of location-annotated data from social media platforms.",
"title": ""
}
] |
scidocsrr
|
901fbc60be4ba1bc1ae7b59755786123
|
CIPA: A collaborative intrusion prevention architecture for programmable network and SDN
|
[
{
"docid": "a9b20ad74b3a448fbc1555b27c4dcac9",
"text": "A new learning algorithm for multilayer feedforward networks, RPROP, is proposed. To overcome the inherent disadvantages of pure gradient-descent, RPROP performs a local adaptation of the weight-updates according to the behaviour of the errorfunction. In substantial difference to other adaptive techniques, the effect of the RPROP adaptation process is not blurred by the unforseeable influence of the size of the derivative but only dependent on the temporal behaviour of its sign. This leads to an efficient and transparent adaptation process. The promising capabilities of RPROP are shown in comparison to other wellknown adaptive techniques.",
"title": ""
}
] |
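The positive passage for this query is the RPROP abstract; below is an illustrative numpy sketch of the sign-based step-size adaptation it describes (this follows one common variant without weight-backtracking, with the usual default hyperparameters rather than values taken from the passage):

```python
import numpy as np

def rprop_step(w, grad, prev_grad, step, eta_plus=1.2, eta_minus=0.5,
               step_min=1e-6, step_max=50.0):
    """One RPROP update: adapt per-weight step sizes from the sign of the gradient only."""
    same_sign = grad * prev_grad > 0
    opposite  = grad * prev_grad < 0
    step = np.where(same_sign, np.minimum(step * eta_plus, step_max), step)
    step = np.where(opposite,  np.maximum(step * eta_minus, step_min), step)
    # Where the sign flipped, skip the update this iteration (iRprop- style).
    effective_grad = np.where(opposite, 0.0, grad)
    w = w - np.sign(effective_grad) * step
    return w, step, effective_grad

# Toy usage: minimise f(w) = ||w||^2, whose gradient is 2w.
w = np.array([3.0, -2.0]); step = np.full_like(w, 0.1); prev = np.zeros_like(w)
for _ in range(50):
    g = 2 * w
    w, step, prev = rprop_step(w, g, prev, step)
print(w)
```

On this toy quadratic the per-weight steps grow while the gradient sign stays stable and shrink when it flips, which is the behaviour the abstract describes as depending only on the temporal behaviour of the sign, not the size, of the derivative.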
[
{
"docid": "e19445c2ea8e19002a85ec9ace463990",
"text": "In this paper we propose a system that takes attendance of student and maintaining its records in an academic institute automatically. Manually taking the attendance and maintaining it for a long time makes it difficult task as well as wastes a lot of time. For this reason an efficient system is designed. This system takes attendance with the help of a fingerprint sensor module and all the records are saved on a computer. Fingerprint sensor module and LCD screen are dynamic which can move in the room. In order to mark the attendance, student has to place his/her finger on the fingerprint sensor module. On identification of particular student, his attendance record is updated in the database and he/she is notified through LCD screen. In this system we are going to generate Microsoft excel attendance report on computer. This report will generate automatically after 15 days (depends upon user). This report will be sent to the respected HOD, teacher and student’s parents email Id.",
"title": ""
},
{
"docid": "cecc2950741d12045d9ba3ebad1fc69f",
"text": "Learning to read is extremely difficult for about 10% of children; they are affected by a neurodevelopmental disorder called dyslexia [1, 2]. The neurocognitive causes of dyslexia are still hotly debated [3-12]. Dyslexia remediation is far from being fully achieved [13], and the current treatments demand high levels of resources [1]. Here, we demonstrate that only 12 hr of playing action video games-not involving any direct phonological or orthographic training-drastically improve the reading abilities of children with dyslexia. We tested reading, phonological, and attentional skills in two matched groups of children with dyslexia before and after they played action or nonaction video games for nine sessions of 80 min per day. We found that only playing action video games improved children's reading speed, without any cost in accuracy, more so than 1 year of spontaneous reading development and more than or equal to highly demanding traditional reading treatments. Attentional skills also improved during action video game training. It has been demonstrated that action video games efficiently improve attention abilities [14, 15]; our results showed that this attention improvement can directly translate into better reading abilities, providing a new, fast, fun remediation of dyslexia that has theoretical relevance in unveiling the causal role of attention in reading acquisition.",
"title": ""
},
{
"docid": "5c4c265df2d24350340eb956191417ae",
"text": "When a remotely sited wind farm is connected to the utility power system through a distribution line, the overcurrent relay at the common coupling point needs a directional feature. This paper presents a method for estimating the direction of fault in such radial distribution systems using phase change in current. The difference in phase angle between the positive-sequence component of the current during fault and prefault conditions has been found to be a good indicator of the fault direction in a three-phase system. A rule base formed for the purpose decides the location of fault with respect to the relay in a distribution system. Such a strategy reduces the cost of the voltage sensor and/or connection for a protection scheme which is of relevance in emerging distributed-generation systems. The algorithm has been tested through simulation for different radial distribution systems.",
"title": ""
},
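The fault-direction passage above compares the phase of the positive-sequence current before and during a fault; below is a small numpy sketch of that core computation (the standard symmetrical-components formula; the phasor values are invented for illustration):

```python
import numpy as np

a = np.exp(2j * np.pi / 3)  # 120-degree rotation operator

def positive_sequence(ia, ib, ic):
    """Positive-sequence component of three phase-current phasors."""
    return (ia + a * ib + a**2 * ic) / 3

# Invented prefault and fault phasors (amperes, complex rectangular/polar form).
pre = positive_sequence(10 + 0j, 10 * a**2, 10 * a)
flt = positive_sequence(40 * np.exp(-1j * np.pi / 3),
                        40 * np.exp(-1j * np.pi / 3) * a**2,
                        40 * np.exp(-1j * np.pi / 3) * a)

phase_change = np.angle(flt / pre, deg=True)
print(phase_change)  # the sign/magnitude of this angle feeds the direction rule base
```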
{
"docid": "b1d2ff76f8b4437a731ef5ccdb46429f",
"text": "Form, function and the relationship between the two are notions that have served a crucial role in design science. Within architectural design, key aspects of the anticipated function of buildings, or of spatial environments in general, are supposed to be determined by their structural form, i.e., their shape, layout, or connectivity. Whereas the philosophy of form and function is a well-researched topic, the practical relations and dependencies between form and function are only known implicitly by designers and architects. Specifically, the formal modelling of structural form and resulting artefactual function within design and design assistance systems remains elusive. In our work, we aim at making these definitions explicit by the ontological modelling of domain entities, their properties and related constraints. We thus have to particularly focus on formal interpretation of the terms “(structural) form” and “(artefactual) function”. We put these notions into practice by formalising ontological specifications accordingly by using modularly constructed ontologies for the architectural design domain. A key aspect of our modelling approach is the use of formal qualitative spatial calculi and conceptual requirements as a link between the structural form of a design and the differing functional capabilities that it affords or leads to. We demonstrate the manner in which our ontological modelling reflects notions of architectural form and function, and how it facilitates the conceptual modelling of requirement constraints for architectural design.",
"title": ""
},
{
"docid": "15b8b0f3682e2eb7c1b1a62be65d6327",
"text": "Data augmentation is widely used to train deep neural networks for image classification tasks. Simply flipping images can help learning by increasing the number of training images by a factor of two. However, data augmentation in natural language processing is much less studied. Here, we describe two methods for data augmentation for Visual Question Answering (VQA). The first uses existing semantic annotations to generate new questions. The second method is a generative approach using recurrent neural networks. Experiments show the proposed schemes improve performance of baseline and state-of-the-art VQA algorithms.",
"title": ""
},
{
"docid": "aeb578582a6c612e0640449e12000a21",
"text": "Intelligent Tutoring Systems (ITS) generate a wealth of finegrained student interaction data. Although it seems likely that teachers could benefit from access to advanced analytics generated from these data, ITSs do not typically come with dashboards designed for teachers’ needs. In this project, we follow a user-centered design approach to create a dashboard for teachers using ITSs.",
"title": ""
},
{
"docid": "e1d0c07f9886d3258f0c5de9dd372e17",
"text": "strategies and tools must be based on some theory of learning and cognition. Of course, crafting well-articulated views that clearly answer the major epistemological questions of human learning has exercised psychologists and educators for centuries. What is a mind? What does it mean to know something? How is our knowledge represented and manifested? Many educators prefer an eclectic approach, selecting “principles and techniques from the many theoretical perspectives in much the same way we might select international dishes from a smorgasbord, choosing those we like best and ending up with a meal which represents no nationality exclusively and a design technology based on no single theoretical base” (Bednar et al., 1995, p. 100). It is certainly the case that research within collaborative educational learning tools has drawn upon behavioral, cognitive information processing, humanistic, and sociocultural theory, among others, for inspiration and justification. Problems arise, however, when tools developed in the service of one epistemology, say cognitive information processing, are integrated within instructional systems designed to promote learning goals inconsistent with it. When concepts, strategies, and tools are abstracted from the theoretical viewpoint that spawned them, they are too often stripped of meaning and utility. In this chapter, we embed our discussion in learner-centered, constructivist, and sociocultural perspectives on collaborative technology, with a bias toward the third. The principles of these perspectives, in fact, provide the theoretical rationale for much of the research and ideas presented in this book. 2",
"title": ""
},
{
"docid": "42d755dbb843d9e5ba4bae4b492c2b8e",
"text": "Context: The management of software development productivity is a key issue in software organizations, where the major drivers are lower cost and shorter time-to-market. Agile methods, including Extreme Programming and Scrum, have evolved as “light” approaches that simplify the software development process, potentially leading to increased team productivity. However, little empirical research has examined which factors do have an impact on productivity and in what way, when using agile methods. Objective: Our objective is to provide a better understanding of the factors and mediators that impact agile team productivity. Method: We have conducted a multiple-case study for six months in three large Brazilian companies that have been using agile methods for over two years. We have focused on the main productivity factors perceived by team members through interviews, documentation from retrospectives, and non-participant observation. Results: We developed a novel conceptual framework, using thematic analysis to understand the possible mechanisms behind such productivity factors. Agile team management was found to be the most influential factor in achieving agile team productivity. At the intra-team level, the main productivity factors were team design (structure and work allocation) and member turnover. At the inter-team level, the main productivity factors were how well teams could be effectively coordinated by proper interfaces and other dependencies and avoiding delays in providing promised software to dependent teams. Conclusion: Teams should be aware of the influence and magnitude of turnover, which has been shown negative for agile team productivity. Team design choices remain an important factor impacting team productivity, even more pronounced on agile teams that rely on teamwork and people factors. The intra-team coordination processes must be adjusted to enable productive work by considering priorities and pace between teams. Finally, the revised conceptual framework for agile team productivity supports further tests through confirmatory studies.",
"title": ""
},
{
"docid": "e70425a0b9d14ff4223f3553de52c046",
"text": "CUDA is a new general-purpose C language interface to GPU developed by NVIDIA. It makes full use of parallel of GPU and has been widely used now. 3D model reconstruction is a traditional and common technique which has been widely used in engineering experiments, CAD and computer graphics. In this paper, we present an algorithm of CUDA-based Poisson surface reconstruction. Our algorithm makes full use of parallel of GPU and runs entirely on GPU and is ten times faster than previous CPU algorithm.",
"title": ""
},
{
"docid": "599d814fd3b3a758f3b2459b74aeb92c",
"text": "Relation classification is a crucial ingredient in numerous information extraction systems seeking to mine structured facts from text. We propose a novel convolutional neural network architecture for this task, relying on two levels of attention in order to better discern patterns in heterogeneous contexts. This architecture enables endto-end learning from task-specific labeled data, forgoing the need for external knowledge such as explicit dependency structures. Experiments show that our model outperforms previous state-of-the-art methods, including those relying on much richer forms of prior knowledge.",
"title": ""
},
{
"docid": "f9c8209fcecbbed99aa29761dffc8e25",
"text": "ImageNet is a large-scale database of object classes with millions of images. Unfortunately only a small fraction of them is manually annotated with bounding-boxes. This prevents useful developments, such as learning reliable object detectors for thousands of classes. In this paper we propose to automatically populate ImageNet with many more bounding-boxes, by leveraging existing manual annotations. The key idea is to localize objects of a target class for which annotations are not available, by transferring knowledge from related source classes with available annotations. We distinguish two kinds of source classes: ancestors and siblings. Each source provides knowledge about the plausible location, appearance and context of the target objects, which induces a probability distribution over windows in images of the target class. We learn to combine these distributions so as to maximize the location accuracy of the most probable window. Finally, we employ the combined distribution in a procedure to jointly localize objects in all images of the target class. Through experiments on 0.5 million images from 219 classes we show that our technique (i) annotates a wide range of classes with bounding-boxes; (ii) effectively exploits the hierarchical structure of ImageNet, since all sources and types of knowledge we propose contribute to the results; (iii) scales efficiently.",
"title": ""
},
{
"docid": "987024b9cca47797813f27da08d9a7c6",
"text": "Image segmentation plays a crucial role in many medical imaging applications by automating or facilitating the delineation of anatomical structures and other regions of interest. We present herein a critical appraisal of the current status of semi-automated and automated methods for the segmentation of anatomical medical images. Current segmentation approaches are reviewed with an emphasis placed on revealing the advantages and disadvantages of these methods for medical imaging applications. The use of image segmentation in different imaging modalities is also described along with the difficulties encountered in each modality. We conclude with a discussion on the future of image segmentation methods in biomedical research.",
"title": ""
},
{
"docid": "79f691668b5e1d13cd1bfa70dfa33384",
"text": "Reported speech in the form of direct and indirect reported speech is an important indicator of evidentiality in traditional newspaper texts, but also increasingly in the new media that rely heavily on citation and quotation of previous postings, as for instance in blogs or newsgroups. This paper details the basic processing steps for reported speech analysis and reports on performance of an implementation in form of a GATE resource.",
"title": ""
},
{
"docid": "3f11c629670d986b8a266bae08e8a8d0",
"text": "SURVIVAL ANALYSIS APPROACH FOR EARLY PREDICTION OF STUDENT DROPOUT by SATTAR AMERI December 2015 Advisor: Dr. Chandan Reddy Major: Computer Science Degree: Master of Science Retention of students at colleges and universities has long been a concern for educators for many decades. The consequences of student attrition are significant for both students, academic staffs and the overall institution. Thus, increasing student retention is a long term goal of any academic institution. The most vulnerable students at all institutions of higher education are the freshman students, who are at the highest risk of dropping out at the beginning of their study. Consequently, the early identification of “at-risk” students is a crucial task that needs to be addressed precisely. In this thesis, we develop a framework for early prediction of student success using survival analysis approach. We propose time-dependent Cox (TD-Cox), which is based on the Cox proportional hazard regression model and also captures time-varying factors to address the challenge of predicting dropout students as well as the semester that the dropout will occur, to enable proactive interventions. This is critical in student retention problem because not only correctly classifying whether student is going to dropout is important but also when this is going to happen is crucial to investigate. We evaluate our method on real student data collected at Wayne State University. The results show that the proposed Cox-based framework can predict the student dropout and the semester of dropout with high accuracy and precision compared to the other alternative state-of-the-art methods.",
"title": ""
},
{
"docid": "4de2c6422d8357e6cb00cce21e703370",
"text": "OBJECTIVE\nFalls and fall-related injuries are leading problems in residential aged care facilities. The objective of this study was to provide descriptive data about falls in nursing homes.\n\n\nDESIGN/SETTING/PARTICIPANTS\nProspective recording of all falls over 1 year covering all residents from 528 nursing homes in Bavaria, Germany.\n\n\nMEASUREMENTS\nFalls were reported on a standardized form that included a facility identification code, date, time of the day, sex, age, degree of care need, location of the fall, and activity leading to the fall. Data detailing homes' bed capacities and occupancy levels were used to estimate total person-years under exposure and to calculate fall rates. All analyses were stratified by residents' degree of care need.\n\n\nRESULTS\nMore than 70,000 falls were recorded during 42,843 person-years. The fall rate was higher in men than in women (2.18 and 1.49 falls per person-year, respectively). Fall risk differed by degree of care need with lower fall risks both in the least and highest care categories. About 75% of all falls occurred in the residents' rooms or in the bathrooms and only 22% were reported within the common areas. Transfers and walking were responsible for 41% and 36% of all falls respectively. Fall risk varied during the day. Most falls were observed between 10 am and midday and between 2 pm and 8 pm.\n\n\nCONCLUSION\nThe differing fall risk patterns in specific subgroups may help to target preventive measures.",
"title": ""
},
{
"docid": "f85fb22836663d713074efcc9b1d3991",
"text": "Drawing annotations with 3D hand gestures in augmented reality are useful for creating visual and spatial references in the real world, especially when these gestures can be issued from a distance. Different techniques exist for highlighting physical objects with hand-drawn circle and arrow annotations from a distance, assuming an approximate 3D scene model (e.g., as provided by the Microsoft HoloLens). However, little is known about user preference and performance of such methods for annotating real-world 3D environments. In this paper, we compare different annotation methods using the HoloLens augmented reality development platform: Surface-Drawing and Air-Drawing, with either raw but smoothed or interpreted and beautified gesture input. For the Surface-Drawing method, users control a cursor that is projected onto the world model, allowing gesture input to occur directly on the surfaces of real-world objects. For the Air-Drawing method, gesture drawing occurs at the user's fingertip and is projected onto the world model on release. The methods have different characteristics regarding necessitated vergence switches and afforded cursor control. We performed an experiment in which users draw on two different real-world objects at different distances using the different methods. Results indicate that Surface-Drawing is more accurate than Air-Drawing and Beautified annotations are drawn faster than Non-Beautified; participants also preferred Surface-Drawing and Beautified.",
"title": ""
},
{
"docid": "9600f488c41b5574766067d32004400e",
"text": "A conversational agent, capable to have a ldquosense of humourrdquo is presented. The agent can both generate humorous sentences and recognize humoristic expressions introduced by the user during the dialogue. Humorist Bot makes use of well founded techniques of computational humor and it has been implemented using the ALICE framework embedded into an Yahoo! Messenger client. It includes also an avatar that changes the face expression according to humoristic content of the dialogue.",
"title": ""
},
{
"docid": "ef55470ed9fa3a1b792e347f5bddedbe",
"text": "Rechargeable lithium-ion batteries are promising candidates for building grid-level storage systems because of their high energy and power density, low discharge rate, and decreasing cost. A vital aspect in energy storage planning and operations is to accurately model the aging cost of battery cells, especially in irregular cycling operations. This paper proposes a semi-empirical lithium-ion battery degradation model that assesses battery cell life loss from operating profiles. We formulate the model by combining fundamental theories of battery degradation and our observations in battery aging test results. The model is adaptable to different types of lithium-ion batteries, and methods for tuning the model coefficients based on manufacturer's data are presented. A cycle-counting method is incorporated to identify stress cycles from irregular operations, allowing the degradation model to be applied to any battery energy storage (BES) applications. The usefulness of this model is demonstrated through an assessment of the degradation that a BES would incur by providing frequency control in the PJM regulation market.",
"title": ""
},
{
"docid": "48b88774957a6d30ae9d0a97b9643647",
"text": "The defect detection on manufactures is extremely important in the optimization of industrial processes; particularly, the visual inspection plays a fundamental role. The visual inspection is often carried out by a human expert. However, new technology features have made this inspection unreliable. For this reason, many researchers have been engaged to develop automatic analysis processes of manufactures and automatic optical inspections in the industrial production of printed circuit boards. Among the defects that could arise in this industrial process, those of the solder joints are very important, because they can lead to an incorrect functioning of the board; moreover, the amount of the solder paste can give some information on the quality of the industrial process. In this paper, a neural network-based automatic optical inspection system for the diagnosis of solder joint defects on printed circuit boards assembled in surface mounting technology is presented. The diagnosis is handled as a pattern recognition problem with a neural network approach. Five types of solder joints have been classified in respect to the amount of solder paste in order to perform the diagnosis with a high recognition rate and a detailed classification able to give information on the quality of the manufacturing process. The images of the boards under test are acquired and then preprocessed to extract the region of interest for the diagnosis. Three types of feature vectors are evaluated from each region of interest, which are the images of the solder joints under test, by exploiting the properties of the wavelet transform and the geometrical characteristics of the preprocessed images. The performances of three different classifiers which are a multilayer perceptron, a linear vector quantization, and a K-nearest neighbor classifier are compared. The n-fold cross-validation has been exploited to select the best architecture for the neural classifiers, while a number of experiments have been devoted to estimating the best value of K in the K-NN. The results have proved that the MLP network fed with the GW-features has the best recognition rate. This approach allows to carry out the diagnosis burden on image processing, feature extraction, and classification algorithms, reducing the cost and the complexity of the acquisition system. In fact, the experimental results suggest that the reason for the high recognition rate in the solder joint classification is due to the proper preprocessing steps followed as well as to the information contents of the features",
"title": ""
},
{
"docid": "e31ea6b8c4a5df049782b463abc602ea",
"text": "Nature plays a very important role to solve problems in a very effective and well-organized way. Few researchers are trying to create computational methods that can assist human to solve difficult problems. Nature inspired techniques like swarm intelligence, bio-inspired, physics/chemistry and many more have helped in solving difficult problems and also provide most favourable solution. Nature inspired techniques are wellmatched for soft computing application because parallel, dynamic and self organising behaviour. These algorithms motivated from the working group of social agents like ants, bees and insect. This paper is a complete survey of nature inspired techniques.",
"title": ""
}
] |
scidocsrr
|
1f2250209a2472bb1d660be549649ffe
|
Adversarial Learning of Structure-Aware Fully Convolutional Networks for Landmark Localization
|
[
{
"docid": "2670c9d261edfb771d7e9673a282ea0b",
"text": "In this paper a method is proposed to recover and interpret the 3D body structures of a person from a single view, provided that (1) at least six feature points on the head and a set of body joints are available on the image plane, and (2) the geometry of head and lengths of body segments formed by joints are known. First of all, the feature points on the head in the head-centered coordinate system and their image projections are used to determine a transformation matrix. Then, the camera position and orientations are extracted from the matrix. Finally, the 3D coordinates of the head points expressed in the camera-centered coordinate system are obtained. Starting from the coordinates of the neck, which is a head feature point, the 3D coordinates of other joints one-by-one are determined under the assumption of the fixed lengths of the body segments. A binary interpretation tree is used to represent the 2”-’ possible body structures, if a human body has n joints. To determine the final feasible body structures, physical and motion constraints are used to prune the interpretation tree. Formulas and rules required for the tree pruning are formulated. Experiments are used to illustrate the pruning powers of these constraints. In the two cases of input data chosen, a unique or nearly unique solution of the body structure is obtained. e 1985 Academic PI~SS, IIIC.",
"title": ""
},
{
"docid": "8a1ba356c34935a2f3a14656138f0414",
"text": "We propose an efficient approach to exploiting motion information from consecutive frames of a video sequence to recover the 3D pose of people. Previous approaches typically compute candidate poses in individual frames and then link them in a post-processing step to resolve ambiguities. By contrast, we directly regress from a spatio-temporal volume of bounding boxes to a 3D pose in the central frame. We further show that, for this approach to achieve its full potential, it is essential to compensate for the motion in consecutive frames so that the subject remains centered. This then allows us to effectively overcome ambiguities and improve upon the state-of-the-art by a large margin on the Human3.6m, HumanEva, and KTH Multiview Football 3D human pose estimation benchmarks.",
"title": ""
}
] |
[
{
"docid": "f4e6c5c5c7fccbf0f72ff681cd3a8762",
"text": "Program specifications are important for many tasks during software design, development, and maintenance. Among these, temporal specifications are particularly useful. They express formal correctness requirements of an application's ordering of specific actions and events during execution, such as the strict alternation of acquisition and release of locks. Despite their importance, temporal specifications are often missing, incomplete, or described only informally. Many techniques have been proposed that mine such specifications from execution traces or program source code. However, existing techniques mine only simple patterns, or they mine a single complex pattern that is restricted to a particular set of manually selected events. There is no practical, automatic technique that can mine general temporal properties from execution traces.\n In this paper, we present Javert, the first general specification mining framework that can learn, fully automatically, complex temporal properties from execution traces. The key insight behind Javert is that real, complex specifications can be formed by composing instances of small generic patterns, such as the alternating pattern ((ab)) and the resource usage pattern ((ab c)). In particular, Javert learns simple generic patterns and composes them using sound rules to construct large, complex specifications. We have implemented the algorithm in a practical tool and conducted an extensive empirical evaluation on several open source software projects. Our results are promising; they show that Javert is scalable, general, and precise. It discovered many interesting, nontrivial specifications in real-world code that are beyond the reach of existing automatic techniques.",
"title": ""
},
{
"docid": "6b5455a7e5b93cd754c0ad90a7181a4d",
"text": "This paper reports an exploration of the concept of social intelligence in the context of designing home dialogue systems for an Ambient Intelligence home. It describes a Wizard of Oz experiment involving a robotic interface capable of simulating several human social behaviours. Our results show that endowing a home dialogue system with some social intelligence will: (a) create a positive bias in the user’s perception of technology in the home environment, (b) enhance user acceptance for the home dialogue system, and (c) trigger social behaviours by the user in relation to the home dialogue system. q 2005 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "61662cfd286c06970243bc13d5eff566",
"text": "This paper develops a theoretical learning model of text classification for Support Vector Machines (SVMs). It connects the statistical properties of text-classification tasks with the generalization performance of a SVM in a quantitative way. Unlike conventional approaches to learning text classifiers, which rely primarily on empirical evidence, this model explains why and when SVMs perform well for text classification. In particular, it addresses the following questions: Why can support vector machines handle the large feature spaces in text classification effectively? How is this related to the statistical properties of text? What are sufficient conditions for applying SVMs to text-classification problems successfully?",
"title": ""
},
{
"docid": "f3dc6ab7d2d66604353f60fe1d7bd45a",
"text": "Establishing end-to-end authentication between devices and applications in Internet of Things (IoT) is a challenging task. Due to heterogeneity in terms of devices, topology, communication and different security protocols used in IoT, existing authentication mechanisms are vulnerable to security threats and can disrupt the progress of IoT in realizing Smart City, Smart Home and Smart Infrastructure, etc. To achieve end-to-end authentication between IoT devices/applications, the existing authentication schemes and security protocols require a two-factor authentication mechanism. Therefore, as part of this paper we review the suitability of an authentication scheme based on One Time Password (OTP) for IoT and proposed a scalable, efficient and robust OTP scheme. Our proposed scheme uses the principles of lightweight Identity Based Elliptic Curve Cryptography scheme and Lamport's OTP algorithm. We evaluate analytically and experimentally the performance of our scheme and observe that our scheme with a smaller key size and lesser infrastructure performs on par with the existing OTP schemes without compromising the security level. Our proposed scheme can be implemented in real-time IoT networks and is the right candidate for two-factor authentication among devices, applications and their communications in IoT.",
"title": ""
},
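Since the scheme above builds on Lamport's one-time-password idea, a minimal hash-chain sketch may help make that building block concrete. This shows only the classic Lamport mechanism, not the paper's full ECC-based protocol; the chain length and variable names below are illustrative assumptions.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def make_chain(seed: bytes, n: int) -> list:
    """Build seed, h(seed), ..., h^n(seed). The verifier stores only h^n(seed);
    the client later reveals h^(n-1)(seed), h^(n-2)(seed), ... as one-time passwords."""
    chain = [seed]
    for _ in range(n):
        chain.append(h(chain[-1]))
    return chain

def verify(otp: bytes, stored: bytes) -> bool:
    """Accept the OTP if hashing it reproduces the currently stored value."""
    return h(otp) == stored

chain = make_chain(b"shared-secret", 1000)
stored = chain[-1]   # verifier state: h^1000(seed)
otp = chain[-2]      # first OTP released by the client: h^999(seed)
assert verify(otp, stored)
stored = otp         # after acceptance, the verifier stores the OTP just used
```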
{
"docid": "f1d1a73f21dcd1d27da4e9d4a93c5581",
"text": "Movements of interfaces can be analysed in terms of whether they are sensible, sensable and desirable. Sensible movements are those that users naturally perform; sensable are those that can be measured by a computer; and desirable movements are those that are required by a given application. We show how a systematic comparison of sensible, sensable and desirable movements, especially with regard to how they do not precisely overlap, can reveal potential problems with an interface and also inspire new features. We describe how this approach has been applied to the design of three interfaces: the Augurscope II, a mobile augmented reality interface for outdoors; the Drift Table, an item of furniture that uses load sensing to control the display of aerial photographs; and pointing flashlights at walls and posters in order to play sounds.",
"title": ""
},
{
"docid": "78d7c61f7ca169a05e9ae1393712cd69",
"text": "Designing an automatic solver for math word problems has been considered as a crucial step towards general AI, with the ability of natural language understanding and logical inference. The state-of-the-art performance was achieved by enumerating all the possible expressions from the quantities in the text and customizing a scoring function to identify the one with the maximum probability. However, it incurs exponential search space with the number of quantities and beam search has to be applied to trade accuracy for efficiency. In this paper, we make the first attempt of applying deep reinforcement learning to solve arithmetic word problems. The motivation is that deep Q-network has witnessed success in solving various problems with big search space and achieves promising performance in terms of both accuracy and running time. To fit the math problem scenario, we propose our MathDQN that is customized from the general deep reinforcement learning framework. Technically, we design the states, actions, reward function, together with a feed-forward neural network as the deep Q-network. Extensive experimental results validate our superiority over state-ofthe-art methods. Our MathDQN yields remarkable improvement on most of datasets and boosts the average precision among all the benchmark datasets by 15%.",
"title": ""
},
{
"docid": "aaba5dc8efc9b6a62255139965b6f98d",
"text": "The interaction of an autonomous mobile robot with the real world critically depends on the robots morphology and on its environment. Building a model of these aspects is extremely complex, making simulation insu cient for accurate validation of control algorithms. If simulation environments are often very e cient, the tools for experimenting with real robots are often inadequate. The traditional programming languages and tools seldom provide enought support for realtime experiments, thus hindering the understanding of the control algorithms and making the experimentation complex and time-consuming. A miniature robot is presented: it has a cylindrical shape measuring 55 mm in diameter and 30 mm in height. Due to its small size, experiments can be performed quickly and cost-e ectively in a small working area. Small peripherals can be designed and connected to the basic module and can take advantage of a versatile communication scheme. A serial-link is provided to run control algorithms on a workstation during debugging, thereby giving the user the opportunity of employing all available graphical tools. Once debugged, the algorithm can be downloaded to the robot and run on its own processor. Experimentation with groups of robots is hardly possible with commercially available hardware. The size and the price of the described robot open the way to cost-e ective investigations into collective behaviour. This aspect of research drives the design of the robot described in this paper. Experiments with some twenty units are planned for the near future.",
"title": ""
},
{
"docid": "ffdddba343bb0aa47fc101696ab3696d",
"text": "The meaning of a sentence in a document is more easily determined if its constituent words exhibit cohesion with respect to their individual semantics. This paper explores the degree of cohesion among a document's words using lexical chains as a semantic representation of its meaning. Using a combination of diverse types of lexical chains, we develop a text document representation that can be used for semantic document retrieval. For our approach, we develop two kinds of lexical chains: (i) a multilevel flexible chain representation of the extracted semantic values, which is used to construct a fixed segmentation of these chains and constituent words in the text; and (ii) a fixed lexical chain obtained directly from the initial semantic representation from a document. The extraction and processing of concepts is performed using WordNet as a lexical database. The segmentation then uses these lexical chains to model the dispersion of concepts in the document. Representing each document as a high-dimensional vector, we use spherical k-means clustering to demonstrate that our approach performs better than previ-",
"title": ""
},
{
"docid": "69275ddc999036a415b339a0a0219978",
"text": "BACKGROUND\nDeveloping countries, including Ethiopia are experiencing a double burden of malnutrition. There is limited information about prevalence of overweight/obesity among school aged children in Ethiopia particularly in Bahir Dar city. Hence this study aimed to assess the prevalence of overweight/obesity and associated factors among school children aged 6-12 years at Bahir Dar City, Northwest Ethiopia.\n\n\nMETHODS\nA school based cross-sectional study was carried out. A total of 634 children were included in the study. Multi stage systematic random sampling technique was used. A multivariable logistic regression analysis was used to identify factors associated with overweight/obesity. The association between dependent and independent variables were assessed using odds ratio with 95% confidence interval and p-value ≤0.05 was considered statistically significant.\n\n\nRESULTS\nThe overall prevalence of overweight and/or obesity was 11.9% (95% CI, 9.3, 14.4) (out of which 8.8% were overweight and 3.1% were obese). Higher wealth status[adjusted OR = 3.14, 95% CI:1.17, 8.46], being a private school student [AOR = 2.21, 95% CI:1.09, 4.49], use of transportation to and from school [AOR = 2.53, 95% CI: 1.26,5.06], fast food intake [AOR = 3.88, 95% CI: 1.42,10.55], lack of moderate physical activity [AOR = 2.87, 95% CI: 1.21,6.82], low intake of fruit and vegetable [AOR = 6.45, 95% CI:3.19,13.06] were significant factors associated with overweight and obesity.\n\n\nCONCLUSION\nThis study revealed that prevalence of overweight/obesity among school aged children in Bahir Dar city is high. Thus, promoting healthy dietary habit, particularly improving fruit and vegetable intake is essential to reduce the burden of overweight and obesity. Furthermore, it is important to strengthen nutrition education about avoiding junk food consumption and encouraging regular physical activity.",
"title": ""
},
{
"docid": "f847a04cb60bbbe5a2cd1ec1c4c9be6f",
"text": "This letter presents a wideband patch antenna on a low-temperature cofired ceramic substrate for Local Multipoint Distribution Service band applications. Conventional rectangular patch antennas have a narrow bandwidth. The proposed via-wall structure enhances the electric field coupling between the stacked patches to achieve wideband characteristics. We designed same-side and opposite-side feeding configurations and report on the fabrication of an experimental 28-GHz antenna used to validate the design concept. Measurements correlate well with the simulation results, achieving a 10-dB impedance bandwidth of 25.4% (23.4-30.2 GHz).",
"title": ""
},
{
"docid": "ea9e392bdca32154b95b2b0b424229c3",
"text": "Multi-person pose estimation in images and videos is an important yet challenging task with many applications. Despite the large improvements in human pose estimation enabled by the development of convolutional neural networks, there still exist a lot of difficult cases where even the state-of-the-art models fail to correctly localize all body joints. This motivates the need for an additional refinement step that addresses these challenging cases and can be easily applied on top of any existing method. In this work, we introduce a pose refinement network (PoseRefiner) which takes as input both the image and a given pose estimate and learns to directly predict a refined pose by jointly reasoning about the input-output space. In order for the network to learn to refine incorrect body joint predictions, we employ a novel data augmentation scheme for training, where we model \"hard\" human pose cases. We evaluate our approach on four popular large-scale pose estimation benchmarks such as MPII Single- and Multi-Person Pose Estimation, PoseTrack Pose Estimation, and PoseTrack Pose Tracking, and report systematic improvement over the state of the art.",
"title": ""
},
{
"docid": "cf7eff6c24f333b6bcf30ef8cd8686e0",
"text": "For 4 decades, vigorous efforts have been based on the premise that early intervention for children of poverty and, more recently, for children with developmental disabilities can yield significant improvements in cognitive, academic, and social outcomes. The history of these efforts is briefly summarized and a conceptual framework presented to understand the design, research, and policy relevance of these early interventions. This framework, biosocial developmental contextualism, derives from social ecology, developmental systems theory, developmental epidemiology, and developmental neurobiology. This integrative perspective predicts that fragmented, weak efforts in early intervention are not likely to succeed, whereas intensive, high-quality, ecologically pervasive interventions can and do. Relevant evidence is summarized in 6 principles about efficacy of early intervention. The public policy challenge in early intervention is to contain costs by more precisely targeting early interventions to those who most need and benefit from these interventions. The empirical evidence on biobehavioral effects of early experience and early intervention has direct relevance to federal and state policy development and resource allocation.",
"title": ""
},
{
"docid": "b7094d555b9b4c7197822027510a65aa",
"text": "Vegetation indices have been used extensively to estimate the vegetation density from satellite and airborne images for many years. In this paper, we focus on one of the most popular of such indices, the normalized difference vegetation index (NDVI), and we introduce a statistical framework to analyze it. As the degree of vegetation increases, the corresponding NDVI values begin to saturate and cannot represent highly vegetated regions reliably. By adopting the statistical viewpoint, we show how to obtain a linearized and more reliable measure. While the NDVI uses only red and near-infrared bands, we use the statistical framework to introduce new indices using the blue and green bands as well. We compare these indices with that obtained by linearizing the NDVI with extensive experimental results on real IKONOS multispectral images.",
"title": ""
},
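For reference, the NDVI discussed above is the standard normalized difference of the near-infrared and red bands; a small sketch of that baseline computation (the paper's statistically linearized indices are not reproduced here) could look like the following.

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Standard NDVI = (NIR - Red) / (NIR + Red), computed per pixel."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + 1e-12)  # tiny epsilon guards against 0/0

# Toy 2x2 example bands (reflectance values).
nir = np.array([[0.60, 0.70], [0.50, 0.80]])
red = np.array([[0.20, 0.10], [0.30, 0.20]])
print(ndvi(nir, red))
```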
{
"docid": "204a2331af6c32a502005d5d19f4fc10",
"text": "This paper presents a detailed comparative study of spoke type brushless dc (SPOKE-BLDC) motors due to the operating conditions and designs a new type SPOKE-BLDC with flux barriers for high torque applications, such as tractions. The current dynamic analysis method considering the local magnetic saturation of the rotor and the instantaneous current by pwm driving circuit is developed based on the coupled finite element analysis with rotor dynamic equations. From this analysis, several new structures using the flux barriers are designed and the characteristics are compared in order to reduce the large torque ripple and improve the average torque of SPOKE-BLDC. From these results, it is confirmed that the flux barriers, which are inserted on the optimized position of the rotor, have made remarkable improvement for the torque characteristics of the SPOKE-BLDC.",
"title": ""
},
{
"docid": "0cc665089be9aa8217baac32f0385f41",
"text": "Deep neural networks have achieved near-human accuracy levels in various types of classification and prediction tasks including images, text, speech, and video data. However, the networks continue to be treated mostly as black-box function approximators, mapping a given input to a classification output. The next step in this human-machine evolutionary process — incorporating these networks into mission critical processes such as medical diagnosis, planning and control — requires a level of trust association with the machine output. Typically, statistical metrics are used to quantify the uncertainty of an output. However, the notion of trust also depends on the visibility that a human has into the working of the machine. In other words, the neural network should provide human-understandable justifications for its output leading to insights about the inner workings. We call such models as interpretable deep networks. Interpretability is not a monolithic notion. In fact, the subjectivity of an interpretation, due to different levels of human understanding, implies that there must be a multitude of dimensions that together constitute interpretability. In addition, the interpretation itself can be provided either in terms of the low-level network parameters, or in terms of input features used by the model. In this paper, we outline some of the dimensions that are useful for model interpretability, and categorize prior work along those dimensions. In the process, we perform a gap analysis of what needs to be done to improve model interpretability.",
"title": ""
},
{
"docid": "e3b3e4e75580f3dad0f2fb2b9e28fff4",
"text": "The present study introduced an integrated method for the production of biodiesel from microalgal oil. Heterotrophic growth of Chlorella protothecoides resulted in the accumulation of high lipid content (55%) in cells. Large amount of microalgal oil was efficiently extracted from these heterotrophic cells by using n-hexane. Biodiesel comparable to conventional diesel was obtained from heterotrophic microalgal oil by acidic transesterification. The best process combination was 100% catalyst quantity (based on oil weight) with 56:1 molar ratio of methanol to oil at temperature of 30 degrees C, which reduced product specific gravity from an initial value of 0.912 to a final value of 0.8637 in about 4h of reaction time. The results suggested that the new process, which combined bioengineering and transesterification, was a feasible and effective method for the production of high quality biodiesel from microalgal oil.",
"title": ""
},
{
"docid": "80ba326570f2e492eff3515ddcc2b3cf",
"text": "Automatic program transformation tools can be valuable for programmers to help them with refactoring tasks, and for Computer Science students in the form of tutoring systems that suggest repairs to programming assignments. However, manually creating catalogs of transformations is complex and time-consuming. In this paper, we present REFAZER, a technique for automatically learning program transformations. REFAZER builds on the observation that code edits performed by developers can be used as input-output examples for learning program transformations. Example edits may share the same structure but involve different variables and subexpressions, which must be generalized in a transformation at the right level of abstraction. To learn transformations, REFAZER leverages state-of-the-art programming-by-example methodology using the following key components: (a) a novel domain-specific language (DSL) for describing program transformations, (b) domain-specific deductive algorithms for efficiently synthesizing transformations in the DSL, and (c) functions for ranking the synthesized transformations. We instantiate and evaluate REFAZER in two domains. First, given examples of code edits used by students to fix incorrect programming assignment submissions, we learn program transformations that can fix other students' submissions with similar faults. In our evaluation conducted on 4 programming tasks performed by 720 students, our technique helped to fix incorrect submissions for 87% of the students. In the second domain, we use repetitive code edits applied by developers to the same project to synthesize a program transformation that applies these edits to other locations in the code. In our evaluation conducted on 56 scenarios of repetitive edits taken from three large C# open-source projects, REFAZER learns the intended program transformation in 84% of the cases using only 2.9 examples on average.",
"title": ""
},
{
"docid": "d83ecee8e5f59ee8e6a603c65f952c22",
"text": "PredPatt is a pattern-based framework for predicate-argument extraction. While it works across languages and provides a well-formed syntax-semantics interface for NLP tasks, a large-scale and reproducible evaluation has been lacking, which prevents comparisons between PredPatt and other related systems, and inhibits the updates of the patterns in PredPatt. In this work, we improve and evaluate PredPatt by introducing a large set of high-quality annotations converted from PropBank, which can also be used as a benchmark for other predicate-argument extraction systems. We compare PredPatt with other prominent systems and shows that PredPatt achieves the best precision and recall.",
"title": ""
},
{
"docid": "23ba216f846eab3ff8c394ad29b507bf",
"text": "The emergence of large-scale freeform shapes in architecture poses big challenges to the fabrication of such structures. A key problem is the approximation of the design surface by a union of patches, so-called panels, that can be manufactured with a selected technology at reasonable cost, while meeting the design intent and achieving the desired aesthetic quality of panel layout and surface smoothness. The production of curved panels is mostly based on molds. Since the cost of mold fabrication often dominates the panel cost, there is strong incentive to use the same mold for multiple panels. We cast the major practical requirements for architectural surface paneling, including mold reuse, into a global optimization framework that interleaves discrete and continuous optimization steps to minimize production cost while meeting user-specified quality constraints. The search space for optimization is mainly generated through controlled deviation from the design surface and tolerances on positional and normal continuity between neighboring panels. A novel 6-dimensional metric space allows us to quickly compute approximate inter-panel distances, which dramatically improves the performance of the optimization and enables the handling of complex arrangements with thousands of panels. The practical relevance of our system is demonstrated by paneling solutions for real, cutting-edge architectural freeform design projects.",
"title": ""
}
] |
scidocsrr
|
7f905cac740516a87c460e9988988718
|
Automatic detection of cyber-recruitment by violent extremists
|
[
{
"docid": "e07cb04e3000607d4a3f99d47f72a906",
"text": "As part of the NSF-funded Dark Web research project, this paper presents an exploratory study of cyber extremism on the Web 2.0 media: blogs, YouTube, and Second Life. We examine international Jihadist extremist groups that use each of these media. We observe that these new, interactive, multimedia-rich forms of communication provide effective means for extremists to promote their ideas, share resources, and communicate among each other. The development of automated collection and analysis tools for Web 2.0 can help policy makers, intelligence analysts, and researchers to better understand extremistspsila ideas and communication patterns, which may lead to strategies that can counter the threats posed by extremists in the second-generation Web.",
"title": ""
},
{
"docid": "4d791fa53f7ed8660df26cd4dbe9063a",
"text": "The Internet is a powerful political instrument, wh ich is increasingly employed by terrorists to forward their goals. The fiv most prominent contemporary terrorist uses of the Net are information provision , fi ancing, networking, recruitment, and information gathering. This article describes a nd explains each of these uses and follows up with examples. The final section of the paper describes the responses of government, law enforcement, intelligence agencies, and others to the terrorism-Internet nexus. There is a particular emphasis within the te xt on the UK experience, although examples from other jurisdictions are also employed . ___________________________________________________________________ “Terrorists use the Internet just like everybody el se” Richard Clarke (2004) 1 ___________________________________________________________________",
"title": ""
}
] |
[
{
"docid": "126b62a0ae62c76b43b4fb49f1bf05cd",
"text": "OBJECTIVE\nThe aim of the study was to evaluate efficacy of fractional CO2 vaginal laser treatment (Laser, L) and compare it to local estrogen therapy (Estriol, E) and the combination of both treatments (Laser + Estriol, LE) in the treatment of vulvovaginal atrophy (VVA).\n\n\nMETHODS\nA total of 45 postmenopausal women meeting inclusion criteria were randomized in L, E, or LE groups. Assessments at baseline, 8 and 20 weeks, were conducted using Vaginal Health Index (VHI), Visual Analog Scale for VVA symptoms (dyspareunia, dryness, and burning), Female Sexual Function Index, and maturation value (MV) of Meisels.\n\n\nRESULTS\nForty-five women were included and 3 women were lost to follow-up. VHI average score was significantly higher at weeks 8 and 20 in all study arms. At week 20, the LE arm also showed incremental improvement of VHI score (P = 0.01). L and LE groups showed a significant improvement of dyspareunia, burning, and dryness, and the E arm only of dryness (P < 0.001). LE group presented significant improvement of total Female Sex Function Index (FSFI) score (P = 0.02) and individual domains of pain, desire, and lubrication. In contrast, the L group showed significant worsening of pain domain in FSFI (P = 0.04), but FSFI total scores were comparable in all treatment arms at week 20.\n\n\nCONCLUSIONS\nCO2 vaginal laser alone or in combination with topical estriol is a good treatment option for VVA symptoms. Sexual-related pain with vaginal laser treatment might be of concern.",
"title": ""
},
{
"docid": "96e56dcf3d38c8282b5fc5c8ae747a66",
"text": "The solid-state transformer (SST) was conceived as a replacement for the conventional power transformer, with both lower volume and weight. The smart transformer (ST) is an SST that provides ancillary services to the distribution and transmission grids to optimize their performance. Hence, the focus shifts from hardware advantages to functionalities. One of the most desired functionalities is the dc connectivity to enable a hybrid distribution system. For this reason, the ST architecture shall be composed of at least two power stages. The standard design procedure for this kind of system is to design each power stage for the maximum load. However, this design approach might limit additional services, like the reactive power compensation on the medium voltage (MV) side, and it does not consider the load regulation capability of the ST on the low voltage (LV) side. If the SST is tailored to the services that it shall provide, different stages will have different designs, so that the ST is no longer a mere application of the SST but an entirely new subject.",
"title": ""
},
{
"docid": "a21513f9cf4d5a0e6445772941e9fba2",
"text": "Superficial dorsal penile vein thrombosis was diagnosed 8 times in 7 patients between 19 and 40 years old (mean age 27 years). All patients related the onset of the thrombosis to vigorous sexual intercourse. No other etiological medications, drugs or constricting devices were implicated. Three patients were treated acutely with anti-inflammatory medications, while 4 were managed expectantly. The mean interval to resolution of symptoms was 7 weeks. Followup ranged from 3 to 30 months (mean 11) at which time all patients noticed normal erectile function. Only 1 patient had recurrent thrombosis 3 months after the initial episode, again related to intercourse. We conclude that this is a benign self-limited condition. Anti-inflammatory agents are useful for acute discomfort but they do not affect the rate of resolution.",
"title": ""
},
{
"docid": "688ee7a4bde400a6afbd6972d729fad4",
"text": "Learning-to-Rank ( LtR ) techniques leverage machine learning algorithms and large amounts of training data to induce high-quality ranking functions. Given a set of documents and a user query, these functions are able to precisely predict a score for each of the documents, in turn exploited to effectively rank them. Although the scoring efficiency of LtR models is critical in several applications – e.g., it directly impacts on response time and throughput of Web query processing – it has received relatively little attention so far. The goal of this work is to experimentally investigate the scoring efficiency of LtR models along with their ranking quality. Specifically, we show that machine-learned ranking models exhibit a quality versus efficiency trade-off. For example, each family of LtR algorithms has tuning parameters that can influence both effectiveness and efficiency, where higher ranking quality is generally obtained with more complex and expensive models. Moreover, LtR algorithms that learn complex models, such as those based on forests of regression trees, are generally more expensive and more effective than other algorithms that induce simpler models like linear combination of features. We extensively analyze the quality versus efficiency trade-off of a wide spectrum of stateof-the-art LtR , and we propose a sound methodology to devise the most effective ranker given a time budget. To guarantee reproducibility, we used publicly available datasets and we contribute an open source C++ framework providing optimized, multi-threaded implementations of the most effective tree-based learners: Gradient Boosted Regression Trees ( GBRT ), Lambda-Mart ( λ-MART ), and the first public-domain implementation of Oblivious Lambda-Mart ( λ-MART ), an algorithm that induces forests of oblivious regression trees. We investigate how the different training parameters impact on the quality versus efficiency trade-off, and provide a thorough comparison of several algorithms in the qualitycost space. The experiments conducted show that there is not an overall best algorithm, but the optimal choice depends on the time budget. © 2016 Elsevier Ltd. All rights reserved. ∗ Corresponding author. E-mail addresses: gabriele.capannini@mdh.se (G. Capannini), claudio.lucchese@isti.cnr.it , c.lucchese@isti.cnr.it (C. Lucchese), f.nardini@isti.cnr.it (F.M. Nardini), orlando@unive.it (S. Orlando), r.perego@isti.cnr.it (R. Perego), n.tonellotto@isti.cnr.it (N. Tonellotto). http://dx.doi.org/10.1016/j.ipm.2016.05.004 0306-4573/© 2016 Elsevier Ltd. All rights reserved. Please cite this article as: G. Capannini et al., Quality versus efficiency in document scoring with learning-to-rank models, Information Processing and Management (2016), http://dx.doi.org/10.1016/j.ipm.2016.05.004 2 G. Capannini et al. / Information Processing and Management 0 0 0 (2016) 1–17 ARTICLE IN PRESS JID: IPM [m3Gsc; May 17, 2016;19:28 ] Document Index Base Ranker Top Ranker Features Learning to Rank Algorithm Query First step Second step N docs K docs 1. ............ 2. ............ 3. ............ K. ............ ... ... Results Page(s) Fig. 1. The architecture of a generic machine-learned ranking pipeline.",
"title": ""
},
{
"docid": "350137bf3c493b23aa6d355df946440f",
"text": "Given the increasing popularity of wearable devices, this paper explores the potential to use wearables for steering and driver tracking. Such capability would enable novel classes of mobile safety applications without relying on information or sensors in the vehicle. In particular, we study how wrist-mounted inertial sensors, such as those in smart watches and fitness trackers, can track steering wheel usage and angle. In particular, tracking steering wheel usage and turning angle provide fundamental techniques to improve driving detection, enhance vehicle motion tracking by mobile devices and help identify unsafe driving. The approach relies on motion features that allow distinguishing steering from other confounding hand movements. Once steering wheel usage is detected, it further uses wrist rotation measurements to infer steering wheel turning angles. Our on-road experiments show that the technique is 99% accurate in detecting steering wheel usage and can estimate turning angles with an average error within 3.4 degrees.",
"title": ""
},
{
"docid": "bccb8e4cf7639dbcd3896e356aceec8d",
"text": "Over 50 million people worldwide suffer from epilepsy. Traditional diagnosis of epilepsy relies on tedious visual screening by highly trained clinicians from lengthy EEG recording that contains the presence of seizure (ictal) activities. Nowadays, there are many automatic systems that can recognize seizure-related EEG signals to help the diagnosis. However, it is very costly and inconvenient to obtain long-term EEG data with seizure activities, especially in areas short of medical resources. We demonstrate in this paper that we can use the interictal scalp EEG data, which is much easier to collect than the ictal data, to automatically diagnose whether a person is epileptic. In our automated EEG recognition system, we extract three classes of features from the EEG data and build Probabilistic Neural Networks (PNNs) fed with these features. We optimize the feature extraction parameters and combine these PNNs through a voting mechanism. As a result, our system achieves an impressive 94.07% accuracy.",
"title": ""
},
{
"docid": "866b9d88e90a93357ca6caa4979ef2d7",
"text": "This paper describes a new speech corpus, the RSR2015 database designed for text-dependent speaker recognition with scenario based on fixed pass-phrases. This database consists of over 71 hours of speech recorded from English speakers covering the diversity of accents spoken in Singapore. Acquisition has been done using a set of six portable devices including smart phones and tablets. The pool of speakers consists of 300 participants (143 female and 157 male speakers) from 17 to 42 years old. We propose a protocol for the case of user-dependent pass-phrases in text-dependent speaker recognition and we also report speaker recognition experiments on RSR2015 database.",
"title": ""
},
{
"docid": "d95cc1187827e91601cb5711dbdb1550",
"text": "As data sparsity remains a significant challenge for collaborative filtering (CF, we conjecture that predicted ratings based on imputed data may be more accurate than those based on the originally very sparse rating data. In this paper, we propose a framework of imputation-boosted collaborative filtering (IBCF), which first uses an imputation technique, or perhaps machine learned classifier, to fill-in the sparse user-item rating matrix, then runs a traditional Pearson correlation-based CF algorithm on this matrix to predict a novel rating. Empirical results show that IBCF using machine learning classifiers can improve predictive accuracy of CF tasks. In particular, IBCF using a classifier capable of dealing well with missing data, such as naïve Bayes, can outperform the content-boosted CF (a representative hybrid CF algorithm) and IBCF using PMM (predictive mean matching, a state-of-the-art imputation technique), without using external content information.",
"title": ""
},
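To make the impute-then-predict pipeline above concrete, here is a small sketch that imputes missing ratings with each user's mean (standing in for the machine-learned imputer) and then makes a Pearson-correlation-weighted prediction; it is illustrative only and not the authors' implementation.

```python
import numpy as np

def pearson(a: np.ndarray, b: np.ndarray) -> float:
    """Pearson correlation between two fully imputed rating vectors."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def ibcf_predict(ratings: np.ndarray, user: int, item: int) -> float:
    """Impute NaNs with each user's mean, then predict via weighted deviations."""
    filled = ratings.copy()
    for row in filled:                      # rows are views, so edits land in `filled`
        row[np.isnan(row)] = np.nanmean(row)
    num, den = 0.0, 0.0
    for other in range(filled.shape[0]):
        if other == user:
            continue
        w = pearson(filled[user], filled[other])
        num += w * (filled[other, item] - filled[other].mean())
        den += abs(w)
    return float(filled[user].mean() + num / (den + 1e-12))

R = np.array([[5, 4, np.nan, 1],
              [4, np.nan, 3, 1],
              [1, 2, np.nan, 5]], dtype=float)
print(ibcf_predict(R, user=0, item=2))
```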
{
"docid": "0e68120ea21beb2fdaff6538aa342aa5",
"text": "The development of a truly non-invasive continuous glucose sensor is an elusive goal. We describe the rise and fall of the Pendra device. In 2000 the company Pendragon Medical introduced a truly non-invasive continuous glucose-monitoring device. This system was supposed to work through so-called impedance spectroscopy. Pendra was Conformité Européenne (CE) approved in May 2003. For a short time the Pendra was available on the Dutch direct-to-consumer market. A post-marketing reliability study was performed in six type 1 diabetes patients. Mean absolute difference between Pendra glucose values and values obtained through self-monitoring of blood glucose was 52%; the Pearson’s correlation coefficient was 35.1%; and a Clarke error grid showed 4.3% of the Pendra readings in the potentially dangerous zone E. We argue that the CE certification process for continuous glucose sensors should be made more transparent, and that a consensus on specific requirements for continuous glucose sensors is needed to prevent patient exposure to potentially dangerous situations.",
"title": ""
},
{
"docid": "b47127a755d7bef1c5baf89253af46e7",
"text": "In an effort to explain pro-environmental behavior, environmental sociologists often study environmental attitudes. While much of this work is atheoretical, the focus on attitudes suggests that researchers are implicitly drawing upon attitude theory in psychology. The present research brings sociological theory to environmental sociology by drawing on identity theory to understand environmentally responsive behavior. We develop an environment identity model of environmental behavior that includes not only the meanings of the environment identity, but also the prominence and salience of the environment identity and commitment to the environment identity. We examine the identity process as it relates to behavior, though not to the exclusion of examining the effects of environmental attitudes. The findings reveal that individual agency is important in influencing environmentally responsive behavior, but this agency is largely through identity processes, rather than attitude processes. This provides an important theoretical and empirical advance over earlier work in environmental sociology.",
"title": ""
},
{
"docid": "284c056db69549efe956b81d5316ac6d",
"text": "PURPOSE\nThe aim of this study was to identify the effects of a differential-learning program, embedded in small-sided games, on the creative and tactical behavior of youth soccer players. Forty players from under-13 (U13) and under-15 (U15) were allocated into control and experimental groups and were tested using a randomized pretest to posttest design using small-sided games situations.\n\n\nMETHOD\nThe experimental group participated in a 5-month differential-learning program embodied in small-sided games situations, while the control group participated in a typical small-sided games training program. In-game creativity was assessed through notational analyses of the creative components, and the players' positional data were used to compute tactical-derived variables.\n\n\nRESULTS\nThe findings suggested that differential learning facilitated the development of creative components, mainly concerning attempts (U13, small; U15, small), versatility (U13, moderate; U15, small), and originality (U13, unclear; U15, small) of players' actions. Likewise, the differential-learning approach provided a decrease in fails during the game in both experimental groups (moderate). Moreover, differential learning seemed to favor regularity in pitch-positioning behavior for the distance between players' dyads (U13, small; U15, small), the distance to the team target (U13, moderate; U15, small), and the distance to the opponent target (U13, moderate; U15, small).\n\n\nCONCLUSIONS\nThe differential-learning program stressed creative and positional behavior in both age groups with a distinct magnitude of effects, with the U13 players demonstrating higher improvements over the U15 players. Overall, these findings confirmed that the technical variability promoted by differential learning nurtures regularity of positioning behavior.",
"title": ""
},
{
"docid": "883c37c54b86cea09d04657426a96f97",
"text": "temporal relations tend to emerge later in development than simple comparatives. 2.4.7. Spatial Relations A large number of relations deal with the arrangement of objects or aspects of objects in space, relative to each other, such as in-out, front-back, over-under and so on. These spatial relations are like comparative relations, but often they imply or specify frames of reference that make them quite specific. For example, if you are told that house A faces the back of house B, you could order the front and back doors of both houses into a linear sequence (back door of A, front door of A, back door of B, front door of B). This is because front and back doors are relative to each individual house, and knowing the orientation of the two houses implies the more detailed information. 2.4.8. Conditionality and Causality Conditionality and causality share features with both hierarchical relations and comparative relations. Forexample, if a listener is told, “A causes B and B causes C,” s/he may simply derive, via a frame of comparison, that “A caused C and C was caused by A.” Hierarchical class membership is involved, however, if the listener derives “B was caused by A alone, but C was caused by both A and B.” That is, the listener constructs a precise hierarchy of causeeffect relations, and therefore such relational responding extends beyond the basic frame of comparison. The same type of analysis may be applied to conditional relations such as “ifthen.” The constructed nature of this relation is more obvious than with temporal relations, particularly as one begins to attribute cause to conditional properties. Events are said to cause events based on many features: sequences, contiguity, manipulability, practical exigencies, cultural beliefs, and so on. Causality itself is not a physical dimension of any event. 2.4.9. Deictic Relations By deictic relations we mean those that specify a relation in terms of the perspective of the speaker such as left-right; I-you (and all of its correlates, such as “mine”); here-there; and now-then (see Barnes and Roche, 1997a; Hayes, 1984). Some relations may or may not be deictic, such as front-back or above-below, depending on the perspective applied. For example, the sentence “the back door of my house is in front of me” contains both spatial and deictic forms of “front-back.” Deictic relations seem to be a particularly important family of relational frames that may be critical for perspective-taking. Consider, for example, the three frames of I and YOU, HERE and THERE, and NOW and THEN (when it seems contextually useful, we will capitalize relational terms if they refer to specific relational frames). These frames are unlike DERIVED RELATIONAL RESPONDING AS LEARNED BEHAVIOR 39 the others mentioned previously in that they do not appear to have any formal or nonarbitrary counterparts. Coordination, for instance, is based on formal identity or sameness, while “bigger than” is based on relative size. Temporal frames are more inherently verbal in that they are based on the nonarbitrary experience of change, but the dimensional nature of that experience must be verbally constructed. Frames that depend on perspective, however, cannot be traced to formal dimensions in the environment at all. Instead, the relationship between the individual and other events serves as the constant variable upon which these frames are based. 
Learning to respond appropriately to (and ask) the following kinds of questions appears to be critical in establishing these kinds of relational frames: “What are you doing now?” “What did you do then?” “What are you doing here?” “What are you doing there?” “What am I doing now?” “What did I do then?” “What am I doing here?” “What will I do there?” Each time one or more of these questions is asked or answered, the physical environment will likely be different. The only constant across all of the questions are the relational properties of I versus You, Here versus There, and Now versus Then. These properties appear to be abstracted through learning to talk about one’s own perspective in relation to other perspectives. For example, I is always from this perspective here, not from someone else’s perspective there. Clearly, a speaker must learn to respond in accordance with these relational frames. For example, if Peter is asked, “What did you do when you got there?” he should not simply describe what someone else is doing now (unless he wishes to hide what he actually did, or annoy and confuse the questioner). We shall consider the relational frames of perspective in greater detail in subsequent chapters. 2.4.10. Interactions Among Relational Frames At the present time very little is known about the effects of learning to respond in accordance with one type of frame on other framing activities. We have seen evidence in our research of such effects. For example, training in SAME may make OPPOSITION easier; training in deictic relations may make appreciation of contingencies easier and so on. One fairly clear prediction from RFT is that there should be some generalization of relational responding, particularly within families of relational frames. For example, an individual who learns to respond in accordance with sameness, may learn to respond in accordance with similarity (or opposition, since sameness is a combinatorially entailed aspect of opposition) more rapidly than, say, comparison. Similarly, learning across more closely associated families of relations may be more expected than learning across more distinct families. For example, to frame in accordance with comparison may facilitate hierarchical framing more readily than a frame of coordination. For the time being, however, such issues will have to await systematic empirical investigation. 40 RELATIONAL FRAME THEORY 2.4.11. Relational Frames: A Caveat In listing the foregoing families of relational frames, we are not suggesting that they are somehow final or absolute. If RFT is correct, the number of relational frames is limited only by the creativity of the social/verbal community that trains them. Some frames, such as coordination, have been the subject of many empirical analyses. Others such as opposition and more-than/less-than have also been studied experimentally, but the relevant database is much smaller than for coordination. Many of the frames listed, however, have not been analyzed empirically, or have only been subjected to the most preliminary of experimental analyses. Thus the list we have presented is to some degree tentative in that some of the relational frames we have identified are based on our preliminary, non-experimental analyses of human language. For example, TIME and CAUSALITY can be thought of as one or two types of relations. It is not yet clear if thinking of them as separate or related may be the most useful. 
Thus, while the generic concept of a relational frame is foundational to RFT, the concept of any particular relational frame is not. Our aim in presenting this list is to provide a set of conceptual tools, some more firmly grounded in data than others, that may be modified and refined as subsequent empirical analyses are conducted. 2.5. COMPLEX RELATIONAL NETWORKS It is possible to create relational networks from mixtures of various relational frames and to relate entire relational classes with other relational classes. Forexample, if one equivalence class is the opposite of another equivalence class, then normally each member of the first class is the opposite of all members of the second and vice versa. This can continue to virtually any level of complexity. For example, consider the relations that surround a given word, such as “car.” It is part of many hierarchical classes, such as the class “noun,” or the class “vehicles.” Other terms are in a hierarchical relation with it, such as “windshield” or “wheel.” It enters into many comparisons: it is faster then a snail, bigger than a breadbox, heavier than a book. It is the same as “automobile,” but different than a house, and so on. The participation of the word “car” in these relations is part of the training required for the verbal community to use the stimulus “car” in the way that it does. Even the simplest verbal concept quickly becomes the focus of a complex network of stimulus relations in natural language use. We will deal with this in detail in the next three chapters because this is a crucial form of relational responding in such activities as problem-solving, reasoning, and thinking. The generative implications of this process are spectacular. A single specified relation between two sets of relata might give rise to myriad derived relations in an instant. Entire sets of relations can change in an instant. This kind of phenomenon seems to be part of what is being described with terms like “insight.” 2.6. EMPIRICAL EVIDENCE FOR RELATIONAL FRAMES AS OPERANTS Operant behavior can be originated, maintained, modified, or eliminated in the laboratory and it is relatively easy to identify operants in that context. Many naturally occurring DERIVED RELATIONAL RESPONDING AS LEARNED BEHAVIOR 41 behaviors, however, are difficult to bring into the laboratory in such a highly controlled fashion. Nevertheless, we can examine the characteristics of these naturalistic behaviors to see if they have some of the properties characteristic of operants. Four such properties seem most relevant: first, they should develop over time rather than emerging in whole cloth; second, they should have flexible form; third, they should be under antecedent stimulus control; and fourth, they should be under consequential control. If derived stimulus relations are based upon operant behavior, they should show these four characteristics. Although much work remains to be done, there is some supporting evidence for each of them.",
"title": ""
},
{
"docid": "1d0ca28334542ed2978f986cd3550150",
"text": "Recent success of deep learning models for the task of extractive Question Answering (QA) is hinged on the availability of large annotated corpora. However, large domain specific annotated corpora are limited and expensive to construct. In this work, we envision a system where the end user specifies a set of base documents and only a few labelled examples. Our system exploits the document structure to create cloze-style questions from these base documents; pre-trains a powerful neural network on the cloze style questions; and further finetunes the model on the labeled examples. We evaluate our proposed system across three diverse datasets from different domains, and find it to be highly effective with very little labeled data. We attain more than 50% F1 score on SQuAD and TriviaQA with less than a thousand labelled examples. We are also releasing a set of 3.2M cloze-style questions for practitioners to use while building QA systems1.",
"title": ""
},
{
"docid": "802280cdb72ad33987ad57772d932537",
"text": "It is usually believed that drugs of abuse are smuggled into the United States or are clandestinely produced for illicit distribution. Less well known is that many hallucinogens and dissociative agents can be obtained from plants and fungi growing wild or in gardens. Some of these botanical sources can be located throughout the United States; others have a more narrow distribution. This article reviews plants containing N,N-dimethyltryptamine, reversible type A monoamine oxidase inhibitors (MAOI), lysergic acid amide, the anticholinergic drugs atropine and scopolamine, or the diterpene salvinorin-A (Salvia divinorum). Also reviewed are mescaline-containing cacti, psilocybin/psilocin-containing mushrooms, and the Amanita muscaria and Amanita pantherina mushrooms that contain muscimol and ibotenic acid. Dangerous misidentification is most common with the mushrooms, but even a novice forager can quickly learn how to properly identify and prepare for ingestion many of these plants. Moreover, through the ever-expanding dissemination of information via the Internet, this knowledge is being obtained and acted upon by more and more individuals. This general overview includes information on the geographical range, drug content, preparation, intoxication, and the special health risks associated with some of these plants. Information is also offered on the unique issue of when bona fide religions use such plants as sacraments in the United States. In addition to the Native American Church's (NAC) longstanding right to peyote, two religions of Brazilian origin, the Santo Daime and the Uniao do Vegetal (UDV), are seeking legal protection in the United States for their use of sacramental dimethyltryptamine-containing \"ayahuasca.\"",
"title": ""
},
{
"docid": "8976eea8c39d9cb9dea21c42bae8ebea",
"text": "Continuously monitoring schizophrenia patients’ psychiatric symptoms is crucial for in-time intervention and treatment adjustment. The Brief Psychiatric Rating Scale (BPRS) is a survey administered by clinicians to evaluate symptom severity in schizophrenia. The CrossCheck symptom prediction system is capable of tracking schizophrenia symptoms based on BPRS using passive sensing from mobile phones. We present results from an ongoing randomized control trial, where passive sensing data, self-reports, and clinician administered 7-item BPRS surveys are collected from 36 outpatients with schizophrenia recently discharged from hospital over a period ranging from 2-12 months. We show that our system can predict a symptom scale score based on a 7-item BPRS within ±1.45 error on average using automatically tracked behavioral features from phones (e.g., mobility, conversation, activity, smartphone usage, the ambient acoustic environment) and user supplied self-reports. Importantly, we show our system is also capable of predicting an individual BPRS score within ±1.59 error purely based on passive sensing from phones without any self-reported information from outpatients. Finally, we discuss how well our predictive system reflects symptoms experienced by patients by reviewing a number of case studies.",
"title": ""
},
{
"docid": "e9c383d71839547d41829348bebaabcf",
"text": "Receiver operating characteristic (ROC) analysis, which yields indices of accuracy such as the area under the curve (AUC), is increasingly being used to evaluate the performances of diagnostic tests that produce results on continuous scales. Both parametric and nonparametric ROC approaches are available to assess the discriminant capacity of such tests, but there are no clear guidelines as to the merits of each, particularly with non-binormal data. Investigators may worry that when data are non-Gaussian, estimates of diagnostic accuracy based on a binormal model may be distorted. The authors conducted a Monte Carlo simulation study to compare the bias and sampling variability in the estimates of the AUCs derived from parametric and nonparametric procedures. Each approach was assessed in data sets generated from various configurations of pairs of overlapping distributions; these included the binormal model and non-binormal pairs of distributions where one or both pair members were mixtures of Gaussian (MG) distributions with different degrees of departures from binormality. The biases in the estimates of the AUCs were found to be very small for both parametric and nonparametric procedures. The two approaches yielded very close estimates of the AUCs and the corresponding sampling variability even when data were generated from non-binormal models. Thus, for a wide range of distributions, concern about bias or imprecision of the estimates of the AUC should not be a major factor in choosing between the nonparametric and parametric approaches.",
"title": ""
},
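The preceding passage (docid e9c383d7…) compares parametric (binormal) and nonparametric estimates of the area under the ROC curve. As a hedged sketch of the two estimators being compared — the moment-based binormal fit below is a simplification of what a full ROC package would do, and the data are synthetic — consider:

```python
import numpy as np
from scipy.stats import norm

def auc_nonparametric(neg, pos):
    """Mann-Whitney estimate of the AUC: the probability that a random diseased
    score exceeds a random non-diseased score (ties counted as 1/2)."""
    neg, pos = np.asarray(neg, float), np.asarray(pos, float)
    diff = pos[:, None] - neg[None, :]
    return np.mean(diff > 0) + 0.5 * np.mean(diff == 0)

def auc_binormal(neg, pos):
    """Binormal (parametric) AUC: Phi(a / sqrt(1 + b^2)) with
    a = (mu1 - mu0) / s1 and b = s0 / s1, estimated from sample moments."""
    mu0, s0 = np.mean(neg), np.std(neg, ddof=1)
    mu1, s1 = np.mean(pos), np.std(pos, ddof=1)
    a, b = (mu1 - mu0) / s1, s0 / s1
    return norm.cdf(a / np.sqrt(1.0 + b * b))

rng = np.random.default_rng(1)
neg = rng.normal(0.0, 1.0, 200)     # non-diseased test scores
pos = rng.normal(1.0, 1.2, 200)     # diseased test scores
print(auc_nonparametric(neg, pos), auc_binormal(neg, pos))
```

On roughly Gaussian data the two estimates should agree closely, which is consistent with the passage's finding of small bias for both approaches.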
{
"docid": "063598613ce313e2ad6d2b0697e0c708",
"text": "Contour shape descriptors are among the important shape description methods. Fourier descriptors (FD) and curvature scale space descriptors (CSSD) are widely used as contour shape descriptors for image retrieval in the literature. In MPEG-7, CSSD has been proposed as one of the contour-based shape descriptors. However, no comprehensive comparison has been made between these two shape descriptors. In this paper we study and compare FD and CSSD using standard principles and standard database. The study targets image retrieval application. Our experimental results show that FD outperforms CSSD in terms of robustness, low computation, hierarchical representation, retrieval performance and suitability for efficient indexing.",
"title": ""
},
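The preceding passage (docid 06359861…) evaluates Fourier descriptors (FD) against curvature scale space descriptors for contour-based retrieval. The sketch below shows one common way to compute an FD from a boundary; the normalisation choices (dropping the DC term, magnitude-only coefficients, dividing by the first harmonic) are standard but are my illustration, not necessarily the exact variant used in the study, and a production version would first resample the contour to equally spaced points:

```python
import numpy as np

def fourier_descriptor(contour_xy, n_coeffs=16):
    """Translation/scale/rotation-invariant Fourier descriptor of a closed
    contour given as an (N, 2) array of boundary points.

    The boundary is treated as a complex signal z = x + iy; dropping the DC
    term removes translation, taking magnitudes removes rotation and the
    starting-point dependence, and dividing by |Z_1| normalises scale."""
    z = contour_xy[:, 0] + 1j * contour_xy[:, 1]
    mags = np.abs(np.fft.fft(z))
    return mags[1:n_coeffs + 1] / (mags[1] + 1e-12)

def descriptor_distance(d1, d2):
    """Euclidean distance between two descriptors (shape dissimilarity)."""
    return float(np.linalg.norm(d1 - d2))

# toy example: a circle and a slightly squashed circle
t = np.linspace(0.0, 2.0 * np.pi, 128, endpoint=False)
circle = np.c_[np.cos(t), np.sin(t)]
ellipse = np.c_[1.3 * np.cos(t), 0.8 * np.sin(t)]
print(descriptor_distance(fourier_descriptor(circle), fourier_descriptor(ellipse)))
```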
{
"docid": "e881c2ab6abc91aa8e7cbe54d861d36d",
"text": "Tracing traffic using commodity hardware in contemporary highspeed access or aggregation networks such as 10-Gigabit Ethernet is an increasingly common yet challenging task. In this paper we investigate if today’s commodity hardware and software is in principle able to capture traffic from a fully loaded Ethernet. We find that this is only possible for data rates up to 1 Gigabit/s without reverting to using special hardware due to, e. g., limitations with the current PC buses. Therefore, we propose a novel way for monitoring higher speed interfaces (e. g., 10-Gigabit) by distributing their traffic across a set of lower speed interfaces (e. g., 1-Gigabit). This opens the next question: which system configuration is capable of monitoring one such 1-Gigabit/s interface? To answer this question we present a methodology for evaluating the performance impact of different system components including different CPU architectures and different operating system. Our results indicate that the combination of AMD Opteron with FreeBSD outperforms all others, independently of running in singleor multi-processor mode. Moreover, the impact of packet filtering, running multiple capturing applications, adding per packet analysis load, saving the captured packets to disk, and using 64-bit OSes is investigated.",
"title": ""
},
{
"docid": "df5aaa0492fc07b76eb7f8da97ebf08e",
"text": "The aim of the present case report is to describe the orthodontic-surgical treatment of a 17-year-and-9-month-old female patient with a Class III malocclusion, poor facial esthetics, and mandibular and chin protrusion. She had significant anteroposterior and transverse discrepancies, a concave profile, and strained lip closure. Intraorally, she had a negative overjet of 5 mm and an overbite of 5 mm. The treatment objectives were to correct the malocclusion, and facial esthetic and also return the correct function. The surgical procedures included a Le Fort I osteotomy for expansion, advancement, impaction, and rotation of the maxilla to correct the occlusal plane inclination. There was 2 mm of impaction of the anterior portion of the maxilla and 5 mm of extrusion in the posterior region. A bilateral sagittal split osteotomy was performed in order to allow counterclockwise rotation of the mandible and anterior projection of the chin, accompanying the maxillary occlusal plane. Rigid internal fixation was used without any intermaxillary fixation. It was concluded that these procedures were very effective in producing a pleasing facial esthetic result, showing stability 7 years posttreatment.",
"title": ""
},
{
"docid": "9c6f2a1eb23fc35e5a3a2b54c5dcb0c4",
"text": "Some of the current assembly issues of fine-pitch chip-on-flex (COF) packages for LCD applications are reviewed. Traditional underfill material, anisotropic conductive adhesive (ACA), and nonconductive adhesive (NCA) are considered in conjunction with two applicable bonding methods including thermal and laser bonding. Advantages and disadvantages of each material/process combination are identified. Their applicability is further investigated to identify a process most suitable to the next-generation fine-pitch packages (less than 35 mum). Numerical results and subsequent testing results indicate that the NCA/laser bonding process is advantageous for preventing both lead crack and excessive misalignment compared to the conventional bonding process",
"title": ""
}
] |
scidocsrr
|
676750cc6699250834bbba06c106c5c6
|
Cyber-Physical-Social Based Security Architecture for Future Internet of Things
|
[
{
"docid": "de8e9537d6b50467d014451dcaae6c0e",
"text": "With increased global interconnectivity, reliance on e-commerce, network services, and Internet communication, computer security has become a necessity. Organizations must protect their systems from intrusion and computer-virus attacks. Such protection must detect anomalous patterns by exploiting known signatures while monitoring normal computer programs and network usage for abnormalities. Current antivirus and network intrusion detection (ID) solutions can become overwhelmed by the burden of capturing and classifying new viral stains and intrusion patterns. To overcome this problem, a self-adaptive distributed agent-based defense immune system based on biological strategies is developed within a hierarchical layered architecture. A prototype interactive system is designed, implemented in Java, and tested. The results validate the use of a distributed-agent biological-system approach toward the computer-security problems of virus elimination and ID.",
"title": ""
},
{
"docid": "e33dd9c497488747f93cfcc1aa6fee36",
"text": "The phrase Internet of Things (IoT) heralds a vision of the future Internet where connecting physical things, from banknotes to bicycles, through a network will let them take an active part in the Internet, exchanging information about themselves and their surroundings. This will give immediate access to information about the physical world and the objects in it leading to innovative services and increase in efficiency and productivity. This paper studies the state-of-the-art of IoT and presents the key technological drivers, potential applications, challenges and future research areas in the domain of IoT. IoT definitions from different perspective in academic and industry communities are also discussed and compared. Finally some major issues of future research in IoT are identified and discussed briefly.",
"title": ""
}
] |
[
{
"docid": "bc5b77c532c384281af64633fcf697a3",
"text": "The purpose of this study was to investigate the effects of a 12-week resistance-training program on muscle strength and mass in older adults. Thirty-three inactive participants (60-74 years old) were assigned to 1 of 3 groups: high-resistance training (HT), moderate-resistance training (MT), and control. After the training period, both HT and MT significantly increased 1-RM body strength, the peak torque of knee extensors and flexors, and the midthigh cross-sectional area of the total muscle. In addition, both HT and MT significantly decreased the abdominal circumference. HT was more effective in increasing 1-RM strength, muscle mass, and peak knee-flexor torque than was MT. These data suggest that muscle strength and mass can be improved in the elderly with both high- and moderate-intensity resistance training, but high-resistance training can lead to greater strength gains and hypertrophy than can moderate-resistance training.",
"title": ""
},
{
"docid": "fd4bd9edcaff84867b6e667401aa3124",
"text": "We give suggestions for the presentation of research results from frequentist, information-theoretic, and Bayesian analysis paradigms, followed by several general suggestions. The information-theoretic and Bayesian methods offer alternative approaches to data analysis and inference compared to traditionally used methods. Guidance is lacking on the presentation of results under these alternative procedures and on nontesting aspects of classical frequentist methods of statistical analysis. Null hypothesis testing has come under intense criticism. We recommend less reporting of the results of statistical tests of null hypotheses in cases where the null is surely false anyway, or where the null hypothesis is of little interest to science or management. JOURNAL OF WILDLIFE MANAGEMENT 65(3):373-378",
"title": ""
},
{
"docid": "5c819727ba80894e72531a62e402f0c4",
"text": "omega-3 fatty acids, alpha-tocopherol, ascorbic acid, beta-carotene and glutathione determined in leaves of purslane (Portulaca oleracea), grown in both a controlled growth chamber and in the wild, were compared in composition to spinach. Leaves from both samples of purslane contained higher amounts of alpha-linolenic acid (18:3w3) than did leaves of spinach. Chamber-grown purslane contained the highest amount of 18:3w3. Samples from the two kinds of purslane contained higher leaves of alpha-tocopherol, ascorbic acid and glutathione than did spinach. Chamber-grown purslane was richer in all three and the amount of alpha-tocopherol was seven times higher than that found in spinach, whereas spinach was slightly higher in beta-carotene. One hundred grams of fresh purslane leaves (one serving) contain about 300-400 mg of 18:3w3; 12.2 mg of alpha-tocopherol; 26.6 mg of ascorbic acid; 1.9 mg of beta-carotene; and 14.8 mg of glutathione. We confirm that purslane is a nutritious food rich in omega-3 fatty acids and antioxidants.",
"title": ""
},
{
"docid": "ede12c734b2fb65b427b3d47e1f3c3d8",
"text": "Battery management systems in hybrid-electric-vehicle battery packs must estimate values descriptive of the pack’s present operating condition. These include: battery state-of-charge, power fade, capacity fade, and instantaneous available power. The estimation mechanism must adapt to changing cell characteristics as cells age and therefore provide accurate estimates over the lifetime of the pack. In a series of three papers, we propose methods, based on extended Kalman filtering (EKF), that are able to accomplish these goals for a lithium ion polymer battery pack. We expect that they will also work well on other battery chemistries. These papers cover the required mathematical background, cell modeling and system identification requirements, and the final solution, together with results. This third paper concludes the series by presenting five additional applications where either an EKF or results from EKF may be used in typical BMS algorithms: initializing state estimates after the vehicle has been idle for some time; estimating state-of-charge with dynamic error bounds on the estimate; estimating pack available dis/charge power; tracking changing pack parameters (including power fade and capacity fade) as the pack ages, and therefore providing a quantitative estimate of state-of-health; and determining which cells must be equalized. Results from pack tests are presented. © 2004 Elsevier B.V. All rights reserved.",
"title": ""
},
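The preceding passage (docid ede12c73…) applies extended Kalman filtering (EKF) to battery state estimation. The generic predict/update skeleton below shows only the EKF equations; the one-state "cell model" in the toy example is a placeholder of my own and not the lithium-ion polymer model identified in those papers:

```python
import numpy as np

class ExtendedKalmanFilter:
    """Generic discrete-time EKF: the battery-specific state equation f and
    measurement equation h (and their Jacobians F, H) are supplied by the user."""

    def __init__(self, x0, P0, Q, R):
        self.x, self.P, self.Q, self.R = x0, P0, Q, R

    def predict(self, f, F, u):
        self.x = f(self.x, u)                     # a priori state estimate
        self.P = F @ self.P @ F.T + self.Q        # a priori covariance

    def update(self, z, h, H):
        y = z - h(self.x)                         # innovation
        S = H @ self.P @ H.T + self.R             # innovation covariance
        K = self.P @ H.T @ np.linalg.inv(S)       # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(len(self.x)) - K @ H) @ self.P

# toy 1-state example: state-of-charge depleted by coulomb counting,
# observed through a made-up linear "voltage" map (not a real cell model)
dt, capacity = 1.0, 3600.0
f = lambda x, u: x - (dt / capacity) * u          # u = current draw (A)
h = lambda x: 3.0 + 1.2 * x                       # placeholder OCV(SoC) curve
F, H = np.array([[1.0]]), np.array([[1.2]])

ekf = ExtendedKalmanFilter(x0=np.array([0.9]), P0=np.eye(1) * 0.01,
                           Q=np.eye(1) * 1e-7, R=np.eye(1) * 1e-3)
ekf.predict(f, F, u=1.0)
ekf.update(z=np.array([4.02]), h=h, H=H)
print(ekf.x)
```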
{
"docid": "e4e26cc61b326f8d60dc3f32909d340c",
"text": "We propose two secure protocols namely private equality test (PET) for single comparison and private batch equality test (PriBET) for batch comparisons of l-bit integers. We ensure the security of these secure protocols using somewhat homomorphic encryption (SwHE) based on ring learning with errors (ring-LWE) problem in the semi-honest model. In the PET protocol, we take two private integers input and produce the output denoting their equality or non-equality. Here the PriBET protocol is an extension of the PET protocol. So in the PriBET protocol, we take single private integer and another set of private integers as inputs and produce the output denoting whether single integer equals at least one integer in the set of integers or not. To serve this purpose, we also propose a new packing method for doing the batch equality test using few homomorphic multiplications of depth one. Here we have done our experiments at the 140-bit security level. For the lattice dimension 2048, our experiments show that the PET protocol is capable of doing any equality test of 8-bit to 2048-bit that require at most 107 milliseconds. Moreover, the PriBET protocol is capable of doing about 600 (resp., 300) equality comparisons per second for 32-bit (resp., 64-bit) integers. In addition, our experiments also show that the PriBET protocol can do more computations within the same time if the data size is smaller like 8-bit or 16-bit.",
"title": ""
},
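The preceding passage (docid e4e26cc6…) describes private equality tests evaluated under somewhat homomorphic encryption. The snippet below is not cryptographic at all — there is no ring-LWE, packing, or security here — it only illustrates, in plaintext, the kind of arithmetic equality circuit such a protocol evaluates over encrypted bits; all names and the batch variant are my own simplifications:

```python
def bits(x, length):
    """Little-endian bit decomposition of an integer."""
    return [(x >> i) & 1 for i in range(length)]

def equality_circuit(a, b, length):
    """Arithmetic equality test over bits: the product of (1 - (a_i - b_i)^2)
    equals 1 iff all bits agree.  A homomorphic scheme would evaluate a similar
    polynomial over encrypted bits; this plaintext version only shows the
    arithmetic, not the packing or the depth optimisation of the protocol."""
    result = 1
    for ai, bi in zip(bits(a, length), bits(b, length)):
        result *= 1 - (ai - bi) ** 2
    return result

def batch_equality(x, ys, length):
    """Batch variant: does x equal at least one element of ys?"""
    return max(equality_circuit(x, y, length) for y in ys)

print(equality_circuit(42, 42, 8))        # 1
print(equality_circuit(42, 43, 8))        # 0
print(batch_equality(7, [3, 7, 11], 8))   # 1
```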
{
"docid": "1cc4048067cc93c2f1e836c77c2e06dc",
"text": "Recent advances in microscope automation provide new opportunities for high-throughput cell biology, such as image-based screening. High-complex image analysis tasks often make the implementation of static and predefined processing rules a cumbersome effort. Machine-learning methods, instead, seek to use intrinsic data structure, as well as the expert annotations of biologists to infer models that can be used to solve versatile data analysis tasks. Here, we explain how machine-learning methods work and what needs to be considered for their successful application in cell biology. We outline how microscopy images can be converted into a data representation suitable for machine learning, and then introduce various state-of-the-art machine-learning algorithms, highlighting recent applications in image-based screening. Our Commentary aims to provide the biologist with a guide to the application of machine learning to microscopy assays and we therefore include extensive discussion on how to optimize experimental workflow as well as the data analysis pipeline.",
"title": ""
},
{
"docid": "440436a887f73c599452dc57c689dc9d",
"text": "This paper will explore the process of desalination by reverse osmosis (RO) and the benefits that it can contribute to society. RO may offer a sustainable solution to the water crisis, a global problem that is not going away without severe interference and innovation. This paper will go into depth on the processes involved with RO and how contaminants are removed from sea-water. Additionally, the use of significant pressures to force water through the semipermeable membranes, which only allow water to pass through them, will be investigated. Throughout the paper, the topics of environmental and economic sustainability will be covered. Subsequently, the two primary methods of desalination, RO and multi-stage flash distillation (MSF), will be compared. It will become clear that RO is a better method of desalination when compared to MSF. This paper will study examples of RO in action, including; the Carlsbad Plant, the Sorek Plant, and applications beyond the potable water industry. It will be shown that The Claude \"Bud\" Lewis Carlsbad Desalination Plant (Carlsbad), located in San Diego, California is a vital resource in the water economy of the area. The impact of the Sorek Plant, located in Tel Aviv, Israel will also be explained. Both plants produce millions of gallons of fresh, drinkable water and are vital resources for the people that live there.",
"title": ""
},
{
"docid": "10496d5427035670d89f72a64b68047f",
"text": "A challenge for human-computer interaction researchers and user interf ace designers is to construct information technologies that support creativity. This ambitious goal can be attained by building on an adequate understanding of creative processes. This article offers a four-phase framework for creativity that might assist designers in providing effective tools for users: (1)Collect: learn from provious works stored in libraries, the Web, etc.; (2) Relate: consult with peers and mentors at early, middle, and late stages, (3)Create: explore, compose, evaluate possible solutions; and (4) Donate: disseminate the results and contribute to the libraries. Within this integrated framework, this article proposes eight activities that require human-computer interaction research and advanced user interface design. A scenario about an architect illustrates the process of creative work within such an environment.",
"title": ""
},
{
"docid": "c19b63a2c109c098c22877bcba8690ae",
"text": "A monolithic current-mode pulse width modulation (PWM) step-down dc-dc converter with 96.7% peak efficiency and advanced control and protection circuits is presented in this paper. The high efficiency is achieved by \"dynamic partial shutdown strategy\" which enhances circuit speed with less power consumption. Automatic PWM and \"pulse frequency modulation\" switching boosts conversion efficiency during light load operation. The modified current sensing circuit and slope compensation circuit simplify the current-mode control circuit and enhance the response speed. A simple high-speed over-current protection circuit is proposed with the modified current sensing circuit. The new on-chip soft-start circuit prevents the power on inrush current without additional off-chip components. The dc-dc converter has been fabricated with a 0.6 mum CMOS process and measured 1.35 mm2 with the controller measured 0.27 mm2. Experimental results show that the novel on-chip soft-start circuit with longer than 1.5 ms soft-start time suppresses the power-on inrush current. This converter can operate at 1.1 MHz with supply voltage from 2.2 to 6.0 V. Measured power efficiency is 88.5-96.7% for 0.9 to 800 mA output current and over 85.5% for 1000 mA output current.",
"title": ""
},
{
"docid": "cc5f814338606b92c92aa6caf2f4a3f5",
"text": "The purpose of this study was to report the outcome of infants with antenatal hydronephrosis. Between May 1999 and June 2006, all patients diagnosed with isolated fetal renal pelvic dilatation (RPD) were prospectively followed. The events of interest were: presence of uropathy, need for surgical intervention, RPD resolution, urinary tract infection (UTI), and hypertension. RPD was classified as mild (5–9.9 mm), moderate (10–14.9 mm) or severe (≥15 mm). A total of 192 patients was included in the analysis; 114 were assigned to the group of non-significant findings (59.4%) and 78 to the group of significant uropathy (40.6%). Of 89 patients with mild dilatation, 16 (18%) presented uropathy. Median follow-up time was 24 months. Twenty-seven patients (15%) required surgical intervention. During follow-up, UTI occurred in 27 (14%) children. Of 89 patients with mild dilatation, seven (7.8%) presented UTI during follow-up. Renal function, blood pressure, and somatic growth were within normal range at last visit. The majority of patients with mild fetal RPD have no significant findings during infancy. Nevertheless, our prospective study has shown that 18% of these patients presented uropathy and 7.8% had UTI during a medium-term follow-up time. Our findings suggested that, in contrast to patients with moderate/severe RPD, infants with mild RPD do not require invasive diagnostic procedures but need strict clinical surveillance for UTI and progression of RPD.",
"title": ""
},
{
"docid": "2f3bb54596bba8cd7a073ef91964842c",
"text": "BACKGROUND AND PURPOSE\nRecent meta-analyses have suggested similar wound infection rates when using single- or multiple-dose antibiotic prophylaxis in the operative management of closed long bone fractures. In order to assist clinicians in choosing the optimal prophylaxis strategy, we performed a cost-effectiveness analysis comparing single- and multiple-dose prophylaxis.\n\n\nMETHODS\nA cost-effectiveness analysis comparing the two prophylactic strategies was performed using time horizons of 60 days and 1 year. Infection probabilities, costs, and quality-adjusted life days (QALD) for each strategy were estimated from the literature. All costs were reported in 2007 US dollars. A base case analysis was performed for the surgical treatment of a closed ankle fracture. Sensitivity analysis was performed for all variables, including probabilistic sensitivity analysis using Monte Carlo simulation.\n\n\nRESULTS\nSingle-dose prophylaxis results in lower cost and a similar amount of quality-adjusted life days gained. The single-dose strategy had an average cost of $2,576 for an average gain of 272 QALD. Multiple doses had an average cost of $2,596 for 272 QALD gained. These results are sensitive to the incidence of surgical site infection and deep wound infection for the single-dose treatment arm. Probabilistic sensitivity analysis using all model variables also demonstrated preference for the single-dose strategy.\n\n\nINTERPRETATION\nAssuming similar infection rates between the prophylactic groups, our results suggest that single-dose prophylaxis is slightly more cost-effective than multiple-dose regimens for the treatment of closed fractures. Extensive sensitivity analysis demonstrates these results to be stable using published meta-analysis infection rates.",
"title": ""
},
{
"docid": "3aa58539c69d6706bc0a9ca0256cdf80",
"text": "BACKGROUND\nAcne vulgaris is a prevalent skin disorder impairing both physical and psychosocial health. This study was designed to investigate the effectiveness of photodynamic therapy (PDT) combined with minocycline in moderate to severe facial acne and influence on quality of life (QOL).\n\n\nMETHODS\nNinety-five patients with moderate to severe facial acne (Investigator Global Assessment [IGA] score 3-4) were randomly treated with PDT and minocycline (n = 48) or minocycline alone (n = 47). All patients took minocycline hydrochloride 100 mg/d for 4 weeks, whereas patients in the minocycline plus PDT group also received 4 times PDT treatment 1 week apart. IGA score, lesion counts, Dermatology Life Quality Index (DLQI), and safety evaluation were performed before treatment and at 2, 4, 6, and 8 weeks after enrolment.\n\n\nRESULTS\nThere were no statistically significant differences in characteristics between 2 treatment groups at baseline. Minocycline plus PDT treatment led to a greater mean percentage reduction from baseline in lesion counts versus minocycline alone at 8 weeks for both inflammatory (-74.4% vs -53.3%; P < .001) and noninflammatory lesions (-61.7% vs -42.4%; P < .001). More patients treated with minocycline plus PDT achieved IGA score <2 at study end (week 8: 30/48 vs 20/47; P < .05). Patients treated with minocycline plus PDT got significant lower DLQI at 8 weeks (4.4 vs 6.3; P < .001). Adverse events were mild and manageable.\n\n\nCONCLUSIONS\nCompared with minocycline alone, the combination of PDT with minocycline significantly improved clinical efficacy and QOL in moderate to severe facial acne patients.",
"title": ""
},
{
"docid": "bf4a991dbb32ec1091a535750637dbd7",
"text": "As cutting-edge experiments display ever more extreme forms of non-classical behavior, the prevailing view on the interpretation of quantum mechanics appears to be gradually changing. A (highly unscientific) poll taken at the 1997 UMBC quantum mechanics workshop gave the once alldominant Copenhagen interpretation less than half of the votes. The Many Worlds interpretation (MWI) scored second, comfortably ahead of the Consistent Histories and Bohm interpretations. It is argued that since all the above-mentioned approaches to nonrelativistic quantum mechanics give identical cookbook prescriptions for how to calculate things in practice, practical-minded experimentalists, who have traditionally adopted the “shut-up-and-calculate interpretation”, typically show little interest in whether cozy classical concepts are in fact real in some untestable metaphysical sense or merely the way we subjectively perceive a mathematically simpler world where the Schrödinger equation describes everything — and that they are therefore becoming less bothered by a profusion of worlds than by a profusion of words. Common objections to the MWI are discussed. It is argued that when environment-induced decoherence is taken into account, the experimental predictions of the MWI are identical to those of the Copenhagen interpretation except for an experiment involving a Byzantine form of “quantum suicide”. This makes the choice between them purely a matter of taste, roughly equivalent to whether one believes mathematical language or human language to be more fundamental.",
"title": ""
},
{
"docid": "f274062a188fb717b8645e4d2352072a",
"text": "CPU-FPGA heterogeneous acceleration platforms have shown great potential for continued performance and energy efficiency improvement for modern data centers, and have captured great attention from both academia and industry. However, it is nontrivial for users to choose the right platform among various PCIe and QPI based CPU-FPGA platforms from different vendors. This paper aims to find out what microarchitectural characteristics affect the performance, and how. We conduct our quantitative comparison and in-depth analysis on two representative platforms: QPI-based Intel-Altera HARP with coherent shared memory, and PCIe-based Alpha Data board with private device memory. We provide multiple insights for both application developers and platform designers.",
"title": ""
},
{
"docid": "c9c29c091c9851920315c4d4b38b4c9f",
"text": "BACKGROUND\nThe presence of six or more café au lait (CAL) spots is a criterion for the diagnosis of neurofibromatosis type 1 (NF-1). Children with multiple CAL spots are often referred to dermatologists for NF-1 screening. The objective of this case series is to characterize a subset of fair-complected children with red or blond hair and multiple feathery CAL spots who did not meet the criteria for NF-1 at the time of their last evaluation.\n\n\nMETHODS\nWe conducted a chart review of eight patients seen in our pediatric dermatology clinic who were previously identified as having multiple CAL spots and no other signs or symptoms of NF-1.\n\n\nRESULTS\nWe describe eight patients ages 2 to 9 years old with multiple, irregular CAL spots with feathery borders and no other signs or symptoms of NF-1. Most of these patients had red or blond hair and were fair complected. All patients were evaluated in our pediatric dermatology clinic, some with a geneticist. The number of CAL spots per patient ranged from 5 to 15 (mean 9.4, median 9).\n\n\nCONCLUSION\nA subset of children, many with fair complexions and red or blond hair, has an increased number of feathery CAL spots and appears unlikely to develop NF-1, although genetic testing was not conducted. It is important to recognize the benign nature of CAL spots in these patients so that appropriate screening and follow-up recommendations may be made.",
"title": ""
},
{
"docid": "fc07af4d49f7b359e484381a0a88aff7",
"text": "In this paper, we develop the idea of a universal anytime intelligence test. The meaning of the terms “universal” and “anytime” is manifold here: the test should be able to measure the intelligence of any biological or artificial system that exists at this time or in the future. It should also be able to evaluate both inept and brilliant systems (any intelligence level) as well as very slow to very fast systems (any time scale). Also, the test may be interrupted at any time, producing an approximation to the intelligence score, in such a way that the more time is left for the test, the better the assessment will be. In order to do this, our test proposal is based on previous works on the measurement of machine intelligence based on Kolmogorov Complexity and universal distributions, which were developed in the late 1990s (C-tests and compression-enhanced Turing tests). It is also based on the more recent idea of measuring intelligence through dynamic/interactive tests held against a universal distribution of environments. We discuss some of these tests and highlight their limitations since we want to construct a test that is both general and practical. Consequently, we introduce many new ideas that develop early “compression tests” and the more recent definition of “universal intelligence” in order to design new “universal intelligence tests”, where a feasible implementation has been a design requirement. One of these tests is the “anytime intelligence test”, which adapts to the examinee’s level of intelligence in order to obtain an intelligence score within a limited time.",
"title": ""
},
{
"docid": "0cd2da131bf78526c890dae72514a8f0",
"text": "This paper presents a research model to explicate that the level of consumers’ participation on companies’ brand microblogs is influenced by their brand attachment process. That is, self-congruence and partner quality affect consumers’ trust and commitment toward companies’ brands, which in turn influence participation on brand microblogs. Further, we propose that gender has important moderating effects in our research model. We empirically test the research hypotheses through an online survey. The findings illustrate that self-congruence and partner quality have positive effects on trust and commitment. Trust affects commitment and participation, while participation is also influenced by commitment. More importantly, the effects of self-congruence on trust and commitment are found to be stronger for male consumers than females. In contrast, the effects of partner quality on trust and commitment are stronger for female consumers than males. Trust posits stronger effects on commitment and participation for males, while commitment has a stronger effect on participation for females. We believe that our findings contribute to the literature on consumer participation behavior and gender differences on brand microblogs. Companies can also apply our findings to strengthen their brand building and participation level of different consumers on their microblogs. 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "1ec62f70be9d006b7e1295ef8d9cb1e3",
"text": "The aim of this research is to explore social media and its benefits especially from business-to-business innovation and related customer interface perspective, and to create a more comprehensive picture of the possibilities of social media for the business-to-business sector. Business-to-business context was chosen because it is in many ways a very different environment for social media than business-to-consumer context, and is currently very little academically studied. A systematic literature review on B2B use of social media and achieved benefits in the inn ovation con text was performed to answer the questions above and achieve the research goals. The study clearly demonstrates that not merely B2C's, as commonly believed, but also B2B's can benefit from the use of social media in a variety of ways. Concerning the broader classes of innovation --related benefits, the reported benefits of social media use referred to increased customer focus and understanding, increased level of customer service, and decreased time-to-market. The study contributes to the existing social media --related literature, because there were no found earlier comprehensive academic studies on the use of social media in the innovation process in the context of B2B customer interface.",
"title": ""
},
{
"docid": "97c162261666f145da6e81d2aa9a8343",
"text": "Shape optimization is a growing field of interest in many areas of academic research, marine design, and manufacturing. As part of the CREATE Ships Hydromechanics Product, an effort is underway to develop a computational tool set and process framework that can aid the ship designer in making informed decisions regarding the influence of the planned hull shape on its hydrodynamic characteristics, even at the earliest stages where decisions can have significant cost implications. The major goal of this effort is to utilize the increasing experience gained in using these methods to assess shape optimization techniques and how they might impact design for current and future naval ships. Additionally, this effort is aimed at establishing an optimization framework within the bounds of a collaborative design environment that will result in improved performance and better understanding of preliminary ship designs at an early stage. The initial effort demonstrated here is aimed at ship resistance, and examples are shown for full ship and localized bow dome shaping related to the Joint High Speed Sealift (JHSS) hull concept. Introduction Any ship design inherently involves optimization, as competing requirements and design parameters force the design to evolve, and as designers strive to deliver the most effective and efficient platform possible within the constraints of time, budget, and performance requirements. A significant number of applications of computational fluid dynamics (CFD) tools to hydrodynamic optimization, mostly for reducing calm-water drag and wave patterns, demonstrate a growing interest in optimization. In addition, more recent ship design programs within the US Navy illustrate some fundamental changes in mission and performance requirements, and future ship designs may be radically different from current ships in the fleet. One difficulty with designing such new concepts is the lack of experience from which to draw from when performing design studies; thus, optimization techniques may be particularly useful. These issues point to a need for greater fidelity, robustness, and ease of use in the tools used in early stage ship design. The Computational Research and Engineering Acquisition Tools and Environments (CREATE) program attempts to address this in its plan to develop and deploy sets of computational engineering design and analysis tools. It is expected that advances in computers will allow for highly accurate design and analyses studies that can be carried out throughout the design process. In order to evaluate candidate designs and explore the design space more thoroughly shape optimization is an important component of the CREATE Ships Hydromechanics Product. The current program development plan includes fast parameterized codes to bound the design space and more accurate Reynolds-Averaged Navier-Stokes (RANS) codes to better define the geometry and performance of the specified hull forms. The potential for hydrodynamic shape optimization has been demonstrated for a variety of different hull forms, including multi-hulls, in related efforts (see e.g., Wilson et al, 2009, Stern et al, Report Documentation Page Form Approved",
"title": ""
},
{
"docid": "7a8a98b91680cbc63594cd898c3052c8",
"text": "Policy-based access control is a technology that achieves separation of concerns through evaluating an externalized policy at each access attempt. While this approach has been well-established for request-response applications, it is not supported for database queries of data-driven applications, especially for attribute-based policies. In particular, search operations for such applications involve poor scalability with regard to the data set size for this approach, because they are influenced by dynamic runtime conditions. This paper proposes a scalable application-level middleware solution that performs runtime injection of the appropriate rules into the original search query, so that the result set of the search includes only items to which the subject is entitled. Our evaluation shows that our method scales far better than current state of practice approach that supports policy-based access control.",
"title": ""
}
] |
scidocsrr
|
b5058bb2c8ad7534f010c04fa0032c83
|
SurroundSense: mobile phone localization via ambience fingerprinting
|
[
{
"docid": "ed9e22167d3e9e695f67e208b891b698",
"text": "ÐIn k-means clustering, we are given a set of n data points in d-dimensional space R and an integer k and the problem is to determine a set of k points in R, called centers, so as to minimize the mean squared distance from each data point to its nearest center. A popular heuristic for k-means clustering is Lloyd's algorithm. In this paper, we present a simple and efficient implementation of Lloyd's k-means clustering algorithm, which we call the filtering algorithm. This algorithm is easy to implement, requiring a kd-tree as the only major data structure. We establish the practical efficiency of the filtering algorithm in two ways. First, we present a data-sensitive analysis of the algorithm's running time, which shows that the algorithm runs faster as the separation between clusters increases. Second, we present a number of empirical studies both on synthetically generated data and on real data sets from applications in color quantization, data compression, and image segmentation. Index TermsÐPattern recognition, machine learning, data mining, k-means clustering, nearest-neighbor searching, k-d tree, computational geometry, knowledge discovery.",
"title": ""
},
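The preceding passage (docid ed9e2216…) describes the filtering algorithm, a kd-tree-based implementation of Lloyd's k-means iteration. For reference, here is the plain Lloyd's iteration that the filtering algorithm accelerates; the kd-tree pruning that produces the speed-up is deliberately not shown, and the toy data are my own:

```python
import numpy as np

def lloyd_kmeans(points, k, n_iter=50, seed=0):
    """Plain Lloyd's algorithm: assign each point to its nearest center, then
    move each center to the mean of its assigned points.  The filtering
    algorithm computes the same iteration but prunes candidate centers with a
    kd-tree instead of scanning the full distance matrix built here."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(n_iter):
        # squared distances, shape (n_points, k)
        d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        new_centers = np.array([
            points[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
            for j in range(k)
        ])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return centers, labels

pts = np.vstack([np.random.default_rng(1).normal(m, 0.3, size=(100, 2))
                 for m in (0.0, 3.0, 6.0)])
centers, labels = lloyd_kmeans(pts, k=3)
print(centers)
```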
{
"docid": "8718d91f37d12b8ff7658723a937ea84",
"text": "We consider the problem of monitoring road and traffic conditions in a city. Prior work in this area has required the deployment of dedicated sensors on vehicles and/or on the roadside, or the tracking of mobile phones by service providers. Furthermore, prior work has largely focused on the developed world, with its relatively simple traffic flow patterns. In fact, traffic flow in cities of the developing regions, which comprise much of the world, tends to be much more complex owing to varied road conditions (e.g., potholed roads), chaotic traffic (e.g., a lot of braking and honking), and a heterogeneous mix of vehicles (2-wheelers, 3-wheelers, cars, buses, etc.).\n To monitor road and traffic conditions in such a setting, we present Nericell, a system that performs rich sensing by piggybacking on smartphones that users carry with them in normal course. In this paper, we focus specifically on the sensing component, which uses the accelerometer, microphone, GSM radio, and/or GPS sensors in these phones to detect potholes, bumps, braking, and honking. Nericell addresses several challenges including virtually reorienting the accelerometer on a phone that is at an arbitrary orientation, and performing honk detection and localization in an energy efficient manner. We also touch upon the idea of triggered sensing, where dissimilar sensors are used in tandem to conserve energy. We evaluate the effectiveness of the sensing functions in Nericell based on experiments conducted on the roads of Bangalore, with promising results.",
"title": ""
}
] |
[
{
"docid": "a5ed1ebf973e3ed7ea106e55795e3249",
"text": "The variable reluctance (VR) resolver is generally used instead of an optical encoder as a position sensor on motors for hybrid electric vehicles or electric vehicles owing to its reliability, low cost, and ease of installation. The commonly used conventional winding method for the VR resolver has disadvantages, such as complicated winding and unsuitability for mass production. This paper proposes an improved winding method that leads to simpler winding and better suitability for mass production than the conventional method. In this paper, through the design and finite element analysis for two types of output winding methods, the advantages and disadvantages of each method are presented, and the validity of the proposed winding method is verified. In addition, experiments with the VR resolver using the proposed winding method have been performed to verify its performance.",
"title": ""
},
{
"docid": "cbde86d9b73371332a924392ae1f10d0",
"text": "The difficulty to solve multiple objective combinatorial optimization problems with traditional techniques has urged researchers to look for alternative, better performing approaches for them. Recently, several algorithms have been proposed which are based on the Ant Colony Optimization metaheuristic. In this contribution, the existing algorithms of this kind are reviewed and experimentally tested in several instances of the bi-objective traveling salesman problem, comparing their performance with that of two well-known multi-objective genetic algorithms.",
"title": ""
},
{
"docid": "446af0ad077943a77ac4a38fd84df900",
"text": "We investigate the manufacturability of 20-nm double-gate and FinFET devices in integrated circuits by projecting process tolerances. Two important factors affecting the sensitivity of device electrical parameters to physical variations were quantitatively considered. The quantum effect was computed using the density gradient method and the sensitivity of threshold voltage to random dopant fluctuation was studied by Monte Carlo simulation. Our results show the 3 value ofVT variation caused by discrete impurity fluctuation can be greater than 100%. Thus, engineering the work function of gate materials and maintaining a nearly intrinsic channel is more desirable. Based on a design with an intrinsic channel and ideal gate work function, we analyzed the sensitivity of device electrical parameters to several important physical fluctuations such as the variations in gate length, body thickness, and gate dielectric thickness. We found that quantum effects have great impact on the performance of devices. As a result, the device electrical behavior is sensitive to small variations of body thickness. The effect dominates over the effects produced by other physical fluctuations. To achieve a relative variation of electrical parameters comparable to present practice in industry, we face a challenge of fin width control (less than 1 nm 3 value of variation) for the 20-nm FinFET devices. The constraint of the gate length variation is about 10 15%. We estimate a tolerance of 1 2 A 3 value of oxide thickness variation and up to 30% front-back oxide thickness mismatch.",
"title": ""
},
{
"docid": "de0d2808f949723f1c0ee8e87052f889",
"text": "The notion of Cloud computing has not only reshaped the field of distributed systems but also fundamentally changed how businesses utilize computing today. While Cloud computing provides many advanced features, it still has some shortcomings such as the relatively high operating cost for both public and private Clouds. The area of Green computing is also becoming increasingly important in a world with limited energy resources and an ever-rising demand for more computational power. In this paper a new framework is presented that provides efficient green enhancements within a scalable Cloud computing architecture. Using power-aware scheduling techniques, variable resource management, live migration, and a minimal virtual machine design, overall system efficiency will be vastly improved in a data center based Cloud with minimal performance overhead.",
"title": ""
},
{
"docid": "e0d6212e77cbd54b54db5d38eca29814",
"text": "Summarization aims to represent source documents by a shortened passage. Existing methods focus on the extraction of key information, but often neglect coherence. Hence the generated summaries suffer from a lack of readability. To address this problem, we have developed a graph-based method by exploring the links between text to produce coherent summaries. Our approach involves finding a sequence of sentences that best represent the key information in a coherent way. In contrast to the previous methods that focus only on salience, the proposed method addresses both coherence and informativeness based on textual linkages. We conduct experiments on the DUC2004 summarization task data set. A performance comparison reveals that the summaries generated by the proposed system achieve comparable results in terms of the ROUGE metric, and show improvements in readability by human evaluation.",
"title": ""
},
{
"docid": "d9b19dd523fd28712df61384252d331c",
"text": "Purpose – The purpose of this paper is to examine the ways in which governments build social media and information and communication technologies (ICTs) into e-government transparency initiatives, to promote collaboration with members of the public and the ways in members of the public are able to employ the same social media to monitor government activities. Design/methodology/approach – This study used an iterative strategy that involved conducting a literature review, content analysis, and web site analysis, offering multiple perspectives on government transparency efforts, the role of ICTs and social media in these efforts, and the ability of e-government initiatives to foster collaborative transparency through embedded ICTs and social media. Findings – The paper identifies key initiatives, potential impacts, and future challenges for collaborative e-government as a means of transparency. Originality/value – The paper is one of the first to examine the interrelationships between ICTs, social media, and collaborative e-government to facilitate transparency.",
"title": ""
},
{
"docid": "d7c27413eb3f379618d1aafd85a43d3f",
"text": "This paper presents a tool Altair that automatically generates API function cross-references, which emphasizes reliable structural measures and does not depend on specific client code. Altair ranks related API functions for a given query according to pair-wise overlap, i.e., how they share state, and clusters tightly related ones into meaningful modules.\n Experiments against several popular C software packages show that Altair recommends related API functions for a given query with remarkably more precise and complete results than previous tools, that it can extract modules from moderate-sized software (e.g., Apache with 1000+ functions) at high precision and recall rates (e.g., both exceeding 70% for two modules in Apache), and that the computation can finish within a few seconds.",
"title": ""
},
{
"docid": "44b7ed6c8297b6f269c8b872b0fd6266",
"text": "vii",
"title": ""
},
{
"docid": "ee18a820614aac64d26474796464b518",
"text": "Recommender systems have already proved to be valuable for coping with the information overload problem in several application domains. They provide people with suggestions for items which are likely to be of interest for them; hence, a primary function of recommender systems is to help people make good choices and decisions. However, most previous research has focused on recommendation techniques and algorithms, and less attention has been devoted to the decision making processes adopted by the users and possibly supported by the system. There is still a gap between the importance that the community gives to the assessment of recommendation algorithms and the current range of ongoing research activities concerning human decision making. Different decision-psychological phenomena can influence the decision making of users of recommender systems, and research along these lines is becoming increasingly important and popular. This special issue highlights how the coupling of recommendation algorithms with the understanding of human choice and decision making theory has the potential to benefit research and practice on recommender systems and to enable users to achieve a good balance between decision accuracy and decision effort.",
"title": ""
},
{
"docid": "dd1e7bb3ba33c5ea711c0d066db53fa9",
"text": "This paper presents the development and test of a flexible control strategy for an 11-kW wind turbine with a back-to-back power converter capable of working in both stand-alone and grid-connection mode. The stand-alone control is featured with a complex output voltage controller capable of handling nonlinear load and excess or deficit of generated power. Grid-connection mode with current control is also enabled for the case of isolated local grid involving other dispersed power generators such as other wind turbines or diesel generators. A novel automatic mode switch method based on a phase-locked loop controller is developed in order to detect the grid failure or recovery and switch the operation mode accordingly. A flexible digital signal processor (DSP) system that allows user-friendly code development and online tuning is used to implement and test the different control strategies. The back-to-back power conversion configuration is chosen where the generator converter uses a built-in standard flux vector control to control the speed of the turbine shaft while the grid-side converter uses a standard pulse-width modulation active rectifier control strategy implemented in a DSP controller. The design of the longitudinal conversion loss filter and of the involved PI-controllers are described in detail. Test results show the proposed methods works properly.",
"title": ""
},
{
"docid": "79287d0ca833605430fefe4b9ab1fd92",
"text": "Passwords are frequently used in data encryption and user authentication. Since people incline to choose meaningful words or numbers as their passwords, lots of passwords are easy to guess. This paper introduces a password guessing method based on Long Short-Term Memory recurrent neural networks. After training our LSTM neural network with 30 million passwords from leaked Rockyou dataset, the generated 3.35 billion passwords could cover 81.52% of the remaining Rockyou dataset. Compared with PCFG and Markov methods, this method shows higher coverage rate.",
"title": ""
},
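The preceding passage (docid 79287d0c…) trains a character-level LSTM on leaked passwords and samples new guesses from it. The PyTorch sketch below mirrors that setup at toy scale; the corpus, vocabulary handling, and hyperparameters are placeholders of my own, not those of the study:

```python
import torch
import torch.nn as nn

PAD, EOS = 0, 1  # special token ids; EOS doubles as the start-of-sequence marker

class CharLSTM(nn.Module):
    """Character-level next-token model for generating password guesses."""
    def __init__(self, vocab_size, emb=32, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, x, state=None):
        h, state = self.lstm(self.embed(x), state)
        return self.out(h), state

def encode(pw, stoi):
    return [stoi[c] for c in pw] + [EOS]

# toy corpus and vocabulary (a real run would use millions of leaked passwords)
corpus = ["password", "123456", "qwerty", "letmein"]
chars = sorted(set("".join(corpus)))
stoi = {c: i + 2 for i, c in enumerate(chars)}          # 0=PAD, 1=EOS
itos = {i: c for c, i in stoi.items()}

model = CharLSTM(vocab_size=len(stoi) + 2)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss(ignore_index=PAD)

for _ in range(200):                                    # tiny training loop
    seqs = [torch.tensor([EOS] + encode(p, stoi)) for p in corpus]
    batch = nn.utils.rnn.pad_sequence(seqs, batch_first=True, padding_value=PAD)
    logits, _ = model(batch[:, :-1])                    # predict the next character
    loss = loss_fn(logits.reshape(-1, logits.size(-1)), batch[:, 1:].reshape(-1))
    opt.zero_grad(); loss.backward(); opt.step()

def sample(model, max_len=12):
    """Sample one candidate password character by character."""
    x, state, out = torch.tensor([[EOS]]), None, []
    for _ in range(max_len):
        logits, state = model(x, state)
        probs = torch.softmax(logits[0, -1], dim=-1)
        nxt = torch.multinomial(probs, 1).item()
        if nxt in (EOS, PAD):
            break
        out.append(itos[nxt])
        x = torch.tensor([[nxt]])
    return "".join(out)

print([sample(model) for _ in range(5)])
```

Coverage against a held-out leak, as reported in the passage, would then be measured by generating a large candidate set and counting how many held-out passwords it contains.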
{
"docid": "27ffdb0d427d2e281ffe84e219e6ed72",
"text": "UNLABELLED\nHitherto, noncarious cervical lesions (NCCLs) of teeth have been generally ascribed to either toothbrush-dentifrice abrasion or acid \"erosion.\" The last two decades have provided a plethora of new studies concerning such lesions. The most significant studies are reviewed and integrated into a practical approach to the understanding and designation of these lesions. A paradigm shift is suggested regarding use of the term \"biocorrosion\" to supplant \"erosion\" as it continues to be misused in the United States and many other countries of the world. Biocorrosion embraces the chemical, biochemical, and electrochemical degradation of tooth substance caused by endogenous and exogenous acids, proteolytic agents, as well as the piezoelectric effects only on dentin. Abfraction, representing the microstructural loss of tooth substance in areas of stress concentration, should not be used to designate all NCCLs because these lesions are commonly multifactorial in origin. Appropriate designation of a particular NCCL depends upon the interplay of the specific combination of three major mechanisms: stress, friction, and biocorrosion, unique to that individual case. Modifying factors, such as saliva, tongue action, and tooth form, composition, microstructure, mobility, and positional prominence are elucidated.\n\n\nCLINICAL SIGNIFICANCE\nBy performing a comprehensive medical and dental history, using precise terms and concepts, and utilizing the Revised Schema of Pathodynamic Mechanisms, the dentist may successfully identify and treat the etiology of root surface lesions. Preventive measures may be instituted if the causative factors are detected and their modifying factors are considered.",
"title": ""
},
{
"docid": "598dd39ec35921242b94f17e24b30389",
"text": "In this paper, we present a study on the characterization and the classification of textures. This study is performed using a set of values obtained by the computation of indexes. To obtain these indexes, we extract a set of data with two techniques: the computation of matrices which are statistical representations of the texture and the computation of \"measures\". These matrices and measures are subsequently used as parameters of a function bringing real or discrete values which give information about texture features. A model of texture characterization is built based on this numerical information, for example to classify textures. An application is proposed to classify cells nuclei in order to diagnose patients affected by the Progeria disease.",
"title": ""
},
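The preceding passage (docid 598dd39e…) characterises textures by computing statistical matrices and index values derived from them. The passage does not name its matrices, so the gray-level co-occurrence matrix and Haralick-style indexes below are an assumption chosen purely for illustration:

```python
import numpy as np

def glcm(image, dx=1, dy=0, levels=8):
    """Gray-level co-occurrence matrix for one displacement (dx, dy):
    counts how often gray level i occurs next to gray level j."""
    img = (image.astype(float) / (image.max() + 1e-12) * (levels - 1)).astype(int)
    mat = np.zeros((levels, levels), dtype=float)
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            mat[img[y, x], img[y + dy, x + dx]] += 1
    return mat / (mat.sum() + 1e-12)

def texture_indexes(p):
    """A few classic indexes computed from a normalised co-occurrence matrix."""
    i, j = np.indices(p.shape)
    return {
        "contrast": float(np.sum(p * (i - j) ** 2)),
        "energy": float(np.sum(p ** 2)),
        "homogeneity": float(np.sum(p / (1.0 + np.abs(i - j)))),
        "entropy": float(-np.sum(p[p > 0] * np.log(p[p > 0]))),
    }

rng = np.random.default_rng(0)
smooth = np.tile(np.linspace(0, 1, 32), (32, 1))     # smooth gradient texture
noisy = rng.random((32, 32))                         # rough random texture
print(texture_indexes(glcm(smooth)))
print(texture_indexes(glcm(noisy)))
```

Index vectors like these could then feed a classifier of cell nuclei, as in the passage's application, though the actual measures used there may differ.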
{
"docid": "159e040b0e74ad1b6124907c28e53daf",
"text": "People (pedestrians, drivers, passengers in public transport) use different services on small mobile gadgets on a daily basis. So far, mobile applications don't react to context changes. Running services should adapt to the changing environment and new services should be installed and deployed automatically. We propose a classification of context elements that influence the behavior of the mobile services, focusing on the challenges of the transportation domain. Malware Detection on Mobile Devices Asaf Shabtai*, Ben-Gurion University, Israel Abstract: We present various approaches for mitigating malware on mobile devices which we have implemented and evaluated on Google Android. Our work is divided into the following three segments: a host-based intrusion detection framework; an implementation of SELinux in Android; and static analysis of Android application files. We present various approaches for mitigating malware on mobile devices which we have implemented and evaluated on Google Android. Our work is divided into the following three segments: a host-based intrusion detection framework; an implementation of SELinux in Android; and static analysis of Android application files. Dynamic Approximative Data Caching in Wireless Sensor Networks Nils Hoeller*, IFIS, University of Luebeck Abstract: Communication in Wireless Sensor Networks generally is the most energy consuming task. Retrieving query results from deep within the sensor network therefore consumes a lot of energy and hence shortens the network's lifetime. In this work optimizations Communication in Wireless Sensor Networks generally is the most energy consuming task. Retrieving query results from deep within the sensor network therefore consumes a lot of energy and hence shortens the network's lifetime. In this work optimizations for processing queries by using adaptive caching structures are discussed. Results can be retrieved from caches that are placed nearer to the query source. As a result the communication demand is reduced and hence energy is saved by using the cached results. To verify cache coherence in networks with non-reliable communication channels, an approximate update policy is presented. A degree of result quality can be defined for a query to find the adequate cache adaptively. Gossip-based Data Fusion Framework for Radio Resource Map Jin Yang*, Ilmenau University of Technology Abstract: In disaster scenarios, sensor networks are used to detect changes and estimate resource availability to further support the system recovery and rescue process. In this PhD thesis, sensor networks are used to detect available radio resources in order to form a global view of the radio resource map, based on locally sensed and measured data. Data fusion and harvesting techniques are employed for the generation and maintenance of this “radio resource map.” In order to guarantee the flexibility and fault tolerance goals of disaster scenarios, a gossip protocol is used to exchange information. The radio propagation field knowledge is closely coupled to harvesting and fusion protocols in order to achieve efficient fusing of radio measurement data. For the evaluation, simulations will be used to measure the efficiency and robustness in relation to time critical applications and various deployment densities. Actual radio data measurements within the Ilmenau area are being collected for further analysis of the map quality and in order to verify simulation results. 
In disaster scenarios, sensor networks are used to detect changes and estimate resource availability to further support the system recovery and rescue process. In this PhD thesis, sensor networks are used to detect available radio resources in order to form a global view of the radio resource map, based on locally sensed and measured data. Data fusion and harvesting techniques are employed for the generation and maintenance of this “radio resource map.” In order to guarantee the flexibility and fault tolerance goals of disaster scenarios, a gossip protocol is used to exchange information. The radio propagation field knowledge is closely coupled to harvesting and fusion protocols in order to achieve efficient fusing of radio measurement data. For the evaluation, simulations will be used to measure the efficiency and robustness in relation to time critical applications and various deployment densities. Actual radio data measurements within the Ilmenau area are being collected for further analysis of the map quality and in order to verify simulation results. Dynamic Social Grouping Based Routing in a Mobile Ad-Hoc Network Roy Cabaniss*, Missouri S&T Abstract: Trotta, University of Missouri, Kansas City, Srinivasa Vulli, Missouri University S&T The patterns of movement used by Mobile Ad-Hoc networks are application specific, in the sense that networks use nodes which travel in different paths. When these nodes are used in experiments involving social patterns, such as wildlife tracking, algorithms which detect and use these patterns can be used to improve routing efficiency. The intent of this paper is to introduce a routing algorithm which forms a series of social groups which accurately indicate a node’s regular contact patterns while dynamically shifting to represent changes to the social environment. With the social groups formed, a probabilistic routing schema is used to effectively identify which social groups have consistent contact with the base station, and route accordingly. The algorithm can be implemented dynamically, in the sense that the nodes initially have no awareness of their environment, and works to reduce overhead and message traffic while maintaining high delivery ratio. Trotta, University of Missouri, Kansas City, Srinivasa Vulli, Missouri University S&T The patterns of movement used by Mobile Ad-Hoc networks are application specific, in the sense that networks use nodes which travel in different paths. When these nodes are used in experiments involving social patterns, such as wildlife tracking, algorithms which detect and use these patterns can be used to improve routing efficiency. The intent of this paper is to introduce a routing algorithm which forms a series of social groups which accurately indicate a node’s regular contact patterns while dynamically shifting to represent changes to the social environment. With the social groups formed, a probabilistic routing schema is used to effectively identify which social groups have consistent contact with the base station, and route accordingly. The algorithm can be implemented dynamically, in the sense that the nodes initially have no awareness of their environment, and works to reduce overhead and message traffic while maintaining high delivery ratio. MobileSOA framework for Context-Aware Mobile Applications Aaratee Shrestha*, University of Leipzig Abstract: Mobile application development is more challenging when context-awareness is taken into account. 
This research introduces the benefit of implementing a Mobile Service Oriented Architecture (SOA). A robust mobile SOA framework is designed for building and operating lightweight and flexible Context-Aware Mobile Application (CAMA). We develop a lightweight and flexible CAMA to show dynamic integration of the systems, where all operations run smoothly in response to the rapidly changing environment using local and remote services. Keywords-service oriented architecture (SOA); mobile service; context-awareness; contextaware mobile application (CAMA). Mobile application development is more challenging when context-awareness is taken into account. This research introduces the benefit of implementing a Mobile Service Oriented Architecture (SOA). A robust mobile SOA framework is designed for building and operating lightweight and flexible Context-Aware Mobile Application (CAMA). We develop a lightweight and flexible CAMA to show dynamic integration of the systems, where all operations run smoothly in response to the rapidly changing environment using local and remote services. Keywords-service oriented architecture (SOA); mobile service; context-awareness; contextaware mobile application (CAMA). Performance Analysis of Secure Hierarchical Data Aggregation in Wireless Sensor Networks Vimal Kumar*, Missouri S&T Abstract: Data aggregation is a technique used to conserve battery power in wireless sensor networks (WSN). While providing security in such a scenario it is also important that we minimize the number of security operations as they are computationally expensive, without compromising on the security. In this paper we evaluate the performance of such an end to end security algorithm. We provide our results from the implementation of the algorithm on mica2 motes and conclude how it is better than traditional hop by hop security. Data aggregation is a technique used to conserve battery power in wireless sensor networks (WSN). While providing security in such a scenario it is also important that we minimize the number of security operations as they are computationally expensive, without compromising on the security. In this paper we evaluate the performance of such an end to end security algorithm. We provide our results from the implementation of the algorithm on mica2 motes and conclude how it is better than traditional hop by hop security. A Communication Efficient Framework for Finding Outliers in Wireless Sensor Networks Dylan McDonald*, MS&T Abstract: Outlier detection is a well studied problem in various fields. The unique challenges of wireless sensor networks make this problem especially challenging. Sensors can detect outliers for a plethora of reasons and these reasons need to be inferred in real time. Here, we present a new communication technique to find outliers in a wireless sensor network. Communication is minimized through controlling sensor when sensors are allowed to communicate. At the same time, minimal assumptions are made about the nature of the data set as to ",
"title": ""
},
{
"docid": "b142873eed364bd471fbe231cd19c27d",
"text": "Robotics have long sought an actuation technology comparable to or as capable as biological muscle tissue. Natural muscles exhibit a high power-to-weight ratio, inherent compliance and damping, fast action, and a high dynamic range. They also produce joint displacements and forces without the need for gearing or additional hardware. Recently, supercoiled commercially available polymer threads (sewing thread or nylon fishing lines) have been used to create significant mechanical power in a muscle-like form factor. Heating and cooling the polymer threads causes contraction and expansion, which can be utilized for actuation. In this paper, we describe the working principle of supercoiled polymer (SCP) actuation and explore the controllability and properties of these threads. We show that under appropriate environmental conditions, the threads are suitable as a building block for a controllable artificial muscle. We leverage off-the-shelf silver-coated threads to enable rapid electrical heating while the low thermal mass allows for rapid cooling. We utilize both thermal and thermomechanical models for feed-forward and feedback control. The resulting SCP actuator regulates to desired force levels in as little as 28 ms. Together with its inherent stiffness and damping, this is sufficient for a position controller to execute large step movements in under 100 ms. This controllability, high performance, the mechanical properties, and the extremely low material cost are indicative of a viable artificial muscle.",
"title": ""
},
{
"docid": "2cb0c74e57dea6fead692d35f8a8fac6",
"text": "Matching local image descriptors is a key step in many computer vision applications. For more than a decade, hand-crafted descriptors such as SIFT have been used for this task. Recently, multiple new descriptors learned from data have been proposed and shown to improve on SIFT in terms of discriminative power. This paper is dedicated to an extensive experimental evaluation of learned local features to establish a single evaluation protocol that ensures comparable results. In terms of matching performance, we evaluate the different descriptors regarding standard criteria. However, considering matching performance in isolation only provides an incomplete measure of a descriptors quality. For example, finding additional correct matches between similar images does not necessarily lead to a better performance when trying to match images under extreme viewpoint or illumination changes. Besides pure descriptor matching, we thus also evaluate the different descriptors in the context of image-based reconstruction. This enables us to study the descriptor performance on a set of more practical criteria including image retrieval, the ability to register images under strong viewpoint and illumination changes, and the accuracy and completeness of the reconstructed cameras and scenes. To facilitate future research, the full evaluation pipeline is made publicly available.",
"title": ""
},
{
"docid": "7e4c00d8f17166cbfb3bdac8d5e5ad09",
"text": "Twitter is now used to distribute substantive content such as breaking news, increasing the importance of assessing the credibility of tweets. As users increasingly access tweets through search, they have less information on which to base credibility judgments as compared to consuming content from direct social network connections. We present survey results regarding users' perceptions of tweet credibility. We find a disparity between features users consider relevant to credibility assessment and those currently revealed by search engines. We then conducted two experiments in which we systematically manipulated several features of tweets to assess their impact on credibility ratings. We show that users are poor judges of truthfulness based on content alone, and instead are influenced by heuristics such as user name when making credibility assessments. Based on these findings, we discuss strategies tweet authors can use to enhance their credibility with readers (and strategies astute readers should be aware of!). We propose design improvements for displaying social search results so as to better convey credibility.",
"title": ""
},
{
"docid": "e39ad8ee1d913cba1707b6aafafceefb",
"text": "Thoracic Outlet Syndrome (TOS) is the constellation of symptoms caused by compression of neurovascular structures at the superior aperture of the thorax, properly the thoracic inlet! The diagnosis and treatment is contentious and some even question its existence. Symptoms are often confused with distal compression neuropathies or cervical",
"title": ""
},
{
"docid": "f56c5a623b29b88f42bf5d6913b2823e",
"text": "We describe a novel interface for composition of polygonal meshes based around two artist-oriented tools: Geometry Drag-and-Drop and Mesh Clone Brush. Our drag-and-drop interface allows a complex surface part to be selected and interactively dragged to a new location. We automatically fill the hole left behind and smoothly deform the part to conform to the target surface. The artist may increase the boundary rigidity of this deformation, in which case a fair transition surface is automatically computed. Our clone brush allows for transfer of surface details with precise spatial control. These tools support an interaction style that has not previously been demonstrated for 3D surfaces, allowing detailed 3D models to be quickly assembled from arbitrary input meshes. We evaluated this interface by distributing a basic tool to computer graphics hobbyists and professionals, and based on their feedback, describe potential workflows which could utilize our techniques.",
"title": ""
},
{
"docid": "863ec0a6a06ce9b3cc46c85b09a7af69",
"text": "Apollonian circle packings arise by repeatedly filling the interstices between mutually tangent circles with further tangent circles. It is possible for every circle in such a packing to have integer radius of curvature, and we call such a packing an integral Apollonian circle packing. This paper studies number-theoretic properties of the set of integer curvatures appearing in such packings. Each Descartes quadruple of four tangent circles in the packing gives an integer solution to the Descartes equation, which relates the radii of curvature of four mutually tangent circles: x2 + y2 + z2 + w2 = 12(x + y + z + w) 2. Each integral Apollonian circle packing is classified by a certain root quadruple of integers that satisfies the Descartes equation, and that corresponds to a particular quadruple of circles appearing in the packing. We express the number of root quadruples with fixed minimal element −n as a class number, and give an exact formula for it. We study which integers occur in a given integer packing, and determine congruence restrictions which sometimes apply. We present evidence suggesting that the set of integer radii of curvatures that appear in an integral Apollonian circle packing has positive density, and in fact represents all sufficiently large integers not excluded by congruence conditions. Finally, we discuss asymptotic properties of the set of curvatures obtained as the packing is recursively constructed from a root quadruple.",
"title": ""
}
] |
scidocsrr
|
01b5af49bd41891b0e9c7c78fbcc468b
|
Collaborative Networks of Cognitive Systems
|
[
{
"docid": "8bc04818536d2a8deff01b0ea0419036",
"text": "Research in IT must address the design tasks faced by practitioners. Real problems must be properly conceptualized and represented, appropriate techniques for their solution must be constructed, and solutions must be implemented and evaluated using appropriate criteria. If significant progress is to be made, IT research must also develop an understanding of how and why IT systems work or do not work. Such an understanding must tie together natural laws governing IT systems with natural laws governing the environments in which they operate. This paper presents a two dimensional framework for research in information technology. The first dimension is based on broad types of design and natural science research activities: build, evaluate, theorize, and justify. The second dimension is based on broad types of outputs produced by design research: representational constructs, models, methods, and instantiations. We argue that both design science and natural science activities are needed to insure that IT research is both relevant and effective.",
"title": ""
},
{
"docid": "86b12f890edf6c6561536a947f338feb",
"text": "Looking for qualified reading resources? We have process mining discovery conformance and enhancement of business processes to check out, not only review, yet also download them or even read online. Discover this great publication writtern by now, simply right here, yeah just right here. Obtain the data in the sorts of txt, zip, kindle, word, ppt, pdf, as well as rar. Once again, never ever miss out on to read online as well as download this publication in our site here. Click the link. Our goal is always to offer you an assortment of cost-free ebooks too as aid resolve your troubles. We have got a considerable collection of totally free of expense Book for people from every single stroll of life. We have got tried our finest to gather a sizable library of preferred cost-free as well as paid files.",
"title": ""
}
] |
[
{
"docid": "b2a7c0a96f29a554ecdba2d56778b7c7",
"text": "Existing video streaming algorithms use various estimation approaches to infer the inherently variable bandwidth in cellular networks, which often leads to reduced quality of experience (QoE). We ask the question: \"If accurate bandwidth prediction were possible in a cellular network, how much can we improve video QoE?\". Assuming we know the bandwidth for the entire video session, we show that existing streaming algorithms only achieve between 69%-86% of optimal quality. Since such knowledge may be impractical, we study algorithms that know the available bandwidth for a few seconds into the future. We observe that prediction alone is not sufficient and can in fact lead to degraded QoE. However, when combined with rate stabilization functions, prediction outperforms existing algorithms and reduces the gap with optimal to 4%. Our results lead us to believe that cellular operators and content providers can tremendously improve video QoE by predicting available bandwidth and sharing it through APIs.",
"title": ""
},
{
"docid": "b922460e2a1d8b6dff6cc1c8c8c459ed",
"text": "This paper presents a new dynamic latched comparator which shows lower input-referred latch offset voltage and higher load drivability than the conventional dynamic latched comparators. With two additional inverters inserted between the input- and output-stage of the conventional double-tail dynamic comparator, the gain preceding the regenerative latch stage was improved and the complementary version of the output-latch stage, which has bigger output drive current capability at the same area, was implemented. As a result, the circuit shows up to 25% less input-referred latch offset voltage and 44% less sensitivity of the delay versus the input voltage difference (delay/log(ΔVin)), which is about 17.2ps/decade, than the conventional double-tail latched comparator at approximately the same area and power consumption.",
"title": ""
},
{
"docid": "7cef2ade99ffacfe1df5108665870988",
"text": "We describe improvements of the currently most popular method for prediction of classically secreted proteins, SignalP. SignalP consists of two different predictors based on neural network and hidden Markov model algorithms, where both components have been updated. Motivated by the idea that the cleavage site position and the amino acid composition of the signal peptide are correlated, new features have been included as input to the neural network. This addition, combined with a thorough error-correction of a new data set, have improved the performance of the predictor significantly over SignalP version 2. In version 3, correctness of the cleavage site predictions has increased notably for all three organism groups, eukaryotes, Gram-negative and Gram-positive bacteria. The accuracy of cleavage site prediction has increased in the range 6-17% over the previous version, whereas the signal peptide discrimination improvement is mainly due to the elimination of false-positive predictions, as well as the introduction of a new discrimination score for the neural network. The new method has been benchmarked against other available methods. Predictions can be made at the publicly available web server",
"title": ""
},
{
"docid": "6a19410817766b052a2054b2cb3efe42",
"text": "Artificially intelligent agents equipped with strategic skills that can negotiate during their interactions with other natural or artificial agents are still underdeveloped. This paper describes a successful application of Deep Reinforcement Learning (DRL) for training intelligent agents with strategic conversational skills, in a situated dialogue setting. Previous studies have modelled the behaviour of strategic agents using supervised learning and traditional reinforcement learning techniques, the latter using tabular representations or learning with linear function approximation. In this study, we apply DRL with a high-dimensional state space to the strategic board game of Settlers of Catan—where players can offer resources in exchange for others and they can also reply to offers made by other players. Our experimental results report that the DRL-based learnt policies significantly outperformed several baselines including random, rule-based, and supervised-based behaviours. The DRL-based policy has a 53% win rate versus 3 automated players (‘bots’), whereas a supervised player trained on a dialogue corpus in this setting achieved only 27%, versus the same 3 bots. This result supports the claim that DRL is a promising framework for training dialogue systems, and strategic agents with negotiation abilities.",
"title": ""
},
{
"docid": "d14812771115b4736c6d46aecadb2d8a",
"text": "This article reports on a helical spring-like piezoresistive graphene strain sensor formed within a microfluidic channel. The helical spring has a tubular hollow structure and is made of a thin graphene layer coated on the inner wall of the channel using an in situ microfluidic casting method. The helical shape allows the sensor to flexibly respond to both tensile and compressive strains in a wide dynamic detection range from 24 compressive strain to 20 tensile strain. Fabrication of the sensor involves embedding a helical thin metal wire with a plastic wrap into a precursor solution of an elastomeric polymer, forming a helical microfluidic channel by removing the wire from cured elastomer, followed by microfluidic casting of a graphene thin layer directly inside the helical channel. The wide dynamic range, in conjunction with mechanical flexibility and stretchability of the sensor, will enable practical wearable strain sensor applications where large strains are often involved.",
"title": ""
},
{
"docid": "90aceb010cead2fbdc37781c686bf522",
"text": "The present article examines the relationship between age and dominance in bilingual populations. Age in bilingualism is understood as the point in devel10 opment at which second language (L2) acquisition begins and as the chronological age of users of two languages. Age of acquisition (AoA) is a factor in determining which of a bilingual’s two languages is dominant and to what degree, and it, along with age of first language (L1) attrition, may be associated with shifts in dominance from the L1 to the L2. In turn, dominance and chron15 ological age, independently and in interaction with lexical frequency, predict performance on naming tasks. The article also considers the relevance of criticalperiod accounts of the relationships of AoA and age of L1 attrition to L2 dominance, and of usage-based and cognitive-aging accounts of the roles of age and dominance in naming.",
"title": ""
},
{
"docid": "0f85ce6afd09646ee1b5242a4d6122d1",
"text": "Environmental concern has resulted in a renewed interest in bio-based materials. Among them, plant fibers are perceived as an environmentally friendly substitute to glass fibers for the reinforcement of composites, particularly in automotive engineering. Due to their wide availability, low cost, low density, high-specific mechanical properties, and eco-friendly image, they are increasingly being employed as reinforcements in polymer matrix composites. Indeed, their complex microstructure as a composite material makes plant fiber a really interesting and challenging subject to study. Research subjects about such fibers are abundant because there are always some issues to prevent their use at large scale (poor adhesion, variability, low thermal resistance, hydrophilic behavior). The choice of natural fibers rather than glass fibers as filler yields a change of the final properties of the composite. One of the most relevant differences between the two kinds of fiber is their response to humidity. Actually, glass fibers are considered as hydrophobic whereas plant fibers have a pronounced hydrophilic behavior. Composite materials are often submitted to variable climatic conditions during their lifetime, including unsteady hygroscopic conditions. However, in humid conditions, strong hydrophilic behavior of such reinforcing fibers leads to high level of moisture absorption in wet environments. This results in the structural modification of the fibers and an evolution of their mechanical properties together with the composites in which they are fitted in. Thereby, the understanding of these moisture absorption mechanisms as well as the influence of water on the final properties of these fibers and their composites is of great interest to get a better control of such new biomaterials. This is the topic of this review paper.",
"title": ""
},
{
"docid": "ceb270c07d26caec5bc20e7117690f9f",
"text": "Pesticides including insecticides and miticides are primarily used to regulate arthropod (insect and mite) pest populations in agricultural and horticultural crop production systems. However, continual reliance on pesticides may eventually result in a number of potential ecological problems including resistance, secondary pest outbreaks, and/or target pest resurgence [1,2]. Therefore, implementation of alternative management strategies is justified in order to preserve existing pesticides and produce crops with minimal damage from arthropod pests. One option that has gained interest by producers is integrating pesticides with biological control agents or natural enemies including parasitoids and predators [3]. This is often referred to as ‘compatibility,’ which is the ability to integrate or combine natural enemies with pesticides so as to regulate arthropod pest populations without directly or indirectly affecting the life history parameters or population dynamics of natural enemies [2,4]. This may also refer to pesticides being effective against targeted arthropod pests but relatively non-harmful to natural enemies [5,6].",
"title": ""
},
{
"docid": "94105f6e64a27b18f911d788145385b6",
"text": "Low socioeconomic status (SES) is generally associated with high psychiatric morbidity, more disability, and poorer access to health care. Among psychiatric disorders, depression exhibits a more controversial association with SES. The authors carried out a meta-analysis to evaluate the magnitude, shape, and modifiers of such an association. The search found 51 prevalence studies, five incidence studies, and four persistence studies meeting the criteria. A random effects model was applied to the odds ratio of the lowest SES group compared with the highest, and meta-regression was used to assess the dose-response relation and the influence of covariates. Results indicated that low-SES individuals had higher odds of being depressed (odds ratio = 1.81, p < 0.001), but the odds of a new episode (odds ratio = 1.24, p = 0.004) were lower than the odds of persisting depression (odds ratio = 2.06, p < 0.001). A dose-response relation was observed for education and income. Socioeconomic inequality in depression is heterogeneous and varies according to the way psychiatric disorder is measured, to the definition and measurement of SES, and to contextual features such as region and time. Nonetheless, the authors found compelling evidence for socioeconomic inequality in depression. Strategies for tackling inequality in depression are needed, especially in relation to the course of the disorder.",
"title": ""
},
{
"docid": "d8752c40782d8189d454682d1d30738e",
"text": "This article reviews the empirical literature on personality, leadership, and organizational effectiveness to make 3 major points. First, leadership is a real and vastly consequential phenomenon, perhaps the single most important issue in the human sciences. Second, leadership is about the performance of teams, groups, and organizations. Good leadership promotes effective team and group performance, which in turn enhances the well-being of the incumbents; bad leadership degrades the quality of life for everyone associated with it. Third, personality predicts leadership—who we are is how we lead—and this information can be used to select future leaders or improve the performance of current incumbents.",
"title": ""
},
{
"docid": "6df45b11d623e8080cc7163632dde893",
"text": "Network bandwidth and hardware technology are developing rapidly, resulting in the vigorous development of the Internet. A new concept, cloud computing, uses low-power hosts to achieve high reliability. The cloud computing, an Internet-based development in which dynamicallyscalable and often virtualized resources are provided as a service over the Internet has become a significant issues. In this paper, we aim to pinpoint the challenges and issues of Cloud computing. We first discuss two related computing paradigms Service-Oriented Computing and Grid computing, and their relationships with Cloud computing. We then identify several challenges from the Cloud computing adoption perspective. Last, we will highlight the Cloud interoperability issue that deserves substantial further research and development. __________________________________________________*****_________________________________________________",
"title": ""
},
{
"docid": "ad967dca901ccdd3f33b83da29e9f18b",
"text": "Energy consumption limits battery life in mobile devices and increases costs for servers and data centers. Approximate computing addresses energy concerns by allowing applications to trade accuracy for decreased energy consumption. Approximation frameworks can guarantee accuracy or performance and generally reduce energy usage; however, they provide no energy guarantees. Such guarantees would be beneficial for users who have a fixed energy budget and want to maximize application accuracy within that budget. We address this need by presenting JouleGuard: a runtime control system that coordinates approximate applications with system resource usage to provide control theoretic formal guarantees of energy consumption, while maximizing accuracy. We implement JouleGuard and test it on three different platforms (a mobile, tablet, and server) with eight different approximate applications created from two different frameworks. We find that JouleGuard respects energy budgets, provides near optimal accuracy, adapts to phases in application workload, and provides better outcomes than application approximation or system resource adaptation alone. JouleGuard is general with respect to the applications and systems it controls, making it a suitable runtime for a number of approximate computing frameworks.",
"title": ""
},
{
"docid": "7da0a472f0a682618eccbfd4229ca14f",
"text": "A Search Join is a join operation which extends a user-provided table with additional attributes based on a large corpus of heterogeneous data originating from the Web or corporate intranets. Search Joins are useful within a wide range of application scenarios: Imagine you are an analyst having a local table describing companies and you want to extend this table with attributes containing the headquarters, turnover, and revenue of each company. Or imagine you are a film enthusiast and want to extend a table describing films with attributes like director, genre, and release date of each film. This article presents the Mannheim Search Join Engine which automatically performs such table extension operations based on a large corpus of Web data. Given a local table, the Mannheim Search Join Engine searches the corpus for additional data describing the entities contained in the input table. The discovered data is then joined with the local table and is consolidated using schema matching and data fusion techniques. As result, the user is presented with an extended table and given the opportunity to examine the provenance of the added data. We evaluate the Mannheim Search Join Engine using heterogeneous data originating from over one million different websites. The data corpus consists of HTML tables, as well as Linked Data and Microdata annotations which are converted into tabular form. Our experiments show that the Mannheim Search Join Engine achieves a coverage close to 100% and a precision of around 90% for the tasks of extending tables describing cities, companies, countries, drugs, books, films, and songs.",
"title": ""
},
{
"docid": "442680dcfbe4651eb5434e6b6703d25e",
"text": "The mammalian genome is transcribed into large numbers of long noncoding RNAs (lncRNAs), but the definition of functional lncRNA groups has proven difficult, partly due to their low sequence conservation and lack of identified shared properties. Here we consider promoter conservation and positional conservation as indicators of functional commonality. We identify 665 conserved lncRNA promoters in mouse and human that are preserved in genomic position relative to orthologous coding genes. These positionally conserved lncRNA genes are primarily associated with developmental transcription factor loci with which they are coexpressed in a tissue-specific manner. Over half of positionally conserved RNAs in this set are linked to chromatin organization structures, overlapping binding sites for the CTCF chromatin organiser and located at chromatin loop anchor points and borders of topologically associating domains (TADs). We define these RNAs as topological anchor point RNAs (tapRNAs). Characterization of these noncoding RNAs and their associated coding genes shows that they are functionally connected: they regulate each other’s expression and influence the metastatic phenotype of cancer cells in vitro in a similar fashion. Furthermore, we find that tapRNAs contain conserved sequence domains that are enriched in motifs for zinc finger domain-containing RNA-binding proteins and transcription factors, whose binding sites are found mutated in cancers. This work leverages positional conservation to identify lncRNAs with potential importance in genome organization, development and disease. The evidence that many developmental transcription factors are physically and functionally connected to lncRNAs represents an exciting stepping-stone to further our understanding of genome regulation.",
"title": ""
},
{
"docid": "7a3441773c79b9fde64ebcf8103616a1",
"text": "SIMD parallelism has become an increasingly important mechanism for delivering performance in modern CPUs, due its power efficiency and relatively low cost in die area compared to other forms of parallelism. Unfortunately, languages and compilers for CPUs have not kept up with the hardware's capabilities. Existing CPU parallel programming models focus primarily on multi-core parallelism, neglecting the substantial computational capabilities that are available in CPU SIMD vector units. GPU-oriented languages like OpenCL support SIMD but lack capabilities needed to achieve maximum efficiency on CPUs and suffer from GPU-driven constraints that impair ease of use on CPUs. We have developed a compiler, the Intel R® SPMD Program Compiler (ispc), that delivers very high performance on CPUs thanks to effective use of both multiple processor cores and SIMD vector units. ispc draws from GPU programming languages, which have shown that for many applications the easiest way to program SIMD units is to use a single-program, multiple-data (SPMD) model, with each instance of the program mapped to one SIMD lane. We discuss language features that make ispc easy to adopt and use productively with existing software systems and show that ispc delivers up to 35x speedups on a 4-core system and up to 240× speedups on a 40-core system for complex workloads (compared to serial C++ code).",
"title": ""
},
{
"docid": "892bad91cfae82dfe3d06d2f93edfe8b",
"text": "Fine-grained image recognition is a challenging computer vision problem, due to the small inter-class variations caused by highly similar subordinate categories, and the large intra-class variations in poses, scales and rotations. In this paper, we prove that selecting useful deep descriptors contributes well to fine-grained image recognition. Specifically, a novel Mask-CNN model without the fully connected layers is proposed. Based on the part annotations, the proposed model consists of a fully convolutional network to both locate the discriminative parts ( e.g. , head and torso), and more importantly generate weighted object/part masks for selecting useful and meaningful convolutional descriptors. After that, a three-stream Mask-CNN model is built for aggregating the selected objectand part-level descriptors simultaneously. Thanks to discarding the parameter redundant fully connected layers, our Mask-CNN has a small feature dimensionality and efficient inference speed by comparing with other fine-grained approaches. Furthermore, we obtain a new state-of-the-art accuracy on two challenging fine-grained bird species categorization datasets, which validates the effectiveness of both the descriptor selection scheme and the proposed",
"title": ""
},
{
"docid": "acc700d965586f5ea65bdcb67af38fca",
"text": "OBJECTIVE\nAttention deficit hyperactivity disorder (ADHD) symptoms are associated with the deficit in executive functions. Playing Go involves many aspect of cognitive function and we hypothesized that it would be effective for children with ADHD.\n\n\nMETHODS\nSeventeen drug naïve children with ADHD and seventeen age and sex matched comparison subjects were participated. Participants played Go under the instructor's education for 2 hours/day, 5 days/week. Before and at the end of Go period, clinical symptoms, cognitive functions, and brain EEG were assessed with Dupaul's ADHD scale (ARS), Child depression inventory (CDI), digit span, the Children's Color Trails Test (CCTT), and 8-channel QEEG system (LXE3208, Laxtha Inc., Daejeon, Korea).\n\n\nRESULTS\nThere were significant improvements of ARS total score (z=2.93, p<0.01) and inattentive score (z=2.94, p<0.01) in children with ADHD. However, there was no significant change in hyperactivity score (z=1.33, p=0.18). There were improvement of digit total score (z=2.60, p<0.01; z=2.06, p=0.03), digit forward score (z=2.21, p=0.02; z=2.02, p=0.04) in both ADHD and healthy comparisons. In addition, ADHD children showed decreased time of CCTT-2 (z=2.21, p=0.03). The change of theta/beta right of prefrontal cortex during 16 weeks was greater in children with ADHD than in healthy comparisons (F=4.45, p=0.04). The change of right theta/beta in prefrontal cortex has a positive correlation with ARS-inattention score in children with ADHD (r=0.44, p=0.03).\n\n\nCONCLUSION\nWe suggest that playing Go would be effective for children with ADHD by activating hypoarousal prefrontal function and enhancing executive function.",
"title": ""
},
{
"docid": "486417082d921eba9320172a349ee28f",
"text": "Circulating tumor cells (CTCs) are a popular topic in cancer research because they can be obtained by liquid biopsy, a minimally invasive procedure with more sample accessibility than tissue biopsy, to monitor a patient's condition. Over the past decades, CTC research has covered a wide variety of topics such as enumeration, profiling, and correlation between CTC number and patient overall survival. It is important to isolate and enrich CTCs before performing CTC analysis because CTCs in the blood stream are very rare (0⁻10 CTCs/mL of blood). Among the various approaches to separating CTCs, here, we review the research trends in the isolation and analysis of CTCs using microfluidics. Microfluidics provides many attractive advantages for CTC studies such as continuous sample processing to reduce target cell loss and easy integration of various functions into a chip, making \"do-everything-on-a-chip\" possible. However, tumor cells obtained from different sites within a tumor exhibit heterogenetic features. Thus, heterogeneous CTC profiling should be conducted at a single-cell level after isolation to guide the optimal therapeutic path. We describe the studies on single-CTC analysis based on microfluidic devices. Additionally, as a critical concern in CTC studies, we explain the use of CTCs in cancer research, despite their rarity and heterogeneity, compared with other currently emerging circulating biomarkers, including exosomes and cell-free DNA (cfDNA). Finally, the commercialization of products for CTC separation and analysis is discussed.",
"title": ""
},
{
"docid": "106df67fa368439db4f5684b4a9f7bd9",
"text": "Issues in cybersecurity; understanding the potential risks associated with hackers/crackers Alan D. Smith William T. Rupp Article information: To cite this document: Alan D. Smith William T. Rupp, (2002),\"Issues in cybersecurity; understanding the potential risks associated with hackers/ crackers\", Information Management & Computer Security, Vol. 10 Iss 4 pp. 178 183 Permanent link to this document: http://dx.doi.org/10.1108/09685220210436976",
"title": ""
},
{
"docid": "065c12155991b38d36ec1e71cff60ce4",
"text": "The purpose of this chapter is to introduce, analyze, and compare the models of wheeled mobile robots (WMR) and to present several realizations and commonly encountered designs. The mobility of WMR is discussed on the basis of the kinematic constraints resulting from the pure rolling conditions at the contact points between the wheels and the ground. According to this discussion it is shown that, whatever the number and the types of the wheels, all WMR belong to only five generic classes. Different types of models are derived and compared: the posture model versus the configuration model, the kinematic model versus the dynamic model. The structural properties of these models are discussed and compared. These models as well as their properties constitute the background necessary for model-based control design. Practical robot structures are classified according to the number of wheels, and features are introduced focusing on commonly adopted designs. Omnimobile robots and articulated robots realizations are described in more detail.",
"title": ""
}
] |
scidocsrr
|
d211ce71eee620d3c1ec4cf4a098d158
|
Robust end-to-end deep audiovisual speech recognition
|
[
{
"docid": "a3bff96ab2a6379d21abaea00bc54391",
"text": "In view of the advantages of deep networks in producing useful representation, the generated features of different modality data (such as image, audio) can be jointly learned using Multimodal Restricted Boltzmann Machines (MRB-M). Recently, audiovisual speech recognition based the M-RBM has attracted much attention, and the MRBM shows its effectiveness in learning the joint representation across audiovisual modalities. However, the built networks have weakness in modeling the multimodal sequence which is the natural property of speech signal. In this paper, we will introduce a novel temporal multimodal deep learning architecture, named as Recurrent Temporal Multimodal RB-M (RTMRBM), that models multimodal sequences by transforming the sequence of connected MRBMs into a probabilistic series model. Compared with existing multimodal networks, it's simple and efficient in learning temporal joint representation. We evaluate our model on audiovisual speech datasets, two public (AVLetters and AVLetters2) and one self-build. The experimental results demonstrate that our approach can obviously improve the accuracy of recognition compared with standard MRBM and the temporal model based on conditional RBM. In addition, RTMRBM still outperforms non-temporal multimodal deep networks in the presence of the weakness of long-term dependencies.",
"title": ""
}
] |
[
{
"docid": "54c2914107ae5df0a825323211138eb9",
"text": "An implicit, but pervasive view in the information science community is that people are perpetual seekers after new public information, incessantly identifying and consuming new information by browsing the Web and accessing public collections. One aim of this review is to move beyond this consumer characterization, which regards information as a public resource containing novel data that we seek out, consume, and then discard. Instead, I want to focus on a very different view: where familiar information is used as a personal resource that we keep, manage, and (sometimes repeatedly) exploit. I call this information curation. I first summarize limitations of the consumer perspective. I then review research on three different information curation processes: keeping, management, and exploitation. I describe existing work detailing how each of these processes is applied to different types of personal data: documents, e-mail messages, photos, and Web pages. The research indicates people tend to keep too much information, with the exception of contacts and Web pages. When managing information, strategies that rely on piles as opposed to files provide surprising benefits. And in spite of the emergence of desktop search, exploitation currently remains reliant on manual methods such as navigation. Several new technologies have the potential to address important",
"title": ""
},
{
"docid": "180672be0e49be493d9af3ef7b558804",
"text": "Causality is a very intuitive notion that is difficult to make precise without lapsing into tautology. Two ingredients are central to any definition: (1) a set of possible outcomes (counterfactuals) generated by a function of a set of ‘‘factors’’ or ‘‘determinants’’ and (2) a manipulation where one (or more) of the ‘‘factors’’ or ‘‘determinants’’ is changed. An effect is realized as a change in the argument of a stable function that produces the same change in the outcome for a class of interventions that change the ‘‘factors’’ by the same amount. The outcomes are compared at different levels of the factors or generating variables. Holding all factors save one at a constant level, the change in the outcome associated with manipulation of the varied factor is called a causal effect of the manipulated factor. This definition, or some version of it, goes back to Mill (1848) and Marshall (1890). Haavelmo’s (1943) made it more precise within the context of linear equations models. The phrase ‘ceteris paribus’ (everything else held constant) is a mainstay of economic analysis",
"title": ""
},
{
"docid": "87a319361ad48711eff002942735258f",
"text": "This paper describes an innovative principle for climbing obstacles with a two-axle and four-wheel robot with articulated frame. It is based on axle reconfiguration while ensuring permanent static stability. A simple example is demonstrated based on the OpenWHEEL platform with a serial mechanism connecting front and rear axles of the robot. A generic tridimensional multibody simulation is provided with Adams software. It permits to validate the concept and to get an approach of control laws for every type of inter-axle mechanism. This climbing principle permits to climb obstacles as high as the wheel while keeping energetic efficiency of wheel propulsion and using only one supplemental actuator. Applications to electric wheelchairs, quads and all terrain vehicles (ATV) are envisioned",
"title": ""
},
{
"docid": "e47ec55000621d81f665f7d01a1a8553",
"text": "Plant pest recognition and detection is vital for f od security, quality of life and a stable agricult ural economy. This research demonstrates the combination of the k -m ans clustering algorithm and the correspondence filter to achieve pest detection and recognition. The detecti on of the dataset is achieved by partitioning the d ata space into Voronoi cells, which tends to find clusters of comparable spatial extents, thereby separating the obj cts (pests) from the background (pest habitat). The det ction is established by extracting the variant dis inctive attributes between the pest and its habitat (leaf, stem) and using the correspondence filter to identi fy the plant pests to obtain correlation peak values for differe nt datasets. This work further establishes that the recognition probability from the pest image is directly proport i nal to the height of the output signal and invers ely proportional to the viewing angles, which further c onfirmed that the recognition of plant pests is a f unction of their position and viewing angle. It is encouraging to note that the correspondence filter can achieve rotational invariance of pests up to angles of 360 degrees, wh ich proves the effectiveness of the algorithm for t he detection and recognition of plant pests.",
"title": ""
},
{
"docid": "14c981a63e34157bb163d4586502a059",
"text": "In this paper, we investigate an angle of arrival (AoA) and angle of departure (AoD) estimation algorithm for sparse millimeter wave multiple-input multiple-output (MIMO) channels. The analytical channel model whose use we advocate here is the beam space (or virtual) MIMO channel representation. By leveraging the beam space MIMO concept, we characterize probabilistic channel priors under an analog precoding and combining constraints. This investigation motivates Bayesian inference approaches to virtual AoA and AoD estimation. We divide the estimation task into downlink sounding for AoA estimation and uplink sounding for AoD estimation. A belief propagation (BP)-type algorithm is adopted, leading to computationally efficient approximate message passing (AMP) and approximate log-likelihood ratio testing (ALLRT) algorithms. Numerical results demonstrate that the proposed algorithm outperforms the conventional AMP in terms of the AoA and AoD estimation accuracy for the sparse millimeter wave MIMO channel.",
"title": ""
},
{
"docid": "52462bd444f44910c18b419475a6c235",
"text": "Snoring is a common symptom of serious chronic disease known as obstructive sleep apnea (OSA). Knowledge about the location of obstruction site (VVelum, OOropharyngeal lateral walls, T-Tongue, E-Epiglottis) in the upper airways is necessary for proper surgical treatment. In this paper we propose a dual source-filter model similar to the source-filter model of speech to approximate the generation process of snore audio. The first filter models the vocal tract from lungs to the point of obstruction with white noise excitation from the lungs. The second filter models the vocal tract from the obstruction point to the lips/nose with impulse train excitation which represents vibrations at the point of obstruction. The filter coefficients are estimated using the closed and open phases of the snore beat cycle. VOTE classification is done by using SVM classifier and filter coefficients as features. The classification experiments are performed on the development set (283 snore audios) of the MUNICH-PASSAU SNORE SOUND CORPUS (MPSSC). We obtain an unweighted average recall (UAR) of 49.58%, which is higher than the INTERSPEECH-2017 snoring sub-challenge baseline technique by ∼3% (absolute).",
"title": ""
},
{
"docid": "1448b02c9c14e086a438d76afa1b2fde",
"text": "This paper analyzes the classification of hyperspectral remote sensing images with linear discriminant analysis (LDA) in the presence of a small ratio between the number of training samples and the number of spectral features. In these particular ill-posed problems, a reliable LDA requires one to introduce regularization for problem solving. Nonetheless, in such a challenging scenario, the resulting regularized LDA (RLDA) is highly sensitive to the tuning of the regularization parameter. In this context, we introduce in the remote sensing community an efficient version of the RLDA recently presented by Ye to cope with critical ill-posed problems. In addition, several LDA-based classifiers (i.e., penalized LDA, orthogonal LDA, and uncorrelated LDA) are compared theoretically and experimentally with the standard LDA and the RLDA. Method differences are highlighted through toy examples and are exhaustively tested on several ill-posed problems related to the classification of hyperspectral remote sensing images. Experimental results confirm the effectiveness of the presented RLDA technique and point out the main properties of other analyzed LDA techniques in critical ill-posed hyperspectral image classification problems.",
"title": ""
},
{
"docid": "7edaef142ecf8a3825affc09ad10d73a",
"text": "Internet of Things (IoT) is a network of sensors, actuators, mobile and wearable devices, simply things that have processing and communication modules and can connect to the Internet. In a few years time, billions of such things will start serving in many fields within the concept of IoT. Self-configuration, autonomous device addition, Internet connection and resource limitation features of IoT causes it to be highly prone to the attacks. Denial of Service (DoS) attacks which have been targeting the communication networks for years, will be the most dangerous threats to IoT networks. This study aims to analyze and classify the DoS attacks that may target the IoT environments. In addition to this, the systems that try to detect and mitigate the DoS attacks to IoT will be evaluated.",
"title": ""
},
{
"docid": "65aa93b6ca41fe4ca54a4a7dee508db2",
"text": "The field of deep learning has seen significant advancement in recent years. However, much of the existing work has been focused on real-valued numbers. Recent work has shown that a deep learning system using the complex numbers can be deeper for a fixed parameter budget compared to its real-valued counterpart. In this work, we explore the benefits of generalizing one step further into the hyper-complex numbers, quaternions specifically, and provide the architecture components needed to build deep quaternion networks. We develop the theoretical basis by reviewing quaternion convolutions, developing a novel quaternion weight initialization scheme, and developing novel algorithms for quaternion batch-normalization. These pieces are tested in a classification model by end-to-end training on the CIFAR −10 and CIFAR −100 data sets and a segmentation model by end-to-end training on the KITTI Road Segmentation data set. These quaternion networks show improved convergence compared to real-valued and complex-valued networks, especially on the segmentation task, while having fewer parameters.",
"title": ""
},
{
"docid": "cb98fd6c850d9b3d9a2bac638b9f632d",
"text": "Artificial immune systems are a collection of algorithms inspired by the human immune system. Over the past 15 years, extensive research has been performed regarding the application of artificial immune systems to computer security. However, existing immune-inspired techniques have not performed as well as expected when applied to the detection of intruders in computer systems. In this thesis the development of the Dendritic Cell Algorithm is described. This is a novel immune-inspired algorithm based on the function of the dendritic cells of the human immune system. In nature, dendritic cells function as natural anomaly detection agents, instructing the immune system to respond if stress or damage is detected. Dendritic cells are a crucial cell in the detection and combination of ‘signals’ which provide the immune system with a sense of context. The Dendritic Cell Algorithm is based on an abstract model of dendritic cell behaviour, with the abstraction process performed in close collaboration with immunologists. This algorithm consists of components based on the key properties of dendritic cell behaviour, which involves data fusion and correlation components. In this algorithm, four categories of input signal are used. The resultant algorithm is formally described in this thesis and is validated on a standard machine learning dataset. The validation process shows that the Dendritic Cell Algorithm can be applied to static datasets and suggests that the algorithm is suitable for the analysis of time-dependent data. Further analysis and evaluation of the Dendritic Cell Algorithm is performed. This is assessed through the algorithm’s application to the detection of anomalous port scans. The results of this investigation show that the Dendritic Cell Algorithm can be applied to detection problems in real-time. This analysis also shows that detection with this algorithm produces high rates of false positives and high rates of true positives, in addition to being robust against modification to system parameters. The limitations of the Dendritic Cell Algorithm are also evaluated and presented, including loss of sensitivity and the generation of false positives under certain circumstances. It is shown that the Dendritic Cell Algorithm can perform well as an anomaly detection algorithm and can be applied to real-world, realtime data.",
"title": ""
},
{
"docid": "fde78187088da4d4b8fe4cb0f959b860",
"text": "The key question raised in this research in progress paper is whether the development stage of a (hardware) startup can give an indication of the crowdfunding type it decides to choose. Throughout the paper, I empirically investigate the German crowdfunding landscape and link it to startups in the hardware sector, picking up the proposed notion of an emergent hardware renaissance. To identify the potential points of contact between crowdfunds and startups, an evaluation of different startup stage models with regard to funding requirements is provided, as is an overview of currently used crowdfunding typologies. The example of two crowdfunding platforms (donation and non-monetary reward crowdfunding vs. equity-based crowdfunding) and their respective hardware projects and startups is used to highlight the potential of this research in progress. 1 Introduction Originally motivated by Paul Graham's 'The Hardware Renaissance' (2012) and further spurred by Witheiler's 'The hardware revolution will be crowdfunded' (2013), I chose to consider the intersection of startups, crowdfunding, and hardware. This is particularly interesting since literature on innovation and startup funding has indeed grown to some sophistication regarding the timing of more classic sources of capital in a startup's life, such as bootstrapping, business angel funding, and venture capital (cf. e.g., Schwienbacher & Larralde, 2012; Metrick & Yasuda, 2011). Due to the novelty of crowdfunding, however, general research on this type of funding is just at the beginning stages and many papers are rather focused on specific elements of the phenomenon (e.g., Belleflamme et al., 2013; Agrawal et al. 2011) and / or exploratory in nature (e.g., Mollick, 2013). What is missing is a verification of the research on potential points of contact between crowdfunds and startups. It remains unclear when crowdfunding is used—primarily during the early seed stage for example or equally at some later point as well—and what types apply (cf. e.g., Collins & Pierrakis, 2012). Simply put, the research question that emerges is whether the development stage of a startup can give an indication of the crowdfunding type it decides to choose. To further explore an answer to this question, I commenced an investigation of the German crowdfunding scene with a focus on hardware startups. Following desk research on platforms situated in German-speaking areas—Germany, Austria, Switzerland—, a categorization of the respectively used funding types is still in process, and transitions into a quantitative analysis and an in-depth case study-based assessment. The prime challenge of such an investigation …",
"title": ""
},
{
"docid": "d8da6bebb1ca8f00b176e1493ded4b9c",
"text": "This paper presents an efficient technique for the evaluation of different types of losses in substrate integrated waveguide (SIW). This technique is based on the Boundary Integral-Resonant Mode Expansion (BI-RME) method in conjunction with a perturbation approach. This method also permits to derive automatically multimodal and parametric equivalent circuit models of SIW discontinuities, which can be adopted for an efficient design of complex SIW circuits. Moreover, a comparison of losses in different types of planar interconnects (SIW, microstrip, coplanar waveguide) is presented.",
"title": ""
},
{
"docid": "7cd8dee294d751ec6c703d628e0db988",
"text": "A major component of secondary education is learning to write effectively, a skill which is bolstered by repeated practice with formative guidance. However, providing focused feedback to every student on multiple drafts of each essay throughout the school year is a challenge for even the most dedicated of teachers. This paper first establishes a new ordinal essay scoring model and its state of the art performance compared to recent results in the Automated Essay Scoring field. Extending this model, we describe a method for using prediction on realistic essay variants to give rubric-specific formative feedback to writers. This method is used in Revision Assistant, a deployed data-driven educational product that provides immediate, rubric-specific, sentence-level feedback to students to supplement teacher guidance. We present initial evaluations of this feedback generation, both offline and in deployment.",
"title": ""
},
{
"docid": "18247ea0349da81fe2cf93b3663b081f",
"text": "Nowadays, more and more companies migrate business from their own servers to the cloud. With the influx of computational requests, datacenters consume tremendous energy every day, attracting great attention in the energy efficiency dilemma. In this paper, we investigate the energy-aware resource management problem in cloud datacenters, where green energy with unpredictable capacity is connected. Via proposing a robust blockchain-based decentralized resource management framework, we save the energy consumed by the request scheduler. Moreover, we propose a reinforcement learning method embedded in a smart contract to further minimize the energy cost. Because the reinforcement learning method is informed from the historical knowledge, it relies on no request arrival and energy supply. Experimental results on Google cluster traces and real-world electricity price show that our approach is able to reduce the datacenters cost significantly compared with other benchmark algorithms.",
"title": ""
},
{
"docid": "7ddf437114258023cc7d9c6d51bb8f94",
"text": "We describe a framework for cooperative control of a group of nonholonomic mobile robots that allows us to build complex systems from simple controllers and estimators. The resultant modular approach is attractive because of the potential for reusability. Our approach to composition also guarantees stability and convergence in a wide range of tasks. There are two key features in our approach: 1) a paradigm for switching between simple decentralized controllers that allows for changes in formation; 2) the use of information from a single type of sensor, an omnidirectional camera, for all our controllers. We describe estimators that abstract the sensory information at different levels, enabling both decentralized and centralized cooperative control. Our results include numerical simulations and experiments using a testbed consisting of three nonholonomic robots.",
"title": ""
},
{
"docid": "b2db6db73699ecc66f33e2f277cf055b",
"text": "In this paper, we develop a new approach of spatially supervised recurrent convolutional neural networks for visual object tracking. Our recurrent convolutional network exploits the history of locations as well as the distinctive visual features learned by the deep neural networks. Inspired by recent bounding box regression methods for object detection, we study the regression capability of Long Short-Term Memory (LSTM) in the temporal domain, and propose to concatenate high-level visual features produced by convolutional networks with region information. In contrast to existing deep learning based trackers that use binary classification for region candidates, we use regression for direct prediction of the tracking locations both at the convolutional layer and at the recurrent unit. Our experimental results on challenging benchmark video tracking datasets show that our tracker is competitive with state-of-the-art approaches while maintaining low computational cost.",
"title": ""
},
{
"docid": "3b62ccd8e989d81f86b557e8d35a8742",
"text": "The ability to accurately judge the similarity between natural language sentences is critical to the performance of several applications such as text mining, question answering, and text summarization. Given two sentences, an effective similarity measure should be able to determine whether the sentences are semantically equivalent or not, taking into account the variability of natural language expression. That is, the correct similarity judgment should be made even if the sentences do not share similar surface form. In this work, we evaluate fourteen existing text similarity measures which have been used to calculate similarity score between sentences in many text applications. The evaluation is conducted on three different data sets, TREC9 question variants, Microsoft Research paraphrase corpus, and the third recognizing textual entailment data set.",
"title": ""
},
{
"docid": "216e38bb5e6585099e949572f7645ebf",
"text": "The graviperception of the hypotrichous ciliate Stylonychia mytilus was investigated using electrophysiological methods and behavioural analysis. It is shown that Stylonychia can sense gravity and thereby compensates sedimentation rate by a negative gravikinesis. The graviresponse consists of a velocity-regulating physiological component (negative gravikinesis) and an additional orientational component. The latter is largely based on a physical mechanism but might, in addition, be affected by the frequency of ciliary reversals, which is under physiological control. We show that the external stimulus of gravity is transformed to a physiological signal, activating mechanosensitive calcium and potassium channels. Earlier electrophysiological experiments revealed that these ion channels are distributed in the manner of two opposing gradients over the surface membrane. Here, we show, for the first time, records of gravireceptor potentials in Stylonychia that are presumably based on this two-gradient system of ion channels. The gravireceptor potentials had maximum amplitudes of approximately 4 mV and slow activation characteristics (0.03 mV s(-1)). The presumptive number of involved graviperceptive ion channels was calculated and correlates with the analysis of the locomotive behaviour.",
"title": ""
},
{
"docid": "23ac77f4ada235965c1474bd8d3b0829",
"text": "Oral lichen planus and oral lichenoid drug reactions have similar clinical and histologic findings. The onset of oral lichenoid drug reactions appears to correspond to the administration of medications, especially antihypertensive drugs, oral hypoglycemic drugs, antimalarial drugs, gold salts, penicillamine and others. The author reports the case of 58-year-old male patient with oral lichenoid drug reaction, hypertension and diabetes mellitus. The oral manifestation showed radiated white lines with erythematous and erosive areas. The patient experienced pain and a burning sensation when eating spicy food. A tissue biopsy was carried out and revealed the characteristics of lichen planus. The patient was treated with 0.1% fluocinolone acetonide in an orabase as well as the replacement of the oral hypoglycemic and antihypertensive agents. The lesions improved and the burning sensation disappeared in two weeks after treatment. No recurrence was observed in the follow-up after three months.",
"title": ""
},
{
"docid": "0947728fbeeda33a5ca88ad0bfea5258",
"text": "The cybersecurity community typically reacts to attacks after they occur. Being reactive is costly and can be fatal where attacks threaten lives, important data, or mission success. But can cybersecurity be done proactively? Our research capitalizes on the Germination Period—the time lag between hacker communities discussing software flaw types and flaws actually being exploited—where proactive measures can be taken. We argue for a novel proactive approach, utilizing big data, for (I) identifying potential attacks before they come to fruition; and based on this identification, (II) developing preventive countermeasures. The big data approach resulted in our vision of the Proactive Cybersecurity System (PCS), a layered, modular service platform that applies big data collection and processing tools to a wide variety of unstructured data sources to predict vulnerabilities and develop countermeasures. Our exploratory study is the first to show the promise of this novel proactive approach and illuminates challenges that need to be addressed.",
"title": ""
}
] |
scidocsrr
|
ffc920437de019647b81d41ec4a699b4
|
Whole Brain Segmentation Automated Labeling of Neuroanatomical Structures in the Human Brain
|
[
{
"docid": "d529b4f1992f438bb3ce4373090f8540",
"text": "One conventional tool for interpolating surfaces over scattered data, the thin-plate spline, has an elegant algebra expressing the dependence of the physical bending energy of a thin metal plate on point constraints. For interpolation of a surface over a fixed set of nodes in the plane, the bending energy is a quadratic form in the heights assigned to the surface. The spline is the superposition of eigenvectors of the bending energy matrix, of successively larger physical scales, over a tilted flat plane having no bending energy at all. When these splines are paired, one representing the x-coordinate of another form and the other the y-coordinate, they aid greatly in the modeling of biological shape change as deformation. In this context, the pair becomes an interpolation map from RZ to R' relating two sets of landmark points. The spline maps decompose, in the same way as the spline surfaces, into a linear part (an affine transformation) together with the superposition of principal warps, which are geometrically independent, affine-free deformations of progressively smaller geometrical scales. The warps decompose an empirical deformation into orthogonal features more or less as a conventional orthogonal functional analysis decomposes the single scene. This paper demonstrates the decomposition of deformations by principal warps, extends the method to deal with curving edges between landmarks, relates this formalism to other applications of splines current in computer vision, and indicates how they might aid in the extraction of features for analysis, comparison, and diagnosis of biological and medical images.",
"title": ""
},
{
"docid": "772fc1cf2dd2837227facd31f897dba3",
"text": "Eighty-three brains obtained at autopsy from nondemented and demented individuals were examined for extracellular amyloid deposits and intraneuronal neurofibrillary changes. The distribution pattern and packing density of amyloid deposits turned out to be of limited significance for differentiation of neuropathological stages. Neurofibrillary changes occurred in the form of neuritic plaques, neurofibrillary tangles and neuropil threads. The distribution of neuritic plaques varied widely not only within architectonic units but also from one individual to another. Neurofibrillary tangles and neuropil threads, in contrast, exhibited a characteristic distribution pattern permitting the differentiation of six stages. The first two stages were characterized by an either mild or severe alteration of the transentorhinal layer Pre-α (transentorhinal stages I–II). The two forms of limbic stages (stages III–IV) were marked by a conspicuous affection of layer Pre-α in both transentorhinal region and proper entorhinal cortex. In addition, there was mild involvement of the first Ammon's horn sector. The hallmark of the two isocortical stages (stages V–VI) was the destruction of virtually all isocortical association areas. The investigation showed that recognition of the six stages required qualitative evaluation of only a few key preparations.",
"title": ""
}
] |
[
{
"docid": "8147143579de86a5eeb668037c2b8c5d",
"text": "In this paper we propose that the conventional dichotomy between exemplar-based and prototype-based models of concept learning is helpfully viewed as an instance of what is known in the statistical learning literature as the bias/variance tradeoff. The bias/variance tradeoff can be thought of as a sliding scale that modulates how closely any learning procedure adheres to its training data. At one end of the scale (high variance), models can entertain very complex hypotheses, allowing them to fit a wide variety of data very closely--but as a result can generalize poorly, a phenomenon called overfitting. At the other end of the scale (high bias), models make relatively simple and inflexible assumptions, and as a result may fit the data poorly, called underfitting. Exemplar and prototype models of category formation are at opposite ends of this scale: prototype models are highly biased, in that they assume a simple, standard conceptual form (the prototype), while exemplar models have very little bias but high variance, allowing them to fit virtually any combination of training data. We investigated human learners' position on this spectrum by confronting them with category structures at variable levels of intrinsic complexity, ranging from simple prototype-like categories to much more complex multimodal ones. The results show that human learners adopt an intermediate point on the bias/variance continuum, inconsistent with either of the poles occupied by most conventional approaches. We present a simple model that adjusts (regularizes) the complexity of its hypotheses in order to suit the training data, which fits the experimental data better than representative exemplar and prototype models.",
"title": ""
},
{
"docid": "409b257d38faef216a1056fd7c548587",
"text": "Reservoir computing systems utilize dynamic reservoirs having short-term memory to project features from the temporal inputs into a high-dimensional feature space. A readout function layer can then effectively analyze the projected features for tasks, such as classification and time-series analysis. The system can efficiently compute complex and temporal data with low-training cost, since only the readout function needs to be trained. Here we experimentally implement a reservoir computing system using a dynamic memristor array. We show that the internal ionic dynamic processes of memristors allow the memristor-based reservoir to directly process information in the temporal domain, and demonstrate that even a small hardware system with only 88 memristors can already be used for tasks, such as handwritten digit recognition. The system is also used to experimentally solve a second-order nonlinear task, and can successfully predict the expected output without knowing the form of the original dynamic transfer function. Reservoir computing facilitates the projection of temporal input signals onto a high-dimensional feature space via a dynamic system, known as the reservoir. Du et al. realise this concept using metal-oxide-based memristors with short-term memory to perform digit recognition tasks and solve non-linear problems.",
"title": ""
},
{
"docid": "42b6c55e48f58e3e894de84519cb6feb",
"text": "What social value do Likes on Facebook hold? This research examines peopleâs attitudes and behaviors related to receiving one-click feedback in social media. Likes and other kinds of lightweight affirmation serve as social cues of acceptance and maintain interpersonal relationships, but may mean different things to different people. Through surveys and de-identified, aggregated behavioral Facebook data, we find that in general, people care more about who Likes their posts than how many Likes they receive, desiring feedback most from close friends, romantic partners, and family members other than their parents. While most people do not feel strongly that receiving “enough” Likes is important, roughly two-thirds of posters regularly receive more than “enough.” We also note a “Like paradox,” a phenomenon in which peopleâs friends receive more Likes because their friends have more friends to provide those Likes. Individuals with lower levels of self-esteem and higher levels of self-monitoring are more likely to think that Likes are important and to feel bad if they do not receive “enough” Likes. The results inform product design and our understanding of how lightweight interactions shape our experiences online.",
"title": ""
},
{
"docid": "c0ba7119eaf77c6815f43ff329457e5e",
"text": "In Utility Computing business model, the owners of the computing resources negotiate with their potential clients to sell computing power. The terms of the Quality of Service (QoS) and the economic conditions are established in a Service-Level Agreement (SLA). There are many scenarios in which the agreed QoS cannot be provided because of errors in the service provisioning or failures in the system. Since providers have usually different types of clients, according to their relationship with the provider or by the fee that they pay, it is important to minimize the impact of the SLA violations in preferential clients. This paper proposes a set of policies to provide better QoS to preferential clients in such situations. The criterion to classify clients is established according to the relationship between client and provider (external user, internal or another privileged relationship) and the QoS that the client purchases (cheap contracts or extra QoS by paying an extra fee). Most of the policies use key features of virtualization: Selective Violation of the SLAs, Dynamic Scaling of the Allocated Resources, and Runtime Migration of Tasks. The validity of the policies is demonstrated through exhaustive experiments.",
"title": ""
},
{
"docid": "6cacb8cdc5a1cc17c701d4ffd71bdab1",
"text": "Phishing costs Internet users billions of dollars a year. Using various data sets collected in real-time, this paper analyzes various aspects of phisher modi operandi. We examine the anatomy of phishing URLs and domains, registration of phishing domains and time to activation, and the machines used to host the phishing sites. Our findings can be used as heuristics in filtering phishing-related emails and in identifying suspicious domain registrations.",
"title": ""
},
{
"docid": "b6e62590995a41adb1128703060e0e2d",
"text": "Consumer-grade digital fabrication such as 3D printing is on the rise, and we believe it can be leveraged to great benefit in the arena of special education. Although 3D printing is beginning to infiltrate mainstream education, little to no research has explored 3D printing in the context of students with special support needs. We present a formative study exploring the use of 3D printing at three locations serving populations with varying ability, including individuals with cognitive, motor, and visual impairments. We found that 3D design and printing performs three functions in special education: developing 3D design and printing skills encourages STEM engagement; 3D printing can support the creation of educational aids for providing accessible curriculum content; and 3D printing can be used to create custom adaptive devices. In addition to providing opportunities to students, faculty, and caregivers in their efforts to integrate 3D printing in special education settings, our investigation also revealed several concerns and challenges. We present our investigation at three diverse sites as a case study of 3D printing in the realm of special education, discuss obstacles to efficient 3D printing in this context, and offer suggestions for designers and technologists.",
"title": ""
},
{
"docid": "63262d2a9abdca1d39e31d9937bb41cf",
"text": "A structural model is presented for synthesizing binaural sound from a monaural source. The model produces well-controlled vertical as well as horizontal effects. The model is based on a simplified time-domain description of the physics of wave propagation and diffraction. The components of the model have a one-to-one correspondence with the physical sources of sound diffraction, delay, and reflection. The simplicity of the model permits efficient implementation in DSP hardware, and thus facilitates real-time operation. Additionally, the parameters in the model can be adjusted to fit a particular individual’s characteristics, thereby producing individualized head-related transfer functions. Experimental tests verify the perceptual effectiveness of the approach.",
"title": ""
},
{
"docid": "5487dd1976a164447c821303b53ebdf8",
"text": "Rapid and pervasive digitization of innovation processes and outcomes has upended extant theories on innovation management by calling into question fundamental assumptions about the definitional boundaries for innovation, agency for innovation, and the relationship between innovation processes and outcomes. There is a critical need for novel theorizing on digital innovation management that does not rely on such assumptions and draws on the rich and rapidly emerging research on digital technologies. We offer suggestions for such theorizing in the form of four new theorizing logics, or elements, that are likely to be valuable in constructing more accurate explanations of innovation processes and outcomes in an increasingly digital world. These logics can open new avenues for researchers to contribute to this important area. Our suggestions in this paper, coupled with the six research notes included in the special issue on digital innovation management, seek to offer a broader foundation for reinventing innovation management research in a digital world.",
"title": ""
},
{
"docid": "a959b14468625cb7692de99a986937c4",
"text": "In this paper, we describe a novel method for searching and comparing 3D objects. The method encodes the geometric and topological information in the form of a skeletal graph and uses graph matching techniques to match the skeletons and to compare them. The skeletal graphs can be manually annotated to refine or restructure the search. This helps in choosing between a topological similarity and a geometric (shape) similarity. A feature of skeletal matching is the ability to perform part-matching, and its inherent intuitiveness, which helps in defining the search and in visualizing the results. Also, the matching results, which are presented in a per-node basis can be used for driving a number of registration algorithms, most of which require a good initial guess to perform registration. In this paper, we also describe a visualization tool to aid in the selection and specification of the matched objects.",
"title": ""
},
{
"docid": "afd1bc554857a1857ac4be5ee37cc591",
"text": "0953-5438/$ see front matter 2011 Elsevier B.V. A doi:10.1016/j.intcom.2011.04.007 ⇑ Corresponding author. E-mail addresses: m.cole@rutgers.edu (M.J. Co (J. Gwizdka), changl@eden.rutgers.edu (C. Liu), ralf@b rutgers.edu (N.J. Belkin), xiangminz@gmail.com (X. Zh We report on an investigation into people’s behaviors on information search tasks, specifically the relation between eye movement patterns and task characteristics. We conducted two independent user studies (n = 32 and n = 40), one with journalism tasks and the other with genomics tasks. The tasks were constructed to represent information needs of these two different users groups and to vary in several dimensions according to a task classification scheme. For each participant we classified eye gaze data to construct models of their reading patterns. The reading models were analyzed with respect to the effect of task types and Web page types on reading eye movement patterns. We report on relationships between tasks and individual reading behaviors at the task and page level. Specifically we show that transitions between scanning and reading behavior in eye movement patterns and the amount of text processed may be an implicit indicator of the current task type facets. This may be useful in building user and task models that can be useful in personalization of information systems and so address design demands driven by increasingly complex user actions with information systems. One of the contributions of this research is a new methodology to model information search behavior and investigate information acquisition and cognitive processing in interactive information tasks. 2011 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "0046aca3e98d75f9d3c414a6de42e017",
"text": "Fast Downward is a classical planning system based on heuris tic search. It can deal with general deterministic planning problems encoded in the propos itional fragment of PDDL2.2, including advanced features like ADL conditions and effects and deriv ed predicates (axioms). Like other well-known planners such as HSP and FF, Fast Downward is a pro gression planner, searching the space of world states of a planning task in the forward direct ion. However, unlike other PDDL planning systems, Fast Downward does not use the propositional P DDL representation of a planning task directly. Instead, the input is first translated into an alternative representation called multivalued planning tasks , which makes many of the implicit constraints of a propositi nal planning task explicit. Exploiting this alternative representatio n, Fast Downward uses hierarchical decompositions of planning tasks for computing its heuristic fun ction, called thecausal graph heuristic , which is very different from traditional HSP-like heuristi cs based on ignoring negative interactions of operators. In this article, we give a full account of Fast Downward’s app roach to solving multi-valued planning tasks. We extend our earlier discussion of the caus al graph heuristic to tasks involving axioms and conditional effects and present some novel techn iques for search control that are used within Fast Downward’s best-first search algorithm: preferred operatorstransfer the idea of helpful actions from local search to global best-first search, deferred evaluationof heuristic functions mitigates the negative effect of large branching factors on earch performance, and multi-heuristic best-first searchcombines several heuristic evaluation functions within a s ingle search algorithm in an orthogonal way. We also describe efficient data structu es for fast state expansion ( successor generatorsandaxiom evaluators ) and present a new non-heuristic search algorithm called focused iterative-broadening search , which utilizes the information encoded in causal graphs in a ovel way. Fast Downward has proven remarkably successful: It won the “ classical” (i. e., propositional, non-optimising) track of the 4th International Planning Co mpetition at ICAPS 2004, following in the footsteps of planners such as FF and LPG. Our experiments show that it also performs very well on the benchmarks of the earlier planning competitions a d provide some insights about the usefulness of the new search enhancements.",
"title": ""
},
{
"docid": "7647993815a13899e60fdc17f91e270d",
"text": "of Dissertation presented to COPPE/UFRJ as a partial fulfillment of the requirements for the degree of Master of Science (M.Sc.) WHEN AUTOENCODERS MEET RECOMMENDER SYSTEMS: COFILS APPROACH Julio César Barbieri Gonzalez de Almeida",
"title": ""
},
{
"docid": "71a76b562681450b23c512d4710c9f00",
"text": "The paper reviews and extends an emerging body of theoretical results on deep learning including the conditions under which it can be exponentially better than shallow learning. A class of deep convolutional networks represent an important special case of these conditions, though weight sharing is not the main reason for their exponential advantage. Implications of a few key theorems are discussed, together with new results, open problems and conjectures.",
"title": ""
},
{
"docid": "c70383b0a3adb6e697932ef4b02877ac",
"text": "Betweenness centrality (BC) is a crucial graph problem that measures the significance of a vertex by the number of shortest paths leading through it. We propose Maximal Frontier Betweenness Centrality (MFBC): a succinct BC algorithm based on novel sparse matrix multiplication routines that performs a factor of p1/3 less communication on p processors than the best known alternatives, for graphs with n vertices and average degree k = n/p2/3. We formulate, implement, and prove the correctness of MFBC for weighted graphs by leveraging monoids instead of semirings, which enables a surprisingly succinct formulation. MFBC scales well for both extremely sparse and relatively dense graphs. It automatically searches a space of distributed data decompositions and sparse matrix multiplication algorithms for the most advantageous configuration. The MFBC implementation outperforms the well-known CombBLAS library by up to 8x and shows more robust performance. Our design methodology is readily extensible to other graph problems.",
"title": ""
},
{
"docid": "826e01210bb9ce8171ed72043b4a304d",
"text": "Despite their local fluency, long-form text generated from RNNs is often generic, repetitive, and even self-contradictory. We propose a unified learning framework that collectively addresses all the above issues by composing a committee of discriminators that can guide a base RNN generator towards more globally coherent generations. More concretely, discriminators each specialize in a different principle of communication, such as Grice’s maxims, and are collectively combined with the base RNN generator through a composite decoding objective. Human evaluation demonstrates that text generated by our model is preferred over that of baselines by a large margin, significantly enhancing the overall coherence, style, and information of the generations.",
"title": ""
},
{
"docid": "1886f5d95b1db7c222bc23770835e2b7",
"text": "Signature files and inverted files are well-known index structures. In this paper we undertake a direct comparison of the two for searching for partially-specified queries in a large lexicon stored in main memory. Using n-grams to index lexicon terms, a bit-sliced signature file can be compressed to a smaller size than an inverted file if each n-gram sets only one bit in the term signature. With a signature width less than half the number of unique n-grams in the lexicon, the signature file method is about as fast as the inverted file method, and significantly smaller. Greater flexibility in memory usage and faster index generation time make signature files appropriate for searching large lexicons or other collections in an environment where memory is at a premium.",
"title": ""
},
{
"docid": "95514c6f357115ef181b652eedd780fd",
"text": "Application Programming Interfaces (APIs) are a tremendous resource—that is, when they are stable. Several studies have shown that this is unfortunately not the case. Of those, a large-scale study of API changes in the Pharo Smalltalk ecosystem documented several findings about API deprecations and their impact on API clients. We conduct a partial replication of this study, considering more than 25,000 clients of five popular Java APIs on GitHub. This work addresses several shortcomings of the previous study, namely: a study of several distinct API clients in a popular, statically-typed language, with more accurate version information. We compare and contrast our findings with the previous study and highlight new ones, particularly on the API client update practices and the startling similarities between reaction behavior in Smalltalk and Java.",
"title": ""
},
{
"docid": "70f1f5de73c3a605b296299505fd4e61",
"text": "Dropout is a popular stochastic regularization technique for deep neural networks that works by randomly dropping (i.e. zeroing) units from the network during training. This randomization process allows to implicitly train an ensemble of exponentially many networks sharing the same parametrization, which should be averaged at test time to deliver the final prediction. A typical workaround for this intractable averaging operation consists in scaling the layers undergoing dropout randomization. This simple rule called “standard dropout” is efficient, but might degrade the accuracy of the prediction. In this work we introduce a novel approach, coined “dropout distillation”, that allows us to train a predictor in a way to better approximate the intractable, but preferable, averaging process, while keeping under control its computational efficiency. We are thus able to construct models that are as efficient as standard dropout, or even more efficient, while being more accurate. Experiments on standard benchmark datasets demonstrate the validity of our method, yielding consistent improvements over conventional dropout.",
"title": ""
},
{
"docid": "0932bc0e6eafeeb8b64d7b41ca820ac8",
"text": "A novel, non-invasive, imaging methodology, based on the photoacoustic effect, is introduced in the context of artwork diagnostics with emphasis on the uncovering of hidden features such as underdrawings or original sketch lines in paintings. Photoacoustic microscopy, a rapidly growing imaging method widely employed in biomedical research, exploits the ultrasonic acoustic waves, generated by light from a pulsed or intensity modulated source interacting with a medium, to map the spatial distribution of absorbing components. Having over three orders of magnitude higher transmission through strongly scattering media, compared to light in the visible and near infrared, the photoacoustic signal offers substantially improved detection sensitivity and achieves excellent optical absorption contrast at high spatial resolution. Photoacoustic images, collected from miniature oil paintings on canvas, illuminated with a nanosecond pulsed Nd:YAG laser at 1064 nm on their reverse side, reveal clearly the presence of pencil sketch lines coated over by several paint layers, exceeding 0.5 mm in thickness. By adjusting the detection bandwidth of the optically induced ultrasonic waves, photoacoustic imaging can be used for looking into a broad variety of artefacts having diverse optical properties and geometrical profiles, such as manuscripts, glass objects, plastic modern art or even stone sculpture.",
"title": ""
}
] |
scidocsrr
|
0f7906ae6cc949541333e43ff695879a
|
Statistical transformer networks: learning shape and appearance models via self supervision
|
[
{
"docid": "de1f35d0e19cafc28a632984f0411f94",
"text": "Large-pose face alignment is a very challenging problem in computer vision, which is used as a prerequisite for many important vision tasks, e.g, face recognition and 3D face reconstruction. Recently, there have been a few attempts to solve this problem, but still more research is needed to achieve highly accurate results. In this paper, we propose a face alignment method for large-pose face images, by combining the powerful cascaded CNN regressor method and 3DMM. We formulate the face alignment as a 3DMM fitting problem, where the camera projection matrix and 3D shape parameters are estimated by a cascade of CNN-based regressors. The dense 3D shape allows us to design pose-invariant appearance features for effective CNN learning. Extensive experiments are conducted on the challenging databases (AFLW and AFW), with comparison to the state of the art.",
"title": ""
},
{
"docid": "6936b03672c64798ca4be118809cc325",
"text": "We present a deep learning framework for accurate visual correspondences and demonstrate its effectiveness for both geometric and semantic matching, spanning across rigid motions to intra-class shape or appearance variations. In contrast to previous CNN-based approaches that optimize a surrogate patch similarity objective, we use deep metric learning to directly learn a feature space that preserves either geometric or semantic similarity. Our fully convolutional architecture, along with a novel correspondence contrastive loss allows faster training by effective reuse of computations, accurate gradient computation through the use of thousands of examples per image pair and faster testing with O(n) feedforward passes for n keypoints, instead of O(n) for typical patch similarity methods. We propose a convolutional spatial transformer to mimic patch normalization in traditional features like SIFT, which is shown to dramatically boost accuracy for semantic correspondences across intra-class shape variations. Extensive experiments on KITTI, PASCAL and CUB-2011 datasets demonstrate the significant advantages of our features over prior works that use either hand-constructed or learned features.",
"title": ""
},
{
"docid": "b7387928fe8307063cafd6723c0dd103",
"text": "We introduce learned attention models into the radio machine learning domain for the task of modulation recognition by leveraging spatial transformer networks and introducing new radio domain appropriate transformations. This attention model allows the network to learn a localization network capable of synchronizing and normalizing a radio signal blindly with zero knowledge of the signal's structure based on optimization of the network for classification accuracy, sparse representation, and regularization. Using this architecture we are able to outperform our prior results in accuracy vs signal to noise ratio against an identical system without attention, however we believe such an attention model has implication far beyond the task of modulation recognition.",
"title": ""
},
{
"docid": "4551ee1978ef563259c8da64cc0d1444",
"text": "We present an approach to matching images of objects in fine-grained datasets without using part annotations, with an application to the challenging problem of weakly supervised single-view reconstruction. This is in contrast to prior works that require part annotations, since matching objects across class and pose variations is challenging with appearance features alone. We overcome this challenge through a novel deep learning architecture, WarpNet, that aligns an object in one image with a different object in another. We exploit the structure of the fine-grained dataset to create artificial data for training this network in an unsupervised-discriminative learning approach. The output of the network acts as a spatial prior that allows generalization at test time to match real images across variations in appearance, viewpoint and articulation. On the CUB-200-2011 dataset of bird categories, we improve the AP over an appearance-only network by 13.6%. We further demonstrate that our WarpNet matches, together with the structure of fine-grained datasets, allow single-view reconstructions with quality comparable to using annotated point correspondences.",
"title": ""
}
] |
[
{
"docid": "39c2c3e7f955425cd9aaad1951d13483",
"text": "This paper proposes a novel nature-inspired algorithm called Multi-Verse Optimizer (MVO). The main inspirations of this algorithm are based on three concepts in cosmology: white hole, black hole, and wormhole. The mathematical models of these three concepts are developed to perform exploration, exploitation, and local search, respectively. The MVO algorithm is first benchmarked on 19 challenging test problems. It is then applied to five real engineering problems to further confirm its performance. To validate the results, MVO is compared with four well-known algorithms: Grey Wolf Optimizer, Particle Swarm Optimization, Genetic Algorithm, and Gravitational Search Algorithm. The results prove that the proposed algorithm is able to provide very competitive results and outperforms the best algorithms in the literature on the majority of the test beds. The results of the real case studies also demonstrate the potential of MVO in solving real problems with unknown search spaces. Note that the source codes of the proposed MVO algorithm are publicly available at http://www.alimirjalili.com/MVO.html .",
"title": ""
},
{
"docid": "1afa72a646fcfa5dfe632126014f59be",
"text": "The virulence factor database (VFDB, http://www.mgc.ac.cn/VFs/) has served as a comprehensive repository of bacterial virulence factors (VFs) for >7 years. Bacterial virulence is an exciting and dynamic field, due to the availability of complete sequences of bacterial genomes and increasing sophisticated technologies for manipulating bacteria and bacterial genomes. The intricacy of virulence mechanisms offers a challenge, and there exists a clear need to decipher the 'language' used by VFs more effectively. In this article, we present the recent major updates of VFDB in an attempt to summarize some of the most important virulence mechanisms by comparing different compositions and organizations of VFs from various bacterial pathogens, identifying core components and phylogenetic clades and shedding new light on the forces that shape the evolutionary history of bacterial pathogenesis. In addition, the 2012 release of VFDB provides an improved user interface.",
"title": ""
},
{
"docid": "fa03fe8103c69dbb8328db899400cce4",
"text": "While deploying large scale heterogeneous robots in a wide geographical area, communicating among robots and robots with a central entity pose a major challenge due to robotic motion, distance and environmental constraints. In a cloud robotics scenario, communication challenges result in computational challenges as the computation is being performed at the cloud. Therefore fog nodes are introduced which shorten the distance between the robots and cloud and reduce the communication challenges. Fog nodes also reduce the computation challenges with extra compute power. However in the above scenario, maintaining continuous communication between the cloud and the robots either directly or via fog nodes is difficult. Therefore we propose a Distributed Cooperative Multi-robots Communication (DCMC) model where Robot to Robot (R2R), Robot to Fog (R2F) and Fog to Cloud (F2C) communications are being realized. Once the DCMC framework is formed, each robot establishes communication paths to maintain a consistent communication with the cloud. Further, due to mobility and environmental condition, maintaining link with a particular robot or a fog node becomes difficult. This requires pre-knowledge of the link quality such that appropriate R2R or R2F communication can be made possible. In a scenario where Global Positioning System (GPS) and continuous scanning of channels are not advisable due to energy or security constraints, we need an accurate link prediction mechanism. In this paper we propose a Collaborative Robotic based Link Prediction (CRLP) mechanism which predicts reliable communication and quantify link quality evolution in R2R and R2F communications without GPS and continuous channel scanning. We have validated our proposed schemes using joint Gazebo/Robot Operating System (ROS), MATLAB and Network Simulator (NS3) based simulations. Our schemes are efficient in terms of energy saving and accurate link prediction.",
"title": ""
},
{
"docid": "95af5f635e876c4c66711e86fa25d968",
"text": "Markerless vision-based human motion analysis has the potential to provide an inexpensive, non-obtrusive solution for the estimation of body poses. The significant research effort in this domain has been motivated by the fact that many application areas, including surveillance, Human–Computer Interaction and automatic annotation, will benefit from a robust solution. In this paper, we discuss the characteristics of human motion analysis. We divide the analysis into a modeling and an estimation phase. Modeling is the construction of the likelihood function, estimation is concerned with finding the most likely pose given the likelihood surface. We discuss model-free approaches separately. This taxonomy allows us to highlight trends in the domain and to point out limitations of the current state of the art. 2007 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "83e7119065ededfd731855fe76e76207",
"text": "Introduction: In recent years, the maturity model research has gained wide acceptance in the area of information systems and many Service Oriented Architecture (SOA) maturity models have been proposed. However, there are limited empirical studies on in-depth analysis and validation of SOA Maturity Models (SOAMMs). Objectives: The objective is to present a comprehensive comparison of existing SOAMMs to identify the areas of improvement and the research opportunities. Methods: A systematic literature review is conducted to explore the SOA adoption maturity studies. Results: A total of 20 unique SOAMMs are identified and analyzed in detail. A comparison framework is defined based on SOAMM design and usage support. The results provide guidance for SOA practitioners who are involved in selection, design, and implementation of SOAMMs. Conclusion: Although all SOAMMs propose a measurement framework, only a few SOAMMs provide guidance for selecting and prioritizing improvement measures. The current state of research shows that a gap exists in both prescriptive and descriptive purpose of SOAMM usage and it indicates the need for further research.",
"title": ""
},
{
"docid": "936048690fb043434c3ee0060c5bf7a5",
"text": "This paper asks whether case-based reasoning is an artificial intelligence (AI) technology like rule-based reasoning, neural networks or genetic algorithms or whether it is better described as a methodology for problem solving, that may use any appropriate technology. By describing four applications of case-based reasoning (CBR), that variously use: nearest neighbour, induction, fuzzy logic and SQL, the author shows that CBR is a methodology and not a technology. The implications of this are discussed. q 1999 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "eef87d8905b621d2d0bb2b66108a56c1",
"text": "We study deep learning approaches to inferring numerical coordinates for points of interest in an input image. Existing convolutional neural network-based solutions to this problem either take a heatmap matching approach or regress to coordinates with a fully connected output layer. Neither of these approaches is ideal, since the former is not entirely differentiable, and the latter lacks inherent spatial generalization. We propose our differentiable spatial to numerical transform (DSNT) to fill this gap. The DSNT layer adds no trainable parameters, is fully differentiable, and exhibits good spatial generalization. Unlike heatmap matching, DSNT works well with low heatmap resolutions, so it can be dropped in as an output layer for a wide range of existing fully convolutional architectures. Consequently, DSNT offers a better trade-off between inference speed and prediction accuracy compared to existing techniques. When used to replace the popular heatmap matching approach used in almost all state-of-the-art methods for pose estimation, DSNT gives better prediction accuracy for all model architectures tested.",
"title": ""
},
{
"docid": "2d73a7ab1e5a784d4755ed2fe44078db",
"text": "Over the last years, many papers have been published about how to use machine learning for classifying postings on microblogging platforms like Twitter, e.g., in order to assist users to reach tweets that interest them. Typically, the automatic classification results are then evaluated against a gold standard classification which consists of either (i) the hashtags of the tweets' authors, or (ii) manual annotations of independent human annotators. In this paper, we show that there are fundamental differences between these two kinds of gold standard classifications, i.e., human annotators are more likely to classify tweets like other human annotators than like the tweets' authors. Furthermore, we discuss how these differences may influence the evaluation of automatic classifications, like they may be achieved by Latent Dirichlet Allocation (LDA). We argue that researchers who conduct machine learning experiments for tweet classification should pay particular attention to the kind of gold standard they use. One may even argue that hashtags are not appropriate as a gold standard for tweet classification.",
"title": ""
},
{
"docid": "18caf39ce8802f69a463cc1a4b276679",
"text": "In this thesis we describe the formal verification of a fully IEEE compliant floating point unit (FPU). The hardware is verified on the gate-level against a formalization of the IEEE standard. The verification is performed using the theorem proving system PVS. The FPU supports both single and double precision floating point numbers, normal and denormal numbers, all four IEEE rounding modes, and exceptions as required by the standard. Beside the verification of the combinatorial correctness of the FPUs we pipeline the FPUs to allow the integration into an out-of-order processor. We formally define the correctness criterion the pipelines must obey in order to work properly within the processor. We then describe a new methodology based on combining model checking and theorem proving for the verification of the pipelines.",
"title": ""
},
{
"docid": "9fc869c7e7d901e418b1b69d636cbd33",
"text": "Selecting optimal parameters for a neural network architecture can often make the difference between mediocre and state-of-the-art performance. However, little is published which parameters and design choices should be evaluated or selected making the correct hyperparameter optimization often a “black art that requires expert experiences” (Snoek et al., 2012). In this paper, we evaluate the importance of different network design choices and hyperparameters for five common linguistic sequence tagging tasks (POS, Chunking, NER, Entity Recognition, and Event Detection). We evaluated over 50.000 different setups and found, that some parameters, like the pre-trained word embeddings or the last layer of the network, have a large impact on the performance, while other parameters, for example the number of LSTM layers or the number of recurrent units, are of minor importance. We give a recommendation on a configuration that performs well among different tasks. The optimized implementation of our BiLSTM-CRF architecture is publicly available.1 This publication explains in detail the experimental setup and discusses the results. A condensed version of this paper was presented at EMNLP 2017 (Reimers and Gurevych, 2017).2",
"title": ""
},
{
"docid": "9f660caf74f1708339f7ca2ee067dc95",
"text": "Abstruct-Vehicle following and its effects on traffic flow has been an active area of research. Human driving involves reaction times, delays, and human errors that affect traffic flow adversely. One way to eliminate human errors and delays in vehicle following is to replace the human driver with a computer control system and sensors. The purpose of this paper is to develop an autonomous intelligent cruise control (AICC) system for automatic vehicle following, examine its effect on traffic flow, and compare its performance with that of the human driver models. The AICC system developed is not cooperative; Le., it does not exchange information with other vehicles and yet is not susceptible to oscillations and \" slinky \" effects. The elimination of the \" slinky \" effect is achieved by using a safety distance separation rule that is proportional to the vehicle velocity (constant time headway) and by designing the control system appropriately. The performance of the AICC system is found to be superior to that of the human driver models considered. It has a faster and better transient response that leads to a much smoother and faster traffic flow. Computer simulations are used to study the performance of the proposed AICC system and analyze vehicle following in a single lane, without passing, under manual and automatic control. In addition, several emergency situations that include emergency stopping and cut-in cases were simulated. The simulation results demonstrate the effectiveness of the AICC system and its potentially beneficial effects on traffic flow.",
"title": ""
},
{
"docid": "6ced60cadf69a3cd73bcfd6a3eb7705e",
"text": "This review article summarizes the current literature regarding the analysis of running gait. It is compared to walking and sprinting. The current state of knowledge is presented as it fits in the context of the history of analysis of movement. The characteristics of the gait cycle and its relationship to potential and kinetic energy interactions are reviewed. The timing of electromyographic activity is provided. Kinematic and kinetic data (including center of pressure measurements, raw force plate data, joint moments, and joint powers) and the impact of changes in velocity on these findings is presented. The status of shoewear literature, alterations in movement strategies, the role of biarticular muscles, and the springlike function of tendons are addressed. This type of information can provide insight into injury mechanisms and training strategies. Copyright 1998 Elsevier Science B.V.",
"title": ""
},
{
"docid": "842cd58edd776420db869e858be07de4",
"text": "A nationwide interoperable public safety wireless broadband network is being planned by the First Responder Network Authority (FirstNet) under the auspices of the United States government. The public safety network shall provide the needed wireless coverage in the wake of an incident or a disaster. This paper proposes a drone-assisted multi-hop device-to-device (D2D) communication scheme as a means to extend the network coverage over regions where it is difficult to deploy a landbased relay. The resource are shared using either time division or frequency division scheme. Efficient algorithms are developed to compute the optimal position of the drone for maximizing the data rate, which are shown to be highly effective via simulations.",
"title": ""
},
{
"docid": "0aa566453fa3bd4bedec5ac3249d410a",
"text": "The approach of using passage-level evidence for document retrieval has shown mixed results when it is applied to a variety of test beds with different characteristics. One main reason of the inconsistent performance is that there exists no unified framework to model the evidence of individual passages within a document. This paper proposes two probabilistic models to formally model the evidence of a set of top ranked passages in a document. The first probabilistic model follows the retrieval criterion that a document is relevant if any passage in the document is relevant, and models each passage independently. The second probabilistic model goes a step further and incorporates the similarity correlations among the passages. Both models are trained in a discriminative manner. Furthermore, we present a combination approach to combine the ranked lists of document retrieval and passage-based retrieval.\n An extensive set of experiments have been conducted on four different TREC test beds to show the effectiveness of the proposed discriminative probabilistic models for passage-based retrieval. The proposed algorithms are compared with a state-of-the-art document retrieval algorithm and a language model approach for passage-based retrieval. Furthermore, our combined approach has been shown to provide better results than both document retrieval and passage-based retrieval approaches.",
"title": ""
},
{
"docid": "5aaba72970d1d055768e981f7e8e3684",
"text": "A hash table is a fundamental data structure in computer science that can offer rapid storage and retrieval of data. A leading implementation for string keys is the cacheconscious array hash table. Although fast with strings, there is currently no information in the research literatur e on its performance with integer keys. More importantly, we do not know how efficient an integer-based array hash table is compared to other hash tables that are designed for integers, such as bucketized cuckoo hashing. In this paper, we explain how to efficiently implement an array hash table for integers. We then demonstrate, through careful experimental evaluations, which hash table, whether it be a bucketized cuckoo hash table, an array hash table, or alternative hash table schemes such as linear probing, offers the best performance—with respect to time and space— for maintaining a large dictionary of integers in-memory, on a current cache-oriented processor.",
"title": ""
},
{
"docid": "69ddedba98e93523f698529716cf2569",
"text": "A fast and scalable graph processing method becomes increasingly important as graphs become popular in a wide range of applications and their sizes are growing rapidly. Most of distributed graph processing methods require a lot of machines equipped with a total of thousands of CPU cores and a few terabyte main memory for handling billion-scale graphs. Meanwhile, GPUs could be a promising direction toward fast processing of large-scale graphs by exploiting thousands of GPU cores. All of the existing methods using GPUs, however, fail to process large-scale graphs that do not fit in main memory of a single machine. Here, we propose a fast and scalable graph processing method GTS that handles even RMAT32 (64 billion edges) very efficiently only by using a single machine. The proposed method stores graphs in PCI-E SSDs and executes a graph algorithm using thousands of GPU cores while streaming topology data of graphs to GPUs via PCI-E interface. GTS is fast due to no communication overhead and scalable due to no data duplication from graph partitioning among machines. Through extensive experiments, we show that GTS consistently and significantly outperforms the major distributed graph processing methods, GraphX, Giraph, and PowerGraph, and the state-of-the-art GPU-based method TOTEM.",
"title": ""
},
{
"docid": "89b54aa0009598a4cb159b196f3749ee",
"text": "Several methods and techniques are potentially useful for the preparation of microparticles in the field of controlled drug delivery. The type and the size of the microparticles, the entrapment, release characteristics and stability of drug in microparticles in the formulations are dependent on the method used. One of the most common methods of preparing microparticles is the single emulsion technique. Poorly soluble, lipophilic drugs are successfully retained within the microparticles prepared by this method. However, the encapsulation of highly water soluble compounds including protein and peptides presents formidable challenges to the researchers. The successful encapsulation of such compounds requires high drug loading in the microparticles, prevention of protein and peptide degradation by the encapsulation method involved and predictable release, both rate and extent, of the drug compound from the microparticles. The above mentioned problems can be overcome by using the double emulsion technique, alternatively called as multiple emulsion technique. Aiming to achieve this various techniques have been examined to prepare stable formulations utilizing w/o/w, s/o/w, w/o/o, and s/o/o type double emulsion methods. This article reviews the current state of the art in double emulsion based technologies for the preparation of microparticles including the investigation of various classes of substances that are pharmaceutically and biopharmaceutically active.",
"title": ""
},
{
"docid": "ad4596e24f157653a36201767d4b4f3b",
"text": "We present a character-based model for joint segmentation and POS tagging for Chinese. The bidirectional RNN-CRF architecture for general sequence tagging is adapted and applied with novel vector representations of Chinese characters that capture rich contextual information and lower-than-character level features. The proposed model is extensively evaluated and compared with a state-of-the-art tagger respectively on CTB5, CTB9 and UD Chinese. The experimental results indicate that our model is accurate and robust across datasets in different sizes, genres and annotation schemes. We obtain stateof-the-art performance on CTB5, achieving 94.38 F1-score for joint segmentation and POS tagging.",
"title": ""
},
{
"docid": "708915f99102f80b026b447f858e3778",
"text": "One of the main obstacles to broad application of reinforcement learning methods is the parameter sensitivity of our core learning algorithms. In many large-scale applications, online computation and function approximation represent key strategies in scaling up reinforcement learning algorithms. In this setting, we have effective and reasonably well understood algorithms for adapting the learning-rate parameter, online during learning. Such meta-learning approaches can improve robustness of learning and enable specialization to current task, improving learning speed. For temporaldifference learning algorithms which we study here, there is yet another parameter, λ, that similarly impacts learning speed and stability in practice. Unfortunately, unlike the learning-rate parameter, λ parametrizes the objective function that temporal-difference methods optimize. Different choices of λ produce different fixed-point solutions, and thus adapting λ online and characterizing the optimization is substantially more complex than adapting the learningrate parameter. There are no meta-learning method for λ that can achieve (1) incremental updating, (2) compatibility with function approximation, and (3) maintain stability of learning under both on and off-policy sampling. In this paper we contribute a novel objective function for optimizing λ as a function of state rather than time. We derive a new incremental, linear complexity λ-adaption algorithm that does not require offline batch updating or access to a model of the world, and present a suite of experiments illustrating the practicality of our new algorithm in three different settings. Taken together, our contributions represent a concrete step towards black-box application of temporaldifference learning methods in real world problems.",
"title": ""
},
{
"docid": "021bed3f2c2f09db1bad7d11108ee430",
"text": "This is a review of Introduction to Circle Packing: The Theory of Discrete Analytic Functions, by Kenneth Stephenson, Cambridge University Press, Cambridge UK, 2005, pp. i-xii, 1–356, £42, ISBN-13 978-0-521-82356-2. 1. The Context: A Personal Reminiscence Two important stories in the recent history of mathematics are those of the geometrization of topology and the discretization of geometry. Having come of age during the unfolding of these stories as both observer and practitioner, this reviewer does not hold the detachment of the historian and, perhaps, can be forgiven the personal accounting that follows, along with its idiosyncratic telling. The first story begins at a time when the mathematical world is entrapped by abstraction. Bourbaki reigns and generalization is the cry of the day. Coxeter is a curious doddering uncle, at best tolerated, at worst vilified as a practitioner of the unsophisticated mathematics of the nineteenth century. 1.1. The geometrization of topology. It is 1978 and I have just begun my graduate studies in mathematics. There is some excitement in the air over ideas of Bill Thurston that purport to offer a way to resolve the Poincaré conjecture by using nineteenth century mathematics—specifically, the noneuclidean geometry of Lobachevski and Bolyai—to classify all 3-manifolds. These ideas finally appear in a set of notes from Princeton a couple of years later, and the notes are both fascinating and infuriating—theorems are left unstated and often unproved, chapters are missing never to be seen, the particular dominates—but the notes are bulging with beautiful and exciting ideas, often with but sketches of intricate arguments to support the landscape that Thurston sees as he surveys the topology of 3-manifolds. Thurston’s vision is a throwback to the previous century, having much in common with the highly geometric, highly particular landscape that inspired Felix Klein and Max Dehn. These geometers walked around and within Riemann surfaces, one of the hot topics of the day, knew them intimately, and understood them in their particularity, not from the rarified heights that captured the mathematical world in general, and topology in particular, in the period from the 1930’s until the 1970’s. The influence of Thurston’s Princeton notes on the development of topology over the next 30 years would be pervasive, not only in its mathematical content, but AMS SUBJECT CLASSIFICATION: 52C26",
"title": ""
}
] |
scidocsrr
|
cfa8312d2b9c69d3d5ae6b445350708c
|
The Application of Data Mining to Build Classification Model for Predicting Graduate Employment
|
[
{
"docid": "bfae60b46b97cf2491d6b1136c60f6a6",
"text": "Educational data mining concerns with developing methods for discovering knowledge from data that come from educational domain. In this paper we used educational data mining to improve graduate students’ performance, and overcome the problem of low grades of graduate students. In our case study we try to extract useful knowledge from graduate students data collected from the college of Science and Technology – Khanyounis. The data include fifteen years period [1993-2007]. After preprocessing the data, we applied data mining techniques to discover association, classification, clustering and outlier detection rules. In each of these four tasks, we present the extracted knowledge and describe its importance in educational domain.",
"title": ""
},
{
"docid": "2693030e6575cb7faec59aaec6387e2c",
"text": "Human Resource (HR) applications can be used to provide fair and consistent decisions, and to improve the effectiveness of decision making processes. Besides that, among the challenge for HR professionals is to manage organization talents, especially to ensure the right person for the right job at the right time. For that reason, in this article, we attempt to describe the potential to implement one of the talent management tasks i.e. identifying existing talent by predicting their performance as one of HR application for talent management. This study suggests the potential HR system architecture for talent forecasting by using past experience knowledge known as Knowledge Discovery in Database (KDD) or Data Mining. This article consists of three main parts; the first part deals with the overview of HR applications, the prediction techniques and application, the general view of Data mining and the basic concept of talent management in HRM. The second part is to understand the use of Data Mining technique in order to solve one of the talent management tasks, and the third part is to propose the potential HR system architecture for talent forecasting. Keywords—HR Application, Knowledge Discovery in Database (KDD), Talent Forecasting.",
"title": ""
},
{
"docid": "e390d922f802267ac4e7bd336080e2ca",
"text": "Assessment as a dynamic process produces data that reasonable conclusions are derived by stakeholders for decision making that expectedly impact on students' learning outcomes. The data mining methodology while extracting useful, valid patterns from higher education database environment contribute to proactively ensuring students maximize their academic output. This paper develops a methodology by the derivation of performance prediction indicators to deploying a simple student performance assessment and monitoring system within a teaching and learning environment by mainly focusing on performance monitoring of students' continuous assessment (tests) and examination scores in order to predict their final achievement status upon graduation. Based on various data mining techniques (DMT) and the application of machine learning processes, rules are derived that enable the classification of students in their predicted classes. The deployment of the prototyped solution, integrates measuring, 'recycling' and reporting procedures in the new system to optimize prediction accuracy.",
"title": ""
}
] |
[
{
"docid": "11a28e11ba6e7352713b8ee63291cd9c",
"text": "This review focuses on discussing the main changes on the upcoming fourth edition of the WHO Classification of Tumors of the Pituitary Gland emphasizing histopathological and molecular genetics aspects of pituitary neuroendocrine (i.e., pituitary adenomas) and some of the non-neuroendocrine tumors involving the pituitary gland. Instead of a formal review, we introduced the highlights of the new WHO classification by answering select questions relevant to practising pathologists. The revised classification of pituitary adenomas, in addition to hormone immunohistochemistry, recognizes the role of other immunohistochemical markers including but not limited to pituitary transcription factors. Recognizing this novel approach, the fourth edition of the WHO classification has abandoned the concept of \"a hormone-producing pituitary adenoma\" and adopted a pituitary adenohypophyseal cell lineage designation of the adenomas with subsequent categorization of histological variants according to hormone content and specific histological and immunohistochemical features. This new classification does not require a routine ultrastructural examination of these tumors. The new definition of the Null cell adenoma requires the demonstration of immunonegativity for pituitary transcription factors and adenohypophyseal hormones Moreover, the term of atypical pituitary adenoma is no longer recommended. In addition to the accurate tumor subtyping, assessment of the tumor proliferative potential by mitotic count and Ki-67 index, and other clinical parameters such as tumor invasion, is strongly recommended in individual cases for consideration of clinically aggressive adenomas. This classification also recognizes some subtypes of pituitary neuroendocrine tumors as \"high-risk pituitary adenomas\" due to the clinical aggressive behavior; these include the sparsely granulated somatotroph adenoma, the lactotroph adenoma in men, the Crooke's cell adenoma, the silent corticotroph adenoma, and the newly introduced plurihormonal Pit-1-positive adenoma (previously known as silent subtype III pituitary adenoma). An additional novel aspect of the new WHO classification was also the definition of the spectrum of thyroid transcription factor-1 expressing pituitary tumors of the posterior lobe as representing a morphological spectrum of a single nosological entity. These tumors include the pituicytoma, the spindle cell oncocytoma, the granular cell tumor of the neurohypophysis, and the sellar ependymoma.",
"title": ""
},
{
"docid": "ade486df9ce338e0760f357db2340e55",
"text": "The aim of the present study was to evaluate the effects of a 12-week home-based strength, explosive and plyometric (SEP) training on the cost of running (Cr) in well-trained ultra-marathoners and to assess the main mechanical parameters affecting changes in Cr. Twenty-five male runners (38.2 ± 7.1 years; body mass index: 23.0 ± 1.1 kg·m-2; V˙O2max: 55.4 ± 4.0 mlO2·kg-1·min-1) were divided into an exercise (EG = 13) and control group (CG = 12). Before and after a 12-week SEP training, Cr, spring-mass model parameters at four speeds (8, 10, 12, 14 km·h-1) were calculated and maximal muscle power (MMP) of the lower limbs was measured. In EG, Cr decreased significantly (p < .05) at all tested running speeds (-6.4 ± 6.5% at 8 km·h-1; -3.5 ± 5.3% at 10 km·h-1; -4.0 ± 5.5% at 12 km·h-1; -3.2 ± 4.5% at 14 km·h-1), contact time (tc) increased at 8, 10 and 12 km·h-1 by mean +4.4 ± 0.1% and ta decreased by -25.6 ± 0.1% at 8 km·h-1 (p < .05). Further, inverse relationships between changes in Cr and MMP at 10 (p = .013; r = -0.67) and 12 km·h-1 (p < .001; r = -0.86) were shown. Conversely, no differences were detected in the CG in any of the studied parameters. Thus, 12-week SEP training programme lower the Cr in well-trained ultra-marathoners at submaximal speeds. Increased tc and an inverse relationship between changes in Cr and changes in MMP could be in part explain the decreased Cr. Thus, adding at least three sessions per week of SEP exercises in the normal endurance-training programme may decrease the Cr.",
"title": ""
},
{
"docid": "0be3178ff2f412952934a49084ee8edc",
"text": "This article introduces the physics of information in the context of molecular biology and genomics. Entropy and information, the two central concepts of Shannon’s theory of information and communication, are often confused with each other but play transparent roles when applied to statistical ensembles (i.e., identically prepared sets) of symbolic sequences. Such an approach can distinguish between entropy and information in genes, predict the secondary structure of ribozymes, and detect the covariation between residues in folded proteins. We also review applications to molecular sequence and structure analysis, and introduce new tools in the characterization of resistance mutations, and in drug design. In a curious twist of history, the dawn of the age of genomics has both seen the rise of the science of bioinformatics as a tool to cope with the enormous amounts of data being generated daily, and the decline of the theory of information as applied to molecular biology. Hailed as a harbinger of a “new movement” (Quastler 1953) along with Cybernetics, the principles of information theory were thought to be applicable to the higher functions of living organisms, and able to analyze such functions as metabolism, growth, and differentiation (Quastler 1953). Today, the metaphors and the jargon of information theory are still widely used (Maynard Smith 1999a, 1999b), as opposed to the mathematical formalism, which is too often considered to be inapplicable to biological information. Clearly, looking back it appears that too much hope was laid upon this theory’s relevance for biology. However, there was well-founded optimism that information theory ought to be able to address the complex issues associated with the storage of information in the genetic code, only to be repeatedly questioned and rebuked (see, e.g., Vincent 1994, Sarkar 1996). In this article, I outline the concepts of entropy and information (as defined by Shannon) in the context of molecular biology. We shall see that not only are these terms well-defined and useful, they also coincide precisely with what we intuitively mean when we speak about information stored in genes, for example. I then present examples of applications of the theory to measuring the information content of biomolecules, the identification of polymorphisms, RNA and protein secondary structure prediction, the prediction and analysis of molecular interactions, and drug design. 1 Entropy and Information Entropy and information are often used in conflicting manners in the literature. A precise understanding, both mathematical and intuitive, of the notion of information (and its relationship to entropy) is crucial for applications in molecular biology. Therefore, let us begin by outlining Shannon’s original entropy concept (Shannon, 1948). 1.1 Shannon’s Uncertainty Measure Entropy in Shannon’s theory (defined mathematically below) is a measure of uncertainty about the identity of objects in an ensemble. Thus, while “en-",
"title": ""
},
{
"docid": "5464889be41072ecff03355bf45c289f",
"text": "Grid map registration is an important field in mobile robotics. Applications in which multiple robots are involved benefit from multiple aligned grid maps as they provide an efficient exploration of the environment in parallel. In this paper, a normal distribution transform (NDT)-based approach for grid map registration is presented. For simultaneous mapping and localization approaches on laser data, the NDT is widely used to align new laser scans to reference scans. The original grid quantization-based NDT results in good registration performances but has poor convergence properties due to discontinuities of the optimization function and absolute grid resolution. This paper shows that clustering techniques overcome disadvantages of the original NDT by significantly improving the convergence basin for aligning grid maps. A multi-scale clustering method results in an improved registration performance which is shown on real world experiments on radar data.",
"title": ""
},
{
"docid": "c0610eab7d3825d6b12959fedd9656ea",
"text": "We introduce a new deep convolutional neural network, CrescendoNet, by stacking simple building blocks without residual connections. Each Crescendo block contains independent convolution paths with increased depths. The numbers of convolution layers and parameters are only increased linearly in Crescendo blocks. In experiments, CrescendoNet with only 15 layers outperforms almost all networks without residual connections on benchmark datasets, CIFAR10, CIFAR100, and SVHN. Given sufficient amount of data as in SVHN dataset, CrescendoNet with 15 layers and 4.1M parameters can match the performance of DenseNet-BC with 250 layers and 15.3M parameters. CrescendoNet provides a new way to construct high performance deep convolutional neural networks with simple network architecture. Moreover, by investigating a various combination of subnetworks in CrescendoNet, we note that the high performance of CrescendoNet may come from its implicit ensemble behavior, which gives CrescendoNet an anytime classification property. Furthermore, the independence between paths in CrescendoNet allows us to introduce a new path-wise training procedure, which can reduce the memory needed for training.",
"title": ""
},
{
"docid": "6767096adc28681387c77a68a3468b10",
"text": "This study investigates fifty small and medium enterprises by using a survey approach to find out the key factors that are determinants to EDI adoption. Based upon the existing model, the study uses six factors grouped into three categories, namely organizational, environmental and technological aspects. The findings indicate that factors such as perceived benefits government support and management support are significant determinants of EDI adoption. The remaining factors like organizational culture, motivation to use EDI and task variety remain insignificant. Based upon the analysis of data, recommendations are made.",
"title": ""
},
{
"docid": "6989ae9a7e6be738d0d2e8261251a842",
"text": "A single-feed reconfigurable square-ring patch antenna with pattern diversity is presented. The antenna structure has four shorting walls placed respectively at each edge of the square-ring patch, in which two shorting walls are directly connected to the patch and the others are connected to the patch via pin diodes. By controlling the states of the pin diodes, the antenna can be operated at two different modes: monopolar plat-patch and normal patch modes; moreover, the 10 dB impedance bandwidths of the two modes are overlapped. Consequently, the proposed antenna allows its radiation pattern to be switched electrically between conical and broadside radiations at a fixed frequency. Detailed design considerations of the proposed antenna are described. Experimental and simulated results are also shown and discussed",
"title": ""
},
{
"docid": "2c1de0ee482b3563c6b0b49bfdbbe508",
"text": "The paper summarizes our research in the area of unsupervised categorization of Wikipedia articles. As a practical result of our research, we present an application of spectral clustering algorithm used for grouping Wikipedia search results. The main contribution of the paper is a representation method for Wikipedia articles that has been based on combination of words and links and used for categoriation of search results in this repository. We evaluate the proposed approach with Primary Component projections and show, on the test data, how usage of cosine transformation to create combined representations influence data variability. On sample test datasets, we also show how combined representation improves the data separation that increases overall results of data categorization. To implement the system, we review the main spectral clustering methods and we test their usability for text categorization. We give a brief description of the system architecture that groups online Wikipedia articles retrieved with user-specified keywords. Using the system, we show how clustering increases information retrieval effectiveness for Wikipedia data repository.",
"title": ""
},
{
"docid": "cb70ab2056242ca739adde4751fbca2c",
"text": "In this paper, we consider the task of learning control policies for text-based games. In these games, all interactions in the virtual world are through text and the underlying state is not observed. The resulting language barrier makes such environments challenging for automatic game players. We employ a deep reinforcement learning framework to jointly learn state representations and action policies using game rewards as feedback. This framework enables us to map text descriptions into vector representations that capture the semantics of the game states. We evaluate our approach on two game worlds, comparing against baselines using bag-ofwords and bag-of-bigrams for state representations. Our algorithm outperforms the baselines on both worlds demonstrating the importance of learning expressive representations. 1",
"title": ""
},
{
"docid": "2c2dee4689e48f1a7c0061ac7d60a16b",
"text": "Transfer learning algorithms are used when one has sufficient training data for one supervised learning task (the source task) but only very limited training data for a second task (the target task) that is similar but not identical to the first. These algorithms use varying assumptions about the similarity between the tasks to carry information from the source to the target task. Common assumptions are that only certain specific marginal or conditional distributions have changed while all else remains the same. Moreover, not much work on transfer learning has considered the case when a few labels in the test domain are available. Alternatively, if one has only the target task, but also has the ability to choose a limited amount of additional training data to collect, then active learning algorithms are used to make choices which will most improve performance on the target task. These algorithms may be combined into active transfer learning, but previous efforts have had to apply the two methods in sequence or use restrictive transfer assumptions. This thesis focuses on active transfer learning under the model shift assumption. We start by proposing two transfer learning algorithms that allow changes in all marginal and conditional distributions but assume the changes are smooth in order to achieve transfer between the tasks. We then propose an active learning algorithm for the second method that yields a combined active transfer learning algorithm. By analyzing the risk bounds for the proposed transfer learning algorithms, we show that when the conditional distribution changes, we are able to obtain a generalization error bound of O( 1 λ∗ √ nl ) with respect to the labeled target sample size nl, modified by the smoothness of the change (λ∗) across domains. Our analysis also sheds light on conditions when transfer learning works better than no-transfer learning (learning by labeled target data only). Furthermore, we consider a general case where both the support and the model change across domains. We transform both X (features) and Y (labels) by a parameterized-location-scale shift to achieve transfer between tasks. On the other hand, multi-task learning attempts to simultaneously leverage data from multiple domains in order to estimate related functions on each domain. Similar to transfer learning, multi-task problems are also solved by imposing some kind of “smooth” relationship among/between tasks. We study how different smoothness assumptions on task relations affect the upper bounds of algorithms proposed for these problems under different settings. Finally, we propose methods to predict the entire distribution P (Y ) and P (Y |X) by transfer, while allowing both marginal and conditional distributions to change. Moreover, we extend this framework to multi-source distribution transfer. We demonstrate the effectiveness of our methods on both synthetic examples and real-world applications, including yield estimation on the grape image dataset, predicting air-quality from Weibo posts for cities, predicting whether a robot successfully climbs over an obstacle, examination score prediction for schools, and location prediction for taxis. Acknowledgments First and foremost, I would like to express my sincere gratitude to my advisor Jeff Schneider, who has been the biggest help during my whole PhD life. His brilliant insights have helped me formulate the problems of this thesis, brainstorm on new ideas and exciting algorithms. 
I have learnt many things about research from him, including how to organize ideas in a paper, how to design experiments, and how to give a good academic talk. This thesis would not have been possible without his guidance, advice, patience and encouragement. I would like to thank my thesis committee members Christos Faloutsos, Geoff Gordon and Jerry Zhu for providing great insights and feedbacks on my thesis. Christos has been very nice and he always finds time to talk to me even if he is very busy. Geoff has provided great insights on extending my work to classification and helped me clarified many notations/descriptions in my thesis. Jerry has been very helpful in extending my work on the text data and providing me the air quality dataset. I feel very fortunate to have them as my committee members. I would also like to thank Professor Barnabás Póczos, Professor Roman Garnett and Professor Artur Dubrawski, for providing very helpful suggestions and collaborations during my PhD. I am very grateful to many of the faculty members at Carnegie Mellon. Eric Xing’s Machine Learning course has been my introduction course for Machine Learning at Carnegie Mellon and it has taught me a lot about the foundations of machine learning, including all the inspiring machine learning algorithms and the theories behind them. Larry Wasserman’s Intermediate Statistics and Statistical Machine Learning are both wonderful courses and have been keys to my understanding of the statistical perspective of many machine learning algorithms. Geoff Gordon and Ryan Tibshirani’s Convex Optimization course has been a great tutorial for me to develop all the efficient optimizing techniques for the algorithms I have proposed. Further I want to thank all my colleagues and friends at Carnegie Mellon, especially people from the Auton Lab and the Computer Science Department at CMU. I would like to thank Dougal Sutherland, Yifei Ma, Junier Oliva, Tzu-Kuo Huang for insightful discussions and advices for my research. I would also like to thank all my friends who have provided great support and help during my stay at Carnegie Mellon, and to name a few, Nan Li, Junchen Jiang, Guangyu Xia, Zi Yang, Yixin Luo, Lei Li, Lin Xiao, Liu Liu, Yi Zhang, Liang Xiong, Ligia Nistor, Kirthevasan Kandasamy, Madalina Fiterau, Donghan Wang, Yuandong Tian, Brian Coltin. I would also like to thank Prof. Alon Halevy, who has been a great mentor during my summer internship at google research and also has been a great help in my job searching process. Finally I would like to thank my family, my parents Sisi and Tiangui, for their unconditional love, endless support, and unwavering faith in me. I truly thank them for shaping who I am, for teaching me to be a person who would never lose hope and give up.",
"title": ""
},
{
"docid": "8245472f3dad1dce2f81e21b53af5793",
"text": "Butanol is an aliphatic saturated alcohol having the molecular formula of C(4)H(9)OH. Butanol can be used as an intermediate in chemical synthesis and as a solvent for a wide variety of chemical and textile industry applications. Moreover, butanol has been considered as a potential fuel or fuel additive. Biological production of butanol (with acetone and ethanol) was one of the largest industrial fermentation processes early in the 20th century. However, fermentative production of butanol had lost its competitiveness by 1960s due to increasing substrate costs and the advent of more efficient petrochemical processes. Recently, increasing demand for the use of renewable resources as feedstock for the production of chemicals combined with advances in biotechnology through omics, systems biology, metabolic engineering and innovative process developments is generating a renewed interest in fermentative butanol production. This article reviews biotechnological production of butanol by clostridia and some relevant fermentation and downstream processes. The strategies for strain improvement by metabolic engineering and further requirements to make fermentative butanol production a successful industrial process are also discussed.",
"title": ""
},
{
"docid": "f6362a62b69999bdc3d9f681b68842fc",
"text": "Women with breast cancer, whether screen detected or symptomatic, have both mammography and ultrasound for initial imaging assessment. Unlike X-ray or magnetic resonance, which produce an image of the whole breast, ultrasound provides comparatively limited 2D or 3D views located around the lesions. Combining different modalities is an essential task for accurate diagnosis and simulating ultrasound images based on whole breast data could be a way toward correlating different information about the same lesion. Very few studies have dealt with such a simulation framework since the breast undergoes large scale deformation between the prone position of magnetic resonance imaging and the largely supine or lateral position of ultrasound. We present a framework for the realistic simulation of 3D ultrasound images based on prone magnetic resonance images from which a supine position is generated using a biomechanical model. The simulation parameters are derived from a real clinical infrastructure and from transducers that are used for routine scans, leading to highly realistic ultrasound images of any region of the breast.",
"title": ""
},
{
"docid": "a2196e1ace9469ed1408f34ea67ee510",
"text": "Most current virtual reality (VR) interactions are mediated by hand-held input devices or hand gestures and they usually display only a partial representation of the user in the synthetic environment. We believe, representing the user as a full avatar that is controlled by natural movements of the person in the real world will lead to a greater sense of presence in VR. Possible applications exist in various domains such as entertainment, therapy, travel, real estate, education, social interaction and professional assistance. In this demo, we present MetaSpace, a virtual reality system that allows co-located users to explore a VR world together by walking around in physical space. Each user's body is represented by an avatar that is dynamically controlled by their body movements. We achieve this by tracking each user's body with a Kinect device such that their physical movements are mirrored in the virtual world. Users can see their own avatar and the other person's avatar allowing them to perceive and act intuitively in the virtual environment.",
"title": ""
},
{
"docid": "6f3bfd9b592654ca451eb5850e5684bc",
"text": "Mammals and birds have evolved three primary, discrete, interrelated emotion-motivation systems in the brain for mating, reproduction, and parenting: lust, attraction, and male-female attachment. Each emotion-motivation system is associated with a specific constellation of neural correlates and a distinct behavioral repertoire. Lust evolved to initiate the mating process with any appropriate partner; attraction evolved to enable individuals to choose among and prefer specific mating partners, thereby conserving their mating time and energy; male-female attachment evolved to enable individuals to cooperate with a reproductive mate until species-specific parental duties have been completed. The evolution of these three emotion-motivation systems contribute to contemporary patterns of marriage, adultery, divorce, remarriage, stalking, homicide and other crimes of passion, and clinical depression due to romantic rejection. This article defines these three emotion-motivation systems. Then it discusses an ongoing project using functional magnetic resonance imaging of the brain to investigate the neural circuits associated with one of these emotion-motivation systems, romantic attraction.",
"title": ""
},
{
"docid": "a2fc7b5fbb88e45c84400b1fe15368ee",
"text": "There is increasing evidence from functional magnetic resonance imaging (fMRI) that visual awareness is not only associated with activity in ventral visual cortex but also with activity in the parietal cortex. However, due to the correlational nature of neuroimaging, it remains unclear whether this parietal activity plays a causal role in awareness. In the experiment presented here we disrupted activity in right or left parietal cortex by applying repetitive transcranial magnetic stimulation (rTMS) over these areas while subjects attempted to detect changes between two images separated by a brief interval (i.e. 1-shot change detection task). We found that rTMS applied over right parietal cortex but not left parietal cortex resulted in longer latencies to detect changes and a greater rate of change blindness compared with no TMS. These results suggest that the right parietal cortex plays a critical role in conscious change detection.",
"title": ""
},
{
"docid": "6f0ffda347abfd11dc78c0b76ceb11f8",
"text": "A previous study of 22 medical patients with DSM-III-R-defined anxiety disorders showed clinically and statistically significant improvements in subjective and objective symptoms of anxiety and panic following an 8-week outpatient physician-referred group stress reduction intervention based on mindfulness meditation. Twenty subjects demonstrated significant reductions in Hamilton and Beck Anxiety and Depression scores postintervention and at 3-month follow-up. In this study, 3-year follow-up data were obtained and analyzed on 18 of the original 22 subjects to probe long-term effects. Repeated measures analysis showed maintenance of the gains obtained in the original study on the Hamilton [F(2,32) = 13.22; p < 0.001] and Beck [F(2,32) = 9.83; p < 0.001] anxiety scales as well as on their respective depression scales, on the Hamilton panic score, the number and severity of panic attacks, and on the Mobility Index-Accompanied and the Fear Survey. A 3-year follow-up comparison of this cohort with a larger group of subjects from the intervention who had met criteria for screening for the original study suggests generalizability of the results obtained with the smaller, more intensively studied cohort. Ongoing compliance with the meditation practice was also demonstrated in the majority of subjects at 3 years. We conclude that an intensive but time-limited group stress reduction intervention based on mindfulness meditation can have long-term beneficial effects in the treatment of people diagnosed with anxiety disorders.",
"title": ""
},
{
"docid": "0d65394a132dba6d4d6827be8afda33e",
"text": "PHYSICIANS’ ABILITY TO PROVIDE high-quality care can be adversely affected by many factors, including sleep deprivation. Concerns about the danger of physicians who are sleep deprived and providing care have led state legislatures and academic institutions to try to constrain the work hours of physicians in training (house staff). Unlike commercial aviation, for example, medicine is an industry in which public safety is directly at risk but does not have mandatory restrictions on work hours. Legislation before the US Congress calls for limiting resident work hours to 80 hours per week and no more than 24 hours of continuous work. Shifts of residents working in the emergency department would be limited to 12 hours. The proposed legislation, which includes public disclosure and civil penalties for hospitals that violate the work hour restrictions, does not address extended duty shifts of attending or private practice physicians. There is still substantial controversy within the medical community about the magnitude and significance of the clinical impairment resulting from work schedules that aggravate sleep deprivation. There is extensive literature on the adverse effects of sleep deprivation in laboratory and nonmedical settings. However, studies on sleep deprivation of physicians performing clinically relevant tasks have been less conclusive. Opinions have been further influenced by the potential adverse impact of reduced work schedules on the economics of health care, on continuity of care, and on quality of care. This review focuses on the consequences of sleep loss both in controlled laboratory environments and in clinical studies involving medical personnel.",
"title": ""
},
{
"docid": "2b8ca8be8d5e468d4cd285ecc726eceb",
"text": "These days, large-scale graph processing becomes more and more important. Pregel, inspired by Bulk Synchronous Parallel, is one of the highly used systems to process large-scale graph problems. In Pregel, each vertex executes a function and waits for a superstep to communicate its data to other vertices. Superstep is a very time-consuming operation, used by Pregel, to synchronize distributed computations in a cluster of computers. However, it may become a bottleneck when the number of communications increases in a graph with million vertices. Superstep works like a barrier in Pregel that increases the side effect of skew problem in distributed computing environment. ExPregel is a Pregel-like model that is designed to reduce the number of communication messages between two vertices resided on two different computational nodes. We have proven that ExPregel reduces the number of exchanged messages as well as the number of supersteps for all graph topologies. Enhancing parallelism in our new computational model is another important feature that manifolds the speed of graph analysis programs. More interestingly, ExPregel uses the same model of programming as Pregel. Our experiments on large-scale real-world graphs show that ExPregel can reduce network traffic as well as number of supersteps from 45% to 96%. Runtime speed up in the proposed model varies from 1.2× to 30×. Copyright © 2015 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "9b1f40687d0c9b78efdf6d1e19769294",
"text": "The ideal cell type to be used for cartilage therapy should possess a proven chondrogenic capacity, not cause donor-site morbidity, and should be readily expandable in culture without losing their phenotype. There are several cell sources being investigated to promote cartilage regeneration: mature articular chondrocytes, chondrocyte progenitors, and various stem cells. Most recently, stem cells isolated from joint tissue, such as chondrogenic stem/progenitors from cartilage itself, synovial fluid, synovial membrane, and infrapatellar fat pad (IFP) have gained great attention due to their increased chondrogenic capacity over the bone marrow and subcutaneous adipose-derived stem cells. In this review, we first describe the IFP anatomy and compare and contrast it with other adipose tissues, with a particular focus on the embryological and developmental aspects of the tissue. We then discuss the recent advances in IFP stem cells for regenerative medicine. We compare their properties with other stem cell types and discuss an ontogeny relationship with other joint cells and their role on in vivo cartilage repair. We conclude with a perspective for future clinical trials using IFP stem cells.",
"title": ""
}
] |
scidocsrr
|
572f73117743b5aae18509fad7d8f075
|
Combined treatment with botulinum toxin and hyaluronic acid to correct unsightly lateral-chin depression
|
[
{
"docid": "8f707a1599b5d97aab020dcedfe3bb96",
"text": "Facial aging reflects the dynamic, cumulative effects of time on the skin, soft tissues, and deep structural components of the face, and is a complex synergy of skin textural changes and loss of facial volume. Many of the facial manifestations of aging reflect the combined effects of gravity, progressive bone resorption, decreased tissue elasticity, and redistribution of subcutaneous fullness. A convenient method for assessing the morphological effects of aging is to divide the face into the upper third (forehead and brows), middle third (midface and nose), and lower third (chin, jawline, and neck). The midface is an important factor in facial aesthetics because perceptions of facial attractiveness are largely founded on the synergy of the eyes, nose, lips, and cheek bones (central facial triangle). For aesthetic purposes, this area should be considered from a 3-dimensional rather than a 2-dimensional perspective, and restoration of a youthful 3-dimensional facial topography should be regarded as the primary goal in facial rejuvenation. Recent years have seen a significant increase in the number of nonsurgical procedures performed for facial rejuvenation. Patients seeking alternatives to surgical procedures include those who require restoration of lost facial volume, those who wish to enhance normal facial features, and those who want to correct facial asymmetry. Important factors in selecting a nonsurgical treatment option include the advantages of an immediate cosmetic result and a short recovery time.",
"title": ""
}
] |
[
{
"docid": "6824f227a05b30b9e09ea9a4d16429b0",
"text": "This study presents a Long Short-Term Memory (LSTM) neural network approach to Japanese word segmentation (JWS). Previous studies on Chinese word segmentation (CWS) succeeded in using recurrent neural networks such as LSTM and gated recurrent units (GRU). However, in contrast to Chinese, Japanese includes several character types, such as hiragana, katakana, and kanji, that produce orthographic variations and increase the difficulty of word segmentation. Additionally, it is important for JWS tasks to consider a global context, and yet traditional JWS approaches rely on local features. In order to address this problem, this study proposes employing an LSTMbased approach to JWS. The experimental results indicate that the proposed model achieves state-of-the-art accuracy with respect to various Japanese corpora.",
"title": ""
},
{
"docid": "d055902aa91efacb35a204132c51a68e",
"text": "This paper provides a method for improving tensor-based compositional distributional models of meaning by the addition of an explicit disambiguation step prior to composition. In contrast with previous research where this hypothesis has been successfully tested against relatively simple compositional models, in our work we use a robust model trained with linear regression. The results we get in two experiments show the superiority of the prior disambiguation method and suggest that the effectiveness of this approach is modelindependent.",
"title": ""
},
{
"docid": "8583ec1f457469dac3e2517a90a58423",
"text": "Sentiment analysis is the automatic classification of the overall opinion conveyed by a text towards its subject matter. This paper discusses an experiment in the sentiment analysis of of a collection of movie reviews that have been automatically translated to Indonesian. Following [1], we employ three well known classification techniques: naive bayes, maximum entropy, and support vector machines, employing unigram presence and frequency values as the features. The translation is achieved through machine translation and simple word substitutions based on a bilingual dictionary constructed from various online resources. Analysis of the Indonesian translations yielded an accuracy of up to 78.82%, still short of the accuracy for the English documents (80.09%), but satisfactorily high given the simple translation approach.",
"title": ""
},
{
"docid": "4ebdfc3fe891f11902fb94973b6be582",
"text": "This work introduces the CASCADE error correction protocol and LDPC (Low-Density Parity Check) error correction codes which are both parity check based. We also give the results of computer simulations that are performed for comparing their performances (redundant information, success).",
"title": ""
},
{
"docid": "0a0ec569738b90f44b0c20870fe4dc2f",
"text": "Transactional memory provides a concurrency control mechanism that avoids many of the pitfalls of lock-based synchronization. Researchers have proposed several different implementations of transactional memory, broadly classified into software transactional memory (STM) and hardware transactional memory (HTM). Both approaches have their pros and cons: STMs provide rich and flexible transactional semantics on stock processors but incur significant overheads. HTMs, on the other hand, provide high performance but implement restricted semantics or add significant hardware complexity. This paper is the first to propose architectural support for accelerating transactions executed entirely in software. We propose instruction set architecture (ISA) extensions and novel hardware mechanisms that improve STM performance. We adapt a high-performance STM algorithm supporting rich transactional semantics to our ISA extensions (called hardware accelerated software transactional memory or HASTM). HASTM accelerates fully virtualized nested transactions, supports language integration, and provides both object-based and cache-line based conflict detection. We have implemented HASTM in an accurate multi-core IA32 simulator. Our simulation results show that (1) HASTM single-thread performance is comparable to a conventional HTM implementation; (2) HASTM scaling is comparable to a STM implementation; and (3) HASTM is resilient to spurious aborts and can scale better than HTM in a multi-core setting. Thus, HASTM provides the flexibility and rich semantics of STM, while giving the performance of HTM.",
"title": ""
},
{
"docid": "27465b2c8ce92ccfbbda6c802c76838f",
"text": "Nonlinear hyperelastic energies play a key role in capturing the fleshy appearance of virtual characters. Real-world, volume-preserving biological tissues have Poisson’s ratios near 1/2, but numerical simulation within this regime is notoriously challenging. In order to robustly capture these visual characteristics, we present a novel version of Neo-Hookean elasticity. Our model maintains the fleshy appearance of the Neo-Hookean model, exhibits superior volume preservation, and is robust to extreme kinematic rotations and inversions. We obtain closed-form expressions for the eigenvalues and eigenvectors of all of the system’s components, which allows us to directly project the Hessian to semipositive definiteness, and also leads to insights into the numerical behavior of the material. These findings also inform the design of more sophisticated hyperelastic models, which we explore by applying our analysis to Fung and Arruda-Boyce elasticity. We provide extensive comparisons against existing material models.",
"title": ""
},
{
"docid": "266b9bfde23fdfaedb35d293f7293c93",
"text": "We introduce a neural method for transfer learning between two (source and target) classification tasks or aspects over the same domain. Rather than training on target labels, we use a few keywords pertaining to source and target aspects indicating sentence relevance instead of document class labels. Documents are encoded by learning to embed and softly select relevant sentences in an aspect-dependent manner. A shared classifier is trained on the source encoded documents and labels, and applied to target encoded documents. We ensure transfer through aspect-adversarial training so that encoded documents are, as sets, aspect-invariant. Experimental results demonstrate that our approach outperforms different baselines and model variants on two datasets, yielding an improvement of 27% on a pathology dataset and 5% on a review dataset.",
"title": ""
},
{
"docid": "3a9906059717d67a768c2928abbf6247",
"text": "UNLABELLED\nAssessment of depth of anesthesia is the basis in anesthesiologists work because the occurrence of awareness during general anesthesia is important due to stress, which is caused in the patient at that moment, and due to complications that may arise later. There are subjective and objective methods used to estimate the depth of anesthesia. The aim of this study was to assess the depth of anesthesia based on clinical parameters and on the basis bispectral index, and determine the part of bispectral monitoring in support to clinical assessment.\n\n\nMATERIAL AND METHODS\nSixty patients divided into two groups were analyzed in a prospective study. In first group (group 1), the depth of anesthesia was assessed by PRST score, and in the second group (group 2) was assessed by bispectral monitoring with determination PRST score concurrently. In both groups PRST score was assessed in four periods, while bispectral monitoring is used continuously. For analysis were used the BIS index values from the equivalent periods as PRST scores. PRST score value 0-3, and BIS index 40-60 were considered as adequate depth of anesthesia. The results showed that in our study were not waking patients during the surgery. In the group where the depth of anesthesia assessed clinically, we had a few of respondents (13%) for whom at some point were present indicators of light anesthesia. Postoperative interview excluded the possibility of intraoperative awareness. In the second group of patients and objective and clinical assessment indicated at all times to adequate depth of anesthesia.\n\n\nCONCLUSION\nThe use of BIS monitoring with clinical assessment allows anesthesiologists precise decision-making in balancing and dosage of anesthetics and other drugs, as well as treatment in certain situations.",
"title": ""
},
{
"docid": "13adeafcb8c1c20e71ca086a0d364e64",
"text": "This paper targets learning robust image representation for single training sample per person face recognition. Motivated by the success of deep learning in image representation, we propose a supervised autoencoder, which is a new type of building block for deep architectures. There are two features distinct our supervised autoencoder from standard autoencoder. First, we enforce the faces with variants to be mapped with the canonical face of the person, for example, frontal face with neutral expression and normal illumination; Second, we enforce features corresponding to the same person to be similar. As a result, our supervised autoencoder extracts the features which are robust to variances in illumination, expression, occlusion, and pose, and facilitates the face recognition. We stack such supervised autoencoders to get the deep architecture and use it for extracting features in image representation. Experimental results on the AR, Extended Yale B, CMU-PIE, and Multi-PIE data sets demonstrate that by coupling with the commonly used sparse representation-based classification, our stacked supervised autoencoders-based face representation significantly outperforms the commonly used image representations in single sample per person face recognition, and it achieves higher recognition accuracy compared with other deep learning models, including the deep Lambertian network, in spite of much less training data and without any domain information. Moreover, supervised autoencoder can also be used for face verification, which further demonstrates its effectiveness for face representation.",
"title": ""
},
{
"docid": "bd3cedfd42e261e9685cf402fc44c914",
"text": "OBJECTIVES\nThe objective of this study was to compile existing scientific evidence regarding the effects of essential oils (EOs) administered via inhalation for the alleviation of nausea and vomiting.\n\n\nMETHODS\nCINAHL, PubMed, and EBSCO Host and Science Direct databases were searched for articles related to the use of EOs and/or aromatherapy for nausea and vomiting. Only articles using English as a language of publication were included. Eligible articles included all forms of evidence (nonexperimental, experimental, case report). Interventions were limited to the use of EOs by inhalation of their vapors to treat symptoms of nausea and vomiting in various conditions regardless of age group. Studies where the intervention did not utilize EOs or were concerned with only alcohol inhalation and trials that combined the use of aromatherapy with other treatments (massage, relaxations, or acupressure) were excluded.\n\n\nRESULTS\nFive (5) articles met the inclusion criteria encompassing trials with 328 respondents. Their results suggest that the inhaled vapor of peppermint or ginger essential oils not only reduced the incidence and severity of nausea and vomiting but also decreased antiemetic requirements and consequently improved patient satisfaction. However, a definitive conclusion could not be drawn due to methodological flaws in the existing research articles and an acute lack of additional research in this area.\n\n\nCONCLUSIONS\nThe existing evidence is encouraging but yet not compelling. Hence, further well-designed large trials are needed before confirmation of EOs effectiveness in treating nausea and vomiting can be strongly substantiated.",
"title": ""
},
{
"docid": "57225d9e25270898f78921703c5db93f",
"text": "This paper summarizes the main problems and solutions of power quality in microgrids, distributed-energy-storage systems, and ac/dc hybrid microgrids. First, the power quality enhancement of grid-interactive microgrids is presented. Then, the cooperative control for enhance voltage harmonics and unbalances in microgrids is reviewed. Afterward, the use of static synchronous compensator (STATCOM) in grid-connected microgrids is introduced in order to improve voltage sags/swells and unbalances. Finally, the coordinated control of distributed storage systems and ac/dc hybrid microgrids is explained.",
"title": ""
},
{
"docid": "320bd26aa73ca080de8ba1da70809ee3",
"text": "Attention-based sequence-to-sequence model has proved successful in Neural Machine Translation (NMT). However, the attention without consideration of decoding history, which includes the past information in the decoder and the attention mechanism, often causes much repetition. To address this problem, we propose the decoding-history-based Adaptive Control of Attention (ACA) for the NMT model. ACA learns to control the attention by keeping track of the decoding history and the current information with a memory vector, so that the model can take the translated contents and the current information into consideration. Experiments on Chinese-English translation and the EnglishVietnamese translation have demonstrated that our model significantly outperforms the strong baselines. The analysis shows that our model is capable of generating translation with less repetition and higher accuracy. The code will be available at https://github.com/lancopku",
"title": ""
},
{
"docid": "b29f2d688e541463b80006fac19eaf20",
"text": "Autonomous navigation has become an increasingly popular machine learning application. Recent advances in deep learning have also brought huge improvements to autonomous navigation. However, prior outdoor autonomous navigation methods depended on various expensive sensors or expensive and sometimes erroneously labeled real data. In this paper, we propose an autonomous navigation method that does not require expensive labeled real images and uses only a relatively inexpensive monocular camera. Our proposed method is based on (1) domain adaptation with an adversarial learning framework and (2) exploiting synthetic data from a simulator. To the best of the authors’ knowledge, this is the first work to apply domain adaptation with adversarial networks to autonomous navigation. We present empirical results on navigation in outdoor courses using an unmanned aerial vehicle. The performance of our method is comparable to that of a supervised model with labeled real data, although our method does not require any label information for the real data. Our proposal includes a theoretical analysis that supports the applicability of our approach.",
"title": ""
},
{
"docid": "6a59641369fefcb7c7a917718f1d067c",
"text": "This paper presents an adaptive fuzzy sliding-mode dynamic controller (AFSMDC) of the car-like mobile robot (CLMR) for the trajectory tracking issue. First, a kinematics model of the nonholonomic CLMR is introduced. Then, according to the Lagrange formula, a dynamic model of the CLMR is created. For a real time trajectory tracking problem, an optimal controller capable of effectively driving the CLMR to track the desired trajectory is necessary. Therefore, an AFSMDC is proposed to accomplish the tracking task and to reduce the effect of the external disturbances and system uncertainties of the CLMR. The proposed controller could reduce the tracking errors between the output of the velocity controller and the real velocity of the CLMR. Therefore, the CLMR could track the desired trajectory without posture and orientation errors. Additionally, the stability of the proposed controller is proven by utilizing the Lyapunov stability theory. Finally, the simulation results validate the effectiveness of the proposed AFSMDC.",
"title": ""
},
{
"docid": "d242ef5126dfb2db12b54c15be61367e",
"text": "RankNet is one of the widely adopted ranking models for web search tasks. However, adapting a generic RankNet for personalized search is little studied. In this paper, we first continue-trained a variety of RankNets with different number of hidden layers and network structures over a previously trained global RankNet model, and observed that a deep neural network with five hidden layers gives the best performance. To further improve the performance of adaptation, we propose a set of novel methods categorized into two groups. In the first group, three methods are proposed to properly assess the usefulness of each adaptation instance and only leverage the most informative instances to adapt a user-specific RankNet model. These assessments are based on KL-divergence, click entropy or a heuristic to ignore top clicks in adaptation queries. In the second group, two methods are proposed to regularize the training of the neural network in RankNet: one of these methods regularize the error back-propagation via a truncated gradient approach, while the other method limits the depth of the back propagation when adapting the neural network. We empirically evaluate our approaches using a large-scale real-world data set. Experimental results exhibit that our methods all give significant improvements over a strong baseline ranking system, and the truncated gradient approach gives the best performance, significantly better than all others.",
"title": ""
},
{
"docid": "88e97dc5105ef142d422bec88e897ddd",
"text": "This paper reports on an experiment realized on the IBM 5Q chip which demonstrates strong evidence for the advantage of using error detection and fault-tolerant design of quantum circuits. By showing that fault-tolerant quantum computation is already within our reach, the author hopes to encourage this approach.",
"title": ""
},
{
"docid": "96a01ec78b8b7319985e5b377c50e4a2",
"text": "Electronic Voting are now being performed using World Wide Web in many countries of the world due to this advancement a voter need not to visit the polling place. But has to just logging on the computer with an internet connection. Also, thi s voting requires an access code for the e-voting through the advance report of a voter. To reduce these disadvantages, we suggest a process in which a voter, who has the wireless certificate issued in advance, uses its own mobile phone for an e-voting without the unique r gistration for a vote. In this paper, a polling scheme by mean s of mobile technology is resented as most fundamental applicat ion of GSM based Personal Response System, which allows a vote r t cast his vote in simple and convenient way without the limit of time and location by integrating an electronic voting method with the GSM infrastructure. Key Terms: Voting; Mobile Terminal; Confidentiali ty; Anonymity Full Text: http://www.ijcsmc.com/docs/papers/May2013/V2I5201354.pdf",
"title": ""
},
{
"docid": "5f9cd16a420b2f6b04e504d2b2dae111",
"text": "This paper addresses on-chip solar energy harvesting and proposes a circuit that can be employed to generate high voltages from integrated photodiodes. The proposed circuit uses a switched-inductor approach to avoid stacking photodiodes to generate high voltages. The effect of parasitic photodiodes present in integrated circuits (ICs) is addressed and a solution to minimize their impact is presented. The proposed circuit employs two switch transistors and two off-chip components: an inductor and a capacitor. A theoretical analysis of a switched-inductor dc-dc converter is carried out and a mathematical model of the energy harvester is developed. Measurements taken from a fabricated IC are presented and shown to be in good agreement with hardware measurements. Measurement results show that voltages of up to 2.81 V (depending on illumination and loading conditions) can be generated from a single integrated photodiode. The energy harvester circuit achieves a maximum conversion efficiency of 59%.",
"title": ""
},
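The abstract above mentions a theoretical analysis of the switched-inductor dc-dc converter without giving a formula. As a hedged point of reference only, the textbook relation for an ideal step-up (boost) stage in continuous conduction is shown below; the paper's own model is more detailed and additionally accounts for the parasitic photodiodes and loading, so this is not the authors' derivation:

```latex
% Ideal, lossless boost stage in continuous conduction; D is the switch duty cycle.
V_{\mathrm{out}} = \frac{V_{\mathrm{in}}}{1 - D}, \qquad 0 \le D < 1
```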
{
"docid": "412278d78888fc4ee28c666133c9bd24",
"text": "A future Internet of Things (IoT) system will connect the physical world into cyberspace everywhere and everything via billions of smart objects. On the one hand, IoT devices are physically connected via communication networks. The service oriented architecture (SOA) can provide interoperability among heterogeneous IoT devices in physical networks. On the other hand, IoT devices are virtually connected via social networks. In this paper we propose adaptive and scalable trust management to support service composition applications in SOA-based IoT systems. We develop a technique based on distributed collaborative filtering to select feedback using similarity rating of friendship, social contact, and community of interest relationships as the filter. Further we develop a novel adaptive filtering technique to determine the best way to combine direct trust and indirect trust dynamically to minimize convergence time and trust estimation bias in the presence of malicious nodes performing opportunistic service and collusion attacks. For scalability, we consider a design by which a capacity-limited node only keeps trust information of a subset of nodes of interest and performs minimum computation to update trust. We demonstrate the effectiveness of our proposed trust management through service composition application scenarios with a comparative performance analysis against EigenTrust and PeerTrust.",
"title": ""
}
] |
scidocsrr
|
a4d745c2fc2cda17a8fae0144657e9f7
|
Single Document Summarization based on Nested Tree Structure
|
[
{
"docid": "e34c102bf9c690e394ce7e373128be10",
"text": "We learn a joint model of sentence extraction and compression for multi-document summarization. Our model scores candidate summaries according to a combined linear model whose features factor over (1) the n-gram types in the summary and (2) the compressions used. We train the model using a marginbased objective whose loss captures end summary quality. Because of the exponentially large set of candidate summaries, we use a cutting-plane algorithm to incrementally detect and add active constraints efficiently. Inference in our model can be cast as an ILP and thereby solved in reasonable time; we also present a fast approximation scheme which achieves similar performance. Our jointly extracted and compressed summaries outperform both unlearned baselines and our learned extraction-only system on both ROUGE and Pyramid, without a drop in judged linguistic quality. We achieve the highest published ROUGE results to date on the TAC 2008 data set.",
"title": ""
}
] |
[
{
"docid": "51ac5dde554fd8363fcf95e6d3caf439",
"text": "Swarm intelligence is a relatively novel field. It addresses the study of the collective behaviors of systems made by many components that coordinate using decentralized controls and self-organization. A large part of the research in swarm intelligence has focused on the reverse engineering and the adaptation of collective behaviors observed in natural systems with the aim of designing effective algorithms for distributed optimization. These algorithms, like their natural systems of inspiration, show the desirable properties of being adaptive, scalable, and robust. These are key properties in the context of network routing, and in particular of routing in wireless sensor networks. Therefore, in the last decade, a number of routing protocols for wireless sensor networks have been developed according to the principles of swarm intelligence, and, in particular, taking inspiration from the foraging behaviors of ant and bee colonies. In this paper, we provide an extensive survey of these protocols. We discuss the general principles of swarm intelligence and of its application to routing. We also introduce a novel taxonomy for routing protocols in wireless sensor networks and use it to classify the surveyed protocols. We conclude the paper with a critical analysis of the status of the field, pointing out a number of fundamental issues related to the (mis) use of scientific methodology and evaluation procedures, and we identify some future research directions. 2010 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "bb00d7cb37248e4974319ba8d5306bbe",
"text": "Attention can be focused volitionally by \"top-down\" signals derived from task demands and automatically by \"bottom-up\" signals from salient stimuli. The frontal and parietal cortices are involved, but their neural activity has not been directly compared. Therefore, we recorded from them simultaneously in monkeys. Prefrontal neurons reflected the target location first during top-down attention, whereas parietal neurons signaled it earlier during bottom-up attention. Synchrony between frontal and parietal areas was stronger in lower frequencies during top-down attention and in higher frequencies during bottom-up attention. This result indicates that top-down and bottom-up signals arise from the frontal and sensory cortex, respectively, and different modes of attention may emphasize synchrony at different frequencies.",
"title": ""
},
{
"docid": "7f5ff39232cd491e648d40b070e0709c",
"text": "Synthesizing terrain or adding detail to terrains manually is a long and tedious process. With procedural synthesis methods this process is faster but more difficult to control. This paper presents a new technique of terrain synthesis that uses an existing terrain to synthesize new terrain. To do this we use multi-resolution analysis to extract the high-resolution details from existing models and apply them to increase the resolution of terrain. Our synthesized terrains are more heterogeneous than procedural results, are superior to terrains created by texture transfer, and retain the large-scale characteristics of the original terrain.",
"title": ""
},
{
"docid": "2a4124045b9c422c3fc9aa7059613398",
"text": "Cotraining, a paradigm of semisupervised learning, is promised to alleviate effectively the shortage of labeled examples in supervised learning. The standard two-view cotraining requires the data set to be described by two views of features, and previous studies have shown that cotraining works well if the two views satisfy the sufficiency and independence assumptions. In practice, however, these two assumptions are often not known or ensured (even when the two views are given). More commonly, most supervised data sets are described by one set of attributes (one view). Thus, they need be split into two views in order to apply the standard two-view cotraining. In this paper, we first propose a novel approach to empirically verify the two assumptions of cotraining given two views. Then, we design several methods to split single view data sets into two views, in order to make cotraining work reliably well. Our empirical results show that, given a whole or a large labeled training set, our view verification and splitting methods are quite effective. Unfortunately, cotraining is called for precisely when the labeled training set is small. However, given small labeled training sets, we show that the two cotraining assumptions are difficult to verify, and view splitting is unreliable. Our conclusions for cotraining's effectiveness are mixed. If two views are given, and known to satisfy the two assumptions, cotraining works well. Otherwise, based on small labeled training sets, verifying the assumptions or splitting single view into two views are unreliable; thus, it is uncertain whether the standard cotraining would work or not.",
"title": ""
},
{
"docid": "50e7e02f9a4b8b65cf2bce212314e77c",
"text": "Over the past few years, massive amounts of world knowledge have been accumulated in publicly available knowledge bases, such as Freebase, NELL, and YAGO. Yet despite their seemingly huge size, these knowledge bases are greatly incomplete. For example, over 70% of people included in Freebase have no known place of birth, and 99% have no known ethnicity. In this paper, we propose a way to leverage existing Web-search-based question-answering technology to fill in the gaps in knowledge bases in a targeted way. In particular, for each entity attribute, we learn the best set of queries to ask, such that the answer snippets returned by the search engine are most likely to contain the correct value for that attribute. For example, if we want to find Frank Zappa's mother, we could ask the query `who is the mother of Frank Zappa'. However, this is likely to return `The Mothers of Invention', which was the name of his band. Our system learns that it should (in this case) add disambiguating terms, such as Zappa's place of birth, in order to make it more likely that the search results contain snippets mentioning his mother. Our system also learns how many different queries to ask for each attribute, since in some cases, asking too many can hurt accuracy (by introducing false positives). We discuss how to aggregate candidate answers across multiple queries, ultimately returning probabilistic predictions for possible values for each attribute. Finally, we evaluate our system and show that it is able to extract a large number of facts with high confidence.",
"title": ""
},
{
"docid": "4a52f4c8f08cefac9d81296dbb853d6e",
"text": "Echo chambers, i.e., situations where one is exposed only to opinions that agree with their own, are an increasing concern for the political discourse in many democratic countries. This paper studies the phenomenon of political echo chambers on social media. We identify the two components in the phenomenon: the opinion that is shared (“echo”), and the place that allows its exposure (“chamber” — the social network), and examine closely at how these two components interact. We de ne a production and consumption measure for social-media users, which captures the political leaning of the content shared and received by them. By comparing the two, we nd that Twitter users are, to a large degree, exposed to political opinions that agree with their own. We also nd that users who try to bridge the echo chambers, by sharing content with diverse leaning, have to pay a “price of bipartisanship” in terms of their network centrality and content appreciation. In addition, we study the role of “gatekeepers,” users who consume content with diverse leaning but produce partisan content (with a single-sided leaning), in the formation of echo chambers. Finally, we apply these ndings to the task of predicting partisans and gatekeepers from social and content features. While partisan users turn out relatively easy to identify, gatekeepers prove to be more challenging. ACM Reference format: Kiran Garimella, Gianmarco De Francisci Morales, Aristides Gionis, and Michael Mathioudakis. 2018. Political Discourse on Social Media: Echo Chambers, Gatekeepers, and the Price of Bipartisanship. In Proceedings of WWW ’18, Lyon, France, April 23–27, 2018, 10 pages. DOI: 10.1145/nnnnnnn.nnnnnnn",
"title": ""
},
{
"docid": "e444a7a0570d96d589e4238dd4458d7a",
"text": "Flood disaster is considered a norm for Malaysians since Malaysia is located near the Equator. Flood disaster usually happens due to improper irrigation method in a housing area or the sudden increase of water volume in a river. Flood disaster often causes lost of property, damages and life. Since this disaster is considered dangerous to human life, an efficient countermeasure or alert system must be implemented in order to notify people in the early stage so that safety precautions can be taken to avoid any mishaps. This paper presents a remote water level alarm system developed by applying liquid sensors and GSM technology. System focuses on monitoring water level remotely and utilizes Global System of Mobile Connections (GSM) and Short Message Service (SMS) to convey data from sensors to the respective users through their mobile phone. The hardware of the system includes Micro Controller Unit (MCU) PIC18F452, three (3) liquid sensors, Inverter and Easygate GSM Module. Software used for the system is C compiler thru (ATtention) AT commands. It is hoped that this project would be beneficial to the community and would act as a precautionary measure in case of flood disaster at any flood prone area. By having early detection, users could take swift action such as evacuation so that cases of loss of lives could be minimized.",
"title": ""
},
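A minimal sketch of how such an SMS alert could be pushed over GSM using the AT commands the abstract mentions; it assumes a serial-attached GSM modem and the pyserial package, and the port name and phone number are placeholders rather than details from the paper:

```python
import time
import serial  # pyserial

def send_flood_alert(port="/dev/ttyUSB0", number="+60123456789", level="HIGH"):
    """Send an SMS warning through a GSM modem using standard AT commands."""
    gsm = serial.Serial(port, baudrate=9600, timeout=2)
    gsm.write(b"AT+CMGF=1\r")                              # put the modem in SMS text mode
    time.sleep(0.5)
    gsm.write(b'AT+CMGS="' + number.encode() + b'"\r')     # start a message to the recipient
    time.sleep(0.5)
    gsm.write(("Flood warning: water level %s" % level).encode() + b"\x1a")  # Ctrl+Z sends
    gsm.close()
```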
{
"docid": "ef8834599e66d3f40ea8309e1149bef5",
"text": "This paper introduces a Vivaldi-type ultra-wideband antenna array with low cross-polarization, termed the Sliced Notch Antenna (SNA) array. High cross-polarization when scanning in the non-principal planes has long been a problem in Vivaldi arrays without a universal solution. In this paper, we shed light on the root cause of this long-standing issue, clarifying that a high ratio of vertical-to-horizontal current potentials along the upper radiator fin is primarily responsible for elevated cross-polarization (and not strictly the element profile). This finding therein motivates the main technical innovation in the present work—a simple reconfiguration of the upper Vivaldi fin into a series of coupled conductor segments to effectively control the aforementioned current ratio for significant reductions in cross-polarization, without reducing element profile or considerably hindering the excellent match and radiation efficiency of the original Vivaldi. Theory and design methodology for the SNA array are formulated, and followed by a concise set of design guidelines that are applied to reconfigure a representative 10:1 Vivaldi element into an SNA for practical reference, exhibiting a 20 dB reduction in peak cross-polarization ratio. This paper considers the case of single-polarized arrays as a first introduction to the SNA, though the method has been verified to work equally well (if not better) in dual-polarized arrays.",
"title": ""
},
{
"docid": "e373e44d5d4445ca56a45b4800b93740",
"text": "In recent years a great deal of research efforts in ship hydromechanics have been devoted to practical navigation problems in moving larger ships safely into existing harbours and inland waterways and to ease congestion in existing shipping routes. The starting point of any navigational or design analysis lies in the accurate determination of the hydrodynamic forces generated on the ship hull moving in confined waters. The analysis of such ship motion should include the effects of shallow water. An area of particular interest is the determination of ship resistance in shallow or restricted waters at different speeds, forming the basis for the power calculation and design of the propulsion system. The present work describes the implementation of CFD techniques for determining the shallow water resistance of a river-sea ship at different speeds. The ship hull flow is analysed for different ship speeds in shallow water conditions. The results obtained from CFD analysis are compared with available standard results.",
"title": ""
},
{
"docid": "8db067caf4f120ae4b2d64f6798cdd88",
"text": "BACKGROUND\nHyperuricemia has been linked to cardiovascular and renal diseases, possibly through the generation of reactive oxygen species (ROS) and subsequent endothelial dysfunction. The enzymatic effect of xanthine oxidase is the production of ROS and uric acid. Studies have shown that inhibiting xanthine oxidase with allopurinol can reverse endothelial dysfunction. Furthermore, rat studies have shown that hyperuricemia-induced hypertension and vascular disease is at least partially reversed by the supplementation of the nitric oxide synthase (NOS) substrate, L-arginine. Therefore, we hypothesized that uric acid induces endothelial dysfunction by inhibiting nitric oxide production.\n\n\nMETHODS\nHyperuricemia was induced in male Sprague-Dawley rats with an uricase inhibitor, oxonic acid, by gavage; control rats received vehicle. Allopurinol was placed in drinking water to block hyperuricemia. Rats were randomly divided into four groups: (1) control, (2) allopurinol only, (3) oxonic acid only, and (4) oxonic acid + allopurinol. Rats were sacrificed at 1 and 7 days, and their serum analyzed for serum uric acid and nitrites/nitrates concentrations. The effect of uric acid on nitric oxide production was also determined in bovine aortic endothelial cells.\n\n\nRESULTS\nOxonic acid induced mild hyperuricemia at both 1 and 7 days (P < 0.05). Allopurinol reversed the hyperuricemia at 7 days (P < .001). Serum nitrites and nitrates (NO(X)) were reduced in hyperuricemic rats at both 1 and 7 days (P < .001). Allopurinol slightly reversed the decrease in NO(X) at 1 day and completely at 7 days (P < .001). There was a direct linear correlation between serum uric acid and NO(X) (R(2)= 0.56) and a trend toward higher systolic blood pressure in hyperuricemic rats (P= NS). Uric acid was also found to inhibit both basal and vascular endothelial growth factor (VEGF)-induced nitric oxide production in bovine aortic endothelial cells.\n\n\nCONCLUSION\nHyperuricemic rats have a decrease in serum nitric oxide which is reversed by lowering uric acid levels. Soluble uric acid also impairs nitric oxide generation in cultured endothelial cells. Thus, hyperuricemia induces endothelial dysfunction; this may provide insight into a pathogenic mechanism by which uric acid may induce hypertension and vascular disease.",
"title": ""
},
{
"docid": "7709df997c72026406d257c85dacb271",
"text": "This paper addresses the task of document retrieval based on the degree of document relatedness to the meanings of a query by presenting a semantic-enabled language model. Our model relies on the use of semantic linking systems for forming a graph representation of documents and queries, where nodes represent concepts extracted from documents and edges represent semantic relatedness between concepts. Based on this graph, our model adopts a probabilistic reasoning model for calculating the conditional probability of a query concept given values assigned to document concepts. We present an integration framework for interpolating other retrieval systems with the presented model in this paper. Our empirical experiments on a number of TREC collections show that the semantic retrieval has a synergetic impact on the results obtained through state of the art keyword-based approaches, and the consideration of semantic information obtained from entity linking on queries and documents can complement and enhance the performance of other retrieval models.",
"title": ""
},
{
"docid": "8f47d17dfcb2b05b31eb8a8beec7a160",
"text": "We consider the problem of a search engine trying to assign a sequence of search keywords to a set of competing bidders, each with a daily spending limit. The goal is to maximize the revenue generated by these keyword sales, bearing in mind that, as some bidders may eventually exceed their budget, not all keywords should be sold to the highest bidder. We assume that the sequence of keywords (or equivalently, of bids) is revealed on-line. Our concern will be the competitive ratio for this problem versus the off-line optimum.\n We extend the current literature on this problem by considering the setting where the keywords arrive in a random order. In this setting we are able to achieve a competitive ratio of 1-ε under some mild, but necessary, assumptions. In contrast, it is already known that when the keywords arrive in an adversarial order, the best competitive ratio is bounded away from 1. Our algorithm is motivated by PAC learning, and proceeds in two parts: a training phase, and an exploitation phase.",
"title": ""
},
{
"docid": "08c97484fe3784e2f1fd42606b915f83",
"text": "In the present study we manipulated the importance of performing two event-based prospective memory tasks. In Experiment 1, the event-based task was assumed to rely on relatively automatic processes, whereas in Experiment 2 the event-based task was assumed to rely on a more demanding monitoring process. In contrast to the first experiment, the second experiment showed that importance had a positive effect on prospective memory performance. In addition, the occurrence of an importance effect on prospective memory performance seemed to be mainly due to the features of the prospective memory task itself, and not to the characteristics of the ongoing tasks that only influenced the size of the importance effect. The results suggest that importance instructions may improve prospective memory if the prospective task requires the strategic allocation of attentional monitoring resources.",
"title": ""
},
{
"docid": "3476246809afe4e6b7cef9bbbed1926e",
"text": "The aim of this study was to investigate the efficacy of a proposed new implant mediated drug delivery system (IMDDS) in rabbits. The drug delivery system is applied through a modified titanium implant that is configured to be implanted into bone. The implant is hollow and has multiple microholes that can continuously deliver therapeutic agents into the systematic body. To examine the efficacy and feasibility of the IMDDS, we investigated the pharmacokinetic behavior of dexamethasone in plasma after a single dose was delivered via the modified implant placed in the rabbit tibia. After measuring the plasma concentration, the areas under the curve showed that the IMDDS provided a sustained release for a relatively long period. The result suggests that the IMDDS can deliver a sustained release of certain drug components with a high bioavailability. Accordingly, the IMDDS may provide the basis for a novel approach to treating patients with chronic diseases.",
"title": ""
},
{
"docid": "01d4f1311afdd38c1afae967542768e6",
"text": "Cortana, one of the new features introduced by Microsoft in Windows 10 desktop operating systems, is a voice activated personal digital assistant that can be used for searching stuff on device or web, setting up reminders, tracking users’ upcoming flights, getting news tailored to users’ interests, sending text and emails, and more. Being the platform relatively new, the forensic examination of Cortana has been largely unexplored in literature. This paper seeks to determine the data remnants of Cortana usage in a Windows 10 personal computer (PC). The research contributes in-depth understanding of the location of evidentiary artifacts on hard disk and the type of information recorded in these artifacts as a result of user activities on Cortana. For decoding and exporting data from one of the databases created by Cortana application, four custom python scripts have been developed. Additionally, as a part of this paper, a GUI tool called CortanaDigger is developed for extracting and listing web search strings, as well as timestamp of search made by a user on Cortana box. Several experiments are conducted to track reminders (based on time, place, and person) and detect anti-forensic attempts like evidence modification and evidence destruction carried out on Cortana artifacts. Finally, forensic usefulness of Cortana artifacts is demonstrated in terms of a Cortana web search timeline constructed over a period of time.",
"title": ""
},
{
"docid": "ee70ea6753a05f941a8d0123c26f075c",
"text": "On-line analytical processing (OLAP) systems considerably improve data analysis and are finding wide-spread use. OLAP systems typically employ multidimensional data models to structure their data. This paper identifies 11 modeling requirements for multidimensional data models. These requirements are derived from an assessment of complex data found in real-world applications. A survey of 14 multidimensional data models reveals shortcomings in meeting some of the requirements. Existing models do not support many-to-many relationships between facts and dimensions, lack built-in mechanisms for handling change and time, lack support for imprecision, and are generally unable to insert data with varying granularities. This paper defines an extended multidimensional data model and algebraic query language that address all 11 requirements. The model reuses the common multidimensional concepts of dimension hierarchies and granularities to capture imprecise data. For queries that cannot be answered precisely due to the imprecise data, techniques are proposed that take into account the imprecision in the grouping of the data, in the subsequent aggregate computation, and in the presentation of the imprecise result to the user. In addition, alternative queries unaffected by imprecision are offered. The data model and query evaluation techniques discussed in this paper can be implemented using relational database technology. The approach is also capable of exploiting multidimensional query processing techniques like pre-aggregation. This yields a practical solution with low computational overhead. r 2001 Elsevier Science Ltd. All rights reserved.",
"title": ""
},
{
"docid": "93f89a636828df50dfe48ffa3e868ea6",
"text": "The reparameterization trick enables the optimization of large scale stochastic computation graphs via gradient descent. The essence of the trick is to refactor each stochastic node into a differentiable function of its parameters and a random variable with fixed distribution. After refactoring, the gradients of the loss propagated by the chain rule through the graph are low variance unbiased estimators of the gradients of the expected loss. While many continuous random variables have such reparameterizations, discrete random variables lack continuous reparameterizations due to the discontinuous nature of discrete states. In this work we introduce concrete random variables – continuous relaxations of discrete random variables. The concrete distribution is a new family of distributions with closed form densities and a simple reparameterization. Whenever a discrete stochastic node of a computation graph can be refactored into a one-hot bit representation that is treated continuously, concrete stochastic nodes can be used with automatic differentiation to produce low-variance biased gradients of objectives (including objectives that depend on the log-likelihood of latent stochastic nodes) on the corresponding discrete graph. We demonstrate their effectiveness on density estimation and structured prediction tasks using neural networks.",
"title": ""
},
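A minimal numerical sketch of the relaxation this abstract describes: drawing one sample from a Concrete (Gumbel-Softmax) distribution over class logits. The temperature value and variable names are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def sample_concrete(logits, temperature=0.5, rng=np.random):
    """Draw one relaxed one-hot sample from a Concrete (Gumbel-Softmax) distribution."""
    u = rng.uniform(size=np.shape(logits))
    gumbel = -np.log(-np.log(u + 1e-20) + 1e-20)       # Gumbel(0, 1) noise
    y = (np.asarray(logits) + gumbel) / temperature    # reparameterized, differentiable in the logits
    y = y - y.max()                                    # subtract max for numerical stability
    expy = np.exp(y)
    return expy / expy.sum()                           # point on the probability simplex
```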
{
"docid": "ad65c71e5a158ec768f4fceeed1b68fa",
"text": "We provide an analysis of current evaluation methodologies applied to summarization metrics and identify the following areas of concern: (1) movement away from evaluation by correlation with human assessment; (2) omission of important components of human assessment from evaluations, in addition to large numbers of metric variants; (3) absence of methods of significance testing improvements over a baseline. We outline an evaluation methodology that overcomes all such challenges, providing the first method of significance testing suitable for evaluation of summarization metrics. Our evaluation reveals for the first time which metric variants significantly outperform others, optimal metric variants distinct from current recommended best variants, as well as machine translation metric BLEU to have performance on-par with ROUGE for the purpose of evaluation of summarization systems. We subsequently replicate a recent large-scale evaluation that relied on, what we now know to be, suboptimal ROUGE variants revealing distinct conclusions about the relative performance of state-of-the-art summarization systems.",
"title": ""
},
{
"docid": "e08a2c8c18d01d27608226da66d6e8ab",
"text": "We describe a new pseudorandom generator for AC0. Our generator ε-fools circuits of depth d and size M and uses a seed of length Ŏ(log<sup>d+4</sup> M/ε). The previous best construction for $d \\geq 3$ was due to Nisan, and had seed length Ŏ(log<sup>2d+6</sup> M/ε). A seed length of O(log<sup>2d+Ω(1)</sup> M) is best possible given Nisan-type generators and the current state of circuit lower bounds. Seed length Ω(log<sup>d</sup> M/ε) is a barrier for any pseudorandom generator construction given the current state of circuit lower bounds. For d=2, a pseudorandom generator of seed length Ŏ(log<sup>2</sup> M/ε) was known. Our generator is based on a \"pseudorandom restriction'' generator which outputs restrictions that satisfy the conclusions of the Hastad Switching Lemma and that uses a seed of polylogarithmic length.",
"title": ""
}
] |
scidocsrr
|
787da3f7146426cf32845800fc92d6b7
|
Training on multiple sub-flows to optimise the use of Machine Learning classifiers in real-world IP networks
|
[
{
"docid": "1f0c842e4e2158daa586d9ee46a0d52a",
"text": "The ability to accurately identify the network traffic associated with different P2P applications is important to a broad range of network operations including application-specific traffic engineering, capacity planning, provisioning, service differentiation,etc. However, traditional traffic to higher-level application mapping techniques such as default server TCP or UDP network-port baseddisambiguation is highly inaccurate for some P2P applications.In this paper, we provide an efficient approach for identifying the P2P application traffic through application level signatures. We firstidentify the application level signatures by examining some available documentations, and packet-level traces. We then utilize the identified signatures to develop online filters that can efficiently and accurately track the P2P traffic even on high-speed network links.We examine the performance of our application-level identification approach using five popular P2P protocols. Our measurements show thatour technique achieves less than 5% false positive and false negative ratios in most cases. We also show that our approach only requires the examination of the very first few packets (less than 10packets) to identify a P2P connection, which makes our approach highly scalable. Our technique can significantly improve the P2P traffic volume estimates over what pure network port based approaches provide. For instance, we were able to identify 3 times as much traffic for the popular Kazaa P2P protocol, compared to the traditional port-based approach.",
"title": ""
},
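A minimal sketch of what an application-level signature filter of this kind might look like; the two byte patterns shown are well-known handshake prefixes used purely for illustration and are not the signature set from the paper:

```python
import re

# Illustrative payload signatures (not the paper's signature set)
SIGNATURES = {
    "BitTorrent": re.compile(rb"\x13BitTorrent protocol"),
    "Gnutella":   re.compile(rb"GNUTELLA CONNECT"),
}

def classify_flow(first_packets):
    """Match the payload of the first few packets of a flow against known signatures."""
    payload = b"".join(first_packets)
    for app, signature in SIGNATURES.items():
        if signature.search(payload):
            return app
    return "unknown"

# Example: the BitTorrent handshake starts with a 19-byte protocol string
print(classify_flow([b"\x13BitTorrent protocol" + b"\x00" * 8]))  # -> "BitTorrent"
```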
{
"docid": "95310634132ddca70bc1683931a71e42",
"text": "The early detection of applications associated with TCP flows is an essential step for network security and traffic engineering. The classic way to identify flows, i.e. looking at port numbers, is not effective anymore. On the other hand, state-of-the-art techniques cannot determine the application before the end of the TCP flow. In this editorial, we propose a technique that relies on the observation of the first five packets of a TCP connection to identify the application. This result opens a range of new possibilities for online traffic classification.",
"title": ""
}
] |
[
{
"docid": "b68da205eb9bf4a6367250c6f04d2ad4",
"text": "Trends change rapidly in today’s world, prompting this key question: What is the mechanism behind the emergence of new trends? By representing real-world dynamic systems as complex networks, the emergence of new trends can be symbolized by vertices that “shine.” That is, at a specific time interval in a network’s life, certain vertices become increasingly connected to other vertices. This process creates new high-degree vertices, i.e., network stars. Thus, to study trends, we must look at how networks evolve over time and determine how the stars behave. In our research, we constructed the largest publicly available network evolution dataset to date, which contains 38,000 real-world networks and 2.5 million graphs. Then, we performed the first precise wide-scale analysis of the evolution of networks with various scales. Three primary observations resulted: (a) links are most prevalent among vertices that join a network at a similar time; (b) the rate that new vertices join a network is a central factor in molding a network’s topology; and (c) the emergence of network stars (high-degree vertices) is correlated with fast-growing networks. We applied our learnings to develop a flexible network-generation model based on large-scale, real-world data. This model gives a better understanding of how stars rise and fall within networks, and is applicable to dynamic systems both in nature and society. Multimedia Links I Video I Interactive Data Visualization I Data I Code Tutorials",
"title": ""
},
{
"docid": "2a12af091b7c9e0cc4c63d655d03666e",
"text": "A ll around the world in matters of governance, decentralization is the rage. Even apart from the widely debated issues of subsidiarity and devolution in the European Union and states’ rights in the United States, decentralization has been at the center stage of policy experiments in the last two decades in a large number of developing and transition economies in Latin America, Africa and Asia. The World Bank, for example, has embraced it as one of the major governance reforms on its agenda (for example, World Bank, 2000; Burki, Perry and Dillinger, 1999). Take also the examples of the two largest countries of the world, China and India. Decentralization has been regarded as the major institutional framework for the phenomenal industrial growth in the last two decades in China, taking place largely in the nonstate nonprivate sector. India ushered in a landmark constitutional reform in favor of decentralization around the same time it launched a major program of economic reform in the early 1990s. On account of its many failures, the centralized state everywhere has lost a great deal of legitimacy, and decentralization is widely believed to promise a range of bene ts. It is often suggested as a way of reducing the role of the state in general, by fragmenting central authority and introducing more intergovernmental competition and checks and balances. It is viewed as a way to make government more responsive and efcient. Technological changes have also made it somewhat easier than before to provide public services (like electricity and water supply) relatively ef ciently in smaller market areas, and the lower levels of government have now a greater ability to handle certain tasks. In a world of rampant ethnic con icts and separatist movements, decentralization is also regarded as a way of diffusing social and political tensions and ensuring local cultural and political autonomy. These potential bene ts of decentralization have attracted a very diverse range",
"title": ""
},
{
"docid": "89bc9f4c3f61c83348c02f9905923e1d",
"text": "This paper presents the control strategy and power management for an integrated three-port converter, which interfaces one solar input port, one bidirectional battery port, and an isolated output port. Multimode operations and multiloop designs are vital for such multiport converters. However, control design is difficult for a multiport converter to achieve multifunctional power management because of various cross-coupled control loops. Since there are various modes of operation, it is challenging to define different modes and to further implement autonomous mode transition based on the energy state of the three power ports. A competitive method is used to realize smooth and seamless mode transition. Multiport converter has plenty of interacting control loops due to integrated power trains. It is difficult to design close-loop controls without proper decoupling method. A detailed approach is provided utilizing state-space averaging method to obtain the converter model under different modes of operation, and then a decoupling network is introduced to allow separate controller designs. Simulation and experimental results verify the converter control design and power management during various operational modes.",
"title": ""
},
{
"docid": "f828ffe5d66a98ae75c48971ba9e66b6",
"text": "BACKGROUND\nThe purpose of this study is to review our experience with the use of the facial artery musculo-mucosal (FAMM) flap for floor of mouth (FOM) reconstruction following cancer ablation to assess its reliability, associated complications, and functional results.\n\n\nMETHODS\nThis was a retrospective analysis of 61 FAMM flaps performed for FOM reconstruction from 1997 to 2006.\n\n\nRESULTS\nNo total flap loss was observed. Fifteen cases of partial flap necrosis occurred, with 2 of them requiring revision surgery. We encountered 8 other complications, with 4 of them requiring revision surgery for an overall rate of revision surgery of 10% (6/61). The majority of patients resumed to a regular diet (85%), and speech was considered as functional and/or understandable by the surgeon in 93% of the patients. Dental restoration was successful for 83% (24/29) of the patients.\n\n\nCONCLUSION\nThe FAMM flap is well suited for FOM reconstruction because it is reliable, has few significant complications, and allows preservation of oral function.",
"title": ""
},
{
"docid": "b6d71f472848de18eadff0944eab6191",
"text": "Traditional approaches for object discovery assume that there are common characteristics among objects, and then attempt to extract features specific to objects in order to discriminate objects from background. However, the assumption “common features” may not hold, considering different variations between and within objects. Instead, we look at this problem from a different angle: if we can identify background regions, then the rest should belong to foreground. In this paper, we propose to model background to localize possible object regions. Our method is based on the observations: (1) background has limited categories, such as sky, tree, water, ground, etc., and can be easier to recognize, while there are millions of objects in our world with different shapes, colors and textures; (2) background is occluded because of foreground objects. Thus, we can localize objects based on voting from fore/background occlusion boundary. Our contribution lies: (1) we use graph-based image segmentation to yield high quality segments, which effectively leverages both flat segmentation and hierarchical segmentation approaches; (2) we model background to infer and rank object hypotheses. More specifically, we use background appearance and discriminative patches around fore/background boundary to build the background model. The experimental results show that our method can generate good quality object proposals and rank them where objects are covered highly within a small pool of proposed regions. © 2016 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "d437d71047b70736f5a6cbf3724d62a9",
"text": "We propose syntactically controlled paraphrase networks (SCPNs) and use them to generate adversarial examples. Given a sentence and a target syntactic form (e.g., a constituency parse), SCPNs are trained to produce a paraphrase of the sentence with the desired syntax. We show it is possible to create training data for this task by first doing backtranslation at a very large scale, and then using a parser to label the syntactic transformations that naturally occur during this process. Such data allows us to train a neural encoderdecoder model with extra inputs to specify the target syntax. A combination of automated and human evaluations show that SCPNs generate paraphrases that follow their target specifications without decreasing paraphrase quality when compared to baseline (uncontrolled) paraphrase systems. Furthermore, they are more capable of generating syntactically adversarial examples that both (1) “fool” pretrained models and (2) improve the robustness of these models to syntactic variation when used to augment their training data.",
"title": ""
},
{
"docid": "88398c81a8706b97f427c12d63ec62cc",
"text": "In processing human produced text using natural language processing (NLP) techniques, two fundamental subtasks that arise are (i) segmentation of the plain text into meaningful subunits (e.g., entities), and (ii) dependency parsing, to establish relations between subunits. Such structural interpretation of text provides essential building blocks for upstream expert system tasks: e.g., from interpreting textual real estate ads, one may want to provide an accurate price estimate and/or provide selection filters for end users looking for a particular property — which all could rely on knowing the types and number of rooms, etc. In this paper we develop a relatively simple and effective neural joint model that performs both segmentation and dependency parsing together, instead of one after the other as in most state-of-the-art works. We will focus in particular on the real estate ad setting, aiming to convert an ad to a structured description, which we name property tree, comprising the tasks of (1) identifying important entities of a property (e.g., rooms) from classifieds and (2) structuring them into a tree format. In this work, we propose a new joint model that is able to tackle the two tasks simultaneously and construct the property tree by (i) avoiding the error propagation that would arise from the subtasks one after the other in a pipelined fashion, and (ii) exploiting the interactions between the subtasks. For this purpose, we perform an extensive comparative study of the pipeline methods and the new proposed ∗Corresponding author Email addresses: giannis.bekoulis@ugent.be (Giannis Bekoulis), johannes.deleu@ugent.be (Johannes Deleu), thomas.demeester@ugent.be (Thomas Demeester), chris.develder@ugent.be (Chris Develder) Preprint submitted to Expert Systems with Applications February 23, 2018 joint model, reporting an improvement of over three percentage points in the overall edge F1 score of the property tree. Also, we propose attention methods, to encourage our model to focus on salient tokens during the construction of the property tree. Thus we experimentally demonstrate the usefulness of attentive neural architectures for the proposed joint model, showcasing a further improvement of two percentage points in edge F1 score for our application. While the results demonstrated are for the particular real estate setting, the model is generic in nature, and thus could be equally applied to other expert system scenarios requiring the general tasks of both (i) detecting entities (segmentation) and (ii) establishing relations among them (dependency parsing).",
"title": ""
},
{
"docid": "c82f4117c7c96d0650eff810f539c424",
"text": "The Stock Market is known for its volatile and unstable nature. A particular stock could be thriving in one period and declining in the next. Stock traders make money from buying equity when they are at their lowest and selling when they are at their highest. The logical question would be: \"What Causes Stock Prices To Change?\". At the most fundamental level, the answer to this would be the demand and supply. In reality, there are many theories as to why stock prices fluctuate, but there is no generic theory that explains all, simply because not all stocks are identical, and one theory that may apply for today, may not necessarily apply for tomorrow. This paper covers various approaches taken to attempt to predict the stock market without extensive prior knowledge or experience in the subject area, highlighting the advantages and limitations of the different techniques such as regression and classification. We formulate both short term and long term predictions. Through experimentation we achieve 81% accuracy for future trend direction using classification, 0.0117 RMSE for next day price and 0.0613 RMSE for next day change in price using regression techniques. The results obtained in this paper are achieved using only historic prices and technical indicators. Various methods, tools and evaluation techniques will be assessed throughout the course of this paper, the result of this contributes as to which techniques will be selected and enhanced in the final artefact of a stock prediction model. Further work will be conducted utilising deep learning techniques to approach the problem. This paper will serve as a preliminary guide to researchers wishing to expose themselves to this area.",
"title": ""
},
{
"docid": "e9b942c71646f2907de65c2641329a66",
"text": "In many vision based application identifying moving objects is important and critical task. For different computer vision application Background subtraction is fast way to detect moving object. Background subtraction separates the foreground from background. However, background subtraction is unable to remove shadow from foreground. Moving cast shadow associated with moving object also gets detected making it challenge for video surveillance. The shadow makes it difficult to detect the exact shape of object and to recognize the object.",
"title": ""
},
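A minimal sketch of basic background subtraction and of one common way to flag (and then discard) the cast-shadow pixels the abstract is concerned with; it assumes OpenCV's MOG2 subtractor, and the video file name is a placeholder:

```python
import cv2

# MOG2 labels foreground pixels 255 and, when detectShadows=True, cast-shadow pixels 127
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=True)

cap = cv2.VideoCapture("surveillance.avi")   # placeholder input video
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    moving_objects = (mask == 255)           # keep true foreground, drop shadow pixels (value 127)
cap.release()
```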
{
"docid": "4320278dcbf0446daf3d919c21606208",
"text": "The operation of different brain systems involved in different types of memory is described. One is a system in the primate orbitofrontal cortex and amygdala involved in representing rewards and punishers, and in learning stimulus-reinforcer associations. This system is involved in emotion and motivation. A second system in the temporal cortical visual areas is involved in learning invariant representations of objects. A third system in the hippocampus is implicated in episodic memory and in spatial function. Fourth, brain systems in the frontal and temporal cortices involved in short term memory are described. The approach taken provides insight into the neuronal operations that take place in each of these brain systems, and has the aim of leading to quantitative biologically plausible neuronal network models of how each of these memory systems actually operates.",
"title": ""
},
{
"docid": "b2a43491283732082c65f88c9b03016f",
"text": "BACKGROUND\nExpressing breast milk has become increasingly prevalent, particularly in some developed countries. Concurrently, breast pumps have evolved to be more sophisticated and aesthetically appealing, adapted for domestic use, and have become more readily available. In the past, expressed breast milk feeding was predominantly for those infants who were premature, small or unwell; however it has become increasingly common for healthy term infants. The aim of this paper is to systematically explore the literature related to breast milk expressing by women who have healthy term infants, including the prevalence of breast milk expressing, reported reasons for, methods of, and outcomes related to, expressing.\n\n\nMETHODS\nDatabases (Medline, CINAHL, JSTOR, ProQuest Central, PsycINFO, PubMed and the Cochrane library) were searched using the keywords milk expression, breast milk expression, breast milk pumping, prevalence, outcomes, statistics and data, with no limit on year of publication. Reference lists of identified papers were also examined. A hand-search was conducted at the Australian Breastfeeding Association Lactation Resource Centre. Only English language papers were included. All papers about expressing breast milk for healthy term infants were considered for inclusion, with a focus on the prevalence, methods, reasons for and outcomes of breast milk expression.\n\n\nRESULTS\nA total of twenty two papers were relevant to breast milk expression, but only seven papers reported the prevalence and/or outcomes of expressing amongst mothers of well term infants; all of the identified papers were published between 1999 and 2012. Many were descriptive rather than analytical and some were commentaries which included calls for more research, more dialogue and clearer definitions of breastfeeding. While some studies found an association between expressing and the success and duration of breastfeeding, others found the opposite. In some cases these inconsistencies were compounded by imprecise definitions of breastfeeding and breast milk feeding.\n\n\nCONCLUSIONS\nThere is limited evidence about the prevalence and outcomes of expressing breast milk amongst mothers of healthy term infants. The practice of expressing breast milk has increased along with the commercial availability of a range of infant feeding equipment. The reasons for expressing have become more complex while the outcomes, when they have been examined, are contradictory.",
"title": ""
},
{
"docid": "b0ce4a13ea4a2401de4978b6859c5ef2",
"text": "We propose to unify a variety of existing semantic classification tasks, such as semantic role labeling, anaphora resolution, and paraphrase detection, under the heading of Recognizing Textual Entailment (RTE). We present a general strategy to automatically generate one or more sentential hypotheses based on an input sentence and pre-existing manual semantic annotations. The resulting suite of datasets enables us to probe a statistical RTE model’s performance on different aspects of semantics. We demonstrate the value of this approach by investigating the behavior of a popular neural network RTE model.",
"title": ""
},
{
"docid": "c84ef3f7dfa5e3219a6c1c2f98109651",
"text": "We present JetStream, a system that allows real-time analysis of large, widely-distributed changing data sets. Traditional approaches to distributed analytics require users to specify in advance which data is to be backhauled to a central location for analysis. This is a poor match for domains where available bandwidth is scarce and it is infeasible to collect all potentially useful data. JetStream addresses bandwidth limits in two ways, both of which are explicit in the programming model. The system incorporates structured storage in the form of OLAP data cubes, so data can be stored for analysis near where it is generated. Using cubes, queries can aggregate data in ways and locations of their choosing. The system also includes adaptive filtering and other transformations that adjusts data quality to match available bandwidth. Many bandwidth-saving transformations are possible; we discuss which are appropriate for which data and how they can best be combined. We implemented a range of analytic queries on web request logs and image data. Queries could be expressed in a few lines of code. Using structured storage on source nodes conserved network bandwidth by allowing data to be collected only when needed to fulfill queries. Our adaptive control mechanisms are responsive enough to keep end-to-end latency within a few seconds, even when available bandwidth drops by a factor of two, and are flexible enough to express practical policies.",
"title": ""
},
{
"docid": "f9a8b4d32d23c7779ee3ea00e4d64980",
"text": "BACKGROUND\nLabor is one of the most painful events in a women's life. Frequent change in positions and back massage may be effective in reducing pain during the first stage of labor.\n\n\nAIM\nThe focus of this study was to identify the impact of either change in position or back massage on pain perception during first stage of labor.\n\n\nDESIGN\nA quasi-experimental study.\n\n\nSETTING\nTeaching hospital, Kurdistan Region, Iraq, November 2014 to October 2015.\n\n\nSUBJECTS\nEighty women were interviewed as a study sample when admitted to the labor and delivery area and divided into three groups: 20 women received frequent changes in position (group A), 20 women received back massage (Group B), and 40 women constituted the control group (group C).\n\n\nMETHODS\nA structured interview questionnaire to collect background data was completed by the researcher in personal interviews with the mothers. The intervention was performed at three points in each group, and pain perception was measured after each intervention using the Face Pain Scale.\n\n\nRESULTS\nThe mean rank of the difference in pain scores among the study groups was as follows after the first, second, and third interventions, respectively: group A-52.33, 47.00, 49.2; group B-32.8, 30.28, 30.38; group C-38.44, 42.36, 41.21. There were significant differences between groups A, B, and C after the first, second, and third interventions (p1 = .011, p2 = .042, p3 = .024).\n\n\nCONCLUSIONS\nBack massage may be a more effective pain management approach than change in position during the first stage of labor.",
"title": ""
},
{
"docid": "843e7bfe22d8b93852374dde8715ca42",
"text": "In this paper, we formalize the idea behind capsule nets of using a capsule vector rather than a neuron activation to predict the label of samples. To this end, we propose to learn a group of capsule subspaces onto which an input feature vector is projected. Then the lengths of resultant capsules are used to score the probability of belonging to different classes. We train such a Capsule Projection Network (CapProNet) by learning an orthogonal projection matrix for each capsule subspace, and show that each capsule subspace is updated until it contains input feature vectors corresponding to the associated class. We will also show that the capsule projection can be viewed as normalizing the multiple columns of the weight matrix simultaneously to form an orthogonal basis, which makes it more effective in incorporating novel components of input features to update capsule representations. In other words, the capsule projection can be viewed as a multi-dimensional weight normalization in capsule subspaces, where the conventional weight normalization is simply a special case of the capsule projection onto 1D lines. Only a small negligible computing overhead is incurred to train the network in low-dimensional capsule subspaces or through an alternative hyper-power iteration to estimate the normalization matrix. Experiment results on image datasets show the presented model can greatly improve the performance of the state-of-the-art ResNet backbones by 10− 20% and that of the Densenet by 5− 7% respectively at the same level of computing and memory expenses. The CapProNet establishes the competitive state-of-the-art performance for the family of capsule nets by significantly reducing test errors on the benchmark datasets.",
"title": ""
},
{
"docid": "36c11c29f6605f7c234e68ecba2a717a",
"text": "BACKGROUND\nThe main purpose of this study was to identify factors that influence healthcare quality in the Iranian context.\n\n\nMETHODS\nExploratory in-depth individual and focus group interviews were conducted with 222 healthcare stakeholders including healthcare providers, managers, policy-makers, and payers to identify factors affecting the quality of healthcare services provided in Iranian healthcare organisations.\n\n\nRESULTS\nQuality in healthcare is a production of cooperation between the patient and the healthcare provider in a supportive environment. Personal factors of the provider and the patient, and factors pertaining to the healthcare organisation, healthcare system, and the broader environment affect healthcare service quality. Healthcare quality can be improved by supportive visionary leadership, proper planning, education and training, availability of resources, effective management of resources, employees and processes, and collaboration and cooperation among providers.\n\n\nCONCLUSION\nThis article contributes to healthcare theory and practice by developing a conceptual framework that provides policy-makers and managers a practical understanding of factors that affect healthcare service quality.",
"title": ""
},
{
"docid": "98cebe058fccdf7ec799dfc95afd2e78",
"text": "An intuitionistic fuzzy set, characterized by a membership function and a non-membership function, is a generalization of fuzzy set. In this paper, based on score function and accuracy function, we introduce a method for the comparison between two intuitionistic fuzzy values and then develop some aggregation operators, such as the intuitionistic fuzzy weighted averaging operator, intuitionistic fuzzy ordered weighted averaging operator, and intuitionistic fuzzy hybrid aggregation operator, for aggregating intuitionistic fuzzy values and establish various properties of these operators.",
"title": ""
},
{
"docid": "b0636710e1374bb098bf4f68c1c5740a",
"text": "Successful use of ICT requires domain knowledge and interaction knowledge. It shapes and is shaped by the use of ICT and is less common among older adults. This paper focus on the validation of the computer literacy scale (CLS) introduced by [14]. The CLS is an objective knowledge test of ICT-related symbols and terms commonly used in the graphical user interface of interactive computer technology. It has been designed specifically for older adults with little computer knowledge and is based on the idea that knowing common symbols and terms is as necessary for using computers, as it is for reading and writing letters and books. In this paper the Computer literacy scale is described and compared with related meas‐ ures for example computer expertise (CE), Computer Proficiency (CPQ) and computer anxiety (CATS). In addition criterion validity is described with predic‐ tions of successful ICT use exemplified with (1) the use of different data entry methods and (2) the use of different ticket vending machine (TVM) designs.",
"title": ""
},
{
"docid": "9900d928d601e62cf8480cb28d3574e9",
"text": "Cellular technology has dramatically changed our society and the way we communicate. First it impacted voice telephony, and then has been making inroads into data access, applications, and services. However, today potential capabilities of the Internet have not yet been fully exploited by cellular systems. With the advent of 5G we will have the opportunity to leapfrog beyond current Internet capabilities.",
"title": ""
},
{
"docid": "95196bd9be49b426217b7d81fc51a04b",
"text": "This paper builds on the idea that private sector logistics can and should be applied to improve the performance of disaster logistics but that before embarking on this the private sector needs to understand the core capabilities of humanitarian logistics. With this in mind, the paper walks us through the complexities of managing supply chains in humanitarian settings. It pinpoints the cross learning potential for both the humanitarian and private sectors in emergency relief operations as well as possibilities of getting involved through corporate social responsibility. It also outlines strategies for better preparedness and the need for supply chains to be agile, adaptable and aligned—a core competency of many humanitarian organizations involved in disaster relief and an area which the private sector could draw on to improve their own competitive edge. Finally, the article states the case for closer collaboration between humanitarians, businesses and academics to achieve better and more effective supply chains to respond to the complexities of today’s logistics be it the private sector or relieving the lives of those blighted by disaster. Journal of the Operational Research Society (2006) 57, 475–489. doi:10.1057/palgrave.jors.2602125 Published online 14 December 2005",
"title": ""
}
] |
scidocsrr
|
85a04042bf93360f558b46066d525295
|
A Low-Power Bidirectional Telemetry Device With a Near-Field Charging Feature for a Cardiac Microstimulator
|
[
{
"docid": "3c7154162996f3fecbedd2aa79555ca4",
"text": "This paper describes the design and implementation of fully integrated rectifiers in BiCMOS and standard CMOS technologies for rectifying an externally generated RF carrier signal in inductively powered wireless devices, such as biomedical implants, radio-frequency identification (RFID) tags, and smartcards to generate an on-chip dc supply. Various full-wave rectifier topologies and low-power circuit design techniques are employed to decrease substrate leakage current and parasitic components, reduce the possibility of latch-up, and improve power transmission efficiency and high-frequency performance of the rectifier block. These circuits are used in wireless neural stimulating microsystems, fabricated in two processes: the University of Michigan's 3-/spl mu/m 1M/2P N-epi BiCMOS, and the AMI 1.5-/spl mu/m 2M/2P N-well standard CMOS. The rectifier areas are 0.12-0.48 mm/sup 2/ in the above processes and they are capable of delivering >25mW from a receiver coil to the implant circuitry. The performance of these integrated rectifiers has been tested and compared, using carrier signals in 0.1-10-MHz range.",
"title": ""
}
] |
[
{
"docid": "1d26fc3a5f07e7ea678753e7171846c4",
"text": "Data uncertainty is an inherent property in various applications due to reasons such as outdated sources or imprecise measurement. When data mining techniques are applied to these data, their uncertainty has to be considered to obtain high quality results. We present UK-means clustering, an algorithm that enhances the K-means algorithm to handle data uncertainty. We apply UKmeans to the particular pattern of moving-object uncertainty. Experimental results show that by considering uncertainty, a clustering algorithm can produce more accurate results.",
"title": ""
},
{
"docid": "2a4e5635e2c15ce8ed84e6e296c4bbf4",
"text": "The games with a purpose paradigm proposed by Luis von Ahn [9] is a new approach for game design where useful but boring tasks, like labeling a random image found in the web, are packed within a game to make them entertaining. But there are not only large numbers of internet users that can be used as voluntary data producers but legions of mobile device owners, too. In this paper we describe the design of a location-based mobile game with a purpose: CityExplorer. The purpose of this game is to produce geospatial data that is useful for non-gaming applications like a location-based service. From the analysis of four use case studies of CityExplorer we report that such a purposeful game is entertaining and can produce rich geospatial data collections.",
"title": ""
},
{
"docid": "cbda3aafb8d8f76a8be24191e2fa7c54",
"text": "With the rapid development of robot and other intelligent and autonomous agents, how a human could be influenced by a robot’s expressed mood when making decisions becomes a crucial question in human-robot interaction. In this pilot study, we investigate (1) in what way a robot can express a certain mood to influence a human’s decision making behavioral model; (2) how and to what extent the human will be influenced in a game theoretic setting. More specifically, we create an NLP model to generate sentences that adhere to a specific affective expression profile. We use these sentences for a humanoid robot as it plays a Stackelberg security game against a human. We investigate the behavioral model of the human player.",
"title": ""
},
{
"docid": "9611686ff4eedf047460becec43ce59d",
"text": "We propose a novel location-based second-factor authentication solution for modern smartphones. We demonstrate our solution in the context of point of sale transactions and show how it can be effectively used for the detection of fraudulent transactions caused by card theft or counterfeiting. Our scheme makes use of Trusted Execution Environments (TEEs), such as ARM TrustZone, commonly available on modern smartphones, and resists strong attackers, even those capable of compromising the victim phone applications and OS. It does not require any changes in the user behavior at the point of sale or to the deployed terminals. In particular, we show that practical deployment of smartphone-based second-factor authentication requires a secure enrollment phase that binds the user to his smartphone TEE and allows convenient device migration. We then propose two novel enrollment schemes that resist targeted attacks and provide easy migration. We implement our solution within available platforms and show that it is indeed realizable, can be deployed with small software changes, and does not hinder user experience.",
"title": ""
},
{
"docid": "1c68d660e00040c73c043de47bf6d9e0",
"text": "In Germany 18 GW wind power will have been installed by the end of 2005. Until 2020, this figure reaches the 50 GW mark. Based on the results of recent studies and on the experience with existing wind projects modification of the existing grid code for connection and operation of wind farms in the high voltage grid is necessary. The paper discusses main issues of the suggested requirements by highlighting major changes and extensions. The topics considered are fault ride-through, grid voltage maintenance respective voltage control, system monitoring and protection as well as retrofitting of old units. The new requirements are defined taking into account some new developments in wind turbine technologies which should be utilized in the future to meet grid requirement. Monitoring and system protection is defined under the aspect of sustainability of the measures introduced",
"title": ""
},
{
"docid": "0872240a9df85e190bddc4d3f037381f",
"text": "This study presents a unique synthesized set of data for community college students entering the university with the intention of earning a degree in engineering. Several cohorts of longitudinal data were combined with transcript-level data from both the community college and the university to measure graduation rates in engineering. The emphasis of the study is to determine academic variables that had significant correlations with graduation in engineering, and levels of these academic variables. The article also examines the utility of data mining methods for understanding the academic variables related to achievement in science, technology, engineering, and mathematics. The practical purpose of each model is to develop a useful strategy for policy, based on success variables, that relates to the preparation and achievement of this important group of students as they move through the community college pathway.",
"title": ""
},
{
"docid": "a574355d46c6e26efe67aefe2869a0cb",
"text": "The continuously increasing cost of the US healthcare system has received significant attention. Central to the ideas aimed at curbing this trend is the use of technology in the form of the mandate to implement electronic health records (EHRs). EHRs consist of patient information such as demographics, medications, laboratory test results, diagnosis codes, and procedures. Mining EHRs could lead to improvement in patient health management as EHRs contain detailed information related to disease prognosis for large patient populations. In this article, we provide a structured and comprehensive overview of data mining techniques for modeling EHRs. We first provide a detailed understanding of the major application areas to which EHR mining has been applied and then discuss the nature of EHR data and its accompanying challenges. Next, we describe major approaches used for EHR mining, the metrics associated with EHRs, and the various study designs. With this foundation, we then provide a systematic and methodological organization of existing data mining techniques used to model EHRs and discuss ideas for future research.",
"title": ""
},
{
"docid": "bebd034597144d4656f6383d9bd22038",
"text": "The Turing test aimed to recognize the behavior of a human from that of a computer algorithm. Such challenge is more relevant than ever in today’s social media context, where limited attention and technology constrain the expressive power of humans, while incentives abound to develop software agents mimicking humans. These social bots interact, often unnoticed, with real people in social media ecosystems, but their abundance is uncertain. While many bots are benign, one can design harmful bots with the goals of persuading, smearing, or deceiving. Here we discuss the characteristics of modern, sophisticated social bots, and how their presence can endanger online ecosystems and our society. We then review current efforts to detect social bots on Twitter. Features related to content, network, sentiment, and temporal patterns of activity are imitated by bots but at the same time can help discriminate synthetic behaviors from human ones, yielding signatures of engineered social tampering.",
"title": ""
},
{
"docid": "935445679a3e94f96bcb05a947363995",
"text": "While theories abound concerning knowledge transfer in organisations, little empirical work has been undertaken to assess any possible relationship between repositories of knowledge and those responsible for the use of knowledge. This paper develops a knowledge transfer framework based on an empirical analysis of part of the UK operation of a Fortune 100 corporation, which extends existing knowledge transfer theory. The proposed framework integrates knowledge storage and knowledge administration within a model of effective knowledge transfer. This integrated framework encompasses five components: the actors engaged in the transfer of knowledge, the typology of organisational knowledge that is transferred between the actors, the mechanisms by which the knowledge transfer is carried out, the repositories where explicit knowledge is retained and the knowledge administrator equivalent whose function is to manage and maintain knowledge. The paper concludes that a ‘hybridisation’ of knowledge transfer approach, revealed by the framework, offers some promise in organisational applications.",
"title": ""
},
{
"docid": "13659d5f693129620132bf22e021ad70",
"text": "Individuals with high functioning autism (HFA) or Asperger Syndrome (AS) exhibit difficulties in the knowledge or correct performance of social skills. This subgroup's social difficulties appear to be associated with deficits in three social cognition processes: theory of mind, emotion recognition and executive functioning. The current study outlines the development and initial administration of the group-based Social Competence Intervention (SCI), which targeted these deficits using cognitive behavioral principles. Across 27 students age 11-14 with a HFA/AS diagnosis, results indicated significant improvement on parent reports of social skills and executive functioning. Participants evidenced significant growth on direct assessments measuring facial expression recognition, theory of mind and problem solving. SCI appears promising, however, larger samples and application in naturalistic settings are warranted.",
"title": ""
},
{
"docid": "6ccca10914c09715fae47a7b832bfd6a",
"text": "This paper provides an overview of the Internet of Things (IoT) with emphasis on enabling technologies, protocols, and application issues. The IoT is enabled by the latest developments in RFID, smart sensors, communication technologies, and Internet protocols. The basic premise is to have smart sensors collaborate directly without human involvement to deliver a new class of applications. The current revolution in Internet, mobile, and machine-to-machine (M2M) technologies can be seen as the first phase of the IoT. In the coming years, the IoT is expected to bridge diverse technologies to enable new applications by connecting physical objects together in support of intelligent decision making. This paper starts by providing a horizontal overview of the IoT. Then, we give an overview of some technical details that pertain to the IoT enabling technologies, protocols, and applications. Compared to other survey papers in the field, our objective is to provide a more thorough summary of the most relevant protocols and application issues to enable researchers and application developers to get up to speed quickly on how the different protocols fit together to deliver desired functionalities without having to go through RFCs and the standards specifications. We also provide an overview of some of the key IoT challenges presented in the recent literature and provide a summary of related research work. Moreover, we explore the relation between the IoT and other emerging technologies including big data analytics and cloud and fog computing. We also present the need for better horizontal integration among IoT services. Finally, we present detailed service use-cases to illustrate how the different protocols presented in the paper fit together to deliver desired IoT services.",
"title": ""
},
{
"docid": "526fd32e2486338a1db4228bdaa9aaaf",
"text": "Significant vulnerabilities have recently been identified in collaborative filtering recommender systems. Researchers have shown that attackers can manipulate a system's recommendations by injecting biased profiles into it. In this paper, we examine attacks that concentrate on a targeted set of users with similar tastes, biasing the system's responses to these users. We show that such attacks are both pragmatically reasonable and also highly effective against both user-based and item-based algorithms. As a result, an attacker can mount such a \"segmented\" attack with little knowledge of the specific system being targeted and with strong likelihood of success.",
"title": ""
},
{
"docid": "6931f8727f2c4e2aab19c94bcd783f59",
"text": "The steady-state and dynamic performance of a stator voltage-controlled current source inverter (CSI) induction motor drive are presented. Commutation effects are neglected and the analytical results are based on the fundamental component. A synchronously rotating reference frame linearized model in terms of a set of nondimensional parameters, based on the rotor transient time constant, is developed. It is shown that the control scheme is capable of stabilizing the drive over a region identical to the statically stable region of a conventional voltage-fed induction motor. A simple approximate expression for the drive dominant poles under no-load conditions and graphical representations of the drive dynamics under load conditions are presented. The effect of parameter variations on the drive dynamic response can be evaluated from these results. An analog simulation of the drive is developed, and the results confirm the small signal analysis of the drive system. In addition the steady-state results of the analog simulation are compared with experimental results, as well as with corresponding values obtained from a stator referred equivalent circuit. The comparison indicates good correspondence under load conditions and the limitation of applying the equivalent circuit for no-load conditions without proper recognition of the system losses.",
"title": ""
},
{
"docid": "fbb76049d6192e4571ede961f1e413a8",
"text": "We present ongoing work on a gold standard annotation of German terminology in an inhomogeneous domain. The text basis is thematically broad and contains various registers, from expert text to user-generated data taken from an online discussion forum. We identify issues related with these properties, and show our approach how to model the domain. Futhermore, we present our approach to handle multiword terms, including discontinuous ones. Finally, we evaluate the annotation quality.",
"title": ""
},
{
"docid": "06856cf61207a99146782e9e6e0911ef",
"text": "Customer ratings are valuable sources to understand their satisfaction and are critical for designing better customer experiences and recommendations. The majority of customers, however, do not respond to rating surveys, which makes the result less representative. To understand overall satisfaction, this paper aims to investigate how likely customers without responses had satisfactory experiences compared to those respondents. To infer customer satisfaction of such unlabeled sessions, we propose models using recurrent neural networks (RNNs) that learn continuous representations of unstructured text conversation. By analyzing online chat logs of over 170,000 sessions from Samsung’s customer service department, we make a novel finding that while labeled sessions contributed by a small fraction of customers received overwhelmingly positive reviews, the majority of unlabeled sessions would have received lower ratings by customers. The data analytics presented in this paper not only have practical implications for helping detect dissatisfied customers on live chat services but also make theoretical contributions on discovering the level of biases in online rating platforms. ACM Reference Format: Kunwoo Park, Meeyoung Cha, and Eunhee Rhim. 2018. Positivity Bias in Customer Satisfaction Ratings. InWWW ’18 Companion: The 2018 Web Conference Companion, April 23–27, 2018, Lyon, France. ACM, New York, NY, USA, 8 pages. https://doi.org/10.1145/3184558.3186579",
"title": ""
},
{
"docid": "c4bc226e59648be0191b95b91b3b9f33",
"text": "In this paper we present a new class of side-channel attacks on computer hard drives. Hard drives contain one or more spinning disks made of a magnetic material. In addition, they contain different magnets which rapidly move the head to a target position on the disk to perform a write or a read. The magnetic fields from the disk’s material and head are weak and well shielded. However, we show that the magnetic field due to the moving head can be picked up by sensors outside of the hard drive. With these measurements, we are able to deduce patterns about ongoing operations. For example, we can detect what type of the operating system is booting up or what application is being started. Most importantly, no special equipment is necessary. All attacks can be performed by using an unmodified smartphone placed in proximity of a hard drive.",
"title": ""
},
{
"docid": "07eb3f5527e985c33ff7132381ee266d",
"text": "Since the first application of indirect composite resins, numerous advances in adhesive dentistry have been made. Furthermore, improvements in structure, composition and polymerization techniques led to the development of a second-generation of indirect resin composites (IRCs). IRCs have optimal esthetic performance, enhanced mechanical properties and reparability. Due to these characteristics they can be used for a wide range of clinical applications. IRCs can be used for inlays, onlays, crowns’ veneering material, fixed dentures prostheses and removable prostheses (teeth and soft tissue substitution), both on teeth and implants. The purpose of this article is to review the properties of these materials and describe a case series of patients treated with different type of restorations in various indications. *Corresponding author: Aikaterini Petropoulou, Clinical Instructor, Department of Prosthodontics, School of Dentistry, National and Kapodistrian University of Athens, Greece, Tel: +306932989104; E-mail: aikatpetropoulou@gmail.com Received November 10, 2013; Accepted November 28, 2013; Published November 30, 2013 Citation: Petropoulou A, Pantzari F, Nomikos N, Chronopoulos V, Kourtis S (2013) The Use of Indirect Resin Composites in Clinical Practice: A Case Series. Dentistry 3: 173. doi:10.4172/2161-1122.1000173 Copyright: © 2013 Petropoulou A, et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.",
"title": ""
},
{
"docid": "25098f36d4a782911f523ce1ae20cf31",
"text": "The problem of stress-management has been receiving an increasing attention in related research communities due to a wider recognition of potential problems caused by chronic stress and due to the recent developments of technologies providing non-intrusive ways of collecting continuously objective measurements to monitor person's stress level. Experimental studies have shown already that stress level can be judged based on the analysis of Galvanic Skin Response (GSR) and speech signals. In this paper we investigate how classification techniques can be used to automatically determine periods of acute stress relying on information contained in GSR and/or speech of a person.",
"title": ""
},
{
"docid": "1949871b7c32416061043b46f7ed581c",
"text": "Privacy is an important issue in data publishing. Many organizations distribute non-aggregate personal data for research, and they must take steps to ensure that an adversary cannot predict sensitive information pertaining to individuals with high confidence. This problem is further complicated by the fact that, in addition to the published data, the adversary may also have access to other resources (e.g., public records and social networks relating individuals), which we call external knowledge. A robust privacy criterion should take this external knowledge into consideration. In this paper, we first describe a general framework for reasoning about privacy in the presence of external knowledge. Within this framework, we propose a novel multidimensional approach to quantifying an adversary’s external knowledge. This approach allows the publishing organization to investigate privacy threats and enforce privacy requirements in the presence of various types and amounts of external knowledge. Our main technical contributions include a multidimensional privacy criterion that is more intuitive and flexible than previous approaches to modeling background knowledge. In addition, we provide algorithms for measuring disclosure and sanitizing data that improve computational efficiency several orders of magnitude over the best known techniques.",
"title": ""
},
{
"docid": "c66e38f3be7760c8ca0b6ef2dfc5bec2",
"text": "Gesture recognition remains a very challenging task in the field of computer vision and human computer interaction (HCI). A decade ago the task seemed to be almost unsolvable with the data provided by a single RGB camera. Due to recent advances in sensing technologies, such as time-of-flight and structured light cameras, there are new data sources available, which make hand gesture recognition more feasible. In this work, we propose a highly precise method to recognize static gestures from a depth data, provided from one of the above mentioned devices. The depth images are used to derive rotation-, translation- and scale-invariant features. A multi-layered random forest (MLRF) is then trained to classify the feature vectors, which yields to the recognition of the hand signs. The training time and memory required by MLRF are much smaller, compared to a simple random forest with equivalent precision. This allows to repeat the training procedure of MLRF without significant effort. To show the advantages of our technique, we evaluate our algorithm on synthetic data, on publicly available dataset, containing 24 signs from American Sign Language(ASL) and on a new dataset, collected using recently appeared Intel Creative Gesture Camera.",
"title": ""
}
] |
scidocsrr
|
42186b00d162e07d15164ac508e4a539
|
Motivation in Software Engineering: A systematic literature review
|
[
{
"docid": "46ad437443c58d90d4956d4e8ba99888",
"text": "The attributes of individual software engineers are perhaps the most important factors in determining the success of software development. Our goal is to identify the professional competencies that are most essential. In particular, we seek to identify the attributes that di erentiate between exceptional and non-exceptional software engineers. Phase 1 of our research is a qualitative study designed to identify competencies to be used in the quantitative analysis performed in Phase 2. In Phase 1, we conduct an in-depth review of ten exceptional and ten non-exceptional software engineers working for a major computing rm. We use biographical data and Myers-Briggs Type Indicator test results to characterize our sample. We conduct Critical Incident Interviews focusing on the subjects experience in software and identify 38 essential competencies of software engineers. Phase 2 of this study surveys 129 software engineers to determine the competencies that are di erential between exceptional and non-exceptional engineers. Years of experience in software is the only biographical predictor of performance. Analysis of the participants Q-Sort of the 38 competencies identi ed in Phase 1 reveals that nine of these competencies are di erentially related to engineer performance using a t-test. A ten variable Canonical Discrimination Function consisting of three biographical variables and seven competencies is capable of correctly classifying 81% of the cases. The statistical analyses indicate that exceptional engineers (at the company studied) can be distinguished by behaviors associated with an external focus | behaviors directed at people or objects outside the individual. Exceptional engineers are more likely than non-exceptional engineers to maintain a \\big picture\", have a bias for action, be driven by a sense of mission, exhibit and articulate strong convictions, play a pro-active role with management, and help other engineers. Authors addresses: R. Turley, Colorado Memory Systems, Inc., 800 S. Taft Ave., Loveland, CO 80537. Email: RICKTURL.COMEMSYS@CMS SMTP.gr.hp.com, (303) 635-6490, Fax: (303) 635-6613; J. Bieman, Department of Computer Science, Colorado State University, Fort Collins, CO 80523. Email: bieman@cs.colostate.edu, (303)4917096, Fax: (303) 491-6639. Copyright c 1993 by Richard T. Turley and James M. Bieman. Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the author. Direct correspondence concerning this paper to: J. Bieman, Department of Computer Science, Colorado State University, Fort Collins, CO 80523, bieman@cs.colostate.edu, (303)491-7096, Fax: (303)491-6639.",
"title": ""
}
] |
[
{
"docid": "7c5abed8220171f38e3801298f660bfa",
"text": "Heavy metal remediation of aqueous streams is of special concern due to recalcitrant and persistency of heavy metals in environment. Conventional treatment technologies for the removal of these toxic heavy metals are not economical and further generate huge quantity of toxic chemical sludge. Biosorption is emerging as a potential alternative to the existing conventional technologies for the removal and/or recovery of metal ions from aqueous solutions. The major advantages of biosorption over conventional treatment methods include: low cost, high efficiency, minimization of chemical or biological sludge, regeneration of biosorbents and possibility of metal recovery. Cellulosic agricultural waste materials are an abundant source for significant metal biosorption. The functional groups present in agricultural waste biomass viz. acetamido, alcoholic, carbonyl, phenolic, amido, amino, sulphydryl groups etc. have affinity for heavy metal ions to form metal complexes or chelates. The mechanism of biosorption process includes chemisorption, complexation, adsorption on surface, diffusion through pores and ion exchange etc. The purpose of this review article is to provide the scattered available information on various aspects of utilization of the agricultural waste materials for heavy metal removal. Agricultural waste material being highly efficient, low cost and renewable source of biomass can be exploited for heavy metal remediation. Further these biosorbents can be modified for better efficiency and multiple reuses to enhance their applicability at industrial scale.",
"title": ""
},
{
"docid": "63d26f3336960c1d92afbd3a61a9168c",
"text": "The location-based social networks have been becoming flourishing in recent years. In this paper, we aim to estimate the similarity between users according to their physical location histories (represented by GPS trajectories). This similarity can be regarded as a potential social tie between users, thereby enabling friend and location recommendations. Different from previous work using social structures or directly matching users’ physical locations, this approach model a user’s GPS trajectories with a semantic location history (SLH), e.g., shopping malls ? restaurants ? cinemas. Then, we measure the similarity between different users’ SLHs by using our maximal travel match (MTM) algorithm. The advantage of our approach lies in two aspects. First, SLH carries more semantic meanings of a user’s interests beyond low-level geographic positions. Second, our approach can estimate the similarity between two users without overlaps in the geographic spaces, e.g., people living in different cities. When matching SLHs, we consider the sequential property, the granularity and the popularity of semantic locations. We evaluate our method based on a realworld GPS dataset collected by 109 users in a period of 1 year. The results show that SLH outperforms a physicallocation-based approach and MTM is more effective than several widely used sequence matching approaches given this application scenario.",
"title": ""
},
{
"docid": "fcc021f052f261c27cb67205692cd9ab",
"text": "Various studies showed that inhaled fine particles with diameter less than 10 micrometers (PM10) in the air can cause adverse health effects on human, such as heart disease, asthma, stroke, bronchitis and the like. This is due to their ability to penetrate further into the lung and alveoli. The aim of this study is to develop a state-of-art reliable technique to use surveillance camera for monitoring the temporal patterns of PM10 concentration in the air. Once the air quality reaches the alert thresholds, it will provide warning alarm to alert human to prevent from long exposure to these fine particles. This is important for human to avoid the above mentioned adverse health effects. In this study, an internet protocol (IP) network camera was used as an air quality monitoring sensor. It is a 0.3 mega pixel charge-couple-device (CCD) camera integrates with the associate electronics for digitization and compression of images. This network camera was installed on the rooftop of the school of physics. The camera observed a nearby hill, which was used as a reference target. At the same time, this network camera was connected to network via a cat 5 cable or wireless to the router and modem, which allowed image data transfer over the standard computer networks (Ethernet networks), internet, or even wireless technology. Then images were stored in a server, which could be accessed locally or remotely for computing the air quality information with a newly developed algorithm. The results were compared with the alert thresholds. If the air quality reaches the alert threshold, alarm will be triggered to inform us this situation. The newly developed algorithm was based on the relationship between the atmospheric reflectance and the corresponding measured air quality of PM10 concentration. In situ PM10 air quality values were measured with DustTrak meter and the sun radiation was measured simultaneously with a spectroradiometer. Regression method was use to calibrate this algorithm. Still images captured by this camera were separated into three bands namely red, green and blue (RGB), and then digital numbers (DN) were determined. These DN were used to determine the atmospherics reflectance values of difference bands, and then used these values in the newly developed algorithm to determine PM10 concentration. The results of this study showed that the proposed algorithm produced a high correlation coefficient (R2) of 0.7567 and low root-mean-square error (RMS) of plusmn 5 mu g/m3 between the measured and estimated PM10 concentration. A program was written by using microsoft visual basic 6.0 to download the still images automatically from the camera via the internet and utilize the newly developed algorithm to determine PM10 concentration automatically and continuously. This concluded that surveillance camera can be used for temporal PM10 concentration monitoring. It is more than an air pollution monitoring device; it provides continuous, on-line, real-time monitoring for air pollution at multi location and air pollution warning or alert system. This system also offers low implementation, operation and maintenance cost of ownership because the surveillance cameras become cheaper and cheaper now.",
"title": ""
},
{
"docid": "7af557e5fb3d217458d7b635ee18fee0",
"text": "The growth of mobile commerce, or the purchase of services or goods using mobile technology, heavily depends on the availability, reliability, and acceptance of mobile wallet systems. Although several researchers have attempted to create models on the acceptance of such mobile payment systems, no single comprehensive framework has yet emerged. Based upon a broad literature review of mobile technology adoption, a comprehensive model integrating eleven key consumer-related variables affecting the adoption of mobile payment systems is proposed. This model, based on established theoretical underpinnings originally established in the technology acceptance literature, extends existing frameworks by including attractiveness of alternatives and by proposing relationships between the key constructs. Japan is at the forefront of such technology and a number of domestic companies have been effectively developing and marketing mobile wallets for some time. Using this proposed framework, we present the case of the successful adoption of Mobile Suica in Japan, which can serve as a model for the rapid diffusion of such payment systems for other countries where adoption has been unexpectedly slow.",
"title": ""
},
{
"docid": "173f6fa3b43d2ec394c9bec0d45753dd",
"text": "Semantic instance segmentation remains a challenging task. In this work we propose to tackle the problem with a discriminative loss function, operating at the pixel level, that encourages a convolutional network to produce a representation of the image that can easily be clustered into instances with a simple post-processing step. The loss function encourages the network to map each pixel to a point in feature space so that pixels belonging to the same instance lie close together while different instances are separated by a wide margin. Our approach of combining an offthe-shelf network with a principled loss function inspired by a metric learning objective is conceptually simple and distinct from recent efforts in instance segmentation. In contrast to previous works, our method does not rely on object proposals or recurrent mechanisms. A key contribution of our work is to demonstrate that such a simple setup without bells and whistles is effective and can perform onpar with more complex methods. Moreover, we show that it does not suffer from some of the limitations of the popular detect-and-segment approaches. We achieve competitive performance on the Cityscapes and CVPPP leaf segmentation benchmarks.",
"title": ""
},
{
"docid": "4dd2fc66b1a2f758192b02971476b4cc",
"text": "Although efforts have been directed toward the advancement of women in science, technology, engineering, and mathematics (STEM) positions, little research has directly examined women's perspectives and bottom-up strategies for advancing in male-stereotyped disciplines. The present study utilized Photovoice, a Participatory Action Research method, to identify themes that underlie women's experiences in traditionally male-dominated fields. Photovoice enables participants to convey unique aspects of their experiences via photographs and their in-depth knowledge of a community through personal narrative. Forty-six STEM women graduate students and postdoctoral fellows completed a Photovoice activity in small groups. They presented photographs that described their experiences pursuing leadership positions in STEM fields. Three types of narratives were discovered and classified: career strategies, barriers to achievement, and buffering strategies or methods for managing barriers. Participants described three common types of career strategies and motivational factors, including professional development, collaboration, and social impact. Moreover, the lack of rewards for these workplace activities was seen as limiting professional effectiveness. In terms of barriers to achievement, women indicated they were not recognized as authority figures and often worked to build legitimacy by fostering positive relationships. Women were vigilant to other people's perspectives, which was costly in terms of time and energy. To manage role expectations, including those related to gender, participants engaged in numerous role transitions throughout their day to accommodate workplace demands. To buffer barriers to achievement, participants found resiliency in feelings of accomplishment and recognition. Social support, particularly from mentors, helped participants cope with negative experiences and to envision their future within the field. Work-life balance also helped participants find meaning in their work and have a sense of control over their lives. Overall, common workplace challenges included a lack of social capital and limited degrees of freedom. Implications for organizational policy and future research are discussed.",
"title": ""
},
{
"docid": "50471274efcc7fd7547dc6c0a1b3d052",
"text": "Recently, the UAS has been extensively exploited for data collection from remote and dangerous or inaccessible areas. While most of its existing applications have been directed toward surveillance and monitoring tasks, the UAS can play a significant role as a communication network facilitator. For example, the UAS may effectively extend communication capability to disaster-affected people (who have lost cellular and Internet communication infrastructures on the ground) by quickly constructing a communication relay system among a number of UAVs. However, the distance between the centers of trajectories of two neighboring UAVs, referred to as IUD, plays an important role in the communication delay and throughput. For instance, the communication delay increases rapidly while the throughput is degraded when the IUD increases. In order to address this issue, in this article, we propose a simple but effective dynamic trajectory control algorithm for UAVs. Our proposed algorithm considers that UAVs with queue occupancy above a threshold are experiencing congestion resulting in communication delay. To alleviate the congestion at UAVs, our proposal adjusts their center coordinates and also, if needed, the radius of their trajectory. The performance of our proposal is evaluated through computer-based simulations. In addition, we conduct several field experiments in order to verify the effectiveness of UAV-aided networks.",
"title": ""
},
{
"docid": "588b6979d71edbcc82769c1782eacb5c",
"text": "Following a centuries-long decline in the rate of self-employment, a discontinuity in this downward trend is observed for many advanced economies starting in the 1970s and 1980s. In some countries the rate of self-employment appears to increase. At the same time, cross-sectional analysis shows a U-shaped relationship between start-up rates of enterprise and levels of economic development. We provide an overview of the empirical evidence concerning the relationship between independent entrepreneurship, also known as self-employment or business ownership, and economic development. We argue that the reemergence of independent entrepreneurship is based on at least two ‘revolutions’. If we distinguish between solo selfemployed at the lower end of the entrepreneurship spectrum, and ambitious and/or innovative entrepreneurs at the upper end, many advanced economies show a revival at both extremes. Policymakers in advanced economies should be aware of both revolutions and tailor their policies accordingly.",
"title": ""
},
{
"docid": "70859cc5754a4699331e479a566b70f1",
"text": "The relationship between mind and brain has philosophical, scientific, and practical implications. Two separate but related surveys from the University of Edinburgh (University students, n= 250) and the University of Liège (health-care workers, lay public, n= 1858) were performed to probe attitudes toward the mind-brain relationship and the variables that account for differences in views. Four statements were included, each relating to an aspect of the mind-brain relationship. The Edinburgh survey revealed a predominance of dualistic attitudes emphasizing the separateness of mind and brain. In the Liège survey, younger participants, women, and those with religious beliefs were more likely to agree that the mind and brain are separate, that some spiritual part of us survives death, that each of us has a soul that is separate from the body, and to deny the physicality of mind. Religious belief was found to be the best predictor for dualistic attitudes. Although the majority of health-care workers denied the distinction between consciousness and the soma, more than one-third of medical and paramedical professionals regarded mind and brain as separate entities. The findings of the study are in line with previous studies in developmental psychology and with surveys of scientists' attitudes toward the relationship between mind and brain. We suggest that the results are relevant to clinical practice, to the formulation of scientific questions about the nature of consciousness, and to the reception of scientific theories of consciousness by the general public.",
"title": ""
},
{
"docid": "76d260180b588f881f1009a420a35b3b",
"text": "Appearance changes due to weather and seasonal conditions represent a strong impediment to the robust implementation of machine learning systems in outdoor robotics. While supervised learning optimises a model for the training domain, it will deliver degraded performance in application domains that underlie distributional shifts caused by these changes. Traditionally, this problem has been addressed via the collection of labelled data in multiple domains or by imposing priors on the type of shift between both domains. We frame the problem in the context of unsupervised domain adaptation and develop a framework for applying adversarial techniques to adapt popular, state-of-the-art network architectures with the additional objective to align features across domains. Moreover, as adversarial training is notoriously unstable, we first perform an extensive ablation study, adapting many techniques known to stabilise generative adversarial networks, and evaluate on a surrogate classification task with the same appearance change. The distilled insights are applied to the problem of free-space segmentation for motion planning in autonomous driving.",
"title": ""
},
{
"docid": "1f56f045a9b262ce5cd6566d47c058bb",
"text": "The growing popularity and development of data mining technologies bring serious threat to the security of individual,'s sensitive information. An emerging research topic in data mining, known as privacy-preserving data mining (PPDM), has been extensively studied in recent years. The basic idea of PPDM is to modify the data in such a way so as to perform data mining algorithms effectively without compromising the security of sensitive information contained in the data. Current studies of PPDM mainly focus on how to reduce the privacy risk brought by data mining operations, while in fact, unwanted disclosure of sensitive information may also happen in the process of data collecting, data publishing, and information (i.e., the data mining results) delivering. In this paper, we view the privacy issues related to data mining from a wider perspective and investigate various approaches that can help to protect sensitive information. In particular, we identify four different types of users involved in data mining applications, namely, data provider, data collector, data miner, and decision maker. For each type of user, we discuss his privacy concerns and the methods that can be adopted to protect sensitive information. We briefly introduce the basics of related research topics, review state-of-the-art approaches, and present some preliminary thoughts on future research directions. Besides exploring the privacy-preserving approaches for each type of user, we also review the game theoretical approaches, which are proposed for analyzing the interactions among different users in a data mining scenario, each of whom has his own valuation on the sensitive information. By differentiating the responsibilities of different users with respect to security of sensitive information, we would like to provide some useful insights into the study of PPDM.",
"title": ""
},
{
"docid": "013270914bfee85265f122b239c9fc4c",
"text": "Current study is with the aim to identify similarities and distinctions between irony and sarcasm by adopting quantitative sentiment analysis as well as qualitative content analysis. The result of quantitative sentiment analysis shows that sarcastic tweets are used with more positive tweets than ironic tweets. The result of content analysis corresponds to the result of quantitative sentiment analysis in identifying the aggressiveness of sarcasm. On the other hand, from content analysis it shows that irony owns two senses. The first sense of irony is equal to aggressive sarcasm with speaker awareness. Thus, tweets of first sense of irony may attack a specific target, and the speaker may tag his/her tweet irony because the tweet itself is ironic. These tweets though tagged as irony are in fact sarcastic tweets. Different from this, the tweets of second sense of irony is tagged to classify an event to be ironic. However, from the distribution in sentiment analysis and examples in content analysis, irony seems to be more broadly used in its second sense.",
"title": ""
},
{
"docid": "aae97dd982300accb15c05f9aa9202cd",
"text": "Personal robots and robot technology (RT)-based assistive devices are expected to play a major role in our elderly-dominated society, with an active participation to joint works and community life with humans, as partner and as friends for us. The authors think that the emotion expression of a robot is effective in joint activities of human and robot. In addition, we also think that bipedal walking is necessary to robots which are active in human living environment. But, there was no robot which has those functions. And, it is not clear what kinds of functions are effective actually. Therefore we developed a new bipedal walking robot which is capable to express emotions. In this paper, we present the design and the preliminary evaluation of the new head of the robot with only a small number of degrees of freedom for facial expression.",
"title": ""
},
{
"docid": "e50ba614fc997f058f8d495b59c18af5",
"text": "We propose a model of natural language inference which identifies valid inferences by their lexical and syntactic features, without full semantic interpretation. We extend past work in natural logic, which has focused on semantic containment and monotonicity, by incorporating both semantic exclusion and implicativity. Our model decomposes an inference problem into a sequence of atomic edits linking premise to hypothesis; predicts a lexical semantic relation for each edit; propagates these relations upward through a semantic composition tree according to properties of intermediate nodes; and joins the resulting semantic relations across the edit sequence. A computational implementation of the model achieves 70% accuracy and 89% precision on the FraCaS test suite. Moreover, including this model as a component in an existing system yields significant performance gains on the Recognizing Textual Entailment challenge.",
"title": ""
},
{
"docid": "f7cdf631c12567fd37b04419eb8e4daa",
"text": "A multiple-beam photonic beamforming receiver is proposed and demonstrated. The architecture is based on a large port-count demultiplexer and fast tunable lasers to achieve a passive design, with independent beam steering for multiple beam operation. A single true time delay module with four independent beams is experimentally demonstrated, showing extremely smooth RF response in the -band, fast switching capabilities, and negligible crosstalk.",
"title": ""
},
{
"docid": "653bdddafdb40af00d5d838b1a395351",
"text": "Advances in electronic location technology and the coming of age of mobile computing have opened the door for location-aware applications to permeate all aspects of everyday life. Location is at the core of a large number of high-value applications ranging from the life-and-death context of emergency response to serendipitous social meet-ups. For example, the market for GPS products and services alone is expected to grow to US$200 billion by 2015. Unfortunately, there is no single location technology that is good for every situation and exhibits high accuracy, low cost, and universal coverage. In fact, high accuracy and good coverage seldom coexist, and when they do, it comes at an extreme cost. Instead, the modern localization landscape is a kaleidoscope of location systems based on a multitude of different technologies including satellite, mobile telephony, 802.11, ultrasound, and infrared among others. This lecture introduces researchers and developers to the most popular technologies and systems for location estimation and the challenges and opportunities that accompany their use. For each technology, we discuss the history of its development, the various systems that are based on it, and their trade-offs and their effects on cost and performance. We also describe technology-independent algorithms that are commonly used to smooth streams of location estimates and improve the accuracy of object tracking. Finally, we provide an overview of the wide variety of application domains where location plays a key role, and discuss opportunities and new technologies on the horizon. KEyWoRDS localization, location systems, location tracking, context awareness, navigation, location sensing, tracking, Global Positioning System, GPS, infrared location, ultrasonic location, 802.11 location, cellular location, Bayesian filters, RFID, RSSI, triangulation",
"title": ""
},
{
"docid": "80153230a2ffba44c827b965955eab9d",
"text": "Th e environmentally friendly Eff ective Microorganisms (EM) technology claims an enormous amount of benefi ts (claimed by the companies). Th e use of EM as an addictive to manure or as a spray directly in the fi elds may increase the microfauna diversity of the soil and many benefi ts are derived from that increase. It seems that suffi cient information is available about this new technology.",
"title": ""
},
{
"docid": "1778e5f82da9e90cbddfa498d68e461e",
"text": "Today’s business environment is characterized by fast and unexpected changes, many of which are driven by technological advancement. In such environment, the ability to respond effectively and adapt to the new requirements is not only desirable but essential to survive. Comprehensive and quick understanding of intricacies of market changes facilitates firm’s faster and better response. Two concepts contribute to the success of this scenario; organizational agility and business intelligence (BI). As of today, despite BI’s capabilities to foster organizational agility and consequently improve organizational performance, a clear link between BI and organizational agility has not been established. In this paper we argue that BI solutions have the potential to be facilitators for achieving agility. We aim at showing how BI capabilities can help achieve agility at operational, portfolio, and strategic levels.",
"title": ""
},
{
"docid": "4e1ba3178e40738ccaf2c2d76dd417d8",
"text": "We present the results of a recent large-scale subjective study of video quality on a collection of videos distorted by a variety of application-relevant processes. Methods to assess the visual quality of digital videos as perceived by human observers are becoming increasingly important, due to the large number of applications that target humans as the end users of video. Owing to the many approaches to video quality assessment (VQA) that are being developed, there is a need for a diverse independent public database of distorted videos and subjective scores that is freely available. The resulting Laboratory for Image and Video Engineering (LIVE) Video Quality Database contains 150 distorted videos (obtained from ten uncompressed reference videos of natural scenes) that were created using four different commonly encountered distortion types. Each video was assessed by 38 human subjects, and the difference mean opinion scores (DMOS) were recorded. We also evaluated the performance of several state-of-the-art, publicly available full-reference VQA algorithms on the new database. A statistical evaluation of the relative performance of these algorithms is also presented. The database has a dedicated web presence that will be maintained as long as it remains relevant and the data is available online.",
"title": ""
}
] |
scidocsrr
|
916b4f36791517fc8c322d6773bacd75
|
Deep Autoencoding Models for Unsupervised Anomaly Segmentation in Brain MR Images
|
[
{
"docid": "c8e8d82af2d8d2c6c51b506b4f26533f",
"text": "We present an efficient method for detecting anomalies in videos. Recent applications of convolutional neural networks have shown promises of convolutional layers for object detection and recognition, especially in images. However, convolutional neural networks are supervised and require labels as learning signals. We propose a spatiotemporal architecture for anomaly detection in videos including crowded scenes. Our architecture includes two main components, one for spatial feature representation, and one for learning the temporal evolution of the spatial features. Experimental results on Avenue, Subway and UCSD benchmarks confirm that the detection accuracy of our method is comparable to state-of-the-art methods at a considerable speed of up to 140 fps.",
"title": ""
}
] |
[
{
"docid": "96d123a5c9a01922ebb99623fddd1863",
"text": "Previous studies have shown that Wnt signaling is involved in postnatal mammalian myogenesis; however, the downstream mechanism of Wnt signaling is not fully understood. This study reports that the murine four-and-a-half LIM domain 1 (Fhl1) could be stimulated by β-catenin or LiCl treatment to induce myogenesis. In contrast, knockdown of the Fhl1 gene expression in C2C12 cells led to reduced myotube formation. We also adopted reporter assays to demonstrate that either β-catenin or LiCl significantly activated the Fhl1 promoter, which contains four putative consensus TCF/LEF binding sites. Mutations of two of these sites caused a significant decrease in promoter activity by luciferase reporter assay. Thus, we suggest that Wnt signaling induces muscle cell differentiation, at least partly, through Fhl1 activation.",
"title": ""
},
{
"docid": "4a934aeb23657b8cde97b5cb543f8153",
"text": "Refactoring is recognized as an essential practice in the context of evolutionary and agile software development. Recognizing the importance of the practice, modern IDEs provide some support for low-level refactorings. A notable exception in the list of supported refactorings is the “Extract Class” refactoring, which is conceived to simplify large, complex, unwieldy and less cohesive classes. In this work, we describe a method and a tool, implemented as an Eclipse plugin, designed to fulfill exactly this need. Our method involves three steps: (a) recognition of Extract Class opportunities, (b) ranking of the identified opportunities in terms of the improvement each one is anticipated to bring about to the system design, and (c) fully automated application of the refactoring chosen by the developer. The first step relies on an agglomerative clustering algorithm, which identifies cohesive sets of class members",
"title": ""
},
{
"docid": "2d7251e7c6029dae6e32c742c2ad3709",
"text": "Video-based recognition of activities of daily living (ADLs) is being used in ambient assisted living systems in order to support the independent living of older people. However, current systems based on cameras located in the environment present a number of problems, such as occlusions and a limited field of view. Recently, wearable cameras have begun to be exploited. This paper presents a review of the state of the art of egocentric vision systems for the recognition of ADLs following a hierarchical structure: motion, action and activity levels, where each level provides higher semantic information and involves a longer time frame. The current egocentric vision literature suggests that ADLs recognition is mainly driven by the objects present in the scene, especially those associated with specific tasks. However, although object-based approaches have proven popular, object recognition remains a challenge due to the intra-class variations found in unconstrained scenarios. As a consequence, the performance of current systems is far from satisfactory.",
"title": ""
},
{
"docid": "dd5a45464936906e7b4c987274c66839",
"text": "Visual analytic systems, especially mixed-initiative systems, can steer analytical models and adapt views by making inferences from users’ behavioral patterns with the system. Because such systems rely on incorporating implicit and explicit user feedback, they are particularly susceptible to the injection and propagation of human biases. To ultimately guard against the potentially negative effects of systems biased by human users, we must first qualify what we mean by the term bias. Thus, in this paper we describe four different perspectives on human bias that are particularly relevant to visual analytics. We discuss the interplay of human and computer system biases, particularly their roles in mixed-initiative systems. Given that the term bias is used to describe several different concepts, our goal is to facilitate a common language in research and development efforts by encouraging researchers to mindfully choose the perspective(s) considered in their work.",
"title": ""
},
{
"docid": "4faafaa33bca5d8f56cce393e1227019",
"text": "Sodium hypochlorite (NaOCl) is the most common irrigant used in modern endodontics. It is highly effective at dissolving organic debris and disinfecting the root canal system due to the high pH. Extravasation of NaOCl into intra-oral and extra-oral tissues can lead to devastating outcomes leading to long-term functional and aesthetic deficits. Currently no clear guidelines are available which has caused confusion among the dental and oral and maxillofacial (OMFS) surgical community how best to manage these patients. Following a literature review and considering our own experience we have formulated clear and precise guidelines to manage patients with NaOCl injury.",
"title": ""
},
{
"docid": "1b0abb269fcfddc9dd00b3f8a682e873",
"text": "Fully convolutional neural networks (F-CNNs) have set the state-of-the-art in image segmentation for a plethora of applications. Architectural innovations within F-CNNs have mainly focused on improving spatial encoding or network connectivity to aid gradient flow. In this paper, we explore an alternate direction of recalibrating the feature maps adaptively, to boost meaningful features, while suppressing weak ones. We draw inspiration from the recently proposed squeeze & excitation (SE) module for channel recalibration of feature maps for image classification. Towards this end, we introduce three variants of SE modules for image segmentation, (i) squeezing spatially and exciting channel-wise (cSE), (ii) squeezing channel-wise and exciting spatially (sSE) and (iii) concurrent spatial and channel squeeze & excitation (scSE). We effectively incorporate these SE modules within three different state-of-theart F-CNNs (DenseNet, SD-Net, U-Net) and observe consistent improvement of performance across all architectures, while minimally effecting model complexity. Evaluations are performed on two challenging applications: whole brain segmentation on MRI scans and organ segmentation on whole body contrast enhanced CT scans.",
"title": ""
},
{
"docid": "7a7b4a5f5bc4df3372a57f5e0724c685",
"text": "In the Modern scenario, the naturally available resources for power generation are being depleted at an alarming rate; firstly due to wastage of power at consumer end, secondly due to inefficiency of various power system components. A Combined Cycle Gas Turbine (CCGT) integrates two cycles- Brayton cycle (Gas Turbine) and Rankine cycle (Steam Turbine) with the objective of increasing overall plant efficiency. This is accomplished by utilising the exhaust of Gas Turbine through a waste-heat recovery boiler to run a Steam Turbine. The efficiency of a gas turbine which ranges from 28% to 33% can hence be raised to about 60% by recovering some of the low grade thermal energy from the exhaust gas for steam turbine process. This paper is a study for the modelling of CCGT and comparing it with actual operational data. The performance model for CCGT plant was developed in MATLAB/Simulink.",
"title": ""
},
{
"docid": "cf5cd34ea664a81fabe0460e4e040a2d",
"text": "A novel p-trench phase-change memory (PCM) cell and its integration with a MOSFET selector in a standard 0.18 /spl mu/m CMOS technology are presented. The high-performance capabilities of PCM cells are experimentally investigated and their application in embedded systems is discussed. Write times as low as 10 ns and 20 ns have been measured for the RESET and SET operation, respectively, still granting a 10/spl times/ read margin. The impact of the RESET pulse on PCH cell endurance has been also evaluated. Finally, cell distributions and first statistical endurance measurements on a 4 Mbit MOS demonstrator clearly assess the feasibility of the PCM technology.",
"title": ""
},
{
"docid": "b1e8f1b40c3a1ca34228358a2e8d8024",
"text": "When the training and the test data belong to different domains, the accuracy of an object classifier is significantly reduced. Therefore, several algorithms have been proposed in the last years to diminish the so called domain shift between datasets. However, all available evaluation protocols for domain adaptation describe a closed set recognition task, where both domains, namely source and target, contain exactly the same object classes. In this work, we also explore the field of domain adaptation in open sets, which is a more realistic scenario where only a few categories of interest are shared between source and target data. Therefore, we propose a method that fits in both closed and open set scenarios. The approach learns a mapping from the source to the target domain by jointly solving an assignment problem that labels those target instances that potentially belong to the categories of interest present in the source dataset. A thorough evaluation shows that our approach outperforms the state-of-the-art.",
"title": ""
},
{
"docid": "56d8a92810ec9579de73dd9fa7b8f362",
"text": "Recent successes in training large, deep neural networks have prompted active investigation into the representations learned on their intermediate layers. Such research is difficult because it requires making sense of non-linear computations performed by millions of learned parameters, but valuable because it increases our ability to understand current models and training algorithms and thus create improved versions of them. In this paper we investigate the extent to which neural networks exhibit what we call convergent learning, which is when the representations learned by multiple nets converge to a set of features which are either individually similar between networks or where subsets of features span similar lowdimensional spaces. We propose a specific method of probing representations: training multiple networks and then comparing and contrasting their individual, learned representations at the level of neurons or groups of neurons. We begin research into this question by introducing three techniques to approximately align different neural networks on a feature or subspace level: a bipartite matching approach that makes one-to-one assignments between neurons, a sparse prediction and clustering approach that finds one-to-many mappings, and a spectral clustering approach that finds many-to-many mappings. This initial investigation reveals a few interesting, previously unknown properties of neural networks, and we argue that future research into the question of convergent learning will yield many more. The insights described here include (1) that some features are learned reliably in multiple networks, yet other features are not consistently learned; (2) that units learn to span low-dimensional subspaces and, while these subspaces are common to multiple networks, the specific basis vectors learned are not; (3) that the representation codes are a mix between a local (single unit) code and slightly, but not fully, distributed codes across multiple units; (4) that the average activation values of neurons vary considerably within a network, yet the mean activation values across different networks converge to an almost identical distribution. 1",
"title": ""
},
{
"docid": "9806e837e1d988aa2cfb10e7500d2267",
"text": "The high-functioning Autism Spectrum Screening Questionnaire (ASSQ) is a 27-item checklist for completion by lay informants when assessing symptoms characteristic of Asperger syndrome and other high-functioning autism spectrum disorders in children and adolescents with normal intelligence or mild mental retardation. Data for parent and teacher ratings in a clinical sample are presented along with various measures of reliability and validity. Optimal cutoff scores were estimated, using Receiver Operating Characteristic analysis. Findings indicate that the ASSQ is a useful brief screening device for the identification of autism spectrum disorders in clinical settings.",
"title": ""
},
{
"docid": "c9bbd74b5a74d8fac1aa3197b3085104",
"text": "We propose a new image-interpolation technique using a combination of an adaptive directional wavelet transform (ADWT) and discrete cosine transform (DCT). In the proposed method, we use ADWT to decompose the low-resolution image into different frequency subband images. The high-frequency subband images are further transformed to DCT coefficients, and the zero-padding method is used for interpolation. Simultaneously, the low-frequency subband image is replaced by the original low-resolution image. Finally, we generate the interpolated image by combining the original low-resoultion image and the interpolated subband images by using inverse DWT. Experimental results demonstrate that the proposed algorithm yields better quality images in terms of subjective and objective quality metrics compared to the other methods considered in this paper.",
"title": ""
},
{
"docid": "66056b4d6cd15282e676a836cc31f8de",
"text": "In this paper, we propose a new approach for cross-scenario clothing retrieval and fine-grained clothing style recognition. The query clothing photos captured by cameras or other mobile devices are filled with noisy background while the product clothing images online for shopping are usually presented in a pure environment. We tackle this problem by two steps. Firstly, a hierarchical super-pixel merging algorithm based on semantic segmentation is proposed to obtain the intact query clothing item. Secondly, aiming at solving the problem of clothing style recognition in different scenarios, we propose sparse coding based on domain-adaptive dictionary learning to improve the accuracy of the classifier and adaptability of the dictionary. In this way, we obtain fine-grained attributes of the clothing items and use the attributes matching score to re-rank the retrieval results further. The experiment results show that our method outperforms the state-of-the-art approaches. Furthermore, we build a well labeled clothing dataset, where the images are selected from 1.5 billion product clothing images.",
"title": ""
},
{
"docid": "5dfac9fc9612cf386419e95da1652153",
"text": "This paper introduces the concept of strategic advantage and distinguishes it from competitive advantage. This concept helps to explain the full nature of sustainable competitive advantage through uncovering the dynamics of resource-based strategy. A new classification of resources emerges, demonstrating that rents are more relevant than profits in the analysis of sustainable competitive advantage. Introduction The search for sustainable competitive advantage has been the dominant theme in the study of strategy for many years (Bain, 1956; Kay, 1994; Porter, 1980). The “resourcebased view” has recently found favour as making a key contribution to developing and delivering competitive advantage. Within this context, the concept of “core competence” is being presented as a ready-made solution to many, if not all, competitive shortcomings permeating organisations (Collis and Montgomery, 1995; Prahalad and Hamel, 1990). Both the concept of sustainable competitive advantage and the resource-based view, however, limit organisations in understanding the full nature and dynamics of strategy for the following reasons: • Sustainable competitive advantage is a journey and not a destination – it is like tomorrow which is inescapable but never arrives. Sustainable competitive advantage only becomes meaningful when this journey is experienced. For most organisations, however, the problem is how to identify where the journey lies. In fast-moving competitive environments, the nature of the journey itself keeps changing in an unpredictable fashion. As a result, the process of identifying the journey presents the main challenge. • The resource-based view strives to identify and nurture those resources that enable organisations to develop competitive advantage. The primary focus of such an analysis, however, is on the existing resources which are treated as being largely static and unchanging. The problem is that dynamic environments ceaselessly call for a new generation of resources as the context constantly shifts. Given the above considerations, organisations often fail to exploit fully the potential of both the concept of sustainable competitive advantage and the resource-based view. To reverse this situation, it is necessary to develop the competitive advantage and the resources of an organisation as a dynamic concept. This calls for rediscovering sustainable competitive advantage through exploring its origins, together with the processes that make it happen. For this purpose it is first necessary to make explicit what is meant by the terms “sustainability” and “competitive advantage” and then raise the following philosophical and practical questions: • Can the terms “sustainability” and “competitive advantage”, which can be argued to serve different purposes, be brought together in the name of unity of interest? • Is such a unity real or a discursive, aimless marriage? • Can sustainable competitive advantage assume a shared meaning for those who want to make it happen? These questions sound simple but the answers are quite difficult because the purpose of an organisation can potentially be twofold. First, the organisation has to focus on its existing resources in exploiting existing business opportunities. Second, the organisation has to develop, at the same time, a new generation of resources in order to sustain its competitiveness. There is therefore a need to balance living and unborn resources. 
This balance, which determines the effectiveness of strategy, is achieved when organisations succeed in marrying sustainability and competitive advantage in a way that it does not become a marriage of convenience. Competitive advantage and sustainability: the missing link The term “competitive advantage” has traditionally been described in terms of the attributes and resources of an organisation that allow it to outperform others in the same industry or product market (Christensen and Fahey, 1984; Kay, 1994; Porter, 1980). In contrast, the term “sustainable” considers the protection such attributes and resources have to offer over some usually undefined period of time into the future for the organisation to maintain its competitiveness. Within this context, “sustainable” can assume a number of meanings depending on the frame of reference through which it is viewed. It can be interpreted to mean endurable, defensible,",
"title": ""
},
{
"docid": "32dbbc1b9cc78f2a4db0cffd12cd2467",
"text": "OBJECTIVE\nTo evaluate existing automatic speech-recognition (ASR) systems to measure their performance in interpreting spoken clinical questions and to adapt one ASR system to improve its performance on this task.\n\n\nDESIGN AND MEASUREMENTS\nThe authors evaluated two well-known ASR systems on spoken clinical questions: Nuance Dragon (both generic and medical versions: Nuance Gen and Nuance Med) and the SRI Decipher (the generic version SRI Gen). The authors also explored language model adaptation using more than 4000 clinical questions to improve the SRI system's performance, and profile training to improve the performance of the Nuance Med system. The authors reported the results with the NIST standard word error rate (WER) and further analyzed error patterns at the semantic level.\n\n\nRESULTS\nNuance Gen and Med systems resulted in a WER of 68.1% and 67.4% respectively. The SRI Gen system performed better, attaining a WER of 41.5%. After domain adaptation with a language model, the performance of the SRI system improved 36% to a final WER of 26.7%.\n\n\nCONCLUSION\nWithout modification, two well-known ASR systems do not perform well in interpreting spoken clinical questions. With a simple domain adaptation, one of the ASR systems improved significantly on the clinical question task, indicating the importance of developing domain/genre-specific ASR systems.",
"title": ""
},
{
"docid": "21db70be88df052de82990109941e49a",
"text": "We present an approach to automatically assign semantic labels to rooms reconstructed from 3D RGB maps of apartments. Evidence for the room types is generated using state-of-the-art deep-learning techniques for scene classification and object detection based on automatically generated virtual RGB views, as well as from a geometric analysis of the map's 3D structure. The evidence is merged in a conditional random field, using statistics mined from different datasets of indoor environments. We evaluate our approach qualitatively and quantitatively and compare it to related methods.",
"title": ""
},
{
"docid": "8411c13863aeb4338327ea76e0e2725b",
"text": "There is often the need to update an installed Intrusion Detection System (IDS) due to new attack methods or upgraded computing environments. Since many current IDSs are constructed by manual encoding of expert security knowledge, changes to IDSs are expensive and slow. In this paper, we describe a data mining framework for adaptively building Intrusion Detection (ID) models. The central idea is to utilize auditing programs to extract an extensive set of features that describe each network connection or host session, and apply data mining programs to learn rules that accurately capture the behavior of intrusions and normal activities. These rules can then be used for misuse detection and anomaly detection. Detection models for new intrusions or specific components of a network system are incorporated into an existing IDS through a meta-learning (or co-operative learning) process, which produces a meta detection model that combines evidence from multiple models. We discuss the strengths of our data mining programs, namely, classification, meta-learning, association rules, and frequent episodes. We report our results of applying these programs to the (extensively gathered) network audit data from the DARPA Intrusion Detection Evaluation Program.",
"title": ""
},
{
"docid": "c0db1cd3688a18c853331772dbdfdedc",
"text": "In this review we describe the challenges and opportunities for creating magnetically active metamaterials in the optical part of the spectrum. The emphasis is on the sub-wavelength periodic metamaterials whose unit cell is much smaller than the optical wavelength. The conceptual differences between microwave and optical metamaterials are demonstrated. We also describe several theoretical techniques used for calculating the effective parameters of plasmonic metamaterials: the effective dielectric permittivity eff(ω) and magnetic permeability μeff(ω). Several examples of negative permittivity and negative permeability plasmonic metamaterials are used to illustrate the theory. c © 2008 Elsevier Ltd. All rights reserved. PACS: 42.70.-a; 41.20.Gz; 78.67.Bf",
"title": ""
},
{
"docid": "b740fd9a56701ddd8c54d92f45895069",
"text": "In vivo imaging of apoptosis in a preclinical setting in anticancer drug development could provide remarkable advantages in terms of translational medicine. So far, several imaging technologies with different probes have been used to achieve this goal. Here we describe a bioluminescence imaging approach that uses a new formulation of Z-DEVD-aminoluciferin, a caspase 3/7 substrate, to monitor in vivo apoptosis in tumor cells engineered to express luciferase. Upon apoptosis induction, Z-DEVD-aminoluciferin is cleaved by caspase 3/7 releasing aminoluciferin that is now free to react with luciferase generating measurable light. Thus, the activation of caspase 3/7 can be measured by quantifying the bioluminescent signal. Using this approach, we have been able to monitor caspase-3 activation and subsequent apoptosis induction after camptothecin and temozolomide treatment on xenograft mouse models of colon cancer and glioblastoma, respectively. Treated mice showed more than 2-fold induction of Z-DEVD-aminoluciferin luminescent signal when compared to the untreated group. Combining D-luciferin that measures the total tumor burden, with Z-DEVD-aminoluciferin that assesses apoptosis induction via caspase activation, we confirmed that it is possible to follow non-invasively tumor growth inhibition and induction of apoptosis after treatment in the same animal over time. Moreover, here we have proved that following early apoptosis induction by caspase 3 activation is a good biomarker that accurately predicts tumor growth inhibition by anti-cancer drugs in engineered colon cancer and glioblastoma cell lines and in their respective mouse xenograft models.",
"title": ""
},
{
"docid": "2a262a72133922a9232e9a3808341359",
"text": "Autonomous driving has harsh requirements of small model size and energy efficiency, in order to enable the embedded system to achieve real-time on-board object detection. Recent deep convolutional neural network based object detectors have achieved state-of-the-art accuracy. However, such models are trained with numerous parameters and their high computational costs and large storage prohibit the deployment to memory and computation resource limited systems. Lowprecision neural networks are popular techniques for reducing the computation requirements and memory footprint. Among them, binary weight neural network (BWN) is the extreme case which quantizes the float-point into just 1 bit. BWNs are difficult to train and suffer from accuracy deprecation due to the extreme low-bit representation. To address this problem, we propose a knowledge transfer (KT) method to aid the training of BWN using a full-precision teacher network. We built DarkNetand MobileNet-based binary weight YOLO-v2 detectors and conduct experiments on KITTI benchmark for car, pedestrian and cyclist detection. The experimental results show that the proposed method maintains high detection accuracy while reducing the model size of DarkNet-YOLO from 257 MB to 8.8 MB and MobileNet-YOLO from 193 MB to 7.9 MB.",
"title": ""
}
] |
scidocsrr
|
803d8880bf9599f667a280d9049eec60
|
IHS_RD: Lexical Normalization for English Tweets
|
[
{
"docid": "28846c26b51e53e4d42bb49c6d410379",
"text": "Social media language contains huge amount and wide variety of nonstandard tokens, created both intentionally and unintentionally by the users. It is of crucial importance to normalize the noisy nonstandard tokens before applying other NLP techniques. A major challenge facing this task is the system coverage, i.e., for any user-created nonstandard term, the system should be able to restore the correct word within its top n output candidates. In this paper, we propose a cognitivelydriven normalization system that integrates different human perspectives in normalizing the nonstandard tokens, including the enhanced letter transformation, visual priming, and string/phonetic similarity. The system was evaluated on both wordand messagelevel using four SMS and Twitter data sets. Results show that our system achieves over 90% word-coverage across all data sets (a 10% absolute increase compared to state-ofthe-art); the broad word-coverage can also successfully translate into message-level performance gain, yielding 6% absolute increase compared to the best prior approach.",
"title": ""
},
{
"docid": "33447e2bf55a419dfec2520e9449ef0e",
"text": "We present a unified unsupervised statistical model for text normalization. The relationship between standard and non-standard tokens is characterized by a log-linear model, permitting arbitrary features. The weights of these features are trained in a maximumlikelihood framework, employing a novel sequential Monte Carlo training algorithm to overcome the large label space, which would be impractical for traditional dynamic programming solutions. This model is implemented in a normalization system called UNLOL, which achieves the best known results on two normalization datasets, outperforming more complex systems. We use the output of UNLOL to automatically normalize a large corpus of social media text, revealing a set of coherent orthographic styles that underlie online language variation.",
"title": ""
},
{
"docid": "571c73de53da3ed4d9a465325c9e9746",
"text": "Twitter provides access to large volumes of data in real time, but is notoriously noisy, hampering its utility for NLP. In this paper, we target out-of-vocabulary words in short text messages and propose a method for identifying and normalising ill-formed words. Our method uses a classifier to detect ill-formed words, and generates correction candidates based on morphophonemic similarity. Both word similarity and context are then exploited to select the most probable correction candidate for the word. The proposed method doesn’t require any annotations, and achieves state-of-the-art performance over an SMS corpus and a novel dataset based on Twitter.",
"title": ""
}
] |
[
{
"docid": "9c0d65ee42ccfaa291b576568bad59e0",
"text": "BACKGROUND\nThe WHO International Classification of Diseases, 11th version (ICD-11), has proposed two related diagnoses following exposure to traumatic events; Posttraumatic Stress Disorder (PTSD) and Complex PTSD (CPTSD). We set out to explore whether the newly developed ICD-11 Trauma Questionnaire (ICD-TQ) can distinguish between classes of individuals according to the PTSD and CPTSD symptom profiles as per ICD-11 proposals based on latent class analysis. We also hypothesized that the CPTSD class would report more frequent and a greater number of different types of childhood trauma as well as higher levels of functional impairment. Methods Participants in this study were a sample of individuals who were referred for psychological therapy to a National Health Service (NHS) trauma centre in Scotland (N=193). Participants completed the ICD-TQ as well as measures of life events and functioning.\n\n\nRESULTS\nOverall, results indicate that using the newly developed ICD-TQ, two subgroups of treatment-seeking individuals could be empirically distinguished based on different patterns of symptom endorsement; a small group high in PTSD symptoms only and a larger group high in CPTSD symptoms. In addition, CPTSD was more strongly associated with more frequent and a greater accumulation of different types of childhood traumatic experiences and poorer functional impairment.\n\n\nLIMITATIONS\nSample predominantly consisted of people who had experienced childhood psychological trauma or been multiply traumatised in childhood and adulthood.\n\n\nCONCLUSIONS\nCPTSD is highly prevalent in treatment seeking populations who have been multiply traumatised in childhood and adulthood and appropriate interventions should now be developed to aid recovery from this debilitating condition.",
"title": ""
},
{
"docid": "c1d95246f5d1b8c67f4ff4769bb6b9ce",
"text": "BACKGROUND\nA previous open-label study of melatonin, a key substance in the circadian system, has shown effects on migraine that warrant a placebo-controlled study.\n\n\nMETHOD\nA randomized, double-blind, placebo-controlled crossover study was carried out in 2 centers. Men and women, aged 18-65 years, with migraine but otherwise healthy, experiencing 2-7 attacks per month, were recruited from the general population. After a 4-week run-in phase, 48 subjects were randomized to receive either placebo or extended-release melatonin (Circadin®, Neurim Pharmaceuticals Ltd., Tel Aviv, Israel) at a dose of 2 mg 1 hour before bedtime for 8 weeks. After a 6-week washout treatment was switched. The primary outcome was migraine attack frequency (AF). A secondary endpoint was sleep quality assessed by the Pittsburgh Sleep Quality Index (PSQI).\n\n\nRESULTS\nForty-six subjects completed the study (96%). During the run-in phase, the average AF was 4.2 (±1.2) per month and during melatonin treatment the AF was 2.8 (±1.6). However, the reduction in AF during placebo was almost equal (p = 0.497). Absolute risk reduction was 3% (95% confidence interval -15 to 21, number needed to treat = 33). A highly significant time effect was found. The mean global PSQI score did not improve during treatment (p = 0.09).\n\n\nCONCLUSION\nThis study provides Class I evidence that prolonged-release melatonin (2 mg 1 hour before bedtime) does not provide any significant effect over placebo as migraine prophylaxis.\n\n\nCLASSIFICATION OF EVIDENCE\nThis study provides Class I evidence that 2 mg of prolonged release melatonin given 1 hour before bedtime for a duration of 8 weeks did not result in a reduction in migraine frequency compared with placebo (p = 0.497).",
"title": ""
},
{
"docid": "5343db8a8bc5e300b9ad488d0eda56d4",
"text": "The paper analyzes some forms of linguistic ambiguity in English in a specific register, i.e. newspaper headlines. In particular, the focus of the research is on examples of lexical and syntactic ambiguity that result in sources of voluntary or involuntary humor. The study is based on a corpus of 135 verbally ambiguous headlines found on web sites presenting humorous bits of information. The linguistic phenomena that contribute to create this kind of semantic confusion in headlines will be analyzed and divided into the three main categories of lexical, syntactic, and phonological ambiguity, and examples from the corpus will be discussed for each category. The main results of the study were that, firstly, contrary to the findings of previous research on jokes, syntactically ambiguous headlines were found in good percentage in the corpus and that this might point to differences in genre. Secondly, two new configurations for the processing of the disjunctor/connector order were found. In the first of these configurations the disjunctor appears before the connector, instead of being placed after or coinciding with the ambiguous element, while in the second, two ambiguous elements are present, each of which functions both as a connector and a disjunctor.",
"title": ""
},
{
"docid": "191b5477cd8ba0cc26a0f4a51604dc85",
"text": "In recent years, a number of studies have introduced methods for identifying papers with delayed recognition (so called \" sleeping beauties \" , SBs) or have presented single publications as cases of SBs. Most recently, Ke et al. (2015) proposed the so called \" beauty coefficient \" (denoted as B) to quantify how much a given paper can be considered as a paper with delayed recognition. In this study, the new term \" smart girl \" (SG) is suggested to differentiate instant credit or \" flashes in the pan \" from SBs. While SG and SB are qualitatively defined, the dynamic citation angle β is introduced in this study as a simple way for identifying SGs and SBs quantitatively – complementing the beauty coefficient B. The citation angles for all articles from 1980 (n=166870) in natural sciences are calculated for identifying SGs and SBs and their extent. We reveal that about 3% of the articles are typical SGs and about 0.1% typical SBs. The potential advantages of the citation angle approach are explained.",
"title": ""
},
{
"docid": "be8b65d39ee74dbee0835052092040da",
"text": "We examine the problem of question answering over knowledge graphs, focusing on simple questions that can be answered by the lookup of a single fact. Adopting a straightforward decomposition of the problem into entity detection, entity linking, relation prediction, and evidence combination, we explore simple yet strong baselines. On the popular SIMPLEQUESTIONS dataset, we find that basic LSTMs and GRUs plus a few heuristics yield accuracies that approach the state of the art, and techniques that do not use neural networks also perform reasonably well. These results show that gains from sophisticated deep learning techniques proposed in the literature are quite modest and that some previous models exhibit unnecessary complexity.",
"title": ""
},
{
"docid": "0e262d89497d9baad6a35d505139dccd",
"text": "Collectively, machine learning (ML) researchers are engaged in the creation and dissemination of knowledge about data-driven algorithms. In a given paper, researchers might aspire to any subset of the following goals, among others: to theoretically characterize what is learnable, to obtain understanding through empirically rigorous experiments, or to build a working system that has high predictive accuracy. While determining which knowledge warrants inquiry may be subjective, once the topic is fixed, papers are most valuable to the community when they act in service of the reader, creating foundational knowledge and communicating as clearly as possible. What sort of papers best serve their readers? We can enumerate desirable characteristics: these papers should (i) provide intuition to aid the reader’s understanding, but clearly distinguish it from stronger conclusions supported by evidence; (ii) describe empirical investigations that consider and rule out alternative hypotheses [62]; (iii) make clear the relationship between theoretical analysis and intuitive or empirical claims [64]; and (iv) use language to empower the reader, choosing terminology to avoid misleading or unproven connotations, collisions with other definitions, or conflation with other related but distinct concepts [56]. Recent progress in machine learning comes despite frequent departures from these ideals. In this paper, we focus on the following four patterns that appear to us to be trending in ML scholarship:",
"title": ""
},
{
"docid": "095f4ea337421d6e1310acf73977fdaa",
"text": "We consider the problem of autonomous robotic laundry folding, and propose a solution to the perception and manipulation challenges inherent to the task. At the core of our approach is a quasi-static cloth model which allows us to neglect the complex dynamics of cloth under significant parts of the state space, allowing us to reason instead in terms of simple geometry. We present an algorithm which, given a 2D cloth polygon and a desired sequence of folds, outputs a motion plan for executing the corresponding manipulations, deemed g-folds, on a minimal number of robot grippers. We define parametrized fold sequences for four clothing categories: towels, pants, short-sleeved shirts, and long-sleeved shirts, each represented as polygons. We then devise a model-based optimization approach for visually inferring the class and pose of a spread-out or folded clothing article from a single image, such that the resulting polygon provides a parse suitable for these folding primitives. We test the manipulation and perception tasks individually, and combine them to implement an autonomous folding system on the Willow Garage PR2. This enables the PR2 to identify a clothing article spread out on a table, execute the computed folding sequence, and visually track its progress over successive folds.",
"title": ""
},
{
"docid": "b0103474ecd369a9f0ba637c34bacc56",
"text": "BACKGROUND\nThe Internet Addiction Test (IAT) by Kimberly Young is one of the most utilized diagnostic instruments for Internet addiction. Although many studies have documented psychometric properties of the IAT, consensus on the optimal overall structure of the instrument has yet to emerge since previous analyses yielded markedly different factor analytic results.\n\n\nOBJECTIVE\nThe objective of this study was to evaluate the psychometric properties of the Italian version of the IAT, specifically testing the factor structure stability across cultures.\n\n\nMETHODS\nIn order to determine the dimensional structure underlying the questionnaire, both exploratory and confirmatory factor analyses were performed. The reliability of the questionnaire was computed by the Cronbach alpha coefficient.\n\n\nRESULTS\nData analyses were conducted on a sample of 485 college students (32.3%, 157/485 males and 67.7%, 328/485 females) with a mean age of 24.05 years (SD 7.3, range 17-47). Results showed 176/485 (36.3%) participants with IAT score from 40 to 69, revealing excessive Internet use, and 11/485 (1.9%) participants with IAT score from 70 to 100, suggesting significant problems because of Internet use. The IAT Italian version showed good psychometric properties, in terms of internal consistency and factorial validity. Alpha values were satisfactory for both the one-factor solution (Cronbach alpha=.91), and the two-factor solution (Cronbach alpha=.88 and Cronbach alpha=.79). The one-factor solution comprised 20 items, explaining 36.18% of the variance. The two-factor solution, accounting for 42.15% of the variance, showed 11 items loading on Factor 1 (Emotional and Cognitive Preoccupation with the Internet) and 7 items on Factor 2 (Loss of Control and Interference with Daily Life). Goodness-of-fit indexes (NNFI: Non-Normed Fit Index; CFI: Comparative Fit Index; RMSEA: Root Mean Square Error of Approximation; SRMR: Standardized Root Mean Square Residual) from confirmatory factor analyses conducted on a random half subsample of participants (n=243) were satisfactory in both factorial solutions: two-factor model (χ²₁₃₂= 354.17, P<.001, χ²/df=2.68, NNFI=.99, CFI=.99, RMSEA=.02 [90% CI 0.000-0.038], and SRMR=.07), and one-factor model (χ²₁₆₉=483.79, P<.001, χ²/df=2.86, NNFI=.98, CFI=.99, RMSEA=.02 [90% CI 0.000-0.039], and SRMR=.07).\n\n\nCONCLUSIONS\nOur study was aimed at determining the most parsimonious and veridical representation of the structure of Internet addiction as measured by the IAT. Based on our findings, support was provided for both single and two-factor models, with slightly strong support for the bidimensionality of the instrument. Given the inconsistency of the factor analytic literature of the IAT, researchers should exercise caution when using the instrument, dividing the scale into factors or subscales. Additional research examining the cross-cultural stability of factor solutions is still needed.",
"title": ""
},
{
"docid": "b3b27246ed1ef97fb1994b8dbaf023f3",
"text": "Malicious botnets are networks of compromised computers that are controlled remotely to perform large-scale distributed denial-of-service (DDoS) attacks, send spam, trojan and phishing emails, distribute pirated media or conduct other usually illegitimate activities. This paper describes a methodology to detect, track and characterize botnets on a large Tier-1 ISP network. The approach presented here differs from previous attempts to detect botnets by employing scalable non-intrusive algorithms that analyze vast amounts of summary traffic data collected on selected network links. Our botnet analysis is performed mostly on transport layer data and thus does not depend on particular application layer information. Our algorithms produce alerts with information about controllers. Alerts are followed up with analysis of application layer data, that indicates less than 2% false positive rates.",
"title": ""
},
{
"docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
},
{
"docid": "872946be0c4897dc33bc1276593ee7a4",
"text": "BACKGROUND\nMusic therapy is a therapeutic method that uses musical interaction as a means of communication and expression. The aim of the therapy is to help people with serious mental disorders to develop relationships and to address issues they may not be able to using words alone.\n\n\nOBJECTIVES\nTo review the effects of music therapy, or music therapy added to standard care, compared with 'placebo' therapy, standard care or no treatment for people with serious mental disorders such as schizophrenia.\n\n\nSEARCH METHODS\nWe searched the Cochrane Schizophrenia Group Trials Register (December 2010) and supplemented this by contacting relevant study authors, handsearching of music therapy journals and manual searches of reference lists.\n\n\nSELECTION CRITERIA\nAll randomised controlled trials (RCTs) that compared music therapy with standard care, placebo therapy, or no treatment.\n\n\nDATA COLLECTION AND ANALYSIS\nStudies were reliably selected, quality assessed and data extracted. We excluded data where more than 30% of participants in any group were lost to follow-up. We synthesised non-skewed continuous endpoint data from valid scales using a standardised mean difference (SMD). If statistical heterogeneity was found, we examined treatment 'dosage' and treatment approach as possible sources of heterogeneity.\n\n\nMAIN RESULTS\nWe included eight studies (total 483 participants). These examined effects of music therapy over the short- to medium-term (one to four months), with treatment 'dosage' varying from seven to 78 sessions. Music therapy added to standard care was superior to standard care for global state (medium-term, 1 RCT, n = 72, RR 0.10 95% CI 0.03 to 0.31, NNT 2 95% CI 1.2 to 2.2). Continuous data identified good effects on negative symptoms (4 RCTs, n = 240, SMD average endpoint Scale for the Assessment of Negative Symptoms (SANS) -0.74 95% CI -1.00 to -0.47); general mental state (1 RCT, n = 69, SMD average endpoint Positive and Negative Symptoms Scale (PANSS) -0.36 95% CI -0.85 to 0.12; 2 RCTs, n=100, SMD average endpoint Brief Psychiatric Rating Scale (BPRS) -0.73 95% CI -1.16 to -0.31); depression (2 RCTs, n = 90, SMD average endpoint Self-Rating Depression Scale (SDS) -0.63 95% CI -1.06 to -0.21; 1 RCT, n = 30, SMD average endpoint Hamilton Depression Scale (Ham-D) -0.52 95% CI -1.25 to -0.21 ); and anxiety (1 RCT, n = 60, SMD average endpoint SAS -0.61 95% CI -1.13 to -0.09). Positive effects were also found for social functioning (1 RCT, n = 70, SMD average endpoint Social Disability Schedule for Inpatients (SDSI) score -0.78 95% CI -1.27 to -0.28). Furthermore, some aspects of cognitive functioning and behaviour seem to develop positively through music therapy. Effects, however, were inconsistent across studies and depended on the number of music therapy sessions as well as the quality of the music therapy provided.\n\n\nAUTHORS' CONCLUSIONS\nMusic therapy as an addition to standard care helps people with schizophrenia to improve their global state, mental state (including negative symptoms) and social functioning if a sufficient number of music therapy sessions are provided by qualified music therapists. Further research should especially address the long-term effects of music therapy, dose-response relationships, as well as the relevance of outcomes measures in relation to music therapy.",
"title": ""
},
{
"docid": "e8d33e2ac3e1edeabf0522d23e39f53c",
"text": "To build conversational robots, roboticists are required to have deep knowledge of both robotics and spoken dialogue systems. Although they can use existing cloud services that were built for other services, e.g., voice search, it will be difficult to share robotics-specific speech corpora obtained as server logs, because they will get buried in non-robotics-related logs. Building a cloud platform especially for the robotics community will benefit not only individual robot developers but also the robotics community since we can share the log corpus collected by it. This is challenging because we need to build a wide variety of functionalities ranging from a stable cloud platform to high-quality multilingual speech recognition and synthesis engines. In this paper, we propose “rospeex,” which is a cloud robotics platform for multilingual spoken dialogues with robots. We analyze the logs we have collected by operating rospeex for more than a year. Our key contribution lies in building a cloud robotics platform and allowing the robotics community to use it without payment or authentication.",
"title": ""
},
{
"docid": "3ba586c49e662c29f373eb08ad9eb1cb",
"text": "The first pathologic alterations of the retina are seen in the vessel network. These modifications affect very differently arteries and veins, and the appearance and entity of the modification differ as the retinopathy becomes milder or more severe. In order to develop an automatic procedure for the diagnosis and grading of retinopathy, it is necessary to be able to discriminate arteries from veins. The problem is complicated by the similarity in the descriptive features of these two structures and by the contrast and luminosity variability of the retina. We developed a new algorithm for classifying the vessels, which exploits the peculiarities of retinal images. By applying a divide et imperaapproach that partitioned a concentric zone around the optic disc into quadrants, we were able to perform a more robust local classification analysis. The results obtained by the proposed technique were compared with those provided by a manual classification on a validation set of 443 vessels and reached an overall classification error of 12 %, which reduces to 7 % if only the diagnostically important retinal vessels are considered.",
"title": ""
},
{
"docid": "29f1c91fccfbeaa7ec352bdbe1c300c6",
"text": "Absorption in the stellar Lyman-alpha (Lyalpha) line observed during the transit of the extrasolar planet HD 209458b in front of its host star reveals high-velocity atomic hydrogen at great distances from the planet. This has been interpreted as hydrogen atoms escaping from the planet's exosphere, possibly undergoing hydrodynamic blow-off, and being accelerated by stellar radiation pressure. Energetic neutral atoms around Solar System planets have been observed to form from charge exchange between solar wind protons and neutral hydrogen from the planetary exospheres, however, and this process also should occur around extrasolar planets. Here we show that the measured transit-associated Lyalpha absorption can be explained by the interaction between the exosphere of HD 209458b and the stellar wind, and that radiation pressure alone cannot explain the observations. As the stellar wind protons are the source of the observed energetic neutral atoms, this provides a way of probing stellar wind conditions, and our model suggests a slow and hot stellar wind near HD 209458b at the time of the observations.",
"title": ""
},
{
"docid": "059583d1d8a6f99bae3736d900008caa",
"text": "Ultraviolet disinfection is a frequent option for eliminating viable organisms in ballast water to fulfill international and national regulations. The objective of this work is to evaluate the reduction of microalgae able to reproduce after UV irradiation, based on their growth features. A monoculture of microalgae Tisochrysis lutea was irradiated with different ultraviolet doses (UV-C 254 nm) by a flow-through reactor. A replicate of each treated sample was held in the dark for 5 days simulating a treatment during the ballasting; another replicate was incubated directly under the light, corresponding to the treatment application during de-ballasting. Periodic measurements of cell density were taken in order to obtain the corresponding growth curves. Irradiated samples depicted a regrowth following a logistic curve in concordance with the applied UV dose. By modeling these curves, it is possible to obtain the initial concentration of organisms able to reproduce for each applied UV dose, thus obtaining the dose-survival profiles, needed to determine the disinfection kinetics. These dose-survival profiles enable detection of a synergic effect between the ultraviolet irradiation and a subsequent dark period; in this sense, the UV dose applied during the ballasting operation and subsequent dark storage exerts a strong influence on microalgae survival. The proposed methodology, based on growth modeling, established a framework for comparing the UV disinfection by different devices and technologies on target organisms. This procedure may also assist the understanding of the evolution of treated organisms in more complex assemblages such as those that exist in natural ballast water.",
"title": ""
},
{
"docid": "13d913cba71b9c0308b67cfe7c625dbd",
"text": "This paper presents an approach to image understanding on the aspect of unsupervised scene segmentation. With the goal of image understanding in mind, we consider ‘unsupervised scene segmentation’ a task of dividing a given image into semantically meaningful regions without using annotation or other human-labeled information. We seek to investigate how well an algorithm can achieve at partitioning an image with limited human-involved learning procedures. Specifically, we are interested in developing an unsupervised segmentation algorithm that only relies on the contextual prior learned from a set of images. Our algorithm incorporates a small set of images that are similar to the input image in their scene structures. We use the sparse coding technique to analyze the appearance of this set of images; the effectiveness of sparse coding allows us to derive a priori the context of the scene from the set of images. Gaussian mixture models can then be constructed for different parts of the input image based on the sparse-coding contextual prior, and can be combined into an Markov-random-field-based segmentation process. The experimental results show that our unsupervised segmentation algorithm is able to partition an image into semantic regions, such as buildings, roads, trees, and skies, without using human-annotated information. The semantic regions generated by our algorithm can be useful, as pre-processed inputs for subsequent classification-based labeling algorithms, in achieving automatic scene annotation and scene parsing.",
"title": ""
},
{
"docid": "e3c8f10316152f0bc775f4823b79c7f6",
"text": "The human visual cortex extracts both spatial and temporal visual features to support perception and guide behavior. Deep convolutional neural networks (CNNs) provide a computational framework to model cortical representation and organization for spatial visual processing, but unable to explain how the brain processes temporal information. To overcome this limitation, we extended a CNN by adding recurrent connections to different layers of the CNN to allow spatial representations to be remembered and accumulated over time. The extended model, or the recurrent neural network (RNN), embodied a hierarchical and distributed model of process memory as an integral part of visual processing. Unlike the CNN, the RNN learned spatiotemporal features from videos to enable action recognition. The RNN better predicted cortical responses to natural movie stimuli than the CNN, at all visual areas, especially those along the dorsal stream. As a fully observable model of visual processing, the RNN also revealed a cortical hierarchy of temporal receptive window, dynamics of process memory, and spatiotemporal representations. These results support the hypothesis of process memory, and demonstrate the potential of using the RNN for in-depth computational understanding of dynamic natural vision.",
"title": ""
},
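As a rough sketch of the general idea in the passage above (per-frame convolutional features accumulated over time by a recurrent layer, then used for action recognition), the following PyTorch snippet is illustrative only; the layer sizes, the GRU choice, and the tiny stand-in CNN are assumptions, not the architecture evaluated by the authors.

```python
import torch
import torch.nn as nn

class RecurrentCNN(nn.Module):
    """Per-frame convolutional features accumulated over time by a GRU."""
    def __init__(self, num_classes=10, feat_dim=128, hidden=256):
        super().__init__()
        self.cnn = nn.Sequential(                     # small stand-in for a pretrained CNN
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )
        self.rnn = nn.GRU(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, video):                          # video: (batch, time, 3, H, W)
        b, t = video.shape[:2]
        feats = self.cnn(video.flatten(0, 1))          # (b*t, feat_dim), one frame at a time
        feats = feats.view(b, t, -1)
        out, _ = self.rnn(feats)                       # hidden state carries memory across frames
        return self.head(out[:, -1])                   # classify from the last time step

model = RecurrentCNN()
clip = torch.randn(2, 16, 3, 64, 64)                   # two hypothetical 16-frame clips
print(model(clip).shape)                               # torch.Size([2, 10])
```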
{
"docid": "9ed59146a891cbb9b537a68ecda7f77b",
"text": "Attachment theory, developed by Bowlby to explain human bonding, has profound implications for conducting and adapting psychotherapy. We summarize the prevailing definitions and measures of attachment style. We review the results of three meta-analyses examining the association between attachment anxiety, avoidance, and security and psychotherapy outcome. Fourteen studies were synthesized, which included 19 separate therapy cohorts with a combined sample size of 1,467. Attachment anxiety showed a d of -.46 with posttherapy outcome, while attachment security showed a d of.37 association with outcome. Attachment avoidance was uncorrelated with outcome. The age and gender composition of the samples moderated the relation between attachment security and outcome: samples with a higher proportion of female clients and a higher mean age showed a smaller relation between security and outcome. We discuss the practice implications of these findings and related research on the link between attachment and the therapy relationship.",
"title": ""
},
{
"docid": "ac8aea4d68b3a8e0a294d2b520412cd5",
"text": "Forest autotrophic respiration (R(a)) plays an important role in the carbon balance of forest ecosystems. However, its drivers at the global scale are not well known. Based on a global forest database, we explore the relationships of annual R(a) with mean annual temperature (MAT) and biotic factors including net primary productivity (NPP), total biomass, stand age, mean tree height, and maximum leaf area index (LAI). The results show that the spatial patterns of forest annual R(a) at the global scale are largely controlled by temperature. R(a) is composed of growth (R(g)) and maintenance respiration (R(m)). We used a modified Arrhenius equation to express the relationship between R(a) and MAT. This relationship was calibrated with our data and shows that a 10 degrees C increase in MAT will result in an increase of annual R(m) by a factor of 1.9-2.5 (Q10). We also found that the fraction of total assimilation (gross primary production, GPP) used in R(a) is lowest in the temperate regions characterized by a MAT of approximately 11 degrees C. Although we could not confirm a relationship between the ratio of R(a) to GPP and age across all forest sites, the R(a) to GPP ratio tends to significantly increase in response to increasing age for sites with MAT between 8 degrees and 12 degrees C. At the plant scale, direct up-scaled R(a) estimates were found to increase as a power function with forest total biomass; however, the coefficient of the power function (0.2) was much smaller than that expected from previous studies (0.75 or 1). At the ecosystem scale, R(a) estimates based on both GPP - NPP and TER - R(h) (total ecosystem respiration - heterotrophic respiration) were not significantly correlated with forest total biomass (P > 0.05) with either a linear or a power function, implying that the previous individual-based metabolic theory may be not suitable for the application at ecosystem scale.",
"title": ""
},
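The temperature response described in the passage above (a 10 degrees C rise in MAT multiplying annual maintenance respiration by roughly 1.9-2.5) amounts to a Q10-style exponential relation. The snippet below is a worked illustration with placeholder reference values, since the abstract does not report the fitted coefficients.

```python
def maintenance_respiration(mat_c, r_ref, q10=2.2, t_ref=10.0):
    """Q10-style temperature response of maintenance respiration.
    r_ref is R_m at the reference temperature t_ref (degrees C)."""
    return r_ref * q10 ** ((mat_c - t_ref) / 10.0)

r10 = maintenance_respiration(10.0, r_ref=500.0)    # placeholder flux, e.g. gC m-2 yr-1
r20 = maintenance_respiration(20.0, r_ref=500.0)
print(f"factor for a 10 C warming: {r20 / r10:.2f}")  # equals q10, here 2.20
```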
{
"docid": "7db1b370d0e14e80343cbc7718bbb6c9",
"text": "T free-riding problem occurs if the presales activities needed to sell a product can be conducted separately from the actual sale of the product. Intuitively, free riding should hurt the retailer that provides that service, but the author shows analytically that free riding benefits not only the free-riding retailer, but also the retailer that provides the service when customers are heterogeneous in terms of their opportunity costs for shopping. The service-providing retailer has a postservice advantage, because customers who have resolved their matching uncertainty through sales service incur zero marginal shopping cost if they purchase from the service-providing retailer rather than the free-riding retailer. Moreover, allowing free riding gives the free rider less incentive to compete with the service provider on price, because many customers eventually will switch to it due to their own free riding. In turn, this induced soft strategic response enables the service provider to charge a higher price and enjoy the strictly positive profit that otherwise would have been wiped away by head-to-head price competition. Therefore, allowing free riding can be regarded as a necessary mechanism that prevents an aggressive response from another retailer and reduces the intensity of price competition.",
"title": ""
}
] |
scidocsrr
|
7c8401c55239df878548d668281024e4
|
The Problem of Trusted Third Party in Authentication and Digital Signature Protocols
|
[
{
"docid": "59308c5361d309568a94217c79cf0908",
"text": "Want to get experience? Want to get any ideas to create new things in your life? Read cryptography an introduction to computer security now! By reading this book as soon as possible, you can renew the situation to get the inspirations. Yeah, this way will lead you to always think more and more. In this case, this book will be always right for you. When you can observe more about the book, you will know why you need this.",
"title": ""
}
] |
[
{
"docid": "32d0a26f21a25fe1e783b1edcfbcf673",
"text": "Histologic grading has been used as a guide for clinical management in follicular lymphoma (FL). Proliferation index (PI) of FL generally correlates with tumor grade; however, in cases of discordance, it is not clear whether histologic grade or PI correlates with clinical aggressiveness. To objectively evaluate these cases, we determined PI by Ki-67 immunostaining in 142 cases of FL (48 grade 1, 71 grade 2, and 23 grade 3). A total of 24 cases FL with low histologic grade but high PI (LG-HPI) were identified, a frequency of 18%. On histologic examination, LG-HPI FL often exhibited blastoid features. Patients with LG-HPI FL had inferior disease-specific survival but a higher 5-year disease-free rate than low-grade FL with concordantly low PI (LG-LPI). However, transformation to diffuse large B-cell lymphoma was uncommon in LG-HPI cases (1 of 19; 5%) as compared with LG-LPI cases (27 of 74; 36%). In conclusion, LG-HPI FL appears to be a subgroup of FL with clinical behavior more akin to grade 3 FL. We propose that these LG-HPI FL cases should be classified separately from cases of low histologic grade FL with concordantly low PI.",
"title": ""
},
{
"docid": "f10353fe0c78877a6e78509badba9fcd",
"text": "Chronic Wounds are ulcers presenting a difficult or nearly interrupted cicatrization process that increase the risk of complications to the health of patients, like amputation and infections. This research proposes a general noninvasive methodology for the segmentation and analysis of chronic wounds images by computing the wound areas affected by necrosis. Invasive techniques are usually used for this calculation, such as manual planimetry with plastic films. We investigated algorithms to perform the segmentation of wounds as well as the use of several convolutional networks for classifying tissue as Necrotic, Granulation or Slough. We tested four architectures: U-Net, Segnet, FCN8 and FCN32, and proposed a color space reduction methodology that increased the reported accuracies, specificities, sensitivities and Dice coefficients for all 4 networks, achieving very good levels.",
"title": ""
},
{
"docid": "b32d6bc2d14683c4bf3557dad560edca",
"text": "In this paper, we describe the fabrication and testing of a stretchable fabric sleeve with embedded elastic strain sensors for state reconstruction of a soft robotic joint. The strain sensors are capacitive and composed of graphite-based conductive composite electrodes and a silicone elastomer dielectric. The sensors are screenprinted directly into the fabric sleeve, which contrasts the approach of pre-fabricating sensors and subsequently attaching them to a host. We demonstrate the capabilities of the sensor-embedded fabric sleeve by determining the joint angle and end effector position of a soft pneumatic joint with similar accuracy to a traditional IMU. Furthermore, we show that the sensory sleeve is capable of capturing more complex material states, such as fabric buckling and non-constant curvatures along linkages and joints.",
"title": ""
},
{
"docid": "999070b182a328b1927be4575f04e434",
"text": "Accurate malaria diagnosis is critical to prevent malaria fatalities, curb overuse of antimalarial drugs, and promote appropriate management of other causes of fever. While several diagnostic tests exist, the need for a rapid and highly accurate malaria assay remains. Microscopy and rapid diagnostic tests are the main diagnostic modalities available, yet they can demonstrate poor performance and accuracy. Automated microscopy platforms have the potential to significantly improve and standardize malaria diagnosis. Based on image recognition and machine learning algorithms, these systems maintain the benefits of light microscopy and provide improvements such as quicker scanning time, greater scanning area, and increased consistency brought by automation. While these applications have been in development for over a decade, recently several commercial platforms have emerged. In this review, we discuss the most advanced computer vision malaria diagnostic technologies and investigate several of their features which are central to field use. Additionally, we discuss the technological and policy barriers to implementing these technologies in low-resource settings world-wide.",
"title": ""
},
{
"docid": "df6d4e6d74d96b7ab1951cc869caad59",
"text": "A broadband commonly fed antenna with dual polarization is proposed in this letter. The main radiator of the antenna is designed as a loop formed by four staircase-like branches. In this structure, the 0° polarization and 90° polarization share the same radiator and reflector. Measurement shows that the proposed antenna obtains a broad impedance bandwidth of 70% (1.5–3.1 GHz) with <inline-formula><tex-math notation=\"LaTeX\">$\\vert {{S}}_{11}\\vert < -{\\text{10 dB}}$</tex-math></inline-formula> and a high port-to-port isolation of 35 dB. The antenna gain within the operating frequency band is between 7.2 and 9.5 dBi, which indicates a stable broadband radiation performance. Moreover, a high cross-polarization discrimination of 25 dB is achieved across the whole operating frequency band.",
"title": ""
},
{
"docid": "04d5824991ada6194f3028a900d7f31b",
"text": "In this work, we present a solution to real-time monocular dense mapping. A tightly-coupled visual-inertial localization module is designed to provide metric and high-accuracy odometry. A motion stereo algorithm is proposed to take the video input from one camera to produce local depth measurements with semi-global regularization. The local measurements are then integrated into a global map for noise filtering and map refinement. The global map obtained is able to support navigation and obstacle avoidance for aerial robots through our indoor and outdoor experimental verification. Our system runs at 10Hz on an Nvidia Jetson TX1 by properly distributing computation to CPU and GPU. Through onboard experiments, we demonstrate its ability to close the perception-action loop for autonomous aerial robots. We release our implementation as open-source software1.",
"title": ""
},
{
"docid": "e294307ea4108d8cf467585f27d3a48b",
"text": "Numerical linear algebra algorithms use the inherent elegance of matrix formulations and are usually implemented using C/C++ floating point representation. The system implementation is faced with practical constraints because these algorithms usually need to run in real time on fixed point digital signal processors (DSPs) to reduce total hardware costs. Converting the simulation model to fixed point arithmetic and then porting it to a target DSP device is a difficult and time-consuming process. In this paper, we analyze the conversion process. We transformed selected linear algebra algorithms from floating point to fixed point arithmetic, and compared real-time requirements and performance between the fixed point DSP and floating point DSP algorithm implementations. We also introduce an advanced code optimization and an implementation by DSP-specific, fixed point C code generation. By using the techniques described in the paper, speed can be increased by a factor of up to 10 compared to floating point emulation on fixed point hardware.",
"title": ""
},
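A minimal sketch of the kind of float-to-fixed conversion discussed in the passage above, emulating Q15 arithmetic with plain integers; the Q-format, saturation rule, and test vectors are assumptions chosen for illustration, not the paper's code or its target DSP.

```python
Q = 15                      # Q15: 1 sign bit, 15 fractional bits
SCALE = 1 << Q

def to_q15(x):
    """Quantize a float in [-1, 1) to a 16-bit Q15 integer, saturating at the limits."""
    return max(-SCALE, min(SCALE - 1, int(round(x * SCALE))))

def q15_mul(a, b):
    """Fixed-point multiply: the double-width product is shifted back to Q15."""
    return (a * b) >> Q

def dot_q15(xs, ws):
    """Fixed-point dot product with a wide accumulator."""
    acc = 0
    for x, w in zip(xs, ws):
        acc += q15_mul(x, w)
    return acc

x = [0.50, -0.25, 0.75]
w = [0.40, 0.60, -0.20]
fixed = dot_q15([to_q15(v) for v in x], [to_q15(v) for v in w]) / SCALE
float_ref = sum(a * b for a, b in zip(x, w))
print(f"float={float_ref:.6f}  q15={fixed:.6f}  error={abs(float_ref - fixed):.2e}")
```

Comparing the fixed-point result against the floating-point reference in this way is one simple check used when porting a simulation model to fixed-point arithmetic.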
{
"docid": "ade9860157680b2ca6820042f0cda302",
"text": "This chapter has two main objectives: to review influential ideas and findings in the literature and to outline the organization and content of the volume. The first part of the chapter lays a conceptual and empirical foundation for other chapters in the volume. Specifically, the chapter defines and distinguishes the key concepts of prejudice, stereotypes, and discrimination, highlighting how bias can occur at individual, institutional, and cultural levels. We also review different theoretical perspectives on these phenomena, including individual differences, social cognition, functional relations between groups, and identity concerns. We offer a broad overview of the field, charting how this area has developed over previous decades and identify emerging trends and future directions. The second part of the chapter focuses specifically on the coverage of the area in the present volume. It explains the organization of the book and presents a brief synopsis of the chapters in the volume. Throughout psychology’s history, researchers have evinced strong interest in understanding prejudice, stereotyping, and discrimination (Brewer & Brown, 1998; Dovidio, 2001; Duckitt, 1992; Fiske, 1998), as well as the phenomenon of intergroup bias more generally (Hewstone, Rubin, & Willis, 2002). Intergroup bias generally refers to the systematic tendency to evaluate one’s own membership group (the ingroup) or its members more favorably than a non-membership group (the outgroup) or its members. These topics have a long history in the disciplines of anthropology and sociology (e.g., Sumner, 1906). However, social psychologists, building on the solid foundations of Gordon Allport’s (1954) masterly volume, The Nature of Prejudice, have developed a systematic and more nuanced analysis of bias and its associated phenomena. Interest in prejudice, stereotyping, and discrimination is currently shared by allied disciplines such as sociology and political science, and emerging disciplines such as neuroscience. The practical implications of this 4 OVERVIEW OF THE TOPIC large body of research are widely recognized in the law (Baldus, Woodworth, & Pulaski, 1990; Vidmar, 2003), medicine (Institute of Medicine, 2003), business (e.g., Brief, Dietz, Cohen, et al., 2000), the media, and education (e.g., Ben-Ari & Rich, 1997; Hagendoorn &",
"title": ""
},
{
"docid": "40ebf37907d738dd64b5a87b93b4a432",
"text": "Deep learning has led to many breakthroughs in machine perception and data mining. Although there are many substantial advances of deep learning in the applications of image recognition and natural language processing, very few work has been done in video analysis and semantic event detection. Very deep inception and residual networks have yielded promising results in the 2014 and 2015 ILSVRC challenges, respectively. Now the question is whether these architectures are applicable to and computationally reasonable in a variety of multimedia datasets. To answer this question, an efficient and lightweight deep convolutional network is proposed in this paper. This network is carefully designed to decrease the depth and width of the state-of-the-art networks while maintaining the high-performance. The proposed deep network includes the traditional convolutional architecture in conjunction with residual connections and very light inception modules. Experimental results demonstrate that the proposed network not only accelerates the training procedure, but also improves the performance in different multimedia classification tasks.",
"title": ""
},
{
"docid": "bc5c008b5e443b83b2a66775c849fffb",
"text": "Continuous glucose monitoring (CGM) sensors are portable devices that allow measuring and visualizing the glucose concentration in real time almost continuously for several days and are provided with hypo/hyperglycemic alerts and glucose trend information. CGM sensors have revolutionized Type 1 diabetes (T1D) management, improving glucose control when used adjunctively to self-monitoring blood glucose systems. Furthermore, CGM devices have stimulated the development of applications that were impossible to create without a continuous-time glucose signal, e.g., real-time predictive alerts of hypo/hyperglycemic episodes based on the prediction of future glucose concentration, automatic basal insulin attenuation methods for hypoglycemia prevention, and the artificial pancreas. However, CGM sensors' lack of accuracy and reliability limited their usability in the clinical practice, calling upon the academic community for the development of suitable signal processing methods to improve CGM performance. The aim of this paper is to review the past and present algorithmic challenges of CGM sensors, to show how they have been tackled by our research group, and to identify the possible future ones.",
"title": ""
},
{
"docid": "f8b201105e3b92ed4ef2a884cb626c0d",
"text": "Several years of academic and industrial research efforts have converged to a common understanding on fundamental security building blocks for the upcoming vehicular communication (VC) systems. There is a growing consensus toward deploying a special-purpose identity and credential management infrastructure, i.e., a vehicular public-key infrastructure (VPKI), enabling pseudonymous authentication, with standardization efforts toward that direction. In spite of the progress made by standardization bodies (IEEE 1609.2 and ETSI) and harmonization efforts [Car2Car Communication Consortium (C2C-CC)], significant questions remain unanswered toward deploying a VPKI. Deep understanding of the VPKI, a central building block of secure and privacy-preserving VC systems, is still lacking. This paper contributes to the closing of this gap. We present SECMACE, a VPKI system, which is compatible with the IEEE 1609.2 and ETSI standards specifications. We provide a detailed description of our state-of-the-art VPKI that improves upon existing proposals in terms of security and privacy protection, and efficiency. SECMACE facilitates multi-domain operations in the VC systems and enhances user privacy, notably preventing linking pseudonyms based on timing information and offering increased protection even against honest-but-curious VPKI entities. We propose multiple policies for the vehicle–VPKI interactions and two large-scale mobility trace data sets, based on which we evaluate the full-blown implementation of SECMACE. With very little attention on the VPKI performance thus far, our results reveal that modest computing resources can support a large area of vehicles with very few delays and the most promising policy in terms of privacy protection can be supported with moderate overhead.",
"title": ""
},
{
"docid": "712be4d6aabf8e76b050c30e6241ad0f",
"text": "The United States, like many nations, continues to experience rapid growth in its racial minority population and is projected to attain so-called majority-minority status by 2050. Along with these demographic changes, staggering racial disparities persist in health, wealth, and overall well-being. In this article, we review the social psychological literature on race and race relations, beginning with the seemingly simple question: What is race? Drawing on research from different fields, we forward a model of race as dynamic, malleable, and socially constructed, shifting across time, place, perceiver, and target. We then use classic theoretical perspectives on intergroup relations to frame and then consider new questions regarding contemporary racial dynamics. We next consider research on racial diversity, focusing on its effects during interpersonal encounters and for groups. We close by highlighting emerging topics that should top the research agenda for the social psychology of race and race relations in the twenty-first century.",
"title": ""
},
{
"docid": "1d56b3aa89484e3b25557880ec239930",
"text": "We present an FPGA accelerator for the Non-uniform Fast Fourier Transform, which is a technique to reconstruct images from arbitrarily sampled data. We accelerate the compute-intensive interpolation step of the NuFFT Gridding algorithm by implementing it on an FPGA. In order to ensure efficient memory performance, we present a novel FPGA implementation for Geometric Tiling based sorting of the arbitrary samples. The convolution is then performed by a novel Data Translation architecture which is composed of a multi-port local memory, dynamic coordinate-generator and a plug-and-play kernel pipeline. Our implementation is in single-precision floating point and has been ported onto the BEE3 platform. Experimental results show that our FPGA implementation can generate fairly high performance without sacrificing flexibility for various data-sizes and kernel functions. We demonstrate up to 8X speedup and up to 27 times higher performance-per-watt over a comparable CPU implementation and up to 20% higher performance-per-watt when compared to a relevant GPU implementation.",
"title": ""
},
{
"docid": "6504562f140b49d412446817e76383e8",
"text": "As more businesses realized that data, in all forms and sizes, is critical to making the best possible decisions, we see the continued growth of systems that support massive volume of non-relational or unstructured forms of data. Nothing shows the picture more starkly than the Gartner Magic quadrant for operational database management systems, which assumes that, by 2017, all leading operational DBMSs will offer multiple data models, relational and NoSQL, in a single DBMS platform. Having a single data platform for managing both well-structured data and NoSQL data is beneficial to users; this approach reduces significantly integration, migration, development, maintenance, and operational issues. Therefore, a challenging research work is how to develop efficient consolidated single data management platform covering both relational data and NoSQL to reduce integration issues, simplify operations, and eliminate migration issues. In this tutorial, we review the previous work on multi-model data management and provide the insights on the research challenges and directions for future work. The slides and more materials of this tutorial can be found at http://udbms.cs.helsinki.fi/?tutorials/edbt2017.",
"title": ""
},
{
"docid": "660465cbd4bd95108a2381ee5a97cede",
"text": "In this paper we discuss the design and implementation of an automated usability evaluation method for iOS applications. In contrast to common usability testing methods, it is not explicitly necessary to involve an expert or subjects. These circumstances reduce costs, time and personnel expenditures. Professionals are replaced by the automation tool while test participants are exchanged with consumers of the launched application. Interactions of users are captured via a fully automated capturing framework which creates a record of user interactions for each session and sends them to a central server. A usability problem is defined as a sequence of interactions and pattern recognition specified by interaction design patterns is applied to find these problems. Nevertheless, it falls back to the user input for accurate results. Similar to the problem, the solution of the problem is based on the HCI design pattern. An evaluation shows the functionality of our approach compared to a traditional usability evaluation method.",
"title": ""
},
{
"docid": "7d1faee4929d60d952cc8c2c12fa16d3",
"text": "We recently showed that improved perceptual performance on a visual motion direction–discrimination task corresponds to changes in how an unmodified sensory representation in the brain is interpreted to form a decision that guides behavior. Here we found that these changes can be accounted for using a reinforcement-learning rule to shape functional connectivity between the sensory and decision neurons. We modeled performance on the basis of the readout of simulated responses of direction-selective sensory neurons in the middle temporal area (MT) of monkey cortex. A reward prediction error guided changes in connections between these sensory neurons and the decision process, first establishing the association between motion direction and response direction, and then gradually improving perceptual sensitivity by selectively strengthening the connections from the most sensitive neurons in the sensory population. The results suggest a common, feedback-driven mechanism for some forms of associative and perceptual learning.",
"title": ""
},
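A toy numerical sketch of a reinforcement-learning rule of the kind described in the passage above: a reward prediction error scales updates to the weights from direction-tuned sensory units onto a decision unit. The tuning curves, noise levels, learning rate, and exact update form are all invented for illustration, not taken from the paper's model of MT readout.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sensory = 50
pref = np.linspace(-90, 90, n_sensory)            # preferred directions of MT-like units
w = rng.normal(0.0, 0.01, n_sensory)              # sensory -> decision readout weights
lr = 0.05

def sensory_response(direction, coherence):
    # Gaussian tuning around each unit's preferred direction, plus noise.
    tuning = np.exp(-0.5 * ((direction - pref) / 30.0) ** 2)
    return coherence * tuning + rng.normal(0.0, 0.1, n_sensory)

for trial in range(2000):
    direction = rng.choice([-45.0, 45.0])          # two-alternative direction task
    r = sensory_response(direction, coherence=0.3)
    p_right = 1.0 / (1.0 + np.exp(-(w @ r)))       # decision variable -> choice probability
    choice = rng.random() < p_right                # True = "rightward" report
    reward = float(choice == (direction > 0))
    rpe = reward - p_right if choice else reward - (1.0 - p_right)
    w += lr * rpe * r * (1 if choice else -1)      # strengthen weights that drove a rewarded choice

acc = np.mean([((w @ sensory_response(d, 0.3)) > 0) == (d > 0)
               for d in rng.choice([-45.0, 45.0], 500)])
print(f"post-learning accuracy: {acc:.2f}")
```

Over trials, the rule first establishes the direction-to-response association and then concentrates weight on the most informative units, mirroring the qualitative story in the passage.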
{
"docid": "3eec1e9abcb677a4bc8f054fa8827f4f",
"text": "We present a neural semantic parser that translates natural language questions into executable SQL queries with two key ideas. First, we develop an encoder-decoder model, where the decoder uses a simple type system of SQL to constraint the output prediction, and propose a value-based loss when copying from input tokens. Second, we explore using the execution semantics of SQL to repair decoded programs that result in runtime error or return empty result. We propose two modelagnostics repair approaches, an ensemble model and a local program repair, and demonstrate their effectiveness over the original model. We evaluate our model on the WikiSQL dataset and show that our model achieves close to state-of-the-art results with lesser model complexity.",
"title": ""
},
{
"docid": "0c025ec05a1f98d71c9db5bfded0a607",
"text": "Many organizations, such as banks, airlines, telecommunications companies, and police departments, routinely use queueing models to help determine capacity levels needed to respond to experienced demands in a timely fashion. Though queueing analysis has been used in hospitals and other healthcare settings, its use in this sector is not widespread. Yet, given the pervasiveness of delays in healthcare and the fact that many healthcare facilities are trying to meet increasing demands with tightly constrained resources, queueing models can be very useful in developing more effective policies for bed allocation and staffing, and in identifying other opportunities for improving service. Queueing analysis is also a key tool in estimating capacity requirements for possible future scenarios, including demand surges due to new diseases or acts of terrorism. This chapter describes basic queueing models as well as some simple modifications and extensions that are particularly useful in the healthcare setting, and give examples of their use. The critical issue of data requirements is also be discussed as well as model choice, modelbuilding and the interpretation and use of results.",
"title": ""
},
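One standard calculation used in such capacity analyses is the M/M/s (Erlang C) delay probability; a small sketch with made-up arrival and service rates follows. This is a generic textbook formula, not a model taken from the chapter.

```python
from math import factorial

def erlang_c(arrival_rate, service_rate, servers):
    """M/M/s probability that an arriving customer has to wait (Erlang C)."""
    a = arrival_rate / service_rate                  # offered load in Erlangs
    rho = a / servers                                # server utilization, must be < 1
    if rho >= 1:
        raise ValueError("system is unstable: utilization >= 1")
    summation = sum(a ** k / factorial(k) for k in range(servers))
    top = a ** servers / (factorial(servers) * (1 - rho))
    return top / (summation + top)

def mean_wait(arrival_rate, service_rate, servers):
    """Expected waiting time in queue, in the same time unit as the rates."""
    pw = erlang_c(arrival_rate, service_rate, servers)
    return pw / (servers * service_rate - arrival_rate)

lam, mu = 4.2, 0.25                                  # hypothetical: 4.2 admissions/day, 4-day mean stay
for beds in (18, 20, 22, 24):
    print(beds, "beds:",
          "P(wait) =", round(erlang_c(lam, mu, beds), 3),
          "mean wait =", round(24 * mean_wait(lam, mu, beds), 1), "h")
```

Sweeping the number of beds in this way is the kind of what-if analysis the chapter describes for bed allocation and surge planning.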
{
"docid": "f5e934d65fa436cdb8e5cfa81ea29028",
"text": "Recently, there has been substantial research on augmenting aggregate forecasts with individual consumer data from internet platforms, such as search traffic or social network shares. Although the majority of studies report increased accuracy, many exhibit design weaknesses including lack of adequate benchmarks or rigorous evaluation. Furthermore, their usefulness over the product life-cycle has not been investigated, which may change, as initially, consumers may search for pre-purchase information, but later for after-sales support. In this study, we first review the relevant literature and then attempt to support the key findings using two forecasting case studies. Our findings are in stark contrast to the literature, and we find that established univariate forecasting benchmarks, such as exponential smoothing, consistently perform better than when online information is included. Our research underlines the need for thorough forecast evaluation and argues that online platform data may be of limited use for supporting operational decisions.",
"title": ""
},
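As an example of the kind of univariate benchmark referred to in the passage above, a minimal simple-exponential-smoothing forecaster with a grid-searched smoothing constant can be written in a few lines; the series here is synthetic and the holdout scheme is only illustrative.

```python
import numpy as np

def ses_forecast(y, alpha):
    """Simple exponential smoothing: one-step-ahead forecasts for each point in y."""
    f = np.empty(len(y))
    f[0] = y[0]
    for t in range(1, len(y)):
        f[t] = alpha * y[t - 1] + (1 - alpha) * f[t - 1]
    return f

rng = np.random.default_rng(7)
y = 100 + np.cumsum(rng.normal(0, 2, 120))          # synthetic weekly sales-like series
train, test = y[:100], y[100:]

# Pick alpha by in-sample one-step-ahead squared error on the training part.
best = min(np.linspace(0.05, 0.95, 19),
           key=lambda a: np.mean((train[1:] - ses_forecast(train, a)[1:]) ** 2))
level = ses_forecast(y, best)[len(train)]           # smoothed level entering the test period
mae = np.mean(np.abs(test - level))                 # flat forecast for all horizons
print(f"alpha={best:.2f}  test MAE={mae:.2f}")
```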
{
"docid": "188e971e34192af93c36127b69d89064",
"text": "1 1 This paper has been revised and extended from the authors' previous work [23][24][25]. ABSTRACT Ontology mapping seeks to find semantic correspondences between similar elements of different ontologies. It is a key challenge to achieve semantic interoperability in building the Semantic Web. This paper proposes a new generic and adaptive ontology mapping approach, called the PRIOR+, based on propagation theory, information retrieval techniques and artificial intelligence. The approach consists of three major modules, i.e., the IR-based similarity generator, the adaptive similarity filter and weighted similarity aggregator, and the neural network based constraint satisfaction solver. The approach first measures both linguistic and structural similarity of ontologies in a vector space model, and then aggregates them using an adaptive method based on their harmonies, which is defined as an estimator of performance of similarity. Finally to improve mapping accuracy the interactive activation and competition neural network is activated, if necessary, to search for a solution that can satisfy ontology constraints. The experimental results show that harmony is a good estimator of f-measure; the harmony based adaptive aggregation outperforms other aggregation methods; neural network approach significantly boosts the performance in most cases. Our approach is competitive with top ranked systems on benchmark tests at OAEI campaign 2007, and performs the best on real cases in OAEI benchmark tests.",
"title": ""
}
] |
scidocsrr
|
63353dfe47623fca110fe9eb341f4d5c
|
Extracting general-purpose features from LIDAR data
|
[
{
"docid": "59f3c511765c52702b9047a688256532",
"text": "Mobile robots are dependent upon a model of the environment for many of their basic functions. Locally accurate maps are critical to collision avoidance, while large-scale maps (accurate both metrically and topologically) are necessary for efficient route planning. Solutions to these problems have immediate and important applications to autonomous vehicles, precision surveying, and domestic robots. Building accurate maps can be cast as an optimization problem: find the map that is most probable given the set of observations of the environment. However, the problem rapidly becomes difficult when dealing with large maps or large numbers of observations. Sensor noise and non-linearities make the problem even more difficult— especially when using inexpensive (and therefore preferable) sensors. This thesis describes an optimization algorithm that can rapidly estimate the maximum likelihood map given a set of observations. The algorithm, which iteratively reduces map error by considering a single observation at a time, scales well to large environments with many observations. The approach is particularly robust to noise and non-linearities, quickly escaping local minima that trap current methods. Both batch and online versions of the algorithm are described. In order to build a map, however, a robot must first be able to recognize places that it has previously seen. Limitations in sensor processing algorithms, coupled with environmental ambiguity, make this difficult. Incorrect place recognitions can rapidly lead to divergence of the map. This thesis describes a place recognition algorithm that can robustly handle ambiguous data. We evaluate these algorithms on a number of challenging datasets and provide quantitative comparisons to other state-of-the-art methods, illustrating the advantages of our methods.",
"title": ""
}
] |
[
{
"docid": "803b3d29c5514865cd8e17971f2dd8d6",
"text": "This paper comprehensively analyzes the relationship between space-vector modulation and three-phase carrier-based pulsewidth modualtion (PWM). The relationships involved, such as the relationship between modulation signals (including zero-sequence component and fundamental components) and space vectors, the relationship between the modulation signals and the space-vector sectors, the relationship between the switching pattern of space-vector modulation and the type of carrier, and the relationship between the distribution of zero vectors and different zero-sequence signal are systematically established. All the relationships provide a bidirectional bridge for the transformation between carrier-based PWM modulators and space-vector modulation modulators. It is shown that all the drawn conclusions are independent of the load type. Furthermore, the implementations of both space-vector modulation and carrier-based PWM in a closed-loop feedback converter are discussed.",
"title": ""
},
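One of the textbook relationships in this area, which may help make the discussion concrete, is that injecting a min-max ("centered") zero-sequence signal into sinusoidal references reproduces symmetric space-vector modulation with evenly split zero vectors. The sketch below shows only that standard construction, with references normalized to half the DC-link voltage; it is not the paper's derivation.

```python
import numpy as np

def carrier_references(m, theta):
    """Three-phase modulation signals with min-max zero-sequence injection."""
    phases = np.array([0.0, -2 * np.pi / 3, 2 * np.pi / 3])
    v = m * np.cos(theta[None, :] + phases[:, None])   # shape (3, n): phases a, b, c
    v0 = -0.5 * (v.max(axis=0) + v.min(axis=0))        # injected zero-sequence component
    return v + v0, v0                                  # equivalent to centered SVPWM references

theta = np.linspace(0, 2 * np.pi, 7, endpoint=False)
refs, v0 = carrier_references(m=1.0, theta=theta)
print(np.round(refs, 3))        # injected references stay within [-0.866, 0.866] for m = 1.0
print(np.round(v0, 3))          # triangular-shaped common-mode signal
```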
{
"docid": "6f34ef57fcf0a2429e7dc2a3e56a99fd",
"text": "Service-Oriented Architecture (SOA) provides a flexible framework for service composition. Using standard-based protocols (such as SOAP and WSDL), composite services can be constructed by integrating atomic services developed independently. Algorithms are needed to select service components with various QoS levels according to some application-dependent performance requirements. We design a broker-based architecture to facilitate the selection of QoS-based services. The objective of service selection is to maximize an application-specific utility function under the end-to-end QoS constraints. The problem is modeled in two ways: the combinatorial model and the graph model. The combinatorial model defines the problem as a multidimension multichoice 0-1 knapsack problem (MMKP). The graph model defines the problem as a multiconstraint optimal path (MCOP) problem. Efficient heuristic algorithms for service processes of different composition structures are presented in this article and their performances are studied by simulations. We also compare the pros and cons between the two models.",
"title": ""
},
{
"docid": "f29d0ea5ff5c96dadc440f4d4aa229c6",
"text": "Wikipedia infoboxes are a valuable source of structured knowledge for global knowledge sharing. However, infobox information is very incomplete and imbalanced among the Wikipedias in different languages. It is a promising but challenging problem to utilize the rich structured knowledge from a source language Wikipedia to help complete the missing infoboxes for a target language. In this paper, we formulate the problem of cross-lingual knowledge extraction from multilingual Wikipedia sources, and present a novel framework, called WikiCiKE, to solve this problem. An instancebased transfer learning method is utilized to overcome the problems of topic drift and translation errors. Our experimental results demonstrate that WikiCiKE outperforms the monolingual knowledge extraction method and the translation-based method.",
"title": ""
},
{
"docid": "8a85d05f4ed31d3dba339bb108b39ba4",
"text": "Access to genetic and genomic resources can greatly facilitate biological understanding of plant species leading to improved crop varieties. While model plant species such as Arabidopsis have had nearly two decades of genetic and genomic resource development, many major crop species have seen limited development of these resources due to the large, complex nature of their genomes. Cultivated potato is among the ranks of crop species that, despite substantial worldwide acreage, have seen limited genetic and genomic tool development. As technologies advance, this paradigm is shifting and a number of tools are being developed for important crop species such as potato. This review article highlights numerous tools that have been developed for the potato community with a specific focus on the reference de novo genome assembly and annotation, genetic markers, transcriptomics resources, and newly emerging resources that extend beyond a single reference individual. El acceso a los recursos genéticos y genómicos puede facilitar en gran medida el entendimiento biológico de las especies de plantas, lo que conduce a variedades mejoradas de cultivos. Mientras que el modelo de las especies de plantas como Arabidopsis ha tenido cerca de dos décadas de desarrollo de recursos genéticos y genómicos, muchas especies de cultivos principales han visto desarrollo limitado de estos recursos debido a la naturaleza grande, compleja, de sus genomios. La papa cultivada está ubicada entre las especies de plantas que a pesar de su superficie substancial mundial, ha visto limitado el desarrollo de las herramientas genéticas y genómicas. A medida que avanzan las tecnologías, este paradigma está girando y se han estado desarrollando un número de herramientas para especies importantes de cultivo tales como la papa. Este artículo de revisión resalta las numerosas herramientas que se han desarrollado para la comunidad de la papa con un enfoque específico en la referencia de ensamblaje y registro de genomio de novo, marcadores genéticos, recursos transcriptómicos, y nuevas fuentes emergentes que se extienden más allá de la referencia de un único individuo.",
"title": ""
},
{
"docid": "5a18a7f42ab40cd238c92e19d23e0550",
"text": "As memory scales down to smaller technology nodes, new failure mechanisms emerge that threaten its correct operation. If such failure mechanisms are not anticipated and corrected, they can not only degrade system reliability and availability but also, perhaps even more importantly, open up security vulnerabilities: a malicious attacker can exploit the exposed failure mechanism to take over the entire system. As such, new failure mechanisms in memory can become practical and significant threats to system security. In this work, we discuss the RowHammer problem in DRAM, which is a prime (and perhaps the first) example of how a circuit-level failure mechanism in DRAM can cause a practical and widespread system security vulnerability. RowHammer, as it is popularly referred to, is the phenomenon that repeatedly accessing a row in a modern DRAM chip causes bit flips in physically-adjacent rows at consistently predictable bit locations. It is caused by a hardware failure mechanism called DRAM disturbance errors, which is a manifestation of circuit-level cell-to-cell interference in a scaled memory technology. Researchers from Google Project Zero recently demonstrated that this hardware failure mechanism can be effectively exploited by user-level programs to gain kernel privileges on real systems. Several other recent works demonstrated other practical attacks exploiting RowHammer. These include remote takeover of a server vulnerable to RowHammer, takeover of a victim virtual machine by another virtual machine running on the same system, and takeover of a mobile device by a malicious user-level application that requires no permissions. We analyze the root causes of the RowHammer problem and examine various solutions. We also discuss what other vulnerabilities may be lurking in DRAM and other types of memories, e.g., NAND flash memory or Phase Change Memory, that can potentially threaten the foundations of secure systems, as the memory technologies scale to higher densities. We conclude by describing and advocating a principled approach to memory reliability and security research that can enable us to better anticipate and prevent such vulnerabilities.",
"title": ""
},
{
"docid": "9847518e92a8f1b6cef2365452b01008",
"text": "This paper presents a Planar Inverted F Antenna (PIFA) tuned with a fixed capacitor to the low frequency bands supported by the Long Term Evolution (LTE) technology. The tuning range is investigated and optimized with respect to the bandwidth and the efficiency of the resulting antenna. Simulations and mock-ups are presented.",
"title": ""
},
{
"docid": "910a3be33d479be4ed6e7e44a56bb8fb",
"text": "Support vector machine (SVM) is a supervised machine learning approach that was recognized as a statistical learning apotheosis for the small-sample database. SVM has shown its excellent learning and generalization ability and has been extensively employed in many areas. This paper presents a performance analysis of six types of SVMs for the diagnosis of the classical Wisconsin breast cancer problem from a statistical point of view. The classification performance of standard SVM (St-SVM) is analyzed and compared with those of the other modified classifiers such as proximal support vector machine (PSVM) classifiers, Lagrangian support vector machines (LSVM), finite Newton method for Lagrangian support vector machine (NSVM), Linear programming support vector machines (LPSVM), and smooth support vector machine (SSVM). The experimental results reveal that these SVM classifiers achieve very fast, simple, and efficient breast cancer diagnosis. The training results indicated that LSVM has the lowest accuracy of 95.6107 %, while St-SVM performed better than other methods for all performance indices (accuracy = 97.71 %) and is closely followed by LPSVM (accuracy = 97.3282). However, in the validation phase, the overall accuracies of LPSVM achieved 97.1429 %, which was superior to LSVM (95.4286 %), SSVM (96.5714 %), PSVM (96 %), NSVM (96.5714 %), and St-SVM (94.86 %). Value of ROC and MCC for LPSVM achieved 0.9938 and 0.9369, respectively, which outperformed other classifiers. The results strongly suggest that LPSVM can aid in the diagnosis of breast cancer.",
"title": ""
},
{
"docid": "f071a3d699ba4b3452043b6efb14b508",
"text": "BACKGROUND\nThe medical subdomain of a clinical note, such as cardiology or neurology, is useful content-derived metadata for developing machine learning downstream applications. To classify the medical subdomain of a note accurately, we have constructed a machine learning-based natural language processing (NLP) pipeline and developed medical subdomain classifiers based on the content of the note.\n\n\nMETHODS\nWe constructed the pipeline using the clinical NLP system, clinical Text Analysis and Knowledge Extraction System (cTAKES), the Unified Medical Language System (UMLS) Metathesaurus, Semantic Network, and learning algorithms to extract features from two datasets - clinical notes from Integrating Data for Analysis, Anonymization, and Sharing (iDASH) data repository (n = 431) and Massachusetts General Hospital (MGH) (n = 91,237), and built medical subdomain classifiers with different combinations of data representation methods and supervised learning algorithms. We evaluated the performance of classifiers and their portability across the two datasets.\n\n\nRESULTS\nThe convolutional recurrent neural network with neural word embeddings trained-medical subdomain classifier yielded the best performance measurement on iDASH and MGH datasets with area under receiver operating characteristic curve (AUC) of 0.975 and 0.991, and F1 scores of 0.845 and 0.870, respectively. Considering better clinical interpretability, linear support vector machine-trained medical subdomain classifier using hybrid bag-of-words and clinically relevant UMLS concepts as the feature representation, with term frequency-inverse document frequency (tf-idf)-weighting, outperformed other shallow learning classifiers on iDASH and MGH datasets with AUC of 0.957 and 0.964, and F1 scores of 0.932 and 0.934 respectively. We trained classifiers on one dataset, applied to the other dataset and yielded the threshold of F1 score of 0.7 in classifiers for half of the medical subdomains we studied.\n\n\nCONCLUSION\nOur study shows that a supervised learning-based NLP approach is useful to develop medical subdomain classifiers. The deep learning algorithm with distributed word representation yields better performance yet shallow learning algorithms with the word and concept representation achieves comparable performance with better clinical interpretability. Portable classifiers may also be used across datasets from different institutions.",
"title": ""
},
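A minimal scikit-learn sketch of the shallow variant described above (tf-idf weighted bag-of-words features feeding a linear SVM) is given below; the toy notes, labels, and hyperparameters are invented, and the UMLS concept features used in the study are omitted.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Hypothetical clinical note snippets and their medical subdomains.
notes = [
    "Patient reports chest pain on exertion, ECG shows ST depression.",
    "Echocardiogram demonstrates reduced ejection fraction.",
    "New onset right-sided weakness with facial droop, head CT ordered.",
    "Follow-up for seizure disorder, levetiracetam dose adjusted.",
]
subdomains = ["cardiology", "cardiology", "neurology", "neurology"]

clf = make_pipeline(
    TfidfVectorizer(lowercase=True, ngram_range=(1, 2), sublinear_tf=True),
    LinearSVC(C=1.0),
)
clf.fit(notes, subdomains)
print(clf.predict(["Admitted with atrial fibrillation and rapid ventricular response."]))
```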
{
"docid": "58cc081ac8e75c77de192f473e1cc10d",
"text": "We present an efficient approach for end-to-end out-of-core construction and interactive inspection of very large arbitrary surface models. The method tightly integrates visibility culling and out-of-core data management with a level-of-detail framework. At preprocessing time, we generate a coarse volume hierarchy by binary space partitioning the input triangle soup. Leaf nodes partition the original data into chunks of a fixed maximum number of triangles, while inner nodes are discretized into a fixed number of cubical voxels. Each voxel contains a compact direction dependent approximation of the appearance of the associated volumetric subpart of the model when viewed from a distance. The approximation is constructed by a visibility aware algorithm that fits parametric shaders to samples obtained by casting rays against the full resolution dataset. At rendering time, the volumetric structure, maintained off-core, is refined and rendered in front-to-back order, exploiting vertex programs for GPU evaluation of view-dependent voxel representations, hardware occlusion queries for culling occluded subtrees, and asynchronous I/O for detecting and avoiding data access latencies. Since the granularity of the multiresolution structure is coarse, data management, traversal and occlusion culling cost is amortized over many graphics primitives. The efficiency and generality of the approach is demonstrated with the interactive rendering of extremely complex heterogeneous surface models on current commodity graphics platforms.",
"title": ""
},
{
"docid": "dd05084594640b9ab87c702059f7a366",
"text": "Researchers and theorists have proposed that feelings of attachment to subgroups within a larger online community or site can increase users' loyalty to the site. They have identified two types of attachment, with distinct causes and consequences. With bond-based attachment, people feel connections to other group members, while with identity-based attachment they feel connections to the group as a whole. In two experiments we show that these feelings of attachment to subgroups increase loyalty to the larger community. Communication with other people in a subgroup but not simple awareness of them increases attachment to the larger community. By varying how the communication is structured, between dyads or with all group members simultaneously, the experiments show that bond- and identity-based attachment have different causes. But the experiments show no evidence that bond and identity attachment have different consequences. We consider both theoretical and methodological reasons why the consequences of bond-based and identity-based attachment are so similar.",
"title": ""
},
{
"docid": "549c8d2033f84890c91966630246e06e",
"text": "Propagation models are used to abstract the actual propagation characteristics of electromagnetic waves utilized for conveying information in a compact form (i.e., a model with a small number of parameters). The correct modeling of propagation and path loss is of paramount importance in wireless sensor network (WSN) system design and analysis [1]. Most of the important performance metrics commonly employed for WSNs, such as energy dissipation, route optimization, reliability, and connectivity, are affected by the utilized propagation model. However, in many studies on WSNs, overly simplistic and unrealistic propagation models are used. One of the reasons for the utilization of such impractical propagation models is the lack of awareness of experimentally available WSN-specific propagation and path-loss models. In this article, necessary succint background information is given on general wireless propagation modeling, and salient WSN-specific constraints on path-loss modeling are summarized. Building upon the provided background, an overview of the experimentally verified propagation models for WSNs is presented, and quantitative comparisons of propagation models employed in WSN research under various scenarios and frequency bands are provided.",
"title": ""
},
{
"docid": "b374975ae9690f96ed750a888713dbc9",
"text": "We present a method for densely computing local spherical histograms of oriented gradients (SHOG) in volumetric images. The descriptors are based on the continuous representation of the orientation histograms in the harmonic domain, which we compute very efficiently via spherical tensor products and the fast Fourier transformation. Building upon these local spherical histogram representations, we utilize the Harmonic Filter to create a generic rotation invariant object detection system that benefits from both the highly discriminative representation of local image patches in terms of histograms of oriented gradients and an adaptable trainable voting scheme that forms the filter. We exemplarily demonstrate the effectiveness of such dense spherical 3D descriptors in a detection task on biological 3D images. In a direct comparison to existing approaches, our new filter reveals superior performance.",
"title": ""
},
{
"docid": "90e5eaa383c00a0551a5161f07c683e7",
"text": "The importance of the Translation Lookaside Buffer (TLB) on system performance is well known. There have been numerous prior efforts addressing TLB design issues for cutting down access times and lowering miss rates. However, it was only recently that the first exploration [26] on prefetching TLB entries ahead of their need was undertaken and a mechanism called Recency Prefetching was proposed. There is a large body of literature on prefetching for caches, and it is not clear how they can be adapted (or if the issues are different) for TLBs, how well suited they are for TLB prefetching, and how they compare with the recency prefetching mechanism.This paper presents the first detailed comparison of different prefetching mechanisms (previously proposed for caches) - arbitrary stride prefetching, and markov prefetching - for TLB entries, and evaluates their pros and cons. In addition, this paper proposes a novel prefetching mechanism, called Distance Prefetching, that attempts to capture patterns in the reference behavior in a smaller space than earlier proposals. Using detailed simulations of a wide variety of applications (56 in all) from different benchmark suites and all the SPEC CPU2000 applications, this paper demonstrates the benefits of distance prefetching.",
"title": ""
},
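A toy model of the distance-prefetching idea mentioned above: record which address deltas historically follow which, and issue prefetches for the most frequent successors of the current delta. The table organization, fan-out, and trace are invented for illustration and do not reflect the paper's hardware design.

```python
from collections import defaultdict

class DistancePrefetcher:
    """Predict the next address deltas that historically followed the current delta."""
    def __init__(self, fanout=2):
        self.table = defaultdict(lambda: defaultdict(int))   # delta -> {next_delta: count}
        self.last_addr = None
        self.last_delta = None
        self.fanout = fanout

    def access(self, addr):
        prefetches = []
        if self.last_addr is not None:
            delta = addr - self.last_addr
            if self.last_delta is not None:
                self.table[self.last_delta][delta] += 1      # learn delta-follows-delta pattern
            followers = self.table[delta]
            best = sorted(followers, key=followers.get, reverse=True)[: self.fanout]
            prefetches = [addr + d for d in best]            # candidate prefetch addresses
            self.last_delta = delta
        self.last_addr = addr
        return prefetches

pf = DistancePrefetcher()
trace = [0x1000, 0x1008, 0x1010, 0x2000, 0x2008, 0x2010, 0x3000, 0x3008]
for a in trace:
    print(hex(a), [hex(p) for p in pf.access(a)])
```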
{
"docid": "42e2aec24a5ab097b5fff3ec2fe0385d",
"text": "Online freelancing marketplaces have grown quickly in recent years. In theory, these sites offer workers the ability to earn money without the obligations and potential social biases associated with traditional employment frameworks. In this paper, we study whether two prominent online freelance marketplaces - TaskRabbit and Fiverr - are impacted by racial and gender bias. From these two platforms, we collect 13,500 worker profiles and gather information about workers' gender, race, customer reviews, ratings, and positions in search rankings. In both marketplaces, we find evidence of bias: we find that gender and race are significantly correlated with worker evaluations, which could harm the employment opportunities afforded to the workers. We hope that our study fuels more research on the presence and implications of discrimination in online environments.",
"title": ""
},
{
"docid": "8608ccbb61cbfbf3aae7e832ad4be0aa",
"text": "Part A: Fundamentals and Cryptography Chapter 1: A Framework for System Security Chapter 1 aims to describe a conceptual framework for the design and analysis of secure systems with the goal of defining a common language to express “concepts”. Since it is designed both for theoreticians and for practitioners, there are two kinds of applicability. On the one hand a meta-model is proposed to theoreticians, enabling them to express arbitrary axioms of other security models in this special framework. On the other hand the framework provides a language for describing the requirements, designs, and evaluations of secure systems. This information is given to the reader in the introduction and as a consequence he wants to get the specification of the framework. Unfortunately, the framework itself is not described! However, the contents cover first some surrounding concepts like “systems, owners, security and functionality”. These are described sometimes in a confusing way, so that it remains unclear, what the author really wants to focus on. The following comparison of “Qualitative and Quantitative Security” is done 1For example: if the reader is told, that “every system has an owner, and every owner is a system”, there obviously seems to be no difference between these entities (cp. p. 4).",
"title": ""
},
{
"docid": "47a8f987548d6fc03191844e392d9d05",
"text": "A major challenge in collaborative filtering based recommender systems is how to provide recommendations when rating data is sparse or entirely missing for a subset of users or items, commonly known as the cold-start problem. In recent years, there has been considerable interest in developing new solutions that address the cold-start problem. These solutions are mainly based on the idea of exploiting other sources of information to compensate for the lack of rating data. In this paper, we propose a novel algorithmic framework based on matrix factorization that simultaneously exploits the similarity information among users and items to alleviate the cold-start problem. In contrast to existing methods, the proposed algorithm decouples the following two aspects of the cold-start problem: (a) the completion of a rating sub-matrix, which is generated by excluding cold-start users and items from the original rating matrix; and (b) the transduction of knowledge from existing ratings to cold-start items/users using side information. This crucial difference significantly boosts the performance when appropriate side information is incorporated. We provide theoretical guarantees on the estimation error of the proposed two-stage algorithm based on the richness of similarity information in capturing the rating data. To the best of our knowledge, this is the first algorithm that addresses the cold-start problem with provable guarantees. We also conduct thorough experiments on synthetic and real datasets that demonstrate the effectiveness of the proposed algorithm and highlights the usefulness of auxiliary information in dealing with both cold-start users and items.",
"title": ""
},
{
"docid": "28b1cc95aa385664cacbf20661f5cf56",
"text": "Many organizations now emphasize the use of technology that can help them get closer to consumers and build ongoing relationships with them. The ability to compile consumer data profiles has been made even easier with Internet technology. However, it is often assumed that consumers like to believe they can trust a company with their personal details. Lack of trust may cause consumers to have privacy concerns. Addressing such privacy concerns may therefore be crucial to creating stable and ultimately profitable customer relationships. Three specific privacy concerns that have been frequently identified as being of importance to consumers include unauthorized secondary use of data, invasion of privacy, and errors. Results of a survey study indicate that both errors and invasion of privacy have a significant inverse relationship with online purchase behavior. Unauthorized use of secondary data appears to have little impact. Managerial implications include the careful selection of communication channels for maximum impact, the maintenance of discrete “permission-based” contact with consumers, and accurate recording and handling of data.",
"title": ""
},
{
"docid": "0c477aa54f5da088613d1376174feca8",
"text": "In today’s online social networks, it becomes essential to help newcomers as well as existing community members to find new social contacts. In scientific literature, this recommendation task is known as link prediction. Link prediction has important practical applications in social network platforms. It allows social network platform providers to recommend friends to their users. Another application is to infer missing links in partially observed networks. The shortcoming of many of the existing link prediction methods is that they mostly focus on undirected graphs only. This work closes this gap and introduces link prediction methods and metrics for directed graphs. Here, we compare well-known similarity metrics and their suitability for link prediction in directed social networks. We advance existing techniques and propose mining of subgraph patterns that are used to predict links in networks such as GitHub, GooglePlus, and Twitter. Our results show that the proposed metrics and techniques yield more accurate predictions when compared with metrics not accounting for the directed nature of the underlying networks.",
"title": ""
},
{
"docid": "0cf81998c0720405e2197c62afa08ee7",
"text": "User-generated online reviews can play a significant role in the success of retail products, hotels, restaurants, etc. However, review systems are often targeted by opinion spammers who seek to distort the perceived quality of a product by creating fraudulent reviews. We propose a fast and effective framework, FRAUDEAGLE, for spotting fraudsters and fake reviews in online review datasets. Our method has several advantages: (1) it exploits the network effect among reviewers and products, unlike the vast majority of existing methods that focus on review text or behavioral analysis, (2) it consists of two complementary steps; scoring users and reviews for fraud detection, and grouping for visualization and sensemaking, (3) it operates in a completely unsupervised fashion requiring no labeled data, while still incorporating side information if available, and (4) it is scalable to large datasets as its run time grows linearly with network size. We demonstrate the effectiveness of our framework on synthetic and real datasets; where FRAUDEAGLE successfully reveals fraud-bots in a large online app review database. Introduction The Web has greatly enhanced the way people perform certain activities (e.g. shopping), find information, and interact with others. Today many people read/write reviews on merchant sites, blogs, forums, and social media before/after they purchase products or services. Examples include restaurant reviews on Yelp, product reviews on Amazon, hotel reviews on TripAdvisor, and many others. Such user-generated content contains rich information about user experiences and opinions, which allow future potential customers to make better decisions about spending their money, and also help merchants improve their products, services, and marketing. Since online reviews can directly influence customer purchase decisions, they are crucial to the success of businesses. While positive reviews with high ratings can yield financial gains, negative reviews can damage reputation and cause monetary loss. This effect is magnified as the information spreads through the Web (Hitlin 2003; Mendoza, Poblete, and Castillo 2010). As a result, online review systems are attractive targets for opinion fraud. Opinion fraud involves reviewers (often paid) writing bogus reviews (Kost May 2012; Copyright c © 2013, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Streitfeld August 2011). These spam reviews come in two flavors: defaming-spam which untruthfully vilifies, or hypespam that deceitfully promotes the target product. The opinion fraud detection problem is to spot the fake reviews in online sites, given all the reviews on the site, and for each review, its text, its author, the product it was written for, timestamp of posting, and its star-rating. Typically no user profile information is available (or is self-declared and cannot be trusted), while more side information for products (e.g. price, brand), and for reviews (e.g. number of (helpful) feedbacks) could be available depending on the site. Detecting opinion fraud, as defined above, is a non-trivial and challenging problem. Fake reviews are often written by experienced professionals who are paid to write high quality, believable reviews. As a result, it is difficult for an average potential customer to differentiate bogus reviews from truthful ones, just by looking at individual reviews text(Ott et al. 2011). 
As such, manual labeling of reviews is hard and ground truth information is often unavailable, which makes training supervised models less attractive for this problem. Summary of previous work. Previous attempts at solving the problem use several heuristics, such as duplicated reviews (Jindal and Liu 2008), or acquire bogus reviews from non-experts (Ott et al. 2011), to generate pseudo-ground truth, or a reference dataset. This data is then used for learning classification models together with carefully engineered features. One downside of such techniques is that they do not generalize: one needs to collect new data and train a new model for review data from a different domain, e.g., hotel vs. restaurant reviews. Moreover, feature selection becomes a tedious sub-problem, as datasets from different domains might exhibit different characteristics. Other feature-based proposals include (Lim et al. 2010; Mukherjee, Liu, and Glance 2012). A large body of work on fraud detection relies on review text information (Jindal and Liu 2008; Ott et al. 2011; Feng, Banerjee, and Choi 2012) or behavioral evidence (Lim et al. 2010; Xie et al. 2012; Feng et al. 2012), and ignores the connectivity structure of review data. On the other hand, the network of reviewers and products contains rich information that implicitly represents correlations among these entities. The review network is also invaluable for detecting teams of fraudsters that operate collaboratively on targeted products. Our contributions. In this work we propose an unsupervised framework.",
"title": ""
},
{
"docid": "049c1597f063f9c5fcc098cab8885289",
"text": "When one captures images in low-light conditions, the images often suffer from low visibility. This poor quality may significantly degrade the performance of many computer vision and multimedia algorithms that are primarily designed for high-quality inputs. In this paper, we propose a very simple and effective method, named as LIME, to enhance low-light images. More concretely, the illumination of each pixel is first estimated individually by finding the maximum value in R, G and B channels. Further, we refine the initial illumination map by imposing a structure prior on it, as the final illumination map. Having the well-constructed illumination map, the enhancement can be achieved accordingly. Experiments on a number of challenging real-world low-light images are present to reveal the efficacy of our LIME and show its superiority over several state-of-the-arts.",
"title": ""
}
] |
scidocsrr
|
5edf85680d1e77a148f69ad7d261b6c2
|
Averaged-DQN: Variance Reduction and Stabilization for Deep Reinforcement Learning
|
[
{
"docid": "28ee32149227e4a26bea1ea0d5c56d8c",
"text": "We consider an agent’s uncertainty about its environment and the problem of generalizing this uncertainty across states. Specifically, we focus on the problem of exploration in non-tabular reinforcement learning. Drawing inspiration from the intrinsic motivation literature, we use density models to measure uncertainty, and propose a novel algorithm for deriving a pseudo-count from an arbitrary density model. This technique enables us to generalize count-based exploration algorithms to the non-tabular case. We apply our ideas to Atari 2600 games, providing sensible pseudo-counts from raw pixels. We transform these pseudo-counts into exploration bonuses and obtain significantly improved exploration in a number of hard games, including the infamously difficult MONTEZUMA’S REVENGE.",
"title": ""
},
{
"docid": "771611dc99e22b054b936fce49aea7fc",
"text": "Count-based exploration algorithms are known to perform near-optimally when used in conjunction with tabular reinforcement learning (RL) methods for solving small discrete Markov decision processes (MDPs). It is generally thought that count-based methods cannot be applied in high-dimensional state spaces, since most states will only occur once. Recent deep RL exploration strategies are able to deal with high-dimensional continuous state spaces through complex heuristics, often relying on optimism in the face of uncertainty or intrinsic motivation. In this work, we describe a surprising finding: a simple generalization of the classic count-based approach can reach near state-of-the-art performance on various highdimensional and/or continuous deep RL benchmarks. States are mapped to hash codes, which allows to count their occurrences with a hash table. These counts are then used to compute a reward bonus according to the classic count-based exploration theory. We find that simple hash functions can achieve surprisingly good results on many challenging tasks. Furthermore, we show that a domaindependent learned hash code may further improve these results. Detailed analysis reveals important aspects of a good hash function: 1) having appropriate granularity and 2) encoding information relevant to solving the MDP. This exploration strategy achieves near state-of-the-art performance on both continuous control tasks and Atari 2600 games, hence providing a simple yet powerful baseline for solving MDPs that require considerable exploration.",
"title": ""
},
{
"docid": "c0d7b92c1b88a2c234eac67c5677dc4d",
"text": "To appear in G Tesauro D S Touretzky and T K Leen eds Advances in Neural Information Processing Systems MIT Press Cambridge MA A straightforward approach to the curse of dimensionality in re inforcement learning and dynamic programming is to replace the lookup table with a generalizing function approximator such as a neu ral net Although this has been successful in the domain of backgam mon there is no guarantee of convergence In this paper we show that the combination of dynamic programming and function approx imation is not robust and in even very benign cases may produce an entirely wrong policy We then introduce Grow Support a new algorithm which is safe from divergence yet can still reap the bene ts of successful generalization",
"title": ""
}
] |
[
{
"docid": "f83bf92a38f1ce7734a5c1abce65f92f",
"text": "This paper presents an Adaptive fuzzy logic PID controller for speed control of Brushless Direct current Motor drives which is widely used in various industrial systems, such as servo motor drives, medical, automobile and aerospace industry. BLDC motors were electronically commutated motor offer many advantages over Brushed DC Motor which includes increased efficiency, longer life, low volume and high torque. This paper presents an overview of performance of fuzzy PID controller and Adaptive fuzzy PID controller using Simulink model. Tuning Parameters and computing using Normal PID controller is difficult and also it does not give satisfied control characteristics when compare to Adaptive Fuzzy PID controller. From the Simulation results we verify that Adaptive Fuzzy PID controller give better control performance when compared to fuzzy PID controller. The software Package SIMULINK was used in control and Modelling of BLDC Motor.",
"title": ""
},
{
"docid": "2d9921e49e58725c9c85da02249c8d27",
"text": "Recently, the performance of Si power devices gradually approaches the physical limit, and the latest SiC device seemingly has the ability to substitute the Si insulated gate bipolar transistor (IGBT) in 1200 V class. In this paper, we demonstrate the feasibility of further improving the Si IGBT based on the new concept of CSTBTtrade. In point of view of low turn-off loss and high uniformity in device characteristics, we employ the techniques of fine-pattern and retro grade doping in the design of new device structures, resulting in significant reduction on the turn-off loss and the VGE(th) distribution, respectively.",
"title": ""
},
{
"docid": "0be24a284a7490b709bbbdfea458b211",
"text": "This article provides a meta-analytic review of the relationship between the quality of leader-member exchanges (LMX) and citizenship behaviors performed by employees. Results based on 50 independent samples (N = 9,324) indicate a moderately strong, positive relationship between LMX and citizenship behaviors (rho = .37). The results also support the moderating role of the target of the citizenship behaviors on the magnitude of the LMX-citizenship behavior relationship. As expected, LMX predicted individual-targeted behaviors more strongly than it predicted organizational targeted behaviors (rho = .38 vs. rho = .31), and the difference was statistically significant. Whether the LMX and the citizenship behavior ratings were provided by the same source or not also influenced the magnitude of the correlation between the 2 constructs.",
"title": ""
},
{
"docid": "d36c3839127ecee4f22e846a91b32d6c",
"text": "Michelangelo Buonarroti (1475-1564) was a master anatomist as well as an artistic genius. He dissected numerous cadavers and developed a profound understanding of human anatomy. Among his best-known artworks are the frescoes painted on the ceiling of the Sistine Chapel (1508-1512), in Rome. Currently, there is some debate over whether the frescoes merely represent the teachings of the Catholic Church at the time or if there are other meanings hidden in the images. In addition, there is speculation regarding the image of the brain embedded in the fresco known as \"The Creation of Adam,\" which contains anatomic features of the midsagittal and lateral surfaces of the brain. Within this context, we report our use of Image Pro Plus Software 6.0 to demonstrate mathematical evidence that Michelangelo painted \"The Creation of Adam\" using the Divine Proportion/Golden Ratio (GR) (1.6). The GR is classically associated with greater structural efficiency and is found in biological structures and works of art by renowned artists. Thus, according to the evidence shown in this article, we can suppose that the beauty and harmony recognized in all Michelangelo's works may not be based solely on his knowledge of human anatomical proportions, but that the artist also probably knew anatomical structures that conform to the GR display greater structural efficiency. It is hoped that this report will at least stimulate further scientific and scholarly contributions to this fascinating topic, as the study of these works of art is essential for the knowledge of the history of Anatomy.",
"title": ""
},
{
"docid": "b540fb20a265d315503543a5d752f486",
"text": "Deep convolutional networks have witnessed unprecedented success in various machine learning applications. Formal understanding on what makes these networks so successful is gradually unfolding, but for the most part there are still significant mysteries to unravel. The inductive bias, which reflects prior knowledge embedded in the network architecture, is one of them. In this work, we establish a fundamental connection between the fields of quantum physics and deep learning. We use this connection for asserting novel theoretical observations regarding the role that the number of channels in each layer of the convolutional network fulfills in the overall inductive bias. Specifically, we show an equivalence between the function realized by a deep convolutional arithmetic circuit (ConvAC) and a quantum many-body wave function, which relies on their common underlying tensorial structure. This facilitates the use of quantum entanglement measures as welldefined quantifiers of a deep network’s expressive ability to model intricate correlation structures of its inputs. Most importantly, the construction of a deep convolutional arithmetic circuit in terms of a Tensor Network is made available. This description enables us to carry a graph-theoretic analysis of a convolutional network, tying its expressiveness to a min-cut in the graph which characterizes it. Thus, we demonstrate a direct control over the inductive bias of the designed deep convolutional network via its channel numbers, which we show to be related to this min-cut in the underlying graph. This result is relevant to any practitioner designing a convolutional network for a specific task. We theoretically analyze convolutional arithmetic circuits, and empirically validate our findings on more common convolutional networks which involve ReLU activations and max pooling. Beyond the results described above, the description of a deep convolutional network in well-defined graph-theoretic tools and the formal structural connection to quantum entanglement, are two interdisciplinary bridges that are brought forth by this work.",
"title": ""
},
{
"docid": "2fbcd34468edf53ee08e0a76a048c275",
"text": "Recently, the introduction of the generative adversarial network (GAN) and its variants has enabled the generation of realistic synthetic samples, which has been used for enlarging training sets. Previous work primarily focused on data augmentation for semi-supervised and supervised tasks. In this paper, we instead focus on unsupervised anomaly detection and propose a novel generative data augmentation framework optimized for this task. In particular, we propose to oversample infrequent normal samples - normal samples that occur with small probability, e.g., rare normal events. We show that these samples are responsible for false positives in anomaly detection. However, oversampling of infrequent normal samples is challenging for real-world high-dimensional data with multimodal distributions. To address this challenge, we propose to use a GAN variant known as the adversarial autoencoder (AAE) to transform the high-dimensional multimodal data distributions into low-dimensional unimodal latent distributions with well-defined tail probability. Then, we systematically oversample at the 'edge' of the latent distributions to increase the density of infrequent normal samples. We show that our oversampling pipeline is a unified one: it is generally applicable to datasets with different complex data distributions. To the best of our knowledge, our method is the first data augmentation technique focused on improving performance in unsupervised anomaly detection. We validate our method by demonstrating consistent improvements across several real-world datasets.",
"title": ""
},
{
"docid": "bd178b04fe57db1ce408452edeb8a6d4",
"text": "BACKGROUND\nIn 1998, the French Ministry of Environment revealed that of 71 French municipal solid waste incinerators processing more than 6 metric tons of material per hour, dioxin emission from 15 of them was above the 10 ng international toxic equivalency factor/m3 (including Besançon, emitting 16.3 ng international toxic equivalency factor/m3) which is substantially higher than the 0.1 international toxic equivalency factor/m3 prescribed by a European directive of 1994. In 2000, a macrospatial epidemiological study undertaken in the administrative district of Doubs, identified two significant clusters of soft-tissue sarcoma and non Hodgkin lymphoma in the vicinity of the municipal solid waste incinerator of Besançon. This microspatial study (at the Besançon city scale), was designed to test the association between the exposure to dioxins emitted by the municipal solid waste incinerator of Besançon and the risk of soft-tissue sarcoma.\n\n\nMETHODS\nGround-level concentrations of dioxin were modeled with a dispersion model (Air Pollution Control 3 software). Four increasing zones of exposure were defined. For each case of soft tissue sarcoma, ten controls were randomly selected from the 1990 census database and matched for gender and age. A geographic information system allowed the attribution of a dioxin concentration category to cases and controls, according to their place of residence.\n\n\nRESULTS\nThirty-seven cases of soft tissue sarcoma were identified by the Doubs cancer registry between 1980 and 1995, corresponding to a standardized incidence (French population) of 2.44 per 100,000 inhabitants. Compared with the least exposed zone, the risk of developing a soft tissue sarcoma was not significantly increased for people living in the more exposed zones.\n\n\nCONCLUSION\nBefore definitely concluding that there is no relationship between the exposure to dioxin released by a solid waste incinerator and soft tissue sarcoma, a nationwide investigation based on other registries should be conducted.",
"title": ""
},
{
"docid": "d952de00554b9a6bb21fbce802729b3f",
"text": "In the past five years there has been a dramatic increase in work on Search Based Software Engineering (SBSE), an approach to software engineering in which search based optimisation algorithms are used to address problems in Software Engineering. SBSE has been applied to problems throughout the Software Engineering lifecycle, from requirements and project planning to maintenance and re-engineering. The approach is attractive because it offers a suite of adaptive automated and semi-automated solutions in situations typified by large complex problem spaces with multiple competing and conflicting objectives. This paper provides a review and classification of literature on SBSE. The paper identifies research trends and relationships between the techniques applied and the applications to which they have been applied and highlights gaps in the literature and avenues for further research.",
"title": ""
},
{
"docid": "cb7e4a454d363b9cb1eb6118a4b00855",
"text": "Stream processing applications reduce the latency of batch data pipelines and enable engineers to quickly identify production issues. Many times, a service can log data to distinct streams, even if they relate to the same real-world event (e.g., a search on Facebook’s search bar). Furthermore, the logging of related events can appear on the server side with different delay, causing one stream to be significantly behind the other in terms of logged event times for a given log entry. To be able to stitch this information together with low latency, we need to be able to join two different streams where each stream may have its own characteristics regarding the degree in which its data is out-of-order. Doing so in a streaming fashion is challenging as a join operator consumes lots of memory, especially with significant data volumes. This paper describes an end-to-end streaming join service that addresses the challenges above through a streaming join operator that uses an adaptive stream synchronization algorithm that is able to handle the different distributions we observe in real-world streams regarding their event times. This synchronization scheme paces the parsing of new data and reduces overall operator memory footprint while still providing high accuracy. We have integrated this into a streaming SQL system and have successfully reduced the latency of several batch pipelines using this approach. PVLDB Reference Format: G. Jacques-Silva, R. Lei, L. Cheng, G. J. Chen, K. Ching, T. Hu, Y. Mei, K. Wilfong, R. Shetty, S. Yilmaz, A. Banerjee, B. Heintz, S. Iyer, A. Jaiswal. Providing Streaming Joins as a Service at Facebook. PVLDB, 11 (12): 1809-1821, 2018. DOI: : https://doi.org/10.14778/3229863.3229869",
"title": ""
},
{
"docid": "7753a65e07ace406d29822c9d165c83f",
"text": "A new technique is presented for matching image features to maps or models. The technique forms all possible pairs of image features and model features which match on the basis of local evidence alone. For each possible pair of matching features the parameters of an RST (rotation, scaling, and translation) transformation are derived. Clustering in the space of all possible RST parameter sets reveals a good global transformation which matches many image features to many model features. Results with a variety of data sets are presented which demonstrate that the technique does not require sophisticated feature detection and is robust with respect to changes of image orientation and content. Examples in both cartography and object detection are given.",
"title": ""
},
{
"docid": "74d2d780291e9dbf2e725b55ccadd278",
"text": "Organizational climate and organizational culture theory and research are reviewed. The article is first framed with definitions of the constructs, and preliminary thoughts on their interrelationships are noted. Organizational climate is briefly defined as the meanings people attach to interrelated bundles of experiences they have at work. Organizational culture is briefly defined as the basic assumptions about the world and the values that guide life in organizations. A brief history of climate research is presented, followed by the major accomplishments in research on the topic with regard to levels issues, the foci of climate research, and studies of climate strength. A brief overview of the more recent study of organizational culture is then introduced, followed by samples of important thinking and research on the roles of leadership and national culture in understanding organizational culture and performance and culture as a moderator variable in research in organizational behavior. The final section of the article proposes an integration of climate and culture thinking and research and concludes with practical implications for the management of effective contemporary organizations. Throughout, recommendations are made for additional thinking and research.",
"title": ""
},
{
"docid": "281e8785214bb209a142d420dfdc5f26",
"text": "This study examined achievement when podcasts were used in place of lecture in the core technology course required for all students seeking teacher licensure at a large research-intensive university in the Southeastern United States. Further, it examined the listening preferences of the podcast group and the barriers to podcast use. The results revealed that there was no significant difference in the achievement of preservice teachers who experienced podcast instruction versus those who received lecture instruction. Further, there was no significant difference in their study habits. Participants preferred to use a computer and Blackboard for downloading the podcasts, which they primarily listened to at home. They tended to like the podcasts as well as the length of the podcasts and felt that they were reasonably effective for learning. They agreed that the podcasts were easy to use but disagreed that they should be used to replace lecture. Barriers to podcast use include unfamiliarity with podcasts, technical problems in accessing and downloading podcasts, and not seeing the relevance of podcasts to their learning. 2011 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "7ea2f7c549721f95e10b27af9de3d44b",
"text": "Declaration Declaration I hereby declare that except where specific reference is made to the work of others, the contents of this thesis are original and have not been submitted in whole or in part for consideration for any other degree or qualification in this, or any other university. This thesis is my own work and contains nothing, which is the outcome of work done in collaboration with others, except as specified in the text and Acknowledgements. Abstract I Abstract Nowadays, with the smart device developing and life quality improving, people's requirement of real-time, fast, accurate and smart health service has been increased. As the technology advances, E-Health Care concept has been emerging in the last decades and received extensive attention. With the help of Internet and computing technologies, a lot of E-Health Systems have been proposed that change traditional medical treatment mode to remote or online medical treatment. Furthermore, due to the rapidly development of Internet and wireless network in recent years, many enhanced E-Health Systems based on Wireless Sensor Network have been proposed that open a new research field. Sensor Network by taking the advantage of the latest technologies. The proposed E-Health System is a wireless and portable system, which consists of the Wireless E-Health Gateway and Wireless E-Health Sensor Nodes. The system has been further enhanced by Smart Technology that combined the advantages of the smart phone. The proposed system has change the mechanisms of traditional medical care and provide real-time, portable, accurate and flexible medical care services to users. With the E-Health System wieldy deployed, it requires powerful computing center to deal with the mass health record data. Cloud technology as an emerging technology has applied in the proposed system. This research has used Amazon Web Services (AWS) – Cloud Computing Services to develop a powerful, scalable and fast connection web service for proposed E-Health Management System. Abstract II The security issue is a common problem in the wireless network, and it is more important for E-Health System as the personal health data is private and should be safely transferred and storage. Hence, this research work also focused on the cryptographic algorithm to reinforce the security of E-Health System. Due to the limitations of embedded system resources, such as: lower computing, smaller battery, and less memory, which cannot support modem advance encryption standard. In this research, Rivest Cipher Version 5 (RC5) as the simple, security and software …",
"title": ""
},
{
"docid": "274829e884c6ba5f425efbdce7604108",
"text": "The Internet of Things (IoT) is constantly evolving and is giving unique solutions to the everyday problems faced by man. “Smart City” is one such implementation aimed at improving the lifestyle of human beings. One of the major hurdles in most cities is its solid waste management, and effective management of the solid waste produced becomes an integral part of a smart city. This paper aims at providing an IoT based architectural solution to tackle the problems faced by the present solid waste management system. By providing a complete IoT based system, the process of tracking, collecting, and managing the solid waste can be easily automated and monitored efficiently. By taking the example of the solid waste management crisis of Bengaluru city, India, we have come up with the overall system architecture and protocol stack to give a IoT based solution to improve the reliability and efficiency of the system. By making use of sensors, we collect data from the garbage bins and send them to a gateway using LoRa technology. The data from various garbage bins are collected by the gateway and sent to the cloud over the Internet using the MQTT (Message Queue Telemetry Transport) protocol. The main advantage of the proposed system is the use of LoRa technology for data communication which enables long distance data transmission along with low power consumption as compared to Wi-Fi, Bluetooth or Zigbee.",
"title": ""
},
{
"docid": "1a9be0a664da314c143ca430bd6f4502",
"text": "Fingerprint image quality is an important factor in the perf ormance of Automatic Fingerprint Identification Systems(AFIS). It is used to evaluate the system performance, assess enrollment acceptability, and evaluate fingerprint sensors. This paper presents a novel methodology for fingerp rint image quality measurement. We propose limited ring-wedge spectral measu r to estimate the global fingerprint image features, and inhomogeneity with d rectional contrast to estimate local fingerprint image features. Experimental re sults demonstrate the effectiveness of our proposal.",
"title": ""
},
{
"docid": "b9ca1209ce50bf527d68109dbdf7431c",
"text": "The MATLAB model of the analog multiplier based on the sigma delta modulation is developed. Different modes of multiplier are investigated and obtained results are compared with analytical results.",
"title": ""
},
{
"docid": "99faeab3adcf89a3f966b87547cea4e7",
"text": "In-service structural health monitoring of composite aircraft structures plays a key role in the assessment of their performance and integrity. In recent years, Fibre Optic Sensors (FOS) have proved to be a potentially excellent technique for real-time in-situ monitoring of these structures due to their numerous advantages, such as immunity to electromagnetic interference, small size, light weight, durability, and high bandwidth, which allows a great number of sensors to operate in the same system, and the possibility to be integrated within the material. However, more effort is still needed to bring the technology to a fully mature readiness level. In this paper, recent research and applications in structural health monitoring of composite aircraft structures using FOS have been critically reviewed, considering both the multi-point and distributed sensing techniques.",
"title": ""
},
{
"docid": "e0301bf133296361b4547730169d2672",
"text": "Radar warning receivers (RWRs) classify the intercepted pulses into clusters utilizing multiple parameter deinterleaving. In order to make classification more elaborate time-of-arrival (TOA) deinterleaving should be performed for each cluster. In addition, identification of the classified pulse sequences has been exercised at last. It is essential to identify the classified sequences with a minimum number of pulses. This paper presents a method for deinterleaving of intercepted signals having small number of pulses that belong to stable or jitter pulse repetition interval (PRI) types in the presence of missed pulses. It is necessary for both stable and jitter PRI TOA deinterleaving algorithms to utilize predefined PRI range. However, jitter PRI TOA deinterleaving also requires variation about mean PRI value of emitter of interest as a priori.",
"title": ""
},
{
"docid": "0cb0c5f181ef357cd81d4a290d2cbc14",
"text": "With 3D sensing becoming cheaper, environment-aware and visually-guided robot arms capable of safely working in collaboration with humans will become common. However, a reliable calibration is needed, both for camera internal calibration, as well as Eye-to-Hand calibration, to make sure the whole system functions correctly. We present a framework, using a novel combination of well proven methods, allowing a quick automatic calibration for the integration of systems consisting of the robot and a varying number of 3D cameras by using a standard checkerboard calibration grid. Our approach allows a quick camera-to-robot recalibration after any changes to the setup, for example when cameras or robot have been repositioned. Modular design of the system ensures flexibility regarding a number of sensors used as well as different hardware choices. The framework has been proven to work by practical experiments to analyze the quality of the calibration versus the number of positions of the checkerboard used for each of the calibration procedures.",
"title": ""
}
] |
scidocsrr
|
f9e1d9c1323a1e2e78f7fe6d59e30bee
|
Facial Expression Recognition Based on Facial Components Detection and HOG Features
|
[
{
"docid": "1e2768be2148ff1fd102c6621e8da14d",
"text": "Example-based learning for computer vision can be difficult when a large number of examples to represent each pattern or object class is not available. In such situations, learning from a small number of samples is of practical value. To study this issue, the task of face expression recognition with a small number of training images of each expression is considered. A new technique based on linear programming for both feature selection and classifier training is introduced. A pairwise framework for feature selection, instead of using all classes simultaneously, is presented. Experimental results compare the method with three others: a simplified Bayes classifier, support vector machine, and AdaBoost. Finally, each algorithm is analyzed and a new categorization of these algorithms is given, especially for learning from examples in the small sample case.",
"title": ""
}
] |
[
{
"docid": "8d3c4598b7d6be5894a1098bea3ed81a",
"text": "Retrieval enhances long-term retention. However, reactivation of a memory also renders it susceptible to modifications as shown by studies on memory reconsolidation. The present study explored whether retrieval diminishes or enhances subsequent retroactive interference (RI) and intrusions. Participants learned a list of objects. Two days later, they were either asked to recall the objects, given a subtle reminder, or were not reminded of the first learning session. Then, participants learned a second list of objects or performed a distractor task. After another two days, retention of List 1 was tested. Although retrieval enhanced List 1 memory, learning a second list impaired memory in all conditions. This shows that testing did not protect memory from RI. While a subtle reminder before List 2 learning caused List 2 items to later intrude into List 1 recall, very few such intrusions were observed in the testing and the no reminder conditions. The findings are discussed in reference to the reconsolidation account and the testing effect literature, and implications for educational practice are outlined. © 2015 Elsevier Inc. All rights reserved. Retrieval practice or testing is one of the most powerful memory enhancers. Testing that follows shortly after learning benefits long-term retention more than studying the to-be-remembered material again (Roediger & Karpicke, 2006a, 2006b). This effect has been shown using a variety of materials and paradigms, such as text passages (e.g., Roediger & Karpicke, 2006a), paired associates (Allen, Mahler, & Estes, 1969), general knowledge questions (McDaniel & Fisher, 1991), and word and picture lists (e.g., McDaniel & Masson, 1985; Wheeler & Roediger, 1992; Wheeler, Ewers, & Buonanno, 2003). Testing effects have been observed in traditional lab as well as educational settings (Grimaldi & Karpicke, 2015; Larsen, Butler, & Roediger, 2008; McDaniel, Anderson, Derbish, & Morrisette, 2007). Testing not only improves long-term retention, it also enhances subsequent encoding (Pastötter, Schicker, Niedernhuber, & Bäuml, 2011), protects memories from the buildup of proactive interference (PI; Nunes & Weinstein, 2012; Wahlheim, 2014), and reduces the probability that the tested items intrude into subsequently studied lists (Szpunar, McDermott, & Roediger, 2008; Weinstein, McDermott, & Szpunar, 2011). The reduced PI and intrusion rates are assumed to reflect enhanced list discriminability or improved within-list organization. Enhanced list discriminability in turn helps participants distinguish different sets or sources of information and allows them to circumscribe the search set during retrieval to the relevant list (e.g., Congleton & Rajaram, 2012; Halamish & Bjork, 2011; Szpunar et al., 2008). ∗ Correspondence to: Department of Psychology, Lehigh University, 17 Memorial Drive East, Bethlehem, PA 18015, USA. E-mail address: hupbach@lehigh.edu http://dx.doi.org/10.1016/j.lmot.2015.01.004 0023-9690/© 2015 Elsevier Inc. All rights reserved. 24 A. Hupbach / Learning and Motivation 49 (2015) 23–30 If testing increases list discriminability, then it should also protect the tested list(s) from RI and intrusions from material that is encoded after retrieval practice. 
However, testing also necessarily reactivates a memory, and according to the reconsolidation account reactivation re-introduces plasticity into the memory trace, making it especially vulnerable to modifications (e.g., Dudai, 2004; Nader, Schafe, & LeDoux, 2000; for a recent review, see e.g., Hupbach, Gomez, & Nadel, 2013). Increased vulnerability to modification would suggest increased rather than reduced RI and intrusions. The few studies addressing this issue have yielded mixed results, with some suggesting that retrieval practice diminishes RI (Halamish & Bjork, 2011; Potts & Shanks, 2012), and others showing that retrieval practice can exacerbate the potential negative effects of post-retrieval learning (e.g., Chan & LaPaglia, 2013; Chan, Thomas, & Bulevich, 2009; Walker, Brakefield, Hobson, & Stickgold, 2003). Chan and colleagues (Chan & Langley, 2011; Chan et al., 2009; Thomas, Bulevich, & Chan, 2010) assessed the effects of testing on suggestibility in a misinformation paradigm. After watching a television episode, participants answered cuedrecall questions about it (retrieval practice) or performed an unrelated distractor task. Then, all participants read a narrative, which summarized the video but also contained some misleading information. A final cued-recall test revealed that participants in the retrieval practice condition recalled more misleading details and fewer correct details than participants in the distractor condition; that is, retrieval increased the misinformation effect (retrieval-enhanced suggestibility, RES). Chan et al. (2009) discuss two mechanisms that can explain this finding. First, since testing can potentiate subsequent new learning (e.g., Izawa, 1967; Tulving & Watkins, 1974), initial testing might have improved encoding of the misinformation. Indeed, when a modified final test was used, which encouraged the recall of both the correct information and the misinformation, participants in the retrieval practice condition recalled more misinformation than participants in the distractor condition (Chan et al., 2009). Second, retrieval might have rendered the memory more susceptible to interference by misinformation, an explanation that is in line with the reconsolidation account. Indeed, Chan and LaPaglia (2013) found reduced recognition of the correct information when retrieval preceded the presentation of misinformation (cf. Walker et al., 2003 for a similar effect in procedural memory). In contrast to Chan and colleagues’ findings, a study by Potts and Shanks (2012) suggests that testing protects memories from the negative influences of post-retrieval encoding of related material. Potts and Shanks asked participants to learn English–Swahili word pairs (List 1, A–B). One day later, one group of participants took a cued recall test of List 1 (testing condition) immediately before learning English–Finnish word pairs with the same English cues as were used in List 1 (List 2, A–C). Additionally, several control groups were implemented: one group was tested on List 1 without learning a second list, one group learned List 2 without prior retrieval practice, and one group did not participate in this session at all. On the third day, all participants took a final cued-recall test of List 1. Although retrieval practice per se did not enhance List 1 memory (i.e., no testing effect in the groups that did not learn List 2), it protected memory from RI (see Halamish & Bjork, 2011 for a similar result in a one-session study). 
Crucial for assessing the reconsolidation account is the comparison between the groups that learned List 2 either after List 1 recall or without prior List 1 recall. Contrary to the predictions derived from the reconsolidation account, final List 1 recall was enhanced when retrieval of List 1 preceded learning of List 2. While this clearly shows that testing counteracts RI, it would be premature to conclude that testing prevented the disruption of memory reconsolidation, because (a) retrieval practice without List 2 learning led to minimal forgetting between Day 2 and 3, while retrieval practice followed by List 2 learning led to significant memory decline, and (b) a reactivation condition that is independent from retrieval practice is missing. One could argue that repeating the cue words in List 2 likely reactivated memory for the original associations. It has been shown that the strength of reactivation (Detre, Natarajan, Gershman, & Norman, 2013) and the specific reminder structure (Forcato, Argibay, Pedreira, & Maldonado, 2009) determine whether or not a memory will be affected by post-reactivation procedures. The current study re-evaluates the question of how testing affects RI and intrusions. It uses a reconsolidation paradigm (Hupbach, Gomez, Hardt, & Nadel, 2007; Hupbach, Hardt, Gomez, & Nadel, 2008; Hupbach, Gomez, & Nadel, 2009; Hupbach, Gomez, & Nadel, 2011) to assess how testing in comparison to other reactivation procedures affects declarative memory. This paradigm will allow for a direct evaluation of the hypotheses that testing makes declarative memories vulnerable to interference, or that testing protects memories from the potential negative effects of subsequently learned material, as suggested by the list-separation hypothesis (e.g., Congleton & Rajaram, 2012; Halamish & Bjork, 2011; Szpunar et al., 2008). This question has important practical implications. For instance, when students test their memory while preparing for an exam, will such testing increase or reduce interference and intrusions from information that is learned afterwards?",
"title": ""
},
{
"docid": "69d42340c09303b69eafb19de7170159",
"text": "Based on an example of translational motion, this report shows how to model and initialize the Kalman Filter. Basic rules about physical motion are introduced to point out, that the well-known laws of physical motion are a mere approximation. Hence, motion of non-constant velocity or acceleration is modelled by additional use of white noise. Special attention is drawn to the matrix initialization for use in the Kalman Filter, as, in general, papers and books do not give any hint on this; thus inducing the impression that initializing is not important and may be arbitrary. For unknown matrices many users of the Kalman Filter choose the unity matrix. Sometimes it works, sometimes it does not. In order to close this gap, initialization is shown on the example of human interactive motion. In contrast to measuring instruments with documented measurement errors in manuals, the errors generated by vision-based sensoring must be estimated carefully. Of course, the described methods may be adapted to other circumstances.",
"title": ""
},
{
"docid": "2e3ffdd6e9ee0bfee5653c3f21422f7e",
"text": "Neural networks have recently solved many hard problems in Machine Learning, but their impact in control remains limited. Trajectory optimization has recently solved many hard problems in robotic control, but using it online remains challenging. Here we leverage the high-fidelity solutions obtained by trajectory optimization to speed up the training of neural network controllers. The two learning problems are coupled using the Alternating Direction Method of Multipliers (ADMM). This coupling enables the trajectory optimizer to act as a teacher, gradually guiding the network towards better solutions. We develop a new trajectory optimizer based on inverse contact dynamics, and provide not only the trajectories but also the feedback gains as training data to the network. The method is illustrated on rolling, reaching, swimming and walking tasks.",
"title": ""
},
{
"docid": "b5004502c5ce55f2327e52639e65d0b6",
"text": "Public health applications using social media often require accurate, broad-coverage location information. However, the standard information provided by social media APIs, such as Twitter, cover a limited number of messages. This paper presents Carmen, a geolocation system that can determine structured location information for messages provided by the Twitter API. Our system utilizes geocoding tools and a combination of automatic and manual alias resolution methods to infer location structures from GPS positions and user-provided profile data. We show that our system is accurate and covers many locations, and we demonstrate its utility for improving influenza surveillance.",
"title": ""
},
{
"docid": "e502cdbbbf557c8365b0d4b69745e225",
"text": "This half-day hands-on studio will teach how to design and develop effective interfaces for head mounted and wrist worn wearable computers through the application of user-centered design principles. Attendees will learn gain the knowledge and tools needed to rapidly develop prototype applications, and also complete a hands-on design task. They will also learn good design guidelines for wearable systems and how to apply those guidelines. A variety of tools will be used that do not require any hardware or software experience, many of which are free and/or open source. Attendees will also be provided with material that they can use to continue their learning after the studio is over.",
"title": ""
},
{
"docid": "17c6b63d850292f5f1c78e156103c3b4",
"text": "Continual learning is the constant development of complex behaviors with no nal end in mind. It is the process of learning ever more complicated skills by building on those skills already developed. In order for learning at one stage of development to serve as the foundation for later learning, a continual-learning agent should learn hierarchically. CHILD, an agent capable of Continual, Hierarchical, Incremental Learning and Development is proposed, described, tested, and evaluated in this dissertation. CHILD accumulates useful behaviors in reinforcement environments by using the Temporal Transition Hierarchies learning algorithm, also derived in the dissertation. This constructive algorithm generates a hierarchical, higher-order neural network that can be used for predicting context-dependent temporal sequences and can learn sequential-task benchmarks more than two orders of magnitude faster than competing neural-network systems. Consequently, CHILD can quickly solve complicated non-Markovian reinforcement-learning tasks and can then transfer its skills to similar but even more complicated tasks, learning these faster still. This continual-learning approach is made possible by the unique properties of Temporal Transition Hierarchies, which allow existing skills to be amended and augmented in precisely the same way that they were constructed in the rst place. Table of",
"title": ""
},
{
"docid": "4fc276d2f0ca869d84d372f4bb4622ac",
"text": "An electrocardiogram (ECG) is a bioelectrical signal which records the heart's electrical activity versus time. It is an important diagnostic tool for assessing heart functions. The early detection of arrhythmia is very important for the cardiac patients. ECG arrhythmia can be defined as any of a group of conditions in which the electrical activity of the heart is irregular and can cause heartbeat to be slow or fast. It can take place in a healthy heart and be of minimal consequence, but they may also indicate a serious problem that leads to stroke or sudden cardiac death. As ECG signal being non stationary signal, the arrhythmia may occur at random in the time-scale, which means, the arrhythmia symptoms may not show up all the time but would manifest at certain irregular intervals during the day. Thus, automatic classification of arrhythmia is critical in clinical cardiology, especially for the treatment of patients in the intensive care unit. This project implements a simulation tool on MATLAB platform to detect abnormalities in the ECG signal. The ECG signal is downloaded from MIT-BIH Arrhythmia database, since this signal contains some noise and artifacts hence pre-processing of ECG signal are performed first. The preprocessing of ECG signal is performed with help of Wavelet toolbox wherein baseline wandering, denoising and removal of high frequency and low frequency is performed to improve SNR ratio of ECG signal. The Wavelet toolbox is also used for feature extraction of ECG signal. Classification of arrhythmia is based on basic classification rules. The complete project is implemented on MATLAB platform. The performance of the algorithm is evaluated on MIT–BIH Database. The different types of arrhythmia classes including normal beat, Tachycardia, Bradycardia and Myocardial Infract (MI) are classified. KeywordsDb6 , feature extraction, arrhythmia.",
"title": ""
},
{
"docid": "41b8c1b04f11f5ac86d1d6e696007036",
"text": "The neural systems involved in hearing and repeating single words were investigated in a series of experiments using PET. Neuropsychological and psycholinguistic studies implicate the involvement of posterior and anterior left perisylvian regions (Wernicke's and Broca's areas). Although previous functional neuroimaging studies have consistently shown activation of Wernicke's area, there has been only variable implication of Broca's area. This study demonstrates that Broca's area is involved in both auditory word perception and repetition but activation is dependent on task (greater during repetition than hearing) and stimulus presentation (greater when hearing words at a slow rate). The peak of frontal activation in response to hearing words is anterior to that associated with repeating words; the former is probably located in Brodmann's area 45, the latter in Brodmann's area 44 and the adjacent precentral sulcus. As Broca's area activation is more subtle and complex than that in Wernicke's area during these tasks, the likelihood of observing it is influenced by both the study design and the image analysis technique employed. As a secondary outcome from the study, the response of bilateral auditory association cortex to 'own voice' during repetition was shown to be the same as when listening to \"other voice' from a prerecorded tape.",
"title": ""
},
{
"docid": "34e2eafd055e097e167afe7cb244f99b",
"text": "This paper describes the functional verification effort during a specific hardware development program that included three of the largest ASICs designed at Nortel. These devices marked a transition point in methodology as verification took front and centre on the critical path of the ASIC schedule. Both the simulation and emulation strategies are presented. The simulation methodology introduced new techniques such as ASIC sub-system level behavioural modeling, large multi-chip simulations, and random pattern simulations. The emulation strategy was based on a plan that consisted of integrating parts of the real software on the emulated system. This paper describes how these technologies were deployed, analyzes the bugs that were found and highlights the bottlenecks in functional verification as systems become more complex.",
"title": ""
},
{
"docid": "b42c9db51f55299545588a1ee3f7102f",
"text": "With the increasing development of Web 2.0, such as social media and online businesses, the need for perception of opinions, attitudes, and emotions grows rapidly. Sentiment analysis, the topic studying such subjective feelings expressed in text, has attracted significant attention from both the research community and industry. Although we have known sentiment analysis as a task of mining opinions expressed in text and analyzing the entailed sentiments and emotions, so far the task is still vaguely defined in the research literature because it involves many overlapping concepts and sub-tasks. Because this is an important area of scientific research, the field needs to clear this vagueness and define various directions and aspects in detail, especially for students, scholars, and developers new to the field. In fact, the field includes numerous natural language processing tasks with different aims (such as sentiment classification, opinion information extraction, opinion summarization, sentiment retrieval, etc.) and these have multiple solution paths. Bing Liu has done a great job in this book in providing a thorough exploration and an anatomy of the sentiment analysis problem and conveyed a wealth of knowledge about different aspects of the field.",
"title": ""
},
{
"docid": "b1d348e2095bd7054cc11bd84eb8ccdc",
"text": "Epidermolysis bullosa (EB) is a group of inherited, mechanobullous disorders caused by mutations in various structural proteins in the skin. There have been several advances in the classification of EB since it was first introduced in the late 19th century. We now recognize four major types of EB, depending on the location of the target proteins and level of the blisters: EB simplex (epidermolytic), junctional EB (lucidolytic), dystrophic EB (dermolytic), and Kindler syndrome (mixed levels of blistering). This contribution will summarize the most recent classification and discuss the molecular basis, target genes, and proteins involved. We have also included new subtypes, such as autosomal dominant junctional EB and autosomal recessive EB due to mutations in the dystonin (DST) gene, which encodes the epithelial isoform of bullouspemphigoid antigen 1. The main laboratory diagnostic techniques-immunofluorescence mapping, transmission electron microscopy, and mutation analysis-will also be discussed. Finally, the clinical characteristics of the different major EB types and subtypes will be reviewed.",
"title": ""
},
{
"docid": "9342e1adb849f07a385714a24ac2fea5",
"text": "MOTIVATION\nIn 2001 and 2002, we published two papers (Bioinformatics, 17, 282-283, Bioinformatics, 18, 77-82) describing an ultrafast protein sequence clustering program called cd-hit. This program can efficiently cluster a huge protein database with millions of sequences. However, the applications of the underlying algorithm are not limited to only protein sequences clustering, here we present several new programs using the same algorithm including cd-hit-2d, cd-hit-est and cd-hit-est-2d. Cd-hit-2d compares two protein datasets and reports similar matches between them; cd-hit-est clusters a DNA/RNA sequence database and cd-hit-est-2d compares two nucleotide datasets. All these programs can handle huge datasets with millions of sequences and can be hundreds of times faster than methods based on the popular sequence comparison and database search tools, such as BLAST.",
"title": ""
},
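The cd-hit record above is built around greedy incremental clustering by sequence identity. The sketch below illustrates only that core idea; it is not the actual cd-hit algorithm (which gains its speed from a short-word filter and banded alignment), and the naive position-wise identity function and 0.9 threshold are assumptions made for brevity.

```python
def identity(a, b):
    # Naive fraction of matching positions over the shorter sequence
    # (cd-hit itself uses a short-word filter plus banded alignment).
    n = min(len(a), len(b))
    if n == 0:
        return 0.0
    return sum(x == y for x, y in zip(a, b)) / n

def greedy_cluster(seqs, threshold=0.9):
    """Greedy incremental clustering: sort by length, longest first;
    each sequence joins the first cluster whose representative it
    matches at >= threshold identity, otherwise it founds a new cluster."""
    reps, clusters = [], []
    for s in sorted(seqs, key=len, reverse=True):
        for i, r in enumerate(reps):
            if identity(s, r) >= threshold:
                clusters[i].append(s)
                break
        else:
            reps.append(s)
            clusters.append([s])
    return clusters

if __name__ == "__main__":
    print(greedy_cluster(["MKTAYIAKQR", "MKTAYIAKQK", "GSHMLEDP"], threshold=0.8))
```

Sorting by length first means each cluster representative is its longest member, which mirrors cd-hit's overall behaviour.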
{
"docid": "5f70d96454e4a6b8d2ce63bc73c0765f",
"text": "The Natural Language Processing group at the University of Szeged has been involved in human language technology research since 1998, and by now, it has become one of the leading workshops of Hungarian computational linguistics. Both computer scientists and linguists enrich the team with their knowledge, moreover, MSc and PhD students are also involved in research activities. The team has gained expertise in the fields of information extraction, implementing basic language processing toolkits and creating language resources. The Group is primarily engaged in processing Hungarian and English texts and its general objective is to develop language-independent or easily adaptable technologies. With the creation of the manually annotated Szeged Corpus and TreeBank, as well as the Hungarian WordNet, SzegedNE and other corpora it has become possible to apply machine learning based methods for the syntactic and semantic analysis of Hungarian texts, which is one of the strengths of the group. They have also implemented novel solutions for the morphological and syntactic parsing of morphologically rich languages and they have also published seminal papers on computational semantics, i.e. uncertainty detection and multiword expressions. They have developed tools for basic linguistic processing of Hungarian, for named entity recognition and for keyphrase extraction, which can all be easily integrated into large-scale systems and are optimizable for the specific needs of the given application. Currently, the group’s research activities focus on the processing of non-canonical texts (e.g. social media texts) and on the implementation of a syntactic parser for Hungarian, among others.",
"title": ""
},
{
"docid": "c8d56c100db663ba532df4766e458345",
"text": "Decomposing sensory measurements into relevant parts is a fundamental prerequisite for solving complex tasks, e.g., in the field of mobile manipulation in domestic environments. In this paper, we present a fast approach to surface reconstruction in range images by means of approximate polygonal meshing. The obtained local surface information and neighborhoods are then used to 1) smooth the underlying measurements, and 2) segment the image into planar regions and other geometric primitives. An evaluation using publicly available data sets shows that our approach does not rank behind state-of-the-art algorithms while allowing to process range images at high frame rates.",
"title": ""
},
{
"docid": "c3473e7fe7b46628d384cbbe10bfe74c",
"text": "STUDY OBJECTIVE\nTo (1) examine the prevalence of abnormal genital findings in a large cohort of female children presenting with concerns of sexual abuse; and (2) explore how children use language when describing genital contact and genital anatomy.\n\n\nDESIGN\nIn this prospective study we documented medical histories and genital findings in all children who met inclusion criteria. Findings were categorized as normal, indeterminate, and diagnostic of trauma. Logistic regression analysis was used to determine the effects of key covariates on predicting diagnostic findings. Children older than 4 years of age were asked questions related to genital anatomy to assess their use of language.\n\n\nSETTING\nA regional, university-affiliated sexual abuse clinic.\n\n\nPARTICIPANTS\nFemale children (N = 1500) aged from birth to 17 years (inclusive) who received an anogenital examination with digital images.\n\n\nINTERVENTIONS AND MAIN OUTCOME MEASURES\nPhysical exam findings, medical history, and the child's use of language were recorded.\n\n\nRESULTS\nPhysical findings were determined in 99% (n = 1491) of patients. Diagnostic findings were present in 7% (99 of 1491). After adjusting for age, acuity, and type of sexual contact reported by the adult, the estimated odds of diagnostic findings were 12.5 times higher for children reporting genital penetration compared with those who reported only contact (95% confidence interval, 3.46-45.34). Finally, children used the word \"inside\" to describe contact other than penetration of the vaginal canal (ie, labial penetration).\n\n\nCONCLUSION\nA history of penetration by the child was the primary predictor of diagnostic findings. Interpretation of children's use of \"inside\" might explain the low prevalence of diagnostic findings and warrants further study.",
"title": ""
},
{
"docid": "4159eacb27d820fd7cb93dfb9c605dd4",
"text": "Auto-encoders compress input data into a latent-space representation and reconstruct the original data from the representation. This latent representation is not easily interpreted by humans. In this paper, we propose training an auto-encoder that encodes input text into human-readable sentences, and unpaired abstractive summarization is thereby achieved. The auto-encoder is composed of a generator and a reconstructor. The generator encodes the input text into a shorter word sequence, and the reconstructor recovers the generator input from the generator output. To make the generator output human-readable, a discriminator restricts the output of the generator to resemble human-written sentences. By taking the generator output as the summary of the input text, abstractive summarization is achieved without document-summary pairs as training data. Promising results are shown on both English and Chinese corpora.",
"title": ""
},
{
"docid": "b2e689cc561569f2c87e72aa955b54fe",
"text": "Ensemble learning is attracting much attention from pattern recognition and machine learning domains for good generalization. Both theoretical and experimental researches show that combining a set of accurate and diverse classifiers will lead to a powerful classification system. An algorithm, called FS-PP-EROS, for selective ensemble of rough subspaces is proposed in this paper. Rough set-based attribute reduction is introduced to generate a set of reducts, and then each reduct is used to train a base classifier. We introduce an accuracy-guided forward search and post-pruning strategy to select part of the base classifiers for constructing an efficient and effective ensemble system. The experiments show that classification accuracies of ensemble systems with accuracy-guided forward search strategy will increase at first, arrive at a maximal value, then decrease in sequentially adding the base classifiers. We delete the base classifiers added after the maximal accuracy. The experimental results show that the proposed ensemble systems outperform bagging and random subspace methods in terms of accuracy and size of ensemble systems. FS-PP-EROS can keep or improve the classification accuracy with very few base classifiers, which leads to a powerful and compact classification system. 2007 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved.",
"title": ""
},
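The FS-PP-EROS record above combines an accuracy-guided forward search with post-pruning of the base classifiers. A minimal sketch of that selection loop follows, assuming scikit-learn-style classifiers with a predict method, majority voting for combination, and non-negative integer class labels; none of these details are fixed by the abstract itself.

```python
import numpy as np

def vote(classifiers, X):
    # Majority vote; class labels are assumed to be non-negative integers.
    preds = np.array([clf.predict(X) for clf in classifiers])
    return np.array([np.bincount(col).argmax() for col in preds.T])

def forward_select_then_prune(classifiers, X_val, y_val):
    """Add base classifiers one at a time, always picking the one that
    maximises validation accuracy of the current ensemble; then prune
    everything added after the accuracy peak (post-pruning)."""
    selected, history = [], []
    remaining = list(classifiers)
    while remaining:
        best_acc, best_clf = -1.0, None
        for clf in remaining:
            acc = np.mean(vote(selected + [clf], X_val) == y_val)
            if acc > best_acc:
                best_acc, best_clf = acc, clf
        selected.append(best_clf)
        remaining.remove(best_clf)
        history.append(best_acc)
    peak = int(np.argmax(history))   # accuracy rises, peaks, then falls
    return selected[:peak + 1]       # drop classifiers added after the peak
```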
{
"docid": "949da61747af5cd33cc56a2163b7f7cc",
"text": "The tomato crop is an important staple in the Indian market with high commercial value and is produced in large quantities. Diseases are detrimental to the plant's health which in turn affects its growth. To ensure minimal losses to the cultivated crop, it is crucial to supervise its growth. There are numerous types of tomato diseases that target the crop's leaf at an alarming rate. This paper adopts a slight variation of the convolutional neural network model called LeNet to detect and identify diseases in tomato leaves. The main aim of the proposed work is to find a solution to the problem of tomato leaf disease detection using the simplest approach while making use of minimal computing resources to achieve results comparable to state of the art techniques. Neural network models employ automatic feature extraction to aid in the classification of the input image into respective disease classes. This proposed system has achieved an average accuracy of 94–95 % indicating the feasibility of the neural network approach even under unfavourable conditions.",
"title": ""
},
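The record above uses a slight variation of LeNet for tomato leaf disease classification. A rough PyTorch sketch of a LeNet-style network is given below; the input resolution, channel widths and the ten output classes are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class LeNetLike(nn.Module):
    """LeNet-style classifier; layer sizes and the 10 disease classes
    are illustrative assumptions rather than the paper's exact setup."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 6, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(6, 16, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 13 * 13, 120), nn.ReLU(),  # assumes 64x64 RGB input
            nn.Linear(120, 84), nn.ReLU(),
            nn.Linear(84, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = LeNetLike()
logits = model(torch.randn(1, 3, 64, 64))  # one fake 64x64 leaf image
```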
{
"docid": "db5f5f0b7599f1e9b3ebe81139eab1e6",
"text": "In the manufacturing industry, supply chain management is playing an important role in providing profit to the enterprise. Information that is useful in improving existing products and development of new products can be obtained from databases and ontology. The theory of inventive problem solving (TRIZ) supports designers of innovative product design by searching a knowledge base. The existing TRIZ ontology supports innovative design of specific products (Flashlight) for a TRIZ ontology. The research reported in this paper aims at developing a metaontology for innovative product design that can be applied to multiple products in different domain areas. The authors applied the semantic TRIZ to a product (Smart Fan) as an interim stage toward a metaontology that can manage general products and other concepts. Modeling real-world (Smart Pen and Smart Machine) ontologies is undertaken as an evaluation of the metaontology. This may open up new possibilities to innovative product designs. Innovative Product Design using Metaontology with Semantic TRIZ",
"title": ""
},
{
"docid": "082b1c341435ce93cfab869475ed32bd",
"text": "Given a graph where vertices are partitioned into k terminals and non-terminals, the goal is to compress the graph (i.e., reduce the number of non-terminals) using minor operations while preserving terminal distances approximately. The distortion of a compressed graph is the maximum multiplicative blow-up of distances between all pairs of terminals. We study the trade-off between the number of non-terminals and the distortion. This problem generalizes the Steiner Point Removal (SPR) problem, in which all non-terminals must be removed. We introduce a novel black-box reduction to convert any lower bound on distortion for the SPR problem into a super-linear lower bound on the number of non-terminals, with the same distortion, for our problem. This allows us to show that there exist graphs such that every minor with distortion less than 2 / 2.5 / 3 must have Ω(k2) / Ω(k5/4) / Ω(k6/5) non-terminals, plus more trade-offs in between. The black-box reduction has an interesting consequence: if the tight lower bound on distortion for the SPR problem is super-constant, then allowing any O(k) non-terminals will not help improving the lower bound to a constant. We also build on the existing results on spanners, distance oracles and connected 0-extensions to show a number of upper bounds for general graphs, planar graphs, graphs that exclude a fixed minor and bounded treewidth graphs. Among others, we show that any graph admits a minor with O(log k) distortion and O(k2) non-terminals, and any planar graph admits a minor with 1 + ε distortion and Õ((k/ε)2) non-terminals. 1998 ACM Subject Classification G.2.2 Graph Theory",
"title": ""
}
] |
scidocsrr
|
5c953b150016a442810c30ba1c79f65a
|
Image Segmentation by Probabilistic Bottom-Up Aggregation and Cue Integration
|
[
{
"docid": "1589e72380265787a10288c5ad906670",
"text": "The goal of this work is to accurately detect and localize boundaries in natural scenes using local image measurements. We formulate features that respond to characteristic changes in brightness, color, and texture associated with natural boundaries. In order to combine the information from these features in an optimal way, we train a classifier using human labeled images as ground truth. The output of this classifier provides the posterior probability of a boundary at each image location and orientation. We present precision-recall curves showing that the resulting detector significantly outperforms existing approaches. Our two main results are 1) that cue combination can be performed adequately with a simple linear model and 2) that a proper, explicit treatment of texture is required to detect boundaries in natural images.",
"title": ""
},
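The boundary-detection record above reports that combining brightness, color and texture cues works adequately with a simple linear model trained on human-labeled images. The sketch below shows such a linear (logistic) cue combiner; the cue features are assumed to be precomputed per pixel, and plain gradient-descent training is an illustrative choice, not the paper's fitting procedure.

```python
import numpy as np

def train_linear_combiner(F, y, lr=0.1, steps=2000):
    """F: (n_pixels, n_cues) cue responses; y: 0/1 human boundary labels.
    Returns weights of a logistic model for P(boundary | cues)."""
    X = np.hstack([F, np.ones((F.shape[0], 1))])      # add bias column
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)               # gradient of the log-loss
    return w

def boundary_posterior(F, w):
    # Posterior probability of a boundary at each pixel location.
    X = np.hstack([F, np.ones((F.shape[0], 1))])
    return 1.0 / (1.0 + np.exp(-X @ w))
```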
{
"docid": "e364a2ac82f42c87f88b6ed508dc0d8e",
"text": "In order to work well, many computer vision algorithms require that their parameters be adjusted according to the image noise level, making it an important quantity to estimate. We show how to estimate an upper bound on the noise level from a single image based on a piecewise smooth image prior model and measured CCD camera response functions. We also learn the space of noise level functions how noise level changes with respect to brightness and use Bayesian MAP inference to infer the noise level function from a single image. We illustrate the utility of this noise estimation for two algorithms: edge detection and featurepreserving smoothing through bilateral filtering. For a variety of different noise levels, we obtain good results for both these algorithms with no user-specified inputs.",
"title": ""
},
{
"docid": "8ca30cd6fd335024690837c137f0d1af",
"text": "Non-negative matrix factorization (NMF) is a recently deve loped technique for finding parts-based, linear representations of non-negative data. Although it h as successfully been applied in several applications, it does not always result in parts-based repr esentations. In this paper, we show how explicitly incorporating the notion of ‘sparseness’ impro ves the found decompositions. Additionally, we provide complete MATLAB code both for standard NMF a nd for our extension. Our hope is that this will further the application of these methods to olving novel data-analysis problems.",
"title": ""
}
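The record above extends non-negative matrix factorization with an explicit sparseness constraint. For orientation, here is a sketch of the standard multiplicative-update NMF that such work builds on; Hoyer's sparse variant additionally projects W and/or H onto a target sparseness level after each update, which is omitted here.

```python
import numpy as np

def nmf(V, rank, iters=200, eps=1e-9):
    """Basic NMF via multiplicative updates minimising ||V - W H||_F^2.
    The sparseness projection of Hoyer's extension is not included."""
    n, m = V.shape
    rng = np.random.default_rng(0)
    W = rng.random((n, rank))
    H = rng.random((rank, m))
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update H with W fixed
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update W with H fixed
    return W, H

V = np.abs(np.random.default_rng(1).random((20, 30)))
W, H = nmf(V, rank=5)
print(np.linalg.norm(V - W @ H))               # reconstruction error
```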
] |
[
{
"docid": "f370a8ff8722d341d6e839ec2c7217c1",
"text": "We give the first O(mpolylog(n)) time algorithms for approximating maximum flows in undirected graphs and constructing polylog(n)-quality cut-approximating hierarchical tree decompositions. Our algorithm invokes existing algorithms for these two problems recursively while gradually incorporating size reductions. These size reductions are in turn obtained via ultra-sparsifiers, which are key tools in solvers for symmetric diagonally dominant (SDD) linear systems.",
"title": ""
},
{
"docid": "99ddcb898895b04f4e86337fe35c1713",
"text": "Emerging self-driving vehicles are vulnerable to different attacks due to the principle and the type of communication systems that are used in these vehicles. These vehicles are increasingly relying on external communication via vehicular ad hoc networks (VANETs). VANETs add new threats to self-driving vehicles that contribute to substantial challenges in autonomous systems. These communication systems render self-driving vehicles vulnerable to many types of malicious attacks, such as Sybil attacks, Denial of Service (DoS), black hole, grey hole and wormhole attacks. In this paper, we propose an intelligent security system designed to secure external communications for self-driving and semi self-driving cars. The proposed scheme is based on Proportional Overlapping Score (POS) to decrease the number of features found in the Kyoto benchmark dataset. The hybrid detection system relies on the Back Propagation neural networks (BP), to detect a common type of attack in VANETs: Denial-of-Service (DoS). The experimental results show that the proposed BP-IDS is capable of identifying malicious vehicles in self-driving and semi self-driving vehicles.",
"title": ""
},
{
"docid": "496d0bfff9a88dd6c5c6641bad62c0cd",
"text": "Governments envisioning large-scale national egovernment policies increasingly draw on collaboration with private actors, yet the relationship between dynamics and outcomes of public-private partnership (PPP) is still unclear. The involvement of the banking sector in the emergence of a national electronic identification (e-ID) in Denmark is a case in point. Drawing on an analysis of primary and secondary data, we adopt the theoretical lens of collective action to investigate how transformations over time in the convergence of interests, the interdependence of resources, and the alignment of governance models between government and the banking sector shaped the emergence of the Danish national e-ID. We propose a process model to conceptualize paths towards the emergence of public-private collaboration for digital information infrastructure – a common good.",
"title": ""
},
{
"docid": "8ce498cdbdec9bda55970d39bd9d6bee",
"text": "This paper is about the good side of modal logic, the bad side of modal logic, and how hybrid logic takes the good and fixes the bad. In essence, modal logic is a simple formalism for working with relational structures (or multigraphs). But modal logic has no mechanism for referring to or reasoning about the individual nodes in such structures, and this lessens its effectiveness as a representation formalism. In their simplest form, hybrid logics are upgraded modal logics in which reference to individual nodes is possible. But hybrid logic is a rather unusual modal upgrade. It pushes one simple idea as far as it will go: represent all information as formulas. This turns out to be the key needed to draw together a surprisingly diverse range of work (for example, feature logic, description logic and labelled deduction). Moreover, it displays a number of knowledge representation issues in a new light, notably the importance of sorting.",
"title": ""
},
{
"docid": "03869f2ac07c13bbce6af743ea5d2551",
"text": "In this paper we present a novel vehicle detection method in traffic surveillance scenarios. This work is distinguished by three key contributions. First, a feature fusion backbone network is proposed to extract vehicle features which has the capability of modeling geometric transformations. Second, a vehicle proposal sub-network is applied to generate candidate vehicle proposals based on multi-level semantic feature maps. Finally, a head network is used to refine the categories and locations of these proposals. Benefits from the above cues, vehicles with large variation in occlusion and lighting conditions can be detected with high accuracy. Furthermore, the method also demonstrates robustness in the case of motion blur caused by rapid movement of vehicles. We test our network on DETRAC[21] benchmark detection challenge and it shows the state-of-theart performance. Specifically, the proposed method gets the best performances not only at 4 different level: overall, easy, medium and hard, but also in sunny, cloudy and night conditions.",
"title": ""
},
{
"docid": "49c7b5cab51301d8b921fa87d6c0b1ff",
"text": "We introduce the input output automa ton a simple but powerful model of computation in asynchronous distributed networks With this model we are able to construct modular hierarchical correct ness proofs for distributed algorithms We de ne this model and give an interesting example of how it can be used to construct such proofs",
"title": ""
},
{
"docid": "d4c7efe10b1444d0f9cb6032856ba4e1",
"text": "This article provides a brief overview of several classes of fiber reinforced cement based composites and suggests future directions in FRC development. Special focus is placed on micromechanics based design methodology of strain-hardening cement based composites. As example, a particular engineered cementitious composite newly developed at the ACE-MRL at the University of Michigan is described in detail with regard to its design, material composition, processing, and mechanical properties. Three potential applications which utilize the unique properties of such composites are cited in this paper, and future research needs are identified. * To appear in Fiber Reinforced Concrete: Present and the Future, Eds: N. Banthia, A. Bentur, and A. Mufti, Canadian Society of Civil Engineers, 1997.",
"title": ""
},
{
"docid": "ee80447709188fab5debfcf9b50a9dcb",
"text": "Prior research by Kornell and Bjork (2007) and Hartwig and Dunlosky (2012) has demonstrated that college students tend to employ study strategies that are far from optimal. We examined whether individuals in the broader—and typically older—population might hold different beliefs about how best to study and learn, given their more extensive experience outside of formal coursework and deadlines. Via a web-based survey, however, we found striking similarities: Learners’ study decisions tend to be driven by deadlines, and the benefits of activities such as self-testing and reviewing studied materials are elf-regulated learning etacognition indset tudy strategies mostly unappreciated. We also found evidence, however, that one’s mindset with respect to intelligence is related to one’s habits and beliefs: Individuals who believe that intelligence can be increased through effort were more likely to value the pedagogical benefits of self-testing, to restudy, and to be intrinsically motivated to learn, compared to individuals who believe that intelligence is fixed. © 2014 Society for Applied Research in Memory and Cognition. Published by Elsevier Inc. All rights With the world’s knowledge at our fingertips, there are increasng opportunities to learn on our own, not only during the years f formal education, but also across our lifespan as our careers, obbies, and interests change. The rapid pace of technological hange has also made such self-directed learning necessary: the bility to effectively self-regulate one’s learning—monitoring one’s wn learning and implementing beneficial study strategies—is, rguably, more important than ever before. Decades of research have revealed the efficacy of various study trategies (see Dunlosky, Rawson, Marsh, Nathan, & Willingham, 013, for a review of effective—and less effective—study techiques). Bjork (1994) coined the term, “desirable difficulties,” to efer to the set of study conditions or study strategies that appear to low down the acquisition of to-be-learned materials and make the earning process seem more effortful, but then enhance long-term etention and transfer, presumably because contending with those ifficulties engages processes that support learning and retention. xamples of desirable difficulties include generating information or esting oneself (instead of reading or re-reading information—a relPlease cite this article in press as: Yan, V. X., et al. Habits and beliefs Journal of Applied Research in Memory and Cognition (2014), http://dx.d tively passive activity), spacing out repeated study opportunities instead of cramming), and varying conditions of practice (rather han keeping those conditions constant and predictable). ∗ Corresponding author at: 1285 Franz Hall, Department of Psychology, University f California, Los Angeles, CA 90095, United States. Tel.: +1 310 954 6650. E-mail address: veronicayan@ucla.edu (V.X. Yan). ttp://dx.doi.org/10.1016/j.jarmac.2014.04.003 211-3681/© 2014 Society for Applied Research in Memory and Cognition. Published by reserved. Many recent findings, however—both survey-based and experimental—have revealed that learners continue to study in non-optimal ways. 
Learners do not appear, for example, to understand two of the most robust effects from the cognitive psychology literature—namely, the testing effect (that practicing retrieval leads to better long-term retention, compared even to re-reading; e.g., Roediger & Karpicke, 2006a) and the spacing effect (that spacing repeated study sessions leads to better long-term retention than does massing repetitions; e.g., Cepeda, Pashler, Vul, Wixted, & Rohrer, 2006; Dempster, 1988). A survey of 472 undergraduate students by Kornell and Bjork (2007)—which was replicated by Hartwig and Dunlosky (2012)—showed that students underappreciate the learning benefits of testing. Similarly, Karpicke, Butler, and Roediger (2009) surveyed students’ study strategies and found that re-reading was by far the most popular study strategy and that self-testing tended to be used only to assess whether some level of learning had been achieved, not to enhance subsequent recall. Even when students have some appreciation of effective strategies they often do not implement those strategies. Susser and McCabe (2013), for example, showed that even though students reported understanding the benefits of spaced learning over massed learning, they often do not space their study sessions on a given topic, particularly if their upcoming test is going to have a that guide self-regulated learning: Do they vary with mindset? oi.org/10.1016/j.jarmac.2014.04.003 multiple-choice format, or if they think the material is relatively easy, or if they are simply too busy. In fact, Kornell and Bjork’s (2007) survey showed that students’ study decisions tended to be driven by impending deadlines, rather than by learning goals, Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "07817eb2722fb434b1b8565d936197cf",
"text": "We recently have witnessed many ground-breaking results in machine learning and computer vision, generated by using deep convolutional neural networks (CNN). While the success mainly stems from the large volume of training data and the deep network architectures, the vector processing hardware (e.g. GPU) undisputedly plays a vital role in modern CNN implementations to support massive computation. Though much attention was paid in the extent literature to understand the algorithmic side of deep CNN, little research was dedicated to the vectorization for scaling up CNNs. In this paper, we studied the vectorization process of key building blocks in deep CNNs, in order to better understand and facilitate parallel implementation. Key steps in training and testing deep CNNs are abstracted as matrix and vector operators, upon which parallelism can be easily achieved. We developed and compared six implementations with various degrees of vectorization with which we illustrated the impact of vectorization on the speed of model training and testing. Besides, a unified CNN framework for both high-level and low-level vision tasks is provided, along with a vectorized Matlab implementation with state-of-the-art speed performance.",
"title": ""
},
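The record above abstracts the building blocks of CNN training and testing as matrix and vector operators. The classic example of that idea is im2col, which turns 2-D convolution into a single matrix multiplication; the sketch below is a minimal single-channel, stride-1, no-padding version (and, as in most CNN libraries, it actually computes cross-correlation rather than flipped convolution).

```python
import numpy as np

def im2col(x, k):
    """Unfold every k x k patch of a 2-D image into a column."""
    H, W = x.shape
    cols = [x[i:i + k, j:j + k].ravel()
            for i in range(H - k + 1) for j in range(W - k + 1)]
    return np.array(cols).T                       # shape (k*k, n_patches)

def conv2d_as_matmul(x, kernels):
    """kernels: (n_filters, k, k). The convolution becomes one GEMM."""
    k = kernels.shape[1]
    cols = im2col(x, k)                           # (k*k, n_patches)
    out = kernels.reshape(len(kernels), -1) @ cols
    H, W = x.shape
    return out.reshape(len(kernels), H - k + 1, W - k + 1)

x = np.random.rand(8, 8)
kernels = np.random.rand(4, 3, 3)
print(conv2d_as_matmul(x, kernels).shape)         # (4, 6, 6)
```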
{
"docid": "bd3e5a403cc42952932a7efbd0d57719",
"text": "The acoustic echo cancellation system is very important in the communication applications that are used these days; in view of this importance we have implemented this system practically by using DSP TMS320C6713 Starter Kit (DSK). The acoustic echo cancellation system was implemented based on 8 subbands techniques using Least Mean Square (LMS) algorithm and Normalized Least Mean Square (NLMS) algorithm. The system was evaluated by measuring the performance according to Echo Return Loss Enhancement (ERLE) factor and Mean Square Error (MSE) factor. Keywords—Acoustic echo canceller; Least Mean Square (LMS); Normalized Least Mean Square (NLMS); TMS320C6713; 8 subbands adaptive filter",
"title": ""
},
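The echo-cancellation record above runs LMS/NLMS adaptive filters in 8 subbands. The sketch below shows a single full-band NLMS filter only; in the paper's subband setup one such filter would run per subband after analysis filtering, and the tap count and step size used here are arbitrary illustrative values.

```python
import numpy as np

def nlms_echo_cancel(far_end, mic, taps=64, mu=0.5, eps=1e-6):
    """Full-band NLMS adaptive filter: far_end is the loudspeaker signal,
    mic is the microphone signal containing the acoustic echo; the
    returned error signal is the echo-cancelled output."""
    w = np.zeros(taps)
    err = np.zeros(len(mic))
    for n in range(taps, len(mic)):
        x = far_end[n - taps:n][::-1]        # most recent sample first
        y_hat = w @ x                        # estimated echo
        e = mic[n] - y_hat                   # residual after cancellation
        w += mu * e * x / (eps + x @ x)      # normalised LMS update
        err[n] = e
    return err, w
```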
{
"docid": "61d31ebda0f9c330e5d86639e0bd824e",
"text": "An electric vehicle (EV) aggregation agent, as a commercial middleman between electricity market and EV owners, participates with bids for purchasing electrical energy and selling secondary reserve. This paper presents an optimization approach to support the aggregation agent participating in the day-ahead and secondary reserve sessions, and identifies the input variables that need to be forecasted or estimated. Results are presented for two years (2009 and 2010) of the Iberian market, and considering perfect and naïve forecast for all variables of the problem.",
"title": ""
},
{
"docid": "05b490844f02e0fefe018022c1032c1c",
"text": "This document describes how to use ms, a program to generate samples under a variety of neutral models. The purpose of this program is to allow one to investigate the statistical properties of such samples, to evaluate estimators or statistical tests, and generally to aid in the interpretation of polymorphism data sets. The typical data set is obtained in a resequencing study in which the same homologous segment is sequenced in several individuals sampled from a population. The classic example of such a data set is the Adh study of Kreitman(1983) in which 11 copies of the Adh gene of Drosophila melanogaster were sequenced. In this case, the copies were isolated from 11 different strains of D. melanogaster collected from scattered locations around the globe. The program ms can be used to generate many independent replicate samples under a variety of assumptions about migration, recombination rate and population size to aid in the interpretation of such polymorphism studies. The samples are generated using the now standard coalescent approach in which the random genealogy of the sample is first generated and then mutations are randomly place on the genealogy (Kingman, 1982; Hudson, 1990; Nordborg, 2001). The usual small sample approximations of the coalescent are used. An infinitesites model of mutation is assumed, and thus multiple-hits and back mutations do not occur. However, when used in conjunction with other programs, finitesite mutation models or micro-satellite models can be studied. For example, the gene trees themselves can be output, and these gene trees can be used as input to other programs which will evolve the sequences under a variety of finite-site models. These are described later. The program is intended to run on Unix, or Unix-like operating systems, such as Linux or MacOsX. The next section describes how to download and compile the program. The subsequent sections described how to run the program and in particular how to specify the parameter values for the simulations. If you use ms for published research, the appropriate citation is:",
"title": ""
},
{
"docid": "1470269bfde3dbbda63ae583bebdfe0f",
"text": "Acquiring local context information and sharing it among co-located devices is critical for emerging pervasive computing applications. The devices belonging to a group of co-located people may need to detect a shared activity (e.g., a meeting) to adapt their devices to support the activity. Today's devices are almost universally equipped with device-to-device communication that easily enables direct context sharing. While existing context sharing models tend not to consider devices' resource limitations or users' constraints, enabling devices to directly share context has significant benefits for efficiency, cost, and privacy. However, as we demonstrate quantitatively, when devices share context via device-to-device communication, it needs to be represented in a size-efficient way that does not sacrifice its expressiveness or accuracy. We present CHITCHAT, a suite of context representations that allows application developers to tune tradeoffs between the size of the representation, the flexibility of the application to update context information, the energy required to create and share context, and the quality of the information shared. We can substantially reduce the size of context representation (thereby reducing applications' overheads when they share their contexts with one another) with only a minimal reduction in the quality of shared contexts.",
"title": ""
},
{
"docid": "e056192e11fb6430ec1d3e64c2336df3",
"text": "Teleological explanations (TEs) account for the existence or properties of an entity in terms of a function: we have hearts because they pump blood, and telephones for communication. While many teleological explanations seem appropriate, others are clearly not warranted--for example, that rain exists for plants to grow. Five experiments explore the theoretical commitments that underlie teleological explanations. With the analysis of [Wright, L. (1976). Teleological Explanations. Berkeley, CA: University of California Press] from philosophy as a point of departure, we examine in Experiment 1 whether teleological explanations are interpreted causally, and confirm that TEs are only accepted when the function invoked in the explanation played a causal role in bringing about what is being explained. However, we also find that playing a causal role is not sufficient for all participants to accept TEs. Experiment 2 shows that this is not because participants fail to appreciate the causal structure of the scenarios used as stimuli. In Experiments 3-5 we show that the additional requirement for TE acceptance is that the process by which the function played a causal role must be general in the sense of conforming to a predictable pattern. These findings motivate a proposal, Explanation for Export, which suggests that a psychological function of explanation is to highlight information likely to subserve future prediction and intervention. We relate our proposal to normative accounts of explanation from philosophy of science, as well as to claims from psychology and artificial intelligence.",
"title": ""
},
{
"docid": "363236815299994c5d155ab2c64b4387",
"text": "The objective of this work is to infer the 3D shape of an object from a single image. We use sculptures as our training and test bed, as these have great variety in shape and appearance. To achieve this we build on the success of multiple view geometry (MVG) which is able to accurately provide correspondences between images of 3D objects under varying viewpoint and illumination conditions, and make the following contributions: first, we introduce a new loss function that can harness image-to-image correspondences to provide a supervisory signal to train a deep network to infer a depth map. The network is trained end-to-end by differentiating through the camera. Second, we develop a processing pipeline to automatically generate a large scale multi-view set of correspondences for training the network. Finally, we demonstrate that we can indeed obtain a depth map of a novel object from a single image for a variety of sculptures with varying shape/texture, and that the network generalises at test time to new domains (e.g. synthetic images).",
"title": ""
},
{
"docid": "9de7af8824594b5de7d510c81585c61b",
"text": "The adoption of business process improvement strategies is a challenge to organizations trying to improve the quality and productivity of their services. The quest for the benefits of this improvement on resource optimization and the responsiveness of the organizations has raised several proposals for business process improvement approaches. However, proposals and results of scientific research on process improvement in higher education institutions, extremely complex and unique organizations, are still scarce. This paper presents a method that provides guidance about how practices and knowledge are gathered to contribute for business process improvement based on the communication between different stakeholders.",
"title": ""
},
{
"docid": "85d4675562eb87550c3aebf0017e7243",
"text": "Online social media are complementing and in some cases replacing person-to-person social interaction and redefining the diffusion of information. In particular, microblogs have become crucial grounds on which public relations, marketing, and political battles are fought. We introduce an extensible framework that will enable the real-time analysis of meme diffusion in social media by mining, visualizing, mapping, classifying, and modeling massive streams of public microblogging events. We describe a Web service that leverages this framework to track political memes in Twitter and help detect astroturfing, smear campaigns, and other misinformation in the context of U.S. political elections. We present some cases of abusive behaviors uncovered by our service. Finally, we discuss promising preliminary results on the detection of suspicious memes via supervised learning based on features extracted from the topology of the diffusion networks, sentiment analysis, and crowdsourced annotations.",
"title": ""
},
{
"docid": "6b05fda194ac3a441a236de04bcc5fc2",
"text": "We have developed a humanoid robot (a cybernetic human called “HRP-4C”) which has the appearance and shape of a human being, can walk and move like one, and interacts with humans using speech recognition. Standing 158 cm tall and weighing 43 kg (including the battery), with the joints and dimensions set to average values for young Japanese females, HRP-4C looks very human-like. In this paper, we present ongoing challenges to create a new bussiness in the contents industry with HRP-4C.",
"title": ""
},
{
"docid": "719783be7139d384d24202688f7fc555",
"text": "Big sensing data is prevalent in both industry and scientific research applications where the data is generated with high volume and velocity. Cloud computing provides a promising platform for big sensing data processing and storage as it provides a flexible stack of massive computing, storage, and software services in a scalable manner. Current big sensing data processing on Cloud have adopted some data compression techniques. However, due to the high volume and velocity of big sensing data, traditional data compression techniques lack sufficient efficiency and scalability for data processing. Based on specific on-Cloud data compression requirements, we propose a novel scalable data compression approach based on calculating similarity among the partitioned data chunks. Instead of compressing basic data units, the compression will be conducted over partitioned data chunks. To restore original data sets, some restoration functions and predictions will be designed. MapReduce is used for algorithm implementation to achieve extra scalability on Cloud. With real world meteorological big sensing data experiments on U-Cloud platform, we demonstrate that the proposed scalable compression approach based on data chunk similarity can significantly improve data compression efficiency with affordable data accuracy loss.",
"title": ""
}
] |
scidocsrr
|
024e4e329eff07ec01825a021ec03149
|
SCNet: A simplified encoder-decoder CNN for semantic segmentation
|
[
{
"docid": "1eba8eccf88ddb44a88bfa4a937f648f",
"text": "We present a deep learning framework for probabilistic pixel-wise semantic segmentation, which we term Bayesian SegNet. Semantic segmentation is an important tool for visual scene understanding and a meaningful measure of uncertainty is essential for decision making. Our contribution is a practical system which is able to predict pixelwise class labels with a measure of model uncertainty using Bayesian deep learning. We achieve this by Monte Carlo sampling with dropout at test time to generate a posterior distribution of pixel class labels. In addition, we show that modelling uncertainty improves segmentation performance by 2-3% across a number of datasets and architectures such as SegNet, FCN, Dilation Network and DenseNet.",
"title": ""
}
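The Bayesian SegNet record above obtains pixel-wise uncertainty by Monte Carlo sampling with dropout kept active at test time. A minimal PyTorch sketch of that sampling loop for a generic classifier is given below; the number of samples is an arbitrary choice, and the model is assumed to contain dropout layers.

```python
import torch

def mc_dropout_predict(model, x, n_samples=20):
    """Keep dropout stochastic at test time and average the softmax
    outputs; the per-class variance acts as a rough uncertainty estimate.
    Note: model.train() also affects batch-norm layers, which in a real
    network should be switched back to eval mode individually."""
    model.train()
    with torch.no_grad():
        probs = torch.stack([
            torch.softmax(model(x), dim=1) for _ in range(n_samples)
        ])
    return probs.mean(dim=0), probs.var(dim=0)
```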
] |
[
{
"docid": "cd4e04370b1e8b1f190a3533c3f4afe2",
"text": "Perception of depth is a central problem m machine vision. Stereo is an attractive technique for depth perception because, compared with monocular techniques, it leads to more direct, unambiguous, and quantitative depth measurements, and unlike \"active\" approaches such as radar and laser ranging, it is suitable in almost all application domains. Computational stereo is broadly defined as the recovery of the three-dimensional characteristics of a scene from multiple images taken from different points of view. First, each of the functional components of the computational stereo paradigm--image acquLsition, camera modeling, feature acquisition, image matching, depth determination, and interpolation--is identified and discussed. Then, the criteria that are important for evaluating the effectiveness of various computational stereo techniques are presented. Finally a representative sampling of computational stereo research is provided.",
"title": ""
},
{
"docid": "3575842a3306a11bfcc5b370c6d67daf",
"text": "BACKGROUND AND PURPOSE\nMental practice (MP) of a particular motor skill has repeatedly been shown to activate the same musculature and neural areas as physical practice of the skill. Pilot study results suggest that a rehabilitation program incorporating MP of valued motor skills in chronic stroke patients provides sufficient repetitive practice to increase affected arm use and function. This Phase 2 study compared efficacy of a rehabilitation program incorporating MP of specific arm movements to a placebo condition using randomized controlled methods and an appropriate sample size. Method- Thirty-two chronic stroke patients (mean=3.6 years) with moderate motor deficits received 30-minute therapy sessions occurring 2 days/week for 6 weeks, and emphasizing activities of daily living. Subjects randomly assigned to the experimental condition also received 30-minute MP sessions provided directly after therapy requiring daily MP of the activities of daily living; subjects assigned to the control group received the same amount of therapist interaction as the experimental group, and a sham intervention directly after therapy, consisting of relaxation. Outcomes were evaluated by a blinded rater using the Action Research Arm test and the upper extremity section of the Fugl-Meyer Assessment.\n\n\nRESULTS\nNo pre-existing group differences were found on any demographic variable or movement scale. Subjects receiving MP showed significant reductions in affected arm impairment and significant increases in daily arm function (both at the P<0.0001 level). Only patients in the group receiving MP exhibited new ability to perform valued activities.\n\n\nCONCLUSIONS\nThe results support the efficacy of programs incorporating mental practice for rehabilitating affected arm motor function in patients with chronic stroke. These changes are clinically significant.",
"title": ""
},
{
"docid": "e3079a6c47a804498cea0caf804d6f11",
"text": "For realtime walking control of a biped robot, we analyze the dynamics of a three-dimensional inverted pendulum whose motions are constrained onto an arbitrarily defined plane. This analysis leads us a simple linear dynamics, the Three-Dimensional Linear Inverted Pendulum Mode (3D-LIPM). Geometric nature of trajectories under the 3D-LIPM is discussed, and an algorithm for walking pattern generation is presented. Experimental results of realtime walking control of a 12 d.o.f. biped robot HRP-2L using an input device such as a game pad are also shown.",
"title": ""
},
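The record above constrains the inverted-pendulum centre of mass to a plane, which for a horizontal plane yields linear dynamics with the closed-form solution sketched below. The pendulum height, gravity and initial conditions are illustrative values, not parameters of HRP-2L.

```python
import numpy as np

def lipm_com_trajectory(x0, v0, z_c=0.8, g=9.81, T=0.5, dt=0.01):
    """Closed-form centre-of-mass motion of the linear inverted pendulum
    about a fixed support point, for a horizontal constraint plane at
    height z_c. The same equations govern the y direction independently,
    which is what makes the 3D-LIPM convenient for real-time gait design."""
    Tc = np.sqrt(z_c / g)                        # time constant of the pendulum
    t = np.arange(0.0, T, dt)
    x = x0 * np.cosh(t / Tc) + Tc * v0 * np.sinh(t / Tc)
    v = (x0 / Tc) * np.sinh(t / Tc) + v0 * np.cosh(t / Tc)
    return t, x, v

t, x, v = lipm_com_trajectory(x0=-0.05, v0=0.4)  # one support phase, illustrative values
```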
{
"docid": "0bb73266d8e4c18503ccda4903856e44",
"text": "Recent progress in advanced driver assistance systems and the race towards autonomous vehicles is mainly driven by two factors: (1) increasingly sophisticated algorithms that interpret the environment around the vehicle and react accordingly, and (2) the continuous improvements of sensor technology itself. In terms of cameras, these improvements typically include higher spatial resolution, which as a consequence requires more data to be processed. The trend to add multiple cameras to cover the entire surrounding of the vehicle is not conducive in that matter. At the same time, an increasing number of special purpose algorithms need access to the sensor input data to correctly interpret the various complex situations that can occur, particularly in urban traffic. By observing those trends, it becomes clear that a key challenge for vision architectures in intelligent vehicles is to share computational resources. We believe this challenge should be faced by introducing a representation of the sensory data that provides compressed and structured access to all relevant visual content of the scene. The Stixel World discussed in this paper is such a representation. It is a medium-level model of the environment that is specifically designed to compress information about obstacles by leveraging the typical layout of outdoor traffic scenes. It has proven useful for a multi∗Corresponding author: marius.cordts@daimler.com Authors contributed equally and are listed in alphabetical order Preprint submitted to Image and Vision Computing February 14, 2017 tude of automotive vision applications, including object detection, tracking, segmentation, and mapping. In this paper, we summarize the ideas behind the model and generalize it to take into account multiple dense input streams: the image itself, stereo depth maps, and semantic class probability maps that can be generated, e.g ., by deep convolutional neural networks. Our generalization is embedded into a novel mathematical formulation for the Stixel model. We further sketch how the free parameters of the model can be learned using structured SVMs.",
"title": ""
},
{
"docid": "b17569bbae715d3185b6f9793c6cd3eb",
"text": "This paper presents an approach to the design of a novel dual-band power divider with variable power dividing ratio. To achieve dual-band operation, a novel dual-band quarter-wave length transformer based on coupled-lines is proposed, which is used to replace the quarter-wave length transformer in Wilkinson power divider. The proposed dual-band power divider features a simple compact planar structure with wide bandwidth performance for small frequency ratio. Closed-form design equations with one degree of design freedom are derived using evenand odd-mode analysis and transmission line theory. For verification purpose, power dividers operating at 2.4/3.8 GHz with dividing ratios of 2 : 1 and 1 : 1 are designed, simulated and measured. The simulated and measured results are in good agreement.",
"title": ""
},
{
"docid": "56d9c09cc01854e0be889e63f512165a",
"text": "CONTEXT\nRapid opioid detoxification with opioid antagonist induction using general anesthesia has emerged as an expensive, potentially dangerous, unproven approach to treat opioid dependence.\n\n\nOBJECTIVE\nTo determine how anesthesia-assisted detoxification with rapid antagonist induction for heroin dependence compared with 2 alternative detoxification and antagonist induction methods.\n\n\nDESIGN, SETTING, AND PATIENTS\nA total of 106 treatment-seeking heroin-dependent patients, aged 21 through 50 years, were randomly assigned to 1 of 3 inpatient withdrawal treatments over 72 hours followed by 12 weeks of outpatient naltrexone maintenance with relapse prevention psychotherapy. This randomized trial was conducted between 2000 and 2003 at Columbia University Medical Center's Clinical Research Center. Outpatient treatment occurred at the Columbia University research service for substance use disorders. Patients were included if they had an American Society of Anesthesiologists physical status of I or II, were without major comorbid psychiatric illness, and were not dependent on other drugs or alcohol.\n\n\nINTERVENTIONS\nAnesthesia-assisted rapid opioid detoxification with naltrexone induction, buprenorphine-assisted rapid opioid detoxification with naltrexone induction, and clonidine-assisted opioid detoxification with delayed naltrexone induction.\n\n\nMAIN OUTCOME MEASURES\nWithdrawal severity scores on objective and subjective scales; proportions of patients receiving naltrexone, completing inpatient detoxification, and retained in treatment; proportion of opioid-positive urine specimens.\n\n\nRESULTS\nMean withdrawal severities were comparable across the 3 treatments. Compared with clonidine-assisted detoxification, the anesthesia- and buprenorphine-assisted detoxification interventions had significantly greater rates of naltrexone induction (94% anesthesia, 97% buprenorphine, and 21% clonidine), but the groups did not differ in rates of completion of inpatient detoxification. Treatment retention over 12 weeks was not significantly different among groups with 7 of 35 (20%) retained in the anesthesia-assisted group, 9 of 37 (24%) in the buprenorphine-assisted group, and 3 of 34 (9%) in the clonidine-assisted group. Induction with 50 mg of naltrexone significantly reduced the risk of dropping out (odds ratio, 0.28; 95% confidence interval, 0.15-0.51). There were no significant group differences in proportions of opioid-positive urine specimens. The anesthesia procedure was associated with 3 potentially life-threatening adverse events.\n\n\nCONCLUSION\nThese data do not support the use of general anesthesia for heroin detoxification and rapid opioid antagonist induction.",
"title": ""
},
{
"docid": "6ebf60b36d9a13c5ae6ded91ee7d95fe",
"text": "In this paper, a novel approach for Kannada, Telugu and Devanagari handwritten numerals recognition based on global and local structural features is proposed. Probabilistic Neural Network (PNN) Classifier is used to classify the Kannada, Telugu and Devanagari numerals separately. Algorithm is validated with Kannada, Telugu and Devanagari numerals dataset by setting various radial values of PNN classifier under different experimental setup. The experimental results obtained are encouraging and comparable with other methods found in literature survey. The novelty of the proposed method is free from thinning and size",
"title": ""
},
{
"docid": "8cd8e10e371085a48acc52dc594847bd",
"text": "We analyze in this paper a number of data sets proposed over the last decade or so for the task of paraphrase identification. The goal of the analysis is to identify the advantages as well as shortcomings of the previously proposed data sets. Based on the analysis, we then make recommendations about how to improve the process of creating and using such data sets for evaluating in the future approaches to the task of paraphrase identification or the more general task of semantic similarity. The recommendations are meant to improve our understanding of what a paraphrase is, offer a more fair ground for comparing approaches, increase the diversity of actual linguistic phenomena that future data sets will cover, and offer ways to improve our understanding of the contributions of various modules or approaches proposed for solving the task of paraphrase identification or similar tasks. We also developed a data collection tool, called Data Collector, that proactively targets the collection of paraphrase instances covering linguistic phenomena important to paraphrasing.",
"title": ""
},
{
"docid": "e7bb89000329245bccdecbc80549109c",
"text": "This paper presents a tutorial overview of the use of coupling between nonadjacent resonators to produce transmission zeros at real frequencies in microwave filters. Multipath coupling diagrams are constructed and the relative phase shifts of multiple paths are observed to produce the known responses of the cascaded triplet and quadruplet sections. The same technique is also used to explore less common nested cross-coupling structures and to predict their behavior. A discussion of the effects of nonzero electrical length coupling elements is presented. Finally, a brief categorization of the various synthesis and implementation techniques available for these types of filters is given.",
"title": ""
},
{
"docid": "9f24cf3e8fde24d4622d9f71a2c7998f",
"text": "Most of the previous work on video action recognition use complex hand-designed local features, such as SIFT, HOG and SURF, but these approaches are implemented sophisticatedly and difficult to be extended to other sensor modalities. Recent studies discover that there are no universally best hand-engineered features for all datasets, and learning features directly from the data may be more advantageous. One such endeavor is Slow Feature Analysis (SFA) proposed by Wiskott and Sejnowski [33]. SFA can learn the invariant and slowly varying features from input signals and has been proved to be valuable in human action recognition [34]. It is also observed that the multi-layer feature representation has succeeded remarkably in widespread machine learning applications. In this paper, we propose to combine SFA with deep learning techniques to learn hierarchical representations from the video data itself. Specifically, we use a two-layered SFA learning structure with 3D convolution and max pooling operations to scale up the method to large inputs and capture abstract and structural features from the video. Thus, the proposed method is suitable for action recognition. At the same time, sharing the same merits of deep learning, the proposed method is generic and fully automated. Our classification results on Hollywood2, KTH and UCF Sports are competitive with previously published results. To highlight some, on the KTH dataset, our recognition rate shows approximately 1% improvement in comparison to state-of-the-art methods even without supervision or dense sampling.",
"title": ""
},
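The record above builds a two-layer architecture on top of Slow Feature Analysis. As a reference point, the sketch below implements plain linear SFA (whiten the signals, then take the directions along which the temporal derivative varies least); the paper's pipeline additionally uses 3D convolution, max pooling and a second SFA layer, none of which are shown here.

```python
import numpy as np

def linear_sfa(X, n_features=2):
    """Linear slow feature analysis: whiten the input, then find the
    directions in whitened space with minimal variance of the temporal
    derivative (smallest eigenvalues of the derivative covariance)."""
    X = X - X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    d, E = np.linalg.eigh(cov)
    W_white = E @ np.diag(1.0 / np.sqrt(d + 1e-12)) @ E.T
    Z = X @ W_white
    dZ = np.diff(Z, axis=0)
    d2, V = np.linalg.eigh(np.cov(dZ, rowvar=False))
    return Z @ V[:, :n_features]     # eigh sorts eigenvalues ascending

T = np.linspace(0, 4 * np.pi, 500)
X = np.column_stack([np.sin(T) + 0.1 * np.random.randn(500),
                     np.sin(20 * T)])
slow = linear_sfa(X, n_features=1)   # the slowest feature should track the low-frequency sine
```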
{
"docid": "7034be316fcc2862d896b51662939c40",
"text": "This article presents HICCUPS (HIdden Communication system for CorrUPted networkS), a steganographic system dedicated to shared medium networks including wireless local area networks. The novelty of HICCUPS is: usage of secure telecommunications network armed with cryptographic mechanisms to provide steganographic system and proposal of new protocol with bandwidth allocation based on corrupted frames. All functional parts of the system and possibility of its implementation in existing public networks are discussed. An example of implementation framework for wireless local area networks IEEE 802.11 is also presented.",
"title": ""
},
{
"docid": "055c9fad6d2f246fc1b6cbb1bce26a92",
"text": "This work uses deep learning models for daily directional movements prediction of a stock price using financial news titles and technical indicators as input. A comparison is made between two different sets of technical indicators, set 1: Stochastic %K, Stochastic %D, Momentum, Rate of change, William’s %R, Accumulation/Distribution (A/D) oscillator and Disparity 5; set 2: Exponential Moving Average, Moving Average Convergence-Divergence, Relative Strength Index, On Balance Volume and Bollinger Bands. Deep learning methods can detect and analyze complex patterns and interactions in the data allowing a more precise trading process. Experiments has shown that Convolutional Neural Network (CNN) can be better than Recurrent Neural Networks (RNN) on catching semantic from texts and RNN is better on catching the context information and modeling complex temporal characteristics for stock market forecasting. So, there are two models compared in this paper: a hybrid model composed by a CNN for the financial news and a Long Short-Term Memory (LSTM) for technical indicators, named as SI-RCNN; and a LSTM network only for technical indicators, named as I-RNN. The output of each model is used as input for a trading agent that buys stocks on the current day and sells the next day when the model predicts that the price is going up, otherwise the agent sells stocks on the current day and buys the next day. The proposed method shows a major role of financial news in stabilizing the results and almost no improvement when comparing different sets of technical indicators.",
"title": ""
},
{
"docid": "4ad4cd6cc77dae0fea4f2cc05651cec4",
"text": "BACKGROUND\nDementia is a clinical syndrome with a number of different causes which is characterised by deterioration in cognitive, behavioural, social and emotional functions. Pharmacological interventions are available but have limited effect to treat many of the syndrome's features. Less research has been directed towards non-pharmacological treatments. In this review, we examined the evidence for effects of music-based interventions as a treatment.\n\n\nOBJECTIVES\nTo assess the effects of music-based therapeutic interventions for people with dementia on emotional well-being including quality of life, mood disturbance or negative affect, behavioural problems, social behaviour, and cognition at the end of therapy and four or more weeks after the end of treatment.\n\n\nSEARCH METHODS\nWe searched ALOIS, the Specialized Register of the Cochrane Dementia and Cognitive Improvement Group (CDCIG) on 14 April 2010 using the terms: music therapy, music, singing, sing, auditory stimulation. Additional searches were also carried out on 3 July 2015 in the major healthcare databases MEDLINE, Embase, psycINFO, CINAHL and LILACS; and in trial registers and grey literature sources. On 12 April 2016, we searched the major databases for new studies for future evaluation.\n\n\nSELECTION CRITERIA\nWe included randomized controlled trials of music-based therapeutic interventions (at least five sessions) for people with dementia that measured any of our outcomes of interest. Control groups either received usual care or other activities.\n\n\nDATA COLLECTION AND ANALYSIS\nTwo reviewers worked independently to screen the retrieved studies against the inclusion criteria and then to extract data and assess methodological quality of the included studies. If necessary, we contacted trial authors to ask for additional data, including relevant subscales, or for other missing information. We pooled data using random-effects models.\n\n\nMAIN RESULTS\nWe included 17 studies. Sixteen studies with a total of 620 participants contributed data to meta-analyses. Participants in the studies had dementia of varying degrees of severity, but all were resident in institutions. Five studies delivered an individual music intervention; in the others, the intervention was delivered to groups of participants. Most interventions involved both active and receptive musical elements. The methodological quality of the studies varied. All were at high risk of performance bias and some were at high risk of detection or other bias. At the end of treatment, we found low-quality evidence that music-based therapeutic interventions may have little or no effect on emotional well-being and quality of life (standardized mean difference, SMD 0.32, 95% CI -0.08 to 0.71; 6 studies, 181 participants), overall behaviour problems (SMD -0.20, 95% CI -0.56 to 0.17; 6 studies, 209 participants) and cognition (SMD 0.21, 95% CI -0.04 to 0.45; 6 studies, 257 participants). We found moderate-quality evidence that they reduce depressive symptoms (SMD -0.28, 95% CI -0.48 to -0.07; 9 studies, 376 participants), but do not decrease agitation or aggression (SMD -0.08, 95% CI -0.29 to 0.14; 12 studies, 515 participants). The quality of the evidence on anxiety and social behaviour was very low, so effects were very uncertain. 
The evidence for all long-term outcomes was also of very low quality.\n\n\nAUTHORS' CONCLUSIONS\nProviding people with dementia with at least five sessions of a music-based therapeutic intervention probably reduces depressive symptoms but has little or no effect on agitation or aggression. There may also be little or no effect on emotional well-being or quality of life, overall behavioural problems and cognition. We are uncertain about effects on anxiety or social behaviour, and about any long-term effects. Future studies should employ larger sample sizes, and include all important outcomes, in particular 'positive' outcomes such as emotional well-being and social outcomes. Future studies should also examine the duration of effects in relation to the overall duration of treatment and the number of sessions.",
"title": ""
},
{
"docid": "49f35f840566645f5b86e90ce0a932af",
"text": "Over the past decade, a number of tools and systems have been developed to manage various aspects of the software development lifecycle. Until now, tool supported code review, an important aspect of software development, has been largely ignored. With the advent of open source code review tools such as Gerrit along with projects that use them, code review data is now available for collection, analysis, and triangulation with other software development data. In this paper, we extract Android peer review data from Gerrit. We describe the Android peer review process, the reverse engineering of the Gerrit JSON API, our data mining and cleaning methodology, database schema, and provide an example of how the data can be used to answer an empirical software engineering question. The database is available for use by the research community.",
"title": ""
},
{
"docid": "a06274d9bf6dba90ea0178ec11a20fb6",
"text": "Osteoporosis has become one of the most prevalent and costly diseases in the world. It is a metabolic disease characterized by reduction in bone mass due to an imbalance between bone formation and resorption. Osteoporosis causes fractures, prolongs bone healing, and impedes osseointegration of dental implants. Its pathological features include osteopenia, degradation of bone tissue microstructure, and increase of bone fragility. In traditional Chinese medicine, the herb Rhizoma Drynariae has been commonly used to treat osteoporosis and bone nonunion. However, the precise underlying mechanism is as yet unclear. Osteoprotegerin is a cytokine receptor shown to play an important role in osteoblast differentiation and bone formation. Hence, activators and ligands of osteoprotegerin are promising drug targets and have been the focus of studies on the development of therapeutics against osteoporosis. In the current study, we found that naringin could synergistically enhance the action of 1α,25-dihydroxyvitamin D3 in promoting the secretion of osteoprotegerin by osteoblasts in vitro. In addition, naringin can also influence the generation of osteoclasts and subsequently bone loss during organ culture. In conclusion, this study provides evidence that natural compounds such as naringin have the potential to be used as alternative medicines for the prevention and treatment of osteolysis.",
"title": ""
},
{
"docid": "ae74b0befa2da2aeb2d831aac0bef456",
"text": "The central purpose of this survey is to provide readers an insight into the recent advances and challenges in on-line active learning. Active learning has attracted the data mining and machine learning community since around 20 years. This is because it served for important purposes to increase practical applicability of machine learning techniques, such as (i) to reduce annotation and measurement costs for operators and measurement equipments, (ii) to reduce manual labelling effort for experts and (iii) to reduce computation time for model training. Almost all of the current techniques focus on the classical pool-based approach, which is off-line by nature as iterating over a pool of (unlabelled) reference samples a multiple times to choose the most promising ones for improving the performance of the classifiers. This is achieved by (time-intensive) re-training cycles on all labelled samples available so far. For the on-line, stream mining case, the challenge is that the sample selection strategy has to operate in a fast, ideally single-pass manner. Some first approaches have been proposed during the last decade (starting from around 2005) with the usage of machine learning (ML) oriented incremental classifiers, which are able to update their parameters based on selected samples, but not their structures. Since 2012, on-line active learning concepts have been proposed in connection with the paradigm of evolving models, which are able to expand their knowledge into feature space regions so far unexplored. This opened the possibility to address a particular type of uncertainty, namely that one which stems from a significant novelty content in streams, as, e.g., caused by drifts, new operation modes, changing system behaviors or non-stationary environments. We will provide an overview about the concepts and techniques for sample selection and active learning within these two principal major research lines (incremental ML models versus evolving systems), a comparison of their essential characteristics and properties (raising some advantages and disadvantages), and a study on possible evaluation techniques for them. We conclude with an overview of real-world application examples where various online AL approaches have been already successfully applied in order to significantly reduce user’s interaction efforts and costs for model updates. Preprint submitted to Information Sciences 27 June 2017",
"title": ""
},
{
"docid": "6301ec034b04323bf0437cc7b829cfad",
"text": "Selective mutism (SM) is a relatively rare childhood disorder and is underdiagnosed and undertreated. The purpose of the retrospective naturalistic study was to examine the long-term outcome of children with SM who were treated with specifically designed modular cognitive behavioral therapy (MCBT). Parents of 36 children who met diagnostic criteria of SM that received MCBT treatment were invited for a follow-up evaluation. Parents were interviewed using structured scales and completed questionnaires regarding the child, including the Selective Mutism Questionnaire (SMQ). Twenty-four subjects were identified and evaluated. Their mean age ± SD of onset of SM symptoms, beginning of treatment, and age at follow-up were 3.4 ± 1.4, 6.4 ± 3.1, and 9.3 ± 3.4 years, respectively. There was robust improvement from beginning of treatment to follow-up evaluation in SM, social anxiety disorder, and specific phobia symptoms. The recovery rate from SM was 84.2 %. Conclusion: SM-focused MCBT is feasible in children and possibly effective in inducing long-term reduction of SM and comorbid anxiety symptoms. What is Known: • There are limited empirical data on selective mutism (SM) treatment outcome and specifically on cognitive-behavioral therapy, with the majority of studies being uncontrolled case reports of 1 to 2 cases each. • There is also limited data on the long-term outcome of children with SM following treatment. What is New: • Modular cognitive behavioral treatment is a feasible and possibly effective treatment for SM. Intervention at a younger age is more effective comparing to an older age. • Treatment for SM also decreases the rate of psychiatric comorbidities, including separation anxiety disorder and specific phobia.",
"title": ""
},
{
"docid": "aa16ca139a7648f7d9bb3ff81aaf0bbc",
"text": "Atherosclerosis has an important inflammatory component and acute cardiovascular events can be initiated by inflammatory processes occurring in advanced plaques. Fatty acids influence inflammation through a variety of mechanisms; many of these are mediated by, or associated with, the fatty acid composition of cell membranes. Human inflammatory cells are typically rich in the n-6 fatty acid arachidonic acid, but the contents of arachidonic acid and of the marine n-3 fatty acids eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA) can be altered through oral administration of EPA and DHA. Eicosanoids produced from arachidonic acid have roles in inflammation. EPA also gives rise to eicosanoids and these are usually biologically weak. EPA and DHA give rise to resolvins which are anti-inflammatory and inflammation resolving. EPA and DHA also affect production of peptide mediators of inflammation (adhesion molecules, cytokines, etc.). Thus, the fatty acid composition of human inflammatory cells influences their function; the contents of arachidonic acid, EPA and DHA appear to be especially important. The anti-inflammatory effects of marine n-3 polyunsaturated fatty acids (PUFAs) may contribute to their protective actions towards atherosclerosis and plaque rupture.",
"title": ""
},
{
"docid": "edd6fb76f672e00b14935094cb0242d0",
"text": "Despite widespread interests in reinforcement-learning for task-oriented dialogue systems, several obstacles can frustrate research and development progress. First, reinforcement learners typically require interaction with the environment, so conventional dialogue corpora cannot be used directly. Second, each task presents specific challenges, requiring separate corpus of task-specific annotated data. Third, collecting and annotating human-machine or human-human conversations for taskoriented dialogues requires extensive domain knowledge. Because building an appropriate dataset can be both financially costly and time-consuming, one popular approach is to build a user simulator based upon a corpus of example dialogues. Then, one can train reinforcement learning agents in an online fashion as they interact with the simulator. Dialogue agents trained on these simulators can serve as an effective starting point. Once agents master the simulator, they may be deployed in a real environment to interact with humans, and continue to be trained online. To ease empirical algorithmic comparisons in dialogues, this paper introduces a new, publicly available simulation framework, where our simulator, designed for the movie-booking domain, leverages both rules and collected data. The simulator supports two tasks: movie ticket booking and movie seeking. Finally, we demonstrate several agents and detail the procedure to add and test your own agent in the proposed framework.",
"title": ""
},
{
"docid": "370767f85718121dc3975f383bf99d8b",
"text": "A combinatorial classification and a phylogenetic analysis of the ten 12/8 time, seven-stroke bell rhythm timelines in African and Afro-American music are presented. New methods for rhythm classification are proposed based on measures of rhythmic oddity and off-beatness. These combinatorial classifications reveal several new uniqueness properties of the Bembé bell pattern that may explain its widespread popularity and preference among the other patterns in this class. A new distance measure called the swap-distance is introduced to measure the non-similarity of two rhythms that have the same number of strokes. A swap in a sequence of notes and rests of equal duration is the location interchange of a note and a rest that are adjacent in the sequence. The swap distance between two rhythms is defined as the minimum number of swaps required to transform one rhythm to the other. A phylogenetic analysis using Splits Graphs with the swap distance shows that each of the ten bell patterns can be derived from one of two “canonical” patterns with at most four swap operations, or from one with at most five swap operations. Furthermore, the phylogenetic analysis suggests that for these ten bell patterns there are no “ancestral” rhythms not contained in this set.",
"title": ""
}
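The swap distance defined in the record above (the minimum number of adjacent note/rest interchanges turning one timeline into another with the same number of strokes) can be computed directly from onset positions. A small sketch follows, assuming 'x' marks a stroke and '.' a rest; the example patterns are illustrative 12-pulse, 7-onset strings, not necessarily the exact timelines analyzed in the paper.

```python
def swap_distance(rhythm_a, rhythm_b):
    """Minimum number of adjacent note/rest swaps turning one rhythm into another.

    Rhythms are equal-length strings such as 'x.x.xx.x.x.x' where 'x' marks an
    onset; both must contain the same number of onsets.  Matching the k-th onset
    of one pattern to the k-th onset of the other is optimal, so the distance is
    the sum of absolute differences of onset positions.
    """
    pos_a = [i for i, s in enumerate(rhythm_a) if s == 'x']
    pos_b = [i for i, s in enumerate(rhythm_b) if s == 'x']
    if len(pos_a) != len(pos_b):
        raise ValueError("patterns must have the same number of onsets")
    return sum(abs(a - b) for a, b in zip(pos_a, pos_b))

# A Bembe-like 12/8 pattern vs. an invented 7-onset variant (both illustrative)
print(swap_distance('x.x.xx.x.x.x', 'xx.x.x.x.x.x'))
```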
] |
scidocsrr
|
9513ffa44c24f795dd573dbfd6b731fa
|
Deep Exemplar 2D-3D Detection by Adapting from Real to Rendered Views
|
[
{
"docid": "35625f248c81ebb5c20151147483f3f6",
"text": "A very simple way to improve the performance of almost any mac hine learning algorithm is to train many different models on the same data a nd then to average their predictions [3]. Unfortunately, making predictions u ing a whole ensemble of models is cumbersome and may be too computationally expen sive to allow deployment to a large number of users, especially if the indivi dual models are large neural nets. Caruana and his collaborators [1] have shown th at it is possible to compress the knowledge in an ensemble into a single model whi ch is much easier to deploy and we develop this approach further using a dif ferent compression technique. We achieve some surprising results on MNIST and w e show that we can significantly improve the acoustic model of a heavily use d commercial system by distilling the knowledge in an ensemble of models into a si ngle model. We also introduce a new type of ensemble composed of one or more full m odels and many specialist models which learn to distinguish fine-grained c lasses that the full models confuse. Unlike a mixture of experts, these specialist m odels can be trained rapidly and in parallel.",
"title": ""
},
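The distillation idea summarized in the record above (training a small model on the softened outputs of an ensemble) is commonly implemented as a temperature-scaled soft-target loss blended with the usual hard-label loss. A minimal NumPy sketch follows; the temperature, mixing weight, and logits are arbitrary illustrative values, not the paper's settings.

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled, numerically stable softmax
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=3.0, alpha=0.5):
    """Blend of soft-target cross-entropy (at temperature T) and hard-label loss."""
    p_teacher = softmax(teacher_logits, T)                    # softened ensemble output
    log_p_student_T = np.log(softmax(student_logits, T))
    soft_loss = -(p_teacher * log_p_student_T).sum(axis=-1).mean() * T * T
    log_p_student = np.log(softmax(student_logits))
    hard_loss = -log_p_student[np.arange(len(labels)), labels].mean()
    return alpha * soft_loss + (1.0 - alpha) * hard_loss

teacher = np.array([[4.0, 1.0, 0.2], [0.1, 3.0, 0.4]])   # made-up ensemble logits
student = np.array([[2.5, 0.8, 0.1], [0.3, 2.0, 0.6]])   # made-up student logits
print(distillation_loss(student, teacher, labels=np.array([0, 1])))
```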
{
"docid": "4628128d1c5cf97fa538a8b750905632",
"text": "A large body of recent work on object detection has focused on exploiting 3D CAD model databases to improve detection performance. Many of these approaches work by aligning exact 3D models to images using templates generated from renderings of the 3D models at a set of discrete viewpoints. However, the training procedures for these approaches are computationally expensive and require gigabytes of memory and storage, while the viewpoint discretization hampers pose estimation performance. We propose an efficient method for synthesizing templates from 3D models that runs on the fly - that is, it quickly produces detectors for an arbitrary viewpoint of a 3D model without expensive dataset-dependent training or template storage. Given a 3D model and an arbitrary continuous detection viewpoint, our method synthesizes a discriminative template by extracting features from a rendered view of the object and decorrelating spatial dependences among the features. Our decorrelation procedure relies on a gradient-based algorithm that is more numerically stable than standard decomposition-based procedures, and we efficiently search for candidate detections by computing FFT-based template convolutions. Due to the speed of our template synthesis procedure, we are able to perform joint optimization of scale, translation, continuous rotation, and focal length using Metropolis-Hastings algorithm. We provide an efficient GPU implementation of our algorithm, and we validate its performance on 3D Object Classes and PASCAL3D+ datasets.",
"title": ""
}
] |
[
{
"docid": "316aa66508daedc1b729283d6212bdb0",
"text": "The purpose of this study is to examine the physiological effects of Shinrin-yoku (taking in the atmosphere of the forest). The subjects were 12 male students (22.8+/-1.4 yr). On the first day of the experiments, one group of 6 subjects was sent to a forest area, and the other group of 6 subjects was sent to a city area. On the second day, each group was sent to the opposite area for a cross check. In the forenoon, the subjects were asked to walk around their given area for 20 minutes. In the afternoon, they were asked to sit on chairs and watch the landscapes of their given area for 20 minutes. Cerebral activity in the prefrontal area and salivary cortisol were measured as physiological indices in the morning at the place of accommodation, before and after walking in the forest or city areas during the forenoon, and before and after watching the landscapes in the afternoon in the forest and city areas, and in the evening at the place of accommodation. The results indicated that cerebral activity in the prefrontal area of the forest area group was significantly lower than that of the group in the city area after walking; the concentration of salivary cortisol in the forest area group was significantly lower than that of the group in the city area before and after watching each landscape. The results of the physiological measurements show that Shinrin-yoku can effectively relax both people's body and spirit.",
"title": ""
},
{
"docid": "7588bd6798d8c2fd891acaf3c64c675f",
"text": "OBJECTIVE\nThis article presents a case report of a child with poor sensory processing and describes the disorders impact on the child's occupational behavior and the changes in occupational performance during 10 months of occupational therapy using a sensory integrative approach (OT-SI).\n\n\nMETHOD\nRetrospective chart review of assessment data and analysis of parent interview data are reviewed. Progress toward goals and objectives is measured using goal attainment scaling. Themes from parent interview regarding past and present occupational challenges are presented.\n\n\nRESULTS\nNotable improvements in occupational performance are noted on goal attainment scales, and these are consistent with improvements in behavior. Parent interview data indicate noteworthy progress in the child's ability to participate in home, school, and family activities.\n\n\nCONCLUSION\nThis case report demonstrates a model for OT-SI. The findings support the theoretical underpinnings of sensory integration theory: that improvement in the ability to process and integrate sensory input will influence adaptive behavior and occupational performance. Although these findings cannot be generalized, they provide preliminary evidence supporting the theory and the effectiveness of this approach.",
"title": ""
},
{
"docid": "c3aaa53892e636f34d6923831a3b66bc",
"text": "OBJECTIVES\nTo evaluate whether 7-mm-long implants could be an alternative to longer implants placed in vertically augmented posterior mandibles.\n\n\nMATERIALS AND METHODS\nSixty patients with posterior mandibular edentulism with 7-8 mm bone height above the mandibular canal were randomized to either vertical augmentation with anorganic bovine bone blocks and delayed 5-month placement of ≥10 mm implants or to receive 7-mm-long implants. Four months after implant placement, provisional prostheses were delivered, replaced after 4 months, by definitive prostheses. The outcome measures were prosthesis and implant failures, any complications and peri-implant marginal bone levels. All patients were followed to 1 year after loading.\n\n\nRESULTS\nOne patient dropped out from the short implant group. In two augmented mandibles, there was not sufficient bone to place 10-mm-long implants possibly because the blocks had broken apart during insertion. One prosthesis could not be placed when planned in the 7 mm group vs. three prostheses in the augmented group, because of early failure of one implant in each patient. Four complications (wound dehiscence) occurred during graft healing in the augmented group vs. none in the 7 mm group. No complications occurred after implant placement. These differences were not statistically significant. One year after loading, patients of both groups lost an average of 1 mm of peri-implant bone. There no statistically significant differences in bone loss between groups.\n\n\nCONCLUSIONS\nWhen residual bone height over the mandibular canal is between 7 and 8 mm, 7 mm short implants might be a preferable choice than vertical augmentation, reducing the chair time, expenses and morbidity. These 1-year preliminary results need to be confirmed by follow-up of at least 5 years.",
"title": ""
},
{
"docid": "5ac66257b2e43eb11ae906672acef904",
"text": "Noticing that different information sources often provide complementary coverage of word sense and meaning, we propose a simple and yet effective strategy for measuring lexical semantics. Our model consists of a committee of vector space models built on a text corpus, Web search results and thesauruses, and measures the semantic word relatedness using the averaged cosine similarity scores. Despite its simplicity, our system correlates with human judgements better or similarly compared to existing methods on several benchmark datasets, including WordSim353.",
"title": ""
},
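The committee approach in the record above scores word relatedness by averaging cosine similarities across several vector space models. A minimal sketch, assuming each model is a word-to-vector dictionary and that a model lacking either word simply abstains from the average (the abstention rule is my assumption, not stated in the abstract).

```python
import numpy as np

def committee_relatedness(word_a, word_b, models):
    """Average cosine similarity over a committee of vector-space models.

    `models` is a list of dicts mapping words to numpy vectors, e.g. one built
    from a text corpus, one from web search snippets, one from a thesaurus.
    """
    def cosine(u, v):
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))
    scores = [cosine(m[word_a], m[word_b]) for m in models
              if word_a in m and word_b in m]
    return sum(scores) / len(scores) if scores else 0.0

# Tiny made-up 2-d "models" purely for illustration
corpus_vs    = {"tiger": np.array([0.9, 0.1]), "cat": np.array([0.8, 0.3])}
web_vs       = {"tiger": np.array([0.7, 0.4]), "cat": np.array([0.6, 0.5])}
thesaurus_vs = {"tiger": np.array([1.0, 0.0]), "cat": np.array([0.9, 0.2])}
print(committee_relatedness("tiger", "cat", [corpus_vs, web_vs, thesaurus_vs]))
```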
{
"docid": "c2a59be58131149dcddfec02214423b8",
"text": "Complex structures manufactured using low-pressure vacuum bag-only (VBO) prepreg processing are more susceptible to defects than flat laminates due to complex compaction conditions present at sharp corners. Consequently, effective defect mitigation strategies are required to produce structural parts. In this study, we investigated the relationships between laminate properties, processing conditions`, mold designs and part quality in order to develop science-based guidelines for the manufacture of complex parts. Generic laminates consisting of a central corner and two flanges were fabricated in a multi-part study that considered variation in corner angle and local curvature radius, the applied pressure during layup and cure, and the prepreg material and laminate thickness. The manufactured parts were analyzed in terms of microstructural fiber bed and resin distribution, thickness variation, and void content. The results indicated that defects observed in corner laminates were influenced by both mold design and processing conditions, and that optimal combinations of these factors can mitigate defects and improve quality.",
"title": ""
},
{
"docid": "3dd4bfe71c3c141d9538e3b3eb72e8e1",
"text": "This paper identifies a problem with the usual procedure for L2-regularization parameter estimation in a domain adaptation setting. In such a setting, there are differences between the distributions generating the training data (source domain) and the test data (target domain). The usual cross-validation procedure requires validation data, which can not be obtained from the unlabeled target data. The problem is that if one decides to use source validation data, the regularization parameter is underestimated. One possible solution is to scale the source validation data through importance weighting, but we show that this correction is not sufficient. We conclude the paper with an empirical analysis of the effect of several importance weight estimators on the estimation of the regularization parameter.",
"title": ""
},
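The correction discussed in the record above scales source-domain validation losses by importance weights when choosing the L2 regularization parameter. A small sketch of that weighted validation criterion with scikit-learn's Ridge; the data, weights, and candidate penalties below are synthetic, and note that the paper's point is precisely that this correction alone can still underestimate the parameter.

```python
import numpy as np
from sklearn.linear_model import Ridge

def weighted_validation_curve(X_tr, y_tr, X_val, y_val, w_val, lambdas):
    """Pick an L2 penalty using source-domain validation data re-weighted by
    importance weights w_val ~ p_target(x) / p_source(x)."""
    scores = []
    for lam in lambdas:
        model = Ridge(alpha=lam).fit(X_tr, y_tr)
        err = (model.predict(X_val) - y_val) ** 2
        scores.append(np.average(err, weights=w_val))   # importance-weighted MSE
    return lambdas[int(np.argmin(scores))], scores

rng = np.random.default_rng(0)
X_tr, y_tr = rng.normal(size=(100, 5)), rng.normal(size=100)
X_val, y_val = rng.normal(size=(40, 5)), rng.normal(size=40)
w_val = rng.uniform(0.5, 2.0, size=40)      # stand-in importance weights
best_lam, _ = weighted_validation_curve(X_tr, y_tr, X_val, y_val, w_val,
                                         lambdas=[0.01, 0.1, 1.0, 10.0])
print(best_lam)
```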
{
"docid": "8abbd5e2ab4f419a4ca05277a8b1b6a5",
"text": "This paper presents an innovative broadband millimeter-wave single balanced diode mixer that makes use of a substrate integrated waveguide (SIW)-based 180 hybrid. It has low conversion loss of less than 10 dB, excellent linearity, and high port-to-port isolations over a wide frequency range of 20 to 26 GHz. The proposed mixer has advantages over previously reported millimeter-wave mixer structures judging from a series of aspects such as cost, ease of fabrication, planar construction, and broadband performance. Furthermore, a receiver front-end that integrates a high-performance SIW slot-array antenna and our proposed mixer is introduced. Based on our proposed receiver front-end structure, a K-band wireless communication system with M-ary quadrature amplitude modulation is developed and demonstrated for line-of-sight channels. Excellent overall error vector magnitude performance has been obtained.",
"title": ""
},
{
"docid": "8e077186aef0e7a4232eec0d8c73a5a2",
"text": "The appetite for up-to-date information about earth’s surface is ever increasing, as such information provides a base for a large number of applications, including local, regional and global resources monitoring, land-cover and land-use change monitoring, and environmental studies. The data from remote sensing satellites provide opportunities to acquire information about land at varying resolutions and has been widely used for change detection studies. A large number of change detection methodologies and techniques, utilizing remotely sensed data, have been developed, and newer techniques are still emerging. This paper begins with a discussion of the traditionally pixel-based and (mostly) statistics-oriented change detection techniques which focus mainly on the spectral values and mostly ignore the spatial context. This is succeeded by a review of object-based change detection techniques. Finally there is a brief discussion of spatial data mining techniques in image processing and change detection from remote sensing data. The merits and issues of different techniques are compared. The importance of the exponential increase in the image data volume and multiple sensors and associated challenges on the development of change detection techniques are highlighted. With the wide use of very-high-resolution (VHR) remotely sensed images, object-based methods and data mining techniques may have more potential in change detection. 2013 International Society for Photogrammetry and Remote Sensing, Inc. (ISPRS) Published by Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "8df4ff8a2fbaf84b4bbd3aa647e946e8",
"text": "One of the newly emerging carbon materials, nanodiamond (ND), has been exploited for use in traditional electric materials and this has extended into biomedical and pharmaceutical applications. Recently, NDs have attained significant interests as a multifunctional and combinational drug delivery system. ND studies have provided insights into granting new potentials with their wide ranging surface chemistry, complex formation with biopolymers, and combination with biomolecules. The studies that have proved ND inertness, biocompatibility, and low toxicity have made NDs much more feasible for use in real in vivo applications. This review gives an understanding of NDs in biomedical engineering and pharmaceuticals, focusing on the classified introduction of ND/drug complexes. In addition, the diverse potential applications that can be obtained with chemical modification are presented.",
"title": ""
},
{
"docid": "8d092dfa88ba239cf66e5be35fcbfbcc",
"text": "We present VideoWhisper, a novel approach for unsupervised video representation learning. Based on the observation that the frame sequence encodes the temporal dynamics of a video (e.g., object movement and event evolution), we treat the frame sequential order as a self-supervision to learn video representations. Unlike other unsupervised video feature learning methods based on frame-level feature reconstruction that is sensitive to visual variance, VideoWhisper is driven by a novel video “sequence-to-whisper” learning strategy. Specifically, for each video sequence, we use a prelearned visual dictionary to generate a sequence of high-level semantics, dubbed “whisper,” which can be considered as the language describing the video dynamics. In this way, we model VideoWhisper as an end-to-end sequence-to-sequence learning model using attention-based recurrent neural networks. This model is trained to predict the whisper sequence and hence it is able to learn the temporal structure of videos. We propose two ways to generate video representation from the model. Through extensive experiments on two real-world video datasets, we demonstrate that video representation learned by V ideoWhisper is effective to boost fundamental multimedia applications such as video retrieval and event classification.",
"title": ""
},
{
"docid": "d29b90dbce6f4dd7c2a3480239def8f9",
"text": "This paper presents a design of permanent magnet machines (PM), such as the permanent magnet axial flux generator for wind turbine generated direct current voltage base on performance requirements. However recent developments in rare earth permanent magnet materials and power electronic devices has awakened interest in alternative generator topologies that can be used to produce direct voltage from wind energy using rectifier circuit convert alternating current to direct current. In preliminary tests the input mechanical energy to drive the rotor of the propose generator. This paper propose a generator which can change mechanical energy into electrical energy with the generator that contains bar magnets move relative generated flux magnetic offset winding coils in stator component. The results show that the direct current output power versus rotor speed of generator in various applications. These benefits present the axial flux permanent magnet generator with generated direct voltage at rated power 1500 W.",
"title": ""
},
{
"docid": "72e255a72bef093425f591e891f0c477",
"text": "REFERENCES 1. Fern andez-Guarino M, Aldanondo I, Gonz alez-Garc ıa C, Garrido P, Marquet A, P erez-Garc ıa B, et al. Dermatosis perforante por gefinitib. Actas Dermosifiliogr 2006;97:208-11. 2. Gilaberte Y, Coscojuela C, V azquez C, Rosell o R, Vera J. Perforating folliculitis associated with tumor necrosis factor alpha inhibitors administered for rheumatoid arthritis. Br J Dermatol 2007;156:368-71. 3. Vano-Galvan S, Moreno C, Medina J, P erez-Garc ıa B, Garc ıaL opez JL, Jaen P. Perforating dermatosis in a patient receiving bevacizumab. J Eur Acad Dermatol 2009;23:972-4. 4. Minami-Hori M, Ishida-Yamamoto A, Komatsu S, Iiduka H. Transient perforating folliculitis induced by sorafenib. J Dermatol 2010;37:833-4. 5. Wolber C, Udvardi A, Tatzreiter G, Schneeberger A, Volc-Platzer B. Perforating folliculitis, angioedema, hand-foot syndrome e multiple cutaneous side effects in a patient treated with sorafenib. J Dtsch Dermatol Ges 2009;7:449-52.",
"title": ""
},
{
"docid": "540099388527a2e8dd5b43162b697fea",
"text": "This paper describes NCRF++, a toolkit for neural sequence labeling. NCRF++ is designed for quick implementation of different neural sequence labeling models with a CRF inference layer. It provides users with an inference for building the custom model structure through configuration file with flexible neural feature design and utilization. Built on PyTorch1, the core operations are calculated in batch, making the toolkit efficient with the acceleration of GPU. It also includes the implementations of most state-of-the-art neural sequence labeling models such as LSTMCRF, facilitating reproducing and refinement on those methods.",
"title": ""
},
{
"docid": "1448b02c9c14e086a438d76afa1b2fde",
"text": "This paper analyzes the classification of hyperspectral remote sensing images with linear discriminant analysis (LDA) in the presence of a small ratio between the number of training samples and the number of spectral features. In these particular ill-posed problems, a reliable LDA requires one to introduce regularization for problem solving. Nonetheless, in such a challenging scenario, the resulting regularized LDA (RLDA) is highly sensitive to the tuning of the regularization parameter. In this context, we introduce in the remote sensing community an efficient version of the RLDA recently presented by Ye to cope with critical ill-posed problems. In addition, several LDA-based classifiers (i.e., penalized LDA, orthogonal LDA, and uncorrelated LDA) are compared theoretically and experimentally with the standard LDA and the RLDA. Method differences are highlighted through toy examples and are exhaustively tested on several ill-posed problems related to the classification of hyperspectral remote sensing images. Experimental results confirm the effectiveness of the presented RLDA technique and point out the main properties of other analyzed LDA techniques in critical ill-posed hyperspectral image classification problems.",
"title": ""
},
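A shrinkage-regularized LDA in the spirit of the RLDA discussed in the record above can be run with scikit-learn's 'lsqr' solver and Ledoit-Wolf shrinkage. This is a generic stand-in for the ill-posed small-sample setting, not the specific efficient RLDA of Ye evaluated in the paper, and the data below are synthetic.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Ill-posed setting: more spectral bands than labelled pixels
rng = np.random.default_rng(1)
n_bands, n_train = 200, 60
X_train = rng.normal(size=(n_train, n_bands))
y_train = rng.integers(0, 3, size=n_train)       # 3 land-cover classes (toy labels)
X_test = rng.normal(size=(500, n_bands))

# Plain LDA would rely on a singular covariance estimate here; the 'lsqr'
# solver with 'auto' (Ledoit-Wolf) shrinkage regularizes the covariance instead.
rlda = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
rlda.fit(X_train, y_train)
print(rlda.predict(X_test)[:10])
```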
{
"docid": "2f3bb54596bba8cd7a073ef91964842c",
"text": "BACKGROUND AND PURPOSE\nRecent meta-analyses have suggested similar wound infection rates when using single- or multiple-dose antibiotic prophylaxis in the operative management of closed long bone fractures. In order to assist clinicians in choosing the optimal prophylaxis strategy, we performed a cost-effectiveness analysis comparing single- and multiple-dose prophylaxis.\n\n\nMETHODS\nA cost-effectiveness analysis comparing the two prophylactic strategies was performed using time horizons of 60 days and 1 year. Infection probabilities, costs, and quality-adjusted life days (QALD) for each strategy were estimated from the literature. All costs were reported in 2007 US dollars. A base case analysis was performed for the surgical treatment of a closed ankle fracture. Sensitivity analysis was performed for all variables, including probabilistic sensitivity analysis using Monte Carlo simulation.\n\n\nRESULTS\nSingle-dose prophylaxis results in lower cost and a similar amount of quality-adjusted life days gained. The single-dose strategy had an average cost of $2,576 for an average gain of 272 QALD. Multiple doses had an average cost of $2,596 for 272 QALD gained. These results are sensitive to the incidence of surgical site infection and deep wound infection for the single-dose treatment arm. Probabilistic sensitivity analysis using all model variables also demonstrated preference for the single-dose strategy.\n\n\nINTERPRETATION\nAssuming similar infection rates between the prophylactic groups, our results suggest that single-dose prophylaxis is slightly more cost-effective than multiple-dose regimens for the treatment of closed fractures. Extensive sensitivity analysis demonstrates these results to be stable using published meta-analysis infection rates.",
"title": ""
},
{
"docid": "cf0b98dfd188b7612577c975e08b0c92",
"text": "Depression is a major cause of disability world-wide. The present paper reports on the results of our participation to the depression sub-challenge of the sixth Audio/Visual Emotion Challenge (AVEC 2016), which was designed to compare feature modalities (audio, visual, interview transcript-based) in gender-based and gender-independent modes using a variety of classification algorithms. In our approach, both high and low level features were assessed in each modality. Audio features were extracted from the low-level descriptors provided by the challenge organizers. Several visual features were extracted and assessed including dynamic characteristics of facial elements (using Landmark Motion History Histograms and Landmark Motion Magnitude), global head motion, and eye blinks. These features were combined with statistically derived features from pre-extracted features (emotions, action units, gaze, and pose). Both speech rate and word-level semantic content were also evaluated. Classification results are reported using four different classification schemes: i) gender-based models for each individual modality, ii) the feature fusion model, ii) the decision fusion model, and iv) the posterior probability classification model. Proposed approaches outperforming the reference classification accuracy include the one utilizing statistical descriptors of low-level audio features. This approach achieved f1-scores of 0.59 for identifying depressed and 0.87 for identifying not-depressed individuals on the development set and 0.52/0.81, respectively for the test set.",
"title": ""
},
{
"docid": "20c3addef683da760967df0c1e83f8e3",
"text": "An RF duplexer has been fabricated on a CMOS IC for use in 3G/4G cellular transceivers. The passive circuit sustains large voltage swings in the transmit path, and isolates the receive path from the transmitter by more than 45 dB across a bandwidth of 200 MHz in 3G/4G bands I, II, III, IV, and IX. A low noise amplifier embedded into the duplexer demonstrates a cascade noise figure of 5 dB with more than 27 dB of gain. The duplexer inserts 2.5 dB of loss between power amplifier and antenna.",
"title": ""
},
{
"docid": "29199ac45d4aa8035fd03e675406c2cb",
"text": "This work presents an autonomous mobile robot in order to cover an unknown terrain “randomly”, namely entirely, unpredictably and evenly. This aim is very important, especially in military missions, such as the surveillance of terrains, the terrain exploration for explosives and the patrolling for intrusion in military facilities. The “heart” of the proposed robot is a chaotic motion controller, which is based on a chaotic true random bit generator. This generator has been implemented with a microcontroller, which converts the produced chaotic bit sequence, to the robot's motion. Experimental results confirm that this approach, with an appropriate sensor for obstacle avoidance, can obtain very satisfactory results in regard to the fast scanning of the robot’s workspace with unpredictable way. Key-Words: Autonomous mobile robot, terrain coverage, microcontroller, random bit generator, nonlinear system, chaos, Logistic map.",
"title": ""
},
{
"docid": "acb3689c9ece9502897cebb374811f54",
"text": "In 2003, the QUADAS tool for systematic reviews of diagnostic accuracy studies was developed. Experience, anecdotal reports, and feedback suggested areas for improvement; therefore, QUADAS-2 was developed. This tool comprises 4 domains: patient selection, index test, reference standard, and flow and timing. Each domain is assessed in terms of risk of bias, and the first 3 domains are also assessed in terms of concerns regarding applicability. Signalling questions are included to help judge risk of bias. The QUADAS-2 tool is applied in 4 phases: summarize the review question, tailor the tool and produce review-specific guidance, construct a flow diagram for the primary study, and judge bias and applicability. This tool will allow for more transparent rating of bias and applicability of primary diagnostic accuracy studies.",
"title": ""
},
{
"docid": "2f7ba7501fcf379b643867c7d5a9d7bf",
"text": "The connections in many networks are not merely binary entities, either present or not, but have associated weights that record their strengths relative to one another. Recent studies of networks have, by and large, steered clear of such weighted networks, which are often perceived as being harder to analyze than their unweighted counterparts. Here we point out that weighted networks can in many cases be analyzed using a simple mapping from a weighted network to an unweighted multigraph, allowing us to apply standard techniques for unweighted graphs to weighted ones as well. We give a number of examples of the method, including an algorithm for detecting community structure in weighted networks and a simple proof of the maximum-flow-minimum-cut theorem.",
"title": ""
}
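The mapping described in the record above turns integer edge weights into parallel edges so that algorithms for unweighted graphs apply unchanged. A minimal networkx sketch; real-valued weights would first need rescaling to integers, which is an assumption on my part rather than a detail from the abstract.

```python
import networkx as nx

def weighted_to_multigraph(G):
    """Expand integer edge weights into parallel edges of an unweighted multigraph."""
    M = nx.MultiGraph()
    M.add_nodes_from(G.nodes())
    for u, v, data in G.edges(data=True):
        for _ in range(int(data.get("weight", 1))):
            M.add_edge(u, v)
    return M

G = nx.Graph()
G.add_edge("a", "b", weight=3)
G.add_edge("b", "c", weight=1)
M = weighted_to_multigraph(G)
print(M.number_of_edges())   # 4: three parallel a-b edges plus one b-c edge
print(nx.degree(M, "b"))     # degree of b now reflects its total edge weight
```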
] |
scidocsrr
|
ec4fbb606738d5a29536fc2630f6dd9a
|
Citation function, polarity and influence classification
|
[
{
"docid": "61d80b5b0c6c2b3feb1ce667babd2236",
"text": "In a recent article published in this journal, Lombard, Snyder-Duch, and Bracken (2002) surveyed 200 content analyses for their reporting of reliability tests; compared the virtues and drawbacks of five popular reliability measures; and proposed guidelines and standards for their use. Their discussion revealed that numerous misconceptions circulate in the content analysis literature regarding how these measures behave and can aid or deceive content analysts in their effort to ensure the reliability of their data. This paper proposes three conditions for statistical measures to serve as indices of the reliability of data and examines the mathematical structure and the behavior of the five coefficients discussed by the authors, plus two others. It compares common beliefs about these coefficients with what they actually do and concludes with alternative recommendations for testing reliability in content analysis and similar data-making efforts. In a recent paper published in a special issue of Human Communication Research devoted to methodological topics (Vol. 28, No. 4), Lombard, Snyder-Duch, and Bracken (2002) presented their findings of how reliability was treated in 200 content analyses indexed in Communication Abstracts between 1994 and 1998. In essence, their results showed that only 69% of the articles report reliabilities. This amounts to no significant improvements in reliability concerns over earlier studies (e.g., Pasadeos et al., 1995; Riffe & Freitag, 1996). Lombard et al. attribute the failure of consistent reporting of reliability of content analysis data to a lack of available guidelines, and they end up proposing such guidelines. Having come to their conclusions by content analytic means, Lombard et al. also report their own reliabilities, using not one, but four, indices for comparison: %-agreement; Scott‟s (1955) (pi); Cohen‟s (1960) (kappa); and Krippendorff‟s (1970, 2004) (alpha). Faulty software 1 initially led the authors to miscalculations, now corrected (Lombard et al., 2003). However, in their original article, the authors cite several common beliefs about these coefficients and make recommendations that I contend can seriously mislead content analysis researchers, thus prompting my corrective response. To put the discussion of the purpose of these indices into a larger perspective, I will have to go beyond the arguments presented in their article. Readers who might find the technical details tedious are invited to go to the conclusion, which is in the form of four recommendations. The Conservative/Liberal Continuum Lombard et al. report “general agreement (in the literature) that indices which do not account for chance agreement (%-agreement and Holsti‟s [1969] CR – actually Osgood‟s [1959, p.44] index) are too liberal while those that do (, , and ) are too conservative” (2002, p. 593). For liberal or “more lenient” coefficients, the authors recommend adopting higher critical values for accepting data as reliable than for conservative or “more stringent” ones (p. 600) – as if differences between these coefficients were merely a problem of locating them on a shared scale. Discussing reliability coefficients in terms of a conservative/liberal continuum is not widespread in the technical literature. It entered the writing on content analysis not so long ago. Neuendorf (2002) used this terminology, but only in passing. Before that, Potter and Lewine-Donnerstein (1999, p. 287) cited Perreault and Leigh‟s (1989, p. 
138) assessment of the chance-corrected as being “overly conservative” and “difficult to compare (with) ... Cronbach‟s (1951) alpha,” for example – as if the comparison with a correlation coefficient mattered. I contend that trying to understand diverse agreement coefficients by their numerical results alone, conceptually placing them on a conservative/liberal continuum, is seriously misleading. Statistical coefficients are mathematical functions. They apply to a collection of data (records, values, or numbers) and result in one numerical index intended to inform its users about something – here about whether they can rely on their data. Differences among coefficients are due to responding to (a) different patterns in data and/or (b) the same patterns but in different ways. How these functions respond to which patterns of agreement and how their numerical results relate to the risk of drawing false conclusions from unreliable data – not just the numbers they produce – must be understood before selecting one coefficient over another. Issues of Scale Let me start with the ranges of the two broad classes of agreement coefficients, chancecorrected agreement and raw or %-agreement. While both kinds equal 1.000 or 100% when agreement is perfect, and data are considered reliable, %-agreement is zero when absolutely no agreement is observed; when one coder‟s categories unfailingly differ from the categories used by the other; or disagreement is systematic and extreme. Extreme disagreement is statistically almost as unexpected as perfect agreement. It should not occur, however, when coders apply the same coding instruction to the same set of units of analysis and work independently of each other, as is required when generating data for testing reliability. Where the reliability of data is an issue, the worst situation is not when one coder looks over the shoulder of another coder and selects a non-matching category, but when coders do not understand what they are asked to interpret, categorize by throwing dice, or examine unlike units of analysis, causing research results that are indistinguishable from chance events. While zero %-agreement has no meaningful reliability interpretation, chance-corrected agreement coefficients, by contrast, become zero when coders‟ behavior bears no relation to the phenomena to be coded, leaving researchers clueless as to what their data mean. Thus, the scales of chance-corrected agreement coefficients are anchored at two points of meaningful reliability interpretations, zero and one, whereas %-like agreement indices are anchored in only one, 100%, which renders all deviations from 100% uninterpretable, as far as data reliability is concerned. %-agreement has other undesirable properties; for example, it is limited to nominal data; can compare only two coders 2 ; and high %-agreement becomes progressively unlikely as more categories are available. I am suggesting that the convenience of calculating %-agreement, which is often cited as its advantage, cannot compensate for its meaninglessness. Let me hasten to add that chance-correction is not a panacea either. Chance-corrected agreement coefficients do not form a uniform class. Benini (1901), Bennett, Alpert, and Goldstein (1954), Cohen (1960), Goodman and Kruskal (1954), Krippendorff (1970, 2004), and Scott (1955) build different corrections into their coefficients, thus measuring reliability on slightly different scales. Chance can mean different things. 
Discussing these coefficients in terms of being conservative (yielding lower values than expected) or liberal (yielding higher values than expected) glosses over their crucial mathematical differences and privileges an intuitive sense of the kind of magnitudes that are somehow considered acceptable. If it were the issue of striking a balance between conservative and liberal coefficients, it would be easy to follow statistical practices and modify larger coefficients by squaring them and smaller coefficients by applying the square root to them. However, neither transformation would alter what these mathematical functions actually measure; only the sizes of the intervals between 0 and 1. Lombard et al., by contrast, attempt to resolve their dilemma by recommending that content analysts use several reliability measures. In their own report, they use , “an index ...known to be conservative,” but when measures below .700, they revert to %-agreement, “a liberal index,” and accept data as reliable as long as the latter is above .900 (2002, p. 596). They give no empirical justification for their choice. I shall illustrate below the kind of data that would pass their criterion. Relation Between Agreement and Reliability To be clear, agreement is what we measure; reliability is what we wish to infer from it. In content analysis, reproducibility is arguably the most important interpretation of reliability (Krippendorff, 2004, p.215). I am suggesting that an agreement coefficient can become an index of reliability only when (1) It is applied to proper reliability data. Such data result from duplicating the process of describing, categorizing, or measuring a sample of data obtained from the population of data whose reliability is in question. Typically, but not exclusively, duplications are achieved by employing two or more widely available coders or observers who, working independent of each other, apply the same coding instructions or recording devices to the same set of units of analysis. (2) It treats units of analysis as separately describable or categorizable, without, however, presuming any knowledge about the correctness of their descriptions or categories. What matters, therefore, is not truths, correlations, subjectivity, or the predictability of one particular coder‟s use of categories from that by another coder, but agreements or disagreements among multiple descriptions generated by a coding procedure, regardless of who enacts that procedure. Reproducibility is about data making, not about coders. A coefficient for assessing the reliability of data must treat coders as interchangeable and count observable coder idiosyncrasies as disagreement. (3) Its values correlate with the conditions under which one is willing to rely on imperfect data. The correlation between a measure of agreement and the rely-ability on data involves two kinds of inferences. Estimating the (dis)agreement in a population of data from the (dis)agreements observed and meas",
"title": ""
},
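To make the %-agreement versus chance-corrected contrast in the passage above concrete, here is a small sketch computing raw agreement, Scott's pi (expected agreement from pooled marginals) and Cohen's kappa (expected agreement from per-coder marginals) for two coders on nominal data. The example codings are invented to show how a skewed category distribution keeps %-agreement high while the chance-corrected values drop.

```python
from collections import Counter

def percent_agreement(a, b):
    return sum(x == y for x, y in zip(a, b)) / len(a)

def scotts_pi(a, b):
    """Chance-corrected agreement; expected agreement from pooled marginals."""
    obs = percent_agreement(a, b)
    pooled = Counter(a) + Counter(b)
    n = len(a) + len(b)
    exp = sum((c / n) ** 2 for c in pooled.values())
    return (obs - exp) / (1 - exp)

def cohens_kappa(a, b):
    """Chance-corrected agreement; expected agreement from each coder's marginals."""
    obs = percent_agreement(a, b)
    pa, pb, n = Counter(a), Counter(b), len(a)
    exp = sum((pa[c] / n) * (pb[c] / n) for c in set(a) | set(b))
    return (obs - exp) / (1 - exp)

# Skewed toy data: raw agreement is .90, yet the chance-corrected values are much lower
coder1 = ["yes"] * 18 + ["no"] * 2
coder2 = ["yes"] * 16 + ["no", "yes", "no", "yes"]
print(percent_agreement(coder1, coder2), scotts_pi(coder1, coder2), cohens_kappa(coder1, coder2))
```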
{
"docid": "6adb3d2e49fa54679c4fb133a992b4f7",
"text": "Kathleen McKeown1, Hal Daume III2, Snigdha Chaturvedi2, John Paparrizos1, Kapil Thadani1, Pablo Barrio1, Or Biran1, Suvarna Bothe1, Michael Collins1, Kenneth R. Fleischmann3, Luis Gravano1, Rahul Jha4, Ben King4, Kevin McInerney5, Taesun Moon6, Arvind Neelakantan8, Diarmuid O’Seaghdha7, Dragomir Radev4, Clay Templeton3, Simone Teufel7 1Columbia University, 2University of Maryland, 3University of Texas at Austin, 4University of Michigan, 5Rutgers University, 6IBM, 7Cambridge University, 8University of Massachusetts at Amherst",
"title": ""
},
{
"docid": "ce2ef27f032d30ce2bc6aa5509a58e49",
"text": "Bibliometric measures are commonly used to estimate the popularity and the impact of published research. Existing bibliometric measures provide “quantitative” indicators of how good a published paper is. This does not necessarily reflect the “quality” of the work presented in the paper. For example, when hindex is computed for a researcher, all incoming citations are treated equally, ignoring the fact that some of these citations might be negative. In this paper, we propose using NLP to add a “qualitative” aspect to biblometrics. We analyze the text that accompanies citations in scientific articles (which we term citation context). We propose supervised methods for identifying citation text and analyzing it to determine the purpose (i.e. author intention) and the polarity (i.e. author sentiment) of citation.",
"title": ""
}
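A toy version of the supervised citation-context classification described in the record above can be set up as a plain text classifier over citation sentences. The features, labels, and example contexts below are invented for illustration and are not the paper's method or data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy citation contexts with made-up polarity labels (pos / neg / neutral)
contexts = [
    "We build directly on the elegant formulation of [12].",
    "Unlike [7], whose heuristic fails on noisy input, we learn the weights.",
    "Several systems exist for this task [3, 5, 9].",
    "[4] reported results that we could not reproduce.",
]
labels = ["pos", "neg", "neutral", "neg"]

# Bag-of-ngrams plus a linear classifier as a minimal baseline
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(contexts, labels)
print(clf.predict(["Our approach extends the strong baseline of [12]."]))
```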
] |
[
{
"docid": "b5cce2a39a51108f9191bdd3516646ca",
"text": "The aim of component technology is the replacement of large monolithic applications with sets of smaller software components, whose particular functionality and interoperation can be adapted to users’ needs. However, the adaptation mechanisms of component software are still limited. Most proposals concentrate on adaptations that can be achieved either at compile time or at link time. Current support for dynamic component adaptation, i.e. unanticipated, incremental modifications of a component system at run-time, is not sufficient. This paper proposes object-based inheritance (also known as delegation) as a complement to purely forwarding-based object composition. It presents a typesafe integration of delegation into a class-based object model and shows how it overcomes the problems faced by forwarding-based component interaction, how it supports independent extensibility of components and unanticipated, dynamic component adaptation.",
"title": ""
},
{
"docid": "7ccbb730f1ce8eca687875c632520545",
"text": "Increasing cost of the fertilizers with lesser nutrient use efficiency necessitates alternate means to fertilizers. Soil is a storehouse of nutrients and energy for living organisms under the soil-plant-microorganism system. These rhizospheric microorganisms are crucial components of sustainable agricultural ecosystems. They are involved in sustaining soil as well as crop productivity under organic matter decomposition, nutrient transformations, and biological nutrient cycling. The rhizospheric microorganisms regulate the nutrient flow in the soil through assimilating nutrients, producing biomass, and converting organically bound forms of nutrients. Soil microorganisms play a significant role in a number of chemical transformations of soils and thus, influence the availability of macroand micronutrients. Use of plant growth-promoting microorganisms (PGPMs) helps in increasing yields in addition to conventional plant protection. The most important PGPMs are Azospirillum, Azotobacter, Bacillus subtilis, B. mucilaginosus, B. edaphicus, B. circulans, Paenibacillus spp., Acidithiobacillus ferrooxidans, Pseudomonas, Burkholderia, potassium, phosphorous, zinc-solubilizing V.S. Meena (*) Department of Soil Science and Agricultural Chemistry, Institute of Agricultural Sciences, Banaras Hindu University, Varanasi 221005, Uttar Pradesh, India Indian Council of Agricultural Research – Vivekananda Institute of Hill Agriculture, Almora 263601, Uttarakhand, India e-mail: vijayssac.bhu@gmail.com; vijay.meena@icar.gov.in I. Bahadur • B.R. Maurya Department of Soil Science and Agricultural Chemistry, Institute of Agricultural Sciences, Banaras Hindu University, Varanasi 221005, Uttar Pradesh, India A. Kumar Department of Botany, MMV, Banaras Hindu University, Varanasi 221005, India R.K. Meena Department of Plant Sciences, School of Life Sciences, University of Hyderabad, Hyderabad 500046, TG, India S.K. Meena Division of Soil Science and Agricultural Chemistry, Indian Agriculture Research Institute, New Delhi 110012, India J.P. Verma Institute of Environment and Sustainable Development, Banaras Hindu University, Varanasi 22100, Uttar Pradesh, India # Springer India 2016 V.S. Meena et al. (eds.), Potassium Solubilizing Microorganisms for Sustainable Agriculture, DOI 10.1007/978-81-322-2776-2_1 1 microorganisms, or SMART microbes; these are eco-friendly and environmentally safe. The rhizosphere is the important area of soil influenced by plant roots. It is composed of huge microbial populations that are somehow different from the rest of the soil population, generally denominated as the “rhizosphere effect.” The rhizosphere is the small region of soil that is immediately near to the root surface and also affected by root exudates.",
"title": ""
},
{
"docid": "da7b39dce3c7c8a08f11db132925fe37",
"text": "In this paper, a new language identification system is presented based on the total variability approach previously developed in the field of speaker identification. Various techniques are employed to extract the most salient features in the lower dimensional i-vector space and the system developed results in excellent performance on the 2009 LRE evaluation set without the need for any post-processing or backend techniques. Additional performance gains are observed when the system is combined with other acoustic systems.",
"title": ""
},
{
"docid": "0bb73266d8e4c18503ccda4903856e44",
"text": "Recent progress in advanced driver assistance systems and the race towards autonomous vehicles is mainly driven by two factors: (1) increasingly sophisticated algorithms that interpret the environment around the vehicle and react accordingly, and (2) the continuous improvements of sensor technology itself. In terms of cameras, these improvements typically include higher spatial resolution, which as a consequence requires more data to be processed. The trend to add multiple cameras to cover the entire surrounding of the vehicle is not conducive in that matter. At the same time, an increasing number of special purpose algorithms need access to the sensor input data to correctly interpret the various complex situations that can occur, particularly in urban traffic. By observing those trends, it becomes clear that a key challenge for vision architectures in intelligent vehicles is to share computational resources. We believe this challenge should be faced by introducing a representation of the sensory data that provides compressed and structured access to all relevant visual content of the scene. The Stixel World discussed in this paper is such a representation. It is a medium-level model of the environment that is specifically designed to compress information about obstacles by leveraging the typical layout of outdoor traffic scenes. It has proven useful for a multi∗Corresponding author: marius.cordts@daimler.com Authors contributed equally and are listed in alphabetical order Preprint submitted to Image and Vision Computing February 14, 2017 tude of automotive vision applications, including object detection, tracking, segmentation, and mapping. In this paper, we summarize the ideas behind the model and generalize it to take into account multiple dense input streams: the image itself, stereo depth maps, and semantic class probability maps that can be generated, e.g ., by deep convolutional neural networks. Our generalization is embedded into a novel mathematical formulation for the Stixel model. We further sketch how the free parameters of the model can be learned using structured SVMs.",
"title": ""
},
{
"docid": "601d9060ac35db540cdd5942196db9e0",
"text": "In this paper, we review nine visualization techniques that can be used for visual exploration of multidimensional financial data. We illustrate the use of these techniques by studying the financial performance of companies from the pulp and paper industry. We also illustrate the use of visualization techniques for detecting multivariate outliers, and other patterns in financial performance data in the form of clusters, relationships, and trends. We provide a subjective comparison between different visualization techniques as to their capabilities for providing insight into financial performance data. The strengths of each technique and the potential benefits of using multiple visualization techniques for gaining insight into financial performance data are highlighted.",
"title": ""
},
{
"docid": "7abad18b2ddc66b07267ef76b109d1c9",
"text": "Modern applications for distributed publish/subscribe systems often require stream aggregation capabilities along with rich data filtering. When compared to other distributed systems, aggregation in pub/sub differentiates itself as a complex problem which involves dynamic dissemination paths that are difficult to predict and optimize for a priori, temporal fluctuations in publication rates, and the mixed presence of aggregated and non-aggregated workloads. In this paper, we propose a formalization for the problem of minimizing communication traffic in the context of aggregation in pub/sub. We present a solution to this minimization problem by using a reduction to the well-known problem of minimum vertex cover in a bipartite graph. This solution is optimal under the strong assumption of complete knowledge of future publications. We call the resulting algorithm \"Aggregation Decision, Optimal with Complete Knowledge\" (ADOCK). We also show that under a dynamic setting without full knowledge, ADOCK can still be applied to produce a low, yet not necessarily optimal, communication cost. We also devise a computationally cheaper dynamic approach called \"Aggregation Decision with Weighted Publication\" (WAD). We compare our solutions experimentally using two real datasets and explore the trade-offs with respect to communication and computation costs.",
"title": ""
},
{
"docid": "d795351a71887f46f9729e8e06a69bc6",
"text": "This research finds out what criteria Ethereum needs to fulfil to replace paper contracts and if it fulfils them. It dives into aspects such as privacy and security of the blockchain and its contracts, and if it is even possible at all to place a contract on the blockchain. However, due to the variety of contract clauses and a large privacy setback, it is not recommended to place paper contracts on the Ethereum blockchain.",
"title": ""
},
{
"docid": "ae7009ff00bec61884759b6eacf7e6b2",
"text": "Four novel terephthaloyl thiourea chitosan (TTU-chitosan) hydrogels were synthesized via a cross-linking reaction of chitosan with different concentrations of terephthaloyl diisothiocyanate. Their structures were investigated by elemental analyses, FTIR, SEM and X-ray diffraction. The antimicrobial activities of the hydrogels against three species of bacteria (Bacillis subtilis, Staphylococcus aureus and Escherichia coli) and three crop-threatening pathogenic fungi (Aspergillus fumigatus, Geotrichum candidum and Candida albicans) are much higher than that of the parent chitosan. The hydrogels were more potent in case of Gram-positive bacteria than Gram-negative bacteria. Increasing the degree of cross-linking in the hydrogels resulted in a stronger antimicrobial activity.",
"title": ""
},
{
"docid": "d1b509ce63a9ca777d6a0d4d8af19ae3",
"text": "The study explores the reliability, validity, and measurement invariance of the Video game Addiction Test (VAT). Game-addiction problems are often linked to Internet enabled online games; the VAT has the unique benefit that it is theoretically and empirically linked to Internet addiction. The study used data (n=2,894) from a large-sample paper-and-pencil questionnaire study, conducted in 2009 on secondary schools in Netherlands. Thus, the main source of data was a large sample of schoolchildren (aged 13-16 years). Measurements included the proposed VAT, the Compulsive Internet Use Scale, weekly hours spent on various game types, and several psychosocial variables. The VAT demonstrated excellent reliability, excellent construct validity, a one-factor model fit, and a high degree of measurement invariance across gender, ethnicity, and learning year, indicating that the scale outcomes can be compared across different subgroups with little bias. In summary, the VAT can be helpful in the further study of video game addiction, and it contributes to the debate on possible inclusion of behavioral addictions in the upcoming DSM-V.",
"title": ""
},
{
"docid": "55285f99e1783bcba47ab41e56171026",
"text": "Two different formal definitions of gray-scale reconstruction are presented. The use of gray-scale reconstruction in various image processing applications discussed to illustrate the usefulness of this transformation for image filtering and segmentation tasks. The standard parallel and sequential approaches to reconstruction are reviewed. It is shown that their common drawback is their inefficiency on conventional computers. To improve this situation, an algorithm that is based on the notion of regional maxima and makes use of breadth-first image scannings implemented using a queue of pixels is introduced. Its combination with the sequential technique results in a hybrid gray-scale reconstruction algorithm which is an order of magnitude faster than any previously known algorithm.",
"title": ""
},
{
"docid": "70eac68ec33cdf99fee4a16f2cee468a",
"text": "Hashing has been an important and effective technology in image retrieval due to its computational efficiency and fast search speed. The traditional hashing methods usually learn hash functions to obtain binary codes by exploiting hand-crafted features, which cannot optimally represent the information of the sample. Recently, deep learning methods can achieve better performance, since deep learning architectures can learn more effective image representation features. However, these methods only use semantic features to generate hash codes by shallow projection but ignore texture details. In this paper, we proposed a novel hashing method, namely hierarchical recurrent neural hashing (HRNH), to exploit hierarchical recurrent neural network to generate effective hash codes. There are three contributions of this paper. First, a deep hashing method is proposed to extensively exploit both spatial details and semantic information, in which, we leverage hierarchical convolutional features to construct image pyramid representation. Second, our proposed deep network can exploit directly convolutional feature maps as input to preserve the spatial structure of convolutional feature maps. Finally, we propose a new loss function that considers the quantization error of binarizing the continuous embeddings into the discrete binary codes, and simultaneously maintains the semantic similarity and balanceable property of hash codes. Experimental results on four widely used data sets demonstrate that the proposed HRNH can achieve superior performance over other state-of-the-art hashing methods.",
"title": ""
},
{
"docid": "0df3e40b3fa44121943de03941fdddc0",
"text": "From generation to generation in all countries all around the world medicinal plants play an important role in our live from ancient time till these days of wide drugs and pharmacological high technique industries , the studding of biological and pharmacological activities of plant essential oils attracted the attention to the potential use of these natural products from chemical and pharmacological investigation to their therapeutic aspects. In this paper two resins commiphora Africana and commiphora myrrha were selected to discuss their essential oils for chemical analysis and biological aspect the results of GCMS shows that the two resins are rich in sesqiuterpenes and sesqiuterpene lactones compounds that possess anti-inflammatory and antitumor activity Antibacterial and antifungal bioassay shows antibacterial and antifungal activity higher in the myrrha oil than the Africana oil while antiviral bioassay shows higher antiviral activity in the Africana oil than myrrha oil",
"title": ""
},
{
"docid": "c60693035f0f99528a741fe5e3d88219",
"text": "Transmit array design is more challenging for dual-band operation than for single band, due to the independent 360° phase wrapping jumps needed at each band when large electrical length compensation is involved. This happens when aiming at large gains, typically above 25 dBi with beam scanning and $F/D \\le 1$ . No such designs have been reported in the literature. A general method is presented here to reduce the complexity of dual-band transmit array design, valid for arbitrarily large phase error compensation and any band ratio, using a finite number of different unit cells. The procedure is demonstrated for two offset transmit array implementations operating in circular polarization at 20 GHz(Rx) and 30 GHz(Tx) for Ka-band satellite-on-the-move terminals with mechanical beam-steering. An appropriate set of 30 dual-band unit cells is developed with transmission coefficient greater than −0.9 dB. The full-size transmit array is characterized by full-wave simulation enabling elevation beam scanning over 0°–50° with gains reaching 26 dBi at 20 GHz and 29 dBi at 30 GHz. A smaller prototype was fabricated and measured, showing a measured gain of 24 dBi at 20 GHz and 27 dBi at 30 GHz. In both cases, the beam pointing direction is coincident over the two frequency bands, and thus confirming the proposed design procedure.",
"title": ""
},
{
"docid": "f9765c97a101a163a486b18e270d67f5",
"text": "We present a formulation of deep learning that aims at producing a large margin classifier. The notion of margin, minimum distance to a decision boundary, has served as the foundation of several theoretically profound and empirically successful results for both classification and regression tasks. However, most large margin algorithms are applicable only to shallow models with a preset feature representation; and conventional margin methods for neural networks only enforce margin at the output layer. Such methods are therefore not well suited for deep networks. In this work, we propose a novel loss function to impose a margin on any chosen set of layers of a deep network (including input and hidden layers). Our formulation allows choosing any lp norm (p ≥ 1) on the metric measuring the margin. We demonstrate that the decision boundary obtained by our loss has nice properties compared to standard classification loss functions. Specifically, we show improved empirical results on the MNIST, CIFAR-10 and ImageNet datasets on multiple tasks: generalization from small training sets, corrupted labels, and robustness against adversarial perturbations. The resulting loss is general and complementary to existing data augmentation (such as random/adversarial input transform) and regularization techniques such as weight decay, dropout, and batch norm. 2",
"title": ""
},
{
"docid": "682b3d97bdadd988b0a21d5dd6774fbc",
"text": "WTF (\"Who to Follow\") is Twitter's user recommendation service, which is responsible for creating millions of connections daily between users based on shared interests, common connections, and other related factors. This paper provides an architectural overview and shares lessons we learned in building and running the service over the past few years. Particularly noteworthy was our design decision to process the entire Twitter graph in memory on a single server, which significantly reduced architectural complexity and allowed us to develop and deploy the service in only a few months. At the core of our architecture is Cassovary, an open-source in-memory graph processing engine we built from scratch for WTF. Besides powering Twitter's user recommendations, Cassovary is also used for search, discovery, promoted products, and other services as well. We describe and evaluate a few graph recommendation algorithms implemented in Cassovary, including a novel approach based on a combination of random walks and SALSA. Looking into the future, we revisit the design of our architecture and comment on its limitations, which are presently being addressed in a second-generation system under development.",
"title": ""
},
{
"docid": "088078841a9bf35bcfb38c1d85573860",
"text": "Multilingual Word Embeddings (MWEs) represent words from multiple languages in a single distributional vector space. Unsupervised MWE (UMWE) methods acquire multilingual embeddings without cross-lingual supervision, which is a significant advantage over traditional supervised approaches and opens many new possibilities for low-resource languages. Prior art for learning UMWEs, however, merely relies on a number of independently trained Unsupervised Bilingual Word Embeddings (UBWEs) to obtain multilingual embeddings. These methods fail to leverage the interdependencies that exist among many languages. To address this shortcoming, we propose a fully unsupervised framework for learning MWEs1 that directly exploits the relations between all language pairs. Our model substantially outperforms previous approaches in the experiments on multilingual word translation and cross-lingual word similarity. In addition, our model even beats supervised approaches trained with cross-lingual resources.",
"title": ""
},
{
"docid": "4362bc019deebc239ba4b6bc2fee446e",
"text": "observed. It was mainly due to the developments in biological studies, the change of a population lifestyle and the increase in the consumer awareness concerning food products. The health quality of food depends mainly on nutrients, but also on foreign substances such as food additives. The presence of foreign substances in the food can be justified, allowed or tolerated only when they are harmless to our health. Epidemic obesity and diabetes encouraged the growth of the artificial sweetener industry. There are more and more people who are trying to lose weight or keeping the weight off; therefore, sweeteners can be now found in almost all food products. There are two main types of sweeteners, i.e., nutritive and artificial ones. The latter does not provide calories and will not influence blood glucose; however, some of nutritive sweeteners such as sugar alcohols also characterize with lower blood glucose response and can be metabolized without insulin, being at the same time natural compounds. Sugar alcohols (polyols or polyhydric alcohols) are low digestible carbohydrates, which are obtained by substituting and aldehyde group with a hydroxyl one [1, 2]. As most of sugar alcohols are produced from their corresponding aldose sugars, they are also called alditols [3]. Among sugar alcohols can be listed hydrogenated monosaccharides (sorbitol, mannitol), hydrogenated disaccharides (isomalt, maltitol, lactitol) and mixtures of hydrogenated mono-diand/or oligosaccharides (hydrogenated starch hydrolysates) [1, 2, 4]. Polyols are naturally present in smaller quantities in fruits as well as in certain kinds of vegetables or mushrooms, and they are also regulated as either generally recognized as safe or food additives [5–7]. Food additives are substances that are added intentionally to foodstuffs in order to perform certain technological functions such as to give color, to sweeten or to help in food preservation. Abstract Epidemic obesity and diabetes encouraged the changes in population lifestyle and consumers’ food products awareness. Food industry has responded people’s demand by producing a number of energy-reduced products with sugar alcohols as sweeteners. These compounds are usually produced by a catalytic hydrogenation of carbohydrates, but they can be also found in nature in fruits, vegetables or mushrooms as well as in human organism. Due to their properties, sugar alcohols are widely used in food, beverage, confectionery and pharmaceutical industries throughout the world. They have found use as bulk sweeteners that promote dental health and exert prebiotic effect. They are added to foods as alternative sweeteners what might be helpful in the control of calories intake. Consumption of low-calorie foods by the worldwide population has dramatically increased, as well as health concerns associated with the consequent high intake of sweeteners. This review deals with the role of commonly used sugar alcohols such as erythritol, isomalt, lactitol, maltitol, mannitol, sorbitol and xylitol as sugar substitutes in food industry.",
"title": ""
},
{
"docid": "c0283c87e2a8305ba43ce87bf74a56a6",
"text": "Real-world deployments of accelerometer-based human activity recognition systems need to be carefully configured regarding the sampling rate used for measuring acceleration. Whilst a low sampling rate saves considerable energy, as well as transmission bandwidth and storage capacity, it is also prone to omitting relevant signal details that are of interest for contemporary analysis tasks. In this paper we present a pragmatic approach to optimising sampling rates of accelerometers that effectively tailors recognition systems to particular scenarios, thereby only relying on unlabelled sample data from the domain. Employing statistical tests we analyse the properties of accelerometer data and determine optimal sampling rates through similarity analysis. We demonstrate the effectiveness of our method in experiments on 5 benchmark datasets where we determine optimal sampling rates that are each substantially below those originally used whilst maintaining the accuracy of reference recognition systems. c © 2016 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "4bf86b129afab00ebe60e6ad39117177",
"text": "Migrating to microservices (microservitization) enables optimising the autonomy, replaceability, decentralised governance and traceability of software architectures. Despite the hype for microservitization , the state of the art still lacks consensus on the definition of microservices, their properties and their modelling techniques. This paper summarises views of microservices from informal literature to reflect on the foundational context of this paradigm shift. A strong foundational context can advance our understanding of microservitization and help guide software architects in addressing its design problems. One such design problem is finalising the optimal level of granularity of a microservice architecture. Related design trade-offs include: balancing the size and number of microservices in an architecture and balancing the nonfunctional requirement satisfaction levels of the individual microservices as well as their satisfaction for the overall system. We propose how self-adaptivity can assist in addressing these design trade-offs and discuss some of the challenges such a selfadaptive solution. We use a hypothetical online movie streaming system to motivate these design trade-offs. A solution roadmap is presented in terms of the phases of a feedback control loop.",
"title": ""
},
{
"docid": "e3bb16dfbe54599c83743e5d7f1facc6",
"text": "Testosterone-dependent secondary sexual characteristics in males may signal immunological competence and are sexually selected for in several species,. In humans, oestrogen-dependent characteristics of the female body correlate with health and reproductive fitness and are found attractive. Enhancing the sexual dimorphism of human faces should raise attractiveness by enhancing sex-hormone-related cues to youth and fertility in females,, and to dominance and immunocompetence in males,,. Here we report the results of asking subjects to choose the most attractive faces from continua that enhanced or diminished differences between the average shape of female and male faces. As predicted, subjects preferred feminized to average shapes of a female face. This preference applied across UK and Japanese populations but was stronger for within-population judgements, which indicates that attractiveness cues are learned. Subjects preferred feminized to average or masculinized shapes of a male face. Enhancing masculine facial characteristics increased both perceived dominance and negative attributions (for example, coldness or dishonesty) relevant to relationships and paternal investment. These results indicate a selection pressure that limits sexual dimorphism and encourages neoteny in humans.",
"title": ""
}
] |
scidocsrr
|
532f1fa097be66f7ed8456dab410ca86
|
Adaptive nonlinear hierarchical control of a quad tilt-wing UAV
|
[
{
"docid": "8de43a1cbdd9d5157aee6a67eca408d3",
"text": "This paper presents two types of nonlinear controllers for an autonomous quadrotor helicopter. One type, a feedback linearization controller involves high-order derivative terms and turns out to be quite sensitive to sensor noise as well as modeling uncertainty. The second type involves a new approach to an adaptive sliding mode controller using input augmentation in order to account for the underactuated property of the helicopter, sensor noise, and uncertainty without using control inputs of large magnitude. The sliding mode controller performs very well under noisy conditions, and adaptation can effectively estimate uncertainty such as ground effects.",
"title": ""
},
{
"docid": "adc9e237e2ca2467a85f54011b688378",
"text": "Quadrotors are rapidly emerging as a popular platform for unmanned aerial vehicle (UAV) research, due to the simplicity of their construction and maintenance, their ability to hover, and their vertical take off and landing (VTOL) capability. Current designs have often considered only nominal operating conditions for vehicle control design. This work seeks to address issues that arise when deviating significantly from the hover flight regime. Aided by well established research for helicopter flight control, four separate aerodynamic effects are investigated as they pertain to quadrotor flight. The effects result from either translational or vertical vehicular velocity components, and cause both moments that affect attitude control and thrust variation that affects altitude control. Where possible, a theoretical development is first presented, and is then validated through both thrust test stand measurements and vehicle flight tests using the Stanford Testbed of Autonomous Rotorcraft for Multi-Agent Control (STARMAC) quadrotor helicopter. The results have enabled improved controller tracking throughout the flight envelope, including at higher speeds and in gusting winds.",
"title": ""
}
] |
[
{
"docid": "61f0c91688994adf947f4cc61718421a",
"text": "This article reports on experiences and lessons learned during incremental migration and architectural refactoring of a commercial mobile back end as a service to microservices architecture. It explains how the researchers adopted DevOps and how this facilitated a smooth migration.",
"title": ""
},
{
"docid": "c9b221d052490f106ea9c6bc58b75c27",
"text": "Food logging is recommended by dieticians for prevention and treatment of obesity, but currently available mobile applications for diet tracking are often too difficult and time-consuming for patients to use regularly. For this reason, we propose a novel approach to food journaling that uses speech and language understanding technology in order to enable efficient self-assessment of energy and nutrient consumption. This paper presents ongoing language understanding experiments conducted as part of a larger effort to create a nutrition dialogue system that automatically extracts food concepts from a user's spoken meal description. We first summarize the data collection and annotation of food descriptions performed via Amazon Mechanical Turk AMT, for both a written corpus and spoken data from an in-domain speech recognizer. We show that the addition of word vector features improves conditional random field CRF performance for semantic tagging of food concepts, achieving an average F1 test score of 92.4 on written data; we also demonstrate that a convolutional neural network CNN with no hand-crafted features outperforms the best CRF on spoken data, achieving an F1 test score of 91.3. We illustrate two methods for associating foods with properties: segmenting meal descriptions with a CRF, and a complementary method that directly predicts associations with a feed-forward neural network. Finally, we conduct an end-to-end system evaluation through an AMT user study with worker ratings of 83% semantic tagging accuracy.",
"title": ""
},
{
"docid": "03daa46354d26c4a8aeabbe88fd2cb37",
"text": "The rapid evolution of Internet-of-Things (IoT) technologies has led to an emerging need to make them smarter. A variety of applications now run simultaneously on an ARM-based processor. For example, devices on the edge of the Internet are provided with higher horsepower to be entrusted with storing, processing and analyzing data collected from IoT devices. This significantly improves efficiency and reduces the amount of data that needs to be transported to the cloud for data processing, analysis and storage. However, commodity OSes are prone to compromise. Once they are exploited, attackers can access the data on these devices. Since the data stored and processed on the devices can be sensitive, left untackled, this is particularly disconcerting. In this paper, we propose a new system, TrustShadow that shields legacy applications from untrusted OSes. TrustShadow takes advantage of ARM TrustZone technology and partitions resources into the secure and normal worlds. In the secure world, TrustShadow constructs a trusted execution environment for security-critical applications. This trusted environment is maintained by a lightweight runtime system that coordinates the communication between applications and the ordinary OS running in the normal world. The runtime system does not provide system services itself. Rather, it forwards requests for system services to the ordinary OS, and verifies the correctness of the responses. To demonstrate the efficiency of this design, we prototyped TrustShadow on a real chip board with ARM TrustZone support, and evaluated its performance using both microbenchmarks and real-world applications. We showed TrustShadow introduces only negligible overhead to real-world applications.",
"title": ""
},
{
"docid": "0ea3451556904a534352cc7cb90b70a9",
"text": "Policy agenda research is concerned with measuring the policymaker activities. Topic classification has proven a valuable tool for policy agenda research. However, manual topic coding is extremely costly and time-consuming. Supervised topic classification offers a cost-effective and reliable alternative, yet it introduces new challenges, the most significant of which are the training set coding, classifier design, and accuracy-efficiency trade-off. In this work, we address these challenges in the context of the recently launched Croatian Policy Agendas project. We describe a new policy agenda dataset, explore the many system design choices, and report on the insights gained. Our best-performing model reaches 77% and 68% of F1-score for major topics and subtopics, respectively.",
"title": ""
},
{
"docid": "c46edb8a67c10ba5819a5eeeb0e62905",
"text": "One of the most challenging projects in information systems is extracting information from unstructured texts, including medical document classification. I am developing a classification algorithm that classifies a medical document by analyzing its content and categorizing it under predefined topics from the Medical Subject Headings (MeSH). I collected a corpus of 50 full-text journal articles (N=50) from MEDLINE, which were already indexed by experts based on MeSH. Using natural language processing (NLP), my algorithm classifies the collected articles under MeSH subject headings. I evaluated the algorithm's outcome by measuring its precision and recall of resulting subject headings from the algorithm, comparing results to the actual documents' subject headings. The algorithm classified the articles correctly under 45% to 60% of the actual subject headings and got 40% to 53% of the total subject headings correct. This holds promising solutions for the global health arena to index and classify medical documents expeditiously.",
"title": ""
},
{
"docid": "9419aa1cabec77e33ccea0c448e56b20",
"text": "We consider in this paper the problem of noisy 1-bit matrix completion under a general non-uniform sampling distribution using the max-norm as a convex relaxation for the rank. A max-norm constrained maximum likelihood estimate is introduced and studied. The rate of convergence for the estimate is obtained. Information-theoretical methods are used to establish a minimax lower bound under the general sampling model. The minimax upper and lower bounds together yield the optimal rate of convergence for the Frobenius norm loss. Computational algorithms and numerical performance are also discussed.",
"title": ""
},
{
"docid": "37feedcb9e527601cb28fe59b2526ab3",
"text": "In this paper we present a covariance based tracking algorithm for intelligent video analysis to assist marine biologists in understanding the complex marine ecosystem in the Ken-Ding sub-tropical coral reef in Taiwan by processing underwater real-time videos recorded in open ocean. One of the most important aspects of marine biology research is the investigation of fish trajectories to identify events of interest such as fish preying, mating, schooling, etc. This task, of course, requires a reliable tracking algorithm able to deal with 1) the difficulties of following fish that have multiple degrees of freedom and 2) the possible varying conditions of the underwater environment. To accommodate these needs, we have developed a tracking algorithm that exploits covariance representation to describe the object’s appearance and statistical information and also to join different types of features such as location, color intensities, derivatives, etc. The accuracy of the algorithm was evaluated by using hand-labeled ground truth data on 30000 frames belonging to ten different videos, achieving an average performance of about 94%, estimated using multiple ratios that provide indication on how good is a tracking algorithm both globally (e.g. counting objects in a fixed range of time) and locally (e.g. in distinguish occlusions among objects).",
"title": ""
},
{
"docid": "e896b306c5282da3b0fd58aaf635c027",
"text": "In June 2011 the U.S. Supreme Court ruled that video games enjoy full free speech protections and that the regulation of violent game sales to minors is unconstitutional. The Supreme Court also referred to psychological research on violent video games as \"unpersuasive\" and noted that such research contains many methodological flaws. Recent reviews in many scholarly journals have come to similar conclusions, although much debate continues. Given past statements by the American Psychological Association linking video game and media violence with aggression, the Supreme Court ruling, particularly its critique of the science, is likely to be shocking and disappointing to some psychologists. One possible outcome is that the psychological community may increase the conclusiveness of their statements linking violent games to harm as a form of defensive reaction. However, in this article the author argues that the psychological community would be better served by reflecting on this research and considering whether the scientific process failed by permitting and even encouraging statements about video game violence that exceeded the data or ignored conflicting data. Although it is likely that debates on this issue will continue, a move toward caution and conservatism as well as increased dialogue between scholars on opposing sides of this debate will be necessary to restore scientific credibility. The current article reviews the involvement of the psychological science community in the Brown v. Entertainment Merchants Association case and suggests that it might learn from some of the errors in this case for the future.",
"title": ""
},
{
"docid": "934ca8aa2798afd6e7cd4acceeed839a",
"text": "This paper begins with an argument that most measure development in the social sciences, with its reliance on correlational techniques as a tool, falls short of the requirements for constructing meaningful, unidimensional measures of human attributes. By demonstrating how rating scales are ordinal-level data, we argue the necessity of converting these to equal-interval units to develop a measure that is both qualitatively and quantitatively defensible. This requires that the empirical results and theoretical explanation are questioned and adjusted at each step of the process. In our response to the reviewers, we describe how this approach was used to develop the Game Engagement Questionnaire (GEQ), including its emphasis on examining a continuum of involvement in violent video games. The GEQ is an empirically sound measure focused on one player characteristic that may be important in determining game influence.",
"title": ""
},
{
"docid": "a8a4bad208ee585ae4b4a0b3c5afe97a",
"text": "English-speaking children with specific language impairment (SLI) are known to have particular difficulty with the acquisition of grammatical morphemes that carry tense and agreement features, such as the past tense -ed and third-person singular present -s. In this study, an Extended Optional Infinitive (EOI) account of SLI is evaluated. In this account, -ed, -s, BE, and DO are regarded as finiteness markers. This model predicts that finiteness markers are omitted for an extended period of time for nonimpaired children, and that this period will be extended for a longer time in children with SLI. At the same time, it predicts that if finiteness markers are present, they will be used correctly. These predictions are tested in this study. Subjects were 18 5-year-old children with SLI with expressive and receptive language deficits and two comparison groups of children developing language normally: 22 CA-equivalent (5N) and 20 younger, MLU-equivalent children (3N). It was found that the children with SLI used nonfinite forms of lexical verbs, or omitted BE and DO, more frequently than children in the 5N and 3N groups. At the same time, like the normally developing children, when the children with SLI marked finiteness, they did so appropriately. Most strikingly, the SLI group was highly accurate in marking agreement on BE and DO forms. The findings are discussed in terms of the predictions of the EOI model, in comparison to other models of the grammatical limitations of children with SLI.",
"title": ""
},
{
"docid": "afae709279cd8adeda2888089872d70e",
"text": "One-class classification problemhas been investigated thoroughly for past decades. Among one of themost effective neural network approaches for one-class classification, autoencoder has been successfully applied for many applications. However, this classifier relies on traditional learning algorithms such as backpropagation to train the network, which is quite time-consuming. To tackle the slow learning speed in autoencoder neural network, we propose a simple and efficient one-class classifier based on extreme learning machine (ELM).The essence of ELM is that the hidden layer need not be tuned and the output weights can be analytically determined, which leads to much faster learning speed.The experimental evaluation conducted on several real-world benchmarks shows that the ELM based one-class classifier can learn hundreds of times faster than autoencoder and it is competitive over a variety of one-class classification methods.",
"title": ""
},
{
"docid": "8cb33cec31601b096ff05426e5ffa848",
"text": "Efficient actuation control of flapping-wing microrobots requires a low-power frequency reference with good absolute accuracy. To meet this requirement, we designed a fully-integrated 10MHz relaxation oscillator in a 40nm CMOS process. By adaptively biasing the continuous-time comparator, we are able to achieve a power consumption of 20μW, a 68% reduction to the conventional fixed bias design. A built-in self-calibration controller enables fast post-fabrication calibration of the clock frequency. Measurements show a frequency drift of 1.2% as the battery voltage changes from 3V to 4.1V.",
"title": ""
},
{
"docid": "57c090eaab37e615b564ef8451412962",
"text": "Variational inference is an umbrella term for algorithms which cast Bayesian inference as optimization. Classically, variational inference uses the Kullback-Leibler divergence to define the optimization. Though this divergence has been widely used, the resultant posterior approximation can suffer from undesirable statistical properties. To address this, we reexamine variational inference from its roots as an optimization problem. We use operators, or functions of functions, to design variational objectives. As one example, we design a variational objective with a Langevin-Stein operator. We develop a black box algorithm, operator variational inference (opvi), for optimizing any operator objective. Importantly, operators enable us to make explicit the statistical and computational tradeoffs for variational inference. We can characterize different properties of variational objectives, such as objectives that admit data subsampling—allowing inference to scale to massive data—as well as objectives that admit variational programs—a rich class of posterior approximations that does not require a tractable density. We illustrate the benefits of opvi on a mixture model and a generative model of images.",
"title": ""
},
{
"docid": "a3c0a5a570c9c7d4fda363c6b8f792c5",
"text": "How do children identify promising hypotheses worth testing? Many studies have shown that preschoolers can use patterns of covariation together with prior knowledge to learn causal relationships. However, covariation data are not always available and myriad hypotheses may be commensurate with substantive knowledge about content domains. We propose that children can identify high-level abstract features common to effects and their candidate causes and use these to guide their search. We investigate children’s sensitivity to two such high-level features — proportion and dynamics, and show that preschoolers can use these to link effects and candidate causes, even in the absence of other disambiguating information.",
"title": ""
},
{
"docid": "d50c31e9b6ae64adc55a0c6fddb869cb",
"text": "Dynamic simulation model of the actuator with two pneumatic artificial muscles in antagonistic connection was designed and built in Matlab Simulink environment. The basis for this simulation model was dynamic model of the pneumatic actuator based on advanced geometric muscle model. The main dynamics characteristics of such actuator were obtained by model simulation, as for example muscle force change, pressure change in muscle, arm position of the actuator. Simulation results will be used in design of control system of such actuator using model reference adaptive controller.",
"title": ""
},
{
"docid": "1a638cef61762f6399df012e57b32998",
"text": "Recurrent neural networks as fundamentally different neural network from feed-forward architectures was investigated for modelling of non linear behaviour of financial markets. Recurrent neural networks could be configured with the correct choice of parameters such as the number of neurons, the number of epochs, the amount of data and their relationship with the training data for predictions of financial markets. By exploring of learning and forecasting of the recurrent neural networks is observed the same effect: better learning, which often is described by the root mean square error does not guarantee a better prediction. There are such a recurrent neural networks settings where the best results of non linear time series forecasting could be obtained. New method of orthogonal input data was proposed, which improve process of EVOLINO RNN learning and forecasting. Citations: Nijolė Maknickienė, Aleksandras Vytautas Rutkauskas, Algirdas Maknickas. Investigation of Financial Market Prediction by Recurrent Neural Network – Innovative Infotechnologies for Science, Business and Education, ISSN 2029-1035 – 2(11) 2011 – Pp. 3-8.",
"title": ""
},
{
"docid": "d2feed22afd1b6702ff4a8ebe160a5d7",
"text": "Contactless payment systems represent cashless payments that do not require physical contact between the devices used in consumer payment and POS terminals by the merchant. Radio frequency identification (RFID) devices can be embedded in the most different forms, as the form of cards, key rings, built into a watch, mobile phones. This type of payment supports the three largest payment system cards: Visa (Visa Contactless), MasterCard (MasterCard PayPass) and American Express (ExpressPay). All these products are compliant with international ISO 14443 standard, which provides a unique system for payment globally. Implementation of contactless payment systems are based on same infrastructure that exists for the payment cards with magnetic strips and does not require additional investments by the firm and financial institutions, other than upgrading the existing POS terminals. Technological solutions used for the implementation are solutions based on ISO 14443 standard, Sony FeliCa technology, RFID tokens and NFC (Near Field Communication) systems. This paper describes the advantages of introducing contactless payment system based on RF technology through pilot projects conducted by VISA, MasterCard and American Express Company in order to confirm in practice the applicability of this technology.",
"title": ""
},
{
"docid": "52504a4825bf773ced200a675d291dde",
"text": "Natural Language Generation (NLG) is defined as the systematic approach for producing human understandable natural language text based on nontextual data or from meaning representations. This is a significant area which empowers human-computer interaction. It has also given rise to a variety of theoretical as well as empirical approaches. This paper intends to provide a detailed overview and a classification of the state-of-the-art approaches in Natural Language Generation. The paper explores NLG architectures and tasks classed under document planning, micro-planning and surface realization modules. Additionally, this paper also identifies the gaps existing in the NLG research which require further work in order to make NLG a widely usable technology.",
"title": ""
},
{
"docid": "ea92d0563e89a4cd7cfcfe6fc690ed09",
"text": "At present, the vast majority of building blocks, techniques, and architectures for deep learning are based on real-valued operations and representations. However, recent work on recurrent neural networks and older fundamental theoretical analysis suggests that complex numbers could have a richer representational capacity and could also facilitate noise-robust memory retrieval mechanisms. Despite their attractive properties and potential for opening up entirely new neural architectures, complex-valued deep neural networks have been marginalized due to the absence of the building blocks required to design such models. In this work, we provide the key atomic components for complex-valued deep neural networks and apply them to convolutional feed-forward networks and convolutional LSTMs. More precisely, we rely on complex convolutions and present algorithms for complex batch-normalization, complex weight initialization strategies for complex-valued neural nets and we use them in experiments with end-to-end training schemes. We demonstrate that such complex-valued models are competitive with their realvalued counterparts. We test deep complex models on several computer vision tasks, on music transcription using the MusicNet dataset and on Speech Spectrum Prediction using the TIMIT dataset. We achieve state-of-the-art performance on these audio-related tasks.",
"title": ""
},
{
"docid": "09f3bb814e259c74f1c42981758d5639",
"text": "PURPOSE OF REVIEW\nThe application of artificial intelligence in the diagnosis of obstructive lung diseases is an exciting phenomenon. Artificial intelligence algorithms work by finding patterns in data obtained from diagnostic tests, which can be used to predict clinical outcomes or to detect obstructive phenotypes. The purpose of this review is to describe the latest trends and to discuss the future potential of artificial intelligence in the diagnosis of obstructive lung diseases.\n\n\nRECENT FINDINGS\nMachine learning has been successfully used in automated interpretation of pulmonary function tests for differential diagnosis of obstructive lung diseases. Deep learning models such as convolutional neural network are state-of-the art for obstructive pattern recognition in computed tomography. Machine learning has also been applied in other diagnostic approaches such as forced oscillation test, breath analysis, lung sound analysis and telemedicine with promising results in small-scale studies.\n\n\nSUMMARY\nOverall, the application of artificial intelligence has produced encouraging results in the diagnosis of obstructive lung diseases. However, large-scale studies are still required to validate current findings and to boost its adoption by the medical community.",
"title": ""
}
] |
scidocsrr
|
6dc764540e1d815be5e87a4a467d1dd2
|
A Family of Droids: Analyzing Behavioral Model based Android Malware Detection via Static and Dynamic Analysis
|
[
{
"docid": "9728b73d9b5075b5b0ee878ddfc9379a",
"text": "The security research community has invested significant effort in improving the security of Android applications over the past half decade. This effort has addressed a wide range of problems and resulted in the creation of many tools for application analysis. In this article, we perform the first systematization of Android security research that analyzes applications, characterizing the work published in more than 17 top venues since 2010. We categorize each paper by the types of problems they solve, highlight areas that have received the most attention, and note whether tools were ever publicly released for each effort. Of the released tools, we then evaluate a representative sample to determine how well application developers can apply the results of our community’s efforts to improve their products. We find not only that significant work remains to be done in terms of research coverage but also that the tools suffer from significant issues ranging from lack of maintenance to the inability to produce functional output for applications with known vulnerabilities. We close by offering suggestions on how the community can more successfully move forward.",
"title": ""
},
{
"docid": "982345cc5c9aee7d1326938c1d9f7784",
"text": "With the integration of mobile devices into daily life, smartphones are privy to increasing amounts of sensitive information. Sophisticated mobile malware, particularly Android malware, acquire or utilize such data without user consent. It is therefore essential to devise effective techniques to analyze and detect these threats. This article presents a comprehensive survey on leading Android malware analysis and detection techniques, and their effectiveness against evolving malware. This article categorizes systems by methodology and date to evaluate progression and weaknesses. This article also discusses evaluations of industry solutions, malware statistics, and malware evasion techniques and concludes by supporting future research paths.",
"title": ""
},
{
"docid": "11ce5bca8989b3829683430abe2aee47",
"text": "Android is the most popular smartphone operating system with a market share of 80%, but as a consequence, also the platform most targeted by malware. To deal with the increasing number of malicious Android apps in the wild, malware analysts typically rely on analysis tools to extract characteristic information about an app in an automated fashion. While the importance of such tools has been addressed by the research community, the resulting prototypes remain limited in terms of analysis capabilities and availability. In this paper we present ANDRUBIS, a fully automated, publicly available and comprehensive analysis system for Android apps. ANDRUBIS combines static analysis with dynamic analysis on both Dalvik VM and system level, as well as several stimulation techniques to increase code coverage. With ANDRUBIS, we collected a dataset of over 1,000,000 Android apps, including 40% malicious apps. This dataset allows us to discuss trends in malware behavior observed from apps dating back as far as 2010, as well as to present insights gained from operating ANDRUBIS as a publicly available service for the past two years.",
"title": ""
}
] |
[
{
"docid": "a09d03e2de70774f443d2da88a32b555",
"text": "Recently, CNN reported on the future of brain-computer interfaces (BCIs) [1]. Brain-computer interfaces are devices that process a user’s brain signals to allow direct communication and interaction with the environment. BCIs bypass the normal neuromuscular output pathways and rely on digital signal processing and machine learning to translate brain signals to action (Figure 1). Historically, BCIs were developed with biomedical applications in mind, such as restoring communication in completely paralyzed individuals and replacing lost motor function. More recent applications have targeted non-disabled individuals by exploring the use of BCIs as a novel input device for entertainment and gaming.",
"title": ""
},
{
"docid": "22947cc8f2b1be70df10cb6adf210fc5",
"text": "GANS are powerful generative models that are able to model the manifold of natural images. We leverage this property to perform manifold regularization by approximating the Laplacian norm using a Monte Carlo approximation that is easily computed with the GAN. When incorporated into the feature-matching GAN of Salimans et al. (2016), we achieve state-of-the-art results for GAN-based semisupervised learning on the CIFAR-10 dataset, with a method that is significantly easier to implement than competing methods.",
"title": ""
},
{
"docid": "a713d3aa4bbce697bc30e2ddccf75296",
"text": "BACKGROUND\nThe frequency of gender identity disorder is hard to determine; the number of gender reassignment operations and of court proceedings in accordance with the German Law on Transsexuality almost certainly do not fully reflect the underlying reality. There have been only a few studies on patient satisfaction with male-to-female gender reassignment surgery.\n\n\nMETHODS\n254 consecutive patients who had undergone male-to-female gender reassignment surgery at Essen University Hospital's Department of Urology retrospectively filled out a questionnaire about their subjective postoperative satisfaction.\n\n\nRESULTS\n119 (46.9% ) of the patients filled out and returned the questionnaires, at a mean of 5.05 years after surgery (standard deviation 1.61 years, range 1-7 years). 90.2% said their expectations for life as a woman were fulfilled postoperatively. 85.4% saw themselves as women. 61.2% were satisfied, and 26.2% very satisfied, with their outward appearance as a woman; 37.6% were satisfied, and 34.4% very satisfied, with the functional outcome. 65.7% said they were satisfied with their life as it is now.\n\n\nCONCLUSION\nThe very high rates of subjective satisfaction and the surgical outcomes indicate that gender reassignment surgery is beneficial. These findings must be interpreted with caution, however, because fewer than half of the questionnaires were returned.",
"title": ""
},
{
"docid": "0b17e52a3fd306c1e990b628d41a973f",
"text": "Electronic health records (EHRs) have contributed to the computerization of patient records so that it can be used not only for efficient and systematic medical services, but also for research on data science. In this paper, we compared disease prediction performance of generative adversarial networks (GANs) and conventional learning algorithms in combination with missing value prediction methods. As a result, the highest accuracy of 98.05% was obtained using stacked autoencoder as the missing value prediction method and auxiliary classifier GANs (AC-GANs) as the disease predicting method. Results show that the combination of stacked autoencoder and AC-GANs performs significantly greater than existing algorithms at the problem of disease prediction in which missing values and class imbalance exist.",
"title": ""
},
{
"docid": "bd8b0a2b060594d8513f43fbfe488443",
"text": "Part 1 of the paper presents the detection and sizing capability based on image display of sectorial scan. Examples are given for different types of weld defects: toe cracks, internal porosity, side-wall lack of fusion, underbead crack, inner-surface breaking cracks, slag inclusions, incomplete root penetration and internal cracks. Based on combination of S-scan and B-scan plotted into 3-D isometric part, the defect features could be reconstructed and measured into a draft package. Comparison between plotted data and actual defect sizes are also presented.",
"title": ""
},
{
"docid": "2939531a61f319ace08f852f783e8734",
"text": "We pose the following question: what happens when test data not only differs from training data, but differs from it in a continually evolving way? The classic domain adaptation paradigm considers the world to be separated into stationary domains with clear boundaries between them. However, in many real-world applications, examples cannot be naturally separated into discrete domains, but arise from a continuously evolving underlying process. Examples include video with gradually changing lighting and spam email with evolving spammer tactics. We formulate a novel problem of adapting to such continuous domains, and present a solution based on smoothly varying embeddings. Recent work has shown the utility of considering discrete visual domains as fixed points embedded in a manifold of lower-dimensional subspaces. Adaptation can be achieved via transforms or kernels learned between such stationary source and target subspaces. We propose a method to consider non-stationary domains, which we refer to as Continuous Manifold Adaptation (CMA). We treat each target sample as potentially being drawn from a different subspace on the domain manifold, and present a novel technique for continuous transform-based adaptation. Our approach can learn to distinguish categories using training data collected at some point in the past, and continue to update its model of the categories for some time into the future, without receiving any additional labels. Experiments on two visual datasets demonstrate the value of our approach for several popular feature representations.",
"title": ""
},
{
"docid": "27f7025c2ee602b5ad2dee830836bbef",
"text": "Arsenic contamination of rice is widespread, but the rhizosphere processes influencing arsenic attenuation remain unresolved. In particular, the formation of Fe plaque around rice roots is thought to be an important barrier to As uptake, but the relative importance of this mechanism is not well characterized. Here we elucidate the colocalization of As species and Fe on rice roots with variable Fe coatings; we used a combination of techniques--X-ray fluorescence imaging, μXANES, transmission X-ray microscopy, and tomography--for this purpose. Two dominant As species were observed in fine roots-inorganic As(V) and As(III) -with minor amounts of dimethylarsinic acid (DMA) and arsenic trisglutathione (AsGlu(3)). Our investigation shows that variable Fe plaque formation affects As entry into rice roots. In roots with Fe plaque, As and Fe were strongly colocated around the root; however, maximal As and Fe were dissociated and did not encapsulate roots that had minimal Fe plaque. Moreover, As was not exclusively associated with Fe plaque in the rice root system; Fe plaque does not coat many of the young roots or the younger portion of mature roots. Young, fine roots, important for solute uptake, have little to no iron plaque. Thus, Fe plaque does not directly intercept (and hence restrict) As supply to and uptake by rice roots but rather serves as a bulk scavenger of As predominantly near the root base.",
"title": ""
},
{
"docid": "79f55fb1f121e1184e749558c97f000b",
"text": "6a. BASIC TECHNIQUES FOR RF POWER AMPLIFICATION RF power amplifiers are commonly designated as classes A, B, C, D, E, and F [19]. All but class A employ various nonlinear, switching, and wave-shaping techniques. Classes of operation differ not in only the method of operation and efficiency, but also in their power-output capability. The power-output capability (“transistor utilization factor”) is defined as output power per transistor normalized for peak drain voltage and current of 1 V and 1 A, respectively. The basic topologies (Figures 7, 8 and 9) are single-ended, transformer-coupled, and complementary. The drain voltage and current waveforms of selected ideal PAs are shown in Figure 10.",
"title": ""
},
{
"docid": "c9aa03ce1656afdff95d426d9d4f5644",
"text": "One fundamental problem of the convolutional neural network (CNN) is catastrophic forgetting, which occurs when new object classes and data are added while the original dataset is not available any more. Training the network only using the new dataset deteriorates the performance with respect to the old dataset. To overcome this problem, we propose an expanded network architecture, called the ExpandNet, to enhance the CNN incremental learning capability. Our solution keeps filters of the original networks on one hand, yet adds additional filters to the convolutional layers as well as the fully connected layers on the other hand. The proposed new architecture does not need any information of the original dataset, and it is trained using the new dataset only. Extensive evaluations based on the CIFAR −10 and the CIFAR −100 datasets show that the proposed method has a slower forgetting rate as compared to several existing incremental learning networks.",
"title": ""
},
{
"docid": "678b90e0a7fdc1166928ff952b603f29",
"text": "Semantic search promises to produce precise answers to user queries by taking advantage of the availability of explicit semantics of information in the context of the semantic web. Existing tools have been primarily designed to enhance the performance of traditional search technologies but with little support for naive users, i.e., ordinary end users who are not necessarily familiar with domain specific semantic data, ontologies, or SQL-like query languages. This paper presents SemSearch, a search engine, which pays special attention to this issue by hiding the complexity of semantic search from end users and making it easy to use and effective. In contrast with existing semantic-based keyword search engines which typically compromise their capability of handling complex user queries in order to overcome the problem of knowledge overhead, SemSearch not only overcomes the problem of knowledge overhead but also supports complex queries. Further, SemSearch provides comprehensive means to produce precise answers that on the one hand satisfy user queries and on the other hand are self-explanatory and understandable by end users. A prototype of the search engine has been implemented and applied in the semantic web portal of our lab. An initial evaluation shows promising results.",
"title": ""
},
{
"docid": "784f3100dbd852b249c0e9b0761907f1",
"text": "The bi-directional beam from an equiangular spiral antenna (EAS) is changed to a unidirectional beam using an electromagnetic band gap (EBG) reflector. The antenna height, measured from the upper surface of the EBG reflector to the spiral arms, is chosen to be extremely small to realize a low-profile antenna: 0.07 wavelength at the lowest analysis frequency of 3 GHz. The analysis shows that the EAS backed by the EBG reflector does not reproduce the inherent wideband axial ratio characteristic observed when the EAS is isolated in free space. The deterioration in the axial ratio is examined by decomposing the total radiation field into two field components: one component from the equiangular spiral and the other from the EBG reflector. The examination reveals that the amplitudes and phases of these two field components do not satisfy the constructive relationship necessary for circularly polarized radiation. Based on this finding, next, the EBG reflector is modified by gradually removing the patch elements from the center region of the reflector, thereby satisfying the required constructive relationship between the two field components. This equiangular spiral with a modified EBG reflector shows wideband characteristics with respect to the axial ratio, input impedance and gain within the design frequency band (4-9 GHz). Note that, for comparison, the antenna characteristics for an EAS isolated in free space and an EAS backed by a perfect electric conductor are also presented.",
"title": ""
},
{
"docid": "0af8bbdda9482f24dfdfc41046382e1b",
"text": "In this paper, we have examined the effectiveness of \"style matrix\" which is used in the works on style transfer and texture synthesis by Gatys et al. in the context of image retrieval as image features. A style matrix is presented by Gram matrix of the feature maps in a deep convolutional neural network. We proposed a style vector which are generated from a style matrix with PCA dimension reduction. In the experiments, we evaluate image retrieval performance using artistic images downloaded from Wikiarts.org regarding both artistic styles ans artists. We have obtained 40.64% and 70.40% average precision for style search and artist search, respectively, both of which outperformed the results by common CNN features. In addition, we found PCA-compression boosted the performance.",
"title": ""
},
{
"docid": "dab44174c41a470421d4f7337aa73b9e",
"text": "UNLABELLED\nRetinopathy of prematurity (ROP) is a blinding disease, initiated by delayed retinal vascular growth after premature birth. There are both oxygen-regulated and non-oxygen-regulated factors, which contribute to both normal vascular development and retinal neovascularization. One important oxygen-regulated factor, critical to both phases of ROP, is vascular endothelial growth factor (VEGF). A critical non oxygen-regulated growth factor is insulin-like growth factor (IGF-1). In knockout mice, lack of IGF-1 prevents normal retinal vascular growth, despite the presence of VEGF, important to vessel development. In vitro, low IGF-1 prevents vascular endothelial growth factor-induced activation of Akt, a kinase critical for vascular endothelial cell survival. Premature infants who develop ROP have lower levels of serum IGF-1 than age-matched infants without disease.\n\n\nCONCLUSION\nIGF-1 is critical to normal vascular development. Low IGF-1 predicts ROP and restoration of IGF-1 to normal levels may prevent ROP.",
"title": ""
},
{
"docid": "e4c493697d9bece8daec6b2dd583e6bb",
"text": "High dimensionality of the feature space is one of the most important concerns in text classification problems due to processing time and accuracy considerations. Selection of distinctive features is therefore essential for text classification. This study proposes a novel filter based probabilistic feature selection method, namely distinguishing feature selector (DFS), for text classification. The proposed method is compared with well-known filter approaches including chi square, information gain, Gini index and deviation from Poisson distribution. The comparison is carried out for different datasets, classification algorithms, and success measures. Experimental results explicitly indicate that DFS offers a competitive performance with respect to the abovementioned approaches in terms of classification accuracy, dimension reduction rate and processing time.",
"title": ""
},
{
"docid": "0a143c2d4af3cc726964a90927556399",
"text": "Humans prefer to interact with each other using speech. Since this is the most natural mode of communication, the humans also want to interact with machines using speech only. So, automatic speech recognition has gained a lot of popularity. Different approaches for speech recognition exists like Hidden Markov Model (HMM), Dynamic Time Warping (DTW), Vector Quantization (VQ), etc. This paper uses Neural Network (NN) along with Mel Frequency Cepstrum Coefficients (MFCC) for speech recognition. Mel Frequency Cepstrum Coefiicients (MFCC) has been used for the feature extraction of speech. This gives the feature of the waveform. For pattern matching FeedForward Neural Network with Back propagation algorithm has been applied. The paper analyzes the various training algorithms present for training the Neural Network and uses train scg for the experiment. The work has been done on MATLAB and experimental results show that system is able to recognize words at sufficiently high accuracy.",
"title": ""
},
{
"docid": "cafdc8bb8b86171026d5a852e7273486",
"text": "A majority of the existing algorithms which mine graph datasets target complete, frequent sub-graph discovery. We describe the graph-based data mining system Subdue which focuses on the discovery of sub-graphs which are not only frequent but also compress the graph dataset, using a heuristic algorithm. The rationale behind the use of a compression-based methodology for frequent pattern discovery is to produce a fewer number of highly interesting patterns than to generate a large number of patterns from which interesting patterns need to be identified. We perform an experimental comparison of Subdue with the graph mining systems gSpan and FSG on the Chemical Toxicity and the Chemical Compounds datasets that are provided with gSpan. We present results on the performance on the Subdue system on the Mutagenesis and the KDD 2003 Citation Graph dataset. An analysis of the results indicates that Subdue can efficiently discover best-compressing frequent patterns which are fewer in number but can be of higher interest.",
"title": ""
},
{
"docid": "6abe1b7806f6452bbcc087b458a7ef96",
"text": "We demonstrate distributed, online, and real-time cooperative localization and mapping between multiple robots operating throughout an unknown environment using indirect measurements. We present a novel Expectation Maximization (EM) based approach to efficiently identify inlier multi-robot loop closures by incorporating robot pose uncertainty, which significantly improves the trajectory accuracy over long-term navigation. An EM and hypothesis based method is used to determine a common reference frame. We detail a 2D laser scan correspondence method to form robust correspondences between laser scans shared amongst robots. The implementation is experimentally validated using teams of aerial vehicles, and analyzed to determine its accuracy, computational efficiency, scalability to many robots, and robustness to varying environments. We demonstrate through multiple experiments that our method can efficiently build maps of large indoor and outdoor environments in a distributed, online, and real-time setting.",
"title": ""
},
{
"docid": "cdb1effc893b1321c377f6cfecefbfaf",
"text": "A hamiltonian walk of a graph is a shortest closed walk that passes through every vertex at least once, and the length is the total number of traversed edges. The hamiltonian walk problem in which one would like to find a hamiltonian walk of a given graph is NP-complete. The problem is a generalized hamiltonian cycle problem and is a special case of the traveling salesman problem. Employing the techniques of divide-and-conquer and augmentation, we present an approximation algorithm for the problem on maximal planar graphs. The algorithm finds, in Ow2) time, a closed spanning walk of a given arbitrary maximal planar graph, and the length of the obtained walk is at most i@ 3) if the graph has p (Z 9) vertices. Hence the worst-case bound is i.",
"title": ""
},
{
"docid": "990067864c123b45e5c3d06ef1a0cf7d",
"text": "BACKGROUND\nRetrospective single-centre series have shown the feasibility of sentinel lymph-node (SLN) identification in endometrial cancer. We did a prospective, multicentre cohort study to assess the detection rate and diagnostic accuracy of the SLN procedure in predicting the pathological pelvic-node status in patients with early stage endometrial cancer.\n\n\nMETHODS\nPatients with International Federation of Gynecology and Obstetrics (FIGO) stage I-II endometrial cancer had pelvic SLN assessment via cervical dual injection (with technetium and patent blue), and systematic pelvic-node dissection. All lymph nodes were histopathologically examined and SLNs were serial sectioned and examined by immunochemistry. The primary endpoint was estimation of the negative predictive value (NPV) of sentinel-node biopsy per hemipelvis. This is an ongoing study for which recruitment has ended. The study is registered with ClinicalTrials.gov, number NCT00987051.\n\n\nFINDINGS\nFrom July 5, 2007, to Aug 4, 2009, 133 patients were enrolled at nine centres in France. No complications occurred after injection of technetium colloid and no anaphylactic reactions were noted after patent blue injection. No surgical complications were reported during SLN biopsy, including procedures that involved conversion to open surgery. At least one SLN was detected in 111 of the 125 eligible patients. 19 of 111 (17%) had pelvic-lymph-node metastases. Five of 111 patients (5%) had an associated SLN in the para-aortic area. Considering the hemipelvis as the unit of analysis, NPV was 100% (95% CI 95-100) and sensitivity 100% (63-100). Considering the patient as the unit of analysis, three patients had false-negative results (two had metastatic nodes in the contralateral pelvic area and one in the para-aortic area), giving an NPV of 97% (95% CI 91-99) and sensitivity of 84% (62-95). All three of these patients had type 2 endometrial cancer. Immunohistochemistry and serial sectioning detected metastases undiagnosed by conventional histology in nine of 111 (8%) patients with detected SLNs, representing nine of the 19 patients (47%) with metastases. SLN biopsy upstaged 10% of patients with low-risk and 15% of those with intermediate-risk endometrial cancer.\n\n\nINTERPRETATION\nSLN biopsy with cervical dual labelling could be a trade-off between systematic lymphadenectomy and no dissection at all in patients with endometrial cancer of low or intermediate risk. Moreover, our study suggests that SLN biopsy could provide important data to tailor adjuvant therapy.\n\n\nFUNDING\nDirection Interrégionale de Recherche Clinique, Ile-de-France, Assistance Publique-Hôpitaux de Paris.",
"title": ""
}
] |
scidocsrr
|
1c831b50481cb9bd4a3cce9d068f0cc0
|
Proposition Knowledge Graphs
|
[
{
"docid": "5f2818d3a560aa34cc6b3dbfd6b8f2cc",
"text": "Open Information Extraction (IE) systems extract relational tuples from text, without requiring a pre-specified vocabulary, by identifying relation phrases and associated arguments in arbitrary sentences. However, stateof-the-art Open IE systems such as REVERB and WOE share two important weaknesses – (1) they extract only relations that are mediated by verbs, and (2) they ignore context, thus extracting tuples that are not asserted as factual. This paper presents OLLIE, a substantially improved Open IE system that addresses both these limitations. First, OLLIE achieves high yield by extracting relations mediated by nouns, adjectives, and more. Second, a context-analysis step increases precision by including contextual information from the sentence in the extractions. OLLIE obtains 2.7 times the area under precision-yield curve (AUC) compared to REVERB and 1.9 times the AUC of WOE.",
"title": ""
}
] |
[
{
"docid": "4ac34fd54fd3ac1a390d7176426e4ada",
"text": "With the increasing use of security technology, technical attacks should become more difficult leading attackers to employ social engineering as a means to obtaining unauthorized access to information. Therefore, social engineering is a potentially dangerous threat to information security. Fortunately, a number of countermeasures have been proposed to defend against it. These countermeasures include implementing policy, providing end-user and key personnel education, and performing security audits. However, most current prominent information assurance curricula do not directly address social engineering and only indirectly address the countermeasures. Amending these curricula to include social engineering as a topic may help students be better prepared for encountering social engineering threats.",
"title": ""
},
{
"docid": "25efced5063ca8c9e842c79a8d3ab073",
"text": "The best practice to prevent Cross Site Scripting (XSS) attacks is to apply encoders to sanitize untrusted data. To balance security and functionality, encoders should be applied to match the web page context, such as HTML body, JavaScript, and style sheets. A common programming error is the use of a wrong type of encoder to sanitize untrusted data, leaving the application vulnerable. We present a security unit testing approach to detect XSS vulnerabilities caused by improper encoding of untrusted data. Unit tests for the XSS vulnerability are constructed out of each web page and then evaluated by a unit test execution framework. A grammar-based attack generator is devised to automatically generate test inputs. We also propose a vulnerability repair technique that can automatically fix detected vulnerabilities in many situations. Evaluation of this approach has been conducted on an open source medical record application with over 200 web pages written in JSP.",
"title": ""
},
{
"docid": "d4398beee01d5ddda7c7bb5151693b0e",
"text": "Our goal is to refocus the question about cybersecurity research from 'is this process scientific' to 'why is this scientific process producing unsatisfactory results'. We focus on five common complaints that claim cybersecurity is not or cannot be scientific. Many of these complaints presume views associated with the philosophical school known as Logical Empiricism that more recent scholarship has largely modified or rejected. Modern philosophy of science, supported by mathematical modeling methods, provides constructive resources to mitigate all purported challenges to a science of security. Therefore, we argue the community currently practices a science of cybersecurity. A philosophy of science perspective suggests the following form of practice: structured observation to seek intelligible explanations of phenomena, evaluating explanations in many ways, with specialized fields (including engineering and forensics) constraining explanations within their own expertise, inter-translating where necessary. A natural question to pursue in future work is how collecting, evaluating, and analyzing evidence for such explanations is different in security than other sciences.",
"title": ""
},
{
"docid": "17a1f03485b74ba0f1efd76e118e2b7a",
"text": "DISC Measure, Squeezer, Categorical Data Clustering, Cosine similarity References Rishi Sayal and Vijay Kumar. V. 2011. A novel Similarity Measure for Clustering Categorical Data Sets. International Journal of Computer Application (0975-8887). Aditya Desai, Himanshu Singh and Vikram Pudi. 2011. DISC Data-Intensive Similarity Measure for Categorical Data. Pacific-Asia Conferences on Knowledge Discovery Data Mining. Shyam Boriah, Varun Chandola and Vipin Kumar. 2008. Similarity Measure for Clustering Categorical Data. Comparative Evaluation. SIAM International Conference on Data Mining-SDM. Taoying Li, Yan Chen. 2009. Fuzzy Clustering Ensemble Algorithm for partitional Categorical Data. IEEE, International conference on Business Intelligence and Financial Engineering.",
"title": ""
},
{
"docid": "c2081b44d63490f2967517558065bdf0",
"text": "The add-on battery pack in plug-in hybrid electric vehicles can be charged from an AC outlet, feed power back to the grid, provide power for electric traction, and capture regenerative energy when braking. Conventionally, three-stage bidirectional converter interfaces are used to fulfil these functions. In this paper, a single stage integrated converter is proposed based on direct AC/DC conversion theory. The proposed converter eliminates the full bridge rectifier, reduces the number of semiconductor switches and high current inductors, and improves the conversion efficiency.",
"title": ""
},
{
"docid": "829b910e2c73ee15866fc59de4884200",
"text": "Shared-memory multiprocessors are frequently used as compute servers with multiple parallel applications executing at the same time. In such environments, the efficiency of a parallel application can be significantly affected by the operating system scheduling policy. In this paper, we use detailed simulation studies to evaluate the performance of several different scheduling strategies, These include regular priority scheduling, coscheduling or gang scheduling, process control with processor partitioning, handoff scheduling, and affinity-based scheduling. We also explore tradeoffs between the use of busy-waiting and blocking synchronization primitives and their interactions with the scheduling strategies. Since effective use of caches is essential to achieving high performance, a key focus is on the impact of the scheduling strategies on the caching behavior of the applications.Our results show that in situations where the number of processes exceeds the number of processors, regular priority-based scheduling in conjunction with busy-waiting synchronization primitives results in extremely poor processor utilization. In such situations, use of blocking synchronization primitives can significantly improve performance. Process control and gang scheduling strategies are shown to offer the highest performance, and their performance is relatively independent of the synchronization method used. However, for applications that have sizable working sets that fit into the cache, process control performs better than gang scheduling. For the applications considered, the performance gains due to handoff scheduling and processor affinity are shown to be small.",
"title": ""
},
{
"docid": "d339ef4e124fdc9d64330544b7391055",
"text": "Yogic breathing is a unique method for balancing the autonomic nervous system and influencing psychologic and stress-related disorders. Part I of this series presented a neurophysiologic theory of the effects of Sudarshan Kriya Yoga (SKY). Part II will review clinical studies, our own clinical observations, and guidelines for the safe and effective use of yoga breath techniques in a wide range of clinical conditions. Although more clinical studies are needed to document the benefits of programs that combine pranayama (yogic breathing) asanas (yoga postures), and meditation, there is sufficient evidence to consider Sudarshan Kriya Yoga to be a beneficial, low-risk, low-cost adjunct to the treatment of stress, anxiety, post-traumatic stress disorder (PTSD), depression, stress-related medical illnesses, substance abuse, and rehabilitation of criminal offenders. SKY has been used as a public health intervention to alleviate PTSD in survivors of mass disasters. Yoga techniques enhance well-being, mood, attention, mental focus, and stress tolerance. Proper training by a skilled teacher and a 30-minute practice every day will maximize the benefits. Health care providers play a crucial role in encouraging patients to maintain their yoga practices.",
"title": ""
},
{
"docid": "6be6e28cf4a4a044122901fad0d2bf40",
"text": "ÐAutomatic transformation of paper documents into electronic documents requires geometric document layout analysis at the first stage. However, variations in character font sizes, text line spacing, and document layout structures have made it difficult to design a general-purpose document layout analysis algorithm for many years. The use of some parameters has therefore been unavoidable in previous methods. In this paper, we propose a parameter-free method for segmenting the document images into maximal homogeneous regions and identifying them as texts, images, tables, and ruling lines. A pyramidal quadtree structure is constructed for multiscale analysis and a periodicity measure is suggested to find a periodical attribute of text regions for page segmentation. To obtain robust page segmentation results, a confirmation procedure using texture analysis is applied to only ambiguous regions. Based on the proposed periodicity measure, multiscale analysis, and confirmation procedure, we could develop a robust method for geometric document layout analysis independent of character font sizes, text line spacing, and document layout structures. The proposed method was experimented with the document database from the University of Washington and the MediaTeam Document Database. The results of these tests have shown that the proposed method provides more accurate results than the previous ones. Index TermsÐGeometric document layout analysis, parameter-free method, periodicity estimation, multiscale analysis, page segmentation.",
"title": ""
},
{
"docid": "2e9b2eccefe56b9cbf8d5793cc3f1cbb",
"text": "This paper summarizes several classes of software cost estimation models and techniques: parametric models, expertise-based techniques, learning-oriented techniques, dynamics-based models, regression-based models, and composite-Bayesian techniques for integrating expertisebased and regression-based models. Experience to date indicates that neural-net and dynamics-based techniques are less mature than the other classes of techniques, but that all classes of techniques are challenged by the rapid pace of change in software technology. The primary conclusion is that no single technique is best for all situations, and that a careful comparison of the results of several approaches is most likely to produce realistic estimates.",
"title": ""
},
{
"docid": "5ad4b3c5905b7b716a806432b755e60b",
"text": "The formation of both germline cysts and the germinal epithelium is described during the ovary development in Cyprinus carpio. As in the undifferentiated gonad of mammals, cords of PGCs become oogonia when they are surrounded by somatic cells. Ovarian differentiation is triggered when oogonia proliferate and enter meiosis, becoming oocytes. Proliferation of single oogonium results in clusters of interconnected oocytes, the germline cysts, that are encompassed by somatic prefollicle cells and form cell nests. Both PGCs and cell nests are delimited by a basement membrane. Ovarian follicles originate from the germline cysts, about the time of meiotic arrest, as prefollicle cells surround oocytes, individualizing them. They synthesize a basement membrane and an oocyte forms a follicle. With the formation of the stroma, unspecialized mesenchymal cells differentiate, and encompass each follicle, forming the theca. The follicle, basement membrane, and theca constitute the follicle complex. Along the ventral region of the differentiating ovary, the epithelium invaginates to form the ovigerous lamellae whose developing surface epithelium, the germinal epithelium, is composed of epithelial cells, germline cysts with oogonia, oocytes, and developing follicles. The germinal epithelium rests upon a basement membrane. The follicles complexes are connected to the germinal epithelium by a shared portion of basement membrane. In the differentiated ovary, germ cell proliferation in the epithelium forms nests in which there are the germline cysts. Germline cysts, groups of cells that form from a single founder cell and are joined by intercellular bridges, are conserved throughout the vertebrates, as is the germinal epithelium.",
"title": ""
},
{
"docid": "1c5dcfe0c211d1cbf02aae76e6c39a77",
"text": "Introduction In his Harvard Business Review article in 1994, Henry Mintzberg coined strategic planning as the mere programming, i.e. the calculation of a plan. As such “strategic planning isn’t strategic thinking”, he confined (p 107). Indeed, strategic planning may actually spoil any creative strategic thinking and, hence, the possibility to create a competitive advantage. He condemned that planning itself has decreased the commitment of executives and created a disadvantageous atmosphere spit with formalized instead of creative thinking. Now, two decades later, calculating plans has become in vogue again. Since Bigdata has entered the business world, it has been celebrated as a savior in an increasing complex economic environment. In this vein, Mintzberg’s accuse has become more relevant than ever. Especially because of the globalization and internationalization of markets, companies are increasingly facing challenging business environments. Today’s markets are characterized by fierce competition, rapid changes and increasing uncertainty and complexity where Bigdata is becoming a central determinant for executives to decide on their strategic plan. Therefore, we ask: Does planning actually create a competitive advantage? What are the social and operational outcomes of strategic planning? Which environment actually favors the planning of strategies?",
"title": ""
},
{
"docid": "91a919fa526704ff9c4562ae39aceeaa",
"text": "We consider the problem of finding the shortest distance between all pairs of vertices in a complete digraph on n vertices, whose arc-lengths are non-negative random variables. We describe an algorithm which solves this problem in O(n(m + n log n)) expected time, where m is the expected number of arcs with finite length. If m is small enough, this represents a small improvement over the bound in Bloniarz [3]. We consider also the case when the arc-lengths are random variables which are independently distributed with distribution function F, where F(0) = 0 and F is differentiable at 0; for this case, we describe an algorithm which runs in O(n 2log n) expected time. In our treatment of the shortest-path problem we consider the following problem in combinatorial probability theory. A town contains n people, one of whom knows a rumour. At the first stage he tells someone chosen randomly from the town; at each stage, each person who knows the rumour tells someone else, chosen randomly from the town and independently of all other choices. Let Sn be the number of stages before the whole town knows the rumour. We show that Sn/log2n--\" 1 + loge 2 in probability as n ~ 0% and estimate the probabilities of large deviations in Sn.",
"title": ""
},
{
"docid": "4ea7fba21969fcdd2de9b4e918583af8",
"text": "Due to the explosion in the size of the WWW[1,4,5] it becomes essential to make the crawling process parallel. In this paper we present an architecture for a parallel crawler that consists of multiple crawling processes called as C-procs which can run on network of workstations. The proposed crawler is scalable, is resilient against system crashes and other event. The aim of this architecture is to efficiently and effectively crawl the current set of publically indexable web pages so that we can maximize the download rate while minimizing the overhead from parallelization",
"title": ""
},
{
"docid": "d7561aacef14a5913586b743018acb7e",
"text": "Most of all interaction tasks relevant for a general three-dimensional virtual environment can be supported by 6DOF control and grab/select input. Obviously a very efficient method is direct manipulation with bare hands, like in real environment. This paper shows the possibility to perform non-trivial tasks using only a few well-known hand gestures, so that almost no training is necessary to interact with 3D-softwares. Using this gesture interaction we have built an immersive 3D modeling system with 3D model representation based on a mesh library, which is optimized not only for real-time rendering but also accommodates for changes of both vertex positions and mesh connectivity in real-time. For performing the gesture interaction, the user's hand is marked with just four fingertipthimbles made of inexpensive material as simple as white paper. Within our scenario, the recognized hand gestures are used to select, create, manipulate and deform the meshes in a spontaneous and intuitive way. All modeling tasks are performed wirelessly through a camera/vision tracking method for the head and hand interaction.",
"title": ""
},
{
"docid": "cffca9fbd3a5c93175e06547831755e2",
"text": "Many challenges in natural language processing require generating text, including language translation, dialogue generation, and speech recognition. For all of these problems, text generation becomes more difficult as the text becomes longer. Current language models often struggle to keep track of coherence for long pieces of text. Here, we attempt to have the model construct and use an outline of the text it generates to keep it focused. We find that the usage of an outline improves perplexity. We do not find that using the outline improves human evaluation over a simpler baseline, revealing a discrepancy in perplexity and human perception. Similarly, hierarchical generation is not found to improve human evaluation scores.",
"title": ""
},
{
"docid": "232bf10d578c823b0cd98a3641ace44a",
"text": "The effect of economic globalization on the number of transnational terrorist incidents within countries is analyzed statistically, using a sample of 112 countries from 1975 to 1997. Results show that trade, foreign direct investment (FDI), and portfolio investment have no direct positive effect on transnational terrorist incidents within countries and that economic developments of a country and its top trading partners reduce the number of terrorist incidents inside the country. To the extent that trade and FDI promote economic development, they have an indirect negative effect on transnational terrorism.",
"title": ""
},
{
"docid": "f74a0c176352b8378d9f27fdf93763c9",
"text": "The future of user interfaces will be dominated by hand gestures. In this paper, we explore an intuitive hand gesture based interaction for smartphones having a limited computational capability. To this end, we present an efficient algorithm for gesture recognition with First Person View (FPV), which focuses on recognizing a four swipe model (Left, Right, Up and Down) for smartphones through single monocular camera vision. This can be used with frugal AR/VR devices such as Google Cardboard1 andWearality2 in building AR/VR based automation systems for large scale deployments, by providing a touch-less interface and real-time performance. We take into account multiple cues including palm color, hand contour segmentation, and motion tracking, which effectively deals with FPV constraints put forward by a wearable. We also provide comparisons of swipe detection with the existing methods under the same limitations. We demonstrate that our method outperforms both in terms of gesture recognition accuracy and computational time.",
"title": ""
},
{
"docid": "2f9f21740603b7a84abd57d7c7c02c11",
"text": "Emerging Non-Volatile Memory (NVM) technologies are explored as potential alternatives to traditional SRAM/DRAM-based memory architecture in future microprocessor design. One of the major disadvantages for NVM is the latency and energy overhead associated with write operations. Mitigation techniques to minimize the write overhead for NVM-based main memory architecture have been studied extensively. However, most prior work focuses on optimization techniques for NVM-based main memory itself, with little attention paid to cache management policies for the Last-Level Cache (LLC).\n In this article, we propose a Writeback-Aware Dynamic CachE (WADE) management technique to help mitigate the write overhead in NVM-based memory.<sup;>1</sup;> The proposal is based on the observation that, when dirty cache blocks are evicted from the LLC and written into NVM-based memory (with PCM as an example), the long latency and high energy associated with write operations to NVM-based memory can cause system performance/power degradation. Thus, reducing the number of writeback requests from the LLC is critical.\n The proposed WADE cache management technique tries to keep highly reused dirty cache blocks in the LLC. The technique predicts blocks that are frequently written back in the LLC. The LLC sets are dynamically partitioned into a frequent writeback list and a nonfrequent writeback list. It keeps a best size of each list in the LLC. Our evaluation shows that the technique can reduce the number of writeback requests by 16.5% for memory-intensive single-threaded benchmarks and 10.8% for multicore workloads. It yields a geometric mean speedup of 5.1% for single-thread applications and 7.6% for multicore workloads. Due to the reduced number of writeback requests to main memory, the technique reduces the energy consumption by 8.1% for single-thread applications and 7.6% for multicore workloads.",
"title": ""
}
] |
scidocsrr
|
b5c668555a40fb7c6bc55f058b329202
|
Translingual Mining from Text Data
|
[
{
"docid": "7fa92e07f76bcefc639ae807147b8d7b",
"text": "We present a novel method for discovering parallel sentences in comparable, non-parallel corpora. We train a maximum entropy classifier that, given a pair of sentences, can reliably determine whether or not they are translations of each other. Using this approach, we extract parallel data from large Chinese, Arabic, and English non-parallel newspaper corpora. We evaluate the quality of the extracted data by showing that it improves the performance of a state-of-the-art statistical machine translation system. We also show that a good-quality MT system can be built from scratch by starting with a very small parallel corpus (100,000 words) and exploiting a large non-parallel corpus. Thus, our method can be applied with great benefit to language pairs for which only scarce resources are available.",
"title": ""
},
{
"docid": "724388aac829af9671a90793b1b31197",
"text": "We present a statistical phrase-based translation model that useshierarchical phrases — phrases that contain subphrases. The model is formally a synchronous context-free grammar but is learned from a bitext without any syntactic information. Thus it can be seen as a shift to the formal machinery of syntaxbased translation systems without any linguistic commitment. In our experiments using BLEU as a metric, the hierarchical phrasebased model achieves a relative improvement of 7.5% over Pharaoh, a state-of-the-art phrase-based system.",
"title": ""
}
] |
[
{
"docid": "44c9de5fbaac78125277a9995890b43c",
"text": "In the real world, speech is usually distorted by both reverberation and background noise. In such conditions, speech intelligibility is degraded substantially, especially for hearing-impaired (HI) listeners. As a consequence, it is essential to enhance speech in the noisy and reverberant environment. Recently, deep neural networks have been introduced to learn a spectral mapping to enhance corrupted speech, and shown significant improvements in objective metrics and automatic speech recognition score. However, listening tests have not yet shown any speech intelligibility benefit. In this paper, we propose to enhance the noisy and reverberant speech by learning a mapping to reverberant target speech rather than anechoic target speech. A preliminary listening test was conducted, and the results show that the proposed algorithm is able to improve speech intelligibility of HI listeners in some conditions. Moreover, we develop a masking-based method for denoising and compare it with the spectral mapping method. Evaluation results show that the masking-based method outperforms the mapping-based method.",
"title": ""
},
{
"docid": "124fa48e1e842f2068a8fb55a2b8bb8e",
"text": "We present an augmented reality application for mechanics education. It utilizes a recent physics engine developed for the PC gaming market to simulate physical experiments in the domain of mechanics in real time. Students are enabled to actively build own experiments and study them in a three-dimensional virtual world. A variety of tools are provided to analyze forces, mass, paths and other properties of objects before, during and after experiments. Innovative teaching content is presented that exploits the strengths of our immersive virtual environment. PhysicsPlayground serves as an example of how current technologies can be combined to deliver a new quality in physics education.",
"title": ""
},
{
"docid": "f1dd866b1cdd79716f2bbc969c77132a",
"text": "Fiber optic sensor technology offers the possibility of sensing different parameters like strain, temperature, pressure in harsh environment and remote locations. these kinds of sensors modulates some features of the light wave in an optical fiber such an intensity and phase or use optical fiber as a medium for transmitting the measurement information. The advantages of fiber optic sensors in contrast to conventional electrical ones make them popular in different applications and now a day they consider as a key component in improving industrial processes, quality control systems, medical diagnostics, and preventing and controlling general process abnormalities. This paper is an introduction to fiber optic sensor technology and some of the applications that make this branch of optic technology, which is still in its early infancy, an interesting field. Keywords—Fiber optic sensors, distributed sensors, sensor application, crack sensor.",
"title": ""
},
{
"docid": "47b4b22cee9d5693c16be296afe61982",
"text": "In this work we introduce a fully end-to-end approach for action detection in videos that learns to directly predict the temporal bounds of actions. Our intuition is that the process of detecting actions is naturally one of observation and refinement: observing moments in video, and refining hypotheses about when an action is occurring. Based on this insight, we formulate our model as a recurrent neural network-based agent that interacts with a video over time. The agent observes video frames and decides both where to look next and when to emit a prediction. Since backpropagation is not adequate in this non-differentiable setting, we use REINFORCE to learn the agent's decision policy. Our model achieves state-of-the-art results on the THUMOS'14 and ActivityNet datasets while observing only a fraction (2% or less) of the video frames.",
"title": ""
},
{
"docid": "6c4a7a6d21c85f3f2f392fbb1621cc51",
"text": "The International Academy of Education (IAE) is a not-for-profit scientific association that promotes educational research, and its dissemination and implementation. Founded in 1986, the Academy is dedicated to strengthening the contributions of research, solving critical educational problems throughout the world, and providing better communication among policy makers, researchers, and practitioners. The general aim of the IAE is to foster scholarly excellence in all fields of education. Towards this end, the Academy provides timely syntheses of research-based evidence of international importance. The Academy also provides critiques of research and of its evidentiary basis and its application to policy. This booklet about teacher professional learning and development has been prepared for inclusion in the Educational Practices Series developed by the International Academy of Education and distributed by the International Bureau of Education and the Academy. As part of its mission, the Academy provides timely syntheses of research on educational topics of international importance. This is the eighteenth in a series of booklets on educational practices that generally improve learning. This particular booklet is based on a synthesis of research evidence produced for the New Zealand Ministry of Education's Iterative Best Evidence Synthesis (BES) Programme, which is designed to be a catalyst for systemic improvement and sustainable development in education. This synthesis, and others in the series, are available electronically at www.educationcounts.govt.nz/themes/BES. All BESs are written using a collaborative approach that involves the writers, teacher unions, principal groups, teacher educators, academics, researchers, policy advisers, and other interested parties. To ensure its rigour and usefulness, each BES follows national guidelines developed by the Ministry of Education. Professor Helen Timperley was lead writer for the Teacher Professional Learning and Development: Best Evidence Synthesis Iteration [BES], assisted by teacher educators Aaron Wilson and Heather Barrar and research assistant Irene Fung, all of the University of Auckland. The BES is an analysis of 97 studies of professional development that led to improved outcomes for the students of the participating teachers. Most of these studies came from the United States, New Zealand, the Netherlands, the United Kingdom, Canada, and Israel. Dr Lorna Earl provided formative quality assurance for the synthesis; Professor John Hattie and Dr Gavin Brown oversaw the analysis of effect sizes. Helen Timperley is Professor of Education at the University of Auckland. The primary focus of her research is promotion of professional and organizational learning in schools for the purpose of improving student learning. She has …",
"title": ""
},
{
"docid": "e08914f566fde1dd91a5270d0e12d886",
"text": "Automation in agriculture system is very important these days. This paper proposes an automated system for irrigating the fields. ESP-8266 WIFI module chip is used to connect the system to the internet. Various types of sensors are used to check the content of moisture in the soil, and the water is supplied to the soil through the motor pump. IOT is used to inform the farmers of the supply of water to the soil through an android application. Every time water is given to the soil, the farmer will get to know about that.",
"title": ""
},
{
"docid": "79de6591c4d7bc26d2f2eea2f2b19756",
"text": "This paper presents a MOOC-ready online FPGA laboratory platform which targets computer system experiments. Goal of design is to provide user with highly approximate experience and results as offline experiments. Rich functions are implemented by utilizing SoC FPGA as the controller of lab board. The design details and effects are discussed in this paper.",
"title": ""
},
{
"docid": "c4f9c924963cadc658ad9c97560ea252",
"text": "A novel broadband circularly polarized (CP) antenna is proposed. The operating principle of this CP antenna is different from those of conventional CP antennas. An off-center-fed dipole is introduced to achieve the 90° phase difference required for circular polarization. The new CP antenna consists of two off-center-fed dipoles. Combining such two new CP antennas leads to a bandwidth enhancement for circular polarization. A T-shaped microstrip probe is used to excite the broadband CP antenna, featuring a simple planar configuration. It is shown that the new broadband CP antenna achieves an axial ratio (AR) bandwidth of 55% (1.69-3.0 GHz) for AR <; 3 dB, an impedance bandwidth of 60% (1.7-3.14 GHz) for return loss (RL) > 15 dB, and an antenna gain of 6-9 dBi. The new mechanism for circular polarization is described and an experimental verification is presented.",
"title": ""
},
{
"docid": "408ef85850165cb8ffa97811cb5dc957",
"text": "Inspired by the recent development of deep network-based methods in semantic image segmentation, we introduce an end-to-end trainable model for face mask extraction in video sequence. Comparing to landmark-based sparse face shape representation, our method can produce the segmentation masks of individual facial components, which can better reflect their detailed shape variations. By integrating convolutional LSTM (ConvLSTM) algorithm with fully convolutional networks (FCN), our new ConvLSTM-FCN model works on a per-sequence basis and takes advantage of the temporal correlation in video clips. In addition, we also propose a novel loss function, called segmentation loss, to directly optimise the intersection over union (IoU) performances. In practice, to further increase segmentation accuracy, one primary model and two additional models were trained to focus on the face, eyes, and mouth regions, respectively. Our experiment shows the proposed method has achieved a 16.99% relative improvement (from 54.50 to 63.76% mean IoU) over the baseline FCN model on the 300 Videos in the Wild (300VW) dataset.",
"title": ""
},
{
"docid": "f4e1ed913d3fd6e82a1651944d7a6e4c",
"text": "The availability of massive data about sports activities offers nowadays the opportunity to quantify the relation between performance and success. In this work, we analyze more than 6,000 games and 10 million events in the six major European leagues and investigate the relation between team performance and success in soccer competitions. We discover that a team’s success in the national tournament is significantly related to its typical performance. Moreover, we observe that while victory and defeats can be explained by the team’s performance during a game, draws are difficult to describe with a machine learning approach. We then perform a simulation of an entire season of the six leagues where the outcome of every game is replaced by a synthetic outcome (victory, defeat, or draw) based on a machine learning model trained on the previous seasons. We find that the final rankings in the simulated tournaments are close to the actual rankings in the real tournaments, suggesting that a complex systems’ view on soccer has the potential of revealing hidden patterns regarding the relation between performance and success.",
"title": ""
},
{
"docid": "0610ec403ed86dd1cf2f84073b59cc37",
"text": "SQL injection attacks pose a serious threat to the security of Web applications because they can give attackers unrestricted access to databases that contain sensitive information. In this paper, we propose a new, highly automated approach for protecting existing Web applications against SQL injection. Our approach has both conceptual and practical advantages over most existing techniques. From the conceptual standpoint, the approach is based on the novel idea of positive tainting and the concept of syntax-aware evaluation. From the practical standpoint, our technique is at the same time precise and efficient and has minimal deployment requirements. The paper also describes wasp, a tool that implements our technique, and a set of studies performed to evaluate our approach. In the studies, we used our tool to protect several Web applications and then subjected them to a large and varied set of attacks and legitimate accesses. The evaluation was a complete success: wasp successfully and efficiently stopped all of the attacks without generating any false positives.",
"title": ""
},
{
"docid": "c974e6b4031fde2b8e1de3ade33caef4",
"text": "A large literature has considered predictability of the mean or volatility of stock returns but little is known about whether the distribution of stock returns more generally is predictable. We explore this issue in a quantile regression framework and consider whether a range of economic state variables are helpful in predicting different quantiles of stock returns representing left tails, right tails or shoulders of the return distribution. Many variables are found to have an asymmetric effect on the return distribution, affecting lower, central and upper quantiles very differently. Out-of-sample forecasts suggest that upper quantiles of the return distribution can be predicted by means of economic state variables although the center of the return distribution is more difficult to predict. Economic gains from utilizing information in time-varying quantile forecasts are demonstrated through portfolio selection and option trading experiments. ∗We thank Torben Andersen, Tim Bollerslev, Peter Christoffersen as well as seminar participants at HEC, University of Montreal, University of Toronto, Goldman Sachs and CREATES, University of Aarhus, for helpful comments.",
"title": ""
},
{
"docid": "7332f08a9447fd321f7e40609cfabfc0",
"text": "Requirements Engineering und Management gewinnen in allen Bereichen der Systementwicklung stetig an Bedeutung. Zusammenhänge zwischen der Qualität der Anforderungserhebung und des Projekterfolges, wie von der Standish Group im jährlich erscheinenden Chaos Report [Standish 2004] untersucht, sind den meisten ein Begriff. Bei der Erhebung von Anforderungen treten immer wieder ähnliche Probleme auf. Dabei spielen unterschiedliche Faktoren und Gegebenheiten eine Rolle, die beachtet werden müssen. Es gibt mehrere Möglichkeiten, die Tücken der Analysephase zu meistern; eine Hilfe bietet der Einsatz der in diesem Artikel vorgestellten Methoden zur Anforderungserhebung. Auch wenn die Anforderungen korrekt und vollständig erhoben sind, ist es eine Kunst, diese zu verwalten. In der heutigen Zeit der verteilten Projekte ist es eine Herausforderung, die Dokumentation für jeden Beteiligten ständig verfügbar, nachvollziehbar und eindeutig zu erhalten. Requirements Management rüstet den Analytiker mit Methoden aus, um sich dieser Herausforderung zu stellen. Änderungen von Stakeholder-Wünschen an bestehenden Anforderungen stellen besondere Ansprüche an das Requirements Management, doch mithilfe eines Change-Management-Prozesses können auch diese bewältigt werden. Metriken und Traceability unterstützen bei der Aufwandsabschätzung für Änderungsanträge.",
"title": ""
},
{
"docid": "fcacf1a443252652dfec05f7061784e1",
"text": "Small point lights (e.g., LEDs) are used as indicators in a wide variety of devices today, from digital watches and toasters, to washing machines and desktop computers. Although exceedingly simple in their output - varying light intensity over time - their design space can be rich. Unfortunately, a survey of contemporary uses revealed that the vocabulary of lighting expression in popular use today is small, fairly unimaginative, and generally ambiguous in meaning. In this paper, we work through a structured design process that points the way towards a much richer set of expressive forms and more effective communication for this very simple medium. In this process, we make use of five different data gathering and evaluation components to leverage the knowledge, opinions and expertise of people outside our team. Our work starts by considering what information is typically conveyed in this medium. We go on to consider potential expressive forms -- how information might be conveyed. We iteratively refine and expand these sets, concluding with ideas gathered from a panel of designers. Our final step was to make use of thousands of human judgments, gathered in a crowd-sourced fashion (265 participants), to measure the suitability of different expressive forms for conveying different information content. This results in a set of recommended light behaviors that mobile devices, such as smartphones, could readily employ.",
"title": ""
},
{
"docid": "6af7d655d12fb276f5db634f4fc7cb74",
"text": "The letter presents a compact 3-bit 90 ° phase shifter for phased-array applications at the 60 GHz ISM band (IEEE 802.11ad standard). The designed phase shifter is based on reflective-type topology using the proposed reflective loads with binary-weighted digitally-controlled varactor arrays and the transformer-type directional coupler. The measured eight output states of the implemented phase shifter in 65 nm CMOS technology, exhibit phase-resolution of 11.25 ° with an RMS phase error of 5.2 °. The insertion loss is 5.69 ± 1.22 dB at 60 GHz and the return loss is better than 12 dB over 54-66 GHz. The chip demonstrates a compact size of only 0.034 mm2.",
"title": ""
},
{
"docid": "a1dba8928f1a3b919b44dbd2ca8c3fb8",
"text": "With the increasing adoption of cloud computing, a growing number of users outsource their datasets to cloud. To preserve privacy, the datasets are usually encrypted before outsourcing. However, the common practice of encryption makes the effective utilization of the data difficult. For example, it is difficult to search the given keywords in encrypted datasets. Many schemes are proposed to make encrypted data searchable based on keywords. However, keyword-based search schemes ignore the semantic representation information of users’ retrieval, and cannot completely meet with users search intention. Therefore, how to design a content-based search scheme and make semantic search more effective and context-aware is a difficult challenge. In this paper, we propose ECSED, a novel semantic search scheme based on the concept hierarchy and the semantic relationship between concepts in the encrypted datasets. ECSED uses two cloud servers. One is used to store the outsourced datasets and return the ranked results to data users. The other one is used to compute the similarity scores between the documents and the query and send the scores to the first server. To further improve the search efficiency, we utilize a tree-based index structure to organize all the document index vectors. We employ the multi-keyword ranked search over encrypted cloud data as our basic frame to propose two secure schemes. The experiment results based on the real world datasets show that the scheme is more efficient than previous schemes. We also prove that our schemes are secure under the known ciphertext model and the known background model.",
"title": ""
},
{
"docid": "92cecd8329343bc3a9b0e46e2185eb1c",
"text": "The spondylo and spondylometaphyseal dysplasias (SMDs) are characterized by vertebral changes and metaphyseal abnormalities of the tubular bones, which produce a phenotypic spectrum of disorders from the mild autosomal-dominant brachyolmia to SMD Kozlowski to autosomal-dominant metatropic dysplasia. Investigations have recently drawn on the similar radiographic features of those conditions to define a new family of skeletal dysplasias caused by mutations in the transient receptor potential cation channel vanilloid 4 (TRPV4). This review demonstrates the significance of radiography in the discovery of a new bone dysplasia family due to mutations in a single gene.",
"title": ""
},
{
"docid": "441f80a25e7a18760425be5af1ab981d",
"text": "This paper proposes efficient algorithms for group sparse optimization with mixed `2,1-regularization, which arises from the reconstruction of group sparse signals in compressive sensing, and the group Lasso problem in statistics and machine learning. It is known that encoding the group information in addition to sparsity will lead to better signal recovery/feature selection. The `2,1-regularization promotes group sparsity, but the resulting problem, due to the mixed-norm structure and possible grouping irregularity, is considered more difficult to solve than the conventional `1-regularized problem. Our approach is based on a variable splitting strategy and the classic alternating direction method (ADM). Two algorithms are presented, one derived from the primal and the other from the dual of the `2,1-regularized problem. The convergence of the proposed algorithms is guaranteed by the existing ADM theory. General group configurations such as overlapping groups and incomplete covers can be easily handled by our approach. Computational results show that on random problems the proposed ADM algorithms exhibit good efficiency, and strong stability and robustness.",
"title": ""
},
{
"docid": "5065387618c6eb389ef0efb503172c5a",
"text": "We present a new algorithm for the contextual bandit learning problem, where the learner repeatedly takes an action in response to the observed context, observing the reward only for that action. Our method assumes access to an oracle for solving cost-sensitive classification problems and achieves the statistically optimal regret guarantee with only Õ( √ T ) oracle calls across all T rounds. By doing so, we obtain the most practical contextual bandit learning algorithm amongst approaches that work for general policy classes. We further conduct a proof-of-concept experiment which demonstrates the excellent computational and prediction performance of (an online variant of) our algorithm relative to several baselines.",
"title": ""
},
{
"docid": "5e9a0d990a3b4fb075552346a11986c4",
"text": "The TinyTeRP is a centimeter-scale, modular wheeled robotic platform developed for the study of swarming or collective behavior. This paper presents the use of TinyTeRPs to implement collective recruitment and rendezvous to a fixed location using several RSSI-based gradient ascent algorithms. We also present a redesign of the wheelbased module with tank treads and a wider base, improving the robot’s mobility over uneven terrain and overall robustness. Lastly, we present improvements to the open source C libraries that allow users to easily implement high-level functions and closed-loop control on the TinyTeRP.",
"title": ""
}
] |
scidocsrr
|
636f172b02e5af09431bf0c148ce9de8
|
Swarm intelligence based routing protocol for wireless sensor networks: Survey and future directions
|
[
{
"docid": "510b9b709d8bd40834ed0409d1e83d4d",
"text": "In this paper we describe AntHocNet, an algorithm for routing in mobile ad hoc networks. It is a hybrid algorithm, which combines reactive path setup with proactive path probing, maintenance and improvement. The algorithm is based on the Nature-inspired Ant Colony Optimization framework. Paths are learned by guided Monte Carlo sampling using ant-like agents communicating in a stigmergic way. In an extensive set of simulation experiments, we compare AntHocNet with AODV, a reference algorithm in the field. We show that our algorithm can outperform AODV on different evaluation criteria. AntHocNet’s performance advantage is visible over a broad range of possible network scenarios, and increases for larger, sparser and more mobile networks.",
"title": ""
},
{
"docid": "376c9736ccd7823441fd62c46eee0242",
"text": "Description: Infrastructure for Homeland Security Environments Wireless Sensor Networks helps readers discover the emerging field of low-cost standards-based sensors that promise a high order of spatial and temporal resolution and accuracy in an ever-increasing universe of applications. It shares the latest advances in science and engineering paving the way towards a large plethora of new applications in such areas as infrastructure protection and security, healthcare, energy, food safety, RFID, ZigBee, and processing. Unlike other books on wireless sensor networks that focus on limited topics in the field, this book is a broad introduction that covers all the major technology, standards, and application topics. It contains everything readers need to know to enter this burgeoning field, including current applications and promising research and development; communication and networking protocols; middleware architecture for wireless sensor networks; and security and management. The straightforward and engaging writing style of this book makes even complex concepts and processes easy to follow and understand. In addition, it offers several features that help readers grasp the material and then apply their knowledge in designing their own wireless sensor network systems: Examples illustrate how concepts are applied to the development and application of wireless sensor networks Detailed case studies set forth all the steps of design and implementation needed to solve real-world problems Chapter conclusions that serve as an excellent review by stressing the chapter's key concepts References in each chapter guide readers to in-depth discussions of individual topics This book is ideal for networking designers and engineers who want to fully exploit this new technology and for government employees who are concerned about homeland security. With its examples, it is appropriate for use as a coursebook for upper-level undergraduates and graduate students.",
"title": ""
}
] |
[
{
"docid": "7ca908e7896afc49a0641218e1c4febf",
"text": "Timely and accurate classification and interpretation of high-resolution images are very important for urban planning and disaster rescue. However, as spatial resolution gets finer, it is increasingly difficult to recognize complex patterns in high-resolution remote sensing images. Deep learning offers an efficient strategy to fill the gap between complex image patterns and their semantic labels. However, due to the hierarchical abstract nature of deep learning methods, it is difficult to capture the precise outline of different objects at the pixel level. To further reduce this problem, we propose an object-based deep learning method to accurately classify the high-resolution imagery without intensive human involvement. In this study, high-resolution images were used to accurately classify three different urban scenes: Beijing (China), Pavia (Italy), and Vaihingen (Germany). The proposed method is built on a combination of a deep feature learning strategy and an object-based classification for the interpretation of high-resolution images. Specifically, high-level feature representations extracted through the convolutional neural networks framework have been systematically investigated over five different layer configurations. Furthermore, to improve the classification accuracy, an object-based classification method also has been integrated with the deep learning strategy for more efficient image classification. Experimental results indicate that with the combination of deep learning and object-based classification, it is possible to discriminate different building types in Beijing Scene, such as commercial buildings and residential buildings with classification accuracies above 90%.",
"title": ""
},
{
"docid": "5ed1a40b933e44f0a7f7240bbca24ab4",
"text": "We present new algorithms for reinforcement learning and prove that they have polynomial bounds on the resources required to achieve near-optimal return in general Markov decision processes. After observing that the number of actions required to approach the optimal return is lower bounded by the mixing time T of the optimal policy (in the undiscounted case) or by the horizon time T (in the discounted case), we then give algorithms requiring a number of actions and total computation time that are only polynomial in T and the number of states and actions, for both the undiscounted and discounted cases. An interesting aspect of our algorithms is their explicit handling of the Exploration-Exploitation trade-off.",
"title": ""
},
{
"docid": "90a1fc43ee44634bce3658463503994e",
"text": "Large-scale distributed training requires significant communication bandwidth for gradient exchange that limits the scalability of multi-node training, and requires expensive high-bandwidth network infrastructure. The situation gets even worse with distributed training on mobile devices (federated learning), which suffers from higher latency, lower throughput, and intermittent poor connections. In this paper, we find 99.9% of the gradient exchange in distributed SGD are redundant, and propose Deep Gradient Compression (DGC) to greatly reduce the communication bandwidth. To preserve accuracy during this compression, DGC employs four methods: momentum correction, local gradient clipping, momentum factor masking, and warm-up training. We have applied Deep Gradient Compression to image classification, speech recognition, and language modeling with multiple datasets including Cifar10, ImageNet, Penn Treebank, and Librispeech Corpus. On these scenarios, Deep Gradient Compression achieves a gradient compression ratio from 270× to 600× without losing accuracy, cutting the gradient size of ResNet-50 from 97MB to 0.35MB, and for DeepSpeech from 488MB to 0.74MB. Deep gradient compression enables large-scale distributed training on inexpensive commodity 1Gbps Ethernet and facilitates distributed training on mobile.",
"title": ""
},
{
"docid": "61411c55041f40c3b0c63f3ebd4c621f",
"text": "This paper presents an application of neural network approach for the prediction of peak ground acceleration (PGA) using the strong motion data from Turkey, as a soft computing technique to remove uncertainties in attenuation equations. A training algorithm based on the Fletcher–Reeves conjugate gradient back-propagation was developed and employed for three sample sets of strong ground motion. The input variables in the constructed artificial neural network (ANN) model were the magnitude, the source-to-site distance and the site conditions, and the output was the PGA. The generalization capability of ANN algorithms was tested with the same training data. To demonstrate the authenticity of this approach, the network predictions were compared with the ones from regressions for the corresponding attenuation equations. The results indicated that the fitting between the predicted PGA values by the networks and the observed ones yielded high correlation coefficients (R). In addition, comparisons of the correlations by the ANN and the regression method showed that the ANN approach performed better than the regression. Even though the developed ANN models suffered from optimal configuration about the generalization capability, they can be conservatively used to well understand the influence of input parameters for the PGA predictions. © 2007 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "34f6603912c9775fc48329e596467107",
"text": "Turbo generator with evaporative cooling stator and air cooling rotor possesses many excellent qualities for mid unit. The stator bars and core are immerged in evaporative coolant, which could be cooled fully. The rotor bars are cooled by air inner cooling mode, and the cooling effect compared with hydrogen and water cooling mode is limited. So an effective ventilation system has to been employed to insure the reliability of rotor. This paper presents the comparisons of stator temperature distribution between evaporative cooling mode and air cooling mode, and the designing of rotor ventilation system combined with evaporative cooling stator.",
"title": ""
},
{
"docid": "c4be39977487cdebc8127650c8eda433",
"text": "Unfavorable wake and separated flow from the hull might cause a dramatic decay of the propeller performance in single-screw propelled vessels such as tankers, bulk carriers and containers. For these types of vessels, special attention has to be paid to the design of the stern region, the occurrence of a good flow towards the propeller and rudder being necessary to avoid separation and unsteady loads on the propeller blades and, thus, to minimize fuel consumption and the risk for cavitation erosion and vibrations. The present work deals with the analysis of the propeller inflow in a single-screw chemical tanker vessel affected by massive flow separation in the stern region. Detailed flow measurements by Laser Doppler Velocimetry (LDV) were performed in the propeller region at model scale, in the Large Circulating Water Channel of CNR-INSEAN. Tests were undertaken with and without propeller in order to investigate its effect on the inflow characteristics and the separation mechanisms. In this regard, the study concerned also a phase locked analysis of the propeller perturbation at different distances upstream of the propulsor. The study shows the effectiveness of the 3 order statistical moment (i.e. skewness) for describing the topology of the wake and accurately identifying the portion affected by the detached flow.",
"title": ""
},
{
"docid": "1909d62daf3df32fad94d6a205cc0a8c",
"text": "Scalability properties of deep neural networks raise key re search questions, particularly as the problems considered become larger and more challenging. This paper expands on the idea of conditional computation introd uce in [2], where the nodes of a deep network are augmented by a set of gating uni ts that determine when a node should be calculated. By factorizing the wei ght matrix into a low-rank approximation, an estimation of the sign of the pr -nonlinearity activation can be efficiently obtained. For networks using rec tifi d-linear hidden units, this implies that the computation of a hidden unit wit h an estimated negative pre-nonlinearity can be omitted altogether, as its val ue will become zero when nonlinearity is applied. For sparse neural networks, this c an result in considerable speed gains. Experimental results using the MNIST and SVHN d ata sets with a fully-connected deep neural network demonstrate the perf ormance robustness of the proposed scheme with respect to the error introduced b y the conditional computation process.",
"title": ""
},
{
"docid": "94e2bfa218791199a59037f9ea882487",
"text": "As a developing discipline, research results in the field of human computer interaction (HCI) tends to be \"soft\". Many workers in the field have argued that the advancement of HCI lies in \"hardening\" the field with quantitative and robust models. In reality, few theoretical, quantitative tools are available in user interface research and development. A rare exception to this is Fitts' law. Extending information theory to human perceptual-motor system, Paul Fitts (1954) found a logarithmic relationship that models speed accuracy tradeoffs in aimed movements. A great number of studies have verified and / or applied Fitts' law to HCI problems, such as pointing performance on a screen, making Fitts' law one of the most intensively studied topic in the HCI literature.",
"title": ""
},
{
"docid": "f64e65df9db7219336eafb20d38bf8cf",
"text": "With predictions that this nursing shortage will be more severe and have a longer duration than has been previously experienced, traditional strategies implemented by employers will have limited success. The aging nursing workforce, low unemployment, and the global nature of this shortage compound the usual factors that contribute to nursing shortages. For sustained change and assurance of an adequate supply of nurses, solutions must be developed in several areas: education, healthcare deliver systems, policy and regulations, and image. This shortage is not solely nursing's issue and requires a collaborative effort among nursing leaders in practice and education, health care executives, government, and the media. This paper poses several ideas of solutions, some already underway in the United States, as a catalyst for readers to initiate local programs.",
"title": ""
},
{
"docid": "d0cdbd1137e9dca85d61b3d90789d030",
"text": "In this paper, we present a methodology for recognizing seatedpostures using data from pressure sensors installed on a chair.Information about seated postures could be used to help avoidadverse effects of sitting for long periods of time or to predictseated activities for a human-computer interface. Our system designdisplays accurate near-real-time classification performance on datafrom subjects on which the posture recognition system was nottrained by using a set of carefully designed, subject-invariantsignal features. By using a near-optimal sensor placement strategy,we keep the number of required sensors low thereby reducing costand computational complexity. We evaluated the performance of ourtechnology using a series of empirical methods including (1)cross-validation (classification accuracy of 87% for ten posturesusing data from 31 sensors), and (2) a physical deployment of oursystem (78% classification accuracy using data from 19sensors).",
"title": ""
},
{
"docid": "0cca7892dc3a741deca22f7699e1ed7e",
"text": "Document polarity detection is a part of sentiment analysis where a document is classified as a positive polarity document or a negative polarity document. The applications of polarity detection are content filtering and opinion mining. Content filtering of negative polarity documents is an important application to protect children from negativity and can be used in security filters of organizations. In this paper, dictionary based method using polarity lexicon and machine learning algorithms are applied for polarity detection of Kannada language documents. In dictionary method, a manually created polarity lexicon of 5043 Kannada words is used and compared with machine learning algorithms like Naïve Bayes and Maximum Entropy. It is observed that performance of Naïve Bayes and Maximum Entropy is better than dictionary based method with accuracy of 0.90, 0.93 and 0.78 respectively.",
"title": ""
},
{
"docid": "a448b5e4e4bd017049226f06ce32fa9d",
"text": "We present an approach to accelerating a wide variety of image processing operators. Our approach uses a fully-convolutional network that is trained on input-output pairs that demonstrate the operator’s action. After training, the original operator need not be run at all. The trained network operates at full resolution and runs in constant time. We investigate the effect of network architecture on approximation accuracy, runtime, and memory footprint, and identify a specific architecture that balances these considerations. We evaluate the presented approach on ten advanced image processing operators, including multiple variational models, multiscale tone and detail manipulation, photographic style transfer, nonlocal dehazing, and nonphoto- realistic stylization. All operators are approximated by the same model. Experiments demonstrate that the presented approach is significantly more accurate than prior approximation schemes. It increases approximation accuracy as measured by PSNR across the evaluated operators by 8.5 dB on the MIT-Adobe dataset (from 27.5 to 36 dB) and reduces DSSIM by a multiplicative factor of 3 com- pared to the most accurate prior approximation scheme, while being the fastest. We show that our models general- ize across datasets and across resolutions, and investigate a number of extensions of the presented approach.",
"title": ""
},
{
"docid": "6936b03672c64798ca4be118809cc325",
"text": "We present a deep learning framework for accurate visual correspondences and demonstrate its effectiveness for both geometric and semantic matching, spanning across rigid motions to intra-class shape or appearance variations. In contrast to previous CNN-based approaches that optimize a surrogate patch similarity objective, we use deep metric learning to directly learn a feature space that preserves either geometric or semantic similarity. Our fully convolutional architecture, along with a novel correspondence contrastive loss allows faster training by effective reuse of computations, accurate gradient computation through the use of thousands of examples per image pair and faster testing with O(n) feedforward passes for n keypoints, instead of O(n) for typical patch similarity methods. We propose a convolutional spatial transformer to mimic patch normalization in traditional features like SIFT, which is shown to dramatically boost accuracy for semantic correspondences across intra-class shape variations. Extensive experiments on KITTI, PASCAL and CUB-2011 datasets demonstrate the significant advantages of our features over prior works that use either hand-constructed or learned features.",
"title": ""
},
{
"docid": "0131e5a748fb70627746068d33553eca",
"text": "Fast changing, increasingly complex, and diverse computing platforms pose central problems in scientific computing: How to achieve, with reasonable effort, portable optimal performance? We present SPIRAL, which considers this problem for the performance-critical domain of linear digital signal processing (DSP) transforms. For a specified transform, SPIRAL automatically generates high-performance code that is tuned to the given platform. SPIRAL formulates the tuning as an optimization problem and exploits the domain-specific mathematical structure of transform algorithms to implement a feedback-driven optimizer. Similar to a human expert, for a specified transform, SPIRAL \"intelligently\" generates and explores algorithmic and implementation choices to find the best match to the computer's microarchitecture. The \"intelligence\" is provided by search and learning techniques that exploit the structure of the algorithm and implementation space to guide the exploration and optimization. SPIRAL generates high-performance code for a broad set of DSP transforms, including the discrete Fourier transform, other trigonometric transforms, filter transforms, and discrete wavelet transforms. Experimental results show that the code generated by SPIRAL competes with, and sometimes outperforms, the best available human tuned transform library code.",
"title": ""
},
{
"docid": "0d2e5667545ebc9380416f9f625dd836",
"text": "New developments in assistive technology are likely to make an important contribution to the care of elderly people in institutions and at home. Video-monitoring, remote health monitoring, electronic sensors and equipment such as fall detectors, door monitors, bed alerts, pressure mats and smoke and heat alarms can improve older people's safety, security and ability to cope at home. Care at home is often preferable to patients and is usually less expensive for care providers than institutional alternatives.",
"title": ""
},
{
"docid": "e8f15d3689f1047cd05676ebd72cc0fc",
"text": "We argue that in fully-connected networks a phase transition delimits the overand under-parametrized regimes where fitting can or cannot be achieved. Under some general conditions, we show that this transition is sharp for the hinge loss. In the whole over-parametrized regime, poor minima of the loss are not encountered during training since the number of constraints to satisfy is too small to hamper minimization. Our findings support a link between this transition and the generalization properties of the network: as we increase the number of parameters of a given model, starting from an under-parametrized network, we observe that the generalization error displays three phases: (i) initial decay, (ii) increase until the transition point — where it displays a cusp — and (iii) slow decay toward a constant for the rest of the over-parametrized regime. Thereby we identify the region where the classical phenomenon of over-fitting takes place, and the region where the model keeps improving, in line with previous empirical observations for modern neural networks.",
"title": ""
},
{
"docid": "574259df6c01fd0c46160b3f8548e4e7",
"text": "Hashtag has emerged as a widely used concept of popular culture and campaigns, but its implications on people’s privacy have not been investigated so far. In this paper, we present the first systematic analysis of privacy issues induced by hashtags. We concentrate in particular on location, which is recognized as one of the key privacy concerns in the Internet era. By relying on a random forest model, we show that we can infer a user’s precise location from hashtags with accuracy of 70% to 76%, depending on the city. To remedy this situation, we introduce a system called Tagvisor that systematically suggests alternative hashtags if the user-selected ones constitute a threat to location privacy. Tagvisor realizes this by means of three conceptually different obfuscation techniques and a semantics-based metric for measuring the consequent utility loss. Our findings show that obfuscating as little as two hashtags already provides a near-optimal trade-off between privacy and utility in our dataset. This in particular renders Tagvisor highly time-efficient, and thus, practical in real-world settings.",
"title": ""
},
{
"docid": "1a5b28583eaf7cab8cc724966d700674",
"text": "Advertising (ad) revenue plays a vital role in supporting free websites. When the revenue dips or increases sharply, ad system operators must find and fix the rootcause if actionable, for example, by optimizing infrastructure performance. Such revenue debugging is analogous to diagnosis and root-cause analysis in the systems literature but is more general. Failure of infrastructure elements is only one potential cause; a host of other dimensions (e.g., advertiser, device type) can be sources of potential causes. Further, the problem is complicated by derived measures such as costs-per-click that are also tracked along with revenue. Our paper takes the first systematic look at revenue debugging. Using the concepts of explanatory power, succinctness, and surprise, we propose a new multidimensional root-cause algorithm for fundamental and derived measures of ad systems to identify the dimension mostly likely to blame. Further, we implement the attribution algorithm and a visualization interface in a tool called the Adtributor to help troubleshooters quickly identify potential causes. Based on several case studies on a very large ad system and extensive evaluation, we show that the Adtributor has an accuracy of over 95% and helps cut down troubleshooting time by an order of magnitude.",
"title": ""
},
{
"docid": "30279db171fffe6fac561541a5d175ca",
"text": "Deformable displays can provide two major benefits compared to rigid displays: Objects of different shapes and deformabilities, situated in our physical environment, can be equipped with deformable displays, and users can benefit from their pre-existing knowledge about the interaction with physical objects when interacting with deformable displays. In this article we present InformationSense, a large, highly deformable cloth display. The article contributes to two research areas in the context of deformable displays: It presents an approach for the tracking of large, highly deformable surfaces, and it presents one of the first UX analyses of cloth displays that will help with the design of future interaction techniques for this kind of display. The comparison of InformationSense with a rigid display interface unveiled the trade-off that while users are able to interact with InformationSense more naturally and significantly preferred InformationSense in terms of joy of use, they preferred the rigid display interfaces in terms of efficiency. This suggests that deformable displays are already suitable if high hedonic qualities are important but need to be enhanced with additional digital power if high pragmatic qualities are required.",
"title": ""
},
{
"docid": "18e5b72779f6860e2a0f2ec7251b0718",
"text": "This paper presents a novel dielectric resonator filter exploiting dual TM11 degenerate modes. The dielectric rod resonators are short circuited on the top and bottom surfaces to the metallic cavity. The dual-mode cavities can be conveniently arranged in many practical coupling configurations. Through-holes in height direction are made in each of the dielectric rods for the frequency tuning and coupling screws. All the coupling elements, including inter-cavity coupling elements, are accessible from the top of the filter cavity. This planar coupling configuration is very attractive for composing a diplexer or a parallel multifilter assembly using the proposed filter structure. To demonstrate the new filter technology, two eight-pole filters with cross-couplings for UMTS band are prototyped and tested. It has been experimentally shown that as compared to a coaxial combline filter with a similar unloaded Q, the proposed dual-mode filter can save filter volume by more than 50%. Moreover, a simple method that can effectively suppress the lower band spurious mode is also presented.",
"title": ""
}
] |
scidocsrr
|
a9cd6bd83dd3a811515aa81115a3b08d
|
Haptic feature extraction from a biomimetic tactile sensor: Force, contact location and curvature
|
[
{
"docid": "74b2697e6faf8339ec11b29092758272",
"text": "A tactile sense is key to advanced robotic grasping and manipulation. By touching an object it is possible to measure contact properties such as contact forces, torques, and contact position. From these, we can estimate object properties such as geometry, stiffness, and surface condition. This information can then be used to control grasping or manipulation, to detect slip, and also to create or improve object models. This paper presents an overview of tactile sensing in intelligent robotic manipulation. The history, the common issues, and applications are reviewed. Sensor performance is briefly discussed and compared to the human tactile sense. Advantages and disadvantages of the most common sensor approaches are discussed. Some examples are given of sensors widely available today. Eventually the state of the art in applying tactile sensing experimentally is presented.",
"title": ""
}
] |
[
{
"docid": "9bbc279974aaa899d12fee26948ce029",
"text": "Data-flow testing (DFT) is a family of testing strategies designed to verify the interactions between each program variable’s definition and its uses. Such a test objective of interest is referred to as a def-use pair. DFT selects test data with respect to various test adequacy criteria (i.e., data-flow coverage criteria) to exercise each pair. The original conception of DFT was introduced by Herman in 1976. Since then, a number of studies have been conducted, both theoretically and empirically, to analyze DFT’s complexity and effectiveness. In the past four decades, DFT has been continuously concerned, and various approaches from different aspects are proposed to pursue automatic and efficient data-flow testing. This survey presents a detailed overview of data-flow testing, including challenges and approaches in enforcing and automating it: (1) it introduces the data-flow analysis techniques that are used to identify def-use pairs; (2) it classifies and discusses techniques for data-flow-based test data generation, such as search-based testing, random testing, collateral-coverage-based testing, symbolic-execution-based testing, and model-checking-based testing; (3) it discusses techniques for tracking data-flow coverage; (4) it presents several DFT applications, including software fault localization, web security testing, and specification consistency checking; and (5) it summarizes recent advances and discusses future research directions toward more practical data-flow testing.",
"title": ""
},
{
"docid": "7cb0aabbead294c6471c9810a538b299",
"text": "Due to colossal financial losses in recent years, phishing has drawn attention of most of the individuals and organizations in the world of internet. Need for protection against phishing activities through fraudulent emails has increased remarkably. In this paper we propose a hybrid model to classify phishing emails using machine learning algorithms with the aspiration of developing an ensemble model for email classification with improved accuracy. We have used the content of emails and extracted 47 features from it. The processed emails are provided as input to various machine learning classifiers. Going through experiments, it is observed and inferred that Bayesian net classification model when ensemble with CART gives highest test accuracy of 99.32%.",
"title": ""
},
{
"docid": "9244b687b0031e895cea1fcf5a0b11da",
"text": "Bacopa monnieri (L.) Wettst., a traditional Indian medicinal plant with high commercial potential, is used as a potent nervine tonic. A slow growth protocol was developed for medium-term conservation using mineral oil (MO) overlay. Nodal segments of B. monnieri (two genotypes; IC249250, IC468878) were conserved using MO for 24 months. Single node explants were implanted on MS medium supplemented with 0.2 mg l−1 BA and were covered with MO. Subculture duration could be significantly enhanced from 6 to 24 months, on the above medium. Normal plants regenerated from conserved cultures were successfully established in soil. On the basis of 20 random amplified polymorphic DNA and 5 inter-simple sequence repeat primers analyses and bacoside A content using HPLC, no significant reproducible variation was observed between the controls and in vitro-conserved plants. The results demonstrate the feasibility of using MO for medium-term conservation of B. monnieri germplasm without any adverse genetical and biochemical effects.",
"title": ""
},
{
"docid": "1904d8b3c45bc24acdc0294d84d66c79",
"text": "The propagation of unreliable information is on the rise in many places around the world. This expansion is facilitated by the rapid spread of information and anonymity granted by the Internet. The spread of unreliable information is a well-studied issue and it is associated with negative social impacts. In a previous work, we have identified significant differences in the structure of news articles from reliable and unreliable sources in the US media. Our goal in this work was to explore such differences in the Brazilian media. We found significant features in two data sets: one with Brazilian news in Portuguese and another one with US news in English. Our results show that features related to the writing style were prominent in both data sets and, despite the language difference, some features have a universal behavior, being significant to both US and Brazilian news articles. Finally, we combined both data sets and used the universal features to build a machine learning classifier to predict the source type of a news article as reliable or unreliable.",
"title": ""
},
{
"docid": "2efe5c0228e6325cdbb8e0922c19924f",
"text": "Patient interactions with health care providers result in entries to electronic health records (EHRs). EHRs were built for clinical and billing purposes but contain many data points about an individual. Mining these records provides opportunities to extract electronic phenotypes that can be paired with genetic data to identify genes underlying common human diseases. This task remains challenging: high quality phenotyping is costly and requires physician review; many fields in the records are sparsely filled; and our definitions of diseases are continuing to improve over time. Here we develop and evaluate a semi-supervised learning method for EHR phenotype extraction using denoising autoencoders for phenotype stratification. By combining denoising autoencoders with random forests we find classification improvements across simulation models, particularly in cases where only a small number of patients have high quality phenotype. This situation is commonly encountered in research with EHRs. Denoising autoencoders perform dimensionality reduction allowing visualization and clustering for the discovery of new subtypes of disease. This method represents a promising approach to clarify disease subtypes and improve genotype-phenotype association studies that leverage EHRs.",
"title": ""
},
{
"docid": "03a55678d5f25f710274323abf71f48c",
"text": "Ontologies are an explicit specification of a conceptualization, that is understood to be an abstract and simplified version of the world to be represented. In recent years, ontologies have been used in Ubiquitous Computing, especially for the development of context-aware applications. In this paper, we offer a taxonomy for classifying ontologies used in Ubiquitous Computing, in which two main categories are distinguished: Domain ontologies, created to represent and communicate agreed knowledge within some sub-domain of Ubiquitous Computing; and Ontologies as software artifacts, when ontologies play the role of an additional type of artifact in ubiquitous computing applications. The latter category is subdivided according with the moment in that ontologies are used: at development time or at run time. Also, we analyze and classify (based on this taxonomy) some recently published works.",
"title": ""
},
{
"docid": "fa404bb1a60c219933f1666552771ada",
"text": "A novel low voltage self-biased high swing cascode current mirror (SHCCM) employing bulk-driven NMOS transistors is proposed in this paper. The comparison with the conventional circuit reveals that the proposed bulk-driven circuit operates at lower voltages and provides enhanced bandwidth with improved output resistance. The proposed circuit is further modified by replacing the passive resistance by active MOS realization. Small signal analysis of the proposed and conventional SHCCM are carried out to show the improvement achieved through the proposed circuit. The circuits are simulated in standard SPICE 0.25 mm CMOS technology and simulated results are compared with the theoretically obtained results. To ensure robustness of the proposed SHCCM, simulation results of component tolerance and process variation have also been included. & 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "554a3f5f19503a333d3788cf46ffcef2",
"text": "Hospital overcrowding has been a problem in Thai public healthcare system. The main cause of this problem is the limited available resources, including a limited number of doctors, nurses, and limited capacity and availability of medical devices. There have been attempts to alleviate the problem through various strategies. In this paper, a low-cost system was developed and tested in a public hospital with limited budget. The system utilized QR code and smartphone application to capture as-is hospital processes and the time spent on individual activities. With the available activity data, two algorithms were developed to identify two quantities that are valuable to conduct process improvement: the most congested time and bottleneck activities. The system was implemented in a public hospital and results were presented.",
"title": ""
},
{
"docid": "5b3fd9394bc6dd84f48a23def50f8ace",
"text": "This study presents the first behavioral genetic investigation of the relationships between trait emotional intelligence (trait EI or trait emotional self-efficacy) and the Dark Triad traits of narcissism, Machiavellianism, and psychopathy. In line with trait EI theory, the construct correlated positively with narcissism, but negatively with the other two traits. Generally, the correlations were consistent across the 4 factors and 15 facets of the construct. Cholesky decomposition analysis revealed that the phenotypic associations were primarily due to correlated genetic factors and secondarily due to correlated nonshared environmental factors, with shared environmental factors being nonsignificant in all cases. Results are discussed from the perspective of trait EI theory with particular reference to the issue of adaptive value.",
"title": ""
},
{
"docid": "2c69eb4be7bc2bed32cfbbbe3bc41a5d",
"text": "The Sapienza University Networking framework for underwater Simulation Emulation and real-life Testing (SUNSET) is a toolkit for the implementation and testing of protocols for underwater sensor networks. SUNSET enables a radical new way of performing experimental research on underwater communications. It allows protocol designers and implementors to easily realize their solutions and to evaluate their performance through simulation, in-lab emulation and trials at sea in a direct and transparent way, and independently of specific underwater hardware platforms. SUNSET provides a complete toolchain of predeployment and deployment time tools able to identify risks, malfunctioning and under-performing solutions before incurring the expense of going to sea. Novel underwater systems can therefore be rapidly and easily investigated. Heterogeneous underwater communication technologies from different vendors can be used, allowing the evaluation of the impact of different combinations of hardware and software on the overall system performance. Using SUNSET, underwater devices can be reconfigured and controlled remotely in real time, using acoustic links. This allows the performance investigation of underwater systems under different settings and configurations and significantly reduces the cost and complexity of at-sea trials. This paper describes the architectural concept of SUNSET and presents some exemplary results of its use in the field. The SUNSET framework has been extensively validated during more than fifteen at-sea experimental campaigns in the past four years. Several of these have been conducted jointly with the NATO STO Centre for Maritime Research and Experimentation (CMRE) under a collaboration between the University of Rome and CMRE.",
"title": ""
},
{
"docid": "f99fe9c7aaf417a3893c264b2602a9f3",
"text": "A male infant was brought to hospital aged eight weeks. He was born at full term via normal vaginal home delivery without any complications. The delivery was conducted by a traditional birth attendant and Apgar scores at birth were unrecorded. One week after the birth, the parents noticed an increase in size of the baby’s breasts. In accordance with cultural practice, they massaged the breasts in order to express milk, hoping that by doing so the size of the breasts would return to normal. However, the size of the breasts increased. They also reported that milk was being discharged spontaneously through the nipples. There was no history of drug intake neither by the mother nor the baby. The infant appeared clinically well and showed no signs of irritability. On examination, bilateral breast enlargement was observed of approximate diameter 6 cm. No tenderness, purulent discharge or any sign of inflammation were observed (Figure 1). Systemic and genital examination were unremarkable. Routine blood investigations were normal. Firm advice was given not to massage the breasts of the baby.",
"title": ""
},
{
"docid": "fa71a2d44ea95cf51a9e2d48f1fdcf29",
"text": "A recent study showed that people evaluate products more positively when they are physically associated with art images than similar non-art images. Neuroimaging studies of visual art have investigated artistic style and esthetic preference but not brain responses attributable specifically to the artistic status of images. Here we tested the hypothesis that the artistic status of images engages reward circuitry, using event-related functional magnetic resonance imaging (fMRI) during viewing of art and non-art images matched for content. Subjects made animacy judgments in response to each image. Relative to non-art images, art images activated, on both subject- and item-wise analyses, reward-related regions: the ventral striatum, hypothalamus and orbitofrontal cortex. Neither response times nor ratings of familiarity or esthetic preference for art images correlated significantly with activity that was selective for art images, suggesting that these variables were not responsible for the art-selective activations. Investigation of effective connectivity, using time-varying, wavelet-based, correlation-purged Granger causality analyses, further showed that the ventral striatum was driven by visual cortical regions when viewing art images but not non-art images, and was not driven by regions that correlated with esthetic preference for either art or non-art images. These findings are consistent with our hypothesis, leading us to propose that the appeal of visual art involves activation of reward circuitry based on artistic status alone and independently of its hedonic value.",
"title": ""
},
{
"docid": "1cabb80b00c350367de61194f85fdb77",
"text": "Text summarization is the process of distilling the most important information from source/sources to produce an abridged version for a particular user/users and task/tasks. Automatically generated summaries can significantly reduce the information overload on intelligence analysts in their daily work. Moreover, automated text summarization can be utilized for automated classification and filtering of text documents, information search over the Internet, content recommendation systems, online social networks, etc. The increasing trend of cross-border globalization accompanied by the growing multi-linguality of the Internet requires text summarization techniques to work equally well on multiple languages. However, only some of the automated summarization methods proposed in the literature can be defined as “multi-lingual\" or “language-independent,\" as they are not based on any morphological analysis of the summarized text. In this chapter, we present a novel approach called MUSE (MUltilingual Sentence Extractor) to “language-independent\" extractive summarization, which represents the summary as a collection of the most informative fragments of the summarized document without any language-specific text analysis. We use a Genetic Algorithm to find the best linear combination of 31 sentence scoring metrics based on vector and graph representations of text documents. Our summarization methodology is evaluated on two monolingual corpora of English and Hebrew documents, and, in addition, on a bilingual collection of English and Hebrew documents. The results are compared to 15 statistical sentence scoring methods for extractive single-document summarization found in the literature and to several stateof-the-art summarization tools. These bilingual experiments show that the MUSE methodology significantly outperforms the existing approaches and tools in both languages.",
"title": ""
},
{
"docid": "7e74cc21787c1e21fd64a38f1376c6a9",
"text": "The Bidirectional Reflectance Distribution Function (BRDF) describes the appearance of a material by its interaction with light at a surface point. A variety of analytical models have been proposed to represent BRDFs. However, analysis of these models has been scarce due to the lack of high-resolution measured data. In this work we evaluate several well-known analytical models in terms of their ability to fit measured BRDFs. We use an existing high-resolution data set of a hundred isotropic materials and compute the best approximation for each analytical model. Furthermore, we have built a new setup for efficient acquisition of anisotropic BRDFs, which allows us to acquire anisotropic materials at high resolution. We have measured four samples of anisotropic materials (brushed aluminum, velvet, and two satins). Based on the numerical errors, function plots, and rendered images we provide insights into the performance of the various models. We conclude that for most isotropic materials physically-based analytic reflectance models can represent their appearance quite well. We illustrate the important difference between the two common ways of defining the specular lobe: around the mirror direction and with respect to the half-vector. Our evaluation shows that the latter gives a more accurate shape for the reflection lobe. Our analysis of anisotropic materials indicates current parametric reflectance models cannot represent their appearances faithfully in many cases. We show that using a sampled microfacet distribution computed from measurements improves the fit and qualitatively reproduces the measurements.",
"title": ""
},
{
"docid": "58cfc1f2f7c56794cdf0d81133253c00",
"text": "Machine reading comprehension with unanswerable questions aims to abstain from answering when no answer can be inferred. In addition to extract answers, previous works usually predict an additional “no-answer” probability to detect unanswerable cases. However, they fail to validate the answerability of the question by verifying the legitimacy of the predicted answer. To address this problem, we propose a novel read-then-verify system, which not only utilizes a neural reader to extract candidate answers and produce noanswer probabilities, but also leverages an answer verifier to decide whether the predicted answer is entailed by the input snippets. Moreover, we introduce two auxiliary losses to help the reader better handle answer extraction as well as noanswer detection, and investigate three different architectures for the answer verifier. Our experiments on the SQuAD 2.0 dataset show that our system obtains a score of 74.2 F1 on test set, achieving state-of-the-art results at the time of submission (Aug. 28th, 2018).",
"title": ""
},
{
"docid": "f453b2fdb5da78a9a8b303b5bed8ae25",
"text": "Building correct and efficient concurrent algorithms is known to be a difficult problem of fundamental importance. To achieve efficiency, designers try to remove unnecessary and costly synchronization. However, not only is this manual trial-and-error process ad-hoc, time consuming and error-prone, but it often leaves designers pondering the question of: is it inherently impossible to eliminate certain synchronization, or is it that I was unable to eliminate it on this attempt and I should keep trying?\n In this paper we respond to this question. We prove that it is impossible to build concurrent implementations of classic and ubiquitous specifications such as sets, queues, stacks, mutual exclusion and read-modify-write operations, that completely eliminate the use of expensive synchronization.\n We prove that one cannot avoid the use of either: i) read-after-write (RAW), where a write to shared variable A is followed by a read to a different shared variable B without a write to B in between, or ii) atomic write-after-read (AWAR), where an atomic operation reads and then writes to shared locations. Unfortunately, enforcing RAW or AWAR is expensive on all current mainstream processors. To enforce RAW, memory ordering--also called fence or barrier--instructions must be used. To enforce AWAR, atomic instructions such as compare-and-swap are required. However, these instructions are typically substantially slower than regular instructions.\n Although algorithm designers frequently struggle to avoid RAW and AWAR, their attempts are often futile. Our result characterizes the cases where avoiding RAW and AWAR is impossible. On the flip side, our result can be used to guide designers towards new algorithms where RAW and AWAR can be eliminated.",
"title": ""
},
{
"docid": "7cc20934720912ad1c056dc9afd97e18",
"text": "Hidden Markov models (HMM’s) have been used prominently and successfully in speech recognition and, more recently, in handwriting recognition. Consequently, they seem ideal for visual recognition of complex, structured hand gestures such as are found in sign language. We describe two experiments that. demonstrate a real-time HMM-based system for recognizing sentence level American Sign Language (ASL) without explicitly modeling the fingers. The first experiment tracks hands wearing colored gloves and attains a word accuracy of 99%. The second experiment tracks hands without gloves and attains a word accuracy of 92%. Both experiments have a 40 word lexicon.",
"title": ""
},
{
"docid": "ec1528a3f82caa4953c39a101e4d4311",
"text": "Electroencephalography (EEG) is the recording of electrical activity along the scalp of human brain. EEG is most often used to diagnose brain disorders i.e. epilepsy, sleep disorder, coma, brain death etc. EEG signals are frequently contaminated by Eye Blink Artifacts generated due to the opening and closing of eye lids during EEG recording. To analyse signal of EEG for diagnosis it is necessary that the EEG recording should be artifact free. This paper is based on the work to detect the presence of artifact and its actual position with extent in EEG recording. For the purpose of classification of artifact or non-artifact activity Artificial Neural Network (ANN) is used and for detection of contaminated zone the Discrete Wavelet Transform with level 6 Haar is used. The part of zone detection is necessary for further appropriate removal of artifactual activities from EEG recording without losing the background activity. The results demonstrated from the ANN classifier are very much promising such as- Sensitivity 98.21 %, Specificity 87.50 %, and Accuracy 95.83 %.",
"title": ""
},
{
"docid": "4172a0c101756ea8207b65b0dfbbe8ce",
"text": "Inspired by ACTORS [7, 17], we have implemented an interpreter for a LISP-like language, SCHEME, based on the lambda calculus [2], but extended for side effects, multiprocessing, and process synchronization. The purpose of this implementation is tutorial. We wish to: 1. alleviate the confusion caused by Micro-PLANNER, CONNIVER, etc., by clarifying the embedding of non-recursive control structures in a recursive host language like LISP. 2. explain how to use these control structures, independent of such issues as pattern matching and data base manipulation. 3. have a simple concrete experimental domain for certain issues of programming semantics and style. This paper is organized into sections. The first section is a short “reference manual” containing specifications for all the unusual features of SCHEME. Next, we present a sequence of programming examples which illustrate various programming styles, and how to use them. This will raise certain issues of semantics which we will try to clarify with lambda calculus in the third section. In the fourth section we will give a general discussion of the issues facing an implementor of an interpreter for a language based on lambda calculus. Finally, we will present a completely annotated interpreter for SCHEME, written in MacLISP [13], to acquaint programmers with the tricks of the trade of implementing non-recursive control structures in a recursive language like LISP. This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory’s artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C0643. 1. The SCHEME Reference Manual SCHEME is essentially a full-funarg LISP. LAMBDAexpressions need not be QUOTEd, FUNCTIONed, or *FUNCTIONed when passed as arguments or returned as values; they will evaluate to closures of themselves. All LISP functions (i.e.,EXPRs,SUBRs, andLSUBRs, butnotFEXPRs,FSUBRs, orMACROs) are primitive operators in SCHEME, and have the same meaning as they have in LISP. Like LAMBDAexpressions, primitive operators and numbers are self-evaluating (they evaluate to trivial closures of themselves). There are a number of special primitives known as AINTs which are to SCHEME as FSUBRs are to LISP. We will enumerate them here. IF This is the primitive conditional operator. It takes three arguments. If the first evaluates to non-NIL , it evaluates the second expression, and otherwise the third. QUOTE As in LISP, this quotes the argument form so that it will be passed verbatim as data. The abbreviation “ ’FOO” may be used instead of “ (QUOTE FOO) ”. 406 SUSSMAN AND STEELE DEFINE This is analogous to the MacLISP DEFUNprimitive (but note that theLAMBDA must appear explicitly!). It is used for defining a function in the “global environment” permanently, as opposed to LABELS(see below), which is used for temporary definitions in a local environment.DEFINE takes a name and a lambda expression; it closes the lambda expression in the global environment and stores the closure in the LISP value cell of the name (which is a LISP atom). LABELS We have decided not to use the traditional LABEL primitive in this interpreter because it is difficult to define several mutually recursive functions using only LABEL. 
The solution, which Hewitt [17] also uses, is to adopt an ALGOLesque block syntax: (LABELS <function definition list> <expression>) This has the effect of evaluating the expression in an environment where all the functions are defined as specified by the definitions list. Furthermore, the functions are themselves closed in that environment, and not in the outer environment; this allows the functions to call themselves and each other recursively. For example, consider a function which counts all the atoms in a list structure recursively to all levels, but which doesn’t count the NILs which terminate lists (but NILs in the CAR of some list count). In order to perform this we use two mutually recursive functions, one to count the car and one to count the cdr, as follows: (DEFINE COUNT (LAMBDA (L) (LABELS ((COUNTCAR (LAMBDA (L) (IF (ATOM L) 1 (+ (COUNTCAR (CAR L)) (COUNTCDR (CDR L)))))) (COUNTCDR (LAMBDA (L) (IF (ATOM L) (IF (NULL L) 0 1) (+ (COUNTCAR (CAR L)) (COUNTCDR (CDR L))))))) (COUNTCDR L)))) ;Note: COUNTCDR is defined here. ASET This is the side effect primitive. It is analogous to the LISP function SET. For example, to define a cell [17], we may use ASET as follows: (DEFINE CONS-CELL (LAMBDA (CONTENTS) (LABELS ((THE-CELL (LAMBDA (MSG) (IF (EQ MSG ’CONTENTS?) CONTENTS (IF (EQ MSG ’CELL?) ’YES (IF (EQ (CAR MSG) ’<-) (BLOCK (ASET ’CONTENTS (CADR MSG)) THE-CELL) (ERROR ’|UNRECOGNIZED MESSAGE CELL| MSG ’WRNG-TYPE-ARG))))))) THE-CELL))) Those of you who may complain about the lack of ASETQ are invited to write (ASET’ foo bar) instead of (ASET ’foo bar). EVALUATE This is similar to the LISP function EVAL. It evaluates its argument, and then evaluates the resulting s-expression as SCHEME code. CATCH This is the “escape operator” which gives the user a handle on the control structure of the interpreter. The expression: (CATCH <identifier> <expression>) evaluates <expression> in an environment where <identifier> is bound to a continuation which is “just about to return from the CATCH”; that is, if the continuation is called as a function of one argument, then control proceeds as if the CATCH expression had returned with the supplied (evaluated) argument as its value. For example, consider the following obscure definition of SQRT (Sussman’s favorite style/Steele’s least favorite): (DEFINE SQRT (LAMBDA (X EPSILON) ((LAMBDA (ANS LOOPTAG) (CATCH RETURNTAG (PROGN (ASET ’LOOPTAG (CATCH M M)) ;CREATE PROG TAG (IF (< (ABS (-$ (*$ ANS ANS) X)) EPSILON) (RETURNTAG ANS) ;RETURN NIL) ;JFCL (ASET ’ANS (//$ (+$ (//$ X ANS) ANS) 2.0)) (LOOPTAG LOOPTAG)))) ;GOTO 1.0 NIL))) Anyone who doesn’t understand how this manages to work probably should not attempt to use CATCH. As another example, we can define a THROW function, which may then be used with CATCH much as they are in LISP: (DEFINE THROW (LAMBDA (TAG RESULT) (TAG RESULT))) CREATE!PROCESS This is the process generator for multiprocessing. It takes one argument, an expression to be evaluated in the current environment as a separate parallel process. If the expression ever returns a value, the process automatically terminates. The value of CREATE!PROCESS is a process id for the newly generated process. Note that the newly created process will not actually run until it is explicitly started. START!PROCESS This takes one argument, a process id, and starts up that process. It then runs.
The stopped process may be continued from where it was stopped by using START!PROCESSagain on it. The magic global variable**PROCESS** always contains the process id of the currently running process; thus a process can stop itself by doing (STOP!PROCESS **PROCESS**) . A stopped process is garbage collected if no live process has a pointer to its process id. EVALUATE!UNINTERRUPTIBLY This is the synchronization primitive. It evaluates an expression uninterruptibly; i.e., no other process may run until the expression has returned a value. Note that if a funarg is returned from the scope of an EVALUATE!UNINTERRUPTIBLY, then that funarg will be uninterruptible when it is applied; that is, the uninterruptibility property follows the rules of variable scoping. For example, consider the following function: (DEFINE SEMGEN (LAMBDA (SEMVAL) (LIST (LAMBDA () (EVALUATE!UNINTERRUPTIBLY (ASET’ SEMVAL (+ SEMVAL 1)))) (LABELS (P (LAMBDA () (EVALUATE!UNINTERRUPTIBLY (IF (PLUSP SEMVAL) (ASET’ SEMVAL (SEMVAL 1)) (P))))) P)))) This returns a pair of functions which are V and P operations on a newly created semaphore. The argument to SEMGENis the initial value for the semaphore. Note that P busy-waits by iterating if necessary; because EVALUATE!UNINTERRUPTIBLYuses variable-scoping rules, other processes have a chance to get in at the beginning of each iteration. This busy-wait can be made much more efficient by replacing the expression (P) in the definition ofP with ((LAMBDA (ME) (BLOCK (START!PROCESS (CREATE!PROCESS ’(START!PROCESS ME))) (STOP!PROCESS ME) (P))) **PROCESS**) Let’s see you figure this one out! Note that a STOP!PROCESSwithin anEVALUATE! UNINTERRUPTIBLYforces the process to be swapped out even if it is the current one, and so other processes get to run; but as soon as it gets swapped in again, others are locked out as before. Besides theAINTs, SCHEME has a class of primitives known as AMACRO s These are similar to MacLISPMACROs, in that they are expanded into equivalent code before being executed. Some AMACRO s supplied with the SCHEME interpreter: INTERPRETER FOR EXTENDED LAMBDA CALCULUS 409 COND This is like the MacLISPCONDstatement, except that singleton clauses (where the result of the predicate is the returned value) are not allowed. AND, OR These are also as in MacLISP. BLOCK This is like the MacLISPPROGN, but arranges to evaluate its last argument without an extra net control frame (explained later), so that the last argument may involved in an iteration. Note that in SCHEME, unlike MacLISP, the body of a LAMBDAexpression is not an implicit PROGN. DO This is like the MacLISP “new-style” DO; old-styleDOis not supported. AMAPCAR , AMAPLIST These are likeMAPCARandMAPLIST, but they expect a SCHEME lambda closure for the first argument. To use SCHEME, simply incant at DDT (on MIT-AI): 3",
"title": ""
},
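The CONS-CELL example in the passage above builds a mutable, message-passing cell out of nothing but a closure and assignment (ASET). As a rough illustration only (not code from the paper), the same idea can be sketched in Python, with nonlocal standing in for ASET on the closed-over variable:

```python
def cons_cell(contents):
    def the_cell(msg, *args):
        nonlocal contents
        if msg == "contents?":      # (EQ MSG 'CONTENTS?)
            return contents
        if msg == "cell?":          # (EQ MSG 'CELL?)
            return "yes"
        if msg == "<-":             # update the cell, return it for chaining
            contents = args[0]
            return the_cell
        raise ValueError(f"unrecognized message: {msg}")
    return the_cell

cell = cons_cell(42)
print(cell("contents?"))   # 42
cell("<-", 99)
print(cell("contents?"))   # 99
```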
{
"docid": "8f54f2c6e9736a63ea4a99f89090e0a2",
"text": "This article demonstrates how documents prepared in hypertext or word processor format can be saved in portable document format (PDF). These files are self-contained documents that that have the same appearance on screen and in print, regardless of what kind of computer or printer are used, and regardless of what software package was originally used to for their creation. PDF files are compressed documents, invariably smaller than the original files, hence allowing rapid dissemination and download.",
"title": ""
}
] |
scidocsrr
|
95dae1c267cfb5f8cd2d5206f0d66194
|
Fit4life: the design of a persuasive technology promoting healthy behavior and ideal weight
|
[
{
"docid": "8d292592202c948c439f055ca5df9d56",
"text": "This paper provides an overview of the current state of the art in persuasive systems design. All peer-reviewed full papers published at the first three International Conferences on Persuasive Technology were analyzed employing a literature review framework. Results from this analysis are discussed and directions for future research are suggested. Most research papers so far have been experimental. Five out of six of these papers (84.4%) have addressed behavioral change rather than an attitude change. Tailoring, tunneling, reduction and social comparison have been the most studied methods for persuasion. Quite, surprisingly ethical considerations have remained largely unaddressed in these papers. In general, many of the research papers seem to describe the investigated persuasive systems in a relatively vague manner leaving room for some improvement.",
"title": ""
}
] |
[
{
"docid": "0fb7fa7907e33b3192946407607b54f2",
"text": "We present commensal cuckoo,* a secure group partitioning scheme for large-scale systems that maintains the correctness of many small groups, despite a Byzantine adversary that controls a constant (global) fraction of all nodes. In particular, the adversary is allowed to repeatedly rejoin faulty nodes to the system in an arbitrary adaptive manner, e.g., to collocate them in the same group. Commensal cuckoo addresses serious practical limitations of the state-ofthe- art scheme, the cuckoo rule of Awerbuch and Scheideler, tolerating 32x--41x more faulty nodes with groups as small as 64 nodes (as compared to the hundreds required by the cuckoo rule). Secure group partitioning is a key component of highly-scalable, reliable systems such as Byzantine faulttolerant distributed hash tables (DHTs).",
"title": ""
},
{
"docid": "ca26daaa9961f7ba2343ae84245c1181",
"text": "In a recently held WHO workshop it has been recommended to abandon the distinction between potentially malignant lesions and potentially malignant conditions and to use the term potentially malignant disorders instead. Of these disorders, leukoplakia and erythroplakia are the most common ones. These diagnoses are still defined by exclusion of other known white or red lesions. In spite of tremendous progress in the field of molecular biology there is yet no single marker that reliably enables to predict malignant transformation in an individual patient. The general advice is to excise or laser any oral of oropharyngeal leukoplakia/erythroplakia, if feasible, irrespective of the presence or absence of dysplasia. Nevertheless, it is actually unknown whether such removal truly prevents the possible development of a squamous cell carcinoma. At present, oral lichen planus seems to be accepted in the literature as being a potentially malignant disorder, although the risk of malignant transformation is lower than in leukoplakia. There are no means to prevent such event. The efficacy of follow-up of oral lichen planus is questionable. Finally, brief attention has been paid to oral submucous fibrosis, actinic cheilitis, some inherited cancer syndromes and immunodeficiency in relation to cancer predisposition.",
"title": ""
},
{
"docid": "5e15abdf0268acf2495a06a49a49eee7",
"text": "Analysis of large scale geonomics data, notably gene expres sion, has initially focused on clustering methods. Recently, biclustering techniques we re proposed for revealing submatrices showing unique patterns. We review some of the algorithmic a pproaches to biclustering and discuss their properties.",
"title": ""
},
{
"docid": "4718e64540f5b8d7399852fb0e16944a",
"text": "In this paper, we propose a novel extension of the extreme learning machine (ELM) algorithm for single-hidden layer feedforward neural network training that is able to incorporate subspace learning (SL) criteria on the optimization process followed for the calculation of the network's output weights. The proposed graph embedded ELM (GEELM) algorithm is able to naturally exploit both intrinsic and penalty SL criteria that have been (or will be) designed under the graph embedding framework. In addition, we extend the proposed GEELM algorithm in order to be able to exploit SL criteria in arbitrary (even infinite) dimensional ELM spaces. We evaluate the proposed approach on eight standard classification problems and nine publicly available datasets designed for three problems related to human behavior analysis, i.e., the recognition of human face, facial expression, and activity. Experimental results denote the effectiveness of the proposed approach, since it outperforms other ELM-based classification schemes in all the cases.",
"title": ""
},
{
"docid": "f86e3894a6c61c3734e1aabda3500ef0",
"text": "We perform sensitivity analyses on a mathematical model of malaria transmission to determine the relative importance of model parameters to disease transmission and prevalence. We compile two sets of baseline parameter values: one for areas of high transmission and one for low transmission. We compute sensitivity indices of the reproductive number (which measures initial disease transmission) and the endemic equilibrium point (which measures disease prevalence) to the parameters at the baseline values. We find that in areas of low transmission, the reproductive number and the equilibrium proportion of infectious humans are most sensitive to the mosquito biting rate. In areas of high transmission, the reproductive number is again most sensitive to the mosquito biting rate, but the equilibrium proportion of infectious humans is most sensitive to the human recovery rate. This suggests strategies that target the mosquito biting rate (such as the use of insecticide-treated bed nets and indoor residual spraying) and those that target the human recovery rate (such as the prompt diagnosis and treatment of infectious individuals) can be successful in controlling malaria.",
"title": ""
},
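The sensitivity indices mentioned in the malaria abstract above are typically normalized forward sensitivity indices, (df/dp)·(p/f). The sketch below illustrates that calculation numerically; the R0 expression and parameter values are placeholders chosen for illustration, not the model or baseline values from the paper:

```python
import numpy as np

# Hypothetical Ross-Macdonald-style R0; a stand-in, not the paper's model.
def r0(p):
    return (p["m"] * p["a"] ** 2 * p["b"] * p["c"]
            * np.exp(-p["mu"] * p["tau"])) / (p["r"] * p["mu"])

def sensitivity_index(f, params, name, eps=1e-6):
    """Normalized forward sensitivity index: (df/dp) * (p / f)."""
    base = f(params)
    bumped = dict(params, **{name: params[name] * (1 + eps)})
    dfdp = (f(bumped) - base) / (params[name] * eps)
    return dfdp * params[name] / base

params = {"m": 20, "a": 0.3, "b": 0.3, "c": 0.5, "mu": 0.1, "tau": 10, "r": 0.01}
for name in params:                      # e.g. the biting rate a scores ~2 (quadratic in R0)
    print(name, round(sensitivity_index(r0, params, name), 3))
```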
{
"docid": "002c83aada3dbbc19a1da7561c53fc4b",
"text": "The Swedish preschool is an important socializing agent because the great majority of children aged, from 1 to 5 years, are enrolled in an early childhood education program. This paper explores how preschool teachers and children, in an ethnically diverse preschool, negotiate the meaning of cultural traditions celebrated in Swedish preschools. Particular focus is given to narrative representations of cultural traditions as they are co-constructed and negotiated in preschool practice between teachers and children. Cultural traditions are seen as shared events in the children’s preschool life, as well as symbolic resources which enable children and preschool teachers to conceive themselves as part of a larger whole. The data analyzed are three videotaped circle time events focused on why a particular tradition is celebrated. Methodologically the analysis builds on a narrative approach inspired by Bakhtin’s notion of addressivity and on Alexander’s ideas about dialogic teaching. The results of the analysis show that the teachers attempt to achieve a balance between transferring traditional cultural and religious values and realizing a child-centered pedagogy, emphasizing the child’s initiative. The analyses also show that narratives with a religious tonality generate some uncertainty on how to communicate with the children about the traditions that are being discussed. These research findings are important because, in everyday practice, preschool teachers enact whether religion is regarded as an essential part of cultural socialization, while acting both as keepers of traditions and agents of change.",
"title": ""
},
{
"docid": "5351eb646699758a4c1dd1d4e9c35b26",
"text": "Interpersonal trust is one of the key components of efficient teamwork. Research suggests two main approaches for trust formation: personal information exchange (e.g., social icebreakers), and creating a context of risk and interdependence (e.g., trust falls). However, because these strategies are difficult to implement in an online setting, trust is more difficult to achieve and preserve in distributed teams. In this paper, we argue that games are an optimal environment for trust formation because they can simulate both risk and interdependence. Results of our online experiment show that a social game can be more effective than a social task at fostering interpersonal trust. Furthermore, trust formation through the game is reliable, but trust depends on several contingencies in the social task. Our work suggests that gameplay interactions do not merely promote impoverished versions of the rich ties formed through conversation; but rather engender genuine social bonds. \\",
"title": ""
},
{
"docid": "4411ff57ab4fbfdff76501fe2e3f6f4a",
"text": "Incorporating wireless transceivers with numerous antennas (such as Massive-MIMO) is a prospective way to increase the link capacity or enhance the energy efficiency of future communication systems. However, the benefits of such approach can be realized only when proper channel information is available at the transmitter. Since the amount of the channel information required by the transmitter is large with so many antennas, the feedback is arduous in practice, especially for frequency division duplexing (FDD) systems. This paper proposes channel feedback reduction techniques based on the theory of compressive sensing, which permits the transmitter to obtain channel information with acceptable accuracy under substantially reduced feedback load. Furthermore, by leveraging properties of compressive sensing, we present two adaptive feedback protocols, in which the feedback content can be dynamically configured based on channel conditions to improve the efficiency.",
"title": ""
},
{
"docid": "a774567d957ed0ea209b470b8eced563",
"text": "The vulnerability of the nervous system to advancing age is all too often manifest in neurodegenerative disorders such as Alzheimer's and Parkinson's diseases. In this review article we describe evidence suggesting that two dietary interventions, caloric restriction (CR) and intermittent fasting (IF), can prolong the health-span of the nervous system by impinging upon fundamental metabolic and cellular signaling pathways that regulate life-span. CR and IF affect energy and oxygen radical metabolism, and cellular stress response systems, in ways that protect neurons against genetic and environmental factors to which they would otherwise succumb during aging. There are multiple interactive pathways and molecular mechanisms by which CR and IF benefit neurons including those involving insulin-like signaling, FoxO transcription factors, sirtuins and peroxisome proliferator-activated receptors. These pathways stimulate the production of protein chaperones, neurotrophic factors and antioxidant enzymes, all of which help cells cope with stress and resist disease. A better understanding of the impact of CR and IF on the aging nervous system will likely lead to novel approaches for preventing and treating neurodegenerative disorders.",
"title": ""
},
{
"docid": "7071a178d42011a39145066da2d08895",
"text": "This paper discusses the trend modeling for traffic time series. First, we recount two types of definitions for a long-term trend that appeared in previous studies and illustrate their intrinsic differences. We show that, by assuming an implicit temporal connection among the time series observed at different days/locations, the PCA trend brings several advantages to traffic time series analysis. We also describe and define the so-called short-term trend that cannot be characterized by existing definitions. Second, we sequentially review the role that trend modeling plays in four major problems in traffic time series analysis: abnormal data detection, data compression, missing data imputation, and traffic prediction. The relations between these problems are revealed, and the benefit of detrending is explained. For the first three problems, we summarize our findings in the last ten years and try to provide an integrated framework for future study. For traffic prediction problem, we present a new explanation on why prediction accuracy can be improved at data points representing the short-term trends if the traffic information from multiple sensors can be appropriately used. This finding indicates that the trend modeling is not only a technique to specify the temporal pattern but is also related to the spatial relation of traffic time series.",
"title": ""
},
{
"docid": "1e3d8e4d78052cfccc2f23dadcfa841b",
"text": "OBJECTIVE\nAlthough the underlying cause of Huntington's disease (HD) is well established, the actual pathophysiological processes involved remain to be fully elucidated. In other proteinopathies such as Alzheimer's and Parkinson's diseases, there is evidence for impairments of the cerebral vasculature as well as the blood-brain barrier (BBB), which have been suggested to contribute to their pathophysiology. We investigated whether similar changes are also present in HD.\n\n\nMETHODS\nWe used 3- and 7-Tesla magnetic resonance imaging as well as postmortem tissue analyses to assess blood vessel impairments in HD patients. Our findings were further investigated in the R6/2 mouse model using in situ cerebral perfusion, histological analysis, Western blotting, as well as transmission and scanning electron microscopy.\n\n\nRESULTS\nWe found mutant huntingtin protein (mHtt) aggregates to be present in all major components of the neurovascular unit of both R6/2 mice and HD patients. This was accompanied by an increase in blood vessel density, a reduction in blood vessel diameter, as well as BBB leakage in the striatum of R6/2 mice, which correlated with a reduced expression of tight junction-associated proteins and increased numbers of transcytotic vesicles, which occasionally contained mHtt aggregates. We confirmed the existence of similar vascular and BBB changes in HD patients.\n\n\nINTERPRETATION\nTaken together, our results provide evidence for alterations in the cerebral vasculature in HD leading to BBB leakage, both in the R6/2 mouse model and in HD patients, a phenomenon that may, in turn, have important pathophysiological implications.",
"title": ""
},
{
"docid": "85ccad436c7e7eed128825e3946ae0ef",
"text": "Recent research has made great strides in the field of detecting botnets. However, botnets of all kinds continue to plague the Internet, as many ISPs and organizations do not deploy these techniques. We aim to mitigate this state by creating a very low-cost method of detecting infected bot host. Our approach is to leverage the botnet detection work carried out by some organizations to easily locate collaborating bots elsewhere. We created BotMosaic as a countermeasure to IRC-based botnets. BotMosaic relies on captured bot instances controlled by a watermarker, who inserts a particular pattern into their network traffic. This pattern can then be detected at a very low cost by client organizations and the watermark can be tuned to provide acceptable false-positive rates. A novel feature of the watermark is that it is inserted collaboratively into the flows of multiple captured bots at once, in order to ensure the signal is strong enough to be detected. BotMosaic can also be used to detect stepping stones and to help trace back to the botmaster. It is content agnostic and can operate on encrypted traffic. We evaluate BotMosaic using simulations and a testbed deployment.",
"title": ""
},
{
"docid": "eabeed186d3ca4a372f5f83169d44e57",
"text": "In disciplines as diverse as social network analysis and neuroscience, many large graphs are believed to be composed of loosely connected smaller graph primitives, whose structure is more amenable to analysis We propose a robust, scalable, integrated methodology for community detection and community comparison in graphs. In our procedure, we first embed a graph into an appropriate Euclidean space to obtain a low-dimensional representation, and then cluster the vertices into communities. We next employ nonparametric graph inference techniques to identify structural similarity among these communities. These two steps are then applied recursively on the communities, allowing us to detect more fine-grained structure. We describe a hierarchical stochastic blockmodel—namely, a stochastic blockmodel with a natural hierarchical structure—and establish conditions under which our algorithm yields consistent estimates of model parameters and motifs, which we define to be stochastically similar groups of subgraphs. Finally, we demonstrate the effectiveness of our algorithm in both simulated and real data. Specifically, we address the problem of locating similar sub-communities in a partially reconstructed Drosophila connectome and in the social network Friendster.",
"title": ""
},
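The first two steps described in the community-detection abstract above (embed the graph into Euclidean space, then cluster the vertices) can be illustrated with a plain adjacency spectral embedding followed by k-means. This is only a simplified sketch; the recursive application, motif estimation, and nonparametric comparison from the paper are not reproduced here:

```python
import numpy as np
from sklearn.cluster import KMeans

def adjacency_spectral_embedding(A, d):
    """Embed vertices into R^d using the top-d eigenpairs of the (symmetric) adjacency matrix."""
    vals, vecs = np.linalg.eigh(A)
    idx = np.argsort(np.abs(vals))[::-1][:d]      # largest-magnitude eigenvalues
    return vecs[:, idx] * np.sqrt(np.abs(vals[idx]))

def detect_communities(A, d=2, k=2, seed=0):
    X = adjacency_spectral_embedding(A, d)
    return KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(X)

# Toy graph: two 4-node cliques joined by a single edge.
A = np.zeros((8, 8))
A[:4, :4] = 1
A[4:, 4:] = 1
np.fill_diagonal(A, 0)
A[3, 4] = A[4, 3] = 1
print(detect_communities(A))   # the two cliques fall into separate clusters
```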
{
"docid": "b43cf46b0329172b6a9a6deadb6de8bc",
"text": "We present the approaches for the four video-tolanguage tasks of LSMDC 2016, including movie description, fill-in-the-blank, multiple-choice test, and movie retrieval. Our key idea is to adopt the semantic attention mechanism; we first build a set of attribute words that are consistently discovered on video frames, and then selectively fuse them with input words for more semantic representation and with output words for more accurate prediction. We show that our implementation of semantic attention indeed improves the performance of multiple video-tolanguage tasks. Specifically, the presented approaches participated in all the four tasks of the LSMDC 2016, and have won three of them, including fill-in-the-blank, multiplechoice test, and movie retrieval.",
"title": ""
},
{
"docid": "cc219b4f335c9e10f31db746b766b425",
"text": "Congenital tumors of the central nervous system (CNS) are often arbitrarily divided into “definitely congenital” (present or producing symptoms at birth), “probably congenital” (present or producing symptoms within the first week of life), and “possibly congenital” (present or producing symptoms within the first 6 months of life). They represent less than 2% of all childhood brain tumors. The clinical features of newborns include an enlarged head circumference, associated hydrocephalus, and asymmetric skull growth. At birth, a large head or a tense fontanel is the presenting sign in up to 85% of patients. Neurological symptoms as initial symptoms are comparatively rare. The prenatal diagnosis of congenital CNS tumors, while based on ultrasonography, has significantly benefited from the introduction of prenatal magnetic resonance imaging studies. Teratomas constitute about one third to one half of these tumors and are the most common neonatal brain tumor. They are often immature because of primitive neural elements and, rarely, a component of mixed malignant germ cell tumors. Other tumors include astrocytomas, choroid plexus papilloma, primitive neuroectodermal tumors, atypical teratoid/rhabdoid tumors, and medulloblastomas. Less common histologies include craniopharyngiomas and ependymomas. There is a strong predilection for supratentorial locations, different from tumors of infants and children. Differential diagnoses include spontaneous intracranial hemorrhage that can occur in the presence of coagulation factor deficiency or underlying vascular malformations, and congenital brain malformations, especially giant heterotopia. The prognosis for patients with congenital tumors is generally poor, usually because of the massive size of the tumor. However, tumors can be resected successfully if they are small and favorably located. The most favorable outcomes are achieved with choroid plexus tumors, where aggressive surgical treatment leads to disease-free survival.",
"title": ""
},
{
"docid": "3f5461231e7120be4fbddfd53c533a53",
"text": "OBJECTIVE\nTo develop and validate a general method (called regression risk analysis) to estimate adjusted risk measures from logistic and other nonlinear multiple regression models. We show how to estimate standard errors for these estimates. These measures could supplant various approximations (e.g., adjusted odds ratio [AOR]) that may diverge, especially when outcomes are common.\n\n\nSTUDY DESIGN\nRegression risk analysis estimates were compared with internal standards as well as with Mantel-Haenszel estimates, Poisson and log-binomial regressions, and a widely used (but flawed) equation to calculate adjusted risk ratios (ARR) from AOR.\n\n\nDATA COLLECTION\nData sets produced using Monte Carlo simulations.\n\n\nPRINCIPAL FINDINGS\nRegression risk analysis accurately estimates ARR and differences directly from multiple regression models, even when confounders are continuous, distributions are skewed, outcomes are common, and effect size is large. It is statistically sound and intuitive, and has properties favoring it over other methods in many cases.\n\n\nCONCLUSIONS\nRegression risk analysis should be the new standard for presenting findings from multiple regression analysis of dichotomous outcomes for cross-sectional, cohort, and population-based case-control studies, particularly when outcomes are common or effect size is large.",
"title": ""
},
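One common way to obtain adjusted risk ratios and differences directly from a fitted logistic model is marginal standardization: predict everyone's risk with the exposure toggled on and off, then average. The sketch below assumes that reading of the abstract above; it is illustrative and not necessarily the exact estimator or standard-error machinery the paper proposes:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
confounder = rng.normal(size=n)
exposure = rng.binomial(1, 1 / (1 + np.exp(-confounder)))            # confounded exposure
p_outcome = 1 / (1 + np.exp(-(-1.0 + 0.7 * exposure + 0.5 * confounder)))
outcome = rng.binomial(1, p_outcome)

X = np.column_stack([exposure, confounder])
model = LogisticRegression().fit(X, outcome)

# Marginal standardization: predict risk for everyone as exposed vs. unexposed.
X1 = X.copy(); X1[:, 0] = 1
X0 = X.copy(); X0[:, 0] = 0
risk1 = model.predict_proba(X1)[:, 1].mean()
risk0 = model.predict_proba(X0)[:, 1].mean()
print("adjusted risk ratio:", risk1 / risk0)
print("adjusted risk difference:", risk1 - risk0)
```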
{
"docid": "f16d93249254118060ce81b2f92faca5",
"text": "Radiologists are critically interested in promoting best practices in medical imaging, and to that end, they are actively developing tools that will optimize terminology and reporting practices in radiology. The RadLex® vocabulary, developed by the Radiological Society of North America (RSNA), is intended to create a unifying source for the terminology that is used to describe medical imaging. The RSNA Reporting Initiative has developed a library of reporting templates to integrate reusable knowledge, or meaning, into the clinical reporting process. This report presents the initial analysis of the intersection of these two major efforts. From 70 published radiology reporting templates, we extracted the names of 6,489 reporting elements. These terms were reviewed in conjunction with the RadLex vocabulary and classified as an exact match, a partial match, or unmatched. Of 2,509 unique terms, 1,017 terms (41%) matched exactly to RadLex terms, 660 (26%) were partial matches, and 832 reporting terms (33%) were unmatched to RadLex. There is significant overlap between the terms used in the structured reporting templates and RadLex. The unmatched terms were analyzed using the multidimensional scaling (MDS) visualization technique to reveal semantic relationships among them. The co-occurrence analysis with the MDS visualization technique provided a semantic overview of the investigated reporting terms and gave a metric to determine the strength of association among these terms.",
"title": ""
},
{
"docid": "bc781e8aa4fbc8ead4d996595ee49e72",
"text": "Recent studies of an increasing number of hominin fossils highlight regional and chronological diversities of archaic Homo in the Pleistocene of eastern Asia. However, such a realization is still based on limited geographical occurrences mainly from Indonesia, China and Russian Altai. Here we describe a newly discovered archaic Homo mandible from Taiwan (Penghu 1), which further increases the diversity of Pleistocene Asian hominins. Penghu 1 revealed an unexpectedly late survival (younger than 450 but most likely 190-10 thousand years ago) of robust, apparently primitive dentognathic morphology in the periphery of the continent, which is unknown among the penecontemporaneous fossil records from other regions of Asia except for the mid-Middle Pleistocene Homo from Hexian, Eastern China. Such patterns of geographic trait distribution cannot be simply explained by clinal geographic variation of Homo erectus between northern China and Java, and suggests survival of multiple evolutionary lineages among archaic hominins before the arrival of modern humans in the region.",
"title": ""
},
{
"docid": "e38cbee5c03319d15086e9c39f7f8520",
"text": "In this paper we describe COLIN, a forward-chaining heuristic search planner, capable of reasoning with COntinuous LINear numeric change, in addition to the full temporal semantics of PDDL2.1. Through this work we make two advances to the state-of-the-art in terms of expressive reasoning capabilities of planners: the handling of continuous linear change, and the handling of duration-dependent effects in combination with duration inequalities, both of which require tightly coupled temporal and numeric reasoning during planning. COLIN combines FF-style forward chaining search, with the use of a Linear Program (LP) to check the consistency of the interacting temporal and numeric constraints at each state. The LP is used to compute bounds on the values of variables in each state, reducing the range of actions that need to be considered for application. In addition, we develop an extension of the Temporal Relaxed Planning Graph heuristic of CRIKEY3, to support reasoning directly with continuous change. We extend the range of task variables considered to be suitable candidates for specifying the gradient of the continuous numeric change effected by an action. Finally, we explore the potential for employing mixed integer programming as a tool for optimising the timestamps of the actions in the plan, once a solution has been found. To support this, we further contribute a selection of extended benchmark domains that include continuous numeric effects. We present results for COLIN that demonstrate its scalability on a range of benchmarks, and compare to existing state-of-the-art planners.",
"title": ""
},
{
"docid": "57c91bce931a23501f42772c103d15c1",
"text": "Faceted browsing is widely used in Web shops and product comparison sites. In these cases, a fixed ordered list of facets is often employed. This approach suffers from two main issues. First, one needs to invest a significant amount of time to devise an effective list. Second, with a fixed list of facets, it can happen that a facet becomes useless if all products that match the query are associated to that particular facet. In this work, we present a framework for dynamic facet ordering in e-commerce. Based on measures for specificity and dispersion of facet values, the fully automated algorithm ranks those properties and facets on top that lead to a quick drill-down for any possible target product. In contrast to existing solutions, the framework addresses e-commerce specific aspects, such as the possibility of multiple clicks, the grouping of facets by their corresponding properties, and the abundance of numeric facets. In a large-scale simulation and user study, our approach was, in general, favorably compared to a facet list created by domain experts, a greedy approach as baseline, and a state-of-the-art entropy-based solution.",
"title": ""
}
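A minimal way to illustrate facet ordering by dispersion, one plausible reading of the specificity/dispersion measures mentioned above, is to score each facet by the entropy of its value counts over the currently matching products and rank facets by that score. The data and scoring rule below are illustrative assumptions, not the paper's algorithm:

```python
import math
from collections import Counter

def facet_score(value_counts):
    """Entropy of the facet's value distribution: higher means the values are spread
    evenly, so clicking any one value is likely to narrow the result set quickly."""
    total = sum(value_counts.values())
    probs = [c / total for c in value_counts.values()]
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical products still matching the current query.
products = [
    {"brand": "A", "color": "red"}, {"brand": "B", "color": "red"},
    {"brand": "C", "color": "red"}, {"brand": "D", "color": "blue"},
]

scores = {f: facet_score(Counter(p[f] for p in products)) for f in ["brand", "color"]}
for facet, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(facet, round(score, 3))   # 'brand' ranks above 'color' in this toy example
```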
] |
scidocsrr
|
ef41e7316954743722eabdae8c8c7feb
|
Knowledge Management as an important tool in Organisational Management : A Review of Literature
|
[
{
"docid": "adcaa15fd8f1e7887a05d3cb1cd47183",
"text": "The dynamic capabilities framework analyzes the sources and methods of wealth creation and capture by private enterprise firms operating in environments of rapid technological change. The competitive advantage of firms is seen as resting on distinctive processes (ways of coordinating and combining), shaped by the firm's (specific) asset positions (such as the firm's portfolio of difftcult-to-trade knowledge assets and complementary assets), and the evolution path(s) it has aflopted or inherited. The importance of path dependencies is amplified where conditions of increasing retums exist. Whether and how a firm's competitive advantage is eroded depends on the stability of market demand, and the ease of replicability (expanding intemally) and imitatability (replication by competitors). If correct, the framework suggests that private wealth creation in regimes of rapid technological change depends in large measure on honing intemal technological, organizational, and managerial processes inside the firm. In short, identifying new opportunities and organizing effectively and efficiently to embrace them are generally more fundamental to private wealth creation than is strategizing, if by strategizing one means engaging in business conduct that keeps competitors off balance, raises rival's costs, and excludes new entrants. © 1997 by John Wiley & Sons, Ltd.",
"title": ""
}
] |
[
{
"docid": "907888b819c7f65fe34fb8eea6df9c93",
"text": "Most time-series datasets with multiple data streams have (many) missing measurements that need to be estimated. Most existing methods address this estimation problem either by interpolating within data streams or imputing across data streams; we develop a novel approach that does both. Our approach is based on a deep learning architecture that we call a Multidirectional Recurrent Neural Network (M-RNN). An M-RNN differs from a bi-directional RNN in that it operates across streams in addition to within streams, and because the timing of inputs into the hidden layers is both lagged and advanced. To demonstrate the power of our approach we apply it to a familiar real-world medical dataset and demonstrate significantly improved performance.",
"title": ""
},
{
"docid": "5b8cb0c530daef4e267a8572349f1118",
"text": "I enjoy doing research in Computer Security and Software Engineering and specifically in mobile security and adversarial machine learning. A primary goal of my research is to build adversarial-resilient intelligent security systems. I have been developing such security systems for the mobile device ecosystem that serves billions of users, millions of apps, and hundreds of thousands of app developers. For an ecosystem of this magnitude, manual inspection or rule-based security systems are costly and error-prone. There is a strong need for intelligent security systems that can learn from experiences, solve problems, and use knowledge to adapt to new situations. However, achieving intelligence in security systems is challenging. In the cat-and-mouse game between security analysts and adversaries, the intelligence of adversaries also increases. In this never-ending game, the adversaries continuously evolve their attacks to be specifically adversarial to newly proposed intelligent security techniques. To address this challenge, I have been pursuing two lines of research: (1) enhancing intelligence of existing security systems to automate the security-decision making by techniques such as program analysis [11, 8, 10, 6, U6] , natural language processing (NLP) [9, 7, U7, 1] , and machine learning [8, 4, 3, 2] ; (2) guarding against emerging attacks specifically adversarial to these newly-proposed intelligent security techniques by developing corresponding defenses [13, U1, U2] and testing methodologies [12, 5] . Throughout these research efforts, my general research methodology is to extract insightful data for security systems (through program analysis and NLP techniques), to enable intelligent decision making in security systems (through machine learning techniques that learn from the extracted data), and to strengthen robustness of the security systems by generating adversarial-testing inputs to check these intelligent security techniques and building defense to prevent the adversarial attacks. With this methodology, my research has derived solutions that have high impact on real-world systems. For instance, my work on analysis and testing of mobile applications (apps) [11, 10] in collaboration with Tencent Ltd. has been deployed and adopted in daily testing of a mobile app named WeChat, a popular messenger app with over 900 million monthly active users. A number of tools grown out of my research have been adopted by companies such as Fujitsu [P1, P2, 13, 6] , Samsung [12, 5] , and IBM.",
"title": ""
},
{
"docid": "f86e3894a6c61c3734e1aabda3500ef0",
"text": "We perform sensitivity analyses on a mathematical model of malaria transmission to determine the relative importance of model parameters to disease transmission and prevalence. We compile two sets of baseline parameter values: one for areas of high transmission and one for low transmission. We compute sensitivity indices of the reproductive number (which measures initial disease transmission) and the endemic equilibrium point (which measures disease prevalence) to the parameters at the baseline values. We find that in areas of low transmission, the reproductive number and the equilibrium proportion of infectious humans are most sensitive to the mosquito biting rate. In areas of high transmission, the reproductive number is again most sensitive to the mosquito biting rate, but the equilibrium proportion of infectious humans is most sensitive to the human recovery rate. This suggests strategies that target the mosquito biting rate (such as the use of insecticide-treated bed nets and indoor residual spraying) and those that target the human recovery rate (such as the prompt diagnosis and treatment of infectious individuals) can be successful in controlling malaria.",
"title": ""
},
{
"docid": "d9a9339672121fb6c3baeb51f11bfcd8",
"text": "The VISION (video indexing for searching over networks) digital video library system has been developed in our laboratory as a testbed for evaluating automatic and comprehensive mechanisms for video archive creation and content-based search, ®ltering and retrieval of video over local and wide area networks. In order to provide access to video footage within seconds of broadcast, we have developed a new pipelined digital video processing architecture which is capable of digitizing, processing, indexing and compressing video in real time on an inexpensive general purpose computer. These videos were automatically partitioned into short scenes using video, audio and closed-caption information. The resulting scenes are indexed based on their captions and stored in a multimedia database. A clientserver-based graphical user interface was developed to enable users to remotely search this archive and view selected video segments over networks of dierent bandwidths. Additionally, VISION classi®es the incoming videos with respect to a taxonomy of categories and will selectively send users videos which match their individual pro®les. # 1999 Elsevier Science Ltd. All rights reserved.",
"title": ""
},
{
"docid": "f9f1cf949093c41a84f3af854a2c4a8b",
"text": "Modern TCP implementations are capable of very high point-to-point bandwidths. Delivered performance on the fastest networks is often limited by the sending and receiving hosts, rather than by the network hardware or the TCP protocol implementation itself. In this case, systems can achieve higher bandwidth by reducing host overheads through a variety of optimizations above and below the TCP protocol stack, given support from the network interface. This paper surveys the most important of these optimizations, and illustrates their effects quantitatively with empirical results from a an experimental network delivering up to two gigabits per second of point-to-point TCP bandwidth.",
"title": ""
},
{
"docid": "9a86609ecefc5780a49ca638be4de64c",
"text": "In this paper, we propose an end-to-end capsule network for pixel level localization of actors and actions present in a video. The localization is performed based on a natural language query through which an actor and action are specified. We propose to encode both the video as well as textual input in the form of capsules, which provide more effective representation in comparison with standard convolution based features. We introduce a novel capsule based attention mechanism for fusion of video and text capsules for text selected video segmentation. The attention mechanism is performed via joint EM routing over video and text capsules for text selected actor and action localization. The existing works on actor-action localization are mainly focused on localization in a single frame instead of the full video. Different from existing works, we propose to perform the localization on all frames of the video. To validate the potential of the proposed network for actor and action localization on all the frames of a video, we extend an existing actor-action dataset (A2D) with annotations for all the frames. The experimental evaluation demonstrates the effectiveness of the proposed capsule network for text selective actor and action localization in videos, and it also improves upon the performance of the existing state-of-the art works on single frame-based localization. Figure 1: Overview of the proposed approach. For a given video, we want to localize the actor and action which are described by an input textual query. Capsules are extracted from both the video and the textual query, and a joint EM routing algorithm creates high level capsules, which are further used for localization of selected actors and actions.",
"title": ""
},
{
"docid": "7082e7b9828c316b24f3113cb516a50d",
"text": "The analog voltage-controlled filter used in historical music synthesizers by Moog is modeled using a digital system, which is then compared in terms of audio measurements with the original analog filter. The analog model is mainly borrowed from D'Angelo's previous work. The digital implementation of the filter incorporates a recently proposed antialiasing method. This method enhances the clarity of output signals in the case of large-level input signals, which cause harmonic distortion. The combination of these two ideas leads to a novel digital model, which represents the state of the art in virtual analog musical filters. It is shown that without the antialiasing, the output signals in the nonlinear regime may be contaminated by undesirable spectral components, which are the consequence of aliasing, but that the antialiasing technique suppresses these components sufficiently. Comparison of measurements of the analog and digital filters show that the digital model is accurate within a few dB in the linear regime and has very similar behavior in the nonlinear regime in terms of distortion. The proposed digital filter model can be used as a building block in virtual analog music synthesizers.",
"title": ""
},
{
"docid": "7c17cb4da60caf8806027273c4c10708",
"text": "Recently, IEEE 802.11ax Task Group has adapted OFDMA as a new technique for enabling multi-user transmission. It has been also decided that the scheduling duration should be same for all the users in a multi-user OFDMA so that the transmission of the users should end at the same time. In order to realize that condition, the users with insufficient data should transmit null data (i.e. padding) to fill the duration. While this scheme offers strong features such as resilience to Overlapping Basic Service Set (OBSS) interference and ease of synchronization, it also poses major side issues of degraded throughput performance and waste of devices' energy. In this work, for OFDMA based 802.11 WLANs we first propose practical algorithm in which the scheduling duration is fixed and does not change from time to time. In the second algorithm the scheduling duration is dynamically determined in a resource allocation framework by taking into account the padding overhead, airtime fairness and energy consumption of the users. We analytically investigate our resource allocation problems through Lyapunov optimization techniques and show that our algorithms are arbitrarily close to the optimal performance at the price of reduced convergence rate. We also calculate the overhead of our algorithms in a realistic setup and propose solutions for the implementation issues.",
"title": ""
},
{
"docid": "8da9477e774902d4511d51a9ddb8b74b",
"text": "In modern system-on-chip architectures, specialized accelerators are increasingly used to improve performance and energy efficiency. The growing complexity of these systems requires the use of system-level design methodologies featuring high-level synthesis (HLS) for generating these components efficiently. Existing HLS tools, however, have limited support for the system-level optimization of memory elements, which typically occupy most of the accelerator area. We present a complete methodology for designing the private local memories (PLMs) of multiple accelerators. Based on the memory requirements of each accelerator, our methodology automatically determines an area-efficient architecture for the PLMs to guarantee performance and reduce the memory cost based on technology-related information. We implemented a prototype tool, called Mnemosyne, that embodies our methodology within a commercial HLS flow. We designed 13 complex accelerators for selected applications from two recently-released benchmark suites (Perfect and CortexSuite). With our approach we are able to reduce the memory cost of single accelerators by up to 45%. Moreover, when reusing memory IPs across accelerators, we achieve area savings that range between 17% and 55% compared to the case where the PLMs are designed separately.",
"title": ""
},
{
"docid": "52c0c6d1deacdca44df5000b2b437c78",
"text": "This paper adopts a Bayesian approach to simultaneously learn both an optimal nonlinear classifier and a subset of predictor variables (or features) that are most relevant to the classification task. The approach uses heavy-tailed priors to promote sparsity in the utilization of both basis functions and features; these priors act as regularizers for the likelihood function that rewards good classification on the training data. We derive an expectation- maximization (EM) algorithm to efficiently compute a maximum a posteriori (MAP) point estimate of the various parameters. The algorithm is an extension of recent state-of-the-art sparse Bayesian classifiers, which in turn can be seen as Bayesian counterparts of support vector machines. Experimental comparisons using kernel classifiers demonstrate both parsimonious feature selection and excellent classification accuracy on a range of synthetic and benchmark data sets.",
"title": ""
},
{
"docid": "883be979cd5e7d43ded67da1a40427ce",
"text": "This paper reviews the first challenge on single image super-resolution (restoration of rich details in an low resolution image) with focus on proposed solutions and results. A new DIVerse 2K resolution image dataset (DIV2K) was employed. The challenge had 6 competitions divided into 2 tracks with 3 magnification factors each. Track 1 employed the standard bicubic downscaling setup, while Track 2 had unknown downscaling operators (blur kernel and decimation) but learnable through low and high res train images. Each competition had ∽100 registered participants and 20 teams competed in the final testing phase. They gauge the state-of-the-art in single image super-resolution.",
"title": ""
},
{
"docid": "5588fd19a3d0d73598197ad465315fd6",
"text": "The growing need for Chinese natural language processing (NLP) is largely in a range of research and commercial applications. However, most of the currently Chinese NLP tools or components still have a wide range of issues need to be further improved and developed. FudanNLP is an open source toolkit for Chinese natural language processing (NLP), which uses statistics-based and rule-based methods to deal with Chinese NLP tasks, such as word segmentation, part-ofspeech tagging, named entity recognition, dependency parsing, time phrase recognition, anaphora resolution and so on.",
"title": ""
},
{
"docid": "48a8cfc2ac8c8c63bbd15aba5a830ef9",
"text": "We extend prior research on masquerade detection using UNIX commands issued by users as the audit source. Previous studies using multi-class training requires gathering data from multiple users to train specific profiles of self and non-self for each user. Oneclass training uses data representative of only one user. We apply one-class Naïve Bayes using both the multivariate Bernoulli model and the Multinomial model, and the one-class SVM algorithm. The result shows that oneclass training for this task works as well as multi-class training, with the great practical advantages of collecting much less data and more efficient training. One-class SVM using binary features performs best among the oneclass training algorithms.",
"title": ""
},
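A minimal sketch of the best-performing variant reported above (a one-class SVM over binary features) might look as follows; the command vocabulary, block construction, and SVM parameters are assumptions for illustration, not the study's setup:

```python
import numpy as np
from sklearn.svm import OneClassSVM

# Hypothetical vocabulary of UNIX commands and binary "did the user issue it in this
# block?" features; the real studies work on fixed-size blocks of audited commands.
vocab = ["ls", "cd", "cat", "grep", "gcc", "ssh", "scp", "rm"]

def featurize(blocks):
    return np.array([[1 if cmd in block else 0 for cmd in vocab] for block in blocks])

self_blocks = [["ls", "cd", "cat"], ["ls", "grep", "cat"], ["cd", "ls"], ["cat", "grep", "ls"]]
test_blocks = [["ls", "cd"], ["ssh", "scp", "rm"]]   # second block resembles a masquerader

clf = OneClassSVM(kernel="linear", nu=0.1).fit(featurize(self_blocks))
print(clf.predict(featurize(test_blocks)))           # +1 = consistent with self, -1 = anomaly
```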
{
"docid": "b1d00c44127956ab703204490de0acd7",
"text": "The key issue of few-shot learning is learning to generalize. This paper proposes a large margin principle to improve the generalization capacity of metric based methods for few-shot learning. To realize it, we develop a unified framework to learn a more discriminative metric space by augmenting the classification loss function with a large margin distance loss function for training. Extensive experiments on two state-of-the-art few-shot learning methods, graph neural networks and prototypical networks, show that our method can improve the performance of existing models substantially with very little computational overhead, demonstrating the effectiveness of the large margin principle and the potential of our method.",
"title": ""
},
{
"docid": "4a81bfdcd2c3d543d2cb182fef28da6c",
"text": "A novel printed compact wide-band planar antenna for mobile handsets is proposed and analyzed in this paper. The radiating patch of the proposed antenna is designed jointly with the shape of the ground plane. A prototype of the proposed antenna with 30 mm in height and 50 mm in width has been fabricated and tested. Its operating bandwidth with voltage standing wave ratio (VSWR) lower than 3:1 is 870-2450 MHz, which covers the global system for mobile communication (GSM, 890-960 MHz), the global positioning system (GPS, 1575.42 MHz), digital communication system (DCS, 1710-1880 MHz), personal communication system (PCS, 1850-1990 MHz), universal mobile telecommunication system (UMTS, 1920-2170 MHz), and wireless local area network (WLAN, 2400-2484 MHz) bands. Therefore, it could be applicable for the existing and future mobile communication systems. Design details and experimental results are also presented and discussed.",
"title": ""
},
{
"docid": "6625c08d03f755550f2a34086b4ae600",
"text": "The general requirement in the automotive radar application is to measure the target range R and radial velocity vr simultaneously and unambiguously with high accuracy and resolution even in multitarget situations, which is a matter of the appropriate waveform design. Based on a single continuous wave chirp transmit signal, target range R and radial velocity vr cannot be measured in an unambiguous way. Therefore a so-called multiple frequency shift keying (MFSK) transmit signal was developed, which is applied to measure target range and radial velocity separately and simultaneously. In this case the radar measurement is based on a frequency and additionally on a phase measurement, which suffers from a lower estimation accuracy compared with a pure frequency measurement. This MFSK waveform can therefore be improved and outperformed by a chirp sequences waveform. Each chirp signal has in this case very short time duration Tchirp. Therefore the measured beat frequency fB is dominated by target range R and is less influenced by the radial velocity vr. The range and radial velocity estimation is based on two separate frequency measurements with high accuracy in both cases. Classical chirp sequence waveforms suffer from possible ambiguities in the velocity measurement. It is the objective of this paper to modify the classical chirp sequence to get an unambiguous velocity measurement even in multitarget situations.",
"title": ""
},
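Classical chirp-sequence processing of the kind described above separates range and radial velocity with two FFTs: a fast-time FFT over each chirp's beat signal gives range, and a slow-time FFT across chirps gives Doppler. The toy sketch below injects a single synthetic target and recovers its range and Doppler bins; all numbers are illustrative only:

```python
import numpy as np

n_chirps, n_samples = 64, 128
range_bin, doppler_bin = 20, 5              # where the synthetic target should appear

t = np.arange(n_samples) / n_samples
# Beat signal per chirp m: a range-dependent tone plus a chirp-to-chirp Doppler phase.
beat = np.array([np.exp(2j * np.pi * (range_bin * t + doppler_bin * m / n_chirps))
                 for m in range(n_chirps)])                 # shape: (chirps, samples)

range_fft = np.fft.fft(beat, axis=1)                        # fast time  -> range
range_doppler = np.fft.fft(range_fft, axis=0)               # slow time  -> velocity

peak = np.unravel_index(np.argmax(np.abs(range_doppler)), range_doppler.shape)
print("doppler bin, range bin:", peak)                      # (5, 20): the injected target
```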
{
"docid": "a58d2058fd310ca553aee16a84006f96",
"text": "This systematic literature review describes the epidemiology of dengue disease in Mexico (2000-2011). The annual number of uncomplicated dengue cases reported increased from 1,714 in 2000 to 15,424 in 2011 (incidence rates of 1.72 and 14.12 per 100,000 population, respectively). Peaks were observed in 2002, 2007, and 2009. Coastal states were most affected by dengue disease. The age distribution pattern showed an increasing number of cases during childhood, a peak at 10-20 years, and a gradual decline during adulthood. All four dengue virus serotypes were detected. Although national surveillance is in place, there are knowledge gaps relating to asymptomatic cases, primary/secondary infections, and seroprevalence rates of infection in all age strata. Under-reporting of the clinical spectrum of the disease is also problematic. Dengue disease remains a serious public health problem in Mexico.",
"title": ""
},
{
"docid": "e9353d465c5dfd8af684d4e09407ea28",
"text": "An overview of the main contributions that introduced the use of nonresonating modes for the realization of pseudoelliptic narrowband waveguide filters is presented. The following are also highlighted: early work using asymmetric irises; oversized H-plane cavity; transverse magnetic cavity; TM dual-mode cavity; and multiple cavity filters.",
"title": ""
},
{
"docid": "48411ae0253630f6ac97be4b478a669f",
"text": "Recently, there has been increasing interest in low-cost, non-contact and pervasive methods for monitoring physiological information for the drivers. For the intelligent driver monitoring system there has been so many approaches like facial expression based method, driving behavior based method and physiological parameters based method. Physiological parameters such as, heart rate (HR), heart rate variability (HRV), respiration rate (RR) etc. are mainly used to monitor physical and mental state. Also, in recent decades, there has been increasing interest in low-cost, non-contact and pervasive methods for measuring physiological information. Monitoring physiological parameters based on camera images is such kind of expected methods that could offer a new paradigm for driver's health monitoring. In this paper, we review the latest developments in using camera images for non-contact physiological parameters that provides a resource for researchers and developers working in the area.",
"title": ""
}
] |
scidocsrr
|
447617c2bca7b7adc981fd69a451a183
|
Object-Centric Anomaly Detection by Attribute-Based Reasoning
|
[
{
"docid": "704d068f791a8911068671cb3dca7d55",
"text": "Most models of visual search, whether involving overt eye movements or covert shifts of attention, are based on the concept of a saliency map, that is, an explicit two-dimensional map that encodes the saliency or conspicuity of objects in the visual environment. Competition among neurons in this map gives rise to a single winning location that corresponds to the next attended target. Inhibiting this location automatically allows the system to attend to the next most salient location. We describe a detailed computer implementation of such a scheme, focusing on the problem of combining information across modalities, here orientation, intensity and color information, in a purely stimulus-driven manner. The model is applied to common psychophysical stimuli as well as to a very demanding visual search task. Its successful performance is used to address the extent to which the primate visual system carries out visual search via one or more such saliency maps and how this can be tested.",
"title": ""
}
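The saliency-map loop described above (normalize and combine feature maps, pick the winner-take-all location, suppress it, repeat) can be sketched compactly; the random feature maps below merely stand in for the orientation, intensity and color channels and are not the model's actual filters:

```python
import numpy as np

def normalize(m):
    m = m - m.min()
    return m / m.max() if m.max() > 0 else m

def attend(feature_maps, n_shifts=3, radius=5):
    saliency = sum(normalize(m) for m in feature_maps)        # combine modalities
    fixations = []
    for _ in range(n_shifts):
        y, x = np.unravel_index(np.argmax(saliency), saliency.shape)
        fixations.append((y, x))
        yy, xx = np.ogrid[:saliency.shape[0], :saliency.shape[1]]
        saliency[(yy - y) ** 2 + (xx - x) ** 2 <= radius ** 2] = 0   # inhibition of return
    return fixations

rng = np.random.default_rng(0)
maps = [rng.random((32, 32)) for _ in range(3)]   # placeholders for orientation/intensity/color
print(attend(maps))                                # sequence of attended locations
```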
] |
[
{
"docid": "9113e4ba998ec12dd2536073baf40610",
"text": "Fast adaptation of deep neural networks (DNN) is an important research topic in deep learning. In this paper, we have proposed a general adaptation scheme for DNN based on discriminant condition codes, which are directly fed to various layers of a pre-trained DNN through a new set of connection weights. Moreover, we present several training methods to learn connection weights from training data as well as the corresponding adaptation methods to learn new condition code from adaptation data for each new test condition. In this work, the fast adaptation scheme is applied to supervised speaker adaptation in speech recognition based on either frame-level cross-entropy or sequence-level maximum mutual information training criterion. We have proposed three different ways to apply this adaptation scheme based on the so-called speaker codes: i) Nonlinear feature normalization in feature space; ii) Direct model adaptation of DNN based on speaker codes; iii) Joint speaker adaptive training with speaker codes. We have evaluated the proposed adaptation methods in two standard speech recognition tasks, namely TIMIT phone recognition and large vocabulary speech recognition in the Switchboard task. Experimental results have shown that all three methods are quite effective to adapt large DNN models using only a small amount of adaptation data. For example, the Switchboard results have shown that the proposed speaker-code-based adaptation methods may achieve up to 8-10% relative error reduction using only a few dozens of adaptation utterances per speaker. Finally, we have achieved very good performance in Switchboard (12.1% in WER) after speaker adaptation using sequence training criterion, which is very close to the best performance reported in this task (\"Deep convolutional neural networks for LVCSR,\" T. N. Sainath et al., Proc. IEEE Acoust., Speech, Signal Process., 2013).",
"title": ""
},
{
"docid": "c07a0053f43d9e1f98bb15d4af92a659",
"text": "We present a zero-shot learning approach for text classification, predicting which natural language understanding domain can handle a given utterance. Our approach can predict domains at runtime that did not exist at training time. We achieve this extensibility by learning to project utterances and domains into the same embedding space while generating each domain-specific embedding from a set of attributes that characterize the domain. Our model is a neural network trained via ranking loss. We evaluate the performance of this zero-shot approach on a subset of a virtual assistant’s third-party domains and show the effectiveness of the technique on new domains not observed during training. We compare to generative baselines and show that our approach requires less storage and performs better on new domains.",
"title": ""
},
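As a hedged, minimal sketch of the shared-embedding idea in the record above (not the paper's actual network or ranking-loss training loop), utterances and attribute-derived domain descriptions can be projected into one space and ranked by cosine similarity; the plain projection matrices P_utt and P_dom below stand in for the learned encoders and are assumptions for the example.

```python
import numpy as np

def rank_domains(utterance_vec, domain_attr_vecs, P_utt, P_dom):
    """Project an utterance and each domain's attribute vector into a shared
    space and return domain indices sorted from best to worst match."""
    u = P_utt @ utterance_vec
    u = u / (np.linalg.norm(u) + 1e-12)
    scores = []
    for a in domain_attr_vecs:
        d = P_dom @ a
        d = d / (np.linalg.norm(d) + 1e-12)
        scores.append(float(u @ d))              # cosine similarity
    return np.argsort(scores)[::-1]
```

A domain unseen at training time is handled simply by appending its attribute vector to domain_attr_vecs, which is the extensibility property the record emphasises.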
{
"docid": "a4488fdd33bab600bd4de1f02e3a418e",
"text": "An antidote for reproductive learning is to engage learners in active manipulative), constructive, intentional, complex, authentic, cooperative (collaborative and conversational), and reflective learning activities. Those characteristics are the goal of constructivist learning environments (CLEs). This paper presents a model for designing CLEs, which surround a problem/project/issue/question (including problem context, problem representation space, and problem, manipulation space) with related cases (to supplant learners’ lack of experiences and convey complexity), information resources that support knowledge construction, cognitive tools, conversation and collaboration tools, and social-contextual support for implementation. These components are supported by instructional supports, including modeling, coaching, and scaffolding. This model is directly applicable to web-based learning. Examples of web-CLEs will be demonstrated in the presentation.",
"title": ""
},
{
"docid": "b81c7806be48b25497c84cd1e623f6fc",
"text": "Time-of-flight range sensors have error characteristics, which are complementary to passive stereo. They provide real-time depth estimates in conditions where passive stereo does not work well, such as on white walls. In contrast, these sensors are noisy and often perform poorly on the textured scenes where stereo excels. We explore their complementary characteristics and introduce a method for combining the results from both methods that achieve better accuracy than either alone. In our fusion framework, the depth probability distribution functions from each of these sensor modalities are formulated and optimized. Robust and adaptive fusion is built on a pixel-wise reliability weighting function calculated for each method. In addition, since time-of-flight devices have primarily been used as individual sensors, they are typically poorly calibrated. We introduce a method that substantially improves upon the manufacturer's calibration. We demonstrate that our proposed techniques lead to improved accuracy and robustness on an extensive set of experimental results.",
"title": ""
},
{
"docid": "6adfcf6aec7b33a82e3e5e606c93295d",
"text": "Cyber security is a serious global concern. The potential of cyber terrorism has posed a threat to national security; meanwhile the increasing prevalence of malware and incidents of cyber attacks hinder the utilization of the Internet to its greatest benefit and incur significant economic losses to individuals, enterprises, and public organizations. This paper presents some recent advances in intrusion detection, feature selection, and malware detection. In intrusion detection, stealthy and low profile attacks that include only few carefully crafted packets over an extended period of time to delude firewalls and the intrusion detection system (IDS) have been difficult to detect. In protection against malware (trojans, worms, viruses, etc.), how to detect polymorphic and metamorphic versions of recognized malware using static scanners is a great challenge. We present in this paper an agent based IDS architecture that is capable of detecting probe attacks at the originating host and denial of service (DoS) attacks at the boundary controllers. We investigate and compare the performance of different classifiers implemented for intrusion detection purposes. Further, we study the performance of the classifiers in real-time detection of probes and DoS attacks, with respect to intrusion data collected on a real operating network that includes a variety of simulated attacks. Feature selection is as important for IDS as it is for many other modeling problems. We present several techniques for feature selection and compare their performance in the IDS application. It is demonstrated that, with appropriately chosen features, both probes and DoS attacks can be detected in real time or near real time at the originating host or at the boundary controllers. We also briefly present some encouraging recent results in detecting polymorphic and metamorphic malware with advanced static, signature-based scanning techniques.",
"title": ""
},
{
"docid": "7f8ee14d2d185798c3864178bd450f3d",
"text": "In this paper, a new sensing device that can simultaneously monitor traffic congestion and urban flash floods is presented. This sensing device is based on the combination of passive infrared sensors (PIRs) and ultrasonic rangefinder, and is used for real-time vehicle detection, classification, and speed estimation in the context of wireless sensor networks. This framework relies on dynamic Bayesian Networks to fuse heterogeneous data both spatially and temporally for vehicle detection. To estimate the speed of the incoming vehicles, we first use cross correlation and wavelet transform-based methods to estimate the time delay between the signals of different sensors. We then propose a calibration and self-correction model based on Bayesian Networks to make a joint inference by all sensors about the speed and the length of the detected vehicle. Furthermore, we use the measurements of the ultrasonic and the PIR sensors to perform vehicle classification. Validation data (using an experimental dual infrared and ultrasonic traffic sensor) show a 99% accuracy in vehicle detection, a mean error of 5 kph in vehicle speed estimation, a mean error of 0.7m in vehicle length estimation, and a high accuracy in vehicle classification. Finally, we discuss the computational performance of the algorithm, and show that this framework can be implemented on low-power computational devices within a wireless sensor network setting. Such decentralized processing greatly improves the energy consumption of the system and minimizes bandwidth usage.",
"title": ""
},
{
"docid": "e281a8dc16b10dff80fad36d149a8a2f",
"text": "We present a tree router for multichip systems that guarantees deadlock-free multicast packet routing without dropping packets or restricting their length. Multicast routing is required to efficiently connect massively parallel systems' computational units when each unit is connected to thousands of others residing on multiple chips, which is the case in neuromorphic systems. Our tree router implements this one-to-many routing by branching recursively-broadcasting the packet within a specified subtree. Within this subtree, the packet is only accepted by chips that have been programmed to do so. This approach boosts throughput because memory look-ups are avoided enroute, and keeps the header compact because it only specifies the route to the subtree's root. Deadlock is avoided by routing in two phases-an upward phase and a downward phase-and by restricting branching to the downward phase. This design is the first fully implemented wormhole router with packet-branching that can never deadlock. The design's effectiveness is demonstrated in Neurogrid, a million-neuron neuromorphic system consisting of sixteen chips. Each chip has a 256 × 256 silicon-neuron array integrated with a full-custom asynchronous VLSI implementation of the router that delivers up to 1.17 G words/s across the sixteen-chip network with less than 1 μs jitter.",
"title": ""
},
{
"docid": "566a2b2ff835d10e0660fb89fd6ae618",
"text": "We argue that an understanding of the faculty of language requires substantial interdisciplinary cooperation. We suggest how current developments in linguistics can be profitably wedded to work in evolutionary biology, anthropology, psychology, and neuroscience. We submit that a distinction should be made between the faculty of language in the broad sense (FLB) and in the narrow sense (FLN). FLB includes a sensory-motor system, a conceptual-intentional system, and the computational mechanisms for recursion, providing the capacity to generate an infinite range of expressions from a finite set of elements. We hypothesize that FLN only includes recursion and is the only uniquely human component of the faculty of language. We further argue that FLN may have evolved for reasons other than language, hence comparative studies might look for evidence of such computations outside of the domain of communication (for example, number, navigation, and social relations).",
"title": ""
},
{
"docid": "ec641ace6df07156891f2bf40ea5d072",
"text": "This paper addresses deep face recognition (FR) problem under open-set protocol, where ideal face features are expected to have smaller maximal intra-class distance than minimal inter-class distance under a suitably chosen metric space. However, few existing algorithms can effectively achieve this criterion. To this end, we propose the angular softmax (A-Softmax) loss that enables convolutional neural networks (CNNs) to learn angularly discriminative features. Geometrically, A-Softmax loss can be viewed as imposing discriminative constraints on a hypersphere manifold, which intrinsically matches the prior that faces also lie on a manifold. Moreover, the size of angular margin can be quantitatively adjusted by a parameter m. We further derive specific m to approximate the ideal feature criterion. Extensive analysis and experiments on Labeled Face in the Wild (LFW), Youtube Faces (YTF) and MegaFace Challenge 1 show the superiority of A-Softmax loss in FR tasks.",
"title": ""
},
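The record above describes the A-Softmax loss only at a conceptual level; the sketch below illustrates the angular-margin idea (multiplying the target-class angle by m) in plain numpy. It omits the monotonic psi(theta) reformulation and the annealing used in the actual SphereFace training, so treat it as a reading aid under those simplifying assumptions rather than a faithful implementation.

```python
import numpy as np

def angular_margin_logits(features, class_weights, labels, m=4):
    """Angular-margin sketch: logits are ||x|| * cos(theta) for non-target
    classes and ||x|| * cos(m * theta) for the target class, where theta is
    the angle between the feature and the (normalised) class weight."""
    x_norm = np.linalg.norm(features, axis=1, keepdims=True)
    f = features / (x_norm + 1e-12)
    w = class_weights / (np.linalg.norm(class_weights, axis=1, keepdims=True) + 1e-12)
    cos = np.clip(f @ w.T, -1.0, 1.0)            # (N, C) cosine similarities
    logits = x_norm * cos
    rows = np.arange(len(labels))
    theta_y = np.arccos(cos[rows, labels])       # target-class angles
    logits[rows, labels] = x_norm[rows, 0] * np.cos(m * theta_y)
    return logits
```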
{
"docid": "33cce2750db6e1f680e8a6a2c89ad30a",
"text": "Present theories of visual recognition emphasize the role of interactive processing across populations of neurons within a given network, but the nature of these interactions remains unresolved. In particular, data describing the sufficiency of feedforward algorithms for conscious vision and studies revealing the functional relevance of feedback connections to the striate cortex seem to offer contradictory accounts of visual information processing. TMS is a good method to experimentally address this issue, given its excellent temporal resolution and its capacity to establish causal relations between brain function and behavior. We studied 20 healthy volunteers in a visual recognition task. Subjects were briefly presented with images of animals (birds or mammals) in natural scenes and were asked to indicate the animal category. MRI-guided stereotaxic single TMS pulses were used to transiently disrupt striate cortex function at different times after image onset (SOA). Visual recognition was significantly impaired when TMS was applied over the occipital pole at SOAs of 100 and 220 msec. The first interval has consistently been described in previous TMS studies and is explained as the interruption of the feedforward volley of activity. Given the late latency and discrete nature of the second peak, we hypothesize that it represents the disruption of a feedback projection to V1, probably from other areas in the visual network. These results provide causal evidence for the necessity of recurrent interactive processing, through feedforward and feedback connections, in visual recognition of natural complex images.",
"title": ""
},
{
"docid": "62309d3434c39ea5f9f901f8eb635539",
"text": "The flap design according Karaca et al., used during surgery for removal of impacted third molars prevents complications related to 2 molar periodontal status [125]. Suarez et al. believe that this design influences healing primary [122]. This prevents wound dehiscence and evaluated the suture technique to achieve this closure to Sanchis et al. [124], believe that primary closure avoids draining the socket and worse postoperative inflammation and pain, choose to place drains, obtaining a less postoperative painful [127].",
"title": ""
},
{
"docid": "cda5c6908b4f52728659f89bb082d030",
"text": "Until a few years ago the diagnosis of hair shaft disorders was based on light microscopy or scanning electron microscopy on plucked or cut samples of hair. Dermatoscopy is a new fast, noninvasive, and cost-efficient technique for easy in-office diagnosis of all hair shaft abnormalities including conditions such as pili trianguli and canaliculi that are not recognizable by examining hair shafts under the light microscope. It can also be used to identify disease limited to the eyebrows or eyelashes. Dermatoscopy allows for fast examination of the entire scalp and is very helpful to identify the affected hair shafts when the disease is focal.",
"title": ""
},
{
"docid": "dbf3650aadb4c18500ec3676d23dba99",
"text": "Current search engines do not, in general, perform well with longer, more verbose queries. One of the main issues in processing these queries is identifying the key concepts that will have the most impact on effectiveness. In this paper, we develop and evaluate a technique that uses query-dependent, corpus-dependent, and corpus-independent features for automatic extraction of key concepts from verbose queries. We show that our method achieves higher accuracy in the identification of key concepts than standard weighting methods such as inverse document frequency. Finally, we propose a probabilistic model for integrating the weighted key concepts identified by our method into a query, and demonstrate that this integration significantly improves retrieval effectiveness for a large set of natural language description queries derived from TREC topics on several newswire and web collections.",
"title": ""
},
{
"docid": "577f90976559e45c56bc4ca8004f990f",
"text": "In this paper, we address the problem of recognizing images with weakly annotated text tags. Most previous work either cannot be applied to the scenarios where the tags are loosely related to the images, or simply take a pre-fusion at the feature level or a post-fusion at the decision level to combine the visual and textual content. Instead, we first encode the text tags as the relations among the images, and then propose a semi-supervised relational topic model (ss-RTM) to explicitly model the image content and their relations. In such way, we can efficiently leverage the loosely related tags, and build an intermediate level representation for a collection of weakly annotated images. The intermediate level representation can be regarded as a mid-level fusion of the visual and textual content, which is able to explicitly model their intrinsic relationships. Moreover, image category labels are also modeled in the ss-RTM, and recognition can be conducted without training an additional discriminative classifier. Our extensive experiments on social multimedia datasets (images+tags) demonstrated the advantages of the proposed model.",
"title": ""
},
{
"docid": "c5efe5fe7c945e48f272496e7c92bb9c",
"text": "Knowing when a classifier’s prediction can be trusted is useful in many applications and critical for safely using AI. While the bulk of the effort in machine learning research has been towards improving classifier performance, understanding when a classifier’s predictions should and should not be trusted has received far less attention. The standard approach is to use the classifier’s discriminant or confidence score; however, we show there exists an alternative that is more effective in many situations. We propose a new score, called the trust score, which measures the agreement between the classifier and a modified nearest-neighbor classifier on the testing example. We show empirically that high (low) trust scores produce surprisingly high precision at identifying correctly (incorrectly) classified examples, consistently outperforming the classifier’s confidence score as well as many other baselines. Further, under some mild distributional assumptions, we show that if the trust score for an example is high (low), the classifier will likely agree (disagree) with the Bayes-optimal classifier. Our guarantees consist of non-asymptotic rates of statistical consistency under various nonparametric settings and build on recent developments in topological data analysis.",
"title": ""
},
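To make the trust-score idea in the record above concrete, here is a minimal numpy sketch of the core ratio (distance to the nearest training point of any other class divided by distance to the nearest point of the predicted class). The paper's density-based filtering of the training set is deliberately omitted, so this is a simplification under that assumption, not the full method.

```python
import numpy as np

def trust_score(x, train_X, train_y, predicted_label):
    """High score: x is much closer to its predicted class than to any other
    class, so the prediction agrees with local neighbourhood structure.
    Low score: some other class is closer, a warning sign."""
    d_per_class = {
        c: np.linalg.norm(train_X[train_y == c] - x, axis=1).min()
        for c in np.unique(train_y)
    }
    d_pred = d_per_class[predicted_label]
    d_other = min(v for c, v in d_per_class.items() if c != predicted_label)
    return d_other / (d_pred + 1e-12)
```

The sketch assumes train_y contains at least two classes and that predicted_label occurs among them.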
{
"docid": "44f2eaf0219f44a82a9967ec9a9d36cd",
"text": "Two measures of social function designed for community studies of normal aging and mild senile dementia were evaluated in 195 older adults who underwent neurological, cognitive, and affective assessment. An examining and a reviewing neurologist and a neurologically trained nurse independently rated each on a Scale of Functional Capacity. Interrater reliability was high (examining vs. reviewing neurologist, r = .97; examining neurologist vs. nurse, tau b = .802; p less than .001 for both comparisons). Estimates correlated well with an established measure of social function and with results of cognitive tests. Alternate informants evaluated participants on the Functional Activities Questionnaire and the Instrumental Activities of Daily Living Scale. The Functional Activities Questionnaire was superior to the Instrumental Activities of Daily scores. Used alone as a diagnostic tool, the Functional Activities Questionnaire was more sensitive than distinguishing between normal and demented individuals.",
"title": ""
},
{
"docid": "328052245c3a5144c492e761e7f51bae",
"text": "The screening of novel materials with good performance and the modelling of quantitative structureactivity relationships (QSARs), among other issues, are hot topics in the field of materials science. Traditional experiments and computational modelling often consume tremendous time and resources and are limited by their experimental conditions and theoretical foundations. Thus, it is imperative to develop a new method of accelerating the discovery and design process for novel materials. Recently, materials discovery and design using machine learning have been receiving increasing attention and have achieved great improvements in both time efficiency and prediction accuracy. In this review, we first outline the typical mode of and basic procedures for applying machine learning in materials science, and we classify and compare the main algorithms. Then, the current research status is reviewed with regard to applications of machine learning in material property prediction, in new materials discovery and for other purposes. Finally, we discuss problems related to machine learning in materials science, propose possible solutions, and forecast potential directions of future research. By directly combining computational studies with experiments, we hope to provide insight into the parameters that affect the properties of materials, thereby enabling more efficient and target-oriented research on materials dis-",
"title": ""
},
{
"docid": "f2fd1bee7b2770bbf808d8902f4964b4",
"text": "Antimicrobial and antiquorum sensing (AQS) activities of fourteen ethanolic extracts of different parts of eight plants were screened against four Gram-positive, five Gram-negative bacteria and four fungi. Depending on the plant part extract used and the test microorganism, variable activities were recorded at 3 mg per disc. Among the Grampositive bacteria tested, for example, activities of Laurus nobilis bark extract ranged between a 9.5 mm inhibition zone against Bacillus subtilis up to a 25 mm one against methicillin resistant Staphylococcus aureus. Staphylococcus aureus and Aspergillus fumigatus were the most susceptible among bacteria and fungi tested towards other plant parts. Of interest is the tangible antifungal activity of a Tecoma capensis flower extract, which is reported for the first time. However, minimum inhibitory concentrations (MIC's) for both bacteria and fungi were relatively high (0.5-3.0 mg). As for antiquorum sensing activity against Chromobacterium violaceum, superior activity (>17 mm QS inhibition) was associated with Sonchus oleraceus and Laurus nobilis extracts and weak to good activity (8-17 mm) was recorded for other plants. In conclusion, results indicate the potential of these plant extracts in treating microbial infections through cell growth inhibition or quorum sensing antagonism, which is reported for the first time, thus validating their medicinal use.",
"title": ""
},
{
"docid": "c460da4083842102fcf2a59ef73702a1",
"text": "I describe two aspects of metacognition, knowledge of cognition and regulation of cognition, and how they are related to domain-specific knowledge and cognitive abilities. I argue that metacognitive knowledge is multidimensional, domain-general in nature, and teachable. Four instructional strategies are described for promoting the construction and acquisition of metacognitive awareness. These include promoting general awareness, improving selfknowledge and regulatory skills, and promoting learning environments that are conducive to the construction and use of metacognition. This paper makes three proposals: (a) metacognition is a multidimensional phenomenon, (b) it is domain-general in nature, and (c) metacognitive knowledge and regulation can be improved using a variety of instructional strategies. Let me acknowledge at the beginning that each of these proposals is somewhat speculative. While there is a limited amount of research that supports them, more research is needed to clarify them. Each one of these proposals is addressed in a separate section of the paper. The first makes a distinction between knowledge of cognition and regulation of cognition. The second summarizes some of the recent research examining the relationship of metacognition to expertise and cognitive abilities. The third section describes four general instructional strategies for improving metacognition. These include fostering construction of new knowledge, explicating conditional knowledge, automatizing a monitoring heuristic, and creating a supportive motivational environment in the classroom. I conclude with a few thoughts about general cognitive skills instruction. A framework for understanding metacognition Researchers have been studying metacognition for over twenty years. Most agree that cognition and metacognition differ in that cognitive skills are necessary to perform a task, while metacognition is necessary to understand how the task was performed (Garner, 1987). Most researchers also make a VICTORY: PIPS No.: 136750 LAWKAP truchh7.tex; 9/12/1997; 18:12; v.6; p.1",
"title": ""
},
{
"docid": "2fc05946c4e17c0ca199cc8896e38362",
"text": "Hierarchical multilabel classification allows a sample to belong to multiple class labels residing on a hierarchy, which can be a tree or directed acyclic graph (DAG). However, popular hierarchical loss functions, such as the H-loss, can only be defined on tree hierarchies (but not on DAGs), and may also under- or over-penalize misclassifications near the bottom of the hierarchy. Besides, it has been relatively unexplored on how to make use of the loss functions in hierarchical multilabel classification. To overcome these deficiencies, we first propose hierarchical extensions of the Hamming loss and ranking loss which take the mistake at every node of the label hierarchy into consideration. Then, we first train a general learning model, which is independent of the loss function. Next, using Bayesian decision theory, we develop Bayes-optimal predictions that minimize the corresponding risks with the trained model. Computationally, instead of requiring an exhaustive summation and search for the optimal multilabel, the resultant optimization problem can be efficiently solved by a greedy algorithm. Experimental results on a number of real-world data sets show that the proposed Bayes-optimal classifier outperforms state-of-the-art methods.",
"title": ""
}
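The record above only names its hierarchical extension of the Hamming loss. One plausible minimal reading, shown below purely for orientation, is to expand each label set with all of its ancestors and count the symmetric difference, so that a mistake is charged at every node of the hierarchy it touches; the per-node weighting actually used in the paper may differ, so the form here is an assumption.

```python
def hierarchical_hamming_loss(true_labels, pred_labels, parent):
    """Expand each label set upward through the hierarchy (parent maps a
    node to its parent, roots map to None) and count the nodes on which the
    expanded sets disagree."""
    def with_ancestors(labels):
        closed = set()
        for node in labels:
            while node is not None and node not in closed:
                closed.add(node)
                node = parent.get(node)
        return closed
    return len(with_ancestors(true_labels) ^ with_ancestors(pred_labels))

# Example on a tiny tree: root -> {a, b}, a -> {a1}.
parent = {"root": None, "a": "root", "b": "root", "a1": "a"}
print(hierarchical_hamming_loss({"a1"}, {"b"}, parent))   # 3 nodes disagree: a1, a, b
```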
] |
scidocsrr
|
0c3fee204cdc22086d082e496f27cff6
|
Learning inter-related visual dictionary for object recognition
|
[
{
"docid": "7eec1e737523dc3b78de135fc71b058f",
"text": "Discriminative learning is challenging when examples are sets of features, and the sets vary in cardinality and lack any sort of meaningful ordering. Kernel-based classification methods can learn complex decision boundaries, but a kernel over unordered set inputs must somehow solve for correspondences epsivnerally a computationally expensive task that becomes impractical for large set sizes. We present a new fast kernel function which maps unordered feature sets to multi-resolution histograms and computes a weighted histogram intersection in this space. This \"pyramid match\" computation is linear in the number of features, and it implicitly finds correspondences based on the finest resolution histogram cell where a matched pair first appears. Since the kernel does not penalize the presence of extra features, it is robust to clutter. We show the kernel function is positive-definite, making it valid for use in learning algorithms whose optimal solutions are guaranteed only for Mercer kernels. We demonstrate our algorithm on object recognition tasks and show it to be accurate and dramatically faster than current approaches",
"title": ""
}
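The record above gives the intuition behind the pyramid match; the one-dimensional toy below shows the mechanics (multi-resolution histogram intersection, with the new matches found at level i weighted by 1/2^i). The real kernel operates on multi-dimensional feature sets and carries a normalisation that is skipped here, so read it as an illustrative sketch under those assumptions.

```python
import numpy as np

def pyramid_match(x, y, levels=4, lo=0.0, hi=1.0):
    """1-D pyramid match sketch for two sets of scalar features in [lo, hi]:
    histogram both sets at progressively coarser resolutions, count the
    histogram-intersection matches, and weight only the matches that are
    *new* at each level by 1 / 2**i (i = 0 is the finest level)."""
    score, prev_matches = 0.0, 0.0
    for i in range(levels):
        bins = 2 ** (levels - i)                      # halve resolution each level
        hx, _ = np.histogram(x, bins=bins, range=(lo, hi))
        hy, _ = np.histogram(y, bins=bins, range=(lo, hi))
        matches = np.minimum(hx, hy).sum()            # histogram intersection
        score += (matches - prev_matches) / (2 ** i)  # weight new matches only
        prev_matches = matches
    return score
```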
] |
[
{
"docid": "fac476744429cacfe1c07ec19ee295eb",
"text": "One effort to protect the network from the threats of hackers, crackers and security experts is to build the Intrusion Detection System (IDS) on the network. The problem arises when new attacks emerge in a relatively fast, so a network administrator must create their own signature and keep updated on new types of attacks that appear. In this paper, it will be made an Intelligence Intrusion Detection System (IIDS) where the Hierarchical Clustering algorithm as an artificial intelligence is used as pattern recognition and implemented on the Snort IDS. Hierarchical clustering applied to the training data to determine the number of desired clusters. Labeling cluster is then performed; there are three labels of cluster, namely Normal, High Risk and Critical. Centroid Linkage Method used for the test data of new attacks. Output system is used to update the Snort rule database. This research is expected to help the Network Administrator to monitor and learn some new types of attacks. From the result, this system is already quite good to recognize certain types of attacks like exploit, buffer overflow, DoS and IP Spoofing. Accuracy performance of this system for the mentioned above type of attacks above is 90%.",
"title": ""
},
{
"docid": "a5428992001b7b4ed8d983d27df64dcf",
"text": "Travel websites and online booking platforms represent today’s major sources for customers when gathering information before a trip. In particular, community-provided customer reviews and ratings of various tourism services represent a valuable source of information for trip planning. With respect to customer ratings, many modern travel and tourism platforms – in contrast to several other e-commerce domains – allow customers to rate objects along multiple dimensions and thus to provide more fine-granular post-trip feedback on the booked accommodation or travel package. In this paper, we first show how this multi-criteria rating information can help to obtain a better understanding of factors driving customer satisfaction for different segments. For this purpose, we performed a Penalty-Reward Contrast analysis on a data set from a major tourism platform, which reveals that customer segments significantly differ in the way the formation of overall satisfaction can be explained. Beyond the pure identification of segment-specific satisfaction factors, we furthermore show how this fine-granular rating information can be exploited to improve the accuracy of rating-based recommender systems. In particular, we propose to utilize userand object-specific factor relevance weights which can be learned through linear regression. An empirical evaluation on datasets from different domains finally shows that our method helps us to predict the customer preferences more accurately and thus to develop better online recommendation services.",
"title": ""
},
{
"docid": "1380438b5c7739a77644520ebc744002",
"text": "The present work proposes a review and comparison of different Kernel functionals and neighborhood geometry for Nonlocal Means (NLM) in the task of digital image filtering. Some different alternatives to change the classical exponential kernel function used in NLM methods are explored. Moreover, some approaches that change the geometry of the neighborhood and use dimensionality reduction of the neighborhood or patches onto principal component analysis (PCA) are also analyzed, and their performance is compared with respect to the classic NLM method. Mainly, six approaches were compared using quantitative and qualitative evaluations, to do this an homogeneous framework has been established using the same simulation platform, the same computer, and same conditions for the initializing parameters. According to the obtained comparison, one can say that the NLM filtering could be improved when changing the kernel, particularly for the case of the Tukey kernel. On the other hand, the excellent performance given by recent hybrid approaches such as NLM SAP, NLM PCA (PH), and the BM3D SAPCA lead to establish that significantly improvements to the classic NLM could be obtained. Particularly, the BM3D SAPCA approach gives the best denoising results, however, the computation times were the longest.",
"title": ""
},
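To make the kernel comparison in the record above more tangible, the sketch below computes unnormalised NLM weights from squared patch distances with either the classical exponential kernel or a Tukey-style biweight. The abstract does not give the exact Tukey form used in the paper, so the variant shown here is an assumption chosen only to illustrate the compact-support behaviour.

```python
import numpy as np

def nlm_weights(patch_dist2, h, kernel="exponential"):
    """Unnormalised NLM weights from squared patch distances patch_dist2 and
    bandwidth h. 'exponential' is the classic NLM kernel; 'tukey' is a
    compact-support biweight that cuts influence off beyond distance h."""
    if kernel == "exponential":
        return np.exp(-patch_dist2 / (h ** 2))
    if kernel == "tukey":
        r = np.sqrt(patch_dist2) / h
        return np.where(r < 1.0, (1.0 - r ** 2) ** 2, 0.0)
    raise ValueError(f"unknown kernel: {kernel}")
```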
{
"docid": "dc766a7a35720b3337ed7006bc510c49",
"text": "This chapter presents the realization of second order active low pass, high pass and band pass filter using fully differential difference amplifier (FDDA). The fully differential difference amplifier is a balanced output differential difference amplifier. It provides low output distortion and high output voltage swing as compared to the DDA. The filters realized with FDDA possess attractive features that do not exist in both traditional (discrete) and modern fully integrated Op-amp circuits. However the frequency range of operation of FDDA is same as that made of the DDA or Op-Amp. The proposed filters possess orthogonality between the cutoff/central frequency and the quality factor. In view of the orthogonality property, the proposed circuits have wide applications in the instrumentation, control systems and signal processing. All the filter realizations have low sensitivity to parameter variations. The first two sections of this chapter present the implementation of DDA and FDDA. Subsequent sections are devoted to the proposed realization filters and main results. The Differential Difference Amplifier (DDA) is an emerging CMOS analog building block. It has been used in a number of applications such as instrumentation amplifier, continuous time filter, implementation of fully differential switch MOS capacitor circuit, common mode detection circuit, telephone line adaption circuit, biomedical applications, floating resistors, sample and hold circuits and MEMs. [61-89]. Differential Difference Amplifier is an extension of the conventional operational amplifier. An operational amplifier employs only one differential input, whereas two differential inputs in DDA. The schematic diagram of DDA is shown in Figure 3.1. It is a five terminal device with four input terminals named as V pp , V pn , V np , and V nn and the output terminal denoted by V 0 .",
"title": ""
},
{
"docid": "67cab04fa1850d0b419710bf3af9c4ee",
"text": "The Alcohol Use Disorders Identification Test (AUDIT) has been developed from a six-country WHO collaborative project as a screening instrument for hazardous and harmful alcohol consumption. It is a 10-item questionnaire which covers the domains of alcohol consumption, drinking behaviour, and alcohol-related problems. Questions were selected from a 150-item assessment schedule (which was administered to 1888 persons attending representative primary health care facilities) on the basis of their representativeness for these conceptual domains and their perceived usefulness for intervention. Responses to each question are scored from 0 to 4, giving a maximum possible score of 40. Among those diagnosed as having hazardous or harmful alcohol use, 92% had an AUDIT score of 8 or more, and 94% of those with non-hazardous consumption had a score of less than 8. AUDIT provides a simple method of early detection of hazardous and harmful alcohol use in primary health care settings and is the first instrument of its type to be derived on the basis of a cross-national study.",
"title": ""
},
{
"docid": "db95a67e1c532badd3ec97a31170bb0c",
"text": "The named entity recognition task aims at identifying and classifying named entities within an open-domain text. This task has been garnering significant attention recently as it has been shown to help improve the performance of many natural language processing applications. In this paper, we investigate the impact of using different sets of features in three discriminative machine learning frameworks, namely, support vector machines, maximum entropy and conditional random fields for the task of named entity recognition. Our language of interest is Arabic. We explore lexical, contextual and morphological features and nine data-sets of different genres and annotations. We measure the impact of the different features in isolation and incrementally combine them in order to evaluate the robustness to noise of each approach. We achieve the highest performance using a combination of 15 features in conditional random fields using broadcast news data (Fbeta = 1=83.34).",
"title": ""
},
{
"docid": "1d7d3a52e059a256434556c405c0e1fa",
"text": "Page segmentation is still a challenging problem due to the large variety of document layouts. Methods examining both foreground and background regions are among the most effective to solve this problem. However, their performance is influenced by the implementation of two key steps: the extraction and selection of background regions, and the grouping of background regions into separators. This paper proposes an efficient hybrid method for page segmentation. The method extracts white space rectangles based on connected component analysis, and filters white space rectangles progressively incorporating foreground and background information such that the remaining rectangles are likely to form column separators. Experimental results on the ICDAR2009 page segmentation competition test set demonstrate the effectiveness and superiority of the proposed method.",
"title": ""
},
{
"docid": "a7f1565d548359c9f19bed304c2fbba6",
"text": "Handwritten character generation is a popular research topic with various applications. Various methods have been proposed in the literatures which are based on methods such as pattern recognition, machine learning, deep learning or others. However, seldom method could generate realistic and natural handwritten characters with a built-in determination mechanism to enhance the quality of generated image and make the observers unable to tell whether they are written by a person. To address these problems, in this paper, we proposed a novel generative adversarial network, multi-scale multi-class generative adversarial network (MSMC-CGAN). It is a neural network based on conditional generative adversarial network (CGAN), and it is designed for realistic multi-scale character generation. MSMC-CGAN combines the global and partial image information as condition, and the condition can also help us to generate multi-class handwritten characters. Our model is designed with unique neural network structures, image features and training method. To validate the performance of our model, we utilized it in Chinese handwriting generation, and an evaluation method called mean opinion score (MOS) was used. The MOS results show that MSMC-CGAN achieved good performance.",
"title": ""
},
{
"docid": "8e50613e8aab66987d650cd8763811e5",
"text": "Along with the great increase of internet and e-commerce, the use of credit card is an unavoidable one. Due to the increase of credit card usage, the frauds associated with this have also increased. There are a lot of approaches used to detect the frauds. In this paper, behavior based classification approach using Support Vector Machines are employed and efficient feature extraction method also adopted. If any discrepancies occur in the behaviors transaction pattern then it is predicted as suspicious and taken for further consideration to find the frauds. Generally credit card fraud detection problem suffers from a large amount of data, which is rectified by the proposed method. Achieving finest accuracy, high fraud catching rate and low false alarms are the main tasks of this approach.",
"title": ""
},
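The record above does not spell out its behavioral features or SVM configuration, so the snippet below is only a generic scikit-learn sketch of the classification step; the made-up features, the RBF kernel and the balanced class weights are all assumptions for illustration, not the paper's pipeline.

```python
import numpy as np
from sklearn.svm import SVC

# Made-up behavioural features per transaction: e.g. amount, hour of day,
# merchant category, deviation from the card-holder's usual spend.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 4))
y = (rng.random(500) < 0.05).astype(int)          # ~5% labelled as fraud

clf = SVC(kernel="rbf", class_weight="balanced")  # balance the rare fraud class
clf.fit(X, y)
suspicious = clf.predict(X[:10])                  # flag transactions for review
```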
{
"docid": "a1d1d61c61d1941329cdbc38639bd487",
"text": "America’s critical infrastructure is becoming “smarter” and increasingly dependent on highly specialized computers called industrial control systems (ICS). Networked ICS components now called the industrial Internet of Things (IIoT) are at the heart of the “smart city”, controlling critical infrastructure, such as CCTV security networks, electric grids, water networks, and transportation systems. Without the continuous, reliable functioning of these assets, economic and social disruption will ensue. Unfortunately, IIoT are hackable and difficult to secure from cyberattacks. This leaves our future smart cities in a state of perpetual uncertainty and the risk that the stability of our lives will be upended. The Local government has largely been absent from conversations about cybersecurity of critical infrastructure, despite its importance. One reason for this is public administrators do not have a good way of knowing which assets and which components of those assets are at the greatest risk. This is further complicated by the highly technical nature of the tools and techniques required to assess these risks. Using artificial intelligence planning techniques, an automated tool can be developed to evaluate the cyber risks to critical infrastructure. It can be used to automatically identify the adversarial strategies (attack trees) that can compromise these systems. This tool can enable both security novices and specialists to identify attack pathways. We propose and provide an example of an automated attack generation method that can produce detailed, scalable, and consistent attack trees–the first step in securing critical infrastructure from cyberattack.",
"title": ""
},
{
"docid": "85fdbd9d470d54196782a5d40abd2740",
"text": "The purpose of this study was to investigate the morphology of the superficial musculoaponeurotic system (SMAS). Eight embalmed cadavers were analyzed: one side of the face was macroscopically dissected; on the other side, full-thickness samples of the parotid, zygomatic, nasolabial fold and buccal regions were taken. In all specimens, a laminar connective tissue layer (SMAS) bounding two different fibroadipose connective layers was identified. The superficial fibroadipose layer presented vertically oriented fibrous septa, connecting the dermis with the superficial aspect of the SMAS. In the deep fibroadipose connective layer, the fibrous septa were obliquely oriented, connecting the deep aspect of the SMAS to the parotid-masseteric fascia. This basic arrangement shows progressive thinning of the SMAS from the preauricular district to the nasolabial fold (p < 0.05). In the parotid region, the mean thicknesses of the superficial and deep fibroadipose connective tissues were 1.63 and 0.8 mm, respectively, whereas in the region of the nasolabial fold the superficial layer is not recognizable and the mean thickness of the deep fibroadipose connective layer was 2.9 mm. The connective subcutaneous tissue of the face forms a three-dimensional network connecting the SMAS to the dermis and deep muscles. These connective laminae connect adipose lobules of various sizes within the superficial and deep fibroadipose tissues, creating a three-dimensional network which modulates transmission of muscle contractions to the skin. Changes in the quantitative and qualitative characteristics of the fibroadipose connective system, reducing its viscoelastic properties, may contribute to ptosis of facial soft tissues during aging.",
"title": ""
},
{
"docid": "f7ed4fb9015dad13d47dec677c469c4b",
"text": "In this paper, a low-cost, power efficient and fast Differential Cascode Voltage-Switch-Logic (DCVSL) based delay cell (named DCVSL-R) is proposed. We use the DCVSL-R cell to implement high frequency and power-critical delay cells and flip-flops of ring oscillators and frequency dividers. When compared to TSPC, DCVSL circuits offer small input and clock capacitance and a symmetric differential loading for previous RF stages. When compared to CML, they offer low transistor count, no headroom limitation, rail-to-rail swing and no static current consumption. However, DCVSL circuits suffer from a large low-to-high propagation delay, which limits their speed and results in asymmetrical output waveforms. The proposed DCVSL-R circuit embodies the benefits of DCVSL while reducing the total propagation delay, achieving faster operation. DCVSL-R also generates symmetrical output waveforms which are critical for differential circuits. Another contribution of this work is a closed-form delay model that predicts the speed of DCVSL circuits with 8% worst case accuracy. We implement two ring-oscillator-based VCOs in 0.13 μm technology with DCVSL and DCVSL-R delay cells. Measurements show that the proposed DCVSL-R based VCO consumes 30% less power than the DCVSL VCO for the same oscillation frequency (2.4 GHz) and same phase noise (-113 dBc/Hz at 10 MHz). DCVSL-R circuits are also used to implement the high frequency dual modulus prescaler (DMP) of a 2.4 GHz frequency synthesizer in 0.18 μm technology. The DMP consumes only 0.8 mW at 2.48 GHz, a 40% reduction in power when compared to other reported DMPs with similar division ratios and operating frequencies. The RF buffer that drives the DMP consumes only 0.27 mW, demonstrating the lowest combined DMP and buffer power consumption among similar synthesizers in literature.",
"title": ""
},
{
"docid": "d2f71960706eabfa2a4800f7ccb5d7b2",
"text": "Mixed land use refers to the effort of putting residential, commercial and recreational uses in close proximity to one another. This can contribute economic benefits, support viable public transit, and enhance the perceived security of an area. It is naturally promising to investigate how to rank real estate from the viewpoint of diverse mixed land use, which can be reflected by the portfolio of community functions in the observed area. To that end, in this paper, we develop a geographical function ranking method, named FuncDivRank, by incorporating the functional diversity of communities into real estate appraisal. Specifically, we first design a geographic function learning model to jointly capture the correlations among estate neighborhoods, urban functions, temporal effects, and user mobility patterns. In this way we can learn latent community functions and the corresponding portfolios of estates from human mobility data and Point of Interest (POI) data. Then, we learn the estate ranking indicator by simultaneously maximizing ranking consistency and functional diversity, in a unified probabilistic optimization framework. Finally, we conduct a comprehensive evaluation with real-world data. The experimental results demonstrate the enhanced performance of the proposed method for real estate appraisal.",
"title": ""
},
{
"docid": "0aca6e378ed309dd9b72228e3ce8228d",
"text": "BACKGROUND\nThe objective was to determine the test-retest reliability and criterion validity of the Physical Activity Scale for Individuals with Physical Disabilities (PASIPD).\n\n\nMETHODS\nForty-five non-wheelchair dependent subjects were recruited from three Dutch rehabilitation centers. Subjects' diagnoses were: stroke, spinal cord injury, whiplash, and neurological-, orthopedic- or back disorders. The PASIPD is a 7-d recall physical activity questionnaire that was completed twice, 1 wk apart. During this week, physical activity was also measured with an Actigraph accelerometer.\n\n\nRESULTS\nThe test-retest reliability Spearman correlation of the PASIPD was 0.77. The criterion validity Spearman correlation was 0.30 when compared to the accelerometer.\n\n\nCONCLUSIONS\nThe PASIPD had test-retest reliability and criterion validity that is comparable to well established self-report physical activity questionnaires from the general population.",
"title": ""
},
{
"docid": "637b6abdadd3653e95a127f48dc991db",
"text": "State-of-the-art models for joint entity recognition and relation extraction strongly rely on external natural language processing (NLP) tools such as POS (part-of-speech) taggers and dependency parsers. Thus, the performance of such joint models depends on the quality of the features obtained from these NLP tools. However, these features are not always accurate for various languages and contexts. In this paper, we propose a joint neural model which performs entity recognition and relation extraction simultaneously, without the need of any manually extracted features or the use of any external tool. Specifically, we model the entity recognition task using a CRF (Conditional Random Fields) layer and the relation extraction task as a multi-head selection problem (i.e., potentially identify multiple relations for each entity). We present an extensive experimental setup, to demonstrate the effectiveness of our method using datasets from various contexts (i.e., news, biomedical, real estate) and languages (i.e., English, Dutch). Our model outperforms the previous neural models that use automatically extracted features, while it performs within a reasonable margin of feature-based neural models, or even beats them.",
"title": ""
},
{
"docid": "4ee8f88b3587cd81c55dc4f676c5ed06",
"text": "This article discusses aspects of the new thinking founded on a risk-based ISO 9001:2015. It is noted that risk-based thinking is an effective tool for creating, auditing and improving quality management systems. Shows several practical examples of the implementation of the risk management process for the phase “Plan” PDCA-cycle, proposed a new model of the ISM, which contains all the basic entity for performance audits (criteria, object, audit observation) and allows to generate the level of security assessment. Additionally an approach is presented to determine the characteristics of technological devices for the healthcare industry.",
"title": ""
},
{
"docid": "653bdddafdb40af00d5d838b1a395351",
"text": "Advances in electronic location technology and the coming of age of mobile computing have opened the door for location-aware applications to permeate all aspects of everyday life. Location is at the core of a large number of high-value applications ranging from the life-and-death context of emergency response to serendipitous social meet-ups. For example, the market for GPS products and services alone is expected to grow to US$200 billion by 2015. Unfortunately, there is no single location technology that is good for every situation and exhibits high accuracy, low cost, and universal coverage. In fact, high accuracy and good coverage seldom coexist, and when they do, it comes at an extreme cost. Instead, the modern localization landscape is a kaleidoscope of location systems based on a multitude of different technologies including satellite, mobile telephony, 802.11, ultrasound, and infrared among others. This lecture introduces researchers and developers to the most popular technologies and systems for location estimation and the challenges and opportunities that accompany their use. For each technology, we discuss the history of its development, the various systems that are based on it, and their trade-offs and their effects on cost and performance. We also describe technology-independent algorithms that are commonly used to smooth streams of location estimates and improve the accuracy of object tracking. Finally, we provide an overview of the wide variety of application domains where location plays a key role, and discuss opportunities and new technologies on the horizon. KEyWoRDS localization, location systems, location tracking, context awareness, navigation, location sensing, tracking, Global Positioning System, GPS, infrared location, ultrasonic location, 802.11 location, cellular location, Bayesian filters, RFID, RSSI, triangulation",
"title": ""
},
{
"docid": "1e638842d245472a0d8365b7da27b20a",
"text": "How similar are the experiences of social rejection and physical pain? Extant research suggests that a network of brain regions that support the affective but not the sensory components of physical pain underlie both experiences. Here we demonstrate that when rejection is powerfully elicited--by having people who recently experienced an unwanted break-up view a photograph of their ex-partner as they think about being rejected--areas that support the sensory components of physical pain (secondary somatosensory cortex; dorsal posterior insula) become active. We demonstrate the overlap between social rejection and physical pain in these areas by comparing both conditions in the same individuals using functional MRI. We further demonstrate the specificity of the secondary somatosensory cortex and dorsal posterior insula activity to physical pain by comparing activated locations in our study with a database of over 500 published studies. Activation in these regions was highly diagnostic of physical pain, with positive predictive values up to 88%. These results give new meaning to the idea that rejection \"hurts.\" They demonstrate that rejection and physical pain are similar not only in that they are both distressing--they share a common somatosensory representation as well.",
"title": ""
},
{
"docid": "5157063545b7ec7193126951c3bdb850",
"text": "This paper presents an integrated system for navigation parameter estimation using sequential aerial images, where navigation parameters represent the position and velocity information of an aircraft for autonomous navigation. The proposed integrated system is composed of two parts: relative position estimation and absolute position estimation. Relative position estimation recursively computes the current position of an aircraft by accumulating relative displacement estimates extracted from two successive aerial images. Simple accumulation of parameter values decreases the reliability of the extracted parameter estimates as an aircraft goes on navigating, resulting in a large position error. Therefore, absolute position estimation is required to compensate for the position error generated in relative position estimation. Absolute position estimation algorithms by image matching and digital elevation model (DEM) matching are presented. In image matching, a robust-oriented Hausdorff measure (ROHM) is employed, whereas in DEM matching the algorithm using multiple image pairs is used. Experiments with four real aerial image sequences show the effectiveness of the proposed integrated position estimation algorithm.",
"title": ""
}
] |
scidocsrr
|
b9aee2680bd50d68584e7c8d72882e24
|
A Survey on Sentiment Analysis and Opinion Mining Techniques
|
[
{
"docid": "faf000b318151222807ac69f2a557afd",
"text": "Sentiment analysis or opinion mining is the computational study of people’s opinions, appraisals, and emotions toward entities, events and their attributes. In the past few years, it attracted a great deal of attentions from both academia and industry due to many challenging research problems and a wide range of applications [1]. Opinions are important because whenever we need to make a decision we want to hear others’ opinions. This is not only true for individuals but also true for organizations. However, there was almost no computational study on opinions before the Web because there was little opinionated text available. In the past, when an individual needed to make a decision, he/she typically asked for opinions from friends and families. When an organization wanted to find opinions of the general public about its products and services, it conducted surveys and focus groups. However, with the explosive growth of the social media content on the Web in the past few years, the world has been transformed. People can now post reviews of products at merchant sites and express their views on almost anything in discussion forums and blogs, and at social network sites. Now if one wants to buy a product, one is no longer limited to asking one’s friends and families because there are many user reviews on the Web. For a company, it may no longer need to conduct surveys or focus groups in order to gather consumer opinions about its products and those of its competitors because there is a plenty of such information publicly available.",
"title": ""
},
{
"docid": "45eedd6e20f6b4c8e54a6da10fe1ed07",
"text": "Sentiment Analysis (SA) research has gained tremendous momentum in recent times. However, there has been little work in this area for an Indian language. We propose in this paper a fall-back strategy to do sentiment analysis for Hindi documents, a problem on which, to the best of our knowledge, no work has been done until now. (A) First of all, we study three approaches to perform SA in Hindi. We have developed a sentiment annotated corpora in the Hindi movie review domain. The first of our approaches involves training a classifier on this annotated Hindi corpus and using it to classify a new Hindi document. (B) In the second approach, we translate the given document into English and use a classifier trained on standard English movie reviews to classify the document. (C) In the third approach, we develop a lexical resource called Hindi-SentiWordNet (H-SWN) and implement a majority score based strategy to classify the given document. A comparison of performance of these approaches implies that we can adopt a fallback strategy for doing sentiment analysis for a new language, viz., (1) Train a sentiment classifier on in-language labeled corpus and use this classifier to classify a new document. (2) If in-language training data is not available, apply rough machine translation to translate the new document into a resource-rich language like English and detect the polarity of the translated document using a classifier for English, assuming polarity is not lost in translation. (3) If the translation cannot be done, put in place a SentiWordNet-like resource for the new language and apply a majority strategy to the document to be classified. Two additional contributions of our work are (i) the development of sentiment labeled corpus for Hindi movie reviews and (ii) construction of a lexical resource, Hindi SentiWordNet based on its English counterpart.",
"title": ""
},
{
"docid": "613ddf5a74bdb225608dea785ba97154",
"text": "We present a prototype system, code-named Pulse, for mining topics and sentiment orientation jointly from free text customer feedback. We describe the application of the prototype system to a database of car reviews. Pulse enables the exploration of large quantities of customer free text. The user can examine customer opinion “at a glance” or explore the data at a finer level of detail. We describe a simple but effective technique for clustering sentences, the application of a bootstrapping approach to sentiment classification, and a novel user-interface.",
"title": ""
},
{
"docid": "8a2586b1059534c5a23bac9c1cc59906",
"text": "The web contains a wealth of product reviews, but sifting through them is a daunting task. Ideally, an opinion mining tool would process a set of search results for a given item, generating a list of product attributes (quality, features, etc.) and aggregating opinions about each of them (poor, mixed, good). We begin by identifying the unique properties of this problem and develop a method for automatically distinguishing between positive and negative reviews. Our classifier draws on information retrieval techniques for feature extraction and scoring, and the results for various metrics and heuristics vary depending on the testing situation. The best methods work as well as or better than traditional machine learning. When operating on individual sentences collected from web searches, performance is limited due to noise and ambiguity. But in the context of a complete web-based tool and aided by a simple method for grouping sentences into attributes, the results are qualitatively quite useful.",
"title": ""
}
] |
[
{
"docid": "1c891aa5787d52497f8869011b234440",
"text": "This paper compares different indexing techniques proposed for supporting efficient access to temporal data. The comparison is based on a collection of important performance criteria, including the space consumed, update processing, and query time for representative queries. The comparison is based on worst-case analysis, hence no assumptions on data distribution or query frequencies are made. When a number of methods have the same asymptotic worst-case behavior, features in the methods that affect average case behavior are discussed. Additional criteria examined are the pagination of an index, the ability to cluster related data together, and the ability to efficiently separate old from current data (so that larger archival storage media such as write-once optical disks can be used). The purpose of the paper is to identify the difficult problems in accessing temporal data and describe how the different methods aim to solve them. A general lower bound for answering basic temporal queries is also introduced.",
"title": ""
},
{
"docid": "84625e28d5545123a4bbd3f5a3154b0e",
"text": "Event recognition from still images is of great importance for image understanding. However, compared with event recognition in videos, there are much fewer research works on event recognition in images. This paper addresses the issue of event recognition from images and proposes an effective method with deep neural networks. Specifically, we design a new architecture, called Object-Scene Convolutional Neural Network (OS-CNN). This architecture is decomposed into object net and scene net, which extract useful information for event understanding from the perspective of objects and scene context, respectively. Meanwhile, we investigate different network architectures for OS-CNN design, and adapt the deep (AlexNet) and very-deep (GoogLeNet) networks to the task of event recognition. Furthermore, we find that the deep and very-deep networks are complementary to each other. Finally, based on the proposed OS-CNN and comparative study of different network architectures, we come up with a solution of five-stream CNN for the track of cultural event recognition at the ChaLearn Looking at People (LAP) challenge 2015. Our method obtains the performance of 85.5% and ranks the 1st place in this challenge.",
"title": ""
},
{
"docid": "3603e3d676a3ccae0c2ad18dc914b6a1",
"text": "In large storage systems, it is crucial to protect data from loss due to failures. Erasure codes lay the foundation of this protection, enabling systems to reconstruct lost data when components fail. Erasure codes can however impose significant performance overhead in two core operations: encoding, where coding information is calculated from newly written data, and decoding, where data is reconstructed after failures. This paper focuses on improving the performance of encoding, the more frequent operation. It does so by scheduling the operations of XOR-based erasure codes to optimize their use of cache memory. We call the technique XORscheduling and demonstrate how it applies to a wide variety of existing erasure codes. We conduct a performance evaluation of scheduling these codes on a variety of processors and show that XOR-scheduling significantly improves upon the traditional approach. Hence, we believe that XORscheduling has great potential to have wide impact in practical storage systems.",
"title": ""
},
{
"docid": "a96d6649a2274a919fbeb5b2221d69c6",
"text": "In this paper, a novel center frequency and bandwidth tunable, cross-coupled waveguide resonator filter is presented. The coupling between adjacent resonators can be adjusted using non-resonating coupling resonators. The negative sign for the cross coupling, which is required to generate transmission zeros, is enforced by choosing an appropriate resonant frequency for the cross-coupling resonator. The coupling iris design itself is identical regardless of the sign of the coupling. The design equations for the novel coupling elements are given in this paper. A four pole filter breadboard with two transmission zeros (elliptic filter function) has been built up and measured at various bandwidth and center frequency settings. It operates at Ka-band frequencies and can be tuned to bandwidths from 36 to 72 MHz in the frequency range 19.7-20.2 GHz.",
"title": ""
},
{
"docid": "db0c212b07969b3e78911337dd59d2c4",
"text": "Computational models for sarcasm detection have often relied on the content of utterances in isolation. However, the speaker’s sarcastic intent is not always apparent without additional context. Focusing on social media discussions, we investigate three issues: (1) does modeling conversation context help in sarcasm detection? (2) can we identify what part of conversation context triggered the sarcastic reply? and (3) given a sarcastic post that contains multiple sentences, can we identify the specific sentence that is sarcastic? To address the first issue, we investigate several types of Long Short-Term Memory (LSTM) networks that can model both the conversation context and the current turn. We show that LSTM networks with sentence-level attention on context and current turn, as well as the conditional LSTM network, outperform the LSTM model that reads only the current turn. As conversation context, we consider the prior turn, the succeeding turn, or both. Our computational models are tested on two types of social media platforms: Twitter and discussion forums. We discuss several differences between these data sets, ranging from their size to the nature of the gold-label annotations. To address the latter two issues, we present a qualitative analysis of the attention weights produced by the LSTM models (with attention) and discuss the results compared with human performance on the two tasks.",
"title": ""
},
{
"docid": "fbce9042585954b38f79be6b024759f5",
"text": "When making choices in software projects, engineers and other stakeholders engage in decision making that involves uncertain future outcomes. Research in psychology, behavioral economics and neuroscience has questioned many of the classical assumptions of how such decisions are made. This literature review aims to characterize the assumptions that underpin the study of these decisions in Software Engineering. We identify empirical research on this subject and analyze how the role of time has been characterized in the study of decision making in SE. The literature review aims to support the development of descriptive frameworks for empirical studies of intertemporal decision making in practice.",
"title": ""
},
{
"docid": "392a683cf9fdbd18c2ac6a46962a9911",
"text": "Recently, reinforcement learning has been successfully applied to the logical game of Go, various Atari games, and even a 3D game, Labyrinth, though it continues to have problems in sparse reward settings. It is difficult to explore, but also difficult to exploit, a small number of successes when learning policy. To solve this issue, the subgoal and option framework have been proposed. However, discovering subgoals online is too expensive to be used to learn options in large state spaces. We propose Micro-objective learning (MOL) to solve this problem. The main idea is to estimate how important a state is while training and to give an additional reward proportional to its importance. We evaluated our algorithm in two Atari games: Montezuma’s Revenge and Seaquest. With three experiments to each game, MOL significantly improved the baseline scores. Especially in Montezuma’s Revenge, MOL achieved two times better results than the previous state-of-the-art model.",
"title": ""
},
{
"docid": "97ad45410c0b613d08f1b0202777d124",
"text": "Much of the emerging literature on social media in the workplace is characterized by an “ideology of openness” that assumes social media use will increase knowledge sharing in organizations, and that open communication is effective and desirable. We argue that affordances of social media may in fact promote both overt and covert behavior, creating dialectical tensions for distributed workers that must be communicatively managed. Drawing on a case study of the engineering division of a distributed high tech start-up, we find our participants navigate tensions in visibility-invisibility, engagement-disengagement, and sharing-control and strategically manage these tensions to preserve both openness and ambiguity. These findings highlight ways in which organizational members limit as well as share knowledge through social media, and the productive role of tensions in enabling them to attend to multiple goals.",
"title": ""
},
{
"docid": "7ecf315d70e6d438ef90ec76b192b65f",
"text": "Stress is a common condition, a response to a physical threat or psychological distress, that generates a host of chemical and hormonal reactions in the body. In essence, the body prepares to fight or fiee, pumping more blood to the heart and muscles and shutting down all nonessential functions. As a temporary state, this reaction serves the body well to defend itself When the stress reaction is prolonged, however, the normal physical functions that have in response either been exaggerated or shut down become dysfunctional. Many have noted the benefits of exercise in diminishing the stress response, and a host of studies points to these benefits. Yoga, too, has been recommended and studied in relationship to stress, although the studies are less scientifically replicable. Nonetheless, several researchers claim highly beneficial results from Yoga practice in alleviating stress and its effects. The practices recommended range from intense to moderate to relaxed asana sequences, along yNith.pranayama and meditation. In all these approaches to dealing with stress, one common element stands out: The process is as important as the activity undertaken. Because it fosters self-awareness. Yoga is a promising approach for dealing with the stress response. Yoga and the Stress Response Stress has become a common catchword in our society to indicate a host of difficulties, both as cause and effect. The American Academy of Family Physicians has noted that stress-related symptoms prompt two-thirds of the office visits to family physicians.' Exercise and alternative therapies are now commonly prescribed for stress-related complaints and illness. Even a recent issue of Consumer Reports suggests Yoga for stress relief.̂ Many books and articles claim, as does Dr. Susan Lark, that practicing Yoga will \"provide effective relief of anxiety and stress.\"^ But is this an accurate promise? What Is the Stress Response? A review of the current thinking on stress reveals that the process is both biochemical and psychological. A very good summary of research on the stress response is contained in Robert Sapolsky's Why Zebras Don't Get",
"title": ""
},
{
"docid": "5dad207fe80469fe2b80d1f1e967575e",
"text": "As the geolocation capabilities of smartphones continue to improve, developers have continued to create more innovative applications that rely on this location information for their primary function. This can be seen with Niantic’s release of Pokémon GO, which is a massively multiplayer online role playing and augmented reality game. This game became immensely popular within just a few days of its release. However, it also had the propensity to be a distraction to drivers resulting in numerous accidents, and was used to as a tool by armed robbers to lure unsuspecting users into secluded areas. This facilitates a need for forensic investigators to be able to analyze the data within the application in order to determine if it may have been involved in these incidents. Because this application is new, limited research has been conducted regarding the artifacts that can be recovered from the application. In this paper, we aim to fill the gaps within the current research by assessing what forensically relevant information may be recovered from the application, and understanding the circumstances behind the creation of this information. Our research focuses primarily on the artifacts generated by the Upsight analytics platform, those contained within the bundles directory, and the Pokémon Go Plus accessory. Moreover, we present our new application specific analysis tool that is capable of extracting forensic artifacts from a backup of the Android application, and presenting them to an investigator in an easily readable format. This analysis tool exceeds the capabilities of UFED Physical Analyzer in processing Pokémon GO application data.",
"title": ""
},
{
"docid": "5539885c88d11eb6a9c4e54b6e399863",
"text": "Deep neural network is difficult to train and this predicament becomes worse as the depth increases. The essence of this problem exists in the magnitude of backpropagated errors that will result in gradient vanishing or exploding phenomenon. We show that a variant of regularizer which utilizes orthonormality among different filter banks can alleviate this problem. Moreover, we design a backward error modulation mechanism based on the quasi-isometry assumption between two consecutive parametric layers. Equipped with these two ingredients, we propose several novel optimization solutions that can be utilized for training a specific-structured (repetitively triple modules of Conv-BNReLU) extremely deep convolutional neural network (CNN) WITHOUT any shortcuts/ identity mappings from scratch. Experiments show that our proposed solutions can achieve distinct improvements for a 44-layer and a 110-layer plain networks on both the CIFAR-10 and ImageNet datasets. Moreover, we can successfully train plain CNNs to match the performance of the residual counterparts. Besides, we propose new principles for designing network structure from the insights evoked by orthonormality. Combined with residual structure, we achieve comparative performance on the ImageNet dataset.",
"title": ""
},
{
"docid": "216d4c4dc479588fb91a27e35b4cb403",
"text": "At extreme scale, irregularities in the structure of scale-free graphs such as social network graphs limit our ability to analyze these important and growing datasets. A key challenge is the presence of high-degree vertices (hubs), that leads to parallel workload and storage imbalances. The imbalances occur because existing partitioning techniques are not able to effectively partition high-degree vertices.\n We present techniques to distribute storage, computation, and communication of hubs for extreme scale graphs in distributed memory supercomputers. To balance the hub processing workload, we distribute hub data structures and related computation among a set of delegates. The delegates coordinate using highly optimized, yet portable, asynchronous broadcast and reduction operations. We demonstrate scalability of our new algorithmic technique using Breadth-First Search (BFS), Single Source Shortest Path (SSSP), K-Core Decomposition, and PageRank on synthetically generated scale-free graphs. Our results show excellent scalability on large scale-free graphs up to 131K cores of the IBM BG/P, and outperform the best known Graph500 performance on BG/P Intrepid by 15%.",
"title": ""
},
{
"docid": "2a1920f22f22dcf473612a6d35cf0132",
"text": "We address statistical classifier design given a mixed training set consisting of a small labelled feature set and a (generally larger) set of unlabelled features. This situation arises, e.g., for medical images, where although training features may be plentiful, expensive expertise is required to extract their class labels. We propose a classifier structure and learning algorithm that make effective use of unlabelled data to improve performance. The learning is based on maximization of the total data likelihood, i.e. over both the labelled and unlabelled data subsets. Two distinct EM learning algorithms are proposed, differing in the EM formalism applied for unlabelled data. The classifier, based on a joint probability model for features and labels, is a \"mixture of experts\" structure that is equivalent to the radial basis function (RBF) classifier, but unlike RBFs, is amenable to likelihood-based training. The scope of application for the new method is greatly extended by the observation that test data, or any new data to classify, is in fact additional, unlabelled data thus, a combined learning/classification operation much akin to what is done in image segmentation can be invoked whenever there is new data to classify. Experiments with data sets from the UC Irvine database demonstrate that the new learning algorithms and structure achieve substantial performance gains over alternative approaches.",
"title": ""
},
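The learning objective described in the preceding passage — maximizing the total data likelihood over both labelled and unlabelled subsets — has a standard generic form, written out below for clarity. The notation (theta for parameters, D_L and D_U for the labelled and unlabelled subsets) is ours, not taken from the paper.

```latex
% Generic total-data log-likelihood over labelled (D_L) and unlabelled (D_U)
% subsets; unlabelled points contribute through the class-marginalized density.
\log L(\theta) \;=\; \sum_{(x_i,\,y_i)\,\in\,\mathcal{D}_L} \log p(x_i, y_i \mid \theta)
\;+\; \sum_{x_j\,\in\,\mathcal{D}_U} \log p(x_j \mid \theta),
\qquad p(x \mid \theta) \;=\; \sum_{y} p(x, y \mid \theta).
```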
{
"docid": "0f10aa71d58858ea1d8d7571a7cbfe22",
"text": "We study hierarchical classification in the general case when an instance could belong to more than one class node in the underlying taxonomy. Experiments done in previous work showed that a simple hierarchy of Support Vectors Machines (SVM) with a top-down evaluation scheme has a surprisingly good performance on this kind of task. In this paper, we introduce a refined evaluation scheme which turns the hierarchical SVM classifier into an approximator of the Bayes optimal classifier with respect to a simple stochastic model for the labels. Experiments on synthetic datasets, generated according to this stochastic model, show that our refined algorithm outperforms the simple hierarchical SVM. On real-world data, however, the advantage brought by our approach is a bit less clear. We conjecture this is due to a higher noise rate for the training labels in the low levels of the taxonomy.",
"title": ""
},
{
"docid": "d95b182517307844faa458e3f4edf0ab",
"text": "Scilab and Scicos are open-source and free software packages for design, simulation and realization of industrial process control systems. They can be used as the center of an integrated platform for the complete development process, including running controller with real plant (ScicosHIL: Hardware In the Loop) and automatic code generation for real time embedded platforms (Linux, RTAI/RTAI-Lab, RTAIXML/J-RTAI-Lab). These tools are mature, working alternatives to closed source, proprietary solutions for educational, academic, research and industrial applications. We present, using a working example, a complete development chain, from the design tools to the automatic code generation of stand alone embedded control and user interface program.",
"title": ""
},
{
"docid": "2e5a51176d1c0ab5394bb6a2b3034211",
"text": "School transport is used by millions of children worldwide. However, not a substantial effort is done in order to improve the existing school transport systems. This paper presents the development of an IoT based scholar bus monitoring system. The development of new telematics technologies has enabled the development of various Intelligent Transport Systems. However, these are not presented as ITS services to end users. This paper presents the development of an IoT based scholar bus monitoring system that through localization and speed sensors will allow many stakeholders such as parents, the goverment, the school and many other authorities to keep realtime track of the scholar bus behavior, resulting in a better controlled scholar bus.",
"title": ""
},
{
"docid": "343ba137056cac30d0d37e17a425d53b",
"text": "This thesis explores fundamental improvements in unsupervised deep learning algorithms. Taking a theoretical perspective on the purpose of unsupervised learning, and choosing learnt approximate inference in a jointly learnt directed generative model as the approach, the main question is how existing implementations of this approach, in particular auto-encoders, could be improved by simultaneously rethinking the way they learn and the way they perform inference. In such network architectures, the availability of two opposing pathways, one for inference and one for generation, allows to exploit the symmetry between them and to let either provide feedback signals to the other. The signals can be used to determine helpful updates for the connection weights from only locally available information, removing the need for the conventional back-propagation path and mitigating the issues associated with it. Moreover, feedback loops can be added to the usual usual feed-forward network to improve inference itself. The reciprocal connectivity between regions in the brain’s neocortex provides inspiration for how the iterative revision and verification of proposed interpretations could result in a fair approximation to optimal Bayesian inference. While extracting and combining underlying ideas from research in deep learning and cortical functioning, this thesis walks through the concepts of generative models, approximate inference, local learning rules, target propagation, recirculation, lateral and biased competition, predictive coding, iterative and amortised inference, and other related topics, in an attempt to build up a complex of insights that could provide direction to future research in unsupervised deep learning methods.",
"title": ""
},
{
"docid": "ba974ef3b1724a0b31331f558ed13e8e",
"text": "The paper presents a simple and effective sketch-based algorithm for large scale image retrieval. One of the main challenges in image retrieval is to localize a region in an image which would be matched with the query image in contour. To tackle this problem, we use the human perception mechanism to identify two types of regions in one image: the first type of region (the main region) is defined by a weighted center of image features, suggesting that we could retrieve objects in images regardless of their sizes and positions. The second type of region, called region of interests (ROI), is to find the most salient part of an image, and is helpful to retrieve images with objects similar to the query in a complicated scene. So using the two types of regions as candidate regions for feature extraction, our algorithm could increase the retrieval rate dramatically. Besides, to accelerate the retrieval speed, we first extract orientation features and then organize them in a hierarchal way to generate global-to-local features. Based on this characteristic, a hierarchical database index structure could be built which makes it possible to retrieve images on a very large scale image database online. Finally a real-time image retrieval system on 4.5 million database is developed to verify the proposed algorithm. The experiment results show excellent retrieval performance of the proposed algorithm and comparisons with other algorithms are also given.",
"title": ""
},
{
"docid": "35470a422cdb3a287d45797e39c04637",
"text": "In this paper, we propose a method to recognize food images which include multiple food items considering co-occurrence statistics of food items. The proposed method employs a manifold ranking method which has been applied to image retrieval successfully in the literature. In the experiments, we prepared co-occurrence matrices of 100 food items using various kinds of data sources including Web texts, Web food blogs and our own food database, and evaluated the final results obtained by applying manifold ranking. As results, it has been proved that co-occurrence statistics obtained from a food photo database is very helpful to improve the classification rate within the top ten candidates.",
"title": ""
},
{
"docid": "d00957d93af7b2551073ba84b6c0f2a6",
"text": "High demand for computation resources severely hinders deployment of large-scale Deep Neural Networks (DNN) in resource constrained devices. In this work, we propose a Structured Sparsity Learning (SSL) method to regularize the structures (i.e., filters, channels, filter shapes, and layer depth) of DNNs. SSL can: (1) learn a compact structure from a bigger DNN to reduce computation cost; (2) obtain a hardware-friendly structured sparsity of DNN to efficiently accelerate the DNN’s evaluation. Experimental results show that SSL achieves on average 5.1× and 3.1× speedups of convolutional layer computation of AlexNet against CPU and GPU, respectively, with off-the-shelf libraries. These speedups are about twice speedups of non-structured sparsity; (3) regularize the DNN structure to improve classification accuracy. The results show that for CIFAR-10, regularization on layer depth can reduce 20 layers of a Deep Residual Network (ResNet) to 18 layers while improve the accuracy from 91.25% to 92.60%, which is still slightly higher than that of original ResNet with 32 layers. For AlexNet, structure regularization by SSL also reduces the error by ∼ 1%. Our source code can be found at https://github.com/wenwei202/caffe/tree/scnn",
"title": ""
}
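The structured-sparsity regularization summarized in the preceding passage is built on group-wise penalties over filters, channels, filter shapes and layer depth. A generic group-Lasso objective of that kind is written out below as a clarifying example; the grouping and notation here are our own shorthand, not a transcription of the paper's exact formulation.

```latex
% Generic group-sparsity (group-Lasso) training objective: data loss plus an
% L2-norm penalty summed over groups g (e.g., filters or channels) of layer l.
E(W) \;=\; E_{\text{data}}(W)
\;+\; \lambda \sum_{l=1}^{L} \sum_{g \,\in\, \mathcal{G}_l} \big\lVert W^{(l)}_{g} \big\rVert_{2}
```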
] |
scidocsrr
|
15367ce5825ce0d76f8e66fe98253c2e
|
Comparing feature-based classifiers and convolutional neural networks to detect arrhythmia from short segments of ECG
|
[
{
"docid": "a39f988fa6f7a55662f5a8821e9ad87c",
"text": "We develop an algorithm which exceeds the performance of board certified cardiologists in detecting a wide range of heart arrhythmias from electrocardiograms recorded with a single-lead wearable monitor. We build a dataset with more than 500 times the number of unique patients than previously studied corpora. On this dataset, we train a 34-layer convolutional neural network which maps a sequence of ECG samples to a sequence of rhythm classes. Committees of boardcertified cardiologists annotate a gold standard test set on which we compare the performance of our model to that of 6 other individual cardiologists. We exceed the average cardiologist performance in both recall (sensitivity) and precision (positive predictive value).",
"title": ""
}
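The positive passage describes a deep 1-D convolutional network that maps raw single-lead ECG samples to rhythm classes. The PyTorch sketch below shows the general shape of such a model in miniature; it is emphatically not the 34-layer architecture from the paper — the layer sizes, the 200 Hz sampling assumption and the four-class output are illustrative choices of ours.

```python
# Miniature 1-D CNN for classifying a single-lead ECG segment into rhythm
# classes. Illustrative only: not the 34-layer network from the passage above.
import torch
import torch.nn as nn

class TinyECGNet(nn.Module):
    def __init__(self, n_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=16, stride=2, padding=7),
            nn.BatchNorm1d(32), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=16, stride=2, padding=7),
            nn.BatchNorm1d(64), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),          # collapse the time axis
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):                     # x: (batch, 1, n_samples)
        h = self.features(x).squeeze(-1)      # (batch, 64)
        return self.classifier(h)             # rhythm-class logits per segment

# Example: one 30-second segment at an assumed 200 Hz sampling rate.
logits = TinyECGNet()(torch.randn(1, 1, 6000))
```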
] |
[
{
"docid": "ba67c3006c6167550bce500a144e63f1",
"text": "This paper provides an overview of different methods for evaluating automatic summarization systems. The challenges in evaluating summaries are characterized. Both intrinsic and extrinsic approaches are discussed. Methods for assessing informativeness and coherence are described. The advantages and disadvantages of specific methods are assessed, along with criteria for choosing among them. The paper concludes with some suggestions for future directions.",
"title": ""
},
{
"docid": "69561d0f42cf4aae73d4c97c1871739e",
"text": "Recent methods based on 3D skeleton data have achieved outstanding performance due to its conciseness, robustness, and view-independent representation. With the development of deep learning, Convolutional Neural Networks (CNN) and Long Short Term Memory (LSTM)-based learning methods have achieved promising performance for action recognition. However, for CNN-based methods, it is inevitable to loss temporal information when a sequence is encoded into images. In order to capture as much spatial-temporal information as possible, LSTM and CNN are adopted to conduct effective recognition with later score fusion. In addition, experimental results show that the score fusion between CNN and LSTM performs better than that between LSTM and LSTM for the same feature. Our method achieved state-of-the-art results on NTU RGB+D datasets for 3D human action analysis. The proposed method achieved 87.40% in terms of accuracy and ranked 1st place in Large Scale 3D Human Activity Analysis Challenge in Depth Videos.",
"title": ""
},
{
"docid": "afed6411b9580d285eca622c2b24b2cd",
"text": "This paper presents a method to calibrate the model of serial and flexible lightweight robots with joint sided torque sensors in the assembled state. The calibration is done in an iterative three-step process, based on static robot poses. In the first step the kinematics and stiffnesses of the flexible components are calibrated. Second the models of the integrated torque sensors are identified in a linear least square solution. In the third step the masses, the centers of gravity and the torque sensor offsets are estimated using linear regression. The calibration steps are repeated stepwise to account for their dependencies. The calibration procedure is simulated and experimentally performed with the medical lightweight robot MIRO of the German Aerospace Center. Through the iterative procedure the pose accuracy improves from about 5mm translational error and 2.5 ° rotational error to 1mm and 0.3 ° regarding the entire workspace.",
"title": ""
},
{
"docid": "98cc792a4fdc23819c877634489d7298",
"text": "This paper introduces a product quantization-based approach for approximate nearest neighbor search. The idea is to decompose the space into a Cartesian product of low-dimensional subspaces and to quantize each subspace separately. A vector is represented by a short code composed of its subspace quantization indices. The euclidean distance between two vectors can be efficiently estimated from their codes. An asymmetric version increases precision, as it computes the approximate distance between a vector and a code. Experimental results show that our approach searches for nearest neighbors efficiently, in particular in combination with an inverted file system. Results for SIFT and GIST image descriptors show excellent search accuracy, outperforming three state-of-the-art approaches. The scalability of our approach is validated on a data set of two billion vectors.",
"title": ""
},
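The preceding passage describes product quantization: decompose the vector space into a Cartesian product of low-dimensional subspaces, quantize each subspace separately, and estimate distances directly from the short codes (asymmetric distance computation). A small NumPy sketch of the encode and distance-estimation steps follows; it assumes the per-subspace codebooks have already been trained (e.g., by k-means), and the sizes used are illustrative.

```python
# Hedged NumPy sketch of product quantization: encoding a vector into m
# subspace indices and estimating a query-to-code distance asymmetrically.
import numpy as np

def pq_encode(x, codebooks):
    """x: (d,) vector; codebooks: (m, k, d//m). Returns m centroid indices."""
    m, k, dsub = codebooks.shape
    parts = x.reshape(m, dsub)
    return np.array([int(np.argmin(np.linalg.norm(codebooks[j] - parts[j], axis=1)))
                     for j in range(m)])

def pq_asymmetric_distance(query, code, codebooks):
    """Approximate squared distance between an uncompressed query and a code."""
    m, k, dsub = codebooks.shape
    q_parts = query.reshape(m, dsub)
    return float(sum(np.sum((q_parts[j] - codebooks[j, code[j]]) ** 2)
                     for j in range(m)))

# Toy usage with random stand-in codebooks: 128-dim vectors, 8 subspaces,
# 256 centroids each (so one encoded vector costs 8 bytes).
rng = np.random.default_rng(0)
codebooks = rng.normal(size=(8, 256, 16))
code = pq_encode(rng.normal(size=128), codebooks)
dist = pq_asymmetric_distance(rng.normal(size=128), code, codebooks)
```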
{
"docid": "48b2d263a0f547c5c284c25a9e43828e",
"text": "This paper presents hierarchical topic models for integrating sentiment analysis with collaborative filtering. Our goal is to automatically predict future reviews to a given author from previous reviews. For this goal, we focus on differentiating author's preference, while previous sentiment analysis models process these review articles without this difference. Therefore, we propose a Latent Evaluation Topic model (LET) that infer each author's preference by introducing novel latent variables into author and his/her document layer. Because these variables distinguish the variety of words in each article by merging similar word distributions, LET incorporates the difference of writers' preferences into sentiment analysis. Consequently LET can determine the attitude of writers, and predict their reviews based on like-minded writers' reviews in the collaborative filtering approach. Experiments on review articles show that the proposed model can reduce the dimensionality of reviews to the low-dimensional set of these latent variables, and is a significant improvement over standard sentiment analysis models and collaborative filtering algorithms.",
"title": ""
},
{
"docid": "8182c8d1258ba2d7cca166249f227fb0",
"text": "Usability is increasingly recognized as an important quality factor for interactive software systems, including traditional GUIs-style applications, Web sites, and the large variety of mobile and PDA interactive services. Unusable user interfaces are probably the single largest reasons why encompassing interactive systems – computers plus people, fail in actual use. The design of this diversity of applications so that they actually achieve their intended purposes in term of ease of use is not an easy task. Although there are many individual methods for evaluating usability; they are not well integrated into a single conceptual framework that facilitate their usage by developers who are not trained in the filed of HCI. This is true in part because there are now several different standards (e.g., ISO 9241, ISO/IEC 9126, IEEE Std.610.12) or conceptual models (e.g., Metrics for Usability Standards in Computing [MUSiC]) for usability, and not all of these standards or models describe the same operational definitions and measures. This paper first reviews existing usability standards and models while highlighted the limitations and complementarities of the various standards. It then explains how these various models can be unified into a single consolidated, hierarchical model of usability measurement. This consolidated model is called Quality in Use Integrated Measurement (QUIM). Included in the QUIM model are 10 factors each of which corresponds to a specific facet of usability that is identified in an existing standard or model. These 10 factors are decomposed into a total of 26 sub-factors or measurable criteria that are furtherdecomposed into 127 specific metrics. The paper explains also how a consolidated model, such as QUIM, can help in developing a usability measurement theory.",
"title": ""
},
{
"docid": "c723ff511bc207b490b2f414ec3a3565",
"text": "This paper evaluates the performance of a shoe/foot mounted inertial system for pedestrian navigation application. Two different grades of inertial sensors are used, namely a medium cost tactical grade Honeywell HG1700 inertial measurement unit (IMU) and a low-cost MEMS-based Crista IMU (Cloud Cap Technology). The inertial sensors are used in two different ways for computing the navigation solution. The first method is a conventional integration algorithm where IMU measurements are processed through a set of mechanization equation to compute a six degree-offreedom (DOF) navigation solution. Such a system is referred to as an Inertial Navigation System (INS). The integration of this system with GPS is performed using a tightly coupled integration scheme. Since the sensor is placed on the foot, the designed integrated system exploits the small period for which foot comes to rest at each step (stance-phase of the gait cycle) and uses Zero Velocity Update (ZUPT) to keep the INS errors bounded in the absence of GPS. An algorithm for detecting the stance-phase using the pattern of three-dimensional acceleration is discussed. In the second method, the navigation solutions is computed using the fact that a pedestrian takes one step at a time, and thus positions can be computed by propagating the step-length in the direction of pedestrian motion. This algorithm is termed as pedestrian dead-reckoning (PDR) algorithm. The IMU measurement in this algorithm is used to detect the step, estimate the step-length, and determine the heading for solution propagation. Different algorithms for stridelength estimation and step-detection are discussed in this paper. The PDR system is also integrated with GPS through a tightly coupled integration scheme. The performance of both the systems is evaluated through field tests conducted in challenging GPS environments using both inertial sensors. The specific focus is on the system performance under long GPS outages of duration upto 30 minutes.",
"title": ""
},
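The second method in the passage above propagates position one step at a time, advancing by the estimated step length along the current heading. The toy Python sketch below shows that propagation in isolation; step detection, step-length estimation and heading determination are assumed to be handled elsewhere in the IMU processing chain, and all numbers are illustrative.

```python
# Minimal sketch of pedestrian dead-reckoning (PDR) position propagation:
# at each detected step, move step_length metres along the current heading.
import math

def propagate_pdr(position, step_length, heading_rad):
    """position: (north, east) in metres; heading measured clockwise from north."""
    north, east = position
    return (north + step_length * math.cos(heading_rad),
            east + step_length * math.sin(heading_rad))

# Toy trajectory: ten 0.7 m steps heading 30 degrees east of north.
pos = (0.0, 0.0)
for _ in range(10):
    pos = propagate_pdr(pos, step_length=0.7, heading_rad=math.radians(30.0))
```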
{
"docid": "43cf88f985646c11bca3152740236aab",
"text": "You are in a new city. You are not familiar with the places and neighborhoods. You want to know all about the exciting sights, food outlets, and cultural venues that the locals frequent, in particular those that suit your personal interests. Even though there exist many mapping, local search, and travel assistance sites, they mostly provide popular and famous listings such as Statue of Liberty and Eiffel Tower, which are well-known places but may not suit your personal needs or interests. Therefore, there is a gap between what tourists want and what dominant tourism resources are providing. In this work, we seek to provide a solution to bridge this gap by exploiting the rich user-generated location contents in location-based social networks in order to offer tourists the most relevant and personalized local venue recommendations. In particular, we first propose a novel Bayesian approach to extract the social dimensions of people at different geographical regions to capture their latent local interests. We next mine the local interest communities in each geographical region. We then represent each local community using aggregated behaviors of community members. Finally, we correlate communities across different regions and generate venue recommendations to tourists via cross-region community matching. We have sampled a representative subset of check-ins from Foursquare and experimentally verified the effectiveness of our proposed approaches.",
"title": ""
},
{
"docid": "9a071b23eb370f053a5ecfd65f4a847d",
"text": "INTRODUCTION\nConcomitant obesity significantly impairs asthma control. Obese asthmatics show more severe symptoms and an increased use of medications.\n\n\nOBJECTIVES\nThe primary aim of the study was to identify genes that are differentially expressed in the peripheral blood of asthmatic patients with obesity, asthmatic patients with normal body mass, and obese patients without asthma. Secondly, we investigated whether the analysis of gene expression in peripheral blood may be helpful in the differential diagnosis of obese patients who present with symptoms similar to asthma.\n\n\nPATIENTS AND METHODS\nThe study group included 15 patients with asthma (9 obese and 6 normal-weight patients), while the control group-13 obese patients in whom asthma was excluded. The analysis of whole-genome expression was performed on RNA samples isolated from peripheral blood.\n\n\nRESULTS\nThe comparison of gene expression profiles between asthmatic patients with obesity and those with normal body mass revealed a significant difference in 6 genes. The comparison of the expression between controls and normal-weight patients with asthma showed a significant difference in 23 genes. The analysis of genes with a different expression revealed a group of transcripts that may be related to an increased body mass (PI3, LOC100008589, RPS6KA3, LOC441763, IFIT1, and LOC100133565). Based on gene expression results, a prediction model was constructed, which allowed to correctly classify 92% of obese controls and 89% of obese asthmatic patients, resulting in the overall accuracy of the model of 90.9%.\n\n\nCONCLUSIONS\nThe results of our study showed significant differences in gene expression between obese asthmatic patients compared with asthmatic patients with normal body mass as well as in obese patients without asthma compared with asthmatic patients with normal body mass.",
"title": ""
},
{
"docid": "846aff14ba654f154b37ae03089bb19f",
"text": "This paper presents a procedure to model the drawbar pull and resistive torque of an unknown terrain as a function of normal load and slip using on-board rover instruments. Kapvik , which is a planetary micro-rover prototype with a rocker-bogie mobility system, is simulated in two dimensions. A suite of sensors is used to take relevant measurements; in addition to typical rover measurements, forces above the wheel hubs and rover forward velocity are sensed. An estimator determines the drawbar pull, resistive torque, normal load, and slip of the rover. The collected data are used to create a polynomial fit model that closely resembles the real terrain response.",
"title": ""
},
{
"docid": "82e0394b9b5c88c14259fabd111ddc46",
"text": "In recent years, the venous flap has been highly regarded in microsurgical and reconstructive surgeries, especially in the reconstruction of hand and digit injuries. It is easily designed and harvested with good quality. It is thin and pliable, without the need of sacrificing a major artery at the donor site, and has no limitation on the donor site. It can be transferred not only as a pure skin flap, but also as a composite flap including tendons and nerves as well as vein grafts. All these advantages make it an optimal candidate for hand and digit reconstruction when conventional flaps are limited or unavailable. In this article, we review its classifications and the selection of donor sites, update its clinical applications, and summarize its indications for all types of venous flaps in hand and digit reconstruction.",
"title": ""
},
{
"docid": "5bde44a162fa6259ece485b4319b56a4",
"text": "3D reconstruction from single view images is an ill-posed problem. Inferring the hidden regions from self-occluded images is both challenging and ambiguous. We propose a two-pronged approach to address these issues. To better incorporate the data prior and generate meaningful reconstructions, we propose 3D-LMNet, a latent embedding matching approach for 3D reconstruction. We first train a 3D point cloud auto-encoder and then learn a mapping from the 2D image to the corresponding learnt embedding. To tackle the issue of uncertainty in the reconstruction, we predict multiple reconstructions that are consistent with the input view. This is achieved by learning a probablistic latent space with a novel view-specific ‘diversity loss’. Thorough quantitative and qualitative analysis is performed to highlight the significance of the proposed approach. We outperform state-of-the-art approaches on the task of single-view 3D reconstruction on both real and synthetic datasets while generating multiple plausible reconstructions, demonstrating the generalizability and utility of our approach.",
"title": ""
},
{
"docid": "804152900519454dc801e64b08144fef",
"text": "The point spread function or PSF of the human eye encompasses hugely different domains: a small-angle, high-intensity domain, called the 'PSF core', and a large-angle, low-intensity domain, usually referred to as 'straylight'. The first domain can be assessed by available double-pass or other optical techniques. For the second domain psychophysical techniques have been developed, in particular the Compensation Comparison or CC technique, recently made available for clinical application in the C-Quant instrument. We address the question of whether the psychophysical technique gives measures of straylight that are compatible with those made by optical methods. With a small adaptation the CC method can be used to assess straylight from physical light scattering samples, instead of straylight in the eye, using the same psychophysics, but without interference from the ocular straylight. The light scattered by each of seven light-scattering samples, encompassing the range of straylight values observed in human eyes, was measured by two optical methods and by the psychophysical technique. The results showed that the optical and psychophysical measurements for the seven samples were almost identical.",
"title": ""
},
{
"docid": "cfeb97c3be1c697fb500d54aa43af0e1",
"text": "The development of accurate and robust palmprint verification algorithms is a critical issue in automatic palmprint authentication systems. Among various palmprint verification approaches, the orientation based coding methods, such as competitive code (CompCode), palmprint orientation code (POC) and robust line orientation code (RLOC), are state-of-the-art ones. They extract and code the locally dominant orientation as features and could match the input palmprint in real-time and with high accuracy. However, using only one dominant orientation to represent a local region may lose some valuable information because there are cross lines in the palmprint. In this paper, we propose a novel feature extraction algorithm, namely binary orientation co-occurrence vector (BOCV), to represent multiple orientations for a local region. The BOCV can better describe the local orientation features and it is more robust to image rotation. Our experimental results on the public palmprint database show that the proposed BOCV outperforms the CompCode, POC and RLOC by reducing the equal error rate (EER) significantly. 2009 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "e84e83443d65498a7ea37669122389e5",
"text": "In many scientific and engineering applications, we are tasked with the optimisation of an expensive to evaluate black box function f . Traditional methods for this problem assume just the availability of this single function. However, in many cases, cheap approximations to f may be obtainable. For example, the expensive real world behaviour of a robot can be approximated by a cheap computer simulation. We can use these approximations to eliminate low function value regions cheaply and use the expensive evaluations of f in a small but promising region and speedily identify the optimum. We formalise this task as a multi-fidelity bandit problem where the target function and its approximations are sampled from a Gaussian process. We develop MF-GP-UCB, a novel method based on upper confidence bound techniques. In our theoretical analysis we demonstrate that it exhibits precisely the above behaviour, and achieves better regret than strategies which ignore multi-fidelity information. MF-GP-UCB outperforms such naive strategies and other multi-fidelity methods on several synthetic and real experiments.",
"title": ""
},
{
"docid": "c3005b5b78b6e4bca3a7549faa01c168",
"text": "Finger vein technology is the newest biometric technology which utilizes the vein pattern which is hidden under the human finger for identification. As these patterns are hidden under the skin surface, they provide a huge privacy consideration, and are hence extremely difficult to forge. An approach to perform finger vein identification based on extracting minutiae features and the spurious minutiae removal is presented in this work. Minutiae feature extraction includes the extraction of end points and bifurcation points from the skeletal patterns of vein and the removal of spurious or false minutiae, makes the identification more accurate.",
"title": ""
},
{
"docid": "f740191f7c6d27811bb09bf40e8da021",
"text": "Collaboration Engineering is an approach for the design and deployment of repeatable collaboration processes that can be executed by practitioners without the support of collaboration professionals such as facilitators. A critical challenge in Collaboration Engineering concerns how the design activities have to be executed and which design choices have to be made to create a process design. We report on a four year design science study, in which we developed a design approach for Collaboration Engineering that",
"title": ""
},
{
"docid": "d91b9e1058112b96f72bb8e89099385f",
"text": "It is estimated that familial aggregation and genetic susceptibility play a role in as many as 10% of pancreatic ductal adenocarcinomas. To investigate the role of germ-line mutations in the etiology of pancreatic cancer, we have analyzed samples from patients with pancreatic cancer enrolled in the NFPTR for mutations in four tumor suppressor candidate genes: (a) MAP2K4; (b) MADH4; (c) ACVR1B; and (d) BRCA2 by direct sequencing of constitutional DNA. These genes are mutated in clinically sporadic pancreatic cancer, but germ-line mutations are either not reported or anecdotal in familial pancreatic cancer. Pancreatic cancer patient samples were selected from kindreds in which three or more family members were affected with pancreatic cancer, at least two of which were first-degree relatives. No mutations were identified in mitogen-activated protein kinase kinase 4 (0 of 22), MADH4 (0 of 22), or ACVR1B (0 of 29), making it unlikely that germ-line mutations in these genes account for a significant number of inherited pancreatic cancers. BRCA2 gene sequencing identified five mutations (5 of 29, 17.2%) that are believed to be deleterious and one point mutation (M192T) unreported previously. Three patients harbored the common 6174delT frameshift mutation, one had the splice site mutation IVS 16-2A > G, and one had the splice site mutation IVS 15-1G > A. Two of the five BRCA2 mutation carriers reported a family history of breast cancer, and none reported a family history of ovarian cancer. These findings confirm the increased risk of pancreatic cancer in individuals with BRCA2 mutations and identify germ-line BRCA2 mutations as the most common inherited genetic alteration yet identified in familial pancreatic cancer.",
"title": ""
},
{
"docid": "a2fc7b5fbb88e45c84400b1fe15368ee",
"text": "There is increasing evidence from functional magnetic resonance imaging (fMRI) that visual awareness is not only associated with activity in ventral visual cortex but also with activity in the parietal cortex. However, due to the correlational nature of neuroimaging, it remains unclear whether this parietal activity plays a causal role in awareness. In the experiment presented here we disrupted activity in right or left parietal cortex by applying repetitive transcranial magnetic stimulation (rTMS) over these areas while subjects attempted to detect changes between two images separated by a brief interval (i.e. 1-shot change detection task). We found that rTMS applied over right parietal cortex but not left parietal cortex resulted in longer latencies to detect changes and a greater rate of change blindness compared with no TMS. These results suggest that the right parietal cortex plays a critical role in conscious change detection.",
"title": ""
}
] |
scidocsrr
|
bc5b69ea78fbccc8757f77e0a188ff0e
|
A Nonparametric Approach to Modeling Choice with Limited Data
|
[
{
"docid": "84c362cb2d4a737d7ea62d85b9144722",
"text": "This paper considers mixed, or random coeff icients, multinomial logit (MMNL) models for discrete response, and establishes the following results: Under mild regularity conditions, any discrete choice model derived from random utilit y maximization has choice probabiliti es that can be approximated as closely as one pleases by a MMNL model. Practical estimation of a parametric mixing family can be carried out by Maximum Simulated Likelihood Estimation or Method of Simulated Moments, and easily computed instruments are provided that make the latter procedure fairl y eff icient. The adequacy of a mixing specification can be tested simply as an omitted variable test with appropriately defined artificial variables. An application to a problem of demand for alternative vehicles shows that MMNL provides a flexible and computationally practical approach to discrete response analysis. Acknowledgments: Both authors are at the Department of Economics, University of Cali fornia, Berkeley CA 94720-3880. Correspondence should be directed to mcfadden@econ.berkeley.edu. We are indebted to the E. Morris Cox fund for research support, and to Moshe Ben-Akiva, David Brownstone, Denis Bolduc, Andre de Palma, and Paul Ruud for useful comments. This paper was first presented at the University of Paris X in June 1997.",
"title": ""
}
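The positive passage above concerns mixed multinomial logit (MMNL) models, whose choice probabilities are logit probabilities averaged over a mixing distribution and are typically estimated by simulation. For clarity, the standard form of that probability and its simulated approximation are written out below; the notation is ours, and the number of draws R is whatever the analyst chooses.

```latex
% MMNL choice probability as a mixture of logits over the mixing density
% f(beta | theta), and its simulated approximation averaged over R draws.
P_{nj}(\theta) \;=\; \int \frac{e^{x_{nj}'\beta}}{\sum_{k} e^{x_{nk}'\beta}}\, f(\beta \mid \theta)\, d\beta
\;\approx\; \frac{1}{R} \sum_{r=1}^{R} \frac{e^{x_{nj}'\beta^{(r)}}}{\sum_{k} e^{x_{nk}'\beta^{(r)}}},
\qquad \beta^{(r)} \sim f(\beta \mid \theta).
```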
] |
[
{
"docid": "fdfea6d3a5160c591863351395929a99",
"text": "Deep networks have recently enjoyed enormous success when applied to recognition and classification problems in computer vision [22, 33], but their use in graphics problems has been limited ([23, 7] are notable recent exceptions). In this work, we present a novel deep architecture that performs new view synthesis directly from pixels, trained from a large number of posed image sets. In contrast to traditional approaches, which consist of multiple complex stages of processing, each of which requires careful tuning and can fail in unexpected ways, our system is trained end-to-end. The pixels from neighboring views of a scene are presented to the network, which then directly produces the pixels of the unseen view. The benefits of our approach include generality (we only require posed image sets and can easily apply our method to different domains), and high quality results on traditionally difficult scenes. We believe this is due to the end-to-end nature of our system, which is able to plausibly generate pixels according to color, depth, and texture priors learnt automatically from the training data. We show view interpolation results on imagery from the KITTI dataset [12], from data from [1] as well as on Google Street View images. To our knowledge, our work is the first to apply deep learning to the problem of new view synthesis from sets of real-world, natural imagery.",
"title": ""
},
{
"docid": "f0db74061a2befca317f9333a0712ab9",
"text": "This paper tries to give a gentle introduction to deep learning in medical image processing, proceeding from theoretical foundations to applications. We first discuss general reasons for the popularity of deep learning, including several major breakthroughs in computer science. Next, we start reviewing the fundamental basics of the perceptron and neural networks, along with some fundamental theory that is often omitted. Doing so allows us to understand the reasons for the rise of deep learning in many application domains. Obviously medical image processing is one of these areas which has been largely affected by this rapid progress, in particular in image detection and recognition, image segmentation, image registration, and computer-aided diagnosis. There are also recent trends in physical simulation, modeling, and reconstruction that have led to astonishing results. Yet, some of these approaches neglect prior knowledge and hence bear the risk of producing implausible results. These apparent weaknesses highlight current limitations of deep ()learning. However, we also briefly discuss promising approaches that might be able to resolve these problems in the future.",
"title": ""
},
{
"docid": "e56bc26cd567aff51de3cb47f9682149",
"text": "Recent technological advances have expanded the breadth of available omic data, from whole-genome sequencing data, to extensive transcriptomic, methylomic and metabolomic data. A key goal of analyses of these data is the identification of effective models that predict phenotypic traits and outcomes, elucidating important biomarkers and generating important insights into the genetic underpinnings of the heritability of complex traits. There is still a need for powerful and advanced analysis strategies to fully harness the utility of these comprehensive high-throughput data, identifying true associations and reducing the number of false associations. In this Review, we explore the emerging approaches for data integration — including meta-dimensional and multi-staged analyses — which aim to deepen our understanding of the role of genetics and genomics in complex outcomes. With the use and further development of these approaches, an improved understanding of the relationship between genomic variation and human phenotypes may be revealed.",
"title": ""
},
{
"docid": "9c715e50cf36e14312407ed722fe7a7d",
"text": "Usual medical care often fails to meet the needs of chronically ill patients, even in managed, integrated delivery systems. The medical literature suggests strategies to improve outcomes in these patients. Effective interventions tend to fall into one of five areas: the use of evidence-based, planned care; reorganization of practice systems and provider roles; improved patient self-management support; increased access to expertise; and greater availability of clinical information. The challenge is to organize these components into an integrated system of chronic illness care. Whether this can be done most efficiently and effectively in primary care practice rather than requiring specialized systems of care remains unanswered.",
"title": ""
},
{
"docid": "b492a0063354a81bd99ac3f81c3fb1ec",
"text": "— Bangla automatic number plate recognition (ANPR) system using artificial neural network for number plate inscribing in Bangla is presented in this paper. This system splits into three major parts-number plate detection, plate character segmentation and Bangla character recognition. In number plate detection there arises many problems such as vehicle motion, complex background, distance changes etc., for this reason edge analysis method is applied. As Bangla number plate consists of two words and seven characters, detected number plates are segmented into individual words and characters by using horizontal and vertical projection analysis. After that a robust feature extraction method is employed to extract the information from each Bangla words and characters which is non-sensitive to the rotation, scaling and size variations. Finally character recognition system takes this information as an input to recognize Bangla characters and words. The Bangla character recognition is implemented using multilayer feed-forward network. According to the experimental result, (The abstract needs some exact figures of findings (like success rates of recognition) and how much the performance is better than previous one.) the performance of the proposed system on different vehicle images is better in case of severe image conditions.",
"title": ""
},
{
"docid": "056f5179fa5c0cdea06d29d22a756086",
"text": "Finding solution values for unknowns in Boolean equations was a principal reasoning mode in the Algebra of Logic of the 19th century. Schröder investigated it as Auflösungsproblem (solution problem). It is closely related to the modern notion of Boolean unification. Today it is commonly presented in an algebraic setting, but seems potentially useful also in knowledge representation based on predicate logic. We show that it can be modeled on the basis of first-order logic extended by secondorder quantification. A wealth of classical results transfers, foundations for algorithms unfold, and connections with second-order quantifier elimination and Craig interpolation show up. Although for first-order inputs the set of solutions is recursively enumerable, the development of constructive methods remains a challenge. We identify some cases that allow constructions, most of them based on Craig interpolation, and show a method to take vocabulary restrictions on solution components into account. Revision: June 26, 2017",
"title": ""
},
{
"docid": "359d76f0b4f758c3a58e886e840c5361",
"text": "Cover crops are important components of sustainable agricultural systems. They increase surface residue and aid in the reduction of soil erosion. They improve the structure and water-holding capacity of the soil and thus increase the effectiveness of applied N fertilizer. Legume cover crops such as hairy vetch and crimson clover fix nitrogen and contribute to the nitrogen requirements of subsequent crops. Cover crops can also suppress weeds, provide suitable habitat for beneficial predator insects, and act as non-host crops for nematodes and other pests in crop rotations. This paper reviews the agronomic and economic literature on using cover crops in sustainable food production and reports on past and present research on cover crops and sustainable agriculture at the Beltsville Agricultural Research Center, Maryland. Previous studies suggested that the profitability of cover crops is primarily the result of enhanced crop yields rather than reduced input costs. The experiments at the Beltsville Agricultural Research Center on fresh-market tomato production showed that tomatoes grown with hairy vetch mulch were higher yielding and more profitable than those grown with black polyethylene and no mulch system. Previous studies of cover crops in grain production indicated that legume cover crops such as hairy vetch and crimson clover are more profitable than grass cover crops such as rye or wheat because of the ability of legumes to contribute N to the following crop. A com-",
"title": ""
},
{
"docid": "c4d0084aab61645fc26e099115e1995c",
"text": "Digital transformation often includes establishing big data analytics capabilities and poses considerable challenges for traditional manufacturing organizations, such as car companies. Successfully introducing big data analytics requires substantial organizational transformation and new organizational structures and business processes. Based on the three-stage evolution of big data analytics capabilities at AUDI, the full article provides recommendations for how traditional manufacturing organizations can successfully introduce big data analytics and master the related organizational transformations. Stage I: Advancing. In Stage I, AUDI’s sales and marketing department initiated data analytics projects. Commitment within the organization for data analytics grew slowly, and the strategic importance of the area was increasingly recognized. During this first stage, the IT department played a passive role, responding to the initiators of data analytics projects. The company’s digital innovation hub, however, laid the technology foundation for big data analytics during the Advancing stage. Stage II: Enabling. In Stage II, analytics competencies were built up not only in the digital innovation hub but also in the IT department. The IT department enabled big data analytics through isolated technology activities, sometimes taking on or insourcing tasks previously carried out by external consultancies or the digital innovation hub. Analytics services were developed through a more advanced technology infrastructure as well as analytics methods. Stage III: Leveraging. In the current Stage III, AUDI is leveraging the analytics competencies of the digital innovation hub and the IT department to centrally provide analytics-as-a-service. The IT department is now fully responsible for all technology tasks and is evolving to become a consulting partner for the other big data analytics stakeholders (sales and marketing department and digital innovation hub). In particular, digital services are enabled by leveraging the most valuable data source (i.e., operational car data).",
"title": ""
},
{
"docid": "0b4f44030a922ba2c970c263583e8465",
"text": "BACKGROUND\nSmoking remains one of the few potentially preventable factors associated with low birthweight, preterm birth and perinatal death.\n\n\nOBJECTIVES\nTo assess the effects of smoking cessation programs implemented during pregnancy on the health of the fetus, infant, mother, and family.\n\n\nSEARCH STRATEGY\nWe searched the Cochrane Pregnancy and Childbirth Group trials register and the Cochrane Tobacco Addiction Group trials register (July 2003), MEDLINE (January 2002 to July 2003), EMBASE (January 2002 to July 2003), PsychLIT (January 2002 to July 2003), CINAHL (January 2002 to July 2003), and AUSTHEALTH (January 2002 to 2003). We contacted trial authors to locate additional unpublished data. We handsearched references of identified trials and recent obstetric journals.\n\n\nSELECTION CRITERIA\nRandomised and quasi-randomised trials of smoking cessation programs implemented during pregnancy.\n\n\nDATA COLLECTION AND ANALYSIS\nFour reviewers assessed trial quality and extracted data independently.\n\n\nMAIN RESULTS\nThis review included 64 trials. Fifty-one randomised controlled trials (20,931 women) and six cluster-randomised trials (over 7500 women) provided data on smoking cessation and/or perinatal outcomes. Despite substantial variation in the intensity of the intervention and the extent of reminders and reinforcement through pregnancy, there was an increase in the median intensity of both 'usual care' and interventions over time. There was a significant reduction in smoking in the intervention groups of the 48 trials included: (relative risk (RR) 0.94, 95% confidence interval (CI) 0.93 to 0.95), an absolute difference of six in 100 women continuing to smoke. The 36 trials with validated smoking cessation had a similar reduction (RR 0.94, 95% CI 0.92 to 0.95). Smoking cessation interventions reduced low birthweight (RR 0.81, 95% CI 0.70 to 0.94) and preterm birth (RR 0.84, 95% CI 0.72 to 0.98), and there was a 33 g (95% CI 11 g to 55 g) increase in mean birthweight. There were no statistically significant differences in very low birthweight, stillbirths, perinatal or neonatal mortality but these analyses had very limited power. One intervention strategy, rewards plus social support (two trials), resulted in a significantly greater smoking reduction than other strategies (RR 0.77, 95% CI 0.72 to 0.82). Five trials of smoking relapse prevention (over 800 women) showed no statistically significant reduction in relapse.\n\n\nREVIEWERS' CONCLUSIONS\nSmoking cessation programs in pregnancy reduce the proportion of women who continue to smoke, and reduce low birthweight and preterm birth. The pooled trials have inadequate power to detect reductions in perinatal mortality or very low birthweight.",
"title": ""
},
{
"docid": "03cd67f6c96d37b6345b187382b79c44",
"text": "Social media is a vital source of information during any major event, especially natural disasters. Data produced through social networking sites is seen as ubiquitous, rapid and accessible, and it is believed to empower average citizens to become more situationally aware during disasters and coordinate to help themselves. However, with the exponential increase in the volume of social media data, so comes the increase in data that are irrelevant to a disaster, thus, diminishing peoples’ ability to find the information that they need in order to organize relief efforts, find help, and potentially save lives. In this paper, we present an approach to identifying informative messages in social media streams during disaster events. Our approach is based on Convolutional Neural Networks and shows significant improvement in performance over models that use the “bag of words” and n-grams as features on several datasets of messages from flooding events.",
"title": ""
},
{
"docid": "46f41dd784c02185e0ba2f3ee4b5c8eb",
"text": "The purpose of this study was to examine the changes in temporomandibular joint (TMJ) morphology and clinical symptoms after intraoral vertical ramus osteotomy (IVRO) with and without a Le Fort I osteotomy. Of 50 Japanese patients with mandibular prognathism with mandibular and bimaxillary asymmetry, 25 underwent IVRO and 25 underwent IVRO in combination with a Le Fort I osteotomy. The TMJ symptoms and joint morphology, including disc tissue, were assessed preoperatively and postoperatively by magnetic resonance imaging and axial cephalogram. Improvement was seen in just 50% of joints with anterior disc displacement (ADD) that received IVRO and 52% of those that received IVRO with Le Fort I osteotomy. Fewer or no TMJ symptoms were reported postoperatively in 97% of the joints that received IVRO and 90% that received IVRO with Le Fort I osteotomy. Postoperatively, there were significant condylar position changes and horizontal changes in the condylar long axis on both sides in the two groups. There were no significant differences between improved ADD and unimproved ADD in condylar position change and the angle of the condylar long axis, although distinctive postoperative condylar sag was seen. These results suggest that IVRO with or without Le Fort I osteotomy can improve ADD and TMJ symptoms along with condylar position and angle, but it is difficult to predict the amount of improvement in ADD.",
"title": ""
},
{
"docid": "3af338a01d1419189b7706375feec0c2",
"text": "Like E. Paul Torrance, my colleagues and I have tried to understand the nature of creativity, to assess it, and to improve instruction by teaching for creativity as well as teaching students to think creatively. This article reviews our investment theory of creativity, propulsion theory of creative contributions, and some of the data we have collected with regard to creativity. It also describes the propulsion theory of creative contributions. Finally, it draws",
"title": ""
},
{
"docid": "a4731b9d3bfa2813858ff9ea97668577",
"text": "Both the Swenson and the Soave procedures have been adapted as transanal approaches. Our purpose is to compare the outcomes and complications between transanal Swenson and Soave procedures.This clinical analysis involved a retrospective series of 148 pediatric patients with HD from Dec, 2001, to Dec, 2015. Perioperative/operative characteristics, postoperative complications, and outcomes between the 2 groups were analyzed. Students' t-test and chi-squared analysis were performed.In total 148 patients (Soave 69, Swenson 79) were included in our study. Mean follow-up was 3.5 years. There are no significant differences in overall hospital stay and bowel function. We noted significant differences regarding mean operating time, blood loss, and overall complications. We noted significant differences in mean operating time, blood loss, and overall complications in favor of the Swenson group when compared to the Soave group (P < 0.05).According to our results, although transanal pullthrough Swenson cannot reduce overall hospital stay and improve bowel function compared with the Soave procedure, it results in less blood loss, shorter operation time, and a lower complication rate.",
"title": ""
},
{
"docid": "2a60990e13e7983edea29b131528222d",
"text": "We propose Stereo Direct Sparse Odometry (Stereo DSO) as a novel method for highly accurate real-time visual odometry estimation of large-scale environments from stereo cameras. It jointly optimizes for all the model parameters within the active window, including the intrinsic/extrinsic camera parameters of all keyframes and the depth values of all selected pixels. In particular, we propose a novel approach to integrate constraints from static stereo into the bundle adjustment pipeline of temporal multi-view stereo. Real-time optimization is realized by sampling pixels uniformly from image regions with sufficient intensity gradient. Fixed-baseline stereo resolves scale drift. It also reduces the sensitivities to large optical flow and to rolling shutter effect which are known shortcomings of direct image alignment methods. Quantitative evaluation demonstrates that the proposed Stereo DSO outperforms existing state-of-the-art visual odometry methods both in terms of tracking accuracy and robustness. Moreover, our method delivers a more precise metric 3D reconstruction than previous dense/semi-dense direct approaches while providing a higher reconstruction density than feature-based methods.",
"title": ""
},
{
"docid": "cc4c0a749c6a3f4ac92b9709f24f03f4",
"text": "Modern GPUs with their several hundred cores and more accessible programming models are becoming attractive devices for compute-intensive applications. They are particularly well suited for applications, such as image processing, where the end result is intended to be displayed via the graphics card. One of the more versatile and powerful graphics techniques is ray tracing. However, tracing each ray of light in a scene is very computational expensive and have traditionally been preprocessed on CPUs over hours, if not days. In this paper, Nvidia’s new OptiX ray tracing engine is used to show how the power of modern graphics cards, such as the Nvidia Quadro FX 5800, can be harnessed to ray trace several scenes that represent real-life applications in real-time speeds ranging from 20.63 to 67.15 fps. Near-perfect speedup is demonstrated on dual GPUs for scenes with complex geometries. The impact on ray tracing of the recently announced Nvidia Fermi processor, is also discussed.",
"title": ""
},
{
"docid": "508ad7d072a62433f3233d90286ef902",
"text": "The NP-hard Colorful Components problem is, given a vertex-colored graph, to delete a minimum number of edges such that no connected component contains two vertices of the same color. It has applications in multiple sequence alignment and in multiple network alignment where the colors correspond to species. We initiate a systematic complexity-theoretic study of Colorful Components by presenting NP-hardness as well as fixed-parameter tractability results for different variants of Colorful Components. We also perform experiments with our algorithms and additionally develop an efficient and very accurate heuristic algorithm clearly outperforming a previous min-cut-based heuristic on multiple sequence alignment data.",
"title": ""
},
{
"docid": "3c1db6405945425c61495dd578afd83f",
"text": "This paper describes a novel driver-support system that helps to maintain the correct speed and headway (distance) with respect to lane curvature and other vehicles ahead. The system has been developed as part of the Integrating Project PReVENT under the European Framework Programme 6, which is named SAfe SPEed and safe distaNCE (SASPENCE). The application uses a detailed description of the situation ahead of the vehicle. Many sensors [radar, video camera, Global Positioning System (GPS) and accelerometers, digital maps, and vehicle-to-vehicle wireless local area network (WLAN) connections] are used, and state-of-the-art data fusion provides a model of the environment. The system then computes a feasible maneuver and compares it with the driver's behavior to detect possible mistakes. The warning strategies are based on this comparison. The system “talks” to the driver mainly via a haptic pedal or seat belt and “listens” to the driver mainly via the vehicle acceleration. This kind of operation, i.e., the comparison between what the system thinks is possible and what the driver appears to be doing, and the consequent dialog can be regarded as simple implementations of the rider-horse metaphor (H-metaphor). The system has been tested in several situations (driving simulator, hardware in the loop, and real road tests). Objective and subjective data have been collected, revealing good acceptance and effectiveness, particularly in awakening distracted drivers. The system intervenes only when a problem is actually detected in the headway and/or speed (approaching curves or objects) and has been shown to cause prompt reactions and significant speed correction before getting into really dangerous situations.",
"title": ""
},
{
"docid": "370813b3114c8f8c2611b72876159efe",
"text": "Sciatic nerve structure and nomenclature: epineurium to paraneurium is this a new paradigm? We read with interest the study by Perlas et al., (1) about the sciatic nerve block at the level of its division in the popliteal fossa. We have been developing this technique in our routine practice during the past 7 years and have no doub about the effi cacy and safety of this approach (2,3). However, we do not agree with the author's defi nition of the structure and limits of the nerve. Given the impact of publications from the principal author's research group on the regional anesthesia community, we are compelled to comment on proposed terminology that we feel may create confusion and contribute to the creation of a new paradigm in peripheral nerve blockade. The peripheral nerve is a well-defi ned anatomical entity with an unequivocal histological structure (Figure 1). The fascicle is the noble and functional unit of the nerves. Fascicles are constituted by a group of axons covered individually by the endoneurium and tightly packed within the perineurium. The epineurium comprises all the tissues that hold and surround the fascicles and defi nes the macroscopic external limit of the nerve. Epineurium includes loose connective and adipose tissue and epineurial vessels. Fascicles can be found as isolated units or in groups of fascicles supported and held together into a mixed collagen and fat tissue in different proportions (within the epineurial cover). The epineurium cover is the thin layer of connective tissue that encloses the whole structure and constitutes the anatomical limit of the nerve. It acts as a mechanical barrier (limiting the spread of injected local anesthetic), but not as a physical barrier (allowing the passive diffusion of local anesthetic along the concentration gradient). The paraneurium is the connective tissue that supports and connects the nerve with the surrounding structures (eg, muscles, bone, joints, tendons, and vessels) and acts as a gliding layer. We agree that the limits of the epineurium of the sciatic nerve, like those of the brachial plexus, are more complex than in single nerves. Therefore, the sciatic nerve block deserves special consideration. If we accept that the sciatic nerve is an anatomical unit, the epineurium should include the groups of fascicles that will constitute the tibial and the common peroneal nerves. Similarly, the epineurium of the common peroneal nerve contains the fascicles that will be part of the lateral cutane-ous, …",
"title": ""
},
{
"docid": "3a942985eb615f459a670ada83ce3a41",
"text": "A new method of realising RF barcodes is presented using arrays of identical microstrip dipoles capacitively tuned to be resonant at different frequencies within the desired licensed-free ISM bands. When interrogated, the reader detects each dipole's resonance frequency and with n resonant dipoles, potentially 2/sup n/-1 items in the field can be tagged and identified. Results for RF barcode elements in the 5.8 GHz band are presented. It is shown that with accurate centre frequency prediction and by operating over multiple ISM and other license-exempt bands, a useful number of information bits can be realised. Further increase may be possible using ultra-wideband (UWB) technology. Low cost lithographic printing techniques based on using metal ink on low cost substrates could lead to an economical alternative to current RFID systems in many applications.",
"title": ""
},
{
"docid": "bd47b468b1754ddd9fecf8620eb0b037",
"text": "Common bean (Phaseolus vulgaris) is grown throughout the world and comprises roughly 50% of the grain legumes consumed worldwide. Despite this, genetic resources for common beans have been lacking. Next generation sequencing, has facilitated our investigation of the gene expression profiles associated with biologically important traits in common bean. An increased understanding of gene expression in common bean will improve our understanding of gene expression patterns in other legume species. Combining recently developed genomic resources for Phaseolus vulgaris, including predicted gene calls, with RNA-Seq technology, we measured the gene expression patterns from 24 samples collected from seven tissues at developmentally important stages and from three nitrogen treatments. Gene expression patterns throughout the plant were analyzed to better understand changes due to nodulation, seed development, and nitrogen utilization. We have identified 11,010 genes differentially expressed with a fold change ≥ 2 and a P-value < 0.05 between different tissues at the same time point, 15,752 genes differentially expressed within a tissue due to changes in development, and 2,315 genes expressed only in a single tissue. These analyses identified 2,970 genes with expression patterns that appear to be directly dependent on the source of available nitrogen. Finally, we have assembled this data in a publicly available database, The Phaseolus vulgaris Gene Expression Atlas (Pv GEA), http://plantgrn.noble.org/PvGEA/ . Using the website, researchers can query gene expression profiles of their gene of interest, search for genes expressed in different tissues, or download the dataset in a tabular form. These data provide the basis for a gene expression atlas, which will facilitate functional genomic studies in common bean. Analysis of this dataset has identified genes important in regulating seed composition and has increased our understanding of nodulation and impact of the nitrogen source on assimilation and distribution throughout the plant.",
"title": ""
}
] |
scidocsrr
|
d4c7e1dfe55118c0633b905bc737cc53
|
Lifelong Generative Modeling
|
[
{
"docid": "e2009f56982f709671dcfe43048a8919",
"text": "Probabilistic generative models can be used for compression, denoising, inpainting, texture synthesis, semi-supervised learning, unsupervised feature learning, and other tasks. Given this wide range of applications, it is not surprising that a lot of heterogeneity exists in the way these models are formulated, trained, and evaluated. As a consequence, direct comparison between models is often difficult. This article reviews mostly known but often underappreciated properties relating to the evaluation and interpretation of generative models with a focus on image models. In particular, we show that three of the currently most commonly used criteria—average log-likelihood, Parzen window estimates, and visual fidelity of samples—are largely independent of each other when the data is high-dimensional. Good performance with respect to one criterion therefore need not imply good performance with respect to the other criteria. Our results show that extrapolation from one criterion to another is not warranted and generative models need to be evaluated directly with respect to the application(s) they were intended for. In addition, we provide examples demonstrating that Parzen window estimates should generally be avoided.",
"title": ""
}
] |
[
{
"docid": "49e824c73b62d4c05b28fbd46fde1a28",
"text": "The Advent of Internet-of-Things (IoT) paradigm has brought exciting opportunities to solve many real-world problems. IoT in industries is poised to play an important role not only to increase productivity and efficiency but also to improve customer experiences. Two main challenges that are of particular interest to industry include: handling device heterogeneity and getting contextual information to make informed decisions. These challenges can be addressed by IoT along with proven technologies like the Semantic Web. In this paper, we present our work, SQenIoT: a Semantic Query Engine for Industrial IoT. SQenIoT resides on a commercial product and offers query capabilities to retrieve information regarding the connected things in a given facility. We also propose a things query language, targeted for resource-constrained gateways and non-technical personnel such as facility managers. Two other contributions include multi-level ontologies and mechanisms for semantic tagging in our commercial products. The implementation details of SQenIoT and its performance results are also presented.",
"title": ""
},
{
"docid": "ad4d38ee8089a67353586abad319038f",
"text": "State-of-the-art systems of Chinese Named Entity Recognition (CNER) require large amounts of hand-crafted features and domainspecific knowledge to achieve high performance. In this paper, we apply a bidirectional LSTM-CRF neural network that utilizes both characterlevel and radical-level representations. We are the first to use characterbased BLSTM-CRF neural architecture for CNER. By contrasting the results of different variants of LSTM blocks, we find the most suitable LSTM block for CNER. We are also the first to investigate Chinese radical-level representations in BLSTM-CRF architecture and get better performance without carefully designed features. We evaluate our system on the third SIGHAN Bakeoff MSRA data set for simplfied CNER task and achieve state-of-the-art performance 90.95% F1.",
"title": ""
},
{
"docid": "8213f9488af8e1492d7a4ac2eec3a573",
"text": "The loss surface of deep neural networks has recently attracted interest in the optimization and machine learning communities as a prime example of highdimensional non-convex problem. Some insights were recently gained using spin glass models and mean-field approximations, but at the expense of strongly simplifying the nonlinear nature of the model. In this work, we do not make any such assumption and study conditions on the data distribution and model architecture that prevent the existence of bad local minima. Our theoretical work quantifies and formalizes two important folklore facts: (i) the landscape of deep linear networks has a radically different topology from that of deep half-rectified ones, and (ii) that the energy landscape in the non-linear case is fundamentally controlled by the interplay between the smoothness of the data distribution and model over-parametrization. Our main theoretical contribution is to prove that half-rectified single layer networks are asymptotically connected, and we provide explicit bounds that reveal the aforementioned interplay. The conditioning of gradient descent is the next challenge we address. We study this question through the geometry of the level sets, and we introduce an algorithm to efficiently estimate the regularity of such sets on large-scale networks. Our empirical results show that these level sets remain connected throughout all the learning phase, suggesting a near convex behavior, but they become exponentially more curvy as the energy level decays, in accordance to what is observed in practice with very low curvature attractors.",
"title": ""
},
{
"docid": "bd3f7e9fe1637a52adcf11aefc58f9aa",
"text": "Our goal is to train a policy for autonomous driving via imitation learning that is robust enough to drive a real vehicle. We find that standard behavior cloning is insufficient for handling complex driving scenarios, even when we leverage a perception system for preprocessing the input and a controller for executing the output on the car: 30 million examples are still not enough. We propose exposing the learner to synthesized data in the form of perturbations to the expert’s driving, which creates interesting situations such as collisions and/or going off the road. Rather than purely imitating all data, we augment the imitation loss with additional losses that penalize undesirable events and encourage progress – the perturbations then provide an important signal for these losses and lead to robustness of the learned model. We show that the ChauffeurNet model can handle complex situations in simulation, and present ablation experiments that emphasize the importance of each of our proposed changes and show that the model is responding to the appropriate causal factors. Finally, we demonstrate the model driving a car in the real world.",
"title": ""
},
{
"docid": "d7e0b50d818ab031c40763dd869c5615",
"text": "Visualization has become a valuable means for data exploration and analysis. Interactive visualization combines expressive graphical representations and effective user interaction. Although interaction is an important component of visualization approaches, much of the visualization literature tends to pay more attention to the graphical representation than to interaction. e goal of this work is to strengthen the interaction side of visualization. Based on a brief review of general aspects of interaction, we develop an interaction-oriented view on visualization. is view comprises five key aspects: the data, the tasks, the technology, the human, as well as the implementation. Picking up these aspects individually, we elaborate several interaction methods for visualization. We introduce a multi-threading architecture for efficient interactive exploration. We present interaction techniques for different types of data (e.g., multivariate data, spatio-temporal data, graphs) and different visualization tasks (e.g., exploratory navigation, visual comparison, visual editing). With respect to technology, we illustrate approaches that utilize modern interaction modalities (e.g., touch, tangibles, proxemics) as well as classic ones. While the human is important throughout this work, we also consider automatic methods to assist the interactive part. In addition to solutions for individual problems, a major contribution of this work is the overarching view of interaction in visualization as a whole. is includes a critical discussion of interaction, the identification of links between the key aspects of interaction, and the formulation of research topics for future work with a focus on interaction.",
"title": ""
},
{
"docid": "e84b6bbb2eaee0edb6ac65d585056448",
"text": "As memory accesses become slower with respect to the processor and consume more power with increasing memory size, the focus of memory performance and power consumption has become increasingly important. With the trend to develop multi-threaded, multi-core processors, the demands on the memory system will continue to scale. However, determining the optimal memory system configuration is non-trivial. The memory system performance is sensitive to a large number of parameters. Each of these parameters take on a number of values and interact in fashions that make overall trends difficult to discern. A comparison of the memory system architectures becomes even harder when we add the dimensions of power consumption and manufacturing cost. Unfortunately, there is a lack of tools in the public-domain that support such studies. Therefore, we introduce DRAMsim, a detailed and highly-configurable C-based memory system simulator to fill this gap. DRAMsim implements detailed timing models for a variety of existing memories, including SDRAM, DDR, DDR2, DRDRAM and FB-DIMM, with the capability to easily vary their parameters. It also models the power consumption of SDRAM and its derivatives. It can be used as a standalone simulator or as part of a more comprehensive system-level model. We have successfully integrated DRAMsim into a variety of simulators including MASE [15], Sim-alpha [14], BOCHS[2] and GEMS[13]. The simulator can be downloaded from www.ece.umd.edu/dramsim.",
"title": ""
},
{
"docid": "2a5921cd4554caaa9eb6fd397088ecec",
"text": "This work examines how a classifier's output of a cut-in prediction can be mapped to a (semi-) automated car's reaction to it. Several approaches of decision making are compared using real world data of a lane change predictor for an automated longitudinal guidance system, similar to an adaptive cruise control system, as an example. We show how the decision algorithms affect the time when a new lead vehicle is selected and how much more comfortable we can decelerate given different selection strategies. We propose a novel decision algorithm and conducted a case study with a prototype research car to evaluate the subjective quality of the different approaches.",
"title": ""
},
{
"docid": "8dfd91ceadfcceea352975f9b5958aaf",
"text": "The bag-of-words representation commonly used in text analysis can be analyzed very efficiently and retains a great deal of useful information, but it is also troublesome because the same thought can be expressed using many different terms or one term can have very different meanings. Dimension reduction can collapse together terms that have the same semantics, to identify and disambiguate terms with multiple meanings and to provide a lower-dimensional representation of documents that reflects concepts instead of raw terms. In this chapter, we survey two influential forms of dimension reduction. Latent semantic indexing uses spectral decomposition to identify a lower-dimensional representation that maintains semantic properties of the documents. Topic modeling, including probabilistic latent semantic indexing and latent Dirichlet allocation, is a form of dimension reduction that uses a probabilistic model to find the co-occurrence patterns of terms that correspond to semantic topics in a collection of documents. We describe the basic technologies in detail and expose the underlying mechanism. We also discuss recent advances that have made it possible to apply these techniques to very large and evolving text collections and to incorporate network structure or other contextual information.",
"title": ""
},
{
"docid": "9ddd90ac97b6c3727d9f4f69d44bb873",
"text": "In her 2011 EVT/WOTE keynote, Travis County, Texas County Clerk Dana DeBeauvoir described the qualities she wanted in her ideal election system to replace their existing DREs. In response, in April of 2012, the authors, working with DeBeauvoir and her staff, jointly architected STAR-Vote, a voting system with a DRE-style human interface and a “belt and suspenders” approach to verifiability. It provides both a paper trail and end-toend cryptography using COTS hardware. It is designed to support both ballot-level risk-limiting audits, and auditing by individual voters and observers. The human interface and process flow is based on modern usability research. This paper describes the STAR-Vote architecture, which could well be the next-generation voting system for Travis County and perhaps elsewhere. This paper is a working draft. Significant changes should be expected as the STAR-Vote effort matures.",
"title": ""
},
{
"docid": "b15ed1584eb030fba1ab3c882983dbf0",
"text": "The need for automated grading tools for essay writing and open-ended assignments has received increasing attention due to the unprecedented scale of Massive Online Courses (MOOCs) and the fact that more and more students are relying on computers to complete and submit their school work. In this paper, we propose an efficient memory networks-powered automated grading model. The idea of our model stems from the philosophy that with enough graded samples for each score in the rubric, such samples can be used to grade future work that is found to be similar. For each possible score in the rubric, a student response graded with the same score is collected. These selected responses represent the grading criteria specified in the rubric and are stored in the memory component. Our model learns to predict a score for an ungraded response by computing the relevance between the ungraded response and each selected response in memory. The evaluation was conducted on the Kaggle Automated Student Assessment Prize (ASAP) dataset. The results show that our model achieves state-of-the-art performance in 7 out of 8 essay sets.",
"title": ""
},
{
"docid": "b484d05525e016dfc834754568030a42",
"text": "This study examines the academic abilities of children and adolescents who were once diagnosed with an autism spectrum disorder, but who no longer meet diagnostic criteria for this disorder. These individuals have achieved social and language skills within the average range for their ages, receive little or no school support, and are referred to as having achieved \"optimal outcomes.\" Performance of 32 individuals who achieved optimal outcomes, 41 high-functioning individuals with a current autism spectrum disorder diagnosis (high-functioning autism), and 34 typically developing peers was compared on measures of decoding, reading comprehension, mathematical problem solving, and written expression. Groups were matched on age, sex, and nonverbal IQ; however, the high-functioning autism group scored significantly lower than the optimal outcome and typically developing groups on verbal IQ. All three groups performed in the average range on all subtests measured, and no significant differences were found in performance of the optimal outcome and typically developing groups. The high-functioning autism group scored significantly lower on subtests of reading comprehension and mathematical problem solving than the optimal outcome group. These findings suggest that the academic abilities of individuals who achieved optimal outcomes are similar to those of their typically developing peers, even in areas where individuals who have retained their autism spectrum disorder diagnoses exhibit some ongoing difficulty.",
"title": ""
},
{
"docid": "c35a4278aa4a084d119238fdd68d9eb6",
"text": "ARM TrustZone, which provides a Trusted Execution Environment (TEE), normally plays a role in keeping security-sensitive resources safe. However, to properly control access to the resources, it is not enough to just isolate them from the Rich Execution Environment (REE). In addition to the isolation, secure communication should be guaranteed between security-critical resources in the TEE and legitimate REE processes that are permitted to use them. Even though there is a TEE security solution — namely, a kernel-integrity monitor — it aims to protect the REE kernel’s static regions, not to secure communication between the REE and TEE. We propose SeCReT to ameliorate this problem. SeCReT is a framework that builds a secure channel between the REE and TEE by enabling REE processes to use session keys in the REE that is regarded as unsafe region. SeCReT provides the session key to a requestor process only when the requestor’s code and control flow integrity are verified. To prevent the key from being exposed to an attacker who already compromised the REE kernel, SeCReT flushes the key from the memory every time the processor switches into kernel mode. In this paper, we present the design and implementation of SeCReT to show how it protects the key in the REE. Our prototype is implemented on Arndale board, which offers a Cortex-A15 dual-core processor with TrustZone as its security extension. We performed a security analysis by using a kernel rootkit and also ran LMBench microbenchmark to evaluate the performance overhead imposed by SeCReT.",
"title": ""
},
{
"docid": "d0fc352e347f7df09140068a4195eb9e",
"text": "A wave of alternative coins that can be effectively mined without specialized hardware, and a surge in cryptocurrencies' market value has led to the development of cryptocurrency mining ( cryptomining ) services, such as Coinhive, which can be easily integrated into websites to monetize the computational power of their visitors. While legitimate website operators are exploring these services as an alternative to advertisements, they have also drawn the attention of cybercriminals: drive-by mining (also known as cryptojacking ) is a new web-based attack, in which an infected website secretly executes JavaScript code and/or a WebAssembly module in the user's browser to mine cryptocurrencies without her consent. In this paper, we perform a comprehensive analysis on Alexa's Top 1 Million websites to shed light on the prevalence and profitability of this attack. We study the websites affected by drive-by mining to understand the techniques being used to evade detection, and the latest web technologies being exploited to efficiently mine cryptocurrency. As a result of our study, which covers 28 Coinhive-like services that are widely being used by drive-by mining websites, we identified 20 active cryptomining campaigns. Motivated by our findings, we investigate possible countermeasures against this type of attack. We discuss how current blacklisting approaches and heuristics based on CPU usage are insufficient, and present MineSweeper, a novel detection technique that is based on the intrinsic characteristics of cryptomining code, and, thus, is resilient to obfuscation. Our approach could be integrated into browsers to warn users about silent cryptomining when visiting websites that do not ask for their consent.",
"title": ""
},
{
"docid": "70e3a918cb152278360c2c54a8934b2c",
"text": "In translation, considering the document as a whole can help to resolve ambiguities and inconsistencies. In this paper, we propose a cross-sentence context-aware approach and investigate the influence of historical contextual information on the performance of neural machine translation (NMT). First, this history is summarized in a hierarchical way. We then integrate the historical representation into NMT in two strategies: 1) a warm-start of encoder and decoder states, and 2) an auxiliary context source for updating decoder states. Experimental results on a large Chinese-English translation task show that our approach significantly improves upon a strong attention-based NMT system by up to +2.1 BLEU points.",
"title": ""
},
{
"docid": "2e08d9509b4dc7b75eb311d49dd1e6ca",
"text": "The use of memory forensic techniques has the potential to enhance computer forensic investigations. The analysis of digital evidence is facing several key challenges; an increase in electronic devices, network connections and bandwidth, the use of anti-forensic technologies and the development of network centric applications and technologies has lead to less potential evidence stored on static media and increased amounts of data stored off-system. Memory forensic techniques have the potential to overcome these issues in forensic analysis. While much of the current research in memory forensics has been focussed on low-level data, there is a need for research to extract high-level data from physical memory as a means of providing forensic investigators with greater insight into a target system. This paper outlines the need for further research into memory forensic techniques. In particular it stresses the need for methods and techniques for understanding context on a system and also as a means of augmenting other data sources to provide a more complete and efficient searching of investigations.",
"title": ""
},
{
"docid": "cfafeb416d45b77dd3c9e6a94bfb5049",
"text": " Choosing the best genetic strains of mice for developing a new knockout or transgenic mouse requires extensive knowledge of the endogenous traits of inbred strains. Background genes from the parental strains may interact with the mutated gene, in a manner which could severely compromise the interpretation of the mutant phenotype. The present overview summarizes the literature on a wide variety of behavioral traits for the 129, C57BL/6, DBA/2, and many other inbred strains of mice. Strain distributions are described for open field activity, learning and memory tasks, aggression, sexual and parental behaviors, acoustic startle and prepulse inhibition, and the behavioral actions of ethanol, nicotine, cocaine, opiates, antipsychotics, and anxiolytics. Using the referenced information, molecular geneticists can choose optimal parental strains of mice, and perhaps develop new embryonic stem cell progenitors, for new knockouts and transgenics to investigate gene function, and to serve as animal models in the development of novel therapeutics for human genetic diseases.",
"title": ""
},
{
"docid": "c0f4eda55d0d1021e8f15e34dd62268d",
"text": "This paper presents recent results of small pixel development for different applications and discusses optical and electrical characteristics of small pixels along with their respective images. Presented are basic optical and electrical characteristics of pixels with sizes in the range from 2.2μm to 1.1μm,. The paper provides a comparison of front side illumination (FSI) with back side illumination (BSI) technology and considers tradeoffs and applicability of each technology for different pixel sizes. Additional functionalities that can be added to pixel arrays with small pixel, in particular high dynamic range capabilities are also discussed. 1. FSI and BSI technology development Pixel shrinking is the common trend in image sensors for all areas of consumer electronics, including mobile imaging, digital still and video cameras, PC cameras, automotive, surveillance, and other applications. In mobile and digital still camera (DSC) applications, 1.75μm and 1.4μm pixels are widely used in production. Designers of image sensors are actively working on super-small 1.1μm and 0.9um pixels. In high-end DSC cameras with interchangeable lenses, pixel size reduces from the range of 5 – 6 μm to 3 – 4 μm, and even smaller. With very high requirements for angular pixel performance, this results in similar or even bigger challenges as for sub 1.4μm pixels. Altogether, pixel size reduction in all imaging areas has been the most powerful driving force for new technologies and innovations in pixel development. Aptina continues to develop FSI AptinaTM A-PixTM technology for pixel sizes of 1.4μm and bigger. Figures 1a and 1b illustrate a comparison of a regular pixel for a CMOS imager with Aptina’s A-Pix technology. Adding a light guide (LG) and extending the depth of the photodiode (PD) allow significant reduction of both optical and electrical crosstalk, thus significantly boosting pixel performance [1]. A-Pix technology has become a mature manufacturing process that provides high pixel performance with lower wafer cost compared to BSI technology. The latest efforts in developing A-Pix technology were focused on improving symmetry of the pixel, which resulted in extremely low optical cross-talk, reduced green imbalance and color shading. Improvements stem from improvements in the design and manufacturing of LG, along with the structure of Si PD. LG allows one to compensate for pixel asymmetry (at least its optical part) thus providing both optimal utilization of Si area, and minimal green imbalance / color shading. Figure 2 shows an example of green imbalance for 5Mpix sensors with 1.4μm pixels size designed for 27degree max CRA of the lens. Improvement of the LG design reduces green imbalance by more than 7x. BSI technology allows further reduction of pixel size to extremely small 1.1μm and 0.9μm, and more symmetrical pixel design for larger pixel nodes. Similar to A-Pix, the use of back side illumination in pixel design allows significant reduction of optical and electrical crosstalk, as illustrated in Figure 1c. Both BSI and Aptina Apix technology use the 90nm gate and 65nm pixel manufacturing process. Aptina’s BSI technology uses cost-effective P-EPI on P+ bulk silicon as starting wafers. The wafers receive normal FSI CMOS process with skipping some FSI p modules. Front side alignment marks are added for later backside alignments. 
The device wafers are bonded to BSI carrier wafers, and are thinned down to a few microns thick through wafer back side grinding, selective wet etch, and chemical-mechanical planarization process. The wafer thickness is matched to front side PD depth to reduce cross-talk. Finally, anti-reflective coatings are applied to backside silicon surface and micro-lens to increase pixel QE. Figure 3 shows normalized quantum efficiency spectral characteristics of 1.1μm BSI pixels. Pixels exhibit high QE for all 3 colors and small crosstalk that benefit overall image quality. Figure 4 presents luminance SNR plots for 1.4μm FSI and BSI pixels and 1.1μm BSI pixel. Due to advances of A-Pix technology, characteristics of FSI and BSI 1.4μm pixel are close, with the BSI pixel slightly outperforming FSI pixel, especially at very high CRA. However, the difference in performance is much smaller compared to conventional FSI pixel. For 1.1μm pixels, BSI technology definitely plays a key role in achieving high pixel performance. Major pixel photoelectrical characteristics are presented in Table 1. 2. Image quality of sensors with equal optical format Figure 5 presents SNR10 metrics for different pixel size inversely normalized per pixel area scene illumination at which luminance SNR is equal to 10x for specified lens conditions, integration time, and color correction matrix. As can be seen from the plot, the latest generation of pixels provides SNR10 performance that is scaled to the area, and as a result, provides the same image quality at the same optical format for the mid level of exposures. The latest generation of pixels with the size of (1.1μm – 2.2μm) in Figure 5 uses advances of A-pix technology to boost pixel performance. Many products for mobile and DSC applications use 1.4μm pixel; the latest generations of 1.75μm, 1.9μm, and 2.2μm are in mass production both for still shot and video-centric 2D and 3D applications. Bringing the latest technology to the large 5.6μm pixel has allowed us to significantly boost performance of that pixel (shown as a second bar of Figure 5 for 5.6μm pixel) for automotive applications. As was mentioned earlier, BSI technology furthers the extension of array size for the optical formats. The latest addition to the mainstream mobile cameras with 1⁄4‖ optical format is 8Mpix image sensor with 1.1μm pixels size. Figure 6 compares images from the previous 5Mpix sensor with 1⁄4‖ optical format with 1.4μm pixel size with images from the new 8Mpix sensor with 1.1μm pixel that fits into the same 1/4‖ optical format. Images were taken from the scene with ~100 lux illumination at 67ms integration time and typical f/2.8 lens for mobile applications. Zoomed fragments of the images with 100% zoom for 5Mpix sensor show very comparable quality of the images and confirm that similar image quality for a given optical format results when pixel performance that is scaled to the area continues to be the same. Figure 4 shows also the lowest achievable SNR10 for 1.4μm pixel at similar conditions for the ideal case of QE equal to 100% for all colors and no optical or electrical crosstalk – color overlaps are defined only by color filters. The shape of color filters is taken from large pixel sensor for high-end DSC application and assumes very good color reproduction. It is interesting to see that current 1.4μm pixel has only 40% lower SNR at conditions close to first acceptable image, SNR10 [2]. 3. 
Additional functionality for arrays with small pixels With the diffraction limits of imaging lenses, the minimum resolvable feature size (green light, Rayleigh limit) for an fnumber 2.8 lens is around 1.8 microns [3]. As pixel sizes continue to shrink below 1.8 microns, the image field produced from the optics is oversampled and system MTF does not continue to show scaled improvement based on increased frequency pixel sampling. How can we take advantage of increased frequency pixel sampling then? High Dynamic Range. Humans have the ability to gaze upon a fixed scene and clearly see very bright and dark objects simultaneously. The typical maximum brightness range visible by humans within a fixed scene is about 10,000 to 1 or 80dB [4]. Mobile and digital still cameras often struggle to match the intra-scene dynamic range of the human visual system and can’t capture high range scenes (50-80dB) primarily because the pixels in the camera’s sensors have a linear response and limited well capacities. HDR image capture technology can address the problem of limited dynamic range in today’s camera. However, a low cost technique that provides adequate performance for still and video applications is needed. Frame Multi-exposure HDR. The frame multi-exposure technique, otherwise known as exposure bracketing, is widely used in the industry to capture several photos of a scene and combine them into an HDR photo. Although this technique is simple, effective, and available to anyone with a camera with exposure control, the drawbacks relegate this technique to still scene photography and frame buffer-based post processing. If an HDR camera system is desired that doesn’t require frame memory and can reduce motion artifacts to a level where video capture is possible, the common image sensor architecture used in most cameras today must be changed. Can we use smaller pixels to provide multi-exposure HDR that doesn’t require frame memory for photos and reduces motion artifacts and allows video capture? Interleaved HDR Capture. With pixel size reduction there is an opportunity to take advantage of the diffraction limits of camera optical systems by spatially interleaving pixels with differing exposure time controls to achieve multi-exposure capture. Figure 7 shows an example of a dual exposure capture system using interleaved exposures within a standard Bayer pattern. This form of intra-frame multi-exposure HDR capture can be easily incorporated into standard CMOS sensors and doesn’t require the additional readout speed or large memories. The tradeoff of interleaving the exposures is that fewer pixels are available for each exposure image and can affect the overall captured image resolution. This is where the advantage of small pixels comes into play: as pixels shrink below the diffraction limit, the system approaches being oversampled such that the MTF doesn’t improve proportionally to pixel size. We propose that greater gain in overall image quality may be achieved by spatially sampling different exposures to capture higher scene quality rather than oversampling the image. In Figure 7, pairs of rows are used for each exposure to ens",
"title": ""
},
{
"docid": "d518f1b11f2d0fd29dcef991afe17d17",
"text": "Applications must be able to synchronize accesses to operating system resources in order to ensure correctness in the face of concurrency and system failures. System transactions allow the programmer to specify updates to heterogeneous system resources with the OS guaranteeing atomicity, consistency, isolation, and durability (ACID). System transactions efficiently and cleanly solve persistent concurrency problems that are difficult to address with other techniques. For example, system transactions eliminate security vulnerabilities in the file system that are caused by time-of-check-to-time-of-use (TOCTTOU) race conditions. System transactions enable an unsuccessful software installation to roll back without disturbing concurrent, independent updates to the file system.\n This paper describes TxOS, a variant of Linux 2.6.22 that implements system transactions. TxOS uses new implementation techniques to provide fast, serializable transactions with strong isolation and fairness between system transactions and non-transactional activity. The prototype demonstrates that a mature OS running on commodity hardware can provide system transactions at a reasonable performance cost. For instance, a transactional installation of OpenSSH incurs only 10% overhead, and a non-transactional compilation of Linux incurs negligible overhead on TxOS. By making transactions a central OS abstraction, TxOS enables new transactional services. For example, one developer prototyped a transactional ext3 file system in less than one month.",
"title": ""
},
{
"docid": "36d7f776d7297f67a136825e9628effc",
"text": "Random walks are at the heart of many existing network embedding methods. However, such algorithms have many limitations that arise from the use of random walks, e.g., the features resulting from these methods are unable to transfer to new nodes and graphs as they are tied to vertex identity. In this work, we introduce the Role2Vec framework which uses the flexible notion of attributed random walks, and serves as a basis for generalizing existing methods such as DeepWalk, node2vec, and many others that leverage random walks. Our proposed framework enables these methods to be more widely applicable for both transductive and inductive learning as well as for use on graphs with attributes (if available). This is achieved by learning functions that generalize to new nodes and graphs. We show that our proposed framework is effective with an average AUC improvement of 16.55% while requiring on average 853x less space than existing methods on a variety of graphs.",
"title": ""
}
] |
scidocsrr
|
520f1dca620375534a26ec5941d88d95
|
A Lightweight Simulator for Autonomous Driving Motion Planning Development
|
[
{
"docid": "70710daefe747da7d341577947b6b8ff",
"text": "This paper describes an automated lane centering/changing control algorithm that was developed at General Motors Research and Development. Over the past few decades, there have been numerous studies in the autonomous vehicle motion control. These studies typically focused on improving the control accuracy of the autonomous driving vehicles. In addition to the control accuracy, driver/passenger comfort is also an important performance measure of the system. As an extension of authors' prior study, this paper further considers vehicle motion control to provide driver/passenger comfort based on the adjustment of the lane change maneuvering time in various traffic situations. While defining the driver/passenger comfort level is a human factor study topic, this paper proposes a framework to integrate the motion smoothness into the existing lane centering/changing control problem. The proposed algorithm is capable of providing smooth and aggressive lane change maneuvers according to traffic situation and driver preference. Several simulation results as well as on-road vehicle test results confirm the effectiveness of the proposed algorithm.",
"title": ""
},
{
"docid": "496bdd85a0aebb64d2f2b36c2050eb3a",
"text": "This research derives, implements, tunes and compares selected path tracking methods for controlling a car-like robot along a predetermined path. The scope includes commonly used m ethods found in practice as well as some theoretical methods found in various literature from other areas of rese arch. This work reviews literature and identifies important path tracking models and control algorithms from the vast back ground and resources. This paper augments the literature with a comprehensive collection of important path tracking idea s, a guide to their implementations and, most importantly, an independent and realistic comparison of the perfor mance of these various approaches. This document does not catalog all of the work in vehicle modeling and control; only a selection that is perceived to be important ideas when considering practical system identification, ease of implementation/tuning and computational efficiency. There are several other methods that meet this criteria, ho wever they are deemed similar to one or more of the approaches presented and are not included. The performance r esults, analysis and comparison of tracking methods ultimately reveal that none of the approaches work well in all applications a nd that they have some complementary characteristics. These complementary characteristics lead to an idea that a combination of methods may be useful for more general applications. Additionally, applications for which the methods in this paper do not provide adequate solutions are identified.",
"title": ""
}
] |
[
{
"docid": "4dda701b0bf796f044abf136af7b0a9c",
"text": "Legacy substation automation protocols and architectures typically provided basic functionality for power system automation and were designed to accommodate the technical limitations of the networking technology available for implementation. There has recently been a vast improvement in networking technology that has changed dramatically what is now feasible for power system automation in the substation. Technologies such as switched Ethernet, TCP/IP, high-speed wide area networks, and high-performance low-cost computers are providing capabilities that could barely be imagined when most legacy substation automation protocols were designed. In order to take advantage of modern technology to deliver additional new benefits to users of substation automation, the International Electrotechnical Commission (IEC) has developed and released a new global standard for substation automation: IEC 61850. The paper provides a basic technical overview of IEC 61850 and discusses the benefits of each major aspect of the standard. The concept of a virtual model comprising both physical and logical device models that includes a set of standardized communications services are described along with explanations of how these standardized models, object naming conventions, and communication services bring significant benefits to the substation automation user. New services to support self-describing devices and object-orient peer-to-peer data exchange are explained with an emphasis on how these services can be applied to reduce costs for substation automation. The substation configuration language (SCL) of IEC 61850 is presented with information on how the standardization of substation configuration will impact the future of substation automation. The paper concludes with a brief introduction to the UCA International Users Group as a forum where users and suppliers cooperate in improving substation automation with testing, education, and demonstrations of IEC 61850 and other IEC standards technology",
"title": ""
},
{
"docid": "7ec9f6b40242a732282520f1a4808d49",
"text": "In this paper, a novel technique to enhance the bandwidth of substrate integrated waveguide cavity backed slot antenna is demonstrated. The feeding technique to the cavity backed antenna has been modified by introducing offset feeding of microstrip line along with microstrip to grounded coplanar waveguide transition which helps to excite TE120 mode in the cavity and also to get improvement in impedance matching to the slot antenna simultaneously. The proposed antenna is designed to resonate in X band (8-12 GHz) and shows a resonance at 10.2 GHz with a bandwidth of 4.2% and a gain of 5.6 dBi, 15.6 dB front to back ratio and -30 dB maximum cross polarization level.",
"title": ""
},
{
"docid": "9d2ec490b7efb23909abdbf5f209f508",
"text": "Terrestrial Laser scanner (TLS) has been widely used in our recent architectural heritage projects and huge quantity of point cloud data was gotten. In order to process the huge quantity of point cloud data effectively and reconstruct their 3D models, more effective methods should be developed based on existing automatic or semiautomatic point cloud processing algorithms. Here introduce a new algorithm for rapid extracting the pillar features of Chinese ancient buildings from their point cloud data, the algorithm has the least human interaction in the data processing and is more efficient to extract pillars from point cloud data than existing feature extracting algorithms. With this algorithm we identify the pillar features by dividing the point cloud into slices firstly, and then get the projective parameters of pillar objects in selected slices, the next compare the local projective parameters in adjacent slices, the next combine them to get the global parameters of the pillars and at last reconstruct the 3d pillar models.",
"title": ""
},
{
"docid": "700c016add5f44c3fbd560d84b83b290",
"text": "This paper describes a novel framework, called I<scp>n</scp>T<scp>ens</scp>L<scp>i</scp> (\"intensely\"), for producing fast single-node implementations of dense tensor-times-matrix multiply (T<scp>tm</scp>) of arbitrary dimension. Whereas conventional implementations of T<scp>tm</scp> rely on explicitly converting the input tensor operand into a matrix---in order to be able to use any available and fast general matrix-matrix multiply (G<scp>emm</scp>) implementation---our framework's strategy is to carry out the T<scp>tm</scp> <i>in-place</i>, avoiding this copy. As the resulting implementations expose tuning parameters, this paper also describes a heuristic empirical model for selecting an optimal configuration based on the T<scp>tm</scp>'s inputs. When compared to widely used single-node T<scp>tm</scp> implementations that are available in the Tensor Toolbox and Cyclops Tensor Framework (C<scp>tf</scp>), In-TensLi's in-place and input-adaptive T<scp>tm</scp> implementations achieve 4× and 13× speedups, showing Gemm-like performance on a variety of input sizes.",
"title": ""
},
{
"docid": "9afb086e38b883676a503bb10fba3e8f",
"text": "This paper reports a structured literature survey of research in wearable technology for upper-extremity rehabilitation, e.g., after stroke, spinal cord injury, for multiple sclerosis patients or even children with cerebral palsy. A keyword based search returned 61 papers relating to this topic. Examination of the abstracts of these papers identified 19 articles describing distinct wearable systems aimed at upper extremity rehabilitation. These are classified in three categories depending on their functionality: movement and posture monitoring; monitoring and feedback systems that support rehabilitation exercises, serious games for rehabilitation training. We characterize the state of the art considering respectively the reported performance of these technologies, availability of clinical evidence, or known clinical applications.",
"title": ""
},
{
"docid": "e5f30c0d2c25b6b90c136d1c84ba8a75",
"text": "Modern systems for real-time hand tracking rely on a combination of discriminative and generative approaches to robustly recover hand poses. Generative approaches require the specification of a geometric model. In this paper, we propose a the use of sphere-meshes as a novel geometric representation for real-time generative hand tracking. How tightly this model fits a specific user heavily affects tracking precision. We derive an optimization to non-rigidly deform a template model to fit the user data in a number of poses. This optimization jointly captures the user's static and dynamic hand geometry, thus facilitating high-precision registration. At the same time, the limited number of primitives in the tracking template allows us to retain excellent computational performance. We confirm this by embedding our models in an open source real-time registration algorithm to obtain a tracker steadily running at 60Hz. We demonstrate the effectiveness of our solution by qualitatively and quantitatively evaluating tracking precision on a variety of complex motions. We show that the improved tracking accuracy at high frame-rate enables stable tracking of extended and complex motion sequences without the need for per-frame re-initialization. To enable further research in the area of high-precision hand tracking, we publicly release source code and evaluation datasets.",
"title": ""
},
{
"docid": "997993e389cdb1e40714e20b96927890",
"text": "Developer support forums are becoming more popular than ever. Crowdsourced knowledge is an essential resource for many developers yet it can raise concerns about the quality of the shared content. Most existing research efforts address the quality of answers posted by Q&A community members. In this paper, we explore the quality of questions and propose a method of predicting the score of questions on Stack Overflow based on sixteen factors related to questions' format, content and interactions that occur in the post. We performed an extensive investigation to understand the relationship between the factors and the scores of questions. The multiple regression analysis shows that the question's length of the code, accepted answer score, number of tags and the count of views, comments and answers are statistically significantly associated with the scores of questions. Our findings can offer insights to community-based Q&A sites for improving the content of the shared knowledge.",
"title": ""
},
{
"docid": "937c8e25440c52fc6fde84d59c60ba7a",
"text": "We describe how paperXML, a logical document structure markup for scholarly articles, is generated on the basis of OCR tool outputs. PaperXML has been initially developed for the ACL Anthology Searchbench. The main purpose was to robustly provide uniform access to sentences in ACL Anthology papers from the past 46 years, ranging from scanned, typewriter-written conference and workshop proceedings papers, up to recent high-quality typeset, born-digital journal articles, with varying layouts. PaperXML markup includes information on page and paragraph breaks, section headings, footnotes, tables, captions, boldface and italics character styles as well as bibliographic and publication metadata. The role of paperXML in the ACL Contributed Task Rediscovering 50 Years of Discoveries is to serve as fall-back source (1) for older, scanned papers (mostly published before the year 2000), for which born-digital PDF sources are not available, (2) for borndigital PDF papers on which the PDFExtract method failed, (3) for document parts where PDFExtract does not output useful markup such as currently for tables. We sketch transformation of paperXML into the ACL Contributed Task’s TEI P5 XML.",
"title": ""
},
{
"docid": "0886827f658cd8744e926bcc1396769f",
"text": "An integrator circuit is presented in this paper that has used Differntial Difference Current Conveyor Transconductance Amplifier (DDCCTA). It has one DDCCTA and one passive component. It has been realized only with first order low pass response. The operation of the circuit has been observed and enforced at a supply voltage of ± 1.8V (bias current 50pA) using cadence and the model parameters of gpdk 180nm CMOS technology. The worthy of the proposed circuit has been test checked using DDCCTA and further tested for its efficiency on a laboratory breadboard. In this commercially available AD844AN and LM13600 ICs are used. Further, the circuit presented in this paper is impermeable to noise, possessing low voltage and insensitive to temperature.",
"title": ""
},
{
"docid": "d56e64ac41b4437a4c1409f17a6c7cf2",
"text": "A high step-up forward flyback converter with nondissipative snubber for solar energy application is introduced here. High gain DC/DC converters are the key part of renewable energy systems .The designing of high gain DC/DC converters is imposed by severe demands. It produces high step-up voltage gain by using a forward flyback converter. The energy in the coupled inductor leakage inductance can be recycled via a nondissipative snubber on the primary side. It consists of a combination of forward and flyback converter on the secondary side. It is a hybrid type of forward and flyback converter, sharing the transformer for increasing the utilization factor. By stacking the outputs of them, extremely high voltage gain can be obtained with small volume and high efficiency even with a galvanic isolation. The separated secondary windings in low turn-ratio reduce the voltage stress of the secondary rectifiers, contributing to achievement of high efficiency. Here presents a high step-up topology employing a series connected forward flyback converter, which has a series connected output for high boosting voltage-transfer gain. A MATLAB/Simulink model of the Photo Voltaic (PV) system using Maximum Power Point Tracking (MPPT) has been implimented along with a DC/DC hardware prototype.",
"title": ""
},
{
"docid": "6a490e3bc9e03222ebaaa6484de4b6a6",
"text": "This paper introduces GlobalFS, a POSIX-compliant geographically distributed file system. GlobalFS builds on two fundamental building blocks, an atomic multicast group communication abstraction and multiple instances of a single-site data store. We define four execution modes and show how all file system operations can be implemented with these modes while ensuring strong consistency and tolerating failures. We describe the GlobalFS prototype in detail and report on an extensive performance assessment. We have deployed GlobalFS across all EC2 regions and show that the system scales geographically, providing performance comparable to other state-of-the-art distributed file systems for local commands and allowing for strongly consistent operations over the whole system. The code of GlobalFS is available as open source.",
"title": ""
},
{
"docid": "11644dafde30ee5608167c04cb1f511c",
"text": "Dynamic Adaptive Streaming over HTTP (DASH) enables the video player to adapt the bitrate of the video while streaming to ensure playback without interruptions even with varying throughput. A DASH server hosts multiple representations of the same video, each of which is broken down into small segments of fixed playback duration. The video bitrate adaptation is purely driven by the player at the endhost. Typically, the player employs an Adaptive Bitrate (ABR) algorithm, that determines the most appropriate representation for the next segment to be downloaded, based on the current network conditions and user preferences. The aim of an ABR algorithm is to dynamically manage the Quality of Experience (QoE) of the user during the playback. ABR algorithms manage the QoE by maximizing the bitrate while at the same time trying to minimize the other QoE metrics: playback start time, duration and number of buffering events, and the number of bitrate switching events. Typically, the ABR algorithms manage the QoE by using the measured network throughput and buffer occupancy to adapt the playback bitrate. However, due to the video encoding schemes employed, the sizes of the individual segments may vary significantly. For low bandwidth networks, fluctuation in the segment sizes results in inaccurate estimation the expected segment fetch times, thereby resulting in inaccurate estimation of the optimum bitrate. In this paper we demonstrate how the Segment-Aware Rate Adaptation (SARA) algorithm, that considers the measured throughput, buffer occupancy, and the variation in segment sizes helps in better management of the users' QoE in a DASH system. By comparing with a typical throughput-based and buffer-based adaptation algorithm under varying network conditions, we demonstrate that SARA manages the QoE better, especially in a low bandwidth network. We also developed AStream, an open-source Python-based emulated DASH-video player that was used to evaluate three different ABR algorithms and measure the QoE metrics with each of them.",
"title": ""
},
{
"docid": "0cc665089be9aa8217baac32f0385f41",
"text": "Deep neural networks have achieved near-human accuracy levels in various types of classification and prediction tasks including images, text, speech, and video data. However, the networks continue to be treated mostly as black-box function approximators, mapping a given input to a classification output. The next step in this human-machine evolutionary process — incorporating these networks into mission critical processes such as medical diagnosis, planning and control — requires a level of trust association with the machine output. Typically, statistical metrics are used to quantify the uncertainty of an output. However, the notion of trust also depends on the visibility that a human has into the working of the machine. In other words, the neural network should provide human-understandable justifications for its output leading to insights about the inner workings. We call such models as interpretable deep networks. Interpretability is not a monolithic notion. In fact, the subjectivity of an interpretation, due to different levels of human understanding, implies that there must be a multitude of dimensions that together constitute interpretability. In addition, the interpretation itself can be provided either in terms of the low-level network parameters, or in terms of input features used by the model. In this paper, we outline some of the dimensions that are useful for model interpretability, and categorize prior work along those dimensions. In the process, we perform a gap analysis of what needs to be done to improve model interpretability.",
"title": ""
},
{
"docid": "b17f5cfea81608e5034121113dbc8de4",
"text": "Every question asked by a therapist may be seen to embody some intent and to arise from certain assumptions. Many questions are intended to orient the therapist to the client's situation and experiences; others are asked primarily to provoke therapeutic change. Some questions are based on lineal assumptions about the phenomena being addressed; others are based on circular assumptions. The differences among these questions are not trivial. They tend to have dissimilar effects. This article explores these issues and offers a framework for distinguishing four major groups of questions. The framework may be used by therapists to guide their decision making about what kinds of questions to ask, and by researchers to study different interviewing styles.",
"title": ""
},
{
"docid": "ba695228c0fbaf91d6db972022095e98",
"text": "This study evaluated the critical period hypothesis for second language (L2) acquisition. The participants were 240 native speakers of Korean who differed according to age of arrival (AOA) in the United States (1 to 23 years), but were all experienced in English (mean length of residence 5 15 years). The native Korean participants’ pronunciation of English was evaluated by having listeners rate their sentences for overall degree of foreign accent; knowledge of English morphosyntax was evaluated using a 144-item grammaticality judgment test. As AOA increased, the foreign accents grew stronger, and the grammaticality judgment test scores decreased steadily. However, unlike the case for the foreign accent ratings, the effect of AOA on the grammaticality judgment test scores became nonsignificant when variables confounded with AOA were controlled. This suggested that the observed decrease in morphosyntax scores was not the result of passing a maturationally defined critical period. Additional analyses showed that the score for sentences testing knowledge of rule based, generalizable aspects of English morphosyntax varied as a function of how much education the Korean participants had received in the United States. The scores for sentences testing lexically based aspects of English morphosyntax, on the other hand, depended on how much the Koreans used English. © 1999 Academic Press",
"title": ""
},
{
"docid": "0d020e98448f2413e271c70e2a321fb4",
"text": "Material classification is an important application in computer vision. The inherent property of materials to partially polarize the reflected light can serve as a tool to classify them. In this paper, a real-time polarization sensing CMOS image sensor using a wire grid polarizer is proposed. The image sensor consist of an array of 128 × 128 pixels, occupies an area of 5 × 4 mm2 and it has been designed and fabricated in a 180-nm CMOS process. We show that this image sensor can be used to differentiate between metal and dielectric surfaces in real-time due to the different nature in partially polarizing the specular and diffuse reflection components of the reflected light. This is achieved by calculating the Fresnel reflection coefficients, the degree of polarization and the variations in the maximum and minimum transmitted intensities for varying specular angle of incidence. Differences in the physical parameters for various metal surfaces result in different surface reflection behavior, influencing the Fresnel reflection coefficients. It is also shown that the image sensor can differentiate among various metals by sensing the change in the polarization Fresnel ratio.",
"title": ""
},
{
"docid": "c240da3cde126606771de3e6b3432962",
"text": "Oscillations in the alpha and beta bands can display either an event-related blocking response or an event-related amplitude enhancement. The former is named event-related desynchronization (ERD) and the latter event-related synchronization (ERS). Examples of ERS are localized alpha enhancements in the awake state as well as sigma spindles in sleep and alpha or beta bursts in the comatose state. It was found that alpha band activity can be enhanced over the visual region during a motor task, or during a visual task over the sensorimotor region. This means ERD and ERS can be observed at nearly the same time; both form a spatiotemporal pattern, in which the localization of ERD characterizes cortical areas involved in task-relevant processing, and ERS marks cortical areas at rest or in an idling state.",
"title": ""
},
{
"docid": "95f1862369f279f20fc1fb10b8b41ea8",
"text": "This book contains information obtained from authentic and highly regarded sources. Reasonable efforts have been made to publish reliable data and information, but the author and publisher cannot assume responsibility for the validity of all materials or the consequences of their use. The authors and publishers have attempted to trace the copyright holders of all material reproduced in this publication and apologize to copyright holders if permission to publish in this form has not been obtained. If any copyright material has not been acknowledged please write and let us know so we may rectify in any future reprint. Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted , or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying, microfilming, and recording, or in any information storage or retrieval system, without written permission from the publishers. CCC is a not-for-profit organization that provides licenses and registration for a variety of users. For organizations that have been granted a photocopy license by the CCC, a separate system of payment has been arranged. Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe. Intrusion detection in wireless ad-hoc networks / editors, Nabendu Chaki and Rituparna Chaki. pages cm Includes bibliographical references and index. Contents Preface ix a b o u t t h e e d i t o r s xi c o n t r i b u t o r s xiii chaP t e r 1 intro d u c t i o n 1 Nova ru N De b , M a N a l i CH a k r a bor T y, a N D N a beN Du CH a k i chaP t e r 2 a r c h i t e c t u r e a n d o r g a n i z at i o n is s u e s 43 M a N a l i CH a k r a bor T y, Nova ru N De b , De bDu T Ta ba r M a N roy, a N D r i T u pa r N a CH a k i chaP t e r 3 routin g f o r …",
"title": ""
},
{
"docid": "03a7bcafb322ee8f7812d66abbd36ce6",
"text": "This paper presents a Deep Bidirectional Long Short Term Memory (LSTM) based Recurrent Neural Network architecture for text recognition. This architecture uses Connectionist Temporal Classification (CTC) for training to learn the labels of an unsegmented sequence with unknown alignment. This work is motivated by the results of Deep Neural Networks for isolated numeral recognition and improved speech recognition using Deep BLSTM based approaches. Deep BLSTM architecture is chosen due to its ability to access long range context, learn sequence alignment and work without the need of segmented data. Due to the use of CTC and forward backward algorithms for alignment of output labels, there are no unicode re-ordering issues, thus no need of lexicon or postprocessing schemes. This is a script independent and segmentation free approach. This system has been implemented for the recognition of unsegmented words of printed Oriya text. This system achieves 4.18% character level error and 12.11% word error rate on printed Oriya text.",
"title": ""
}
] |
scidocsrr
|
deefdb6e5bce6cd80d5f5d349a92c5f2
|
MoFAP: A Multi-level Representation for Action Recognition
|
[
{
"docid": "c439a5c8405d8ba7f831a5ac4b1576a7",
"text": "1. Cao, L., Liu, Z., Huang, T.S.: Cross-dataset action detection. In: CVPR (2010). 2. Yang, Y., Ramanan, D.: Articulated pose estimation with flexible mixtures-of-parts. In: CVPR (2011) 3. Lan, T., etc.: Discriminative figure-centric models for joint action localization and recognition. In: ICCV (2011). 4. Tian, Y., Sukthankar, R., Shah, M.: Spatiotemporal deformable part models for action detection. In: CVPR (2013). 5. Wang, H., Schmid, C.: Action recognition with improved trajectories. In: ICCV (2013). Experiments",
"title": ""
},
{
"docid": "112fc675cce705b3bab9cb66ca1c08da",
"text": "Our Approach, 0.66 GIST 29.7 Spa>al Pyramid HOG 29.8 Spa>al Pyramid SIFT 34.4 ROI-‐GIST 26.5 Scene DPM 30.4 MM-‐Scene 28.0 Object Bank 37.6 Ours 38.1 Ours+GIST 44.0 Ours+SP 46.4 Ours+GIST+SP 47.5 Ours+DPM 42.4 Ours+GIST+DPM 46.9 Ours+SP+DPM 46.4 GIST+SP+DPM 43.1 Ours+GIST+SP+DPM 49.4 Two key requirements • representa,ve: Need to occur frequently enough • discrimina,ve: Need to be different enough from the rest of the “visual world” Goal: a mid-‐level visual representa>on Experimental Analysis Bonus: works even be`er if weakly supervised!",
"title": ""
},
{
"docid": "2e8251644f82f3a965cf6360416eaaaa",
"text": "The past decade has witnessed a rapid proliferation of video cameras in all walks of life and has resulted in a tremendous explosion of video content. Several applications such as content-based video annotation and retrieval, highlight extraction and video summarization require recognition of the activities occurring in the video. The analysis of human activities in videos is an area with increasingly important consequences from security and surveillance to entertainment and personal archiving. Several challenges at various levels of processing-robustness against errors in low-level processing, view and rate-invariant representations at midlevel processing and semantic representation of human activities at higher level processing-make this problem hard to solve. In this review paper, we present a comprehensive survey of efforts in the past couple of decades to address the problems of representation, recognition, and learning of human activities from video and related applications. We discuss the problem at two major levels of complexity: 1) \"actions\" and 2) \"activities.\" \"Actions\" are characterized by simple motion patterns typically executed by a single human. \"Activities\" are more complex and involve coordinated actions among a small number of humans. We will discuss several approaches and classify them according to their ability to handle varying degrees of complexity as interpreted above. We begin with a discussion of approaches to model the simplest of action classes known as atomic or primitive actions that do not require sophisticated dynamical modeling. Then, methods to model actions with more complex dynamics are discussed. The discussion then leads naturally to methods for higher level representation of complex activities.",
"title": ""
},
{
"docid": "52bce24f8ec738f9b9dfd472acd6b101",
"text": "Human action recognition in videos is a challenging problem with wide applications. State-of-the-art approaches often adopt the popular bag-of-features representation based on isolated local patches or temporal patch trajectories, where motion patterns like object relationships are mostly discarded. This paper proposes a simple representation specifically aimed at the modeling of such motion relationships. We adopt global and local reference points to characterize motion information, so that the final representation can be robust to camera movement. Our approach operates on top of visual codewords derived from local patch trajectories, and therefore does not require accurate foreground-background separation, which is typically a necessary step to model object relationships. Through an extensive experimental evaluation, we show that the proposed representation offers very competitive performance on challenging benchmark datasets, and combining it with the bag-of-features representation leads to substantial improvement. On Hollywood2, Olympic Sports, and HMDB51 datasets, we obtain 59.5%, 80.6% and 40.7% respectively, which are the best reported results to date.",
"title": ""
}
] |
[
{
"docid": "aa5a0018ae771cf6cfbca628b5d1e1fd",
"text": "Cloud computing discusses about sharing any imaginable entity such as process units, storage devices or software. The provided service is utterly economical and expandable. Cloud computing attractive benefits entice huge interest of both business owners and cyber thefts. Consequently, the “computer forensic investigation” step into the play to find evidences against criminals. As a result of the new technology and methods used in cloud computing, the forensic investigation techniques face different types of issues while inspecting the case. The most profound challenges are difficulties to deal with different rulings obliged on variety of data saved in different locations, limited access to obtain evidences from cloud and even the issue of seizing the physical evidence for the sake of integrity validation or evidence presentation. This paper suggests a simple yet very useful solution to conquer the aforementioned issues in forensic investigation of cloud systems. Utilizing TPM in hypervisor, implementing multi-factor authentication and updating the cloud service provider policy to provide persistent storage devices are some of the recommended solutions. Utilizing the proposed solutions, the cloud service will be compatible to the current digital forensic investigation practices; alongside it brings the great advantage of being investigable and consequently the trust of the client.",
"title": ""
},
{
"docid": "1a6ece40fa87e787f218902eba9b89f7",
"text": "Learning a similarity function between pairs of objects is at the core of learning to rank approaches. In information retrieval tasks we typically deal with query-document pairs, in question answering -- question-answer pairs. However, before learning can take place, such pairs needs to be mapped from the original space of symbolic words into some feature space encoding various aspects of their relatedness, e.g. lexical, syntactic and semantic. Feature engineering is often a laborious task and may require external knowledge sources that are not always available or difficult to obtain. Recently, deep learning approaches have gained a lot of attention from the research community and industry for their ability to automatically learn optimal feature representation for a given task, while claiming state-of-the-art performance in many tasks in computer vision, speech recognition and natural language processing. In this paper, we present a convolutional neural network architecture for reranking pairs of short texts, where we learn the optimal representation of text pairs and a similarity function to relate them in a supervised way from the available training data. Our network takes only words in the input, thus requiring minimal preprocessing. In particular, we consider the task of reranking short text pairs where elements of the pair are sentences. We test our deep learning system on two popular retrieval tasks from TREC: Question Answering and Microblog Retrieval. Our model demonstrates strong performance on the first task beating previous state-of-the-art systems by about 3\\% absolute points in both MAP and MRR and shows comparable results on tweet reranking, while enjoying the benefits of no manual feature engineering and no additional syntactic parsers.",
"title": ""
},
{
"docid": "29f17b7d7239a2845d513976e4981d6a",
"text": "Agriculture is the backbone of the Indian economy. As all know that demand of agricultural products are increasing day by day as the population is ever increasing, so there is a need to minimize labor, limit the use of water and increase the production of crops. So there is a need to switch from traditional agriculture to the modern agriculture. The introduction of internet of things into agriculture modernization will help solve these problems. This paper presents the IOT based agriculture production system which will monitor or analyze the crop environment like temperature humidity and moisture content in soil. This paper uses the integration of RFID technology and sensors. As both have different objective sensors are for sensing and RIFD technology is for identification This will effectively solve the problem of farmer, increase the yield and saves his time, power, money.",
"title": ""
},
{
"docid": "53aa1145047cc06a1c401b04896ff1b1",
"text": "Due to the increasing availability of whole slide scanners facilitating digitization of histopathological tissue, there is a strong demand for the development of computer based image analysis systems. In this work, the focus is on the segmentation of the glomeruli constituting a highly relevant structure in renal histopathology, which has not been investigated before in combination with CNNs. We propose two different CNN cascades for segmentation applications with sparse objects. These approaches are applied to the problem of glomerulus segmentation and compared with conventional fully-convolutional networks. Overall, with the best performing cascade approach, single CNNs are outperformed and a pixel-level Dice similarity coefficient of 0.90 is obtained. Combined with qualitative and further object-level analyses the obtained results are assessed as excellent also compared to recent approaches. In conclusion, we can state that especially one of the proposed cascade networks proved to be a highly powerful tool for segmenting the renal glomeruli providing best segmentation accuracies and also keeping the computing time at a low level.",
"title": ""
},
{
"docid": "82535c102f41dc9d47aa65bd71ca23be",
"text": "We report on an experiment that examined the influence of anthropomorphism and perceived agency on presence, copresence, and social presence in a virtual environment. The experiment varied the level of anthropomorphism of the image of interactants: high anthropomorphism, low anthropomorphism, or no image. Perceived agency was manipulated by telling the participants that the image was either an avatar controlled by a human, or an agent controlled by a computer. The results support the prediction that people respond socially to both human and computer-controlled entities, and that the existence of a virtual image increases tele-presence. Participants interacting with the less-anthropomorphic image reported more copresence and social presence than those interacting with partners represented by either no image at all or by a highly anthropomorphic image of the other, indicating that the more anthropomorphic images set up higher expectations that lead to reduced presence when these expectations were not met.",
"title": ""
},
{
"docid": "e9b7eba9f15440ec7112a1938fad1473",
"text": "Recovery is not a new concept within mental health, although in recent times, it has come to the forefront of the policy agenda. However, there is no universal definition of recovery, and it is a contested concept. The aim of this study was to examine the British literature relating to recovery in mental health. Three contributing groups are identified: service users, health care providers and policy makers. A review of the literature was conducted by accessing all relevant published texts. A search was conducted using these terms: 'recovery', 'schizophrenia', 'psychosis', 'mental illness' and 'mental health'. Over 170 papers were reviewed. A thematic analysis was conducted. Six main themes emerged, which were examined from the perspective of the stakeholder groups. The dominant themes were identity, the service provision agenda, the social domain, power and control, hope and optimism, risk and responsibility. Consensus was found around the belief that good quality care should be made available to service users to promote recovery both as inpatient or in the community. However, the manner in which recovery was defined and delivered differed between the groups.",
"title": ""
},
{
"docid": "acbdb3f3abf3e56807a4e7f60869a2ee",
"text": "In this paper we present a new approach to high quality 3D object reconstruction. Starting from a calibrated sequence of color images, the algorithm is able to reconstruct both the 3D geometry and the texture. The core of the method is based on a deformable model, which defines the framework where texture and silhouette information can be fused. This is achieved by defining two external forces based on the images: a texture driven force and a silhouette driven force. The texture force is computed in two steps: a multi-stereo correlation voting approach and a gradient vector flow diffusion. Due to the high resolution of the voting approach, a multi-grid version of the gradient vector flow has been developed. Concerning the silhouette force, a new formulation of the silhouette constraint is derived. It provides a robust way to integrate the silhouettes in the evolution algorithm. As a consequence, we are able to recover the apparent contours of the model at the end of the iteration process. Finally, a texture map is computed from the original images for the reconstructed 3D model.",
"title": ""
},
{
"docid": "e7a51207dd5119ad22fbf35a7b4afca7",
"text": "AIM\nTo characterize types of university students based on satisfaction with life domains that affect eating habits, satisfaction with food-related life and subjective happiness.\n\n\nMATERIALS AND METHODS\nA questionnaire was applied to a nonrandom sample of 305 students of both genders in five universities in Chile. The questionnaire included the abbreviated Multidimensional Student's Life Satisfaction Scale (MSLSS), Satisfaction with Food-related Life Scale (SWFL) and the Subjective Happiness Scale (SHS). Eating habits, frequency of food consumption in and outside the place of residence, approximate height and weight and sociodemographic variables were measured.\n\n\nRESULTS\nUsing factor analysis, the five-domain structure of the MSLSS was confirmed with 26 of the 30 items of the abbreviated version: Family, Friends, Self, Environment and University. Using cluster analysis four types of students were distinguished that differ significantly in the MSLSS global and domain scores, SWFL and SHS scores, gender, ownership of a food allowance card funded by the Chilean government, importance attributed to food for well-being and socioeconomic status.\n\n\nCONCLUSIONS\nHigher levels of life satisfaction and happiness are associated with greater satisfaction with food-related life. Other major life domains that affect students' subjective well-being are Family, Friends, University and Self. Greater satisfaction in some domains may counterbalance the lower satisfaction in others.",
"title": ""
},
{
"docid": "8b34b86cb1ce892a496740bfbff0f9c5",
"text": "Common subexpression elimination is commonly employed to reduce the number of operations in DSP algorithms after decomposing constant multiplications into shifts and additions. Conventional optimization techniques for finding common subexpressions can optimize constant multiplications with only a single variable at a time, and hence cannot fully optimize the computations with multiple variables found in matrix form of linear systems like DCT, DFT etc. We transform these computations such that all common subexpressions involving any number of variables can be detected. We then present heuristic algorithms to select the best set of common subexpressions. Experimental results show the superiority of our technique over conventional techniques for common subexpression elimination.",
"title": ""
},
{
"docid": "b4e942dc860e127d6370d4425176d62f",
"text": "Several years ago we introduced the Balanced Scorecard (Kaplan and Norton 1992). We began with the premise that an exclusive reliance on financial measures in a management system is insufficient. Financial measures are lag indicators that report on the outcomes from past actions. Exclusive reliance on financial indicators could promote behavior that sacrifices long-term value creation for short-term performance (Porter 1992; AICPA 1994). The Balanced Scorecard approach retains measures of financial performance-the lagging outcome indicators-but supplements these with measures on the drivers, the lead indicators, of future financial performance.",
"title": ""
},
{
"docid": "4294edb250b333a0fe5863860bcb7a8a",
"text": "Present-day malware analysis techniques use both virtualized and emulated environments to analyze malware. The reason is that such environments provide isolation and system restoring capabilities, which facilitate automated analysis of malware samples. However, there exists a class of malware, called VM-aware malware, which is capable of detecting such environments and then hide its malicious behavior to foil the analysis. Because of the artifacts introduced by virtualization or emulation layers, it has always been and will always be possible for malware to detect virtual environments.\n The definitive way to observe the actual behavior of VM-aware malware is to execute them in a system running on real hardware, which is called a \"bare-metal\" system. However, after each analysis, the system must be restored back to the previous clean state. This is because running a malware program can leave the system in an instable/insecure state and/or interfere with the results of a subsequent analysis run. Most of the available state-of-the-art system restore solutions are based on disk restoring and require a system reboot. This results in a significant downtime between each analysis. Because of this limitation, efficient automation of malware analysis in bare-metal systems has been a challenge.\n This paper presents the design, implementation, and evaluation of a malware analysis framework for bare-metal systems that is based on a fast and rebootless system restore technique. Live system restore is accomplished by restoring the entire physical memory of the analysis operating system from another, small operating system that runs outside of the target OS. By using this technique, we were able to perform a rebootless restore of a live Windows system, running on commodity hardware, within four seconds. We also analyzed 42 malware samples from seven different malware families, that are known to be \"silent\" in a virtualized or emulated environments, and all of them showed their true malicious behavior within our bare-metal analysis environment.",
"title": ""
},
{
"docid": "b2132ee641e8b2ae5da9f921e3f0ecd5",
"text": "action into more concrete ones. Each dashed arrow maps a task into a plan of actions. Cambridge University Press 978-1-107-03727-4 — Automated Planning and Acting Malik Ghallab , Dana Nau , Paolo Traverso Excerpt More Information www.cambridge.org © in this web service Cambridge University Press 1.2 Conceptual View of an Actor 7 above it, and decides what activities need to be performed to carry out those tasks. Performing a task may involve reining it into lower-level steps, issuing subtasks to other components below it in the hierarchy, issuing commands to be executed by the platform, and reporting to the component that issued the task. In general, tasks in different parts of the hierarchymay involve concurrent use of different types of models and specialized reasoning functions. This example illustrates two important principles of deliberation: hierarchical organization and continual online processing. Hierarchically organized deliberation. Some of the actions the actor wishes to perform do not map directly into a command executable by its platform. An action may need further reinement and planning. This is done online and may require different representations, tools, and techniques from the ones that generated the task. A hierarchized deliberation process is not intended solely to reduce the search complexity of ofline plan synthesis. It is needed mainly to address the heterogeneous nature of the actions about which the actor is deliberating, and the corresponding heterogeneous representations and models that such deliberations require. Continual online deliberation.Only in exceptional circumstances will the actor do all of its deliberation ofline before executing any of its planned actions. Instead, the actor generally deliberates at runtime about how to carry out the tasks it is currently performing. The deliberation remains partial until the actor reaches its objective, including through lexible modiication of its plans and retrials. The actor’s predictive models are often limited. Its capability to acquire and maintain a broad knowledge about the current state of its environment is very restricted. The cost of minor mistakes and retrials are often lower than the cost of extensive modeling, information gathering, and thorough deliberation. Throughout the acting process, the actor reines and monitors its actions; reacts to events; and extends, updates, and repairs its plan on the basis of its perception focused on the relevant part of the environment. Different parts of the actor’s hierarchy often use different representations of the state of the actor and its environment. These representations may correspond to different amounts of detail in the description of the state and different mathematical constructs. In Figure 1.2, a graph of discrete locations may be used at the upper levels, while the lower levels may use vectors of continuous coniguration variables for the robot limbs. Finally, because complex deliberations can be compiled down by learning into lowlevel commands, the frontier between deliberation functions and the execution platform is not rigid; it evolves with the actor’s experience.",
"title": ""
},
{
"docid": "8e0ec02b22243b4afb04a276712ff6cf",
"text": "1 Morphology with or without Affixes The last few years have seen the emergence of several clearly articulated alternative approaches to morphology. One such approach rests on the notion that only stems of the so-called lexical categories (N, V, A) are morpheme \"pieces\" in the traditional sense—connections between (bundles of) meaning (features) and (bundles of) sound (features). What look like affixes on this view are merely the by-product of morphophonological rules called word formation rules (WFRs) that are sensitive to features associated with the lexical categories, called lexemes. Such an amorphous or affixless theory, adumbrated by Beard (1966) and Aronoff (1976), has been articulated most notably by Anderson (1992) and in major new studies by Aronoff (1992) and Beard (1991). In contrast, Lieber (1992) has refined the traditional notion that affixes as well as lexical stems are \"mor-pheme\" pieces whose lexical entries relate phonological form with meaning and function. For Lieber and other \"lexicalists\" (see, e.g., Jensen 1990), the combining of lexical items creates the words that operate in the syntax. In this paper we describe and defend a third theory of morphology , Distributed Morphology, 1 which combines features of the affixless and the lexicalist alternatives. With Anderson, Beard, and Aronoff, we endorse the separation of the terminal elements involved in the syntax from the phonological realization of these elements. With Lieber and the lexicalists, on the other hand, we take the phonological realization of the terminal elements in the syntax to be governed by lexical (Vocabulary) entries that relate bundles of morphosyntactic features to bundles of pho-nological features. We have called our approach Distributed Morphology (hereafter DM) to highlight the fact that the machinery of what traditionally has been called morphology is not concentrated in a single component of the gram",
"title": ""
},
{
"docid": "209472a5a37a3bb362e43d1b0abb7fd3",
"text": "The goals of the review are threefold: (a) to highlight the educational and employment consequences of poorly developed mathematical competencies; (b) overview the characteristics of children with mathematical learning disability (MLD) and with persistently low achievement (LA) in mathematics; and (c) provide a primer on cognitive science research that is aimed at identifying the cognitive mechanisms underlying these learning disabilities and associated cognitive interventions. Literatures on the educational and economic consequences of poor mathematics achievement were reviewed and integrated with reviews of epidemiological, behavioral genetic, and cognitive science studies of poor mathematics achievement. Poor mathematical competencies are common among adults and result in employment difficulties and difficulties in many common day-to-day activities. Among students, ∼ 7% of children and adolescents have MLD and another 10% show persistent LA in mathematics, despite average abilities in most other areas. Children with MLD and their LA peers have deficits in understanding and representing numerical magnitude, difficulties retrieving basic arithmetic facts from long-term memory, and delays in learning mathematical procedures. These deficits and delays cannot be attributed to intelligence but are related to working memory deficits for children with MLD, but not LA children. These individuals have identifiable number and memory delays and deficits that seem to be specific to mathematics learning. Interventions that target these cognitive deficits are in development and preliminary results are promising.",
"title": ""
},
{
"docid": "d76980f3a0b4e0dab21583b75ee16318",
"text": "We present a gold standard annotation of syntactic dependencies in the English Web Treebank corpus using the Stanford Dependencies standard. This resource addresses the lack of a gold standard dependency treebank for English, as well as the limited availability of gold standard syntactic annotations for informal genres of English text. We also present experiments on the use of this resource, both for training dependency parsers and for evaluating dependency parsers like the one included as part of the Stanford Parser. We show that training a dependency parser on a mix of newswire and web data improves performance on that type of data without greatly hurting performance on newswire text, and therefore gold standard annotations for non-canonical text can be valuable for parsing in general. Furthermore, the systematic annotation effort has informed both the SD formalism and its implementation in the Stanford Parser’s dependency converter. In response to the challenges encountered by annotators in the EWT corpus, we revised and extended the Stanford Dependencies standard, and improved the Stanford Parser’s dependency converter.",
"title": ""
},
{
"docid": "05b6f7fd65ae6eee7fb3ae44e98fb2f9",
"text": "We explore learning-based approaches for feedback control of a dexterous five-finger hand performing non-prehensile manipulation. First, we learn local controllers that are able to perform the task starting at a predefined initial state. These controllers are constructed using trajectory optimization with respect to locally-linear time-varying models learned directly from sensor data. In some cases, we initialize the optimizer with human demonstrations collected via teleoperation in a virtual environment. We demonstrate that such controllers can perform the task robustly, both in simulation and on the physical platform, for a limited range of initial conditions around the trained starting state. We then consider two interpolation methods for generalizing to a wider range of initial conditions: deep learning, and nearest neighbors. We find that nearest neighbors achieve higher performance under full observability, while a neural network proves advantages under partial observability: it uses only tactile and proprioceptive feedback but no feedback about the object (i.e. it performs the task blind) and learns a time-invariant policy. In contrast, the nearest neighbors method switches between time-varying local controllers based on the proximity of initial object states sensed via motion capture. While both generalization methods leave room for improvement, our work shows that (i) local trajectory-based controllers for complex non-prehensile manipulation tasks can be constructed from surprisingly small amounts of training data, and (ii) collections of such controllers can be interpolated to form more global controllers. Results are summarized in the supplementary video: https://youtu.be/E0wmO6deqjo",
"title": ""
},
{
"docid": "185d1c51d1ebd4428a9754a7c68d82d5",
"text": "Intersex disorders are rare congenital malformations with over 80% being diagnosed with congenital adrenal hyperplasia (CAH). It can be challenging to determine the correct gender at birth and a detailed understanding of the embryology and anatomy is crucial. The birth of a child with intersex is a true emergency situation and an immediate transfer to a medical center familiar with the diagnosis and management of intersex conditions should occur. In children with palpable gonads the presence of a Y chromosome is almost certain, since ovotestes or ovaries usually do not descend. Almost all those patients with male pseudohermaphroditism lack Mullerian structures due to MIS production from the Sertoli cells, but the insufficient testosterone stimulation leads to an inadequate male phenotype. The clinical manifestation of all CAH forms is characterized by the virilization of the outer genitalia. Surgical correction techniques have been developed and can provide satisfactory cosmetic and functional results. The discussion of the management of patients with intersex disorders continues. Current data challenge the past practice of sex reassignment. Further data are necessary to formulate guidelines and recommendations fitting for the individual situation of each patient. Until then the parents have to be supplied with the current data and outcome studies to make the correct choice for their child.",
"title": ""
},
{
"docid": "f3aa019816ae399c3fe834ffce3db53e",
"text": "This paper presents a method to incorporate 3D line segments in vision based SLAM. A landmark initialization method that relies on the Plucker coordinates to represent a 3D line is introduced: a Gaussian sum approximates the feature initial state and is updated as new observations are gathered by the camera. Once initialized, the landmarks state is estimated along an EKF-based SLAM approach: constraints associated with the Plucker representation are considered during the update step of the Kalman filter. The whole SLAM algorithm is validated in simulation runs and results obtained with real data are presented.",
"title": ""
},
{
"docid": "aaabe81401e33f7e2bb48dd6d5970f9b",
"text": "Brain tumor is the most life undermining sickness and its recognition is the most challenging task for radio logistics by manual detection due to varieties in size, shape and location and sort of tumor. So, detection ought to be quick and precise and can be obtained by automated segmentation methods on MR images. In this paper, neutrosophic sets based segmentation is performed to detect the tumor. MRI is an intense apparatus over CT to analyze the interior segments of the body and the tumor. Tumor is detected and true, false and indeterminacy values of tumor are determined by this technique and the proposed method produce the beholden results.",
"title": ""
},
{
"docid": "1e2e099c849b165b31b0c36040825464",
"text": "In recent years, there has been a substantial amount of research on quantum computers – machines that exploit quantum mechanical phenomena to solve mathematical problems that are difficult or intractable for conventional computers. If large-scale quantum computers are ever built, they will be able to break many of the public-key cryptosystems currently in use. This would seriously compromise the confidentiality and integrity of digital communications on the Internet and elsewhere. The goal of post-quantum cryptography (also called quantum-resistant cryptography) is to develop cryptographic systems that are secure against both quantum and classical computers, and can interoperate with existing communications protocols and networks. This Internal Report shares the National Institute of Standards and Technology (NIST)’s current understanding about the status of quantum computing and post-quantum cryptography, and outlines NIST’s initial plan to move forward in this space. The report also recognizes the challenge of moving to new cryptographic infrastructures and therefore emphasizes the need for agencies to focus on crypto agility.",
"title": ""
}
] |
scidocsrr
|
a8b17513342d77db582a95a9a6551bcb
|
Gyrophone: Recognizing Speech from Gyroscope Signals
|
[
{
"docid": "b053a55feb4d40652efc530705247153",
"text": "This paper presents design, implementation, and evaluation of AmbientSense, a real-time ambient sound recognition system on a smartphone. AmbientSense continuously recognizes user context by analyzing ambient sounds sampled from a smartphone's microphone. The phone provides a user with realtime feedback on recognised context. AmbientSense is implemented as an Android app and works in two modes: in autonomous mode processing is performed on the smartphone only. In server mode recognition is done by transmitting audio features to a server and receiving classification results back. We evaluated both modes in a set of 23 daily life ambient sound classes and describe recognition performance, phone CPU load, and recognition delay. The application runs with a fully charged battery up to 13.75 h on a Samsung Galaxy SII smartphone and up to 12.87 h on a Google Nexus One phone. Runtime and CPU load were similar for autonomous and server modes.",
"title": ""
},
{
"docid": "9b2f17d76fd0e44059d29083a931f2f1",
"text": "This paper presents a security system based on speaker identification. Mel frequency Cepstral Coefficients{MFCCs} have been used for feature extraction and vector quantization technique is used to minimize the amount of data to be handled .",
"title": ""
}
] |
[
{
"docid": "7aaf1de930b5aa3ca14fc8b0345999b0",
"text": "A disturbance in scapulohumeral rhythm may cause negative biomechanic effects on rotator cuff (RC). Alteration in scapular motion and shoulder pain can influence RC strength. Purpose of this study was to assess supraspinatus and infraspinatus strength in 29 overhead athletes with scapular dyskinesis, before and after 3 and 6 months of rehabilitation aimed to restore scapular musculature balance. A passive posterior soft tissues stretching was prescribed to balance shoulder mobility. Scapular dyskinesis patterns were evaluated according to Kibler et al. Clinical assessment was performed with the empty can (EC) test and infraspinatus strength test (IST). Strength values were recorded by a dynamometer; scores for pain were assessed with VAS scale. Changes of shoulder IR were measured. The force values increased at 3 months (P < 0.01) and at 6 months (P < 0.01). Changes of glenohumeral IR and decrease in pain scores were found at both follow-up. Outcomes registered on pain and strength confirm the role of a proper scapular position for an optimal length-tension relationship of the RC muscles. These data should encourage those caring for athletes to consider restoring of scapular musculature balance as essential part of the athletic training.",
"title": ""
},
{
"docid": "f3c2663cb0341576d754bb6cd5f2c0f5",
"text": "This article surveys deformable models, a promising and vigorously researched computer-assisted medical image analysis technique. Among model-based techniques, deformable models offer a unique and powerful approach to image analysis that combines geometry, physics and approximation theory. They have proven to be effective in segmenting, matching and tracking anatomic structures by exploiting (bottom-up) constraints derived from the image data together with (top-down) a priori knowledge about the location, size and shape of these structures. Deformable models are capable of accommodating the significant variability of biological structures over time and across different individuals. Furthermore, they support highly intuitive interaction mechanisms that, when necessary, allow medical scientists and practitioners to bring their expertise to bear on the model-based image interpretation task. This article reviews the rapidly expanding body of work on the development and application of deformable models to problems of fundamental importance in medical image analysis, including segmentation, shape representation, matching and motion tracking.",
"title": ""
},
{
"docid": "8946acc84c07e1163aadc04cf25f4840",
"text": "Leisure travelers increasingly prefer to book hotel online when considering the convenience and cost/ time saving. This research examines the direct and mediating effects of brand image, perceived price, trust, perceived value on consumers' booking intentions and compares the gender differences in online hotel booking. The outcomes confirm most of the direct and indirect path effects and are consistent with findings from previous studies. Consumers in Taiwan tend to believe the hotel price is affordable, the hotel brand is attractive, the hotel is trustworthy, the hotel will offer good value for the price and the likelihood of their booking intentions is high. Brand image, perceived price, and perceived value are the three critical determinants directly influencing purchase intentions. However, the impact of trust on purchase intentions is not significant. The differences between males and females on purchase intentions are not significant as well. Managerial implications of these results are discussed. © 2015 College of Management, National Cheng Kung University. Production and hosting by Elsevier Taiwan LLC. All rights reserved.",
"title": ""
},
{
"docid": "0396940ea3ced8d79ba3eda1fae2c469",
"text": "Adblocking tools like Adblock Plus continue to rise in popularity, potentially threatening the dynamics of advertising revenue streams. In response, a number of publishers have ramped up efforts to develop and deploy mechanisms for detecting and/or counter-blocking adblockers (which we refer to as anti-adblockers), effectively escalating the online advertising arms race. In this paper, we develop a scalable approach for identifying third-party services shared across multiple websites and use it to provide a first characterization of antiadblocking across the Alexa Top-5K websites. We map websites that perform anti-adblocking as well as the entities that provide anti-adblocking scripts. We study the modus operandi of these scripts and their impact on popular adblockers. We find that at least 6.7% of websites in the Alexa Top-5K use anti-adblocking scripts, acquired from 12 distinct entities – some of which have a direct interest in nourishing the online advertising industry.",
"title": ""
},
{
"docid": "31328c32656d25d00d45a714df0f6d94",
"text": "In a heterogeneous cellular network (HetNet) consisting of $M$ tiers of densely-deployed base stations (BSs), consider that each of the BSs in the HetNet that are associated with multiple users is able to simultaneously schedule and serve two users in a downlink time slot by performing the (power-domain) non-orthogonal multiple access (NOMA) scheme. This paper aims at the preliminary study on the downlink coverage performance of the HetNet with the non-cooperative and the proposed cooperative NOMA schemes. First, we study the coverage probability of the NOMA users for the non-cooperative NOMA scheme in which no BSs are coordinated to jointly transmit the NOMA signals for a particular cell and the coverage probabilities of the two NOMA users of the BSs in each tier are derived. We show that the coverage probabilities can be largely reduced if allocated transmit powers for the NOMA users are not satisfied with some constraints. Next, we study and derive the coverage probabilities for the proposed cooperative NOMA scheme in which the void BSs that are not tagged by any users are coordinated to enhance the far NOMA user in a particular cell. Our analyses show that cooperative NOMA can significantly improve the coverage of all NOMA users as long as the transmit powers for the NOMA users are properly allocated.",
"title": ""
},
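As background for the power-domain NOMA scheme the abstract above builds on, the standard two-user downlink formulation (textbook material, not the paper's derivation) allocates the larger power $P_1$ to the far user and $P_2$ to the near user, with $P_1 > P_2$: the far user decodes its signal treating the near user's as noise, while the near user first cancels the far user's signal via successive interference cancellation and then decodes its own.

```latex
R_{\text{far}}  = \log_2\!\Bigl(1 + \frac{P_1 \lvert h_{\text{far}}\rvert^2}{P_2 \lvert h_{\text{far}}\rvert^2 + \sigma^2}\Bigr),
\qquad
R_{\text{near}} = \log_2\!\Bigl(1 + \frac{P_2 \lvert h_{\text{near}}\rvert^2}{\sigma^2}\Bigr),
\qquad P_1 + P_2 = P .
```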
{
"docid": "bcda82b5926620060f65506ccbac042f",
"text": "This paper investigates spirolaterals for their beauty of form and the unexpected complexity arising from them. From a very simple generative procedure, spirolaterals can be created having great complexity and variation. Using mathematical and computer-based methods, issues of closure, variation, enumeration, and predictictability are discussed. A historical review is also included. The overriding interest in this research is to develop methods and procedures to investigate geometry for the purpose of inspiration for new architectural and sculptural forms. This particular phase will concern the two dimensional representations of spirolaterals.",
"title": ""
},
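The generative procedure behind spirolaterals is simple enough to state in a few lines: draw segments of length 1, 2, ..., n, turning by a fixed angle after each segment, and repeat the cycle until the path returns to its start with its original heading. The sketch below computes the vertex list; it is a generic illustration of the procedure, not the enumeration machinery of the paper.

```python
import math

def spirolateral(n, turn_deg=90, max_cycles=36):
    """Vertices of an order-n spirolateral with a fixed turning angle.

    Each cycle draws segments of length 1..n, turning by turn_deg after
    every segment; cycles repeat until the path closes or max_cycles is hit.
    """
    x, y, heading = 0.0, 0.0, 0.0
    pts = [(x, y)]
    for _ in range(max_cycles):
        for length in range(1, n + 1):
            x += length * math.cos(math.radians(heading))
            y += length * math.sin(math.radians(heading))
            pts.append((x, y))
            heading = (heading + turn_deg) % 360
        if abs(x) < 1e-9 and abs(y) < 1e-9 and heading % 360 == 0:
            break   # figure closed
    return pts

print(len(spirolateral(3)) - 1)   # segments drawn before closure (12 for n=3, 90 degrees)
```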
{
"docid": "a6f0d6d270520c60b060d0051d0a9877",
"text": "Multi-Object Tracking (MOT) is a challenging task in the complex scene such as surveillance and autonomous driving. In this paper, we propose a novel tracklet processing method to cleave and re-connect tracklets on crowd or longterm occlusion by Siamese Bi-Gated Recurrent Unit (GRU). The tracklet generation utilizes object features extracted by CNN and RNN to create the high-confidence tracklet candidates in sparse scenario. Due to mis-tracking in the generation process, the tracklets from different objects are split into several sub-tracklets by a bidirectional GRU. After that, a Siamese GRU based tracklet re-connection method is applied to link the sub-tracklets which belong to the same object to form a whole trajectory. In addition, we extract the track-let images from existing MOT datasets and propose a novel dataset to train our networks. The proposed dataset contains more than 95160 pedestrian images. It has 793 different persons in it. On average, there are 120 images for each person with positions and sizes. Experimental results demonstrate the advantages of our model over the state-of-the-art methods on MOTI6.",
"title": ""
},
{
"docid": "39755a818e818d2e10b0bac14db6c347",
"text": "Algorithms to solve variational regularization of ill-posed inverse problems usually involve operators that depend on a collection of continuous parameters. When these operators enjoy some (local) regularity, these parameters can be selected using the socalled Stein Unbiased Risk Estimate (SURE). While this selection is usually performed by exhaustive search, we address in this work the problem of using the SURE to efficiently optimize for a collection of continuous parameters of the model. When considering non-smooth regularizers, such as the popular l1-norm corresponding to soft-thresholding mapping, the SURE is a discontinuous function of the parameters preventing the use of gradient descent optimization techniques. Instead, we focus on an approximation of the SURE based on finite differences as proposed in [51]. Under mild assumptions on the estimation mapping, we show that this approximation is a weakly differentiable function of the parameters and its weak gradient, coined the Stein Unbiased GrAdient estimator of the Risk (SUGAR), provides an asymptotically (with respect to the data dimension) unbiased estimate of the gradient of the risk. Moreover, in the particular case of softthresholding, it is proved to be also a consistent estimator. This gradient estimate can then be used as a basis to perform a quasi-Newton optimization. The computation of the SUGAR relies on the closed-form (weak) differentiation of the non-smooth function. We provide its expression for a large class of iterative methods including proximal splitting ones and apply our strategy to regularizations involving non-smooth convex structured penalties. Illustrations on various image restoration and matrix completion problems are given.",
"title": ""
},
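For the soft-thresholding case mentioned above, the SURE has a well-known closed form in the classical denoising setting (this is textbook material, not the paper's more general framework): for observations y = x + w with w ~ N(0, sigma^2 I) and the estimate soft(y, lambda), SURE(lambda) = -n*sigma^2 + sum_i min(y_i^2, lambda^2) + 2*sigma^2 * #{i : |y_i| > lambda}. The sketch below evaluates it over a grid of thresholds; the variable names and the synthetic signal are my own.

```python
import numpy as np

def soft(y, lam):
    """Soft-thresholding (the proximal operator of the l1 norm)."""
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

def sure_soft(y, lam, sigma):
    """SURE of E||soft(y, lam) - x||^2 when y = x + N(0, sigma^2 I)."""
    n = y.size
    fit = np.sum(np.minimum(y**2, lam**2))       # ||soft(y, lam) - y||^2
    df = np.count_nonzero(np.abs(y) > lam)       # divergence of the estimator
    return -n * sigma**2 + fit + 2 * sigma**2 * df

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 3, 50), np.zeros(950)])   # sparse signal
sigma = 1.0
y = x + rng.normal(0, sigma, x.size)

grid = np.linspace(0.0, 4.0, 81)
sure = [sure_soft(y, lam, sigma) for lam in grid]
best = grid[int(np.argmin(sure))]
print(f"lambda selected by SURE: {best:.2f}")   # close to the true-risk minimizer
```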
{
"docid": "12579b211831d9df508ecd1f90469399",
"text": "This article considers stochastic algorithms for efficiently solving a class of large scale non-linear least squares (NLS) problems which frequently arise in applications. We propose eight variants of a practical randomized algorithm where the uncertainties in the major stochastic steps are quantified. Such stochastic steps involve approximating the NLS objective function using Monte-Carlo methods, and this is equivalent to the estimation of the trace of corresponding symmetric positive semi-definite (SPSD) matrices. For the latter, we prove tight necessary and sufficient conditions on the sample size (which translates to cost) to satisfy the prescribed probabilistic accuracy. We show that these conditions are practically computable and yield small sample sizes. They are then incorporated in our stochastic algorithm to quantify the uncertainty in each randomized step. The bounds we use are applications of more general results regarding extremal tail probabilities of linear combinations of gamma distributed random variables. We derive and prove new results concerning the maximal and minimal tail probabilities of such linear combinations, which can be considered independently of the rest of this paper.",
"title": ""
},
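The Monte-Carlo trace estimation referred to above is, in its simplest form, the Hutchinson estimator: for an SPSD matrix A, tr(A) = E[z^T A z] when the entries of z are i.i.d. Rademacher (+1/-1). The sketch below is the generic textbook estimator, not the paper's sample-size analysis, and only needs matrix-vector products.

```python
import numpy as np

def hutchinson_trace(matvec, n, num_samples, rng=None):
    """Monte-Carlo estimate of tr(A) using only matrix-vector products A @ z.

    matvec: callable returning A @ z for a vector z of length n.
    Rademacher probe vectors make each z^T (A z) an unbiased trace estimate.
    """
    rng = np.random.default_rng() if rng is None else rng
    total = 0.0
    for _ in range(num_samples):
        z = rng.choice([-1.0, 1.0], size=n)
        total += z @ matvec(z)
    return total / num_samples

rng = np.random.default_rng(1)
B = rng.normal(size=(200, 200))
A = B @ B.T                      # SPSD test matrix
est = hutchinson_trace(lambda z: A @ z, 200, num_samples=500, rng=rng)
print(est, np.trace(A))          # the two values should be close
```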
{
"docid": "ec1f585fbb97c8e6468dd992e1a933ff",
"text": "Scientists continue to find challenges in the ever increasing amount of information that has been produced on a world wide scale, during the last decades. When writing a paper, an author searches for the most relevant citations that started or were the foundation of a particular topic, which would very likely explain the thinking or algorithms that are employed. The search is usually done using specific keywords submitted to literature search engines such as Google Scholar and CiteSeer. However, finding relevant citations is distinctive from producing articles that are only topically similar to an author's proposal. In this paper, we address the problem of citation recommendation using a singular value decomposition approach. The models are trained and evaluated on the Citeseer digital library. The results of our experiments show that the proposed approach achieves significant success when compared with collaborative filtering methods on the citation recommendation task.",
"title": ""
},
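A stripped-down version of the SVD idea described above can be written directly with NumPy: factor a (paper x candidate-citation) matrix, project a query paper's partial citation vector into the latent space, and rank candidates by the reconstructed scores. This is a generic latent-factor sketch under my own variable names, not the authors' exact model or data.

```python
import numpy as np

def fit_latent_space(C, k):
    """Rank-k SVD of a binary citation matrix C (rows: papers, cols: candidates)."""
    U, s, Vt = np.linalg.svd(C, full_matrices=False)
    return Vt[:k]                      # k latent directions over candidate citations

def recommend(query_citations, Vk, top_n=5):
    """Score every candidate citation for a paper described by its known citations."""
    latent = Vk @ query_citations      # project the partial citation vector
    scores = Vk.T @ latent             # reconstruct scores for all candidates
    scores[query_citations > 0] = -np.inf   # do not re-recommend known citations
    return np.argsort(scores)[::-1][:top_n]

# Tiny synthetic example: 6 papers, 8 candidate citations.
rng = np.random.default_rng(0)
C = (rng.random((6, 8)) > 0.6).astype(float)
Vk = fit_latent_space(C, k=3)
print(recommend(C[0], Vk))
```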
{
"docid": "572fbd0682b1b6ded39e8ef42325ad7c",
"text": "Here, we describe a real planning problem in the tramp shipping industry. A tramp shipping company may have a certain amount of contract cargoes that it is committed to carry, and tries to maximize the profit from optional cargoes. For real long-term contracts, the sizes of the cargoes are flexible. However, in previous research within tramp ship routing, the cargo quantities are regarded as fixed. We present an MP-model of the problem and a set partitioning approach to solve the multi-ship pickup and delivery problem with time windows and flexible cargo sizes. The columns are generated a priori and the most profitable ship schedule for each cargo set–ship combination is included in the set partitioning problem. We have tested the method on several real-life cases, and the results show the potential economical effects for the tramp shipping companies by utilizing flexible cargo sizes when generating the schedules. Journal of the Operational Research Society (2007) 58, 1167–1177. doi:10.1057/palgrave.jors.2602263 Published online 16 August 2006",
"title": ""
},
{
"docid": "68387fac4e4e320b522f928c98127e9d",
"text": "Nowadays, industrial robots play an important role automating recurring manufacturing tasks. New trends towards Smart Factory and Industry 4.0 however take a more productdriven approach and demand for more flexibility of the robotic systems. When a varying order of processing steps is required, intra-factory logistics has to cope with the new challenges. To achieve this flexibility, mobile robots can be used for transporting goods, or even mobile manipulators consisting of a mobile platform and a robot arm for independently grasping work pieces and manipulating them while in motion. Working with mobile robots however poses new challenges that did not yet occur for industrial manipulators: First, mobile robots have a greater position inaccuracy and typically work in not fully structured environments, requiring to interpret sensor data and to more often react to events from the environment. Furthermore, independent mobile robots introduce the aspect of distribution. For mobile manipulators, an additional challenge arises from the combination of platform and arm, where platform and arm, but also sensors have to be coordinated to achieve the desired behavior. The main contribution of this work is an approach that allows the object-oriented modeling and coordination of mobile robots, supporting the cooperation of mobile manipulators. Within a mobile manipulator, the approach allows to define real-time reactions to sensor data and to synchronize the different actuators and sensors present, allowing sensor-aware combinations of motions for platform and arm. Moreover, the approach facilitates an easy way of programming, provides means to handle kinematic restrictions or redundancy, and supports advanced capabilities such as impedance control to mitigate position uncertainty. Working with multiple independent mobile robots, each has a different knowledge about its environment, based on the available sensors. These different views are modeled, allowing consistent coordination of robots in applications using the data available on each robot. To cope with geometric uncertainty, sensors are modeled and the relationship between their measurements and geometric aspects is defined. Based on these definitions and incoming sensor data, position estimates are automatically derived. Additionally, the more dynamic environment leads to different possible outcomes of task execution. These are explicitly modeled and can be used to define reactive behavior. The approach was successfully evaluated based on two application examples, ranging from physical interaction between two mobile manipulators handing over a work-piece to gesture control of a quadcopter for carrying goods.",
"title": ""
},
{
"docid": "66e2128ebdbd5c348b775d70de1f7127",
"text": "With the rapid development of various online video sharing platforms, large numbers of videos are produced every day. Video affective content analysis has become an active research area in recent years, since emotion plays an important role in the classification and retrieval of videos. In this work, we explore to train very deep convolutional networks using ConvLSTM layers to add more expressive power for video affective content analysis models. Network-in-network principles, batch normalization, and convolution auto-encoder are applied to ensure the effectiveness of the model. Then an extended emotional representation model is used as an emotional annotation. In addition, we set up a database containing two thousand fragments to validate the effectiveness of the proposed model. Experimental results on the proposed data set show that deep learning approach based on ConvLSTM outperforms the traditional baseline and reaches the state-of-the-art system.",
"title": ""
},
{
"docid": "8e8bd847c0d5e3e04d7c2f8f8f42ea63",
"text": "The community-based generation of content has been tremendously successful in the World-Wide Web - people help each other by providing information that could be useful to others. We are trying to transfer this approach to robotics in order to help robots acquire the vast amounts of knowledge needed to competently perform everyday tasks. RoboEarth is intended to be a web community by robots for robots to autonomously share descriptions of tasks they have learned, object models they have created, and environments they have explored. In this paper, we report on the formal language we developed for encoding this information and present our approaches to solve the inference problems related to finding information, to determining if information is usable by a robot, and to grounding it on the robot platform.",
"title": ""
},
{
"docid": "acd6557e2d0ffa9bd6a6285ece5a4c98",
"text": "In recent years, the large amount of labeled data available has also helped tend research toward using minimal domain knowledge, e.g., in deep neural network research. However, in many situations, data is limited and of poor quality. Can domain knowledge be useful in such a setting? In this paper, we propose domain adapted neural networks (DANN) to explore how domain knowledge can be integrated into model training for deep networks. In particular, we incorporate loss terms for knowledge available as monotonicity constraints and approximation constraints. We evaluate our model on both synthetic data generated using the popular Bohachevsky function and a real-world dataset for predicting oxygen solubility in water. In both situations, we find that our DANN model outperforms its domain-agnostic counterpart yielding an overall mean performance improvement of 19.5% with a worst- and best-case performance improvement of 4% and 42.7%, respectively.",
"title": ""
},
{
"docid": "a0c4e7dd7709e41cc5f877a33f021e3f",
"text": "Security Visualization is a very young term. It expresses the idea that common visualization techniques have been designed for use cases that are not supportive of security-related data, demanding novel techniques fine tuned for the purpose of thorough analysis. Significant amount of work has been published in this area, but little work has been done to study this emerging visualization discipline. We offer a comprehensive review of network security visualization and provide a taxonomy in the form of five use-case classes encompassing nearly all recent works in this area. We outline the incorporated visualization techniques and data sources and provide an informative table to display our findings. From the analysis of these systems, we examine issues and concerns regarding network security visualization and provide guidelines and directions for future researchers and visual system developers.",
"title": ""
},
{
"docid": "36ae895829fda8c8b58bf49eaa607695",
"text": "In this paper, we describe SymDiff, a language-agnostic tool for equivalence checking and displaying semantic (behavioral) differences over imperative programs. The tool operates on an intermediate verification language Boogie, for which translations exist from various source languages such as C, C# and x86. We discuss the tool and the front-end interface to target various source languages. Finally, we provide a brief description of the front-end for C programs.",
"title": ""
},
{
"docid": "d43fcfcf45c0024c5f2a107b31804f86",
"text": "Weproposeanimagesegmentationalgorithmthatis basedonspatially adapti ve color and texture features. The featuresare first developedindependently , andthencombinedto obtainanoverall segmentation.Texture featureestimationrequiresa finite neighborhoodwhich limits the spatialresolutionof texture segmentation, while color segmentationprovidesaccurateandpreciseedge localization.We combinea previously proposedadapti ve clustering algorithmfor color segmentationwith a simplebut effective texturesegmentationapproachto obtainanoverall imagesegmentation. Our focus is in the domainof photographicimageswith anessentiallyunlimitedrangeof topics.Theimagesareassumed to be of relatively low resolutionandmay be degradedor compressed.",
"title": ""
},
{
"docid": "ba17adc705d92a5a7d6122a6bd25c732",
"text": "Penile size is a major concern among men all over world. Men throughout history and still today, feel the need to enlarge their penis in order to improve their self-esteem and sexual performance. There are a variety of social, cultural, and psychological aspects regarding the size of men genitals, resulting such that, men often feel the need to enlarge their penis. “Bigger is better” is still a relevant belief in our days and based on the “phallic identity” – the tendency of males to seek their personality in their penis. This trend is supported by the numerous and still increasing number of penile enlargement procedures performed in the past years and today, generally in men with normal size penises. This condition is called “the locker room syndrome” – men concerned about their flaccid penile size even though in most cases their penile length and girth are normal. however, the surgical procedures available for changing penile appearance remains highly controversial mainly due to high complication rates and low satisfactory surgical outcomes.",
"title": ""
}
] |
scidocsrr
|
0df4457737de4ada7f60aba6a12979cd
|
Natural Language Inference with Attentive Neural Networks
|
[
{
"docid": "0201a5f0da2430ec392284938d4c8833",
"text": "Natural language sentence matching is a fundamental technology for a variety of tasks. Previous approaches either match sentences from a single direction or only apply single granular (wordby-word or sentence-by-sentence) matching. In this work, we propose a bilateral multi-perspective matching (BiMPM) model. Given two sentences P and Q, our model first encodes them with a BiLSTM encoder. Next, we match the two encoded sentences in two directions P against Q and Q against P . In each matching direction, each time step of one sentence is matched against all timesteps of the other sentence from multiple perspectives. Then, another BiLSTM layer is utilized to aggregate the matching results into a fixed-length matching vector. Finally, based on the matching vector, a decision is made through a fully connected layer. We evaluate our model on three tasks: paraphrase identification, natural language inference and answer sentence selection. Experimental results on standard benchmark datasets show that our model achieves the state-of-the-art performance on all tasks.",
"title": ""
},
{
"docid": "b4ab51818d868b2f9796540c71a7bd17",
"text": "We propose a simple neural architecture for natural language inference. Our approach uses attention to decompose the problem into subproblems that can be solved separately, thus making it trivially parallelizable. On the Stanford Natural Language Inference (SNLI) dataset, we obtain state-of-the-art results with almost an order of magnitude fewer parameters than previous work and without relying on any word-order information. Adding intra-sentence attention that takes a minimum amount of order into account yields further improvements.",
"title": ""
},
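The core of the decomposable-attention idea summarized above is an alignment matrix between the two sentences followed by soft attention in both directions. The NumPy sketch below shows only that alignment step on pre-computed word vectors (the feed-forward attend, compare and aggregate networks of the model are omitted), so treat it as an illustration of the mechanism rather than the full model.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def decomposable_alignment(A, B):
    """Soft-align two sentences given their word embeddings.

    A: (la, d) premise embeddings, B: (lb, d) hypothesis embeddings.
    Returns beta (what in B each word of A attends to) and alpha
    (what in A each word of B attends to).
    """
    E = A @ B.T                        # (la, lb) unnormalized alignment scores
    beta = softmax(E, axis=1) @ B      # (la, d): B summarized for each word of A
    alpha = softmax(E, axis=0).T @ A   # (lb, d): A summarized for each word of B
    return beta, alpha

rng = np.random.default_rng(0)
premise = rng.normal(size=(5, 16))     # 5 words, 16-dim embeddings
hypothesis = rng.normal(size=(7, 16))
beta, alpha = decomposable_alignment(premise, hypothesis)
print(beta.shape, alpha.shape)         # (5, 16) (7, 16)
```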
{
"docid": "ba16a6634b415dd2c478c83e1f65cb3c",
"text": "Reasoning and inference are central to human and artificial intelligence. Modeling inference in human language is notoriously challenging but is fundamental to natural language understanding and many applications. With the availability of large annotated data, neural network models have recently advanced the field significantly. In this paper, we present a new state-of-the-art result, achieving the accuracy of 88.3% on the standard benchmark, the Stanford Natural Language Inference dataset. This result is achieved first through our enhanced sequential encoding model, which outperforms the previous best model that employs more complicated network architectures, suggesting that the potential of sequential LSTM-based models have not been fully explored yet in previous work. We further show that by explicitly considering recursive architectures, we achieve additional improvement. Particularly, incorporating syntactic parse information contributes to our best result; it improves the performance even when the parse information is added to an already very strong system.",
"title": ""
}
] |
[
{
"docid": "8100e82d990acf1ec1e8551b479fc71f",
"text": "String similarity search and join are two important operations in data cleaning and integration, which extend traditional exact search and exact join operations in databases by tolerating the errors and inconsistencies in the data. They have many real-world applications, such as spell checking, duplicate detection, entity resolution, and webpage clustering. Although these two problems have been extensively studied in the recent decade, there is no thorough survey. In this paper, we present a comprehensive survey on string similarity search and join. We first give the problem definitions and introduce widely-used similarity functions to quantify the similarity. We then present an extensive set of algorithms for string similarity search and join. We also discuss their variants, including approximate entity extraction, type-ahead search, and approximate substring matching. Finally, we provide some open datasets and summarize some research challenges and open problems.",
"title": ""
},
{
"docid": "134ecc62958fa9bb930ff934c5fad7a3",
"text": "We extend our methods from [24] to reprove the Local Langlands Correspondence for GLn over p-adic fields as well as the existence of `-adic Galois representations attached to (most) regular algebraic conjugate self-dual cuspidal automorphic representations, for which we prove a local-global compatibility statement as in the book of Harris-Taylor, [10]. In contrast to the proofs of the Local Langlands Correspondence given by Henniart, [13], and Harris-Taylor, [10], our proof completely by-passes the numerical Local Langlands Correspondence of Henniart, [11]. Instead, we make use of a previous result from [24] describing the inertia-invariant nearby cycles in certain regular situations.",
"title": ""
},
{
"docid": "05bc787d000ecf26c8185b084f8d2498",
"text": "Recommendation system is a type of information filtering systems that recommend various objects from a vast variety and quantity of items which are of the user interest. This results in guiding an individual in personalized way to interesting or useful objects in a large space of possible options. Such systems also help many businesses to achieve more profits to sustain in their filed against their rivals. But looking at the amount of information which a business holds it becomes difficult to identify the items of user interest. Therefore personalization or user profiling is one of the challenging tasks that give access to user relevant information which can be used in solving the difficult task of classification and ranking items according to an individual’s interest. Profiling can be done in various ways such as supervised or unsupervised, individual or group profiling, distributive or and non-distributive profiling. Our focus in this paper will be on the dataset which we will use, we identify some interesting facts by using Weka Tool that can be used for recommending the items from dataset .Our aim is to present a novel technique to achieve user profiling in recommendation system. KeywordsMachine Learning; Information Retrieval; User Profiling",
"title": ""
},
{
"docid": "b8b02f98f21b81ad5e25a73f5f95598f",
"text": "Datalog is a family of ontology languages that combine good computational properties with high expressive power. Datalog languages are provably able to capture many relevant Semantic Web languages. In this paper we consider the class of weakly-sticky (WS) Datalog programs, which allow for certain useful forms of joins in rule bodies as well as extending the well-known class of weakly-acyclic TGDs. So far, only nondeterministic algorithms were known for answering queries on WS Datalog programs. We present novel deterministic query answering algorithms under WS Datalog. In particular, we propose: (1) a bottom-up grounding algorithm based on a query-driven chase, and (2) a hybrid approach based on transforming a WS program into a so-called sticky one, for which query rewriting techniques are known. We discuss how our algorithms can be optimized and effectively applied for query answering in real-world scenarios.",
"title": ""
},
{
"docid": "ace9af1a19077f66b57275677cac60cb",
"text": "Recently several researchers have investigated techniques for using data to learn Bayesian networks containing compact representations for the conditional probability distributions (CPDs) stored at each node. The majority of this work has concentrated on using decision-tree representations for the CPDs. In addition, researchers typically apply non-Bayesian (or asymptotically Bayesian) scoring functions such as MDL to evaluate the goodness-oft of networks to the data. In this paper we investigate a Bayesian approach to learning Bayesian networks that contain the more general decision-graph representations of the CPDs. First, we describe how to evaluate the posterior probability|that is, the Bayesian score|of such a network, given a database of observed cases. Second, we describe various search spaces that can be used, in conjunction with a scoring function and a search procedure, to identify one or more high-scoring networks. Finally, we present an experimental evaluation of the search spaces, using a greedy algorithm and a Bayesian scoring function.",
"title": ""
},
{
"docid": "c07a0053f43d9e1f98bb15d4af92a659",
"text": "We present a zero-shot learning approach for text classification, predicting which natural language understanding domain can handle a given utterance. Our approach can predict domains at runtime that did not exist at training time. We achieve this extensibility by learning to project utterances and domains into the same embedding space while generating each domain-specific embedding from a set of attributes that characterize the domain. Our model is a neural network trained via ranking loss. We evaluate the performance of this zero-shot approach on a subset of a virtual assistant’s third-party domains and show the effectiveness of the technique on new domains not observed during training. We compare to generative baselines and show that our approach requires less storage and performs better on new domains.",
"title": ""
},
{
"docid": "9c80e8db09202335f427ebf02659eac3",
"text": "The present paper reviews and critiques studies assessing the relation between sleep patterns, sleep quality, and school performance of adolescents attending middle school, high school, and/or college. The majority of studies relied on self-report, yet the researchers approached the question with different designs and measures. Specifically, studies looked at (1) sleep/wake patterns and usual grades, (2) school start time and phase preference in relation to sleep habits and quality and academic performance, and (3) sleep patterns and classroom performance (e.g., examination grades). The findings strongly indicate that self-reported shortened total sleep time, erratic sleep/wake schedules, late bed and rise times, and poor sleep quality are negatively associated with academic performance for adolescents from middle school through the college years. Limitations of the current published studies are also discussed in detail in this review.",
"title": ""
},
{
"docid": "b3a9ad04e7df1b2250f0a7b625509efd",
"text": "Emotions are very important in human-human communication but are usually ignored in human-computer interaction. Recent work focuses on recognition and generation of emotions as well as emotion driven behavior. Our work focuses on the use of emotions in dialogue systems that can be used with speech input or as well in multi-modal environments.This paper describes a framework for using emotional cues in a dialogue system and their informational characterization. We describe emotion models that can be integrated into the dialogue system and can be used in different domains and tasks. Our application of the dialogue system is planned to model multi-modal human-computer-interaction with a humanoid robotic system.",
"title": ""
},
{
"docid": "0ca445eed910eacccbb9f2cc9569181b",
"text": "Nanotechnology promises new solutions for many applications in the biomedical, industrial and military fields as well as in consumer and industrial goods. The interconnection of nanoscale devices with existing communication networks and ultimately the Internet defines a new networking paradigm that is further referred to as the Internet of Nano-Things. Within this context, this paper discusses the state of the art in electromagnetic communication among nanoscale devices. An in-depth view is provided from the communication and information theoretic perspective, by highlighting the major research challenges in terms of channel modeling, information encoding and protocols for nanonetworks and the Internet of Nano-Things.",
"title": ""
},
{
"docid": "ec0da5cea716d1270b2143ffb6c610d6",
"text": "This study focuses on the development of a web-based Attendance Register System or formerly known as ARS. The development of this system is motivated due to the fact that the students’ attendance records are one of the important elements that reflect their academic achievements in the higher academic institutions. However, the current practice implemented in most of the higher academic institutions in Malaysia is becoming more prone to human errors and frauds. Assisted by the System Development Life Cycle (SDLC) methodology, the ARS has been built using the web-based applications such as PHP, MySQL and Apache to cater the recording and reporting of the students’ attendances. The development of this prototype system is inspired by the feasibility study done in Universiti Teknologi MARA, Malaysia where 550 respondents have taken part in answering the questionnaires. From the analysis done, it has revealed that a more systematic and revolutionary system is indeed needed to be reinforced in order to improve the process of recording and reporting the attendances in the higher academic institution. ARS can be easily accessed by the lecturers via the Web and most importantly, the reports can be generated in realtime processing, thus, providing invaluable information about the students’ commitments in attending the classes. This paper will discuss in details the development of ARS from the feasibility study until the design phase.",
"title": ""
},
{
"docid": "9b13225d4a51419578362a38f22b9c9c",
"text": "Ride-sharing services are transforming urban mobility by providing timely and convenient transportation to anybody, anywhere, and anytime. These services present enormous potential for positive societal impacts with respect to pollution, energy consumption, congestion, etc. Current mathematical models, however, do not fully address the potential of ride-sharing. Recently, a large-scale study highlighted some of the benefits of car pooling but was limited to static routes with two riders per vehicle (optimally) or three (with heuristics). We present a more general mathematical model for real-time high-capacity ride-sharing that (i) scales to large numbers of passengers and trips and (ii) dynamically generates optimal routes with respect to online demand and vehicle locations. The algorithm starts from a greedy assignment and improves it through a constrained optimization, quickly returning solutions of good quality and converging to the optimal assignment over time. We quantify experimentally the tradeoff between fleet size, capacity, waiting time, travel delay, and operational costs for low- to medium-capacity vehicles, such as taxis and van shuttles. The algorithm is validated with ∼3 million rides extracted from the New York City taxicab public dataset. Our experimental study considers ride-sharing with rider capacity of up to 10 simultaneous passengers per vehicle. The algorithm applies to fleets of autonomous vehicles and also incorporates rebalancing of idling vehicles to areas of high demand. This framework is general and can be used for many real-time multivehicle, multitask assignment problems.",
"title": ""
},
{
"docid": "3b2c18828ef155233ede7f51d80f656a",
"text": "It is crucial for cancer diagnosis and treatment to accurately identify the site of origin of a tumor. With the emergence and rapid advancement of DNA microarray technologies, constructing gene expression profiles for different cancer types has already become a promising means for cancer classification. In addition to research on binary classification such as normal versus tumor samples, which attracts numerous efforts from a variety of disciplines, the discrimination of multiple tumor types is also important. Meanwhile, the selection of genes which are relevant to a certain cancer type not only improves the performance of the classifiers, but also provides molecular insights for treatment and drug development. Here, we use semisupervised ellipsoid ARTMAP (ssEAM) for multiclass cancer discrimination and particle swarm optimization for informative gene selection. ssEAM is a neural network architecture rooted in adaptive resonance theory and suitable for classification tasks. ssEAM features fast, stable, and finite learning and creates hyperellipsoidal clusters, inducing complex nonlinear decision boundaries. PSO is an evolutionary algorithm-based technique for global optimization. A discrete binary version of PSO is employed to indicate whether genes are chosen or not. The effectiveness of ssEAM/PSO for multiclass cancer diagnosis is demonstrated by testing it on three publicly available multiple-class cancer data sets. ssEAM/PSO achieves competitive performance on all these data sets, with results comparable to or better than those obtained by other classifiers",
"title": ""
},
{
"docid": "859c6f75ac740e311da5e68fcd093531",
"text": "PURPOSE\nTo understand the effect of socioeconomic status (SES) on the risk of complications in type 1 diabetes (T1D), we explored the relationship between SES and major diabetes complications in a prospective, observational T1D cohort study.\n\n\nMETHODS\nComplete data were available for 317 T1D persons within 4 years of age 28 (ages 24-32) in the Pittsburgh Epidemiology of Diabetes Complications Study. Age 28 was selected to maximize income, education, and occupation potential and to minimize the effect of advanced diabetes complications on SES.\n\n\nRESULTS\nThe incidences over 1 to 20 years' follow-up of end-stage renal disease and coronary artery disease were two to three times greater for T1D individuals without, compared with those with a college degree (p < .05 for both), whereas the incidence of autonomic neuropathy was significantly greater for low-income and/or nonprofessional participants (p < .05 for both). HbA(1c) was inversely associated only with income level. In sex- and diabetes duration-adjusted Cox models, lower education predicted end-stage renal disease (hazard ratio [HR], 2.9; 95% confidence interval [95% CI], 1.1-7.7) and coronary artery disease (HR, 2.5, 95% CI, 1.3-4.9), whereas lower income predicted autonomic neuropathy (HR, 1.7; 95% CI, 1.0-2.9) and lower-extremity arterial disease (HR, 3.7; 95% CI, 1.1-11.9).\n\n\nCONCLUSIONS\nThese associations, partially mediated by clinical risk factors, suggest that lower SES T1D individuals may have poorer self-management and, thus, greater complications from diabetes.",
"title": ""
},
{
"docid": "076d5906b35995f93ac10b392089d3c3",
"text": "A GPU's computing power lies in its abundant memory bandwidth and massive parallelism. However, its hardware thread schedulers, despite being able to quickly distribute computation to processors, often fail to capitalize on program characteristics effectively, achieving only a fraction of the GPU's full potential. Moreover, current GPUs do not allow programmers or compilers to control this thread scheduling, forfeiting important optimization opportunities at the program level. This paper presents a transformation centered on Streaming Multiprocessors (SM); this software approach to circumventing the limitations of the hardware scheduler allows flexible program-level control of scheduling. By permitting precise control of job locality on SMs, the transformation overcomes inherent limitations in prior methods.\n With this technique, flexible control of GPU scheduling at the program level becomes feasible, which opens up new opportunities for GPU program optimizations. The second part of the paper explores how the new opportunities could be leveraged for GPU performance enhancement, what complexities there are, and how to address them. We show that some simple optimization techniques can enhance co-runs of multiple kernels and improve data locality of irregular applications, producing 20-33% average increase in performance, system throughput, and average turnaround time.",
"title": ""
},
{
"docid": "13ae9c0f1c802de86b80906558b27713",
"text": "Anaerobic saccharolytic bacteria thriving at high pH values were studied in a cellulose-degrading enrichment culture originating from the alkaline lake, Verkhneye Beloye (Central Asia). In situ hybridization of the enrichment culture with 16S rRNA-targeted probes revealed that abundant, long, thin, rod-shaped cells were related to Cytophaga. Bacteria of this type were isolated with cellobiose and five isolates were characterized. Isolates were thin, flexible, gliding rods. They formed a spherical cyst-like structure at one cell end during the late growth phase. The pH range for growth was 7.5–10.2, with an optimum around pH 8.5. Cultures produced a pinkish pigment tentatively identified as a carotenoid. Isolates did not degrade cellulose, indicating that they utilized soluble products formed by so far uncultured hydrolytic cellulose degraders. Besides cellobiose, the isolates utilized other carbohydrates, including xylose, maltose, xylan, starch, and pectin. The main organic fermentation products were propionate, acetate, and succinate. Oxygen, which was not used as electron acceptor, impaired growth. A representative isolate, strain Z-7010, with Marinilabilia salmonicolor as the closest relative, is described as a new genus and species, Alkaliflexus imshenetskii. This is the first cultivated alkaliphilic anaerobic member of the Cytophaga/Flavobacterium/Bacteroides phylum.",
"title": ""
},
{
"docid": "ebd4901b9352f98f879c27f50e999ef1",
"text": "This paper describes a probabilistic approach to global localization within an in-door environment with minimum infrastructure requirements. Global localization is a flavor of localization in which the device is unaware of its initial position and has to determine the same from scratch. Localization is performed based on the received signal strength indication (RSSI) as the only sensor reading, which is provided by most off-the-shelf wireless network interface cards. Location and orientation estimates are computed using Bayesian filtering on a sample set derived using Monte-Carlo sampling. Research leading to the proposed method is outlined along with results and conclusions from simulations and real life experiments.",
"title": ""
},
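A compact way to illustrate the Monte-Carlo / Bayesian filtering idea described above is a particle filter over 2-D positions, weighting each particle by how well a log-distance path-loss model explains the RSSI observed from each access point. The path-loss parameters, noise levels, and helper names below are assumptions for the sketch, not values from the paper.

```python
import numpy as np

def predicted_rssi(pos, ap_pos, tx_dbm=-40.0, path_loss_exp=2.5):
    """Log-distance path-loss model: RSSI expected at pos from one access point."""
    d = max(np.linalg.norm(pos - ap_pos), 0.1)
    return tx_dbm - 10.0 * path_loss_exp * np.log10(d)

def particle_filter_step(particles, weights, rssi_obs, ap_positions,
                         motion_std=0.3, rssi_std=4.0, rng=None):
    """One predict/update/resample cycle of Monte-Carlo localization."""
    rng = np.random.default_rng() if rng is None else rng
    # Predict: random-walk motion model (no odometry assumed).
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # Update: Gaussian likelihood of every observed RSSI value.
    for p_idx, pos in enumerate(particles):
        for ap, obs in zip(ap_positions, rssi_obs):
            err = obs - predicted_rssi(pos, ap)
            weights[p_idx] *= np.exp(-0.5 * (err / rssi_std) ** 2)
    weights /= weights.sum()
    # Resample proportionally to the weights.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

rng = np.random.default_rng(0)
aps = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
true_pos = np.array([3.0, 4.0])
obs = np.array([predicted_rssi(true_pos, ap) for ap in aps])  # noiseless demo reading
particles = rng.uniform(0, 10, size=(2000, 2))
weights = np.full(2000, 1.0 / 2000)
for _ in range(10):
    particles, weights = particle_filter_step(particles, weights, obs, aps, rng=rng)
print(particles.mean(axis=0))   # should be near (3, 4)
```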
{
"docid": "c71635ec5c0ef83c850cab138330f727",
"text": "Academic institutions are now drawing attention in finding methods for making effective learning process, for identifying learner’s achievements and weakness, for tracing academic progress and also for predicting future performance. People’s increased expectation for accountability and transparency makes it necessary to implement big data analytics in the educational institution. But not all the educationalist and administrators are ready to take the challenge. So, it is now obvious to know about the necessity and opportunity as well as challenges of implementing big data analytics. This paper will describe the needs, opportunities and challenges of implementing big data analytics in the education sector.",
"title": ""
},
{
"docid": "8057cddc406a90177fda5f3d4ee7c375",
"text": "This paper introduces the task of questionanswer driven semantic role labeling (QA-SRL), where question-answer pairs are used to represent predicate-argument structure. For example, the verb “introduce” in the previous sentence would be labeled with the questions “What is introduced?”, and “What introduces something?”, each paired with the phrase from the sentence that gives the correct answer. Posing the problem this way allows the questions themselves to define the set of possible roles, without the need for predefined frame or thematic role ontologies. It also allows for scalable data collection by annotators with very little training and no linguistic expertise. We gather data in two domains, newswire text and Wikipedia articles, and introduce simple classifierbased models for predicting which questions to ask and what their answers should be. Our results show that non-expert annotators can produce high quality QA-SRL data, and also establish baseline performance levels for future work on this task.",
"title": ""
},
{
"docid": "0f6dbf39b8e06a768b3d2b769327168d",
"text": "In this paper, we focus on how to boost the multi-view clustering by exploring the complementary information among multi-view features. A multi-view clustering framework, called Diversity-induced Multi-view Subspace Clustering (DiMSC), is proposed for this task. In our method, we extend the existing subspace clustering into the multi-view domain, and utilize the Hilbert Schmidt Independence Criterion (HSIC) as a diversity term to explore the complementarity of multi-view representations, which could be solved efficiently by using the alternating minimizing optimization. Compared to other multi-view clustering methods, the enhanced complementarity reduces the redundancy between the multi-view representations, and improves the accuracy of the clustering results. Experiments on both image and video face clustering well demonstrate that the proposed method outperforms the state-of-the-art methods.",
"title": ""
},
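The diversity term used above is the (biased, empirical) Hilbert-Schmidt Independence Criterion, which for two kernel matrices K and L over the same n samples is HSIC(K, L) = tr(KHLH) / (n-1)^2 with the centering matrix H = I - (1/n)11^T. Below is a direct NumPy transcription of that generic definition with made-up data, not the DiMSC optimization itself.

```python
import numpy as np

def hsic(K, L):
    """Empirical HSIC between two n x n kernel (similarity) matrices."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

def linear_kernel(X):
    return X @ X.T

rng = np.random.default_rng(0)
view1 = rng.normal(size=(50, 10))
view2_dependent = view1 @ rng.normal(size=(10, 8))       # a function of view1
view2_independent = rng.normal(size=(50, 8))

print(hsic(linear_kernel(view1), linear_kernel(view2_dependent)))    # typically larger
print(hsic(linear_kernel(view1), linear_kernel(view2_independent)))  # typically smaller
```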
{
"docid": "eaa2ed7e15a3b0a3ada381a8149a8214",
"text": "This paper describes a new robust regular polygon detector. The regular polygon transform is posed as a mixture of regular polygons in a five dimensional space. Given the edge structure of an image, we derive the a posteriori probability for a mixture of regular polygons, and thus the probability density function for the appearance of a mixture of regular polygons. Likely regular polygons can be isolated quickly by discretising and collapsing the search space into three dimensions. The remaining dimensions may be efficiently recovered subsequently using maximum likelihood at the locations of the most likely polygons in the subspace. This leads to an efficient algorithm. Also the a posteriori formulation facilitates inclusion of additional a priori information leading to real-time application to road sign detection. The use of gradient information also reduces noise compared to existing approaches such as the generalised Hough transform. Results are presented for images with noise to show stability. The detector is also applied to two separate applications: real-time road sign detection for on-line driver assistance; and feature detection, recovering stable features in rectilinear environments.",
"title": ""
}
] |
scidocsrr
|
847b4f34e84358574d404ee33878859c
|
Control of cable actuated devices using smooth backlash inverse
|
[
{
"docid": "ff8f72d7afb43513c7a7a6b041a13040",
"text": "The paper first discusses the reasons why simplified solutions for the mechanical structure of fingers in robotic hands should be considered a worthy design goal. After a brief discussion about the mechanical solutions proposed so far for robotic fingers, a different design approach is proposed. It considers finger structures made of rigid links connected by flexural hinges, with joint actuation obtained by means of flexures that can be guided inside each finger according to different patterns. A simplified model of one of these structures is then presented, together with preliminary results of simulation, in order to evaluate the feasibility of the concept. Examples of technological implementation are finally presented and the perspective and problems of application are briefly discussed.",
"title": ""
},
{
"docid": "7252372bdacaa69b93e52a7741c8f4c2",
"text": "This paper introduces a novel type of actuator that is investigated by ESA for force-reflection to a wearable exoskeleton. The actuator consists of a DC motor that is relocated from the joint by means of Bowden cable transmissions. The actuator shall support the development of truly ergonomic and compact wearable man-machine interfaces. Important Bowden cable transmission characteristics are discussed, which dictate a specific hardware design for such an actuator. A first prototype is shown, which was used to analyze these basic characteristics of the transmissions and to proof the overall actuation concept. A second, improved prototype is introduced, which is currently used to investigate the achievable performance as a master actuator in a master-slave control with force-feedback. Initial experimental results are presented, which show good actuator performance in a 4 channel control scheme with a slave joint. The actuator features low movement resistance in free motion and can reflect high torques during hard contact situations. High contact stability can be achieved. The actuator seems therefore well suited to be implemented into the ESA exoskeleton for space-robotic telemanipulation",
"title": ""
}
] |
[
{
"docid": "708fbc1eff4d96da2f3adaa403db3090",
"text": "We propose a new system for generating art. The system generates art by looking at art and learning about style; and becomes creative by increasing the arousal potential of the generated art by deviating from the learned styles. We build over Generative Adversarial Networks (GAN), which have shown the ability to learn to generate novel images simulating a given distribution. We argue that such networks are limited in their ability to generate creative products in their original design. We propose modifications to its objective to make it capable of generating creative art by maximizing deviation from established styles and minimizing deviation from art distribution. We conducted experiments to compare the response of human subjects to the generated art with their response to art created by artists. The results show that human subjects could not distinguish art generated by the proposed system from art generated by contemporary artists and shown in top art",
"title": ""
},
{
"docid": "d74df8673db783ff80d01f2ccc0fe5bf",
"text": "The search for strategies to mitigate undesirable economic, ecological, and social effects of harmful resource consumption has become an important, socially relevant topic. An obvious starting point for businesses that wish to make value creation more sustainable is to increase the utilization rates of existing resources. Modern social Internet technology is an effective means by which to achieve IT-enabled sharing services, which make idle resource capacity owned by one entity accessible to others who need them but do not want to own them. Successful sharing services require synchronized participation of providers and users of resources. The antecedents of the participation behavior of providers and users has not been systematically addressed by the extant literature. This article therefore proposes a model that explains and predicts the participation behavior in sharing services. Our search for a theoretical foundation revealed the Theory of Planned Behavior as most appropriate lens, because this theory enables us to integrate provider behavior and user behavior as constituents of participation behavior. The model is novel for that it is the first attempt to study the interdependencies between the behavior types in sharing service participation and for that it includes both general and specific determinants of the participation behavior.",
"title": ""
},
{
"docid": "941e570d74435332641f9d4f63c403ff",
"text": "Taniguchi defines City Logistics as “the process of totally optimising the logistics and transport activities by private companies in urban areas while considering the traffic environment, traffic congestion and energy consumption within the framework of a market economy”. The distribution of goods based on road services in urban areas contribute to traffic congestion, generates environmental impacts and in some cases incurs in high logistics costs. On the other hand the various stakeholders involved in the applications may have possibly conflicting objectives. Industrial firms, shippers, freight carriers, have individually established to meet consumer demands looking to maximize the company effectiveness and as a consequence from a social point of view the resulting logistics system is inefficient from the point of view of the social costs and environmental impacts. As a consequence the design and evaluation of City Logistics applications requires an integrated framework in which all components could work together. Therefore City Logistics models must be models that, further than including the main components of City Logistics applications, as vehicle routing and fleet management models, should be able of including also the dynamic aspects of the underlying road network, namely if ICT applications are taken into account. Some of the methodological proposals made so far are based on an integration of vehicle routing models and, dynamic traffic simulation models that emulate the actual traffic conditions providing at each time interval the estimates of the current travel times, queues, etc. on each link of the road network, that is, the information that will be used by the logistic model (i.e. a fleet management system identifying in real-time the positions of each vehicle in the fleet and its operational conditions type of load, available capacity, etc. – to determine the optimal dynamic routing and scheduling of the vehicle.",
"title": ""
},
{
"docid": "0c991f86cee8ab7be1719831161a3fec",
"text": "Conversational systems have become increasingly popular as a way for humans to interact with computers. To be able to provide intelligent responses, conversational systems must correctly model the structure and semantics of a conversation. We introduce the task of measuring semantic (in)coherence in a conversation with respect to background knowledge, which relies on the identification of semantic relations between concepts introduced during a conversation. We propose and evaluate graph-based and machine learning-based approaches for measuring semantic coherence using knowledge graphs, their vector space embeddings and word embedding models, as sources of background knowledge. We demonstrate how these approaches are able to uncover different coherence patterns in conversations on the Ubuntu Dialogue Corpus.",
"title": ""
},
{
"docid": "dcc55431a2da871c60abfd53ce270bad",
"text": "Synchrophasor Standards have evolved since the introduction of the first one, IEEE Standard 1344, in 1995. IEEE Standard C37.118-2005 introduced measurement accuracy under steady state conditions as well as interference rejection. In 2009, the IEEE started a joint project with IEC to harmonize real time communications in IEEE Standard C37.118-2005 with the IEC 61850 communication standard. These efforts led to the need to split the C37.118 into 2 different standards: IEEE Standard C37.118.1-2011 that now includes performance of synchrophasors under dynamic systems conditions; and IEEE Standard C37.118.2-2011 Synchrophasor Data Transfer for Power Systems, the object of this paper.",
"title": ""
},
{
"docid": "bb3ba0a17727d2ea4e2aba74f7144da6",
"text": "A roof automobile antenna module for Long Term Evolution (LTE) application is proposed. The module consists of two LTE antennas for the multiple-input multiple-output (MIMO) method which requests low mutual coupling between the antennas for larger capacity. On the other hand, the installation location for a roof-top module is limited from safety or appearance viewpoint and this makes the multiple LTE antennas located there cannot be separated with enough space. In order to retain high isolation between the two antennas in such compact space, the two antennas are designed to have different shapes, different heights and different polarizations, and their ground planes are placed separately. In the proposed module, one antenna is a monopole type and has its element printed on a shark-fin-shaped substrate which is perpendicular to the car-roof. Another one is a planar inverted-F antenna (PIFA) and has its element on a lower plane parallel to the roof. In this manner, the two antennas cover the LTE-bands with omni-directional radiation in the horizontal directions and high radiation gain. The two antennas have reasonably good isolation between them even the module is compact with a dimension of 62×65×73 mm3.",
"title": ""
},
{
"docid": "11eaad434ef87c06562d8cd6baea4207",
"text": "Port address hopping (PAH) communication is a powerful network moving target defense (MTD) mechanism. It was inspired by frequency hopping in wireless communications. One of the critical and difficult issues with PAH is synchronization. Existing schemes usually provide hops for each session lasting only a few seconds/minutes, making them easily influenced by network events such as transmission delays, traffic jams, packet dropouts, reordering, and retransmission. To address these problems, in this paper we propose a novel self-synchronization scheme, called ‘keyed-hashing based self-synchronization (KHSS)’. The proposed method generates the message authentication code (MAC) based on the hash based MAC (HMAC), which is then further used as the synchronization information for port address encoding and decoding. Providing the PAH communication system with one-packet-one-hopping and invisible message authentication abilities enables both clients and servers to constantly change their identities as well as perform message authentication over unreliable communication mediums without synchronization and authentication information transmissions. Theoretical analysis and simulation and experiment results show that the proposed method is effective in defending against man-in-the-middle (MITM) attacks and network scanning. It significantly outperforms existing schemes in terms of both security and hopping efficiency.",
"title": ""
},
{
"docid": "316fa5c677ce5d51a6f31a128b00ebdb",
"text": "Intelligent user interfaces have been proposed as a means to overcome some of the problems that directmanipulation interfaces cannot handle, such as: information overflow problems; providing help on how to use complex systems; or real-time cognitive overload problems. Intelligent user interfaces are also being proposed as a means to make systems individualised or personalised, thereby increasing the systems flexibility and appeal. Unfortunately, there are a number of problems not yet solved that prevent us from creating good intelligent user interface applications: there is a need for methods for how to develop them; there are demands on better usability principles for them; we need a better understanding of the possible ways the interface can utilise intelligence to improve the interaction; and finally, we need to design better tools that will enable an intelligent system to survive the life-cycle of a system (including updates of the database, system support, etc.). We define these problems further and start to outline their solutions.",
"title": ""
},
{
"docid": "abcd64d8aac6d7951fe02d562d5034ed",
"text": "Dialogue continues on the \"readiness\" of new graduates for practice despite significant advancements in the foundational educational preparation for nurses. In this paper, the findings from an exploratory study about the meaning of new graduate \"readiness\" for practice are reported. Data was collected during focus group interviews with one-hundred and fifty nurses and new graduates. Themes were generated using content analysis. Our findings point to agreement about the meaning of new graduate nurses' readiness for practice as having a generalist foundation and some job specific capabilities, providing safe client care, keeping up with the current realities of nursing practice, being well equipped with the tools needed to adapt to the future needs of clients, and possessing a balance of doing, knowing, and thinking. The findings from this exploratory study have implications for policies and programs targeted towards new graduate nurses entering practice.",
"title": ""
},
{
"docid": "07dc406a7ae61845d2a309c5aa07e072",
"text": "The advance of internet technology has stimulated the rise of professional virtual communities (PVCs). The objective of PVCs is to encourage people to exploit or explore knowledge through websites. However, many virtual communities have failed due to the reluctance of members to continue their participation in these PVCs. Motivated by such concerns, this study formulates and tests a theoretical model to explain the factors influencing individuals’ intention to continue participating in PVCs’ knowledge activities. Drawing from the information system and knowledge management literatures, two academic perspectives related to PVC continuance are incorporated in the integrated model. This model posits that an individual’s intention to stay in a professional virtual community is influenced by a contextual factor and technological factors. Specifically, the antecedents of PVC members’ intention to continue sharing knowledge include social interaction ties capital and satisfaction at post-usage stage. These variables, in turn, are adjusted based on the confirmation of pre-usage expectations. A longitudinal study is conducted with 360 members of a professional virtual community. Results indicate that the contextual factor and technological factors both exert significant impacts on PVC participants’ continuance intentions.",
"title": ""
},
{
"docid": "4a26afba58270d7ce1a0eb50bd659eae",
"text": "Recommendation can be reduced to a sub-problem of link prediction, with specific nodes (users and items) and links (similar relations among users/items, and interactions between users and items). However, the previous link prediction algorithms need to be modified to suit the recommendation cases since they do not consider the separation of these two fundamental relations: similar or dissimilar and like or dislike. In this paper, we propose a novel and unified way to solve this problem, which models the relation duality using complex number. Under this representation, the previous works can directly reuse. In experiments with the Movie Lens dataset and the Android software website AppChina.com, the presented approach achieves significant performance improvement comparing with other popular recommendation algorithms both in accuracy and coverage. Besides, our results revealed some new findings. First, it is observed that the performance is improved when the user and item popularities are taken into account. Second, the item popularity plays a more important role than the user popularity does in final recommendation. Since its notable performance, we are working to apply it in a commercial setting, AppChina.com website, for application recommendation.",
"title": ""
},
{
"docid": "37426a6261243f5bbe6d59be3826a82f",
"text": "A key to successful face recognition is accurate and reliable face alignment using automatically-detected facial landmarks. Given this strong dependency between face recognition and facial landmark detection, robust face recognition requires knowledge of when the facial landmark detection algorithm succeeds and when it fails. Facial landmark confidence represents this measure of success. In this paper, we propose two methods to measure landmark detection confidence: local confidence based on local predictors of each facial landmark, and global confidence based on a 3D rendered face model. A score fusion approach is also introduced to integrate these two confidences effectively. We evaluate both confidence metrics on two datasets for face recognition: JANUS CS2 and IJB-A datasets. Our experiments show up to 9% improvements when face recognition algorithm integrates the local-global confidence metrics.",
"title": ""
},
{
"docid": "114381e33d6c08724057e3116952dafc",
"text": "We present the first prototype of the SUMMA Platform: an integrated platform for multilingual media monitoring. The platform contains a rich suite of low-level and high-level natural language processing technologies: automatic speech recognition of broadcast media, machine translation, automated tagging and classification of named entities, semantic parsing to detect relationships between entities, and automatic construction / augmentation of factual knowledge bases. Implemented on the Docker platform, it can easily be deployed, customised, and scaled to large volumes of incoming media streams.",
"title": ""
},
{
"docid": "394c8f7a708d69ca26ab0617ab1530ab",
"text": "Developing wireless sensor networks can enable information gathering, information processing and reliable monitoring of a variety of environments for both civil and military applications. It is however necessary to agree upon a basic architecture for building sensor network applications. This paper presents a general classification of sensor network applications based on their network configurations and discusses some of their architectural requirements. We propose a generic architecture for a specific subclass of sensor applications which we define as self-configurable systems where a large number of sensors coordinate amongst themselves to achieve a large sensing task. Throughout this paper we assume a certain subset of the sensors to be immobile. This paper lists the general architectural and infra-structural components necessary for building this class of sensor applications. Given the various architectural components, we present an algorithm that self-organizes the sensors into a network in a transparent manner. Some of the basic goals of our algorithm include minimizing power utilization, localizing operations and tolerating node and link failures.",
"title": ""
},
{
"docid": "246da8e4f576306eaf94b25786746aa5",
"text": "I am struck by how little is known about so much of cognition. One goal of this poper is to argue for the need to consider a rich set of interlocking issues in the study of cognition. Mainstream work in cognitiorr-including my ow+ignores many critical aspects of animate cognitive systems. Perhaps one reason that existing theories say so little reievant to real world activities is the neglect of social and cultural factors, of emotion, and of the maior points that distinguish an animate cognitive system from an artificial one: the need to survive, to regulate its own operation, to maintain itself, to exist in the environment, to change from a small, uneducated, immature system to an adult, developed, knowledgeable one.",
"title": ""
},
{
"docid": "2abdf71604c7eaa593fa43199817838c",
"text": "We review our work towards achieving competitive performance (classification accuracies) for on-chip machine learning (ML) of large-scale artificial neural networks (ANN) using Non-Volatile Memory (NVM)-based synapses, despite the inherent random and deterministic imperfections of such devices. We then show that such systems could potentially offer faster (up to 25×) and lower-power (from 120-2850×) ML training than GPU-based hardware.",
"title": ""
},
{
"docid": "f741b47d671f4f36aa4b48dc1b112b9a",
"text": "With the development of wireless networks, the scale of network optimization problems is growing correspondingly. While algorithms have been designed to reduce complexity in solving these problems under given size, the approach of directly reducing the size of problem has not received much attention. This motivates us to investigate an innovative approach to reduce problem scale while maintaining the optimality of solution. Through analysis on the optimization solutions, we discover that part of the elements may not be involved in the solution, such as unscheduled links in the flow constrained optimization problem. The observation indicates that it is possible to reduce problem scale without affecting the solution by excluding the unused links from problem formulation. In order to identify the link usage before solving the problem, we exploit deep learning to find the latent relationship between flow information and link usage in optimal solution. Based on this, we further predict whether a link will be scheduled through link evaluation and eliminate unused link from formulation to reduce problem size. Numerical results demonstrate that the proposed method can reduce computation cost by at least 50% without affecting optimality, thus greatly improve the efficiency of solving large scale network optimization problems.",
"title": ""
},
{
"docid": "f1ce3ab900a280ccbf638653ffc19310",
"text": "Executive function (EF) refers to fundamental capacities that underlie more complex cognition and have ecological relevance across the individual's lifespan. However, emerging executive functions have rarely been studied in young preterm children (age 3) whose critical final stages of fetal development are interrupted by their early birth. We administered four novel touch-screen computerized measures of working memory and inhibition to 369 participants born between 2004 and 2006 (52 Extremely Low Birth Weight [ELBW]; 196 late preterm; 121 term-born). ELBW performed worse than term-born on simple and complex working memory and inhibition tasks and had the highest percentage of incomplete performance on a continuous performance test. The latter finding indicates developmental immaturity and the ELBW group's most at-risk preterm status. Additionally, late-preterm participants performed worse compared with term-born on measures of complex working memory but did not differ from those term-born on response inhibition measures. These results are consistent with a recent literature that identifies often subtle but detectable neurocognitive deficits in late-preterm children. Our results support the development and standardization of computerized touch-screen measures to assess EF subcomponent abilities during the formative preschool period. Such measures may be useful to monitor the developmental trajectory of critical executive function abilities in preterm children, and their use is necessary for timely recognition of deficit and application of appropriate interventional strategies.",
"title": ""
},
{
"docid": "7665f2f179d2230abbf33ccf99a7d5b0",
"text": "C R E D IT : J O E S U T L IF F /W W W .C D A D .C O M /J O E S cientifi c publications have at least two goals: (i) to announce a result and (ii) to convince readers that the result is correct. Mathematics papers are expected to contain a proof complete enough to allow knowledgeable readers to fi ll in any details. Papers in experimental science should describe the results and provide a clear enough protocol to allow successful repetition and extension. Over the past ~35 years, computational science has posed challenges to this traditional paradigm—from the publication of the four-color theorem in mathematics ( 1), in which the proof was partially performed by a computer program, to results depending on computer simulation in chemistry, materials science, astrophysics, geophysics, and climate modeling. In these settings, the scientists are often sophisticated, skilled, and innovative programmers who develop large, robust software packages. More recently, scientists who are not themselves computational experts are conducting data analysis with a wide range of modular software tools and packages. Users may often combine these tools in unusual or novel ways. In biology, scientists are now routinely able to acquire and explore data sets far beyond the scope of manual analysis, including billions of DNA bases, millions of genotypes, and hundreds of thousands of RNA measurements. Similar issues may arise in other fi elds, such as astronomy, seismology, and meteorology. While propelling enormous progress, this increasing and sometimes “indirect” use of computation poses new challenges for scientifi c publication and replication. Large data sets are often analyzed many times, with modifi cations to the methods and parameters, and sometimes even updates of the data, until the fi nal results are produced. The resulting publication often gives only scant attention to the computational details. Some have suggested these papers are “merely the advertisement of scholarship whereas the computer programs, input data, parameter values, etc. embody the scholarship itself ” ( 2). However, the actual code or software “mashup” that gave rise to the fi nal analysis may be lost or unrecoverable. For example, colleagues and I published a computational method for distinguishing between two types of acute leukemia, based on large-scale gene expression profi les obtained from DNA microarrays ( 3). This paper generated hundreds of requests from scientists interested in replicating and extending the results. The method involved a complex pipeline of steps, including (i) preprocessing of the data, to eliminate likely artifacts; (ii) selection of genes to be used in the model; (iii) building the actual model and setting the appropriate parameters for it from the training data; (iv) preprocessing independent test data; and fi nally (v) applying the model to test its effi cacy. The result was robust and replicable, and the original data were available online, but there was no standardized form in which to make available the various software components and the precise details of their use.",
"title": ""
},
{
"docid": "7c804a568854a80af9d5c564a270d079",
"text": "Large-scale online ride-sharing platforms have substantially transformed our lives by reallocating transportation resources to alleviate traffic congestion and promote transportation efficiency. An efficient fleet management strategy not only can significantly improve the utilization of transportation resources but also increase the revenue and customer satisfaction. It is a challenging task to design an effective fleet management strategy that can adapt to an environment involving complex dynamics between demand and supply. Existing studies usually work on a simplified problem setting that can hardly capture the complicated stochastic demand-supply variations in high-dimensional space. In this paper we propose to tackle the large-scale fleet management problem using reinforcement learning, and propose a contextual multi-agent reinforcement learning framework including two concrete algorithms, namely contextual deep Q-learning and contextual multi-agent actor-critic, to achieve explicit coordination among a large number of agents adaptive to different contexts. We show significant improvements of the proposed framework over state-of-the-art approaches through extensive empirical studies.",
"title": ""
}
] |
scidocsrr
|
6a0932e5640541f60d4c6abc14bf7c58
|
A Biologically Inspired System for Action Recognition
|
[
{
"docid": "b9a893fb526955b5131860a1402e2f7c",
"text": "A common trend in object recognition is to detect and leverage the use of sparse, informative feature points. The use of such features makes the problem more manageable while providing increased robustness to noise and pose variation. In this work we develop an extension of these ideas to the spatio-temporal case. For this purpose, we show that the direct 3D counterparts to commonly used 2D interest point detectors are inadequate, and we propose an alternative. Anchoring off of these interest points, we devise a recognition algorithm based on spatio-temporally windowed data. We present recognition results on a variety of datasets including both human and rodent behavior.",
"title": ""
}
] |
[
{
"docid": "f333ebc879cf311bfc78297b78839ad9",
"text": "This paper explores the effectiveness of sparse representations obtained by learning a set of overcomplete basis (dictionary) in the context of action recognition in videos. Although this work concentrates on recognizing human movements-physical actions as well as facial expressions-the proposed approach is fairly general and can be used to address other classification problems. In order to model human actions, three overcomplete dictionary learning frameworks are investigated. An overcomplete dictionary is constructed using a set of spatio-temporal descriptors (extracted from the video sequences) in such a way that each descriptor is represented by some linear combination of a small number of dictionary elements. This leads to a more compact and richer representation of the video sequences compared to the existing methods that involve clustering and vector quantization. For each framework, a novel classification algorithm is proposed. Additionally, this work also presents the idea of a new local spatio-temporal feature that is distinctive, scale invariant, and fast to compute. The proposed approach repeatedly achieves state-of-the-art results on several public data sets containing various physical actions and facial expressions.",
"title": ""
},
{
"docid": "1298ddbeea84f6299e865708fd9549a6",
"text": "Since its invention in the early 1960s (Rotman and Turner, 1963), the Rotman Lens has proven itself to be a useful beamformer for designers of electronically scanned arrays. Inherent in its design is a true time delay phase shift capability that is independent of frequency and removes the need for costly phase shifters to steer a beam over wide angles. The Rotman Lens has a long history in military radar, but it has also been used in communication systems. This article uses the developed software to design and analyze a microstrip Rotman Lens for the Ku band. The initial lens design will come from a tool based on geometrical optics (GO). A second stage of analysis will be performed using a full wave finite difference time domain (FDTD) solver. The results between the first-cut design tool and the comprehensive FDTD solver will be compared, and some of the design trades will be explored to gauge their impact on the performance of the lens.",
"title": ""
},
{
"docid": "33f53ba19c1198fc2342960c57dd22f8",
"text": "This paper reports on a facile and low cost method to fabricate highly stretchable potentiometric pH sensor arrays for biomedical and wearable applications. The technique uses laser carbonization of a thermoset polymer followed by transfer and embedment of carbonized nanomaterial onto an elastomeric matrix. The process combines selective laser pyrolization/carbonization with meander interconnect methodology to fabricate stretchable conductive composites with which pH sensors can be realized. The stretchable pH sensors display a sensitivity of -51 mV/pH over the clinically-relevant range of pH 4-10. The sensors remain stable for strains of up to 50 %.",
"title": ""
},
{
"docid": "059b8861a00bb0246a07fa339b565079",
"text": "Recognizing facial action units (AUs) from spontaneous facial expressions is still a challenging problem. Most recently, CNNs have shown promise on facial AU recognition. However, the learned CNNs are often overfitted and do not generalize well to unseen subjects due to limited AU-coded training images. We proposed a novel Incremental Boosting CNN (IB-CNN) to integrate boosting into the CNN via an incremental boosting layer that selects discriminative neurons from the lower layer and is incrementally updated on successive mini-batches. In addition, a novel loss function that accounts for errors from both the incremental boosted classifier and individual weak classifiers was proposed to fine-tune the IB-CNN. Experimental results on four benchmark AU databases have demonstrated that the IB-CNN yields significant improvement over the traditional CNN and the boosting CNN without incremental learning, as well as outperforming the state-of-the-art CNN-based methods in AU recognition. The improvement is more impressive for the AUs that have the lowest frequencies in the databases.",
"title": ""
},
{
"docid": "0b21eda7c840d37a9486d8bfccfe45ba",
"text": "Enterprise systems are complex and expensive and create dramatic organizational change. Implementing an enterprise system can be the \"corporate equivalent of a root canal,\" a meaningful analogy given that an ES with its single database replaces myriad special-purpose legacy systems that once operated in isolation. An ES, or enterprise resource planning system, has the Herculean task of seamlessly supporting and integrating a full range of business processes, uniting functional islands and making their data visible across the organization in real time. The authors offer guidelines based on five years of observing ES implementations that can help managers circumvent obstacles and control the tensions during and after the project.",
"title": ""
},
{
"docid": "5cc2a5b23d2da7f281270e0ca4a097e1",
"text": "It is widely accepted that the deficiencies in public sector health system can only be overcome by significant reforms. The need for reforms in India s health sector has been emphasized by successive plan documents since the Eighth Five-Year Plan in 1992, by the 2002 national health policy and by international donor agencies. The World Bank (2001:12,14), which has been catalytic in initiating health sector reforms in many states, categorically emphasized: now is the time to carry out radical experiments in India’s health sector, particularly since the status quo is leading to a dead end. . But it is evident that there is no single strategy that would be best option The proposed reforms are not cheap, but the cost of not reforming is even greater”.",
"title": ""
},
{
"docid": "e0fd648da901ed99ddbed3457bc83cfe",
"text": "This clinical trial assessed the ability of Gluma Dentin Bond to inhibit dentinal sensitivity in teeth prepared to receive complete cast restorations. Twenty patients provided 76 teeth for the study. Following tooth preparation, dentinal surfaces were coated with either sterile water (control) or two 30-second applications of Gluma Dentin Bond (test) on either intact or removed smear layers. Patients were recalled after 14 days for a test of sensitivity of the prepared dentin to compressed air, osmotic stimulus (saturated CaCl2 solution), and tactile stimulation via a scratch test under controlled loads. A significantly lower number of teeth responded to the test stimuli for both Gluma groups when compared to the controls (P less than .01). No difference was noted between teeth with smear layers intact or removed prior to treatment with Gluma.",
"title": ""
},
{
"docid": "01f6ca9feadff8d680232a7b4566bd4c",
"text": "Specific language impairment (SLI) is diagnosed when a child's language development is deficient for no obvious reason. For many years, there was a tendency to assume that SLI was caused by factors such as poor parenting, subtle brain damage around the time of birth, or transient hearing loss. Subsequently it became clear that these factors were far less important than genes in determining risk for SLI. A quest to find \"the gene for SLI\" was undertaken, but it soon became apparent that no single cause could account for all cases. Furthermore, although fascinating cases of SLI caused by a single mutation have been discovered, in most children the disorder has a more complex basis, with several genetic and environmental risk factors interacting. The clearest evidence for genetic effects has come from studies that diagnosed SLI using theoretically motivated measures of underlying cognitive deficits rather than conventional clinical criteria.",
"title": ""
},
{
"docid": "243367110d677f2b428d57b3c07ef910",
"text": "This paper describes the GermEval 2014 Named Entity Recognition (NER) Shared Task workshop at KONVENS. It provides background information on the motivation of this task, the data-set, the evaluation method, and an overview of the participating systems, followed by a discussion of their results. In contrast to previous NER tasks, the GermEval 2014 edition uses an extended tagset to account for derivatives of names and tokens that contain name parts. Further, nested named entities had to be predicted, i.e. names that contain other names. The eleven participating teams employed a wide range of techniques in their systems. The most successful systems used state-of-theart machine learning methods, combined with some knowledge-based features in hybrid systems.",
"title": ""
},
{
"docid": "343ed18e56e6f562fa509710e4cf8dc6",
"text": "The automated analysis of facial expressions has been widely used in different research areas, such as biometrics or emotional analysis. Special importance is attached to facial expressions in the area of sign language, since they help to form the grammatical structure of the language and allow for the creation of language disambiguation, and thus are called Grammatical Facial Expressions (GFEs). In this paper we outline the recognition of GFEs used in the Brazilian Sign Language. In order to reach this objective, we have captured nine types of GFEs using a KinectTMsensor, designed a spatial-temporal data representation, modeled the research question as a set of binary classification problems, and employed a Machine Learning technique.",
"title": ""
},
{
"docid": "1baaa67ff7b4d00d6f03ae908cf1ca71",
"text": "Function approximation has been found in many applications. The radial basis function (RBF) network is one approach which has shown a great promise in this sort of problems because of its faster learning capacity. A traditional RBF network takes Gaussian functions as its basis functions and adopts the least-squares criterion as the objective function, However, it still suffers from two major problems. First, it is difficult to use Gaussian functions to approximate constant values. If a function has nearly constant values in some intervals, the RBF network will be found inefficient in approximating these values. Second, when the training patterns incur a large error, the network will interpolate these training patterns incorrectly. In order to cope with these problems, an RBF network is proposed in this paper which is based on sequences of sigmoidal functions and a robust objective function. The former replaces the Gaussian functions as the basis function of the network so that constant-valued functions can be approximated accurately by an RBF network, while the latter is used to restrain the influence of large errors. Compared with traditional RBF networks, the proposed network demonstrates the following advantages: (1) better capability of approximation to underlying functions; (2) faster learning speed; (3) better size of network; (4) high robustness to outliers.",
"title": ""
},
{
"docid": "f8def1217137641547921e3f52c0b4ae",
"text": "A 50-GHz charge pump phase-locked loop (PLL) utilizing an LC-oscillator-based injection-locked frequency divider (ILFD) was fabricated in 0.13-mum logic CMOS process. The PLL can be locked from 45.9 to 50.5 GHz and output power level is around -10 dBm. The operating frequency range is increased by tracking the self-oscillation frequencies of the voltage-controlled oscillator (VCO) and the frequency divider. The PLL including buffers consumes 57 mW from 1.5/0.8-V supplies. The phase noise at 50 kHz, 1 MHz, and 10 MHz offset from the carrier is -63.5, -72, and -99 dBc/Hz, respectively. The PLL also outputs second-order harmonics at frequencies between 91.8 and 101 GHz. The output frequency of 101 GHz is the highest for signals locked by a PLL fabricated using the silicon integrated circuits technology.",
"title": ""
},
{
"docid": "78f34ee1d29e4f67d2718f9e7fdc544d",
"text": "In this paper, we present a detailed dynamic and aerodynamic model of a quadrotor that can be used for path planning and control design of high performance, complex and aggressive manoeuvres without the need for iterative learning techniques. The accepted nonlinear dynamic quadrotor model is based on a thrust and torque model with constant thrust and torque coefficients derived from static thrust tests. Such a model is no longer valid when the vehicle undertakes dynamic manoeuvres that involve significant displacement velocities. We address this by proposing an implicit thrust model that incorporates the induced momentum effects associated with changing airflow through the rotor. The proposed model uses power as input to the system. To complete the model, we propose a hybrid dynamic model to account for the switching between different vortex ring states of the rotor.",
"title": ""
},
{
"docid": "6e6655838474fdd7d6b0f989c5727c07",
"text": "We introduce G-CNN, an object detection technique based on CNNs which works without proposal algorithms. G-CNN starts with a multi-scale grid of fixed bounding boxes. We train a regressor to move and scale elements of the grid towards objects iteratively. G-CNN models the problem of object detection as finding a path from a fixed grid to boxes tightly surrounding the objects. G-CNN with around 180 boxes in a multi-scale grid performs comparably to Fast R-CNN which uses around 2K bounding boxes generated with a proposal technique. This strategy makes detection faster by removing the object proposal stage as well as reducing the number of boxes to be processed.",
"title": ""
},
{
"docid": "56a0b2f912718c097502c95349b279b9",
"text": "Using a relational DBMS as back-end engine for an XQuery processing system leverages relational query optimization and scalable query processing strategies provided by mature DBMS engines in the XML domain. Though a lot of theoretical work has been done in this area and various solutions have been proposed, no complete systems have been made available so far to give the practical evidence that this is a viable approach. In this paper, we describe the ourely relational XQuery processor Pathfinder that has been built on top of the extensible RDBMS MonetDB. Performance results indicate that the system is capable of evaluating XQuery queries efficiently, even if the input XML documents become huge. We additionally present further contributions such as loop-lifted staircase join, techniques to derive order properties and to reduce sorting effort in the generated relational algebra plans, as well as methods for optimizing XQuery joins, which, taken together, enabled us to reach our performance and scalability goals. 1998 ACM Computing Classification System: H.2.4, H.2.3, H.2.2, E.1",
"title": ""
},
{
"docid": "da607ab67cb9c1e1d08a70b15f9470d7",
"text": "Network embedding (NE) is playing a critical role in network analysis, due to its ability to represent vertices with efficient low-dimensional embedding vectors. However, existing NE models aim to learn a fixed context-free embedding for each vertex and neglect the diverse roles when interacting with other vertices. In this paper, we assume that one vertex usually shows different aspects when interacting with different neighbor vertices, and should own different embeddings respectively. Therefore, we present ContextAware Network Embedding (CANE), a novel NE model to address this issue. CANE learns context-aware embeddings for vertices with mutual attention mechanism and is expected to model the semantic relationships between vertices more precisely. In experiments, we compare our model with existing NE models on three real-world datasets. Experimental results show that CANE achieves significant improvement than state-of-the-art methods on link prediction and comparable performance on vertex classification. The source code and datasets can be obtained from https://github.com/ thunlp/CANE.",
"title": ""
},
{
"docid": "4e8131e177330af2fb8999c799508b58",
"text": "Unmanned aerial vehicles (UAVs) such as multi-copters are expected to be used for inspection of aged infrastructure or for searching damaged buildings in the event of a disaster. However, in a confined space in such environments, UAVs suffer a high risk of falling as a result of contact with an obstacle. To ensure an aerial inspection in the confined space, we have proposed a UAV with a passive rotating spherical shell (PRSS UAV); The UAV and the spherical shell are connected by a 3DOF gimbal mechanism to allow them to rotate in all directions independently, so that the UAV can maintain its flight stability during a collision with an obstacle because only the shell is disturbed and rotated. To apply the PRSS UAV into real-world missions, we have to carefully choose many design parameters such as weight, structure, diameter, strength of the spherical shell, axis configuration of the gimbal, and model of the UAV. In this paper, we propose a design strategy for applying the concept of the PRSS mechanism, focusing on disaster response and infrastructure inspection. We also demonstrate the validity of this approach by the successful result of quantitative experiments and practical field tests.",
"title": ""
},
{
"docid": "5a9993aec0c3290d41d91fd1e42c1857",
"text": "In this paper, we propose an approach to improve the accuracy of home activity estimation using device-free sensors in the home. This is achieved by transferring existing training data to a new household considering the differences between households. We assumed two scenarios in which we only have training data from other households with labels and we also have training labels for our own household, and proposed the method to compose supervised transfer between labeled data and unsupervised transfer between labeled unlabeled data for each scenario. To evaluate in realistic settings, we developed the system which consists of an application for use on tablet terminals, which continuously collect light and any optional sensor data, and a Web-based server system that stores the sensor data, estimates activities, provides visualization on users’ Web browsers, and enables users to edit the activity labels. Using the system, we gathered subjects from open called households during a period of approximately four months, and obtained approximately 11,745 activity inputs, approximately 7.14GB of sensor data, and power consumption data of 237,280 hours from 35 households. As a result of evaluation, our method outperformed naive methods, both in the first and second scenarios.",
"title": ""
}
] |
scidocsrr
|
0bae4685e259bf0ab03242f346601e9e
|
An Efficient TVL1 Algorithm for Deblurring Multichannel Images Corrupted by Impulsive Noise
|
[
{
"docid": "00bbfb52c5c54d83ea31fed1ec85b1a2",
"text": "We propose, analyze, and test an alternating minimization algorithm for recovering images from blurry and noisy observations with total variation (TV) regularization. This algorithm arises from a new half-quadratic model applicable to not only the anisotropic but also the isotropic forms of TV discretizations. The per-iteration computational complexity of the algorithm is three fast Fourier transforms. We establish strong convergence properties for the algorithm including finite convergence for some variables and relatively fast exponential (or q-linear in optimization terminology) convergence for the others. Furthermore, we propose a continuation scheme to accelerate the practical convergence of the algorithm. Extensive numerical results show that our algorithm performs favorably in comparison to several state-of-the-art algorithms. In particular, it runs orders of magnitude faster than the lagged diffusivity algorithm for TV-based deblurring. Some extensions of our algorithm are also discussed.",
"title": ""
}
] |
[
{
"docid": "3c36004741028267e2c12938f112a584",
"text": "As autonomy becomes prevalent in many applications, ranging from recommendation systems to fully autonomous vehicles, there is an increased need to provide safety guarantees for such systems. The problem is difficult, as these are large, complex systems which operate in uncertain environments, requiring data-driven machine-learning components. However, learning techniques such as Deep Neural Networks, widely used today, are inherently unpredictable and lack the theoretical foundations to provide strong assurance guarantees. We present a compositional approach for the scalable, formal verification of autonomous systems that contain Deep Neural Network components. The approach uses assumeguarantee reasoning whereby contracts, encoding the input-output behavior of individual components, allow the designer to model and incorporate the behavior of the learning-enabled components working side-by-side with the other components. We illustrate the approach on an example taken from the autonomous vehicles domain.",
"title": ""
},
{
"docid": "82f38828416d08bbb6ee195c3ca071eb",
"text": "Real-time ride-sharing applications (e.g., Uber and Lyft) are very popular in recent years. Motivated by the ride-sharing application, we propose a new type of query in road networks, called the optimal multi-meeting-point route (OMMPR) query. Given a road network G, a source nodes, a target node t, and a set of query nodes U, the OMMPR query aims at finding the best route starting from s and ending at t such that the weighted average cost between the cost of the route and the total cost of the shortest paths from every query node to the route is minimized. We show that the problem of computing the OMMPR query is NP-hard. To answer the OMMPR query efficiently, we propose two novel parameterized solutions based on dynamic programming (DP), with the number of query nodes l (i.e., l = |U|) as a parameter, which is typically very small in practice. The two proposed parameterized algorithms run in O(3l · m + 2l · n · (l + log (n))) and O(2l · (m + n · (l + log (n)))) time, respectively, where n and m denote the number of nodes and edges in graph G, thus they are tractable in practice. To reduce the search space of the DP-based algorithms, we propose two novel optimized algorithms based on bidirectional DP and a carefully-designed lower bounding technique. We conduct extensive experimental studies on four large real-world road networks, and the results demonstrate the efficiency of the proposed algorithms.",
"title": ""
},
{
"docid": "e4d1053a64a09a02f4890af66b28bbba",
"text": "Branchio-oculo-facial syndrome (BOFS) is a rare autosomal dominant condition with variable expressivity, caused by mutations in the TFAP2A gene. We report a three generational family with four affected individuals. The consultand has typical features of BOFS including infra-auricular skin nodules, coloboma, lacrimal duct atresia, cleft lip, conductive hearing loss and typical facial appearance. She also exhibited a rare feature of preaxial polydactyly. Her brother had a lethal phenotype with multiorgan failure. We also report a novel variant in TFAP2A gene. This family highlights the variable severity of BOFS and, therefore, the importance of informed genetic counselling in families with BOFS.",
"title": ""
},
{
"docid": "0f122797e9102c6bab57e64176ee5e84",
"text": "We present an unsupervised learning framework for the task of monocular depth and camera motion estimation from unstructured video sequences. In common with recent work [10, 14, 16], we use an end-to-end learning approach with view synthesis as the supervisory signal. In contrast to the previous work, our method is completely unsupervised, requiring only monocular video sequences for training. Our method uses single-view depth and multiview pose networks, with a loss based on warping nearby views to the target using the computed depth and pose. The networks are thus coupled by the loss during training, but can be applied independently at test time. Empirical evaluation on the KITTI dataset demonstrates the effectiveness of our approach: 1) monocular depth performs comparably with supervised methods that use either ground-truth pose or depth for training, and 2) pose estimation performs favorably compared to established SLAM systems under comparable input settings.",
"title": ""
},
{
"docid": "f4cb0eb6d39c57779cf9aa7b13abef14",
"text": "Algorithms that learn to generate data whose distributions match that of the training data, such as generative adversarial networks (GANs), have been a focus of much recent work in deep unsupervised learning. Unfortunately, GAN models have drawbacks, such as instable training due to the minmax optimization formulation and the issue of zero gradients. To address these problems, we explore and develop a new family of nonparametric objective functions and corresponding training algorithms to train a DNN generator that learn the probability distribution of the training data. Preliminary results presented in the paper demonstrate that the proposed approach converges faster and the trained models provide very good quality results even with a small number of iterations. Special cases of our formulation yield new algorithms for the Wasserstein and the MMD metrics. We also develop a new algorithm based on the Prokhorov metric between distributions, which we believe can provide promising results on certain kinds of data. We conjecture that the nonparametric approach for training DNNs can provide a viable alternative to the popular GAN formulations.",
"title": ""
},
{
"docid": "059aed9f2250d422d76f3e24fd62bed8",
"text": "Single case studies led to the discovery and phenomenological description of Gelotophobia and its definition as the pathological fear of appearing to social partners as a ridiculous object (Titze 1995, 1996, 1997). The aim of the present study is to empirically examine the core assumptions about the fear of being laughed at in a sample comprising a total of 863 clinical and non-clinical participants. Discriminant function analysis yielded that gelotophobes can be separated from other shame-based neurotics, non-shamebased neurotics, and controls. Separation was best for statements specifically describing the gelotophobic symptomatology and less potent for more general questions describing socially avoidant behaviors. Factor analysis demonstrates that while Gelotophobia is composed of a set of correlated elements in homogenous samples, overall the concept is best conceptualized as unidimensional. Predicted and actual group membership converged well in a cross-classification (approximately 69% of correctly classified cases). Overall, it can be concluded that the fear of being laughed at varies tremendously among adults and might hold a key to understanding certain forms",
"title": ""
},
{
"docid": "293ee45d26440539398188cf086655c1",
"text": "This article reviews recent computer vision techniques used in the assessment of image aesthetic quality. Image aesthetic assessment aims at computationally distinguishing high-quality from low-quality photos based on photographic rules, typically in the form of binary classification or quality scoring. A variety of approaches has been proposed in the literature to try to solve this challenging problem. In this article, we summarize these approaches based on visual feature types (hand-crafted features and deep features) and evaluation criteria (data set characteristics and evaluation metrics). The main contributions and novelties of the reviewed approaches are highlighted and discussed. In addition, following the emergence of deep-learning techniques, we systematically evaluate recent deep-learning settings that are useful for developing a robust deep model for aesthetic scoring.",
"title": ""
},
{
"docid": "72682ac5c2ec0a1ad1f211f3de562062",
"text": "Red blood cell (RBC) aggregation is greatly affected by cell deformability and reduced deformability and increased RBC aggregation are frequently observed in hypertension, diabetes mellitus, and sepsis, thus measurement of both these parameters is essential. In this study, we investigated the effects of cell deformability and fibrinogen concentration on disaggregating shear stress (DSS). The DSS was measured with varying cell deformability and geometry. The deformability of cells was gradually decreased with increasing concentration of glutaraldehyde (0.001~0.005%) or heat treatment at 49.0°C for increasing time intervals (0~7 min), which resulted in a progressive increase in the DSS. However, RBC rigidification by either glutaraldehyde or heat treatment did not cause the same effect on RBC aggregation as deformability did. The effect of cell deformability on DSS was significantly increased with an increase in fibrinogen concentration (2~6 g/L). These results imply that reduced cell deformability and increased fibrinogen levels play a synergistic role in increasing DSS, which could be used as a novel independent hemorheological index to characterize microcirculatory diseases, such as diabetic complications with high sensitivity.",
"title": ""
},
{
"docid": "2526e181083af43aac08a77c67ec402f",
"text": "In its native Europe, the bumblebee, Bombus terrestris (L.) has co-evolved with a large array of parasites whose numbers are negatively linked to the genetic diversity of the colony. In Tasmania B. terrestris was first detected in 1992 and has since spread over much of the state. In order to understand the bee’s invasive success and as part of a wider study into the genetic diversity of bumblebees across Tasmania, we screened bees for co-invasions of ectoparasitic and endoparasitic mites, nematodes and micro-organisms, and searched their nests for brood parasites. The only bee parasite detected was the relatively benign acarid mite Kuzinia laevis (Dujardin) whose numbers per bee did not vary according to region. Nests supported no brood parasites, but did contain the pollen-feeding life stages of K. laevis. Upon summer-autumn collected drones and queens, mites were present on over 80% of bees, averaged ca. 350–400 per bee and were more abundant on younger bees. Nest searching spring queens had similar mite numbers to those collected in summer-autumn but mite numbers dropped significantly once spring queens began foraging for pollen. The average number of mites per queen bee was over 30 fold greater than that reported in Europe. Mite incidence and mite numbers were significantly lower on worker bees than drones or queens, being present on just 51% of bees and averaging 38 mites per bee. Our reported incidence of worker bee parasitism by this mite is 5–50 times higher than reported in Europe. That only one parasite species co-invaded Tasmania supports the notion that a small number of queens founded the Tasmanian population. However, it is clearly evident that both the bee in the absence of parasites, and the mite have been extraordinarily successful invaders.",
"title": ""
},
{
"docid": "1c04afe05954a425209aaf0267236255",
"text": "Twitter is an online social networking service where worldwide users publish their opinions on a variety of topics, discuss current issues, complain, and express positive or negative sentiment for products they use in daily life. Therefore, Twitter is a rich source of data for opinion mining and sentiment analysis. However, sentiment analysis for Twitter messages (tweets) is regarded as a challenging problem because tweets are short and informal. This paper focuses on this problem by the analyzing of symbols called emotion tokens, including emotion symbols (e.g. emoticons and emoji ideograms). According to observation, these emotion tokens are commonly used. They directly express one’s emotions regardless of his/her language, hence they have become a useful signal for sentiment analysis on multilingual tweets. The paper describes the approach to performing sentiment analysis, that is able to determine positive, negative and neutral sentiments for a tested topic.",
"title": ""
},
{
"docid": "329195d467c5084dcfeb5762e885aec2",
"text": "This paper provides an analysis of human mobility data in an urban area using the amount of available bikes in the stations of the community bicycle program Bicing in Barcelona. Based on data sampled from the operator’s website, it is possible to detect temporal and geographic mobility patterns within the city. These patterns are applied to predict the number of available bikes for any station someminutes/hours ahead. The predictions could be used to improve the bicycle programand the information given to the users via the Bicing website. © 2010 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "efb9686dbd690109e8e5341043648424",
"text": "Because of the precise temporal resolution of electrophysiological recordings, the event-related potential (ERP) technique has proven particularly valuable for testing theories of perception and attention. Here, I provide a brief tutorial on the ERP technique for consumers of such research and those considering the use of human electrophysiology in their own work. My discussion begins with the basics regarding what brain activity ERPs measure and why they are well suited to reveal critical aspects of perceptual processing, attentional selection, and cognition, which are unobservable with behavioral methods alone. I then review a number of important methodological issues and often-forgotten facts that should be considered when evaluating or planning ERP experiments.",
"title": ""
},
{
"docid": "36f73143b6f4d80e8f1d77505fabbfcf",
"text": "Progress of IoT and ubiquitous computing technologies has strong anticipation to realize smart services in households such as efficient energy-saving appliance control and elderly monitoring. In order to put those applications into practice, high-accuracy and low-cost in-home living activity recognition is essential. Many researches have tackled living activity recognition so far, but the following problems remain: (i)privacy exposure due to utilization of cameras and microphones; (ii) high deployment and maintenance costs due to many sensors used; (iii) burden to force the user to carry the device and (iv) wire installation to supply power and communication between sensor node and server; (v) few recognizable activities; (vi) low recognition accuracy. In this paper, we propose an in-home living activity recognition method to solve all the problems. To solve the problems (i)--(iv), our method utilizes only energy harvesting PIR and door sensors with a home server for data collection and processing. The energy harvesting sensor has a solar cell to drive the sensor and wireless communication modules. To solve the problems (v) and (vi), we have tackled the following challenges: (a) determining appropriate features for training samples; and (b) determining the best machine learning algorithm to achieve high recognition accuracy; (c) complementing the dead zone of PIR sensor semipermanently. We have conducted experiments with the sensor by five subjects living in a home for 2-3 days each. As a result, the proposed method has achieved F-measure: 62.8% on average.",
"title": ""
},
{
"docid": "be76c7f877ad43668fe411741478c43b",
"text": "With the surging of smartphone sensing, wireless networking, and mobile social networking techniques, Mobile Crowd Sensing and Computing (MCSC) has become a promising paradigm for cross-space and large-scale sensing. MCSC extends the vision of participatory sensing by leveraging both participatory sensory data from mobile devices (offline) and user-contributed data from mobile social networking services (online). Further, it explores the complementary roles and presents the fusion/collaboration of machine and human intelligence in the crowd sensing and computing processes. This article characterizes the unique features and novel application areas of MCSC and proposes a reference framework for building human-in-the-loop MCSC systems. We further clarify the complementary nature of human and machine intelligence and envision the potential of deep-fused human--machine systems. We conclude by discussing the limitations, open issues, and research opportunities of MCSC.",
"title": ""
},
{
"docid": "b0382aa0f8c8171b78dba1c179554450",
"text": "This paper is concerned with the hard thresholding operator which sets all but the k largest absolute elements of a vector to zero. We establish a tight bound to quantitatively characterize the deviation of the thresholded solution from a given signal. Our theoretical result is universal in the sense that it holds for all choices of parameters, and the underlying analysis depends only on fundamental arguments in mathematical optimization. We discuss the implications for two domains: Compressed Sensing. On account of the crucial estimate, we bridge the connection between the restricted isometry property (RIP) and the sparsity parameter for a vast volume of hard thresholding based algorithms, which renders an improvement on the RIP condition especially when the true sparsity is unknown. This suggests that in essence, many more kinds of sensing matrices or fewer measurements are admissible for the data acquisition procedure. Machine Learning. In terms of large-scale machine learning, a significant yet challenging problem is learning accurate sparse models in an efficient manner. In stark contrast to prior work that attempted the `1-relaxation for promoting sparsity, we present a novel stochastic algorithm which performs hard thresholding in each iteration, hence ensuring such parsimonious solutions. Equipped with the developed bound, we prove the global linear convergence for a number of prevalent statistical models under mild assumptions, even though the problem turns out to be non-convex.",
"title": ""
},
{
"docid": "3832812ee527c811a504c10619c59ee3",
"text": "The growing need of the driving public for accurate traffic information has spurred the deployment of large scale dedicated monitoring infrastructure systems, which mainly consist in the use of inductive loop detectors and video cameras. On-board electronic devices have been proposed as an alternative traffic sensing infrastructure, as they usually provide a cost-effective way to collect traffic data, leveraging existing communication infrastructure such as the cellular phone network. A traffic monitoring system based on GPS-enabled smartphones exploits the extensive coverage provided by the cellular network, the high accuracy in position and velocity measurements provided by GPS devices, and the existing infrastructure of the communication network. This article presents a field experiment nicknamed Mobile Century, which was conceived as a proof of concept of such a system. Mobile Century included 100 vehicles carrying a GPS-enabled Nokia N95 phone driving loops on a 10-mile stretch of I-880 near Union City, California, for 8 hours. Data were collected using virtual trip lines, which are geographical markers stored in the handset that probabilistically trigger position and speed updates when the handset crosses them. The proposed prototype system provided sufficient data for traffic monitoring purposes while managing the privacy of participants. The data obtained in the experiment were processed in real-time and successfully broadcast on the internet, demonstrating the feasibility of the proposed system for real-time traffic monitoring. Results suggest that a 2-3% penetration of cell phones in the driver population is enough to provide accurate measurements of the velocity of the traffic flow.",
"title": ""
},
{
"docid": "d566e25ed5ff6e479887a350572cadad",
"text": "Lorentz reciprocity is a fundamental characteristic of the vast majority of electronic and photonic structures. However, non-reciprocal components such as isolators, circulators and gyrators enable new applications ranging from radio frequencies to optical frequencies, including full-duplex wireless communication and on-chip all-optical information processing. Such components today dominantly rely on the phenomenon of Faraday rotation in magneto-optic materials. However, they are typically bulky, expensive and not suitable for insertion in a conventional integrated circuit. Here we demonstrate magnetic-free linear passive non-reciprocity based on the concept of staggered commutation. Commutation is a form of parametric modulation with very high modulation ratio. We observe that staggered commutation enables time-reversal symmetry breaking within very small dimensions (λ/1,250 × λ/1,250 in our device), resulting in a miniature radio-frequency circulator that exhibits reduced implementation complexity, very low loss, strong non-reciprocity, significantly enhanced linearity and real-time reconfigurability, and is integrated in a conventional complementary metal-oxide-semiconductor integrated circuit for the first time.",
"title": ""
},
{
"docid": "ad25cdd1bc4012d6dae8029654c512bd",
"text": "AIM\nThe purpose of this study was to evaluate factors associated with the fill of inter-dental spaces by gingival papillae.\n\n\nMATERIALS AND METHODS\nNinety-six adult subjects were evaluated. Papilla score (PS), tooth form/shape, interproximal contact length and gingival thickness were recorded for 672 maxillary anterior and first pre-molar interproximal sites. Statistical analyses included a non-parametric chi(2) test, anova, the Mixed Procedure for SAS and Pearson's correlation coefficient (r).\n\n\nRESULTS\nPapilla deficiency was more frequent in older subjects (p<0.05), as papilla height decreased 0.012 mm with each year of increasing age (p<0.05). Competent papillae (complete fill inter-dentally) were associated with: (1) crown width: length >or=0.87; (2) proximal contact length >or=2.8 mm; (3) bone crest-contact point <or=5 mm; and (4) interproximal gingival tissue thickness >or=1.5 mm. Gingival thickness correlated negatively with PS (r=-0.37 to -0.54) and positively with tissue height (r=0.23-0.43). Tooth form (i.e. crown width to length ratio) correlated negatively with PS (r=-0.37 to -0.61). Other parameters failed to show any significant effects.\n\n\nCONCLUSIONS\nGingival papilla appearance was associated significantly with subject age, tooth form/shape, proximal contact length, crestal bone height and interproximal gingival thickness.",
"title": ""
},
{
"docid": "b278b9e532600ea1da8c19e07807d899",
"text": "Humans are able to explain their reasoning. On the contrary, deep neural networks are not. This paper attempts to bridge this gap by introducing a new way to design interpretable neural networks for classification, inspired by physiological evidence of the human visual system’s inner-workings. This paper proposes a neural network design paradigm, termed InterpNET, which can be combined with any existing classification architecture to generate natural language explanations of the classifications. The success of the module relies on the assumption that the network’s computation and reasoning is represented in its internal layer activations. While in principle InterpNET could be applied to any existing classification architecture, it is evaluated via an image classification and explanation task. Experiments on a CUB bird classification and explanation dataset show qualitatively and quantitatively that the model is able to generate high-quality explanations. While the current state-of-the-art METEOR score on this dataset is 29.2, InterpNET achieves a much higher METEOR score of 37.9.",
"title": ""
},
{
"docid": "dd9a09431e7816e6774aaf7b2ce33a6f",
"text": "Image based social networks are among the most popular social networking services in recent years. With tremendous images uploaded everyday, understanding users’ preferences to the user-generated images and recommending them to users have become an urgent need. However, this is a challenging task. On one hand, we have to overcome the extremely data sparsity issue in image recommendation. On the other hand, we have to model the complex aspects that influence users’ preferences to these highly subjective content from the heterogeneous data. In this paper, we develop an explainable social contextual image recommendation model to simultaneously explain and predict users’ preferences to images. Specifically, in addition to user interest modeling in the standard recommendation, we identify three key aspects that affect each user’s preference on the social platform, where each aspect summarizes a contextual representation from the complex relationships between users and images. We design a hierarchical attention model in recommendation process given the three contextual aspects. Particularly, the bottom layered attention networks learn to select informative elements of each aspect from heterogeneous data, and the top layered attention network learns to score the aspect importance of the three identified aspects for each user. In this way, we could overcome the data sparsity issue by leveraging the social contextual aspects from heterogeneous data, and explain the underlying reasons for each user’s behavior with the learned hierarchial attention scores. Extensive experimental results on realworld datasets clearly show the superiority of our proposed model.",
"title": ""
}
] |
scidocsrr
|
ce7b5032d44053e5c0e7850ddf5b079b
|
Got issues? Who cares about it? A large scale investigation of issue trackers from GitHub
|
[
{
"docid": "65385d7aee49806476dc913f6768fc43",
"text": "Software developers spend a significant portion of their resources handling user-submitted bug reports. For software that is widely deployed, the number of bug reports typically outstrips the resources available to triage them. As a result, some reports may be dealt with too slowly or not at all. \n We present a descriptive model of bug report quality based on a statistical analysis of surface features of over 27,000 publicly available bug reports for the Mozilla Firefox project. The model predicts whether a bug report is triaged within a given amount of time. Our analysis of this model has implications for bug reporting systems and suggests features that should be emphasized when composing bug reports. \n We evaluate our model empirically based on its hypothetical performance as an automatic filter of incoming bug reports. Our results show that our model performs significantly better than chance in terms of precision and recall. In addition, we show that our modelcan reduce the overall cost of software maintenance in a setting where the average cost of addressing a bug report is more than 2% of the cost of ignoring an important bug report.",
"title": ""
},
{
"docid": "1313fbdd0721b58936a05da5080239df",
"text": "Bug tracking systems are valuable assets for managing maintenance activities. They are widely used in open-source projects as well as in the software industry. They collect many different kinds of issues: requests for defect fixing, enhancements, refactoring/restructuring activities and organizational issues. These different kinds of issues are simply labeled as \"bug\" for lack of a better classification support or of knowledge about the possible kinds.\n This paper investigates whether the text of the issues posted in bug tracking systems is enough to classify them into corrective maintenance and other kinds of activities.\n We show that alternating decision trees, naive Bayes classifiers, and logistic regression can be used to accurately distinguish bugs from other kinds of issues. Results from empirical studies performed on issues for Mozilla, Eclipse, and JBoss indicate that issues can be classified with between 77% and 82% of correct decisions.",
"title": ""
}
] |
[
{
"docid": "b9f774ccd37e0bf0e399dd2d986f258d",
"text": "Predicting the final state of a running process, the remaining time to completion or the next activity of a running process are important aspects of runtime process management. Runtime management requires the ability to identify processes that are at risk of not meeting certain criteria in order to offer case managers decision information for timely intervention. This in turn requires accurate prediction models for process outcomes and for the next process event, based on runtime information available at the prediction and decision point. In this paper, we describe an initial application of deep learning with recurrent neural networks to the problem of predicting the next process event. This is both a novel method in process prediction, which has previously relied on explicit process models in the form of Hidden Markov Models (HMM) or annotated transition systems, and also a novel application for deep learning methods.",
"title": ""
},
{
"docid": "98df90734e276e0cf020acfdcaa9b4b4",
"text": "High parallel framework has been proved to be very suitable for graph processing. There are various work to optimize the implementation in FPGAs, a pipeline parallel device. The key to make use of the parallel performance of FPGAs is to process graph data in pipeline model and take advantage of on-chip memory to realize necessary locality process. This paper proposes a modularize graph processing framework, which focus on the whole executing procedure with the extremely different degree of parallelism. The framework has three contributions. First, the combination of vertex-centric and edge-centric processing framework can been adjusting in the executing procedure to accommodate top-down algorithm and bottom-up algorithm. Second, owing to the pipeline parallel and finite on-chip memory accelerator, the novel edge-block, a block consist of edges vertex, achieve optimizing the way to utilize the on-chip memory to group the edges and stream the edges in a block to realize the stream pattern to pipeline parallel processing. Third, depending to the analysis of the block structure of nature graph and the executing characteristics during graph processing, we design a novel conversion dispatcher to change processing module, to match the corresponding exchange point. Our evaluation with four graph applications on five diverse scale graph shows that .",
"title": ""
},
{
"docid": "75060c7027db4e75bc42f3f3c84cad9b",
"text": "In this paper, we investigate whether superior performance on corporate social responsibility (CSR) strategies leads to better access to finance. We hypothesize that better access to finance can be attributed to a) reduced agency costs due to enhanced stakeholder engagement and b) reduced informational asymmetry due to increased transparency. Using a large cross-section of firms, we find that firms with better CSR performance face significantly lower capital constraints. Moreover, we provide evidence that both of the hypothesized mechanisms, better stakeholder engagement and transparency around CSR performance, are important in reducing capital constraints. The results are further confirmed using several alternative measures of capital constraints, a paired analysis based on a ratings shock to CSR performance, an instrumental variables and also a simultaneous equations approach. Finally, we show that the relation is driven by both the social and the environmental dimension of CSR.",
"title": ""
},
{
"docid": "6f31b0ba60dccb6f1c4ac3e4161f8a44",
"text": "In this work, we propose an alternative solution for parallel wave generation by WaveNet. In contrast to parallel WaveNet (Oord et al., 2018), we distill a Gaussian inverse autoregressive flow from the autoregressive WaveNet by minimizing a novel regularized KL divergence between their highly-peaked output distributions. Our method computes the KL divergence in closed-form, which simplifies the training algorithm and provides very efficient distillation. In addition, we propose the first text-to-wave neural architecture for speech synthesis, which is fully convolutional and enables fast end-to-end training from scratch. It significantly outperforms the previous pipeline that connects a text-to-spectrogram model to a separately trained WaveNet (Ping et al., 2018). We also successfully distill a parallel waveform synthesizer conditioned on the hidden representation in this end-to-end model. 2",
"title": ""
},
{
"docid": "302079b366d2bc0c951e3c7d8eb30815",
"text": "The rapid traffic growth and ubiquitous access requirements make it essential to explore the next generation (5G) wireless communication networks. In the current 5G research area, non-orthogonal multiple access has been proposed as a paradigm shift of physical layer technologies. Among all the existing non-orthogonal technologies, the recently proposed sparse code multiple access (SCMA) scheme is shown to achieve a better link level performance. In this paper, we extend the study by proposing an unified framework to analyze the energy efficiency of SCMA scheme and a low complexity decoding algorithm which is critical for prototyping. We show through simulation and prototype measurement results that SCMA scheme provides extra multiple access capability with reasonable complexity and energy consumption, and hence, can be regarded as an energy efficient approach for 5G wireless communication systems.",
"title": ""
},
{
"docid": "cd71e990546785bd9ba0c89620beb8d2",
"text": "Crime is one of the most predominant and alarming aspects in our society and its prevention is a vital task. Crime analysis is a systematic way of detecting and investigating patterns and trends in crime. In this work, we use various clustering approaches of data mining to analyse the crime data of Tamilnadu. The crime data is extracted from National Crime Records Bureau (NCRB) of India. It consists of crime information about six cities namely Chennai, Coimbatore, Salem, Madurai, Thirunelvelli and Thiruchirapalli from the year 2000–2014 with 1760 instances and 9 attributes to represent the instances. K-Means clustering, Agglomerative clustering and Density Based Spatial Clustering with Noise (DBSCAN) algorithms are used to cluster crime activities based on some predefined cases and the results of these clustering are compared to find the best suitable clustering algorithm for crime detection. The result of K-Means clustering algorithm is visualized using Google Map for interactive and easy understanding. The K-Nearest Neighbor (KNN) classification is used for crime prediction. The performance of each clustering algorithms are evaluated using the metrics such as precision, recall and F-measure, and the results are compared. This work helps the law enforcement agencies to predict and detect crimes in Tamilnadu with improved accuracy and thus reduces the crime rate.",
"title": ""
},
{
"docid": "36b46a2bf4b46850f560c9586e91d27b",
"text": "Promoting pro-environmental behaviour amongst urban dwellers is one of today's greatest sustainability challenges. The aim of this study is to test whether an information intervention, designed based on theories from environmental psychology and behavioural economics, can be effective in promoting recycling of food waste in an urban area. To this end we developed and evaluated an information leaflet, mainly guided by insights from nudging and community-based social marketing. The effect of the intervention was estimated through a natural field experiment in Hökarängen, a suburb of Stockholm city, Sweden, and was evaluated using a difference-in-difference analysis. The results indicate a statistically significant increase in food waste recycled compared to a control group in the research area. The data analysed was on the weight of food waste collected from sorting stations in the research area, and the collection period stretched for almost 2 years, allowing us to study the short- and long term effects of the intervention. Although the immediate positive effect of the leaflet seems to have attenuated over time, results show that there was a significant difference between the control and the treatment group, even 8 months after the leaflet was distributed. Insights from this study can be used to guide development of similar pro-environmental behaviour interventions for other urban areas in Sweden and abroad, improving chances of reaching environmental policy goals.",
"title": ""
},
{
"docid": "fcbddff6b048bc93fd81e363d08adc6d",
"text": "Question Answering (QA) system is the task where arbitrary question IS posed in the form of natural language statements and a brief and concise text returned as an answer. Contrary to search engines where a long list of relevant documents returned as a result of a query, QA system aims at providing the direct answer or passage containing the answer. We propose a general purpose question answering system which can answer wh-interrogated questions. This system is using Wikipedia data as its knowledge source. We have implemented major components of a QA system which include challenging tasks of Named Entity Tagging, Question Classification, Information Retrieval and Answer Extraction. Implementation of state-of-the-art Entity Tagging mechanism has helped identify entities where systems like OpenEphyra or DBpedia spotlight have failed. The information retrieval task includes development of a framework to extract tabular information known as Infobox from Wikipedia pages which has ensured availability of latest updated information. Answer Extraction module has implemented an attributes mapping mechanism which is helpful to extract answer from data. The system is comparable in results with other available general purpose QA systems.",
"title": ""
},
{
"docid": "da61794b9ffa1f6f4bc39cef9655bf77",
"text": "This manuscript analyzes the effects of design parameters, such as aspect ratio, doping concentration and bias, on the performance of a general CMOS Hall sensor, with insight on current-related sensitivity, power consumption, and bandwidth. The article focuses on rectangular-shaped Hall probes since this is the most general geometry leading to shape-independent results. The devices are analyzed by means of 3D-TCAD simulations embedding galvanomagnetic transport model, which takes into account the Lorentz force acting on carriers due to a magnetic field. Simulation results define a set of trade-offs and design rules that can be used by electronic designers to conceive their own Hall probes.",
"title": ""
},
{
"docid": "3ead21bbf988910cf35f39a8aeba9934",
"text": "The description of the operation technique and retrospective review of 15 consecutive patients who were treated by posterior sacral dome resection and single-stage reduction with pedicle screw fixation for high-grade, high-dysplastic spondylolisthesis. All the patients had high-grade, high-dysplatic spondylolisthesis L5 and were treated by posterior sacral dome resection and posterior single-stage reduction from L4–S1. The average age at the time of surgery was 17.3 (11–28) years. The average follow-up time is 5.5 (2–11.6) years. Clinical and radiologica data were retrospectively reviewed. Spondylolisthesis was reduced from average 99% preoperative to 29% at the last follow-up. L5 incidence improved from 74° to 56°, the lumbosacral angle improved from 15° kyphosis to 6° lordosis, lumbar lordosis decreased from 69° to 53° from preoperative to the last follow-up. While pelvic incidence of 77° remained unchanged, sacral slope decreased from 51° to 46° and pelvic tilt increased from 25° to 30°. Clinical outcome was subjectively rated to be much better than before surgery by 14 out of 15 patients. Four out of 15 patients had temporary sensory impairment of the L5 nerve root which resolved completely within 12 weeks. There were no permanent neurological complications or no pseudarthrosis. The sacral dome resection is a shortening osteotomy of the lumbosacral spine which allows a single-stage reduction of L5 without lengthening of lumbosacral region in high-grade spondylolisthesis, which helps to avoid neurological complications. This is a safe surgical technique resulting in a good multidimensional deformity correction and restoration of spino-pelvic alignment towards normal values with a satisfactory clinical outcome.",
"title": ""
},
{
"docid": "4dda22757c56723b434afeab7457a6d4",
"text": "The treatment of incomplete data is an important step in the pre-processing of data. We propose a novel nonparametric algorithm Generalized regression neural network Ensemble for Multiple Imputation (GEMI). We also developed a single imputation (SI) version of this approach—GESI. We compare our algorithms with 25 popular missing data imputation algorithms on 98 real-world and synthetic terms of (i) the accuracy of output classification: three classifiers (a generalized regression neural network, a multilayer perceptron and a logistic regression technique) are separately trained and tested on the dataset imputed with each imputation algorithm, (ii) interval analysis with missing observations and (iii) point estimation accuracy of the missing value imputation. GEMI outperformed GESI and all the conventional imputation algorithms in terms of all three criteria considered. & 2010 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "f24bba45a1905cd4658d52bc7e9ee046",
"text": "In continuous action domains, standard deep reinforcement learning algorithms like DDPG suffer from inefficient exploration when facing sparse or deceptive reward problems. Conversely, evolutionary and developmental methods focusing on exploration like Novelty Search, QualityDiversity or Goal Exploration Processes explore more robustly but are less efficient at fine-tuning policies using gradient-descent. In this paper, we present the GEP-PG approach, taking the best of both worlds by sequentially combining a Goal Exploration Process and two variants of DDPG. We study the learning performance of these components and their combination on a low dimensional deceptive reward problem and on the larger Half-Cheetah benchmark. We show that DDPG fails on the former and that GEP-PG improves over the best DDPG variant in both environments. Supplementary videos and discussion can be found at frama.link/gep_pg, the code at github.com/flowersteam/geppg.",
"title": ""
},
{
"docid": "72138b8acfb7c9e11cfd92c0b78a737c",
"text": "We study the task of entity linking for tweets, which tries to associate each mention in a tweet with a knowledge base entry. Two main challenges of this task are the dearth of information in a single tweet and the rich entity mention variations. To address these challenges, we propose a collective inference method that simultaneously resolves a set of mentions. Particularly, our model integrates three kinds of similarities, i.e., mention-entry similarity, entry-entry similarity, and mention-mention similarity, to enrich the context for entity linking, and to address irregular mentions that are not covered by the entity-variation dictionary. We evaluate our method on a publicly available data set and demonstrate the effectiveness of our method.",
"title": ""
},
{
"docid": "6cbcd5288423895c4aeff8524ca5ac6c",
"text": "We report a quantitative analysis of the cross-utterance coordination observed in child-directed language, where successive utterances often overlap in a manner that makes their constituent structure more prominent, and describe the application of a recently published unsupervised algorithm for grammar induction to the largest available corpus of such language, producing a grammar capable of accepting and generating novel wellformed sentences. We also introduce a new corpus-based method for assessing the precision and recall of an automatically acquired generative grammar without recourse to human judgment. The present work sets the stage for the eventual development of more powerful unsupervised algorithms for language acquisition, which would make use of the coordination structures present in natural child-directed speech.",
"title": ""
},
{
"docid": "ce1c06b2e0fde07f29a19bdbdd20a894",
"text": "Incumbent firms struggle with new forms of competition in today’s increasingly digital environments. To leverage the benefits of innovation ecosystems they often shift focus from products to platforms. However, existing research provides limited insight into how firms actually implement this shift. Addressing this void, we have conducted a comparative case study where we adopt the concept of platform thinking to comprehend what capabilities incumbents need when engaging in innovation ecosystems and how those capabilities are developed.",
"title": ""
},
{
"docid": "2c79e4e8563b3724014a645340b869ce",
"text": "Development of linguistic technologies and penetration of social media provide powerful possibilities to investigate users' moods and psychological states of people. In this paper we discussed possibility to improve accuracy of stock market indicators predictions by using data about psychological states of Twitter users. For analysis of psychological states we used lexicon-based approach, which allow us to evaluate presence of eight basic emotions in more than 755 million tweets. The application of Support Vectors Machine and Neural Networks algorithms to predict DJIA and S&P500 indicators are discussed.",
"title": ""
},
{
"docid": "152d1db97d048e1e9d0be1ab2ffe9e7d",
"text": "Recently, distributed processing of large dynamic graphs has become very popular, especially in certain domains such as social network analysis, Web graph analysis and spatial network analysis. In this context, many distributed/parallel graph processing systems have been proposed, such as Pregel, GraphLab, and Trinity. These systems can be divided into two categories: (1) vertex-centric and (2) block-centric approaches. In vertex-centric approaches, each vertex corresponds to a process, and message are exchanged among vertices. In block-centric approaches, the unit of computation is a block, a connected subgraph of the graph, and message exchanges occur among blocks. In this paper, we are considering the issues of scale and dynamism in the case of block-centric approaches. We present BLADYG, a block-centric framework that addresses the issue of dynamism in large-scale graphs. We present an implementation of BLADYG on top of AKKA framework. We experimentally evaluate the performance of the proposed framework.",
"title": ""
},
{
"docid": "172aaf47ee3f89818abba35a463ecc76",
"text": "I examined the relationship of recalled and diary recorded frequency of penile-vaginal intercourse (FSI), noncoital partnered sexual activity, and masturbation to measured waist and hip circumference in 120 healthy adults aged 19-38. Slimmer waist (in men and in the sexes combined) and slimmer hips (in men and women) were associated with greater FSI. Slimmer waist and hips were associated with rated importance of intercourse for men. Noncoital partnered sexual activity had a less consistent association with slimness. Slimmer waist and hips were associated with less masturbation (in men and in the sexes combined). I discuss the results in terms of differences between different sexual behaviors, attractiveness, emotional relatedness, physical sensitivity, sexual dysfunction, sociobiology, psychopharmacological aspects of excess fat and carbohydrate consumption, and implications for sex therapy.",
"title": ""
},
{
"docid": "394fa55cbbaa5afc7b4cf9b316b4d2ff",
"text": "Paralysis following spinal cord injury, brainstem stroke, amyotrophic lateral sclerosis and other disorders can disconnect the brain from the body, eliminating the ability to perform volitional movements. A neural interface system could restore mobility and independence for people with paralysis by translating neuronal activity directly into control signals for assistive devices. We have previously shown that people with long-standing tetraplegia can use a neural interface system to move and click a computer cursor and to control physical devices. Able-bodied monkeys have used a neural interface system to control a robotic arm, but it is unknown whether people with profound upper extremity paralysis or limb loss could use cortical neuronal ensemble signals to direct useful arm actions. Here we demonstrate the ability of two people with long-standing tetraplegia to use neural interface system-based control of a robotic arm to perform three-dimensional reach and grasp movements. Participants controlled the arm and hand over a broad space without explicit training, using signals decoded from a small, local population of motor cortex (MI) neurons recorded from a 96-channel microelectrode array. One of the study participants, implanted with the sensor 5 years earlier, also used a robotic arm to drink coffee from a bottle. Although robotic reach and grasp actions were not as fast or accurate as those of an able-bodied person, our results demonstrate the feasibility for people with tetraplegia, years after injury to the central nervous system, to recreate useful multidimensional control of complex devices directly from a small sample of neural signals.",
"title": ""
}
] |
scidocsrr
|
45e59938a83ec258ff5663a5b01a77f8
|
Control of Tendon-Driven Soft Foam Robot Hands
|
[
{
"docid": "f4abfe0bb969e2a6832fa6317742f202",
"text": "We built a highly compliant, underactuated, robust and at the same time dexterous anthropomorphic hand. We evaluate its dexterous grasping capabilities by implementing the comprehensive Feix taxonomy of human grasps and by assessing the dexterity of its opposable thumb using the Kapandji test. We also illustrate the hand’s payload limits and demonstrate its grasping capabilities in real-world grasping experiments. To support our claim that compliant structures are beneficial for dexterous grasping, we compare the dimensionality of control necessary to implement the diverse grasp postures with the dimensionality of the grasp postures themselves. We find that actuation space is smaller than posture space and explain the difference with the mechanic interaction between hand and grasped object. Additional desirable properties are derived from using soft robotics technology: the hand is robust to impact and blunt collisions, inherently safe, and not affected by dirt, dust, or liquids. Furthermore, the hand is simple and inexpensive to manufacture.",
"title": ""
},
{
"docid": "fd9411cfa035139010be0935d9e52865",
"text": "This paper presents a robotic manipulation system capable of autonomously positioning a multi-segment soft fluidic elastomer robot in three dimensions. Specifically, we present an extremely soft robotic manipulator morphology that is composed entirely from low durometer elastomer, powered by pressurized air, and designed to be both modular and durable. To understand the deformation of a single arm segment, we develop and experimentally validate a static deformation model. Then, to kinematically model the multi-segment manipulator, we use a piece-wise constant curvature assumption consistent with more traditional continuum manipulators. In addition, we define a complete fabrication process for this new manipulator and use this process to make multiple functional prototypes. In order to power the robot’s spatial actuation, a high capacity fluidic drive cylinder array is implemented, providing continuously variable, closed-circuit gas delivery. Next, using real-time data from a vision system, we develop a processing and control algorithm that generates realizable kinematic curvature trajectories and controls the manipulator’s configuration along these trajectories. Lastly, we experimentally demonstrate new capabilities offered by this soft fluidic elastomer manipulation system such as entering and advancing through confined three-dimensional environments as well as conforming to goal shape-configurations within a sagittal plane under closed-loop control.",
"title": ""
},
{
"docid": "19695936a91f2632911c9f1bee48c11d",
"text": "The purpose of this technical report is two-fold. First of all, it introduces a suite of challenging continuous control tasks (integrated with OpenAI Gym) based on currently existing robotics hardware. The tasks include pushing, sliding and pick & place with a Fetch robotic arm as well as in-hand object manipulation with a Shadow Dexterous Hand. All tasks have sparse binary rewards and follow a Multi-Goal Reinforcement Learning (RL) framework in which an agent is told what to do using an additional input. The second part of the paper presents a set of concrete research ideas for improving RL algorithms, most of which are related to Multi-Goal RL and Hindsight Experience Replay. 1 Environments All environments are released as part of OpenAI Gym1 (Brockman et al., 2016) and use the MuJoCo (Todorov et al., 2012) physics engine for fast and accurate simulation. A video presenting the new environments can be found at https://www.youtube.com/watch?v=8Np3eC_PTFo. 1.1 Fetch environments The Fetch environments are based on the 7-DoF Fetch robotics arm,2 which has a two-fingered parallel gripper. They are very similar to the tasks used in Andrychowicz et al. (2017) but we have added an additional reaching task and the pick & place task is a bit different.3 In all Fetch tasks, the goal is 3-dimensional and describes the desired position of the object (or the end-effector for reaching). Rewards are sparse and binary: The agent obtains a reward of −1 if the object is not at the target location (within a tolerance of 5 cm) and 0 otherwise. Actions are 4-dimensional: 3 dimensions specify the desired gripper movement in Cartesian coordinates and the last dimension controls opening and closing of the gripper. We apply the same action in 20 subsequent simulator steps (with ∆t = 0.002 each) before returning control to the agent, i.e. the agent’s action frequency is f = 25 Hz. Observations include the Cartesian position of the gripper, its linear velocity as well as the position and linear velocity of the robot’s gripper. If an object is present, we also include the object’s Cartesian position and rotation using Euler angles, its linear and angular velocities, as well as its position and linear velocities relative to gripper. https://github.com/openai/gym http://fetchrobotics.com/ In Andrychowicz et al. (2017) training on this task relied on starting some of the training episodes from a state in which the box is already grasped. This is not necessary for successful training if the target position of the box is sometimes in the air and sometimes on the table and we do not use this technique anymore. ar X iv :1 80 2. 09 46 4v 1 [ cs .L G ] 2 6 Fe b 20 18 Figure 1: The four proposed Fetch environments: FetchReach, FetchPush, FetchSlide, and FetchPickAndPlace. Reaching (FetchReach) The task is to move the gripper to a target position. This task is very easy to learn and is therefore a suitable benchmark to ensure that a new idea works at all.4 Pushing (FetchPush) A box is placed on a table in front of the robot and the task is to move it to a target location on the table. The robot fingers are locked to prevent grasping. The learned behavior is usually a mixture of pushing and rolling. Sliding (FetchSlide) A puck is placed on a long slippery table and the target position is outside of the robot’s reach so that it has to hit the puck with such a force that it slides and then stops at the target location due to friction. 
Pick & Place (FetchPickAndPlace) The task is to grasp a box and move it to the target location which may be located on the table surface or in the air above it. 1.2 Hand environments These environments are based on the Shadow Dexterous Hand, which is an anthropomorphic robotic hand with 24 degrees of freedom. Of those 24 joints, 20 can be controlled independently whereas the remaining ones are coupled joints. In all hand tasks, rewards are sparse and binary: The agent obtains a reward of −1 if the goal has not been achieved (within some task-specific tolerance) and 0 otherwise. Actions are 20-dimensional: We use absolute position control for all non-coupled joints of the hand. We apply the same action in 20 subsequent simulator steps (with ∆t = 0.002 each) before returning control to the agent, i.e. the agent’s action frequency is f = 25 Hz. Observations include the 24 positions and velocities of the robot’s joints. In the case of an object that is being manipulated, we also include its Cartesian position and rotation represented by a quaternion (hence 7-dimensional) as well as its linear and angular velocities. In the reaching task, we include the Cartesian position of all 5 fingertips. Reaching (HandReach) A simple task in which the goal is 15-dimensional and contains the target Cartesian position of each fingertip of the hand. Similarly to the FetchReach task, this task is relatively easy to learn. A goal is considered achieved if the mean distance between fingertips and their desired position is less than 1 cm. Block manipulation (HandManipulateBlock) In the block manipulation task, a block is placed on the palm of the hand. The task is to then manipulate the block such that a target pose is achieved. The goal is 7-dimensional and includes the target position (in Cartesian coordinates) and target rotation (in quaternions). We include multiple variants with increasing levels of difficulty: • HandManipulateBlockRotateZ Random target rotation around the z axis of the block. No target position. • HandManipulateBlockRotateParallel Random target rotation around the z axis of the block and axis-aligned target rotations for the x and y axes. No target position. • HandManipulateBlockRotateXYZ Random target rotation for all axes of the block. No target position. That being said, we have found that it is so easy that even partially broken implementations sometimes learn successful policies, so no conclusions should be drawn from this task alone. https://www.shadowrobot.com/products/dexterous-hand/",
"title": ""
},
{
"docid": "9970c9a191d9223448d205f0acec6976",
"text": "This paper presents the complete development and analysis of a soft robotic platform that exhibits peristaltic locomotion. The design principle is based on the antagonistic arrangement of circular and longitudinal muscle groups of Oligochaetes. Sequential antagonistic motion is achieved in a flexible braided mesh-tube structure using a nickel titanium (NiTi) coil actuators wrapped in a spiral pattern around the circumference. An enhanced theoretical model of the NiTi coil spring describes the combination of martensite deformation and spring elasticity as a function of geometry. A numerical model of the mesh structures reveals how peristaltic actuation induces robust locomotion and details the deformation by the contraction of circumferential NiTi actuators. Several peristaltic locomotion modes are modeled, tested, and compared on the basis of speed. Utilizing additional NiTi coils placed longitudinally, steering capabilities are incorporated. Proprioceptive potentiometers sense segment contraction, which enables the development of closed-loop controllers. Several appropriate control algorithms are designed and experimentally compared based on locomotion speed and energy consumption. The entire mechanical structure is made of flexible mesh materials and can withstand significant external impact during operation. This approach allows a completely soft robotic platform by employing a flexible control unit and energy sources.",
"title": ""
}
] |
[
{
"docid": "5fe851a0bd4a152e162f9c991fb74f6f",
"text": "Input-output examples have emerged as a practical and user-friendly specification mechanism for program synthesis in many environments. While example-driven tools have demonstrated tangible impact that has inspired adoption in industry, their underlying semantics are less well-understood: what are \"examples\" and how do they relate to other kinds of specifications? This paper demonstrates that examples can, in general, be interpreted as refinement types. Seen in this light, program synthesis is the task of finding an inhabitant of such a type. This insight provides an immediate semantic interpretation for examples. Moreover, it enables us to exploit decades of research in type theory as well as its correspondence with intuitionistic logic rather than designing ad hoc theoretical frameworks for synthesis from scratch. We put this observation into practice by formalizing synthesis as proof search in a sequent calculus with intersection and union refinements that we prove to be sound with respect to a conventional type system. In addition, we show how to handle negative examples, which arise from user feedback or counterexample-guided loops. This theory serves as the basis for a prototype implementation that extends our core language to support ML-style algebraic data types and structurally inductive functions. Users can also specify synthesis goals using polymorphic refinements and import monomorphic libraries. The prototype serves as a vehicle for empirically evaluating a number of different strategies for resolving the nondeterminism of the sequent calculus---bottom-up theorem-proving, term enumeration with refinement type checking, and combinations of both---the results of which classify, explain, and validate the design choices of existing synthesis systems. It also provides a platform for measuring the practical value of a specification language that combines \"examples\" with the more general expressiveness of refinements.",
"title": ""
},
{
"docid": "7d35f3afeb9a8e1dc6f99e4d241273c7",
"text": "In this paper, we propose Motion Dense Sampling (MDS) for action recognition, which detects very informative interest points from video frames. MDS has three advantages compared to other existing methods. The first advantage is that MDS detects only interest points which belong to action regions of all regions of a video frame. The second one is that it can detect the constant number of points even when the size of action region in an image drastically changes. The Third one is that MDS enables to describe scale invariant features by computing sampling scale for each frame based on the size of action regions. Thus, our method detects much more informative interest points from videos unlike other methods. We also propose Category Clustering and Component Clustering, which generate the very effective codebook for action recognition. Experimental results show a significant improvement over existing methods on YouTube dataset. Our method achieves 87.5 % accuracy for video classification by using only one descriptor.",
"title": ""
},
{
"docid": "89cba76ab33c66a3687481ea56e1e556",
"text": "With sustained growth of software complexity, finding security vulnerabilities in operating systems has become an important necessity. Nowadays, OS are shipped with thousands of binary executables. Unfortunately, methodologies and tools for an OS scale program testing within a limited time budget are still missing.\n In this paper we present an approach that uses lightweight static and dynamic features to predict if a test case is likely to contain a software vulnerability using machine learning techniques. To show the effectiveness of our approach, we set up a large experiment to detect easily exploitable memory corruptions using 1039 Debian programs obtained from its bug tracker, collected 138,308 unique execution traces and statically explored 76,083 different subsequences of function calls. We managed to predict with reasonable accuracy which programs contained dangerous memory corruptions.\n We also developed and implemented VDiscover, a tool that uses state-of-the-art Machine Learning techniques to predict vulnerabilities in test cases. Such tool will be released as open-source to encourage the research of vulnerability discovery at a large scale, together with VDiscovery, a public dataset that collects raw analyzed data.",
"title": ""
},
{
"docid": "bb0b9b679444291bceecd68153f6f480",
"text": "Path planning is one of the most significant and challenging subjects in robot control field. In this paper, a path planning method based on an improved shuffled frog leaping algorithm is proposed. In the proposed approach, a novel updating mechanism based on the median strategy is used to avoid local optimal solution problem in the general shuffled frog leaping algorithm. Furthermore, the fitness function is modified to make the path generated by the shuffled frog leaping algorithm smoother. In each iteration, the globally best frog is obtained and its position is used to lead the movement of the robot. Finally, some simulation experiments are carried out. The experimental results show the feasibility and effectiveness of the proposed algorithm in path planning for mobile robots.",
"title": ""
},
{
"docid": "c2e7425f719dd51eec0d8e180577269e",
"text": "Most important way of communication among humans is language and primary medium used for the said is speech. The speech recognizers make use of a parametric form of a signal to obtain the most important distinguishable features of speech signal for recognition purpose. In this paper, Linear Prediction Cepstral Coefficient (LPCC), Mel Frequency Cepstral Coefficient (MFCC) and Bark frequency Cepstral coefficient (BFCC) feature extraction techniques for recognition of Hindi Isolated, Paired and Hybrid words have been studied and the corresponding recognition rates are compared. Artifical Neural Network is used as back end processor. The experimental results show that the better recognition rate is obtained for MFCC as compared to LPCC and BFCC for all the three types of words.",
"title": ""
},
{
"docid": "c92807c973f51ac56fe6db6c2bb3f405",
"text": "Machine learning relies on the availability of a vast amount of data for training. However, in reality, most data are scattered across different organizations and cannot be easily integrated under many legal and practical constraints. In this paper, we introduce a new technique and framework, known as federated transfer learning (FTL), to improve statistical models under a data federation. The federation allows knowledge to be shared without compromising user privacy, and enables complimentary knowledge to be transferred in the network. As a result, a target-domain party can build more flexible and powerful models by leveraging rich labels from a source-domain party. A secure transfer cross validation approach is also proposed to guard the FTL performance under the federation. The framework requires minimal modifications to the existing model structure and provides the same level of accuracy as the nonprivacy-preserving approach. This framework is very flexible and can be effectively adapted to various secure multi-party machine learning tasks.",
"title": ""
},
{
"docid": "6f6042046ef1c1642bb95bc47f38cdbb",
"text": "Jean-Jacques Rousseau's concepts of self-love (amour propre) and love of self (amour de soi même) are applied to the psychology of terrorism. Self-love is concern with one's image in the eyes of respected others, members of one's group. It denotes one's feeling of personal significance, the sense that one's life has meaning in accordance with the values of one's society. Love of self, in contrast, is individualistic concern with self-preservation, comfort, safety, and the survival of self and loved ones. We suggest that self-love defines a motivational force that when awakened arouses the goal of a significance quest. When a group perceives itself in conflict with dangerous detractors, its ideology may prescribe violence and terrorism against the enemy as a means of significance gain that gratifies self-love concerns. This may involve sacrificing one's self-preservation goals, encapsulated in Rousseau's concept of love of self. The foregoing notions afford the integration of diverse quantitative and qualitative findings on individuals' road to terrorism and back. Understanding the significance quest and the conditions of its constructive fulfillment may be crucial to reversing the current tide of global terrorism.",
"title": ""
},
{
"docid": "e3e9532e873739e8024ba7d55de335c3",
"text": "We present a method for the sparse greedy approximation of Bayesian Gaussian process regression, featuring a novel heuristic for very fast forward selection. Our method is essentially as fast as an equivalent one which selects the “support” patterns at random, yet it can outperform random selection on hard curve fitting tasks. More importantly, it leads to a sufficiently stable approximation of the log marginal likelihood of the training data, which can be optimised to adjust a large number of hyperparameters automatically. We demonstrate the model selection capabilities of the algorithm in a range of experiments. In line with the development of our method, we present a simple view on sparse approximations for GP models and their underlying assumptions and show relations to other methods.",
"title": ""
},
{
"docid": "bc6f9ef52c124675c62ccb8a1269a9b8",
"text": "We explore 3D printing physical controls whose tactile response can be manipulated programmatically through pneumatic actuation. In particular, by manipulating the internal air pressure of various pneumatic elements, we can create mechanisms that require different levels of actuation force and can also change their shape. We introduce and discuss a series of example 3D printed pneumatic controls, which demonstrate the feasibility of our approach. This includes conventional controls, such as buttons, knobs and sliders, but also extends to domains such as toys and deformable interfaces. We describe the challenges that we faced and the methods that we used to overcome some of the limitations of current 3D printing technology. We conclude with example applications and thoughts on future avenues of research.",
"title": ""
},
{
"docid": "e468fd0e6c14fee379cd1825afd018eb",
"text": "Bionic implants for the deaf require wide-dynamicrange low-power microphone preamplifiers with good wide-band rejection of the supply noise. Widely used low-cost implementations of such preamplifiers typically use the buffered voltage output of an electret capacitor with a built-in JFET source follower. We describe a design in which the JFET microphone buffer’s output current, rather than its output voltage, is transduced via a sense-amplifier topology allowing good in-band power-supply rejection. The design employs a low-frequency feedback loop to subtract the dc bias current of the microphone and prevent it from causing saturation. Wide-band power-supply rejection is achieved by integrating a novel filter on all current-source biasing. Our design exhibits 80 dB of dynamic range with less than 5 Vrms of input noise while operating from a 2.8 V supply. The power consumption is 96 W which includes 60 W for the microphone built-in buffer. The in-band power-supply rejection ratio varies from 50 to 90 dB while out-of-band supply attenuation is greater than 60 dB until 25 MHz. Fabrication was done in a 1.5m CMOS process with gain programmability for both microphone and auxiliary channel inputs.",
"title": ""
},
{
"docid": "b796a957545aa046bad14d44c4578700",
"text": "Image annotation datasets are becoming larger and larger, with tens of millions of images and tens of thousands of possible annotations. We propose a strongly performing method that scales to such datasets by simultaneously learning to optimize precision at k of the ranked list of annotations for a given image and learning a low-dimensional joint embedding space for both images and annotations. Our method both outperforms several baseline methods and, in comparison to them, is faster and consumes less memory. We also demonstrate how our method learns an interpretable model, where annotations with alternate spellings or even languages are close in the embedding space. Hence, even when our model does not predict the exact annotation given by a human labeler, it often predicts similar annotations, a fact that we try to quantify by measuring the newly introduced “sibling” precision metric, where our method also obtains excellent results.",
"title": ""
},
{
"docid": "bbedbe2d901f63e3f163ea0f24a2e2d7",
"text": "a r t i c l e i n f o a b s t r a c t The leader trait perspective is perhaps the most venerable intellectual tradition in leadership research. Despite its early prominence in leadership research, it quickly fell out of favor among leadership scholars. Thus, despite recent empirical support for the perspective, conceptual work in the area lags behind other theoretical perspectives. Accordingly, the present review attempts to place the leader trait perspective in the context of supporting intellectual traditions, including evolutionary psychology and behavioral genetics. We present a conceptual model that considers the source of leader traits, mediators and moderators of their effects on leader emergence and leadership effectiveness, and distinguish between perceived and actual leadership effectiveness. We consider both the positive and negative effects of specific \" bright side \" personality traits: the Big Five traits, core self-evaluations, intelligence, and charisma. We also consider the positive and negative effects of \" dark side \" leader traits: Narcissism, hubris, dominance, and Machiavellianism. If one sought to find singular conditions that existed across species, one might find few universals. One universal that does exist, at least those species that have brains and nervous systems, is leadership. From insects to reptiles to mammals, leadership exists as surely as collective activity exists. There is the queen bee, and there is the alpha male. Though the centrality of leadership may vary by species (it seems more important to mammals than, say, to avians and reptiles), it is fair to surmise that whenever there is social activity, a social structure develops, and one (perhaps the) defining characteristic of that structure is the emergence of a leader or leaders. The universality of leadership, however, does not deny the importance of individual differences — indeed the emergence of leadership itself is proof of individual differences. Moreover, even casual observation of animal (including human) collective behavior shows the existence of a leader. Among a herd of 100 cattle or a pride of 20 lions, one is able to detect a leadership structure (especially at times of eating, mating, and attack). One quickly wonders: What has caused this leadership structure to emerge? Why has one animal (the alpha) emerged to lead the collective? And how does this leadership cause this collective to flourish — or founder? Given these questions, it is of no surprise that the earliest conceptions of leadership focused on individual …",
"title": ""
},
{
"docid": "b32286014bb7105e62fba85a9aab9019",
"text": "PURPOSE\nSystemic thrombolysis for the treatment of acute pulmonary embolism (PE) carries an estimated 20% risk of major hemorrhage, including a 3%-5% risk of hemorrhagic stroke. The authors used evidence-based methods to evaluate the safety and effectiveness of modern catheter-directed therapy (CDT) as an alternative treatment for massive PE.\n\n\nMATERIALS AND METHODS\nThe systematic review was initiated by electronic literature searches (MEDLINE, EMBASE) for studies published from January 1990 through September 2008. Inclusion criteria were applied to select patients with acute massive PE treated with modern CDT. Modern techniques were defined as the use of low-profile devices (< or =10 F), mechanical fragmentation and/or aspiration of emboli including rheolytic thrombectomy, and intraclot thrombolytic injection if a local drug was infused. Relevant non-English language articles were translated into English. Paired reviewers assessed study quality and abstracted data. Meta-analysis was performed by using random effects models to calculate pooled estimates for complications and clinical success rates across studies. Clinical success was defined as stabilization of hemodynamics, resolution of hypoxia, and survival to hospital discharge.\n\n\nRESULTS\nFive hundred ninety-four patients from 35 studies (six prospective, 29 retrospective) met the criteria for inclusion. The pooled clinical success rate from CDT was 86.5% (95% confidence interval [CI]: 82.1%, 90.2%). Pooled risks of minor and major procedural complications were 7.9% (95% CI: 5.0%, 11.3%) and 2.4% (95% CI: 1.9%, 4.3%), respectively. Data on the use of systemic thrombolysis before CDT were available in 571 patients; 546 of those patients (95%) were treated with CDT as the first adjunct to heparin without previous intravenous thrombolysis.\n\n\nCONCLUSIONS\nModern CDT is a relatively safe and effective treatment for acute massive PE. At experienced centers, CDT should be considered as a first-line treatment for patients with massive PE.",
"title": ""
},
{
"docid": "85c5746b7ead047f34cbf11c42f0890e",
"text": "Depression is a serious mental health problem affecting a significant segment of American society today, and in particular college students. In a survey by the U.S. Centers for Disease Control (CDC) in 2009, 26.1% of U.S. students nationwide reported feeling so sad or hopeless almost every day for 2 or more weeks in a row that they stopped doing some usual activities. Similar statistics are also reported in mental health studies by the American College Health Association, and by independent surveys. In this article, the author report their findings from a month-long experiment conducted at Missouri University of Science and Technology on studying depressive symptoms among college students who use the Internet. This research was carried out using real campus Internet data collected continuously, unobtrusively, and while preserving privacy.",
"title": ""
},
{
"docid": "abc160fc578bb40935afa7aea93cf6ca",
"text": "This study investigates the effect of leader and follower behavior on employee voice, team task responsibility and team effectiveness. This study distinguishes itself by including both leader and follower behavior as predictors of team effectiveness. In addition, employee voice and team task responsibility are tested as potential mediators of the relationship between task-oriented behaviors (informing, directing, verifying) and team effectiveness as well as the relationship between relation-oriented behaviors (positive feedback, intellectual stimulation, individual consideration) and team effectiveness. This cross-sectional exploratory study includes four methods: 1) inter-reliable coding of leader and follower behavior during staff meetings; 2) surveys of 57 leaders; 3) surveys of643 followers; 4) survey of 56 lean coaches. Regression analyses showed that both leaders and followers display more task-oriented behaviors opposed to relation-oriented behaviors during staff meetings. Contrary to the hypotheses, none of the observed leader behaviors positively influences employee voice, team task responsibility or team effectiveness. However, all three task-oriented follower behaviors indirectly influence team effectiveness. The findings from this research illustrate that follower behaviors has more influence on team effectiveness compared to leader behavior. Practical implications, strengths and limitations of the research are discussed. Moreover, future research directions including the mediating role of culture and psychological safety are proposed as well.",
"title": ""
},
{
"docid": "7267e5082c890dfa56a745d3b28425cc",
"text": "Natural Orifice Translumenal Endoscopic Surgery (NOTES) has recently attracted lots of attention, promising surgical procedures with fewer complications, better cosmesis, lower pains and faster recovery. Several robotic systems were developed aiming to enable abdominal surgeries in a NOTES manner. Although these robotic systems demonstrated the surgical concept, characteristics which could fully enable NOTES procedures remain unclear. This paper presents the development of an endoscopic continuum testbed for finalizing system characteristics of a surgical robot for NOTES procedures, which include i) deployability (the testbed can be deployed in a folded endoscope configuration and then be unfolded into a working configuration), ii) adequate workspace, iii) sufficient distal dexterity (e.g. suturing capability), and iv) desired mechanics properties (e.g. enough load carrying capability). Continuum mechanisms were implemented in the design and a diameter of 12mm of this testbed in its endoscope configuration was achieved. Results of this paper could be used to form design references for future development of NOTES robots.",
"title": ""
},
{
"docid": "6f049f55c1b6f65284c390bd9a2d7511",
"text": "Thanks to their state-of-the-art performance, deep neural networks are increasingly used for object recognition. To achieve these results, they use millions of parameters to be trained. However, when targetting embedded applications the size of these models becomes problematic. As a consequence, their usage on smartphones or other resource limited devices is prohibited. In this paper we introduce a novel compression method for deep neural networks that is performed during the learning phase. It consists in adding an extra regularization term to the cost function of fully-connected layers. We combine this method with Product Quantization (PQ) of the trained weights for higher savings in storage consumption. We evaluate our method on two data sets (MNIST and CIFAR10), on which we achieve significantly larger compression rates than state-of-the-art methods.",
"title": ""
},
{
"docid": "844dcf80b2feba89fced99a0f8cbe9bf",
"text": "Communication could potentially be an effective way for multi-agent cooperation. However, information sharing among all agents or in predefined communication architectures that existing methods adopt can be problematic. When there is a large number of agents, agents cannot differentiate valuable information that helps cooperative decision making from globally shared information. Therefore, communication barely helps, and could even impair the learning of multi-agent cooperation. Predefined communication architectures, on the other hand, restrict communication among agents and thus restrain potential cooperation. To tackle these difficulties, in this paper, we propose an attentional communication model that learns when communication is needed and how to integrate shared information for cooperative decision making. Our model leads to efficient and effective communication for large-scale multi-agent cooperation. Empirically, we show the strength of our model in a variety of cooperative scenarios, where agents are able to develop more coordinated and sophisticated strategies than existing methods.",
"title": ""
},
{
"docid": "d514fdfa92b4aba95922f9b200d71b5a",
"text": "In space-borne applications, reduction of size, weight, and power can be critical. Pursuant to this goal, we present an ultrawideband, tightly coupled dipole array (TCDA) capable of supporting numerous satellite communication bands, simultaneously. Such antennas enable weight reduction by replacing multiple antennas. In addition, it provides spectral efficiency by reusing intermediate frequencies for inter-satellite communication. For ease of fabrication, the array is initially designed for operation across the UHF, L, S, and lower-C bands (0.6-3.6 GHz), with emphasis on dual-linear polarization, and wide-angle scanning. The array achieves a minimum 6:1 bandwidth for VSWR less than 1.8, 2.4, and 3.1 for 0°, 45°, and 60° scans, respectively. The presented design represents the first practical realization of dual polarizations using a TCDA topology. This is accomplished through a dual-offset, split unit cell with minimized inter-feed coupling. Array simulations are verified with measured results of an 8 × 8 prototype, exhibiting very low cross polarization and near-theoretical gain across the band. Further, we present a TCDA design operating across the upper-S, C, X, and Ku bands (3-18 GHz). The array achieves this 6:1 bandwidth for VSWR <; 2 at broadside, and VSWR <; 2.6 at 45°. A discussion on design and fabrication for low-cost arrays operating at these frequencies is included.",
"title": ""
},
{
"docid": "a6e71e4be58c51b580fcf08e9d1a100a",
"text": "This dissertation is concerned with the processing of high velocity text using event processing means. It comprises a scientific approach for combining the area of information filtering and event processing, in order to analyse fast and voluminous streams of text. In order to be able to process text streams within event driven means, an event reference model was developed that allows for the conversion of unstructured or semi-structured text streams into discrete event types on which event processing engines can operate. Additionally, a set of essential reference processes in the domain of information filtering and text stream analysis were described using eventdriven concepts. In a second step, a reference architecture was designed that described essential architectural components required for the design of information filtering and text stream analysis systems in an event-driven manner. Further to this, a set of architectural patterns for building event driven text analysis systems was derived that support the design and implementation of such systems. Subsequently, a prototype was built using the theoretic foundations. This system was initially used to study the effect of sliding window sizes on the properties of dynamic sub-corpora. It could be shown that small sliding window based corpora are similar to larger sliding windows and thus can be used as a resource-saving alternative. Next, a study of several linguistic aspects of text streams was undertaken that showed that event stream summary statistics can provide interesting insights into the characteristics of high velocity text streams. Finally, four essential information filtering and text stream analysis components were studied, viz. filter policies, term weighting, thresholds and query expansion. These were studied using three temporal search profile types and were evaluated using standard performance measures. The goal was to study the efficiency of traditional as well as new algorithms within the given context of high velocity text stream data, in order to provide advise which methods work best. The results of this dissertation are intended to provide software architects and developers with valuable information for the design and implementation of event-driven text stream analysis systems.",
"title": ""
}
] |
scidocsrr
|
fa736f07a173a8133e71779868200e3d
|
Interest cash: an application-based countermeasure against interest flooding for dynamic content in named data networking
|
[
{
"docid": "e253fe7f481dc9fbd14a69e4c7d3bf23",
"text": "Current Internet is reaching the limits of its capabilities due to its function transition from host-to-host communication to content dissemination. Named Data Networking (NDN) - an instantiation of Content-Centric Networking approach, embraces this shift by stressing the content itself, rather than where it locates. NDN tries to provide better security and privacy than current Internet does, and resilience to Distributed Denial of Service (DDoS) is a significant issue. In this paper, we present a specific and concrete scenario of DDoS attack in NDN, where perpetrators make use of NDN's packet forwarding rules to send out Interest packets with spoofed names as attacking packets. Afterwards, we identify the victims of NDN DDoS attacks include both the hosts and routers. But the largest victim is not the hosts, but the routers, more specifically, the Pending Interest Table (PIT) within the router. PIT brings NDN many elegant features, but it suffers from vulnerability. We propose Interest traceback as a counter measure against the studied NDN DDoS attacks, which traces back to the originator of the attacking Interest packets. At last, we assess the harmful consequences brought by these NDN DDoS attacks and evaluate the Interest traceback counter measure. Evaluation results reveal that the Interest traceback method effectively mitigates the NDN DDoS attacks studied in this paper.",
"title": ""
}
] |
[
{
"docid": "8543e4cd67ef3f23efabd0b130bfe9f9",
"text": "A promising way of software reuse is Component-Based Software Development (CBSD). There is an increasing number of OSS products available that can be freely used in product development. However, OSS communities themselves have not yet taken full advantage of the “reuse mechanism”. Many OSS projects duplicate effort and code, even when sharing the same application domain and topic. One successful counter-example is the FFMpeg multimedia project, since several of its components are widely and consistently reused into other OSS projects. This paper documents the history of the libavcodec library of components from the FFMpeg project, which at present is reused in more than 140 OSS projects. Most of the recipients use it as a blackbox component, although a number of OSS projects keep a copy of it in their repositories, and modify it as such. In both cases, we argue that libavcodec is a successful example of reusable OSS library of compo-",
"title": ""
},
{
"docid": "8b5ca0f4b12aa5d07619078d44dbb337",
"text": "Crimeware-as-a-service (CaaS) has become a prominent component of the underground economy. CaaS provides a new dimension to cyber crime by making it more organized, automated, and accessible to criminals with limited technical skills. This paper dissects CaaS and explains the essence of the underground economy that has grown around it. The paper also describes the various crimeware services that are provided in the underground",
"title": ""
},
{
"docid": "bb47e6b493a204a9e0fbe97aa14fec06",
"text": "Intelligent artificial agents need to be able to explain and justify their actions. They must therefore understand the rationales for their own actions. This paper describes a technique for acquiring this understanding, implemented in a multimedia explanation system. The system determines the motivation for a decision by recalling the situation in which the decision was made, and replaying the decision under variants of the original situation. Through experimentation the agent is able to discover what factors led to the decisions, and what alternatives might have been chosen had the situation been slightly different. The agent learns to recognize similar situations where the same decision would be made for the same reasons. This approach is implemented in an artificial fighter pilot that can explain the motivations for its actions, situation assessments,",
"title": ""
},
{
"docid": "df56d2914cdfbc31dff9ecd9a3093379",
"text": "In this paper, square slot (SS) upheld by the substrate integrated waveguide (SIW) cavity is presented. A simple 50 Ω microstrip line is employed to feed this cavity. Then slot matched cavity modes are coupled to the slot and radiated efficiently. The proposed antenna features the following structural advantages, compact size, light weight and easy low cost fabrication. Concerning the electrical performance, it exhibits 15% impedance bandwidth for the reflection coefficient less than -10 dB and the realized gain touches 8.5 dB frontier.",
"title": ""
},
{
"docid": "737231466c50ac647f247b60852026e2",
"text": "The proliferation of wearable devices, e.g., smartwatches and activity trackers, with embedded sensors has already shown its great potential on monitoring and inferring human daily activities. This paper reveals a serious security breach of wearable devices in the context of divulging secret information (i.e., key entries) while people are accessing key-based security systems. Existing methods of obtaining such secret information rely on installations of dedicated hardware (e.g., video camera or fake keypad), or training with labeled data from body sensors, which restrict use cases in practical adversary scenarios. In this work, we show that a wearable device can be exploited to discriminate mm-level distances and directions of the user’s fine-grained hand movements, which enable attackers to reproduce the trajectories of the user’s hand and further to recover the secret key entries. In particular, our system confirms the possibility of using embedded sensors in wearable devices, i.e., accelerometers, gyroscopes, and magnetometers, to derive the moving distance of the user’s hand between consecutive key entries regardless of the pose of the hand. Our Backward PIN-Sequence Inference algorithm exploits the inherent physical constraints between key entries to infer the complete user key entry sequence. Extensive experiments are conducted with over 7,000 key entry traces collected from 20 adults for key-based security systems (i.e., ATM keypads and regular keyboards) through testing on different kinds of wearables. Results demonstrate that such a technique can achieve 80 percent accuracy with only one try and more than 90 percent accuracy with three tries. Moreover, the performance of our system is consistently good even under low sampling rate and when inferring long PIN sequences. To the best of our knowledge, this is the first technique that reveals personal PINs leveraging wearable devices without the need for labeled training data and contextual information.",
"title": ""
},
{
"docid": "05db9a684a537fdf1234e92047618e18",
"text": "Globally the internet is been accessed by enormous people within their restricted domains. When the client and server exchange messages among each other, there is an activity that can be observed in log files. Log files give a detailed description of the activities that occur in a network that shows the IP address, login and logout durations, the user's behavior etc. There are several types of attacks occurring from the internet. Our focus of research in this paper is Denial of Service (DoS) attacks with the help of pattern recognition techniques in data mining. Through which the Denial of Service attack is identified. Denial of service is a very dangerous attack that jeopardizes the IT resources of an organization by overloading with imitation messages or multiple requests from unauthorized users.",
"title": ""
},
{
"docid": "eb96cd38e634ddb298063dbc26163f52",
"text": "A good representation for arbitrarily complicated data should have the capability of semantic generation, clustering and reconstruction. Previous research has already achieved impressive performance on either one. This paper aims at learning a disentangled representation effective for all of them in an unsupervised way. To achieve all the three tasks together, we learn the forward and inverse mapping between data and representation on the basis of a symmetric adversarial process. In theory, we minimize the upper bound of the two conditional entropy loss between the latent variables and the observations together to achieve the cycle consistency. The newly proposed RepGAN is tested on MNIST, fashionMNIST, CelebA, and SVHN datasets to perform unsupervised or semi-supervised classification, generation and reconstruction tasks. The result demonstrates that RepGAN is able to learn a useful and competitive representation. To the author’s knowledge, our work is the first one to achieve both a high unsupervised classification accuracy and low reconstruction error on MNIST.",
"title": ""
},
{
"docid": "c929a8b6ff4d654a488b5e189b2b61dc",
"text": "Human neural progenitors derived from pluripotent stem cells develop into electrophysiologically active neurons at heterogeneous rates, which can confound disease-relevant discoveries in neurology and psychiatry. By combining patch clamping, morphological and transcriptome analysis on single-human neurons in vitro, we defined a continuum of poor to highly functional electrophysiological states of differentiated neurons. The strong correlations between action potentials, synaptic activity, dendritic complexity and gene expression highlight the importance of methods for isolating functionally comparable neurons for in vitro investigations of brain disorders. Although whole-cell electrophysiology is the gold standard for functional evaluation, it often lacks the scalability required for disease modeling studies. Here, we demonstrate a multimodal machine-learning strategy to identify new molecular features that predict the physiological states of single neurons, independently of the time spent in vitro. As further proof of concept, we selected one of the potential neurophysiological biomarkers identified in this study—GDAP1L1—to isolate highly functional live human neurons in vitro.",
"title": ""
},
{
"docid": "6e05c3e76e87317db05c43a1f564724a",
"text": "Data science or \"data-driven research\" is a research approach that uses real-life data to gain insight about the behavior of systems. It enables the analysis of small, simple as well as large and more complex systems in order to assess whether they function according to the intended design and as seen in simulation. Data science approaches have been successfully applied to analyze networked interactions in several research areas such as large-scale social networks, advanced business and healthcare processes. Wireless networks can exhibit unpredictable interactions between algorithms from multiple protocol layers, interactions between multiple devices, and hardware specific influences. These interactions can lead to a difference between real-world functioning and design time functioning. Data science methods can help to detect the actual behavior and possibly help to correct it. Data science is increasingly used in wireless research. To support data-driven research in wireless networks, this paper illustrates the step-by-step methodology that has to be applied to extract knowledge from raw data traces. To this end, the paper (i) clarifies when, why and how to use data science in wireless network research; (ii) provides a generic framework for applying data science in wireless networks; (iii) gives an overview of existing research papers that utilized data science approaches in wireless networks; (iv) illustrates the overall knowledge discovery process through an extensive example in which device types are identified based on their traffic patterns; (v) provides the reader the necessary datasets and scripts to go through the tutorial steps themselves.",
"title": ""
},
{
"docid": "80666930dbabe1cd9d65af762cc4b150",
"text": "Accurate electronic health records are important for clinical care and research as well as ensuring patient safety. It is crucial for misspelled words to be corrected in order to ensure that medical records are interpreted correctly. This paper describes the development of a spelling correction system for medical text. Our spell checker is based on Shannon's noisy channel model, and uses an extensive dictionary compiled from many sources. We also use named entity recognition, so that names are not wrongly corrected as misspellings. We apply our spell checker to three different types of free-text data: clinical notes, allergy entries, and medication orders; and evaluate its performance on both misspelling detection and correction. Our spell checker achieves detection performance of up to 94.4% and correction accuracy of up to 88.2%. We show that high-performance spelling correction is possible on a variety of clinical documents.",
"title": ""
},
{
"docid": "df997cfc15654a0c9886d52c4166f649",
"text": "Network embedding aims to represent each node in a network as a low-dimensional feature vector that summarizes the given node’s (extended) network neighborhood. The nodes’ feature vectors can then be used in various downstream machine learning tasks. Recently, many embedding methods that automatically learn the features of nodes have emerged, such as node2vec and struc2vec, which have been used in tasks such as node classification, link prediction, and node clustering, mainly in the social network domain. There are also other embedding methods that explicitly look at the connections between nodes, i.e., the nodes’ network neighborhoods, such as graphlets. Graphlets have been used in many tasks such as network comparison, link prediction, and network clustering, mainly in the computational biology domain. Even though the two types of embedding methods (node2vec/struct2vec versus graphlets) have a similar goal – to represent nodes as features vectors, no comparisons have been made between them, possibly because they have originated in the different domains. Therefore, in this study, we compare graphlets to node2vec and struc2vec, and we do so in the task of network alignment. In evaluations on synthetic and real-world biological networks, we find that graphlets are both more accurate and faster than node2vec and struc2vec.",
"title": ""
},
{
"docid": "a54f912c14b44fc458ed8de9e19a5e82",
"text": "Musical training has recently gained additional interest in education as increasing neuroscientific research demonstrates its positive effects on brain development. Neuroimaging revealed plastic changes in the brains of adult musicians but it is still unclear to what extent they are the product of intensive music training rather than of other factors, such as preexisting biological markers of musicality. In this review, we synthesize a large body of studies demonstrating that benefits of musical training extend beyond the skills it directly aims to train and last well into adulthood. For example, children who undergo musical training have better verbal memory, second language pronunciation accuracy, reading ability and executive functions. Learning to play an instrument as a child may even predict academic performance and IQ in young adulthood. The degree of observed structural and functional adaptation in the brain correlates with intensity and duration of practice. Importantly, the effects on cognitive development depend on the timing of musical initiation due to sensitive periods during development, as well as on several other modulating variables. Notably, we point to motivation, reward and social context of musical education, which are important yet neglected factors affecting the long-term benefits of musical training. Further, we introduce the notion of rhythmic entrainment and suggest that it may represent a mechanism supporting learning and development of executive functions. It also hones temporal processing and orienting of attention in time that may underlie enhancements observed in reading and verbal memory. We conclude that musical training uniquely engenders near and far transfer effects, preparing a foundation for a range of skills, and thus fostering cognitive development.",
"title": ""
},
{
"docid": "57c7b5048517c81aa70eaa0e75f0e4ad",
"text": "We present a case study of a difficult real-world pattern recognition problem: predicting hard drive failure using attributes monitored internally by individual drives. We compare the performance of support vector machines (SVMs), unsupervised clustering, and non-parametric statistical tests (rank-sum and reverse arrangements). Somewhat surprisingly, the rank-sum method outperformed the other methods, including SVMs. We also show the utility of using non-parametric tests for feature set selection. Keywords— failure prediction, hard drive reliability, ranksum, reverse arrangements, support vector machines,",
"title": ""
},
{
"docid": "f481f0ba70ce16587f7c5639360bc2f9",
"text": "We describe the design and implementation of Walter, a key-value store that supports transactions and replicates data across distant sites. A key feature behind Walter is a new property called Parallel Snapshot Isolation (PSI). PSI allows Walter to replicate data asynchronously, while providing strong guarantees within each site. PSI precludes write-write conflicts, so that developers need not worry about conflict-resolution logic. To prevent write-write conflicts and implement PSI, Walter uses two new and simple techniques: preferred sites and counting sets. We use Walter to build a social networking application and port a Twitter-like application.",
"title": ""
},
{
"docid": "561320dd717f1a444735dfa322dfbd31",
"text": "IEEE 802.11 based WLAN systems have gained interest to be used in the military and public authority environments, where the radio conditions can be harsh due to intentional jamming. The radio environment can be difficult also in commercial and civilian deployments since the unlicensed frequency bands are crowded. To study these problems, we built a test bed with a controlled signal path to measure the effects of different interfering signals to WLAN communications. We use continuous wideband noise jamming as the point of comparison, and focus on studying the effect of pulsed jamming and frequency sweep jamming. In addition, we consider also medium access control (MAC) interference. Based on the results, WLAN systems do not seem to be sensitive to the tested short noise jamming pulses. Under longer pulses, the effects are seen, and long data frames are more vulnerable to jamming than short ones. In fact, even a small amount of long frames in a data stream can ruin the performance of the whole link. Under frequency sweep jamming, slow sweeps with narrowband jamming signals can be quite harmful to WLAN communications. The results of MAC jamming show significant variation in performance between the different devices: The clear channel assessment (CCA) mechanism of some devices can be jammed very easily by using WLAN-like jamming signals. As a side product, the study also revealed some countermeasures against jamming.",
"title": ""
},
{
"docid": "85719d4bc86c7c8bbe5799a716d6533b",
"text": "We propose Sparse Neural Network architectures that are based on random or structured bipartite graph topologies. Sparse architectures provide compression of the models learned and speed-ups of computations, they can also surpass their unstructured or fully connected counterparts. As we show, even more compact topologies of the so-called SNN (Sparse Neural Network) can be achieved with the use of structured graphs of connections between consecutive layers of neurons. In this paper, we investigate how the accuracy and training speed of the models depend on the topology and sparsity of the neural network. Previous approaches using sparcity are all based on fully connected neural network models and create sparcity during training phase, instead we explicitly define a sparse architectures of connections before the training. Building compact neural network models is coherent with empirical observations showing that there is much redundancy in learned neural network models. We show experimentally that the accuracy of the models learned with neural networks depends on ”expander-like” properties of the underlying topologies such as the spectral gap and algebraic connectivity rather than the density of the graphs of connections. 1 ar X iv :1 70 6. 05 68 3v 1 [ cs .L G ] 1 8 Ju n 20 17",
"title": ""
},
{
"docid": "fd4f15cea3b3690508a254e4d052dccd",
"text": "Sentence auto-completion is an important feature that saves users many keystrokes in typing the entire sentence by providing suggestions as they type. Despite its value, the existing sentence auto-completion methods, such as query completion models, can hardly be applied to solving the object completion problem in sentences with the form of (subject, verb, object), due to the complex natural language description and the data deficiency problem. Towards this goal, we treat an SVO sentence as a three-element triple (subject, sentence pattern, object), and cast the sentence object completion problem as an element inference problem. These elements in all triples are encoded into a unified low-dimensional embedding space by our proposed TRANSFER model, which leverages the external knowledge base to strengthen the representation learning performance. With such representations, we can provide reliable candidates for the desired missing element by a linear model. Extensive experiments on a real-world dataset have well-validated our model. Meanwhile, we have successfully applied our proposed model to factoid question answering systems for answer candidate selection, which further demonstrates the applicability of the TRANSFER model.",
"title": ""
},
{
"docid": "49445cfa92b95045d23a54eca9f9a592",
"text": "---------------------------------------------------------------------***--------------------------------------------------------------------Abstract In this competitive world, business is becoming highly saturated. Especially, the field of telecommunication faces complex challenges due to a number of vibrant competitive service providers. Therefore, it has become very difficult for them to retain existing customers. Since the cost of acquiring new customers is much higher than the cost of retaining the existing customers, it is the time for the telecom industries to take necessary steps to retain the customers to stabilize their market value. In the past decade, several data mining techniques have been proposed in the literature for predicting the churners using heterogeneous customer records. This paper reviews the different categories of customer data available in open datasets, predictive models and performance metrics used in the literature for churn prediction in telecom industry.",
"title": ""
},
{
"docid": "bb2b3944f72c0d1a530f971ddf6dc6fb",
"text": "UNLABELLED\nAny suture material, absorbable or nonabsorbable, elicits a kind of inflammatory reaction within the tissue. Nonabsorbable black silk suture and absorbable polyglycolic acid suture were compared clinically and histologically on various parameters.\n\n\nMATERIALS AND METHODS\nThis study consisted of 50 patients requiring minor surgical procedure, who were referred to the Department of Oral and Maxillofacial Surgery. Patients were selected randomly and sutures were placed in the oral cavity 7 days preoperatively. Polyglycolic acid was placed on one side and black silk suture material on the other. Seven days later, prior to surgical procedure the sutures will be assessed. After the surgical procedure the sutures will be placed postoperatively in the same way for 7 days, after which the sutures will be assessed clinically and histologically.\n\n\nRESULTS\nThe results of this study showed that all the sutures were retained in case of polyglycolic acid suture whereas four cases were not retained in case of black silk suture. As far as polyglycolic acid suture is concerned 25 cases were mild, 18 cases moderate and seven cases were severe. Black silk showed 20 mild cases, 21 moderate cases and six severe cases. The histological results showed that 33 cases showed mild, 14 cases moderate and three cases severe in case of polyglycolic acid suture. Whereas in case of black silk suture 41 cases were mild. Seven cases were moderate and two cases were severe. Black silk showed milder response than polyglycolic acid suture histologically.\n\n\nCONCLUSION\nThe polyglycolic acid suture was more superior because in all 50 patients the suture was retained. It had less tissue reaction, better handling characteristics and knotting capacity.",
"title": ""
}
] |
scidocsrr
|
d603cce8a4de260416da2690c9c53227
|
Filter Bank Common Spatial Pattern (FBCSP) algorithm using online adaptive and semi-supervised learning
|
[
{
"docid": "867d6a1aa9699ba7178695c45a10d23e",
"text": "A study of different on-line adaptive classifiers, using various feature types is presented. Motor imagery brain computer interface (BCI) experiments were carried out with 18 naive able-bodied subjects. Experiments were done with three two-class, cue-based, electroencephalogram (EEG)-based systems. Two continuously adaptive classifiers were tested: adaptive quadratic and linear discriminant analysis. Three feature types were analyzed, adaptive autoregressive parameters, logarithmic band power estimates and the concatenation of both. Results show that all systems are stable and that the concatenation of features with continuously adaptive linear discriminant analysis classifier is the best choice of all. Also, a comparison of the latter with a discontinuously updated linear discriminant analysis, carried out in on-line experiments with six subjects, showed that on-line adaptation performed significantly better than a discontinuous update. Finally a static subject-specific baseline was also provided and used to compare performance measurements of both types of adaptation",
"title": ""
},
{
"docid": "1b3b2b8872d3b846120502a7a40e03d0",
"text": "A viable fully on-line adaptive brain computer interface (BCI) is introduced. On-line experiments with nine naive and able-bodied subjects were carried out using a continuously adaptive BCI system. The data were analyzed and the viability of the system was studied. The BCI was based on motor imagery, the feature extraction was performed with an adaptive autoregressive model and the classifier used was an adaptive quadratic discriminant analysis. The classifier was on-line updated by an adaptive estimation of the information matrix (ADIM). The system was also able to provide continuous feedback to the subject. The success of the feedback was studied analyzing the error rate and mutual information of each session and this analysis showed a clear improvement of the subject's control of the BCI from session to session.",
"title": ""
}
] |
[
{
"docid": "c47c1e991cd090c7e92ae61419ca823b",
"text": "In recent years many tone mapping operators (TMOs) have been presented in order to display high dynamic range images (HDRI) on typical display devices. TMOs compress the luminance range while trying to maintain contrast. The inverse of tone mapping, inverse tone mapping, expands a low dynamic range image (LDRI) into an HDRI. HDRIs contain a broader range of physical values that can be perceived by the human visual system. We propose a new framework that approximates a solution to this problem. Our framework uses importance sampling of light sources to find the areas considered to be of high luminance and subsequently applies density estimation to generate an expand map in order to extend the range in the high luminance areas using an inverse tone mapping operator. The majority of today’s media is stored in the low dynamic range. Inverse tone mapping operators (iTMOs) could thus potentially revive all of this content for use in high dynamic range display and image based lighting (IBL). Moreover, we show another application that benefits quick capture of HDRIs for use in IBL.",
"title": ""
},
{
"docid": "5314dc130e963288d181ad6d6d0e6434",
"text": "Compressive sensing (CS) is an emerging field that provides a framework for image recovery using sub-Nyquist sampling rates. The CS theory shows that a signal can be reconstructed from a small set of random projections, provided that the signal is sparse in some basis, e.g., wavelets. In this paper, we describe a method to directly recover background subtracted images using CS and discuss its applications in some communication constrained multi-camera computer vision problems. We show how to apply the CS theory to recover object silhouettes (binary background subtracted images) when the objects of interest occupy a small portion of the camera view, i.e., when they are sparse in the spatial domain. We cast the background subtraction as a sparse approximation problem and provide different solutions based on convex optimization and total variation. In our method, as opposed to learning the background, we learn and adapt a low dimensional compressed representation of it, which is sufficient to determine spatial innovations; object silhouettes are then estimated directly using the compressive samples without any auxiliary image reconstruction. We also discuss simultaneous appearance recovery of the objects using compressive measurements. In this case, we show that it may be necessary to reconstruct one auxiliary image. To demonstrate the performance of the proposed algorithm, we provide results on data captured using a compressive single-pixel camera. We also illustrate that our approach is suitable for image coding in communication constrained problems by using data captured by multiple conventional cameras to provide 2D tracking and 3D shape reconstruction results with compressive measurements.",
"title": ""
},
{
"docid": "fd22f81af03d9dbcd746ebdfed5277c6",
"text": "Numerous NLP applications rely on search-engine queries, both to extract information from and to compute statistics over the Web corpus. But search engines often limit the number of available queries. As a result, query-intensive NLP applications such as Information Extraction (IE) distribute their query load over several days, making IE a slow, offline process. This paper introduces a novel architecture for IE that obviates queries to commercial search engines. The architecture is embodied in a system called KNOWITNOW that performs high-precision IE in minutes instead of days. We compare KNOWITNOW experimentally with the previouslypublished KNOWITALL system, and quantify the tradeoff between recall and speed. KNOWITNOW’s extraction rate is two to three orders of magnitude higher than KNOWITALL’s. 1 Background and Motivation Numerous modern NLP applications use the Web as their corpus and rely on queries to commercial search engines to support their computation (Turney, 2001; Etzioni et al., 2005; Brill et al., 2001). Search engines are extremely helpful for several linguistic tasks, such as computing usage statistics or finding a subset of web documents to analyze in depth; however, these engines were not designed as building blocks for NLP applications. As a result, the applications are forced to issue literally millions of queries to search engines, which limits the speed, scope, and scalability of the applications. Further, the applications must often then fetch some web documents, which at scale can be very time-consuming. In response to heavy programmatic search engine use, Google has created the “Google API” to shunt programmatic queries away from Google.com and has placed hard quotas on the number of daily queries a program can issue to the API. Other search engines have also introduced mechanisms to limit programmatic queries, forcing applications to introduce “courtesy waits” between queries and to limit the number of queries they issue. To understand these efficiency problems in more detail, consider the KNOWITALL information extraction system (Etzioni et al., 2005). KNOWITALL has a generateand-test architecture that extracts information in two stages. First, KNOWITALL utilizes a small set of domainindependent extraction patterns to generate candidate facts (cf. (Hearst, 1992)). For example, the generic pattern “NP1 such as NPList2” indicates that the head of each simple noun phrase (NP) in NPList2 is a member of the class named in NP1. By instantiating the pattern for class City, KNOWITALL extracts three candidate cities from the sentence: “We provide tours to cities such as Paris, London, and Berlin.” Note that it must also fetch each document that contains a potential candidate. Next, extending the PMI-IR algorithm (Turney, 2001), KNOWITALL automatically tests the plausibility of the candidate facts it extracts using pointwise mutual information (PMI) statistics computed from search-engine hit counts. For example, to assess the likelihood that “Yakima” is a city, KNOWITALL will compute the PMI between Yakima and a set of k discriminator phrases that tend to have high mutual information with city names (e.g., the simple phrase “city”). Thus, KNOWITALL requires at least k search-engine queries for every candidate extraction it assesses. Due to KNOWITALL’s dependence on search-engine queries, large-scale experiments utilizing KNOWITALL take days and even weeks to complete, which makes research using KNOWITALL slow and cumbersome. 
Private access to Google-scale infrastructure would provide sufficient access to search queries, but at prohibitive cost, and the problem of fetching documents (even if from a cached copy) would remain (as we discuss in Section 2.1). Is there a feasible alternative Web-based IE system? If so, what size Web index and how many machines are required to achieve reasonable levels of precision/recall? What would the architecture of this IE system look like, and how fast would it run? To address these questions, this paper introduces a novel architecture for web information extraction. It consists of two components that supplant the generateand-test mechanisms in KNOWITALL. To generate extractions rapidly we utilize our own specialized search engine, called the Bindings Engine (or BE), which efficiently returns bindings in response to variabilized queries. For example, in response to the query “Cities such as ProperNoun(Head(〈NounPhrase〉))”, BE will return a list of proper nouns likely to be city names. To assess these extractions, we use URNS, a combinatorial model, which estimates the probability that each extraction is correct without using any additional search engine queries.1 For further efficiency, we introduce an approximation to URNS, based on frequency of extractions’ occurrence in the output of BE, and show that it achieves comparable precision/recall to URNS. Our contributions are as follows: 1. We present a novel architecture for Information Extraction (IE), embodied in the KNOWITNOW system, which does not depend on Web search-engine queries. 2. We demonstrate experimentally that KNOWITNOW is the first system able to extract tens of thousands of facts from the Web in minutes instead of days. 3. We show that KNOWITNOW’s extraction rate is two to three orders of magnitude greater than KNOWITALL’s, but this increased efficiency comes at the cost of reduced recall. We quantify this tradeoff for KNOWITNOW’s 60,000,000 page index and extrapolate how the tradeoff would change with larger indices. Our recent work has described the BE search engine in detail (Cafarella and Etzioni, 2005), and also analyzed the URNS model’s ability to compute accurate probability estimates for extractions (Downey et al., 2005). However, this is the first paper to investigate the composition of these components to create a fast IE system, and to compare it experimentally to KNOWITALL in terms of time, In contrast, PMI-IR, which is built into KNOWITALL, requires multiple search engine queries to assess each potential extraction. recall, precision, and extraction rate. The frequencybased approximation to URNS and the demonstration of its success are also new. The remainder of the paper is organized as follows. Section 2 provides an overview of BE’s design. Section 3 describes the URNS model and introduces an efficient approximation to URNS that achieves similar precision/recall. Section 4 presents experimental results. We conclude with related and future work in Sections 5 and 6. 2 The Bindings Engine This section explains how relying on standard search engines leads to a bottleneck for NLP applications, and provides a brief overview of the Bindings Engine (BE)—our solution to this problem. A comprehensive description of BE appears in (Cafarella and Etzioni, 2005). Standard search engines are computationally expensive for IE and other NLP tasks. IE systems issue multiple queries, downloading all pages that potentially match an extraction rule, and performing expensive processing on each page. 
For example, such systems operate roughly as follows on the query (“cities such as 〈NounPhrase〉”): 1. Perform a traditional search engine query to find all URLs containing the non-variable terms (e.g., “cities such as”) 2. For each such URL: (a) obtain the document contents, (b) find the searched-for terms (“cities such as”) in the document text, (c) run the noun phrase recognizer to determine whether text following “cities such as” satisfies the linguistic type requirement, (d) and if so, return the string We can divide the algorithm into two stages: obtaining the list of URLs from a search engine, and then processing them to find the 〈NounPhrase〉 bindings. Each stage poses its own scalability and speed challenges. The first stage makes a query to a commercial search engine; while the number of available queries may be limited, a single one executes relatively quickly. The second stage fetches a large number of documents, each fetch likely resulting in a random disk seek; this stage executes slowly. Naturally, this disk access is slow regardless of whether it happens on a locally-cached copy or on a remote document server. The observation that the second stage is slow, even if it is executed locally, is important because it shows that merely operating a “private” search engine does not solve the problem (see Section 2.1). The Bindings Engine supports queries containing typed variables (such as NounPhrase) and string-processing functions (such as “head(X)” or “ProperNoun(X)”) as well as standard query terms. BE processes a variable by returning every possible string in the corpus that has a matching type, and that can be substituted for the variable and still satisfy the user’s query. If there are multiple variables in a query, then all of them must simultaneously have valid substitutions. (So, for example, the query “<NounPhrase> is located in <NounPhrase>” only returns strings when noun phrases are found on both sides of “is located in”.) We call a string that meets these requirements a binding for the variable in question. These queries, and the bindings they elicit, can usefully serve as part of an information extraction system or other common NLP tasks (such as gathering usage statistics). Figure 1 illustrates some of the queries that BE can handle. president Bush <Verb> cities such as ProperNoun(Head(<NounPhrase>)) <NounPhrase> is the CEO of <NounPhrase> Figure 1: Examples of queries that can be handled by BE. Queries that include typed variables and stringprocessing functions allow NLP tasks to be done efficiently without downloading the original document during query processing. BE’s novel neighborhood index enables it to process these queries with O(k) random disk seeks and O(k) serial disk reads, where k is the number of non-variable terms in its query. As a result, BE can yield orders of magnitude speedup as shown in the asymptotic analysis later in this section. The neighborhood index is an augme",
"title": ""
},
{
"docid": "1b9778fd4238c4d562b01b875d2f72de",
"text": "In this paper a stain sensor to measure large strain (80%) in textiles is presented. It consists of a mixture of 50wt-% thermoplastic elastomer (TPE) and 50wt-% carbon black particles and is fiber-shaped with a diameter of 0.315mm. The attachment of the sensor to the textile is realized using a silicone film. This sensor configuration was characterized using a strain tester and measuring the resistance (extension-retraction cycles): It showed a linear resistance response to strain, a small hysteresis, no ageing effects and a small dependance on the strain velocity. The total mean error caused by all these effects was +/-5.5% in strain. Washing several times in a conventional washing machine did not influence the sensor properties. The paper finishes by showing an example application where 21 strain sensors were integrated into a catsuit. With this garment, 27 upper body postures could be recognized with an accuracy of 97%.",
"title": ""
},
{
"docid": "265884122a08918e6d271b4cea3a455d",
"text": "This study exploits the CMOS-MEMS technology to demonstrate a condenser microphone without back-plate. The reference sensing electrodes are fixed to the substrate, and thus no back-plate is required. To reduce the unwanted deformations resulted from the thin-film residual-stresses and temperature variation for the suspended CMOS-MEMS structures, the suspended acoustic diaphragm and sensing electrodes are respectively formed by the pure-dielectric and symmetric metal-dielectric layers. The design was implemented using TSMC 0.18μm 1P6M standard CMOS process, and the in-house post-CMOS releasing. Typical microphone with acoustic-diaphragm of 300μm-diameter and sensing-electrode of 50μm-long is fabricated and tested. Measurements indicate the sensitivity is −64dBV/Pa at 1kHz under 13.5V bias-voltage. The design enables the CMOS-MEMS microphone having good temperature stability between 30∼90°C.",
"title": ""
},
{
"docid": "d7e2654767d1178871f3f787f7616a94",
"text": "We propose a nonparametric, probabilistic model for the automatic segmentation of medical images, given a training set of images and corresponding label maps. The resulting inference algorithms rely on pairwise registrations between the test image and individual training images. The training labels are then transferred to the test image and fused to compute the final segmentation of the test subject. Such label fusion methods have been shown to yield accurate segmentation, since the use of multiple registrations captures greater inter-subject anatomical variability and improves robustness against occasional registration failures. To the best of our knowledge, this manuscript presents the first comprehensive probabilistic framework that rigorously motivates label fusion as a segmentation approach. The proposed framework allows us to compare different label fusion algorithms theoretically and practically. In particular, recent label fusion or multiatlas segmentation algorithms are interpreted as special cases of our framework. We conduct two sets of experiments to validate the proposed methods. In the first set of experiments, we use 39 brain MRI scans - with manually segmented white matter, cerebral cortex, ventricles and subcortical structures - to compare different label fusion algorithms and the widely-used FreeSurfer whole-brain segmentation tool. Our results indicate that the proposed framework yields more accurate segmentation than FreeSurfer and previous label fusion algorithms. In a second experiment, we use brain MRI scans of 282 subjects to demonstrate that the proposed segmentation tool is sufficiently sensitive to robustly detect hippocampal volume changes in a study of aging and Alzheimer's Disease.",
"title": ""
},
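As an illustration of the label-fusion idea described in the passage above, the sketch below implements a simple weighted-voting fusion of labels that have already been propagated from registered atlases to the test image. This is not the paper's probabilistic framework; the function name, the weighting scheme and the numpy-based interface are assumptions made purely for illustration.

```python
import numpy as np

def fuse_labels(propagated_labels, weights=None):
    """Weighted majority voting over atlas labels already warped to the test image.

    propagated_labels: list of integer label arrays of identical shape, one per atlas.
    weights: optional per-atlas weights (e.g. registration similarity); uniform if None.
    """
    labels = np.stack(propagated_labels)                  # (n_atlases, ...)
    if weights is None:
        weights = np.ones(len(propagated_labels))
    weights = np.asarray(weights, dtype=float)
    classes = np.unique(labels)
    votes = np.zeros((len(classes),) + labels.shape[1:])
    for k, c in enumerate(classes):
        # accumulate the weighted votes each atlas casts for class c at every voxel
        votes[k] = np.tensordot(weights, (labels == c).astype(float), axes=1)
    return classes[np.argmax(votes, axis=0)]              # fused segmentation

# Toy example with three "atlases" labelling four voxels
a1 = np.array([0, 1, 1, 2])
a2 = np.array([0, 1, 2, 2])
a3 = np.array([0, 0, 1, 2])
print(fuse_labels([a1, a2, a3]))   # -> [0 1 1 2]
```

With uniform weights this reduces to plain majority voting, one of the simple fusion baselines that the framework described above generalizes.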
{
"docid": "2590725b2b99a6acd2bc8b9f81ad46ee",
"text": "The Internet of Things (IoT) provides the ability for humans and computers to learn and interact from billions of things that include sensors, actuators, services, and other Internet-connected objects. The realization of IoT systems will enable seamless integration of the cyber world with our physical world and will fundamentally change and empower human interaction with the world. A key technology in the realization of IoT systems is middleware, which is usually described as a software system designed to be the intermediary between IoT devices and applications. In this paper, we first motivate the need for an IoT middleware via an IoT application designed for real-time prediction of blood alcohol content using smartwatch sensor data. This is then followed by a survey on the capabilities of the existing IoT middleware. We further conduct a thorough analysis of the challenges and the enabling technologies in developing an IoT middleware that embraces the heterogeneity of IoT devices and also supports the essential ingredients of composition, adaptability, and security aspects of an IoT system.",
"title": ""
},
{
"docid": "764f05288ff0a0bbf77f264fcefb07eb",
"text": "Recent advances in energy harvesting have been intensified due to urgent needs of portable, wireless electronics with extensive life span. The idea of energy harvesting is applicable to sensors that are placed and operated on some entities for a long time, or embedded into structures or human bodies, in which it is troublesome or detrimental to replace the sensor module batteries. Such sensors are commonly called “self-powered sensors.” The energy harvester devices are capable of capturing environmental energy and supplanting the battery in a standalone module, or working along with the battery to extend substantially its life. Vibration is considered one of the most high power and efficient among other ambient energy sources, such as solar energy and temperature difference. Piezoelectric and electromagnetic devices are mostly used to convert vibration to ac electric power. For vibratory harvesting, a delicately designed power conditioning circuit is required to store as much as possible of the device-output power into a battery. The design for this power conditioning needs to be consistent with the electric characteristics of the device and battery to achieve maximum power transfer and efficiency. This study offers an overview on various power conditioning electronic circuits designed for vibratory harvester devices and their applications to self-powered sensors. Comparative comments are provided in terms of circuit topology differences, conversion efficiencies and applicability to a sensor module.",
"title": ""
},
{
"docid": "3076b9f747b1851f5ead6ca46e41970a",
"text": "This paper applies dimensional synthesis to explore the geometric design of dexterous three-fingered robotic hands for maximizing precision manipulation workspace, in which the hand stably moves an object with respect to the palm of the hand, with contacts only on the fingertips. We focus primarily on the tripod grasp, which is the most commonly used grasp for precision manipulation. We systematically explore the space of design parameters, with two main objectives: maximize the workspace of a fully actuated hand and explore how under-actuation modifies it. We use a mathematical framework that models the hand-plus-object system and examine how the workspace varies with changes in nine hand and object parameters such as link length and finger arrangement on the palm. Results show that to achieve the largest workspaces the palm radius should be approximately half of a finger length larger than the target object radius, that the distal link of the two-link fingers should be around 1–1.2 times the length of the proximal link, and that fingers should be arranged symmetrically about the palm with object contacts also symmetric. Furthermore, a proper parameter design for an under-actuated hand can achieve up to 50% of the workspace of a fully actuated hand. When compared to the system parameters of existing popular hand designs, larger palms and longer distal links are needed to maximize the manipulation workspace of the studied design.",
"title": ""
},
{
"docid": "bd7f4a27628506eb707918c990704405",
"text": "A multi database model of distributed information retrieval is presented in which people are assumed to have access to many searchable text databases In such an environment full text information retrieval consists of discovering database contents ranking databases by their expected ability to satisfy the query searching a small number of databases and merging results returned by di erent databases This paper presents algorithms for each task It also discusses how to reorganize conventional test collections into multi database testbeds and evaluation methodologies for multi database experiments A broad and diverse group of experimental results is presented to demonstrate that the algorithms are e ective e cient robust and scalable",
"title": ""
},
{
"docid": "f1325dd1350acf612dc1817db693a3d6",
"text": "Software for the measurement of genetic diversity (SMOGD) is a web-based application for the calculation of the recently proposed genetic diversity indices G'(ST) and D(est) . SMOGD includes bootstrapping functionality for estimating the variance, standard error and confidence intervals of estimated parameters, and SMOGD also generates genetic distance matrices from pairwise comparisons between populations. SMOGD accepts standard, multilocus Genepop and Arlequin formatted input files and produces HTML and tab-delimited output. This allows easy data submission, quick visualization, and rapid import of results into spreadsheet or database programs.",
"title": ""
},
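For context on the indices mentioned above, the snippet below computes a naive, per-locus version of Jost's D_est from population allele frequencies. It omits the bias-corrected estimators and the bootstrapping that SMOGD actually implements; the function name and array layout are assumptions for illustration only.

```python
import numpy as np

def jost_d_naive(pop_allele_freqs):
    """Naive per-locus D_est from an (n_pops, n_alleles) frequency matrix (rows sum to 1)."""
    p = np.asarray(pop_allele_freqs, dtype=float)
    n = p.shape[0]                                  # number of subpopulations
    h_s = np.mean(1.0 - np.sum(p ** 2, axis=1))     # mean within-population heterozygosity
    p_bar = p.mean(axis=0)                          # pooled allele frequencies
    h_t = 1.0 - np.sum(p_bar ** 2)                  # total heterozygosity
    return ((h_t - h_s) / (1.0 - h_s)) * (n / (n - 1.0))

# Example: two populations, three alleles
print(jost_d_naive([[0.7, 0.2, 0.1],
                    [0.1, 0.2, 0.7]]))
```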
{
"docid": "a6e6cf1473adb05f33b55cb57d6ed6d3",
"text": "In machine learning, data augmentation is the process of creating synthetic examples in order to augment a dataset used to learn a model. One motivation for data augmentation is to reduce the variance of a classifier, thereby reducing error. In this paper, we propose new data augmentation techniques specifically designed for time series classification, where the space in which they are embedded is induced by Dynamic Time Warping (DTW). The main idea of our approach is to average a set of time series and use the average time series as a new synthetic example. The proposed methods rely on an extension of DTW Barycentric Averaging (DBA), the averaging technique that is specifically developed for DTW. In this paper, we extend DBA to be able to calculate a weighted average of time series under DTW. In this case, instead of each time series contributing equally to the final average, some can contribute more than others. This extension allows us to generate an infinite number of new examples from any set of given time series. To this end, we propose three methods that choose the weights associated to the time series of the dataset. We carry out experiments on the 85 datasets of the UCR archive and demonstrate that our method is particularly useful when the number of available examples is limited (e.g. 2 to 6 examples per class) using a 1-NN DTW classifier. Furthermore, we show that augmenting full datasets is beneficial in most cases, as we observed an increase of accuracy on 56 datasets, no effect on 7 and a slight decrease on only 22.",
"title": ""
},
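To make the averaging idea above concrete, here is a minimal sketch of one refinement step of weighted averaging under DTW: each series is aligned to the current reference by standard dynamic programming, and every reference point is updated as the weighted mean of the sample points aligned to it. This is only a simplified illustration of the general mechanism, not the authors' DBA extension or their weight-selection methods; all names and the toy data are assumptions.

```python
import numpy as np

def dtw_path(a, b):
    """Optimal DTW alignment path between two 1-D sequences (plain dynamic programming)."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = (a[i - 1] - b[j - 1]) ** 2
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = int(np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]]))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

def weighted_average_step(ref, series, weights):
    """One refinement step: align each series to ref, then update every reference
    point as the weighted mean of all sample points aligned to it."""
    sums = np.zeros(len(ref))
    wsum = np.zeros(len(ref))
    for s, w in zip(series, weights):
        for i, j in dtw_path(ref, s):
            sums[i] += w * s[j]
            wsum[i] += w
    return sums / np.maximum(wsum, 1e-12)

series = [np.sin(np.linspace(0, 3, 40) + p) for p in (0.0, 0.2, 0.4)]
avg = weighted_average_step(series[0].copy(), series, weights=[0.5, 0.3, 0.2])
print(avg.shape)   # (40,)
```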
{
"docid": "42a412b11300ec8d7721c1f532dadfb9",
"text": " Most data-driven dependency parsing approaches assume that sentence structure is represented as trees. Although trees have several desirable properties from both computational and linguistic perspectives, the structure of linguistic phenomena that goes beyond shallow syntax often cannot be fully captured by tree representations. We present a parsing approach that is nearly as simple as current data-driven transition-based dependency parsing frameworks, but outputs directed acyclic graphs (DAGs). We demonstrate the benefits of DAG parsing in two experiments where its advantages over dependency tree parsing can be clearly observed: predicate-argument analysis of English and syntactic analysis of Danish with a representation that includes long-distance dependencies and anaphoric reference links.",
"title": ""
},
{
"docid": "d15e7e655e7afc86e30e977516de7720",
"text": "We propose a new learning-based method for estimating 2D human pose from a single image, using Dual-Source Deep Convolutional Neural Networks (DS-CNN). Recently, many methods have been developed to estimate human pose by using pose priors that are estimated from physiologically inspired graphical models or learned from a holistic perspective. In this paper, we propose to integrate both the local (body) part appearance and the holistic view of each local part for more accurate human pose estimation. Specifically, the proposed DS-CNN takes a set of image patches (category-independent object proposals for training and multi-scale sliding windows for testing) as the input and then learns the appearance of each local part by considering their holistic views in the full body. Using DS-CNN, we achieve both joint detection, which determines whether an image patch contains a body joint, and joint localization, which finds the exact location of the joint in the image patch. Finally, we develop an algorithm to combine these joint detection/localization results from all the image patches for estimating the human pose. The experimental results show the effectiveness of the proposed method by comparing to the state-of-the-art human-pose estimation methods based on pose priors that are estimated from physiologically inspired graphical models or learned from a holistic perspective.",
"title": ""
},
{
"docid": "fab36d134562d6c3a768841ff7e675b7",
"text": "A submicron of Ni/Sn transient liquid phase bonding at low temperature was investigated to surmount nowadays fine-pitch Cu/Sn process challenge. After bonding process, only uniform and high-temperature stable Ni3Sn4 intermetallic compound was existed. In addition, the advantages of this scheme showed excellent electrical and reliability performance and mechanical strength.",
"title": ""
},
{
"docid": "eb0a907ad08990b0fe5e2374079cf395",
"text": "We examine whether tolerance for failure spurs corporate innovation based on a sample of venture capital (VC) backed IPO firms. We develop a novel measure of VC investors’ failure tolerance by examining their tendency to continue investing in a venture conditional on the venture not meeting milestones. We find that IPO firms backed by more failure-tolerant VC investors are significantly more innovative. A rich set of empirical tests shows that this result is not driven by the endogenous matching between failure-tolerant VCs and startups with high exante innovation potentials. Further, we find that the marginal impact of VC failure tolerance on startup innovation varies significantly in the cross section. Being financed by a failure-tolerant VC is much more important for ventures that are subject to high failure risk. Finally, we examine the determinants of the cross-sectional heterogeneity in VC failure tolerance. We find that both capital constraints and career concerns can negatively distort VC failure tolerance. We also show that younger and less experienced VCs are more exposed to these distortions, making them less failure tolerant than more established VCs.",
"title": ""
},
{
"docid": "516ef94fad7f7e5801bf1ef637ffb136",
"text": "With parallelizable attention networks, the neural Transformer is very fast to train. However, due to the auto-regressive architecture and self-attention in the decoder, the decoding procedure becomes slow. To alleviate this issue, we propose an average attention network as an alternative to the self-attention network in the decoder of the neural Transformer. The average attention network consists of two layers, with an average layer that models dependencies on previous positions and a gating layer that is stacked over the average layer to enhance the expressiveness of the proposed attention network. We apply this network on the decoder part of the neural Transformer to replace the original target-side self-attention model. With masking tricks and dynamic programming, our model enables the neural Transformer to decode sentences over four times faster than its original version with almost no loss in training time and translation performance. We conduct a series of experiments on WMT17 translation tasks, where on 6 different language pairs, we obtain robust and consistent speed-ups in decoding.1",
"title": ""
},
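To illustrate the two layers described above, the following numpy sketch combines a cumulative average over the current and preceding positions with a sigmoid gating layer. The position-wise feed-forward transform that the paper applies to the average, as well as the residual connection and layer normalization, are omitted here; the parameter shapes and names are assumptions made for the example.

```python
import numpy as np

def average_attention(x, w_gate, b_gate):
    """x: (seq_len, d) decoder-side inputs; w_gate: (2d, 2d); b_gate: (2d,)."""
    # average layer: cumulative mean over positions 1..j (no future positions leak in)
    g = np.cumsum(x, axis=0) / np.arange(1, x.shape[0] + 1)[:, None]
    # gating layer: input and forget gates computed from the concatenation [x; g]
    z = np.concatenate([x, g], axis=-1) @ w_gate + b_gate
    i_gate, f_gate = np.split(1.0 / (1.0 + np.exp(-z)), 2, axis=-1)
    return i_gate * x + f_gate * g

rng = np.random.default_rng(0)
seq_len, d = 5, 8
out = average_attention(rng.normal(size=(seq_len, d)),
                        rng.normal(size=(2 * d, 2 * d)) * 0.1,
                        np.zeros(2 * d))
print(out.shape)   # (5, 8)
```

At decoding time the cumulative sum can be maintained incrementally with a running total, which is presumably what makes this layer cheaper than target-side self-attention during inference.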
{
"docid": "98f76e0ea0f028a1423e1838bdebdccb",
"text": "An operational-transconductance-amplifier (OTA) design for ultra-low voltage ultra-low power applications is proposed. The input stage of the proposed OTA utilizes a bulk-driven pseudo-differential pair to allow minimum supply voltage while achieving a rail-to-rail input range. All the transistors in the proposed OTA operate in the subthreshold region. Using a novel self-biasing technique to bias the OTA obviates the need for extra biasing circuitry and enhances the performance of the OTA. The proposed technique ensures the OTA robustness to process variations and increases design feasibility under ultra-low-voltage conditions. Moreover, the proposed biasing technique significantly improves the common-mode and power-supply rejection of the OTA. To further enhance the bandwidth and allow the use of smaller compensation capacitors, a compensation network based on a damping-factor control circuit is exploited. The OTA is fabricated in a 65 nm CMOS technology. Measurement results show that the OTA provides a low-frequency gain of 46 dB and rail-to-rail input common-mode range with a supply voltage as low as 0.5 V. The dc gain of the OTA is greater than 42 dB for supply voltage as low as 0.35 V. The power dissipation is 182 μW at VDD=0.5 V and 17 μW at VDD=0.35 V.",
"title": ""
},
{
"docid": "ca29fee64e9271e8fce675e970932af1",
"text": "This paper considers univariate online electricity demand forecasting for lead times from a half-hour-ahead to a day-ahead. A time series of demand recorded at half-hourly intervals contains more than one seasonal pattern. A within-day seasonal cycle is apparent from the similarity of the demand profile from one day to the next, and a within-week seasonal cycle is evident when one compares the demand on the corresponding day of adjacent weeks. There is strong appeal in using a forecasting method that is able to capture both seasonalities. The multiplicative seasonal ARIMA model has been adapted for this purpose. In this paper, we adapt the Holt-Winters exponential smoothing formulation so that it can accommodate two seasonalities. We correct for residual autocorrelation using a simple autoregressive model. The forecasts produced by the new double seasonal Holt-Winters method outperform those from traditional Holt-Winters and from a well-specified multiplicative double seasonal ARIMA model.",
"title": ""
},
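A rough sketch of multiplicative double seasonal exponential smoothing recursions is given below, with naive initialization and without the residual autocorrelation adjustment mentioned in the abstract. The exact update equations, smoothing-parameter names (alpha, beta, gamma, delta) and initialization used by the author may differ, so treat this purely as an illustration.

```python
def double_seasonal_hw(y, s1, s2, alpha, beta, gamma, delta):
    """Multiplicative double seasonal exponential smoothing (naive initialization).

    y: observations; s1: short cycle length (e.g. 48 half-hours per day);
    s2: long cycle length (e.g. 336 half-hours per week).
    """
    level, trend = y[0], 0.0
    seas1 = [1.0] * s1          # within-day seasonal indices
    seas2 = [1.0] * s2          # within-week seasonal indices
    fitted = []
    for t, obs in enumerate(y):
        d, w = seas1[t % s1], seas2[t % s2]
        fitted.append((level + trend) * d * w)              # one-step-ahead prediction
        prev_level = level
        level = alpha * obs / (d * w) + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
        seas1[t % s1] = gamma * obs / (level * w) + (1 - gamma) * d
        seas2[t % s2] = delta * obs / (level * d) + (1 - delta) * w
    return fitted, (level, trend, seas1, seas2)

# Toy series with a short cycle of 4 and a long cycle of 8 observations
demand = [100, 120, 90, 110] * 12
fit, state = double_seasonal_hw(demand, s1=4, s2=8,
                                alpha=0.1, beta=0.01, gamma=0.2, delta=0.2)
print(fit[:4])
```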
{
"docid": "8ccb6c767704bc8aee424d17cf13d1e3",
"text": "In this paper, we present a page classification application in a banking workflow. The proposed architecture represents administrative document images by merging visual and textual descriptions. The visual description is based on a hierarchical representation of the pixel intensity distribution. The textual description uses latent semantic analysis to represent document content as a mixture of topics. Several off-the-shelf classifiers and different strategies for combining visual and textual cues have been evaluated. A final step uses an $$n$$ n -gram model of the page stream allowing a finer-grained classification of pages. The proposed method has been tested in a real large-scale environment and we report results on a dataset of 70,000 pages.",
"title": ""
}
] |
scidocsrr
|
c9ec7ff2118ca5018bc3a48f81f785e3
|
Overview of Small Scale Electric Energy Storage Systems suitable for dedicated coupling with Renewable Micro Sources
|
[
{
"docid": "2d146e411e1a1068f6e907709d542a4f",
"text": "Plug-in vehicles can behave either as loads or as a distributed energy and power resource in a concept known as vehicle-to-grid (V2G) connection. This paper reviews the current status and implementation impact of V2G/grid-to-vehicle (G2V) technologies on distributed systems, requirements, benefits, challenges, and strategies for V2G interfaces of both individual vehicles and fleets. The V2G concept can improve the performance of the electricity grid in areas such as efficiency, stability, and reliability. A V2G-capable vehicle offers reactive power support, active power regulation, tracking of variable renewable energy sources, load balancing, and current harmonic filtering. These technologies can enable ancillary services, such as voltage and frequency control and spinning reserve. Costs of V2G include battery degradation, the need for intensive communication between the vehicles and the grid, effects on grid distribution equipment, infrastructure changes, and social, political, cultural, and technical obstacles. Although V2G operation can reduce the lifetime of vehicle batteries, it is projected to become economical for vehicle owners and grid operators. Components and unidirectional/bidirectional power flow technologies of V2G systems, individual and aggregated structures, and charging/recharging frequency and strategies (uncoordinated/coordinated smart) are addressed. Three elements are required for successful V2G operation: power connection to the grid, control and communication between vehicles and the grid operator, and on-board/off-board intelligent metering. Success of the V2G concept depends on standardization of requirements and infrastructure decisions, battery technology, and efficient and smart scheduling of limited fast-charge infrastructure. A charging/discharging infrastructure must be deployed. Economic benefits of V2G technologies depend on vehicle aggregation and charging/recharging frequency and strategies. The benefits will receive increased attention from grid operators and vehicle owners in the future.",
"title": ""
}
] |
[
{
"docid": "721d26f8ea042c2fb3a87255a69e85f5",
"text": "The Time-Triggered Protocol (TTP), which is intended for use in distributed real-time control applications that require a high dependability and guaranteed timeliness, is discussed. It integrates all services that are required in the design of a fault-tolerant real-time system, such as predictable message transmission, message acknowledgment in group communication, clock synchronization, membership, rapid mode changes, redundancy management, and temporary blackout handling. It supports fault-tolerant configurations with replicated nodes and replicated communication channels. TTP provides these services with a small overhead so it can be used efficiently on twisted pair channels as well as on fiber optic networks.",
"title": ""
},
{
"docid": "e8d2bad4083a4a6cf5f96aedd5112f3f",
"text": "Mechanic's hands is a poorly defined clinical finding that has been reported in a variety of rheumatologic diseases. Morphologic descriptions include hyperkeratosis on the sides of the digits that sometimes extends to the distal tips, diffuse palmar scale, and (more recently observed) linear discrete scaly papules in a similar lateral distribution. The association of mechanic's hands with dermatomyositis, although recognized, is still debatable. In this review, most studies have shown that mechanic's hands is commonly associated with dermatomyositis and displays histopathologic findings of interface dermatitis, colloid bodies, and interstitial mucin, which are consistent with a cutaneous connective tissue disease. A more specific definition of this entity would help to determine its usefulness in classifying and clinically identifying patients with dermatomyositis, with implications related to subsequent screening for associated comorbidities in this setting.",
"title": ""
},
{
"docid": "ed9f79cab2dfa271ee436b7d6884bc13",
"text": "This study conducts a phylogenetic analysis of extant African papionin craniodental morphology, including both quantitative and qualitative characters. We use two different methods to control for allometry: the previously described narrow allometric coding method, and the general allometric coding method, introduced herein. The results of this study strongly suggest that African papionin phylogeny based on molecular systematics, and that based on morphology, are congruent and support a Cercocebus/Mandrillus clade as well as a Papio/Lophocebus/Theropithecus clade. In contrast to previous claims regarding papionin and, more broadly, primate craniodental data, this study finds that such data are a source of valuable phylogenetic information and removes the basis for considering hard tissue anatomy \"unreliable\" in phylogeny reconstruction. Among highly sexually dimorphic primates such as papionins, male morphologies appear to be particularly good sources of phylogenetic information. In addition, we argue that the male and female morphotypes should be analyzed separately and then added together in a concatenated matrix in future studies of sexually dimorphic taxa. Character transformation analyses identify a series of synapomorphies uniting the various papionin clades that, given a sufficient sample size, should potentially be useful in future morphological analyses, especially those involving fossil taxa.",
"title": ""
},
{
"docid": "5c2c7faab9ba34058057cea35bcc6b92",
"text": "Today, there are a large number of online discussion fora on the internet which are meant for users to express, discuss and exchange their views and opinions on various topics. For example, news portals, blogs, social media channels such as youtube. typically allow users to express their views through comments. In such fora, it has been often observed that user conversations sometimes quickly derail and become inappropriate such as hurling abuses, passing rude and discourteous comments on individuals or certain groups/communities. Similarly, some virtual agents or bots have also been found to respond back to users with inappropriate messages. As a result, inappropriate messages or comments are turning into an online menace slowly degrading the effectiveness of user experiences. Hence, automatic detection and filtering of such inappropriate language has become an important problem for improving the quality of conversations with users as well as virtual agents. In this paper, we propose a novel deep learning-based technique for automatically identifying such inappropriate language. We especially focus on solving this problem in two application scenarios—(a) Query completion suggestions in search engines and (b) Users conversations in messengers. Detecting inappropriate language is challenging due to various natural language phenomenon such as spelling mistakes and variations, polysemy, contextual ambiguity and semantic variations. For identifying inappropriate query suggestions, we propose a novel deep learning architecture called “Convolutional Bi-Directional LSTM (C-BiLSTM)\" which combines the strengths of both Convolution Neural Networks (CNN) and Bi-directional LSTMs (BLSTM). For filtering inappropriate conversations, we use LSTM and Bi-directional LSTM (BLSTM) sequential models. The proposed models do not rely on hand-crafted features, are trained end-end as a single model, and effectively capture both local features as well as their global semantics. Evaluating C-BiLSTM, LSTM and BLSTM models on real-world search queries and conversations reveals that they significantly outperform both pattern-based and other hand-crafted feature-based baselines.",
"title": ""
},
{
"docid": "72555dce49865e6aa57574b5ce7d399b",
"text": "HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers. L’archive ouverte pluridisciplinaire HAL, est destinée au dépôt et à la diffusion de documents scientifiques de niveau recherche, publiés ou non, émanant des établissements d’enseignement et de recherche français ou étrangers, des laboratoires publics ou privés. Solving the Periodic Timetabling Problem using a Genetic Algorithm Diego Arenas, Remy Chevirer, Said Hanafi, Joaquin Rodriguez",
"title": ""
},
{
"docid": "c2ed6ac38a6014db73ba81dd898edb97",
"text": "The ability of personality traits to predict important life outcomes has traditionally been questioned because of the putative small effects of personality. In this article, we compare the predictive validity of personality traits with that of socioeconomic status (SES) and cognitive ability to test the relative contribution of personality traits to predictions of three critical outcomes: mortality, divorce, and occupational attainment. Only evidence from prospective longitudinal studies was considered. In addition, an attempt was made to limit the review to studies that controlled for important background factors. Results showed that the magnitude of the effects of personality traits on mortality, divorce, and occupational attainment was indistinguishable from the effects of SES and cognitive ability on these outcomes. These results demonstrate the influence of personality traits on important life outcomes, highlight the need to more routinely incorporate measures of personality into quality of life surveys, and encourage further research about the developmental origins of personality traits and the processes by which these traits influence diverse life outcomes.",
"title": ""
},
{
"docid": "55903de2bf1c877fac3fdfc1a1db68fc",
"text": "UK small to medium sized enterprises (SMEs) are suffering increasing levels of cybersecurity breaches and are a major point of vulnerability in the supply chain networks in which they participate. A key factor for achieving optimal security levels within supply chains is the management and sharing of cybersecurity information associated with specific metrics. Such information sharing schemes amongst SMEs in a supply chain network, however, would give rise to a certain level of risk exposure. In response, the purpose of this paper is to assess the implications of adopting select cybersecurity metrics for information sharing in SME supply chain consortia. Thus, a set of commonly used metrics in a prototypical cybersecurity scenario were chosen and tested from a survey of 17 UK SMEs. The results were analysed in respect of two variables; namely, usefulness of implementation and willingness to share across supply chains. Consequently, we propose a Cybersecurity Information Sharing Taxonomy for identifying risk exposure categories for SMEs sharing cybersecurity information, which can be applied to developing Information Sharing Agreements (ISAs) within SME supply chain consortia.",
"title": ""
},
{
"docid": "5f8fe83afe6870305536f29fa187e56e",
"text": "Textual grounding, i.e., linking words to objects in images, is a challenging but important task for robotics and human-computer interaction. Existing techniques benefit from recent progress in deep learning and generally formulate the task as a supervised learning problem, selecting a bounding box from a set of possible options. To train these deep net based approaches, access to a large-scale datasets is required, however, constructing such a dataset is time-consuming and expensive. Therefore, we develop a completely unsupervised mechanism for textual grounding using hypothesis testing as a mechanism to link words to detected image concepts. We demonstrate our approach on the ReferIt Game dataset and the Flickr30k data, outperforming baselines by 7.98% and 6.96% respectively.",
"title": ""
},
{
"docid": "0ab28f6fee235eb3e2e0897d7fb2a182",
"text": "Internet of things (IoT) applications have become increasingly popular in recent years, with applications ranging from building energy monitoring to personal health tracking and activity recognition. In order to leverage these data, automatic knowledge extraction – whereby we map from observations to interpretable states and transitions – must be done at scale. As such, we have seen many recent IoT data sets include annotations with a human expert specifying states, recorded as a set of boundaries and associated labels in a data sequence. ese data can be used to build automatic labeling algorithms that produce labels as an expert would. Here, we refer to human-specified boundaries as breakpoints. Traditional changepoint detection methods only look for statistically-detectable boundaries that are defined as abrupt variations in the generative parameters of a data sequence. However, we observe that breakpoints occur on more subtle boundaries that are non-trivial to detect with these statistical methods. In this work, we propose a new unsupervised approach, based on deep learning, that outperforms existing techniques and learns the more subtle, breakpoint boundaries with a high accuracy. rough extensive experiments on various real-world data sets – including human-activity sensing data, speech signals, and electroencephalogram (EEG) activity traces – we demonstrate the effectiveness of our algorithm for practical applications. Furthermore, we show that our approach achieves significantly beer performance than previous methods. Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for thirdparty components of this work must be honored. For all other uses, contact the owner/author(s).",
"title": ""
},
{
"docid": "74acfe91e216c8494b7304cff03a8c66",
"text": "Diagnostic accuracy of the talar tilt test is not well established in a chronic ankle instability (CAI) population. Our purpose was to determine the diagnostic accuracy of instrumented and manual talar tilt tests in a group with varied ankle injury history compared with a reference standard of self-report questionnaire. Ninety-three individuals participated, with analysis occurring on 88 (39 CAI, 17 ankle sprain copers, and 32 healthy controls). Participants completed the Cumberland Ankle Instability Tool, arthrometer inversion talar tilt tests (LTT), and manual medial talar tilt stress tests (MTT). The ability to determine CAI status using the LTT and MTT compared with a reference standard was performed. The sensitivity (95% confidence intervals) of LTT and MTT was low [LTT = 0.36 (0.23-0.52), MTT = 0.49 (0.34-0.64)]. Specificity was good to excellent (LTT: 0.72-0.94; MTT: 0.78-0.88). Positive likelihood ratio (+ LR) values for LTT were 1.26-6.10 and for MTT were 2.23-4.14. Negative LR for LTT were 0.68-0.89 and for MTT were 0.58-0.66. Diagnostic odds ratios ranged from 1.43 to 8.96. Both clinical and arthrometer laxity testing appear to have poor overall diagnostic value for evaluating CAI as stand-alone measures. Laxity testing to assess CAI may only be useful to rule in the condition.",
"title": ""
},
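The accuracy statistics quoted above (sensitivity, specificity, likelihood ratios, diagnostic odds ratio) all follow from a 2x2 table of test results against the reference standard. A small helper that reproduces those standard definitions is sketched below; the function, argument names and example counts are illustrative only and are not the study's data.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard accuracy statistics from a 2x2 table (true/false positives/negatives)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    lr_positive = sensitivity / (1 - specificity)        # positive likelihood ratio
    lr_negative = (1 - sensitivity) / specificity        # negative likelihood ratio
    dor = lr_positive / lr_negative                      # diagnostic odds ratio
    return {"sensitivity": sensitivity, "specificity": specificity,
            "LR+": lr_positive, "LR-": lr_negative, "DOR": dor}

# Hypothetical counts for illustration
print(diagnostic_metrics(tp=19, fp=10, fn=20, tn=39))
```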
{
"docid": "16f1b038f51e614da06ba84ebd175e14",
"text": "This paper explores how to extract argumentation-relevant information automatically from a corpus of legal decision documents, and how to build new arguments using that information. For decision texts, we use the Vaccine/Injury Project (V/IP) Corpus, which contains default-logic annotations of argument structure. We supplement this with presuppositional annotations about entities, events, and relations that play important roles in argumentation, and about the level of confidence that arguments would be successful. We then propose how to integrate these semantic-pragmatic annotations with syntactic and domain-general semantic annotations, such as those generated in the DeepQA architecture, and outline how to apply machine learning and scoring techniques similar to those used in the IBM Watson system for playing the Jeopardy! question-answer game. We replace this game-playing goal, however, with the goal of learning to construct legal arguments.",
"title": ""
},
{
"docid": "f05d7f391d6d805308801d23bc3234f0",
"text": "Identifying patterns in large high dimensional data sets is a challenge. As the number of dimensions increases, the patterns in the data sets tend to be more prominent in the subspaces than the original dimensional space. A system to facilitate presentation of such subspace oriented patterns in high dimensional data sets is required to understand the data.\n Heidi is a high dimensional data visualization system that captures and visualizes the closeness of points across various subspaces of the dimensions; thus, helping to understand the data. The core concept behind Heidi is based on prominence of patterns within the nearest neighbor relations between pairs of points across the subspaces.\n Given a d-dimensional data set as input, Heidi system generates a 2-D matrix represented as a color image. This representation gives insight into (i) how the clusters are placed with respect to each other, (ii) characteristics of placement of points within a cluster in all the subspaces and (iii) characteristics of overlapping clusters in various subspaces.\n A sample of results displayed and discussed in this paper illustrate how Heidi Visualization can be interpreted.",
"title": ""
},
{
"docid": "1abcf9480879b3d29072f09d5be8609d",
"text": "Warm restart techniques on training deep neural networks often achieve better recognition accuracies and can be regarded as easy methods to obtain multiple neural networks with no additional training cost from a single training process. Ensembles of intermediate neural networks obtained by warm restart techniques can provide higher accuracy than a single neural network obtained finally by a whole training process. However, existing methods on both of warm restart and its ensemble techniques use fixed cyclic schedules and have little degree of parameter adaption. This paper extends a class of possible schedule strategies of warm restart, and clarifies their effectiveness for recognition performance. Specifically, we propose parameterized functions and various cycle schedules to improve recognition accuracies by the use of deep neural networks with no additional training cost. Experiments on CIFAR-10 and CIFAR-100 show that our methods can achieve more accurate rates than the existing cyclic training and ensemble methods.",
"title": ""
},
{
"docid": "dc883936f3cc19008983c9a5bb2883f3",
"text": "Laparoscopic surgery provides patients with less painful surgery but is more demanding for the surgeon. The increased technological complexity and sometimes poorly adapted equipment have led to increased complaints of surgeon fatigue and discomfort during laparoscopic surgery. Ergonomic integration and suitable laparoscopic operating room environment are essential to improve efficiency, safety, and comfort for the operating team. Understanding ergonomics can not only make life of surgeon comfortable in the operating room but also reduce physical strains on surgeon.",
"title": ""
},
{
"docid": "b4d3ad419c5165256f7a2551614e29e8",
"text": "Image-based rendering techniques are a powerful alternative to traditional polygon-based computer graphics. This paper presents a novel light field rendering technique which performs per-pixel depth correction of rays for high-quality reconstruction. Our technique stores combined RGB and depth values in a parabolic 2D texture for every light field sample acquired at discrete positions on a uniform spherical setup. Image synthesis is implemented on the GPU as a fragment program which extracts the correct image information from adjacent cameras for each fragment by applying per-pixel depth correction of rays. We show that the presented image-based rendering technique provides a significant improvement compared to previous approaches. We explain two different rendering implementations which make use of a uniform parametrisation to minimise disparity problems and ensure full six degrees of freedom for virtual view synthesis. While one rendering algorithm implements an iterative refinement approach for rendering light fields with per pixel depth correction, the other approach employs a raycaster, which provides superior rendering quality at moderate frame rates. GPU based per-fragment depth correction of rays, used in both implementations, helps reducing ghosting artifacts to a non-noticeable amount and provides a rendering technique that performs without exhaustive pre-processing for 3D object reconstruction and without real-time ray-object intersection calculations at rendering time.",
"title": ""
},
{
"docid": "2d6627f0cd3b184bae491d7ae003fe82",
"text": "The aim of this paper is to explore the possibility of using geo-referenced satellite or aerial images to augment an Unmanned Aerial Vehicle (UAV) navigation system in case of GPS failure. A vision based navigation system which combines inertial sensors, visual odometer and registration of a UAV on-board video to a given geo-referenced aerial image has been developed and tested on real flight-test data. The experimental results show that it is possible to extract useful position information from aerial imagery even when the UAV is flying at low altitude. It is shown that such information can be used in an automated way to compensate the drift of the UAV state estimation which occurs when only inertial sensors and visual odometer are used.",
"title": ""
},
{
"docid": "cfb811975943276a356f2b7dc95c157f",
"text": "Recent technological trends in mobile/wearable devices and sensors have been enabling an increasing number of people to collect and store their “lifelog” easily in their daily lives. Beyond exercise behavior change of individual users, our research focus is on the behavior change of teams, based on lifelogging technologies and lifelog sharing. In this paper, we propose and evaluate six different types of lifelog sharing models among team members for their exercise promotion, leveraging the concepts of “competition” and “collaboration.” According to our experimental mobile web application for exercise promotion and an extensive user study conducted with a total of 64 participants over a period of three weeks, the model with a “competition” technique resulted in the most effective performance for competitive teams, such as sports teams.",
"title": ""
},
{
"docid": "53749eab6b23c026f9cb3b37a7f639f3",
"text": "This article presents a dual system model (DSM) of decision making under risk and uncertainty according to which the value of a gamble is a combination of the values assigned to it independently by the affective and deliberative systems. On the basis of research on dual process theories and empirical research in Hsee and Rottenstreich (2004) and Rottenstreich and Hsee (2001) among others, the DSM incorporates (a) individual differences in disposition to rational versus emotional decision making, (b) the affective nature of outcomes, and (c) different task construals within its framework. The model has good descriptive validity and accounts for (a) violation of nontransparent stochastic dominance, (b) fourfold pattern of risk attitudes, (c) ambiguity aversion, (d) common consequence effect, (e) common ratio effect, (f) isolation effect, and (g) coalescing and event-splitting effects. The DSM is also used to make several novel predictions of conditions under which specific behavior patterns may or may not occur.",
"title": ""
},
{
"docid": "c40654c3cd0a52bff359212af2dd22b8",
"text": "This survey provides a structured and comprehensive overview of research on security and privacy in computer and communication networks that use game-theoretic approaches. We present a selected set of works to highlight the application of game theory in addressing different forms of security and privacy problems in computer networks and mobile applications. We organize the presented works in six main categories: security of the physical and MAC layers, security of self-organizing networks, intrusion detection systems, anonymity and privacy, economics of network security, and cryptography. In each category, we identify security problems, players, and game models. We summarize the main results of selected works, such as equilibrium analysis and security mechanism designs. In addition, we provide a discussion on the advantages, drawbacks, and future direction of using game theory in this field. In this survey, our goal is to instill in the reader an enhanced understanding of different research approaches in applying game-theoretic methods to network security. This survey can also help researchers from various fields develop game-theoretic solutions to current and emerging security problems in computer networking.",
"title": ""
}
] |
scidocsrr
|
b7fc4ae2c25e5b8abd031a4980887c91
|
Factors Influencing Customer Loyalty Toward Online Shopping
|
[
{
"docid": "5cdc962d9ce66938ad15829f8d0331ed",
"text": "This study aims to provide a picture of how relationship quality can influence customer loyalty or loyalty in the business-to-business context. Building on prior research, we propose relationship quality as a higher construct comprising trust, commitment, satisfaction and service quality. These dimensions of relationship quality can reasonably explain the influence of relationship quality on customer loyalty. This study follows the composite loyalty approach providing both behavioural aspects (purchase intentions) and attitudinal loyalty in order to fully explain the concept of customer loyalty. A literature search is undertaken in the areas of customer loyalty, relationship quality, perceived service quality, trust, commitment and satisfaction. This study then seeks to address the following research issues: Does relationship quality influence both aspects of customer loyalty? Which relationship quality dimensions influence each of the components of customer loyalty? This study was conducted in a business-to-business setting of the courier and freight delivery service industry in Australia. The survey was targeted to Australian Small to Medium Enterprises (SMEs). Two methods were chosen for data collection: mail survey and online survey. The total number of usable respondents who completed both survey was 306. In this study, a two step approach (Anderson and Gerbing 1988) was selected for measurement model and structural model. The results also show that all measurement models of relationship dimensions achieved a satisfactory level of fit to the data. The hypothesized relationships were estimated using structural equation modeling. The overall goodness of fit statistics shows that the structural model fits the data well. As the results show, to maintain customer loyalty to the supplier, a supplier may enhance all four aspects of relationship quality which are trust, commitment, satisfaction and service quality. Specifically, in order to enhance customer’s trust, a supplier should promote the customer’s trust in the supplier. In efforts to emphasize commitment, a supplier should focus on building affective aspects of commitment rather than calculative aspects. Satisfaction appears to be a crucial factor in maintaining purchase intentions whereas service quality will strongly enhance both purchase intentions and attitudinal loyalty.",
"title": ""
},
{
"docid": "a6e35b743c2cfd2cd764e5ad83decaa7",
"text": "An e-vendor’s website inseparably embodies an interaction with the vendor and an interaction with the IT website interface. Accordingly, research has shown two sets of unrelated usage antecedents by customers: 1) customer trust in the e-vendor and 2) customer assessments of the IT itself, specifically the perceived usefulness and perceived ease-of-use of the website as depicted in the technology acceptance model (TAM). Research suggests, however, that the degree and impact of trust, perceived usefulness, and perceived ease of use change with experience. Using existing, validated scales, this study describes a free-simulation experiment that compares the degree and relative importance of customer trust in an e-vendor vis-à-vis TAM constructs of the website, between potential (i.e., new) customers and repeat (i.e., experienced) ones. The study found that repeat customers trusted the e-vendor more, perceived the website to be more useful and easier to use, and were more inclined to purchase from it. The data also show that while repeat customers’ purchase intentions were influenced by both their trust in the e-vendor and their perception that the website was useful, potential customers were not influenced by perceived usefulness, but only by their trust in the e-vendor. Implications of this apparent trust-barrier and guidelines for practice are discussed.",
"title": ""
}
] |
[
{
"docid": "8508162ac44f56aaaa9c521e6628b7b2",
"text": "Pervasive or ubiquitous computing was developed thanks to the technological evolution of embedded systems and computer communication means. Ubiquitous computing has given birth to the concept of smart spaces that facilitate our daily life and increase our comfort where devices provide proactively adpated services. In spite of the significant previous works done in this domain, there still a lot of work and enhancement to do in particular the taking into account of current user's context when providing adaptable services. In this paper we propose an approach for context-aware services adaptation for a smart living room using two machine learning methods.",
"title": ""
},
{
"docid": "096ee0adebc8f8d7284ad55dd9cc9eca",
"text": "Automatically assigning the correct anatomical labels to coronary arteries is an important task that would speed up work flow times of radiographers, radiologists and cardiologists, and also aid the standard assessment of coronary artery disease. However, automatic labelling faces challenges resulting from structures as complex and widely varied as coronary anatomy. A system has been developed which addresses this requirement and is capable of automatically assigning correct anatomical labels to pre-segmented coronary artery centrelines in Cardiac Computed-Tomography Angiographic (CCTA) images with 84% accuracy. The system consists of two major phases: 1) training a multivariate gaussian classifier with labelled anatomies to estimate mean-vectors for each anatomical class and a covariance matrix pooled over all classes, based on a set of features; 2) generating all plausible label combinations per test anatomy based on a set of topological and geometric rules, and returning the most likely based on the parameters generated in 1).",
"title": ""
},
{
"docid": "f7e779114a0eb67fd9e3dfbacf5110c9",
"text": "Online game is an increasingly popular source of entertainment for all ages, with relatively prevalent negative consequences. Addiction is a problem that has received much attention. This research aims to develop a measure of online game addiction for Indonesian children and adolescents. The Indonesian Online Game Addiction Questionnaire draws from earlier theories and research on the internet and game addiction. Its construction is further enriched by including findings from qualitative interviews and field observation to ensure appropriate expression of the items. The measure consists of 7 items with a 5-point Likert Scale. It is validated by testing 1,477 Indonesian junior and senior high school students from several schools in Manado, Medan, Pontianak, and Yogyakarta. The validation evidence is shown by item-total correlation and criterion validity. The Indonesian Online Game Addiction Questionnaire has good item-total correlation (ranging from 0.29 to 0.55) and acceptable reliability (α = 0.73). It is also moderately correlated with the participant's longest time record to play online games (r = 0.39; p<0.01), average days per week in playing online games (ρ = 0.43; p<0.01), average hours per days in playing online games (ρ = 0.41; p<0.01), and monthly expenditure for online games (ρ = 0.30; p<0.01). Furthermore, we created a clinical cut-off estimate by combining criteria and population norm. The clinical cut-off estimate showed that the score of 14 to 21 may indicate mild online game addiction, and the score of 22 and above may indicate online game addiction. Overall, the result shows that Indonesian Online Game Addiction Questionnaire has sufficient psychometric property for research use, as well as limited clinical application.",
"title": ""
},
{
"docid": "8db41c68c77a5e9075a2404e382c0634",
"text": "We propose, WarpGAN, a fully automatic network that can generate caricatures given an input face photo. Besides transferring rich texture styles, WarpGAN learns to automatically predict a set of control points that can warp the photo into a caricature, while preserving identity. We introduce an identity-preserving adversarial loss that aids the discriminator to distinguish between different subjects. Moreover, WarpGAN allows customization of the generated caricatures by controlling the exaggeration extent and the visual styles. Experimental results on a public domain dataset, WebCaricature, show that WarpGAN is capable of generating a diverse set of caricatures while preserving the identities. Five caricature experts suggest that caricatures generated by WarpGAN are visually similar to hand-drawn ones and only prominent facial features are exaggerated. ∗ indicates equal contribution",
"title": ""
},
{
"docid": "d0690dcac9bf28f1fe6e2153035f898c",
"text": "The estimation of the homography between two views is a key step in many applications involving multiple view geometry. The homography exists between two views between projections of points on a 3D plane. A homography exists also between projections of all points if the cameras have purely rotational motion. A number of algorithms have been proposed for the estimation of the homography relation between two images of a planar scene. They use features or primitives ranging from simple points to a complex ones like non-parametric curves. Different algorithms make different assumptions on the imaging setup and what is known about them. This article surveys several homography estimation techniques from the literature. The essential theory behind each method is presented briefly and compared with the others. Experiments aimed at providing a representative analysis and comparison of the methods discussed are also presented in the paper.",
"title": ""
},
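As a concrete point-based example of the estimation problem surveyed above, the snippet below implements the basic (unnormalized) direct linear transform from four or more point correspondences. Practical implementations additionally normalize the points and use robust estimation such as RANSAC; the function name, interface and test data here are assumptions for illustration.

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate H such that dst ~ H @ src (homogeneous), from >= 4 point pairs.

    src, dst: (N, 2) arrays of corresponding image points.
    """
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(rows, dtype=float)
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)        # null vector = smallest right singular vector
    return H / H[2, 2]

# Quick check on a synthetic square mapped by a known transform
src = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
dst = src * 2.0 + 0.5               # a similarity, which is a special homography
print(np.round(homography_dlt(src, dst), 3))
```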
{
"docid": "cfcad9de10e7bc3cd0aa2a02f42e371d",
"text": "Ridesharing is a challenging topic in the urban computing paradigm, which utilizes urban sensors to generate a wealth of benefits and thus is an important branch in ubiquitous computing. Traditionally, ridesharing is achieved by mainly considering the received user ridesharing requests and then returns solutions to users. However, there lack research efforts of examining user acceptance to the proposed solutions. To our knowledge, user decisions in accepting/rejecting a rideshare is one of the crucial, yet not well studied, factors in the context of dynamic ridesharing. Moreover, existing research attention is mainly paid to find the nearest taxi, whilst in reality the nearest taxi may not be the optimal answer. In this paper, we tackle the above un-addressed issues while preserving the scalability of the system. We present a scalable framework, namely TRIPS, which supports the probability of accepting each request by the companion passengers and minimizes users’ efforts. In TRIPS, we propose three search techniques to increase the efficiency of the proposed ridesharing service. We also reformulate the criteria for searching and ranking ridesharing alternatives and propose indexing techniques to optimize the process. Our approach is validated using a real, large-scale dataset of 10,357 GPS-equipped taxis in the city of Beijing, China and showcases its effectiveness on the ridesharing task.",
"title": ""
},
{
"docid": "68a826dad7fd3da0afc234bb04505d8a",
"text": "The use of deep syntactic information such as typed dependencies has been shown to be very effective in Information Extraction. Despite this potential, the process of manually creating rule-based information extractors that operate on dependency trees is not intuitive for persons without an extensive NLP background. In this system demonstration, we present a tool and a workflow designed to enable initiate users to interactively explore the effect and expressivity of creating Information Extraction rules over dependency trees. We introduce the proposed five step workflow for creating information extractors, the graph query based rule language, as well as the core features of the PROPMINER tool.",
"title": ""
},
{
"docid": "eb8ad65b29e83dff8f1d588f231ee1d4",
"text": "Rheumatic heart disease (RHD) is an important cause of cardiac morbidity and mortality globally, particularly in the Pacific region. Susceptibility to RHD is thought to be due to genetic factors that are influenced by environmental factors, such as crowding and poverty. However, there are few data relating to these environmental factors in the Pacific region. We conducted a case-control study of 80 cases of RHD with age- and sex-matched controls in Fiji using a questionnaire to investigate associations of RHD with a number of environmental factors. There was a trend toward increased risk of RHD in association with poor-quality housing and lower socioeconomic status, but only one factor, maternal unemployment, reached statistical significance (OR 2.6, 95% confidence interval 1.2–5.8). Regarding crowding, little difference was observed between the two groups. Although our data do not allow firm conclusions, they do suggest that further studies of socioeconomic factors and RHD in the Pacific are warranted. They also suggest that genetic studies would provide an insight into susceptibility to RHD in this population.",
"title": ""
},
{
"docid": "c9e5a1b9c18718cc20344837e10b08f7",
"text": "Reconnaissance is the initial and essential phase of a successful advanced persistent threat (APT). In many cases, attackers collect information from social media, such as professional social networks. This information is used to select members that can be exploited to penetrate the organization. Detecting such reconnaissance activity is extremely hard because it is performed outside the organization premises. In this paper, we propose a framework for management of social network honeypots to aid in detection of APTs at the reconnaissance phase. We discuss the challenges that such a framework faces, describe its main components, and present a case study based on the results of a field trial conducted with the cooperation of a large European organization. In the case study, we analyze the deployment process of the social network honeypots and their maintenance in real social networks. The honeypot profiles were successfully assimilated into the organizational social network and received suspicious friend requests and mail messages that revealed basic indications of a potential forthcoming attack. In addition, we explore the behavior of employees in professional social networks, and their resilience and vulnerability toward social network infiltration.",
"title": ""
},
{
"docid": "443652d4a9d96eedd832c5dbb3b41f0a",
"text": "This paper presents a rigorous analytical model for analyzing the effects of local oscillator output imperfections such as phase/amplitude imbalances and phase noise on M -ary quadrature amplitude modulation (M-QAM) transceiver performance. A closed-form expression of the error vector magnitude (EVM) and an analytic expression of the symbol error rate (SER) are derived considering a single-carrier linear transceiver link with additive white Gaussian noise channel. The proposed analytical model achieves a good agreement with the simulation results based on the Monte Carlo method. The proposed QAM imperfection analysis model provides an efficient means for system and circuit designers to analyze the wireless transceiver performance and specify the transceiver block specifications.",
"title": ""
},
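As a side note on the metric used above: the RMS error vector magnitude is the root-mean-square error between received and ideal constellation symbols, normalized by a reference power (conventions vary between average and peak reference power). A minimal sketch follows; the closed-form EVM expression derived in the paper, which accounts for phase/amplitude imbalance and phase noise analytically, is not reproduced here, and all names and data are assumptions.

```python
import numpy as np

def evm_rms_percent(received, reference):
    """RMS error vector magnitude in percent for complex baseband symbols."""
    received = np.asarray(received, dtype=complex)
    reference = np.asarray(reference, dtype=complex)
    error_power = np.mean(np.abs(received - reference) ** 2)
    reference_power = np.mean(np.abs(reference) ** 2)   # average-power normalization
    return 100.0 * np.sqrt(error_power / reference_power)

# Hypothetical QPSK symbols with a small additive error
ideal = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]) / np.sqrt(2)
noisy = ideal + 0.02 * (1 + 1j)
print(evm_rms_percent(noisy, ideal))
```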
{
"docid": "f1889dbb14d6819426eba1695014ec2d",
"text": "Monoclonal antibodies (MAb) were produced to hexanal-bovine serum albumin conjugates. An indirect competitive ELISA was developed with a detection range of 1-50 ng of hexanal/mL. Hexanal conjugated to three different proteins was recognized, whereas free hexanal and the native proteins were not detected. The antibody cross-reacted with pentanal, heptanal, and 2-trans-hexenal conjugated to chicken serum albumin (CSA) with cross-reactivities of 37.9, 76.6, and 45.0%, respectively. There was no cross-reactivity with propanal, butanal, octanal, and nonanal conjugated to CSA. The hexanal content of a meat model system was determined using MAb and polyclonal antibody-based ELISAs and compared with analysis by a dynamic headspace gas chromatographic (HS-GC) method and a thiobarbituric acid reactive substances (TBARS) assay. Both ELISAs showed strong correlations with the HS-GC and TBARS methods. ELISAs may be a fast and simple alternative to GC for monitoring lipid oxidation in meat.",
"title": ""
},
{
"docid": "70b2bf304c161cd0a5408a813e5d9fc5",
"text": "[1] TheMoscoviense Basin, on the northern portion of the lunar farside, displays topography with a partial peak ring, in addition to rings that are offset to the southeast. These rings do not follow the typical concentric ring spacing that is recognized with other basins, suggesting that they may have formed as a result of an oblique impact or perhaps multiple impacts. In addition to the unusual ring spacing present, the Moscoviense Basin contains diverse mare basalt units covering the basin floor and a few highland mafic exposures within its rings. New analysis of previously mapped mare units suggests that the oldest mare unit is the remnant of the impact melt sheet. The Moscoviense Basin provides a glimpse into the lunar highlands terrain and an opportunity to explore the geologic context of initial lunar crustal development and modification.",
"title": ""
},
{
"docid": "b7b664d1749b61f2f423d7080a240a60",
"text": "The research challenge addressed in this paper is to devise effective techniques for identifying task-based sessions, i.e. sets of possibly non contiguous queries issued by the user of a Web Search Engine for carrying out a given task. In order to evaluate and compare different approaches, we built, by means of a manual labeling process, a ground-truth where the queries of a given query log have been grouped in tasks. Our analysis of this ground-truth shows that users tend to perform more than one task at the same time, since about 75% of the submitted queries involve a multi-tasking activity. We formally define the Task-based Session Discovery Problem (TSDP) as the problem of best approximating the manually annotated tasks, and we propose several variants of well known clustering algorithms, as well as a novel efficient heuristic algorithm, specifically tuned for solving the TSDP. These algorithms also exploit the collaborative knowledge collected by Wiktionary and Wikipedia for detecting query pairs that are not similar from a lexical content point of view, but actually semantically related. The proposed algorithms have been evaluated on the above ground-truth, and are shown to perform better than state-of-the-art approaches, because they effectively take into account the multi-tasking behavior of users.",
"title": ""
},
{
"docid": "08260ba76f242725b8a08cbd8e4ec507",
"text": "Vocal singing (singing with lyrics) shares features common to music and language but it is not clear to what extent they use the same brain systems, particularly at the higher cortical level, and how this varies with expertise. Twenty-six participants of varying singing ability performed two functional imaging tasks. The first examined covert generative language using orthographic lexical retrieval while the second required covert vocal singing of a well-known song. The neural networks subserving covert vocal singing and language were found to be proximally located, and their extent of cortical overlap varied with singing expertise. Nonexpert singers showed greater engagement of their language network during vocal singing, likely accounting for their less tuneful performance. In contrast, expert singers showed a more unilateral pattern of activation associated with reduced engagement of the right frontal lobe. The findings indicate that singing expertise promotes independence from the language network with decoupling producing more tuneful performance. This means that the age-old singing practice of 'finding your singing voice' may be neurologically mediated by changing how strongly singing is coupled to the language system.",
"title": ""
},
{
"docid": "e6df946c5b56b38f35a3e9798cc819bf",
"text": "Most of the microwave communication systems have requirement of power dividers that are essential for power splitting and combining operations. This paper presents a structure and methodology for designing a rectangular waveguide Folded E plane Tee. The structure proposed has the advantage of less area consumption as compared to a conventional waveguide Tee. The paper also presents design equations using which one can design a Folded E plane Tee at any desired frequency. The designs thus obtained at some random frequencies from the equations have been simulated in COMSOL Multiphysics and the scattering parameters obtained have been presented.",
"title": ""
},
{
"docid": "7fc65ecddd4568283c0c21cd63804f07",
"text": "We present a system that detects floor plan automatically and realistically populated by a variety of objects of walls and windows. Given examples of floor plan, our system extracts, in advance, bearing wall, setting others objects which are not bearing wall into a non-bearing walls set. And then, to find contours in the non-bearing walls set. It recognize windows from these contours. The left objects of the set will to be identified walls including with the original bearing walls. The last step is to disintegrate wall into independent rectangular one by one. We demonstrate that our system can handle multiple realistic floor plan and, through decomposing and rebuilding, recognize walls, windows of a floor plan image. Based on high resolution images downloaded from Baidu, the experimental result shows that the average recognition rate of the proposed method is 90.21%, which proves the effectiveness of the proposed method.",
"title": ""
},
{
"docid": "fee78b996d88584499f342f7da89addf",
"text": "It has become standard for search engines to augment result lists with document summaries. Each document summary consists of a title, abstract, and a URL. In this work, we focus on the task of selecting relevant sentences for inclusion in the abstract. In particular, we investigate how machine learning-based approaches can effectively be applied to the problem. We analyze and evaluate several learning to rank approaches, such as ranking support vector machines (SVMs), support vector regression (SVR), and gradient boosted decision trees (GBDTs). Our work is the first to evaluate SVR and GBDTs for the sentence selection task. Using standard TREC test collections, we rigorously evaluate various aspects of the sentence selection problem. Our results show that the effectiveness of the machine learning approaches varies across collections with different characteristics. Furthermore, the results show that GBDTs provide a robust and powerful framework for the sentence selection task and significantly outperform SVR and ranking SVMs on several data sets.",
"title": ""
},
{
"docid": "03daea46a533bcc91cc07071f7c2ca2a",
"text": "This article describes the RMediation package,which offers various methods for building confidence intervals (CIs) for mediated effects. The mediated effect is the product of two regression coefficients. The distribution-of-the-product method has the best statistical performance of existing methods for building CIs for the mediated effect. RMediation produces CIs using methods based on the distribution of product, Monte Carlo simulations, and an asymptotic normal distribution. Furthermore, RMediation generates percentiles, quantiles, and the plot of the distribution and CI for the mediated effect. An existing program, called PRODCLIN, published in Behavior Research Methods, has been widely cited and used by researchers to build accurate CIs. PRODCLIN has several limitations: The program is somewhat cumbersome to access and yields no result for several cases. RMediation described herein is based on the widely available R software, includes several capabilities not available in PRODCLIN, and provides accurate results that PRODCLIN could not.",
"title": ""
},
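As a concrete illustration of the Monte Carlo option described above, the following sketch draws the two path coefficients from independent normal distributions centered on their estimates and takes percentiles of their product. It is a minimal stand-in, not the RMediation package itself; the example coefficients and standard errors are made up.

```python
import numpy as np

def mediated_effect_ci(a, se_a, b, se_b, n_draws=1_000_000, level=0.95, seed=1):
    """Monte Carlo confidence interval for the mediated (indirect) effect a*b.

    a, b      : estimated regression coefficients (X->M and M->Y|X paths)
    se_a, se_b: their standard errors
    """
    rng = np.random.default_rng(seed)
    draws = rng.normal(a, se_a, n_draws) * rng.normal(b, se_b, n_draws)
    alpha = 1 - level
    lo, hi = np.quantile(draws, [alpha / 2, 1 - alpha / 2])
    return lo, hi

# Hypothetical example: a = 0.5 (SE 0.1), b = 0.4 (SE 0.2)
print(mediated_effect_ci(0.5, 0.1, 0.4, 0.2))
```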
{
"docid": "d6681899902b990f82b775927cde9277",
"text": "Facial expression provides an important behavioral measure for studies of emotion, cognitive processes, and social interaction. Facial expression recognition has recently become a promising research area. Its applications include human-computer interfaces, human emotion analysis, and medical care and cure. In this paper, we investigate various feature representation and expression classification schemes to recognize seven different facial expressions, such as happy, neutral, angry, disgust, sad, fear and surprise, in the JAFFE database. Experimental results show that the method of combining 2D-LDA (Linear Discriminant Analysis) and SVM (Support Vector Machine) outperforms others. The recognition rate of this method is 95.71% by using leave-one-out strategy and 94.13% by using cross-validation strategy. It takes only 0.0357 second to process one image of size 256 × 256.",
"title": ""
},
{
"docid": "46fba65ad6ad888bb3908d75f0bcc029",
"text": "Deep neural network (DNN) obtains significant accuracy improvements on many speech recognition tasks and its power comes from the deep and wide network structure with a very large number of parameters. It becomes challenging when we deploy DNN on devices which have limited computational and storage resources. The common practice is to train a DNN with a small number of hidden nodes and a small senone set using the standard training process, leading to significant accuracy loss. In this study, we propose to better address these issues by utilizing the DNN output distribution. To learn a DNN with small number of hidden nodes, we minimize the Kullback–Leibler divergence between the output distributions of the small-size DNN and a standard large-size DNN by utilizing a large number of un-transcribed data. For better senone set generation, we cluster the senones in the large set into a small one by directly relating the clustering process to DNN parameters, as opposed to decoupling the senone generation and DNN training process in the standard training. Evaluated on a short message dictation task, the proposed two methods get 5.08% and 1.33% relative word error rate reduction from the standard training method, respectively.",
"title": ""
}
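A minimal sketch of the first idea in the abstract above, i.e., training a small student DNN to match the output distribution of a large teacher by minimizing KL divergence. This is plain NumPy for clarity, not the authors' training pipeline; the batch and senone-set sizes are toy values.

```python
import numpy as np

def softmax(logits, axis=-1):
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def kl_to_teacher(student_logits, teacher_logits, eps=1e-12):
    """Mean KL(teacher || student) over a batch of frames.

    Minimizing this quantity (instead of cross-entropy against hard senone
    labels) lets a small student DNN mimic the output distribution of a large
    teacher DNN, e.g. on un-transcribed data.
    """
    p_t = softmax(teacher_logits)
    p_s = softmax(student_logits)
    kl = np.sum(p_t * (np.log(p_t + eps) - np.log(p_s + eps)), axis=-1)
    return kl.mean()

rng = np.random.default_rng(0)
teacher = rng.normal(size=(8, 100))   # 8 frames, 100 senone classes (toy sizes)
student = rng.normal(size=(8, 100))
print(kl_to_teacher(student, teacher))
```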
] |
scidocsrr
|
b476ab9cdaf5925ded36a1400b3d45fd
|
MDL-CW: A Multimodal Deep Learning Framework with CrossWeights
|
[
{
"docid": "e5874c373f9bc4565249f335560023ff",
"text": "We propose a multi-wing harmonium model for mining multimedia data that extends and improves on earlier models based on two-layer random fields, which capture bidirectional dependencies between hidden topic aspects and observed inputs. This model can be viewed as an undirected counterpart of the two-layer directed models such as LDA for similar tasks, but bears significant difference in inference/learning cost tradeoffs, latent topic representations, and topic mixing mechanisms. In particular, our model facilitates efficient inference and robust topic mixing, and potentially provides high flexibilities in modeling the latent topic spaces. A contrastive divergence and a variational algorithm are derived for learning. We specialized our model to a dual-wing harmonium for captioned images, incorporating a multivariate Poisson for word-counts and a multivariate Gaussian for color histogram. We present empirical results on the applications of this model to classification, retrieval and image annotation on news video collections, and we report an extensive comparison with various extant models.",
"title": ""
}
] |
[
{
"docid": "5eb5c276cc9258cbc9acf8983e080feb",
"text": "In future IoT big-data management and knowledge discovery for large scale industrial automation application, the importance of industrial internet is increasing day by day. Several diversified technologies such as IoT (Internet of Things), computational intelligence, machine type communication, big-data, and sensor technology can be incorporated together to improve the data management and knowledge discovery efficiency of large scale automation applications. So in this work, we need to propose a Cognitive Oriented IoT Big-data Framework (COIB-framework) along with implementation architecture, IoT big-data layering architecture, and data organization and knowledge exploration subsystem for effective data management and knowledge discovery that is well-suited with the large scale industrial automation applications. The discussion and analysis show that the proposed framework and architectures create a reasonable solution in implementing IoT big-data based smart industrial applications.",
"title": ""
},
{
"docid": "5cf42a3c0b2f2a696e5a0cfda4f61f4a",
"text": "Institutional repositories are “digital collections that capture and preserve the intellectual output of a single or multi-university community” (Crow, 2002). While some repositories focus on particular subject domains, an institutional repository stores and makes accessible the educational, research and associated assets of an institution. Although most of the currently established institutional repositories are ‘e-prints’ repositories providing open access to the research outputs of a university or research institution, the content does not need to be limited to e-prints but could potentially include research data, learning material, image collections and many other different types of content.",
"title": ""
},
{
"docid": "5b2b0a3a857d06246cebb69e6e575b5f",
"text": "This paper develops a novel framework for feature extraction based on a combination of Linear Discriminant Analysis and cross-correlation. Multiple Electrocardiogram (ECG) signals, acquired from the human heart in different states such as in fear, during exercise, etc. are used for simulations. The ECG signals are composed of P, Q, R, S and T waves. They are characterized by several parameters and the important information relies on its HRV (Heart Rate Variability). Human interpretation of such signals requires experience and incorrect readings could result in potentially life threatening and even fatal consequences. Thus a proper interpretation of ECG signals is of paramount importance. This work focuses on designing a machine based classification algorithm for ECG signals. The proposed algorithm filters the ECG signals to reduce the effects of noise. It then uses the Fourier transform to transform the signals into the frequency domain for analysis. The frequency domain signal is then cross correlated with predefined classes of ECG signals, in a manner similar to pattern recognition. The correlated co-efficients generated are then thresholded. Moreover Linear Discriminant Analysis is also applied. Linear Discriminant Analysis makes classes of different multiple ECG signals. LDA makes classes on the basis of mean, global mean, mean subtraction, transpose, covariance, probability and frequencies. And also setting thresholds for the classes. The distributed space area is divided into regions corresponding to each of the classes. Each region associated with a class is defined by its thresholds. So it is useful in distinguishing ECG signals from each other. And pedantic details from LDA (Linear Discriminant Analysis) output graph can be easily taken in account rapidly. The output generated after applying cross-correlation and LDA displays either normal, fear, smoking or exercise ECG signal. As a result, the system can help clinically on large scale by providing reliable and accurate classification in a fast and computationally efficient manner. The doctors can use this system by gaining more efficiency. As very few errors are involved in it, showing accuracy between 90% 95%.",
"title": ""
},
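A rough sketch of the frequency-domain template-matching step described in the abstract above (filter, Fourier transform, cross-correlate with class templates, pick the best match). The LDA stage and the thresholding rules are omitted, and the synthetic templates below are purely illustrative.

```python
import numpy as np

def classify_ecg(signal, templates):
    """Toy sketch: match a filtered ECG segment against class templates.

    templates: dict mapping class name -> reference frequency-magnitude vector.
    The real system also thresholds the correlation scores and applies LDA;
    here we simply return the best-matching class and its score.
    """
    kernel = np.ones(5) / 5                            # crude moving-average noise reduction
    smoothed = np.convolve(signal, kernel, mode="same")
    spectrum = np.abs(np.fft.rfft(smoothed))
    spectrum /= np.linalg.norm(spectrum) + 1e-12

    scores = {}
    for name, ref in templates.items():
        ref = ref / (np.linalg.norm(ref) + 1e-12)
        n = min(len(ref), len(spectrum))
        scores[name] = float(np.dot(spectrum[:n], ref[:n]))   # normalized correlation
    best = max(scores, key=scores.get)
    return best, scores[best]

# Illustrative usage with synthetic "templates" (not real ECG classes)
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 360)
templates = {
    "normal": np.abs(np.fft.rfft(np.sin(2 * np.pi * 1.2 * t))),
    "exercise": np.abs(np.fft.rfft(np.sin(2 * np.pi * 2.5 * t))),
}
test = np.sin(2 * np.pi * 2.5 * t) + 0.05 * rng.normal(size=t.size)
print(classify_ecg(test, templates))   # expected best match: 'exercise'
```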
{
"docid": "f66c9aa537630fdbff62d8d49205123b",
"text": "This workshop will explore community based repositories for educational data and analytic tools that are used to connect researchers and reduce the barriers to data sharing. Leading innovators in the field, as well as attendees, will identify and report on bottlenecks that remain toward our goal of a unified repository. We will discuss these as well as possible solutions. We will present LearnSphere, an NSF funded system that supports collaborating on and sharing a wide variety of educational data, learning analytics methods, and visualizations while maintaining confidentiality. We will then have hands-on sessions in which attendees have the opportunity to apply existing learning analytics workflows to their choice of educational datasets in the repository (using a simple drag-and-drop interface), add their own learning analytics workflows (requires very basic coding experience), or both. Leaders and attendees will then jointly discuss the unique benefits as well as the limitations of these solutions. Our goal is to create building blocks to allow researchers to integrate their data and analysis methods with others, in order to advance the future of learning science.",
"title": ""
},
{
"docid": "9ecf20a9df11e008ddd01c9dea38b942",
"text": "A n interest rate swap is a contractual agreement between two parties to exchange a series of interest rate payments without exchanging the underlying debt. The interest rate swap represents one example of a general category of financial instruments known as derivative instruments. In the most general terms, a derivative instrument is an agreement whose value derives from some underlying market return, market price, or price index. The rapid growth of the market for swaps and other derivatives in recent years has spurred considerable controversy over the economic rationale for these instruments. Many observers have expressed alarm over the growth and size of the market, arguing that interest rate swaps and other derivative instruments threaten the stability of financial markets. Recently, such fears have led both legislators and bank regulators to consider measures to curb the growth of the market. Several legislators have begun to promote initiatives to create an entirely new regulatory agency to supervise derivatives trading activity. Underlying these initiatives is the premise that derivative instruments increase aggregate risk in the economy, either by encouraging speculation or by burdening firms with risks that management does not understand fully and is incapable of controlling.1 To be certain, much of this criticism is aimed at many of the more exotic derivative instruments that have begun to appear recently. Nevertheless, it is difficult, if not impossible, to appreciate the economic role of these more exotic instruments without an understanding of the role of the interest rate swap, the most basic of the new generation of financial derivatives.",
"title": ""
},
{
"docid": "d9e7f1461f687a4406f48e043c7a42e1",
"text": "This paper addresses the design of reactive real-time embedded systems. Such systems are often heterogeneous in implementation technologies and design styles, for example by combining hardware ASICs with embedded software. The concurrent design process for such embedded systems involves solving the specification, validation, and synthesis problems. We review the variety of approaches to these problems that have been taken.",
"title": ""
},
{
"docid": "de0761b7a43cafe7f30d6f8e518dd031",
"text": "The Internet of Things (IOT) has been denoted as a new wave of information and communication technology (ICT) advancements. The IOT is a multidisciplinary concept that encompasses a wide range of several technologies, application domains, device capabilities, and operational strategies, etc. The ongoing IOT research activities are directed towards the definition and design of standards and open architectures which is still have the issues requiring a global consensus before the final deployment. This paper gives over view about IOT technologies and applications related to agriculture with comparison of other survey papers and proposed a novel irrigation management system. Our main objective of this work is to for Farming where various new technologies to yield higher growth of the crops and their water supply. Automated control features with latest electronic technology using microcontroller which turns the pumping motor ON and OFF on detecting the dampness content of the earth and GSM phone line is proposed after measuring the temperature, humidity, and soil moisture.",
"title": ""
},
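The pump-control logic described in the abstract above amounts to a simple threshold (hysteresis) controller on soil-moisture readings. The sketch below shows only that decision logic under assumed threshold values; sensor access, the microcontroller, and the GSM reporting are left out.

```python
def pump_schedule(moisture_readings, dry=30.0, wet=45.0):
    """Hysteresis controller sketch: return ON/OFF pump decisions for a stream
    of soil-moisture readings (percent).  The dry/wet thresholds are assumed
    example values, not values from the paper."""
    pumping = False
    actions = []
    for m in moisture_readings:
        if not pumping and m < dry:      # soil too dry -> start irrigating
            pumping = True
        elif pumping and m > wet:        # soil wet enough -> stop irrigating
            pumping = False
        actions.append("ON" if pumping else "OFF")
    return actions

print(pump_schedule([50, 40, 28, 25, 33, 46, 48]))
# -> ['OFF', 'OFF', 'ON', 'ON', 'ON', 'OFF', 'OFF']
```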
{
"docid": "804322502b82ad321a0f97d6f83858ee",
"text": "Cheating is a real problem in the Internet of Things. The fundamental question that needs to be answered is how we can trust the validity of the data being generated in the first place. The problem, however, isnt inherent in whether or not to embrace the idea of an open platform and open-source software, but to establish a methodology to verify the trustworthiness and control any access. This paper focuses on building an access control model and system based on trust computing. This is a new field of access control techniques which includes Access Control, Trust Computing, Internet of Things, network attacks, and cheating technologies. Nevertheless, the target access control systems can be very complex to manage. This paper presents an overview of the existing work on trust computing, access control models and systems in IoT. It not only summarizes the latest research progress, but also provides an understanding of the limitations and open issues of the existing work. It is expected to provide useful guidelines for future research. Access Control, Trust Management, Internet of Things Today, our world is characterized by increasing connectivity. Things in this world are increasingly being connected. Smart phones have started an era of global proliferation and rapid consumerization of smart devices. It is predicted that the next disruptive transformation will be the concept of ‘Internet of Things’ [2]. From networked computers to smart devices, and to connected people, we are now moving towards connected ‘things’. Items of daily use are being turned into smart devices as various sensors are embedded in consumer and enterprise equipment, industrial and household appliances and personal devices. Pervasive connectivity mechanisms build bridges between our clothing and vehicles. Interaction among these things/devices can happen with little or no human intervention, thereby conjuring an enormous network, namely the Internet of Things (IoT). One of the primary goals behind IoT is to sense and send data over remote locations to enable detection of significant events, and take relevant actions sooner rather than later [25]. This technological trend is being pursued actively in all areas including the medical and health care fields. IoT provides opportunities to dramatically improve many medical applications, such as glucose level sensing, remote health monitoring (e.g. electrocardiogram, blood pressure, body temperature, and oxygen saturation monitoring, etc), rehabilitation systems, medication management, and ambient assisted living systems. The connectivity offered by IoT extends from humanto-machine to machine-to-machine communications. The interconnected devices collect all kinds of data about patients. Intelligent and ubiquitous services can then be built upon the useful information extracted from the data. During the data aggregation, fusion, and analysis processes, user ar X iv :1 61 0. 01 06 5v 1 [ cs .C R ] 4 O ct 2 01 6 2 Z. Yunpeng and X. Wu privacy and information security become major concerns for IoT services and applications. Security breaches will seriously compromise user acceptance and consumption on IoT applications in the medical and health care areas. The large scale of integration of heterogeneous devices in IoT poses a great challenge for the provision of standard security services. 
Many IoT devices are vulnerable to attacks since no high-level intelligence can be enabled on these passive devices [10], and security vulnerabilities in products uncovered by researchers have spread from cars [13] to garage doors [9] and to skateboards [35]. Technological utopianism surrounding IoT was very real until the emergence of the Volkswagen emissions scandal [4]. The German conglomerate admitted installing software in its diesel cars that recognizes and identifies patterns when vehicles are being tested for nitrogen oxide emissions and cuts them so that they fall within the limits prescribed by US regulators (004 g/km). Once the test is over, the car returns to its normal state: emitting nitrogen oxides (nitric oxide and nitrogen dioxide) at up to 35 times the US legal limit. The focus of IoT is not the thing itself, but the data generated by the devices and the value therein. What Volkswagen has brought to light goes far beyond protecting data and privacy, preventing intrusion, and keeping the integrity of the data. It casts doubts on the credibility of the IoT industry and its ability to secure data, reach agreement on standards, or indeed guarantee that consumer privacy rights are upheld. All in all, IoT holds tremendous potential to improve our health, make our environment safer, boost productivity and efficiency, and conserve both water and energy. IoT needs to improve its trustworthiness, however, before it can be used to solve challenging economic and environmental problems tied to our social lives. The fundamental question that needs to be answered is how we can trust the validity of the data being generated in the first place. If a node of IoT cheats, how does a system identify the cheating node and prevent a malicious attack from misbehaving nodes? This paper focuses on an access control mechanism that will only grant network access permission to trustworthy nodes. Embedding trust management into access control will improve the systems ability to discover untrustworthy participating nodes and prevent discriminatory attacks. There has been substantial research in this domain, most of which has been related to attacks like self-promotion and ballot stuffing where a node falsely promotes its importance and boosts the reputation of a malicious node (by providing good recommendations) to engage in a collusion-style attack. The traditional trust computation model is inefficient in differentiating a participant object in IoT, which is designed to win trust by cheating. In particular, the trust computation model will fail when a malicious node intelligently adjusts its behavior to hide its defect and obtain a higher trust value for its own gain. 1 Access Control Model and System IoT comprises the following three Access Control types Access Control in Internet of Things: A Survey 3 – Role-based access control (RBAC) – Credential-based access control (CBAC) — in order to access some resources and data, users require certain certificate information that falls into the following two types: 1. Attribute-Based access control (ABAC) : If a user has some special attributes, it is possible to access a particular resource or piece of data. 2. Capability-Based access control (Cap-BAC): A capability is a communicable, unforgeable rights markup, which corresponds to a value that uniquely specifies certain access rights to objects owned by subjects. – Trust-based access control (TBAC) In addition, there are also combinations of the aforementioned three methods. 
In order to improve the security of the system, some of the access control methods include encryption and key management mechanisms.",
"title": ""
},
{
"docid": "7a4c1a44ce754522bdb6481ebbede6e2",
"text": "There is an undeniable communication barrier between deaf people and people with normal hearing ability. Although innovations in sign language translation technology aim to tear down this communication barrier, the majority of existing sign language translation systems are either intrusive or constrained by resolution or ambient lighting conditions. Moreover, these existing systems can only perform single-sign ASL translation rather than sentence-level translation, making them much less useful in daily-life communication scenarios. In this work, we fill this critical gap by presenting DeepASL, a transformative deep learning-based sign language translation technology that enables ubiquitous and non-intrusive American Sign Language (ASL) translation at both word and sentence levels. DeepASL uses infrared light as its sensing mechanism to non-intrusively capture the ASL signs. It incorporates a novel hierarchical bidirectional deep recurrent neural network (HB-RNN) and a probabilistic framework based on Connectionist Temporal Classification (CTC) for word-level and sentence-level ASL translation respectively. To evaluate its performance, we have collected 7, 306 samples from 11 participants, covering 56 commonly used ASL words and 100 ASL sentences. DeepASL achieves an average 94.5% word-level translation accuracy and an average 8.2% word error rate on translating unseen ASL sentences. Given its promising performance, we believe DeepASL represents a significant step towards breaking the communication barrier between deaf people and hearing majority, and thus has the significant potential to fundamentally change deaf people's lives.",
"title": ""
},
{
"docid": "cefcf529227d2d29780b09bb87b2c66c",
"text": "This paper presents a simple method o f trajectory generation of robot manipulators based on an optimal control problem formulation. It was found recently that the jerk, the third derivative of position, of the desired trajectory, adversely affects the efficiency of the control algorithms and therefore should be minimized. Assuming joint position, velocity and acceleration t o be constrained a cost criterion containing jerk is considered. Initially. the simple environment without obstacles and constrained by the physical l imitat ions o f the jo in t angles only i s examined. For practical reasons, the free execution t ime has been used t o handle the velocity and acceleration constraints instead of the complete bounded state variable formulation. The problem o f minimizing the jerk along an arbitrary Cartesian trajectory i s formulated and given analytical solution, making this method useful for real world environments containing obstacles.",
"title": ""
},
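For intuition about jerk-minimizing trajectories, the classic rest-to-rest minimum-jerk solution is the quintic polynomial sketched below (zero boundary velocity and acceleration). This is the textbook unconstrained case, not the paper's constrained optimal-control formulation along arbitrary Cartesian paths; the joint values and duration are example numbers.

```python
import numpy as np

def min_jerk(q0, qf, T, n=100):
    """Classic rest-to-rest minimum-jerk joint trajectory.

    Returns time samples plus position, velocity, acceleration and jerk
    profiles for x(t) = q0 + (qf - q0) * (10*tau^3 - 15*tau^4 + 6*tau^5),
    tau = t / T, which has zero velocity/acceleration at both endpoints.
    """
    t = np.linspace(0.0, T, n)
    tau = t / T
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5
    ds = (30 * tau**2 - 60 * tau**3 + 30 * tau**4) / T
    dds = (60 * tau - 180 * tau**2 + 120 * tau**3) / T**2
    ddds = (60 - 360 * tau + 360 * tau**2) / T**3
    dq = qf - q0
    return t, q0 + dq * s, dq * ds, dq * dds, dq * ddds

t, q, v, a, j = min_jerk(q0=0.0, qf=1.2, T=2.0)
print(f"peak |velocity| = {np.max(np.abs(v)):.3f}, peak |jerk| = {np.max(np.abs(j)):.3f}")
```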
{
"docid": "ba3bf5f03e44e29a657d8035bb00535c",
"text": "Due to the broadcast nature of WiFi communication anyone with suitable hardware is able to monitor surrounding traffic. However, a WiFi device is able to listen to only one channel at any given time. The simple solution for capturing traffic across multiple channels involves channel hopping, which as a side effect reduces dwell time per channel. Hence monitoring with channel hopping does not produce a comprehensive view of the traffic across all channels at a given time.\n In this paper we present an inexpensive multi-channel WiFi capturing system (dubbed the wireless shark\") and evaluate its performance in terms of traffic cap- turing efficiency. Our results confirm and quantify the intuition that the performance is directly related to the number of WiFi adapters being used for listening. As a second contribution of the paper we use the wireless shark to observe the behavior of 14 different mobile devices, both in controlled and normal office environments. In our measurements, we focus on the probe traffic that the devices send when they attempt to discover available WiFi networks. Our results expose some distinct characteristics in various mobile devices' probing behavior.",
"title": ""
},
{
"docid": "eb3d82a85c8a9c3f815f0f62b6ae55cd",
"text": "In this paper, we explore and compare multiple solutions to the problem of data augmentation in image classification. Previous work has demonstrated the effectiveness of data augmentation through simple techniques, such as cropping, rotating, and flipping input images. We artificially constrain our access to data to a small subset of the ImageNet dataset, and compare each data augmentation technique in turn. One of the more successful data augmentations strategies is the traditional transformations mentioned above. We also experiment with GANs to generate images of different styles. Finally, we propose a method to allow a neural net to learn augmentations that best improve the classifier, which we call neural augmentation. We discuss the successes and shortcomings of this method on various datasets.",
"title": ""
},
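A hedged sketch of the "traditional transformations" baseline mentioned in the entry above (flip, 90-degree rotation, random crop). Parameter choices such as the 90% crop ratio are arbitrary, and square images are assumed so that rotations preserve shape.

```python
import numpy as np

def augment(img, rng):
    """Apply one random traditional augmentation (flip, 90-degree rotation, or
    random crop-and-pad) to an H x W x C image array (square images assumed)."""
    choice = rng.integers(3)
    if choice == 0:                                   # horizontal flip
        return img[:, ::-1, :]
    if choice == 1:                                   # rotate by a random multiple of 90 degrees
        return np.rot90(img, k=rng.integers(1, 4), axes=(0, 1))
    # random crop to 90% of each side, then pad back to the original size
    h, w, _ = img.shape
    ch, cw = int(0.9 * h), int(0.9 * w)
    top = rng.integers(0, h - ch + 1)
    left = rng.integers(0, w - cw + 1)
    out = np.zeros_like(img)
    out[:ch, :cw, :] = img[top:top + ch, left:left + cw, :]
    return out

rng = np.random.default_rng(0)
batch = rng.random((4, 32, 32, 3))
augmented = np.stack([augment(im, rng) for im in batch])
print(augmented.shape)   # (4, 32, 32, 3)
```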
{
"docid": "1cecdc7bdec511d05c81defa3186efb9",
"text": "Concurrency control in a database system involves the activity of controlling the relative order of conflicting operations, thereby ensuring database consistency. Multiversion concurrency control is timestamp based protocol that can be used to schedule the operations to maintain the consistency of the databases. In this protocol each write on a data item produces a new copy (or version) of that data item while retaining the old version. A systematic approach to specification is essential for the production of any substantial system description. Formal methods are mathematical technique that provide systematic approach for building and verification of model. We have used Event-B as a formal technique for construction of our model. Event-B provides complete framework by rigorous description of problem at abstract level and discharge of proof obligations arising due to consistency checking. In this paper, we outline formal construction of model of multiversion concurrency control scheme for database transactions using Event-B.",
"title": ""
},
{
"docid": "c19844950a3531d152408fd05904772b",
"text": "Processing sequential data of variable length is a major challenge in a wide range of applications, such as speech recognition, language modeling, generative image modeling and machine translation. Here, we address this challenge by proposing a novel recurrent neural network (RNN) architecture, the Fast-Slow RNN (FS-RNN). The FS-RNN incorporates the strengths of both multiscale RNNs and deep transition RNNs as it processes sequential data on different timescales and learns complex transition functions from one time step to the next. We evaluate the FS-RNN on two character level language modeling data sets, Penn Treebank and Hutter Prize Wikipedia, where we improve state of the art results to 1.19 and 1.25 bits-per-character (BPC), respectively. In addition, an ensemble of two FS-RNNs achieves 1.20 BPC on Hutter Prize Wikipedia outperforming the best known compression algorithm with respect to the BPC measure. We also present an empirical investigation of the learning and network dynamics of the FS-RNN, which explains the improved performance compared to other RNN architectures. Our approach is general as any kind of RNN cell is a possible building block for the FS-RNN architecture, and thus can be flexibly applied to different tasks.",
"title": ""
},
{
"docid": "3ff3c453f8e49424d47ae7b360bdcaed",
"text": "Most existing sequence labelling models rely on a fixed decomposition of a target sequence into a sequence of basic units. These methods suffer from two major drawbacks: 1) the set of basic units is fixed, such as the set of words, characters or phonemes in speech recognition, and 2) the decomposition of target sequences is fixed. These drawbacks usually result in sub-optimal performance of modeling sequences. In this paper, we extend the popular CTC loss criterion to alleviate these limitations, and propose a new loss function called Gram-CTC. While preserving the advantages of CTC, Gram-CTC automatically learns the best set of basic units (grams), as well as the most suitable decomposition of target sequences. Unlike CTC, Gram-CTC allows the model to output variable number of characters at each time step, which enables the model to capture longer term dependency and improves the computational efficiency. We demonstrate that the proposed Gram-CTC improves CTC in terms of both performance and efficiency on the large vocabulary speech recognition task at multiple scales of data, and that with Gram-CTC we can outperform the state-of-the-art on a standard speech benchmark.",
"title": ""
},
{
"docid": "b466803c9a9be5d38171ece8d207365e",
"text": "A large number of saliency models, each based on a different hypothesis, have been proposed over the past 20 years. In practice, while subscribing to one hypothesis or computational principle makes a model that performs well on some types of images, it hinders the general performance of a model on arbitrary images and large-scale data sets. One natural approach to improve overall saliency detection accuracy would then be fusing different types of models. In this paper, inspired by the success of late-fusion strategies in semantic analysis and multi-modal biometrics, we propose to fuse the state-of-the-art saliency models at the score level in a para-boosting learning fashion. First, saliency maps generated by several models are used as confidence scores. Then, these scores are fed into our para-boosting learner (i.e., support vector machine, adaptive boosting, or probability density estimator) to generate the final saliency map. In order to explore the strength of para-boosting learners, traditional transformation-based fusion strategies, such as Sum, Min, and Max, are also explored and compared in this paper. To further reduce the computation cost of fusing too many models, only a few of them are considered in the next step. Experimental results show that score-level fusion outperforms each individual model and can further reduce the performance gap between the current models and the human inter-observer model.",
"title": ""
},
{
"docid": "80e5ae477832764b1b1bae133b0ed66d",
"text": "Speech emotion recognition is a challenging problem partly because it is unclear what features are effective for the task. In this paper we propose to utilize deep neural networks (DNNs) to extract high level features from raw data and show that they are effective for speech emotion recognition. We first produce an emotion state probability distribution for each speech segment using DNNs. We then construct utterance-level features from segment-level probability distributions. These utterancelevel features are then fed into an extreme learning machine (ELM), a special simple and efficient single-hidden-layer neural network, to identify utterance-level emotions. The experimental results demonstrate that the proposed approach effectively learns emotional information from low-level features and leads to 20% relative accuracy improvement compared to the stateof-the-art approaches.",
"title": ""
},
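A small sketch of the utterance-level feature construction step described above: per-class statistics (max, min, mean, and the fraction of segments above a threshold) computed from segment-level emotion posteriors. The exact statistics and threshold used by the authors may differ, and the DNN and ELM stages are not shown.

```python
import numpy as np

def utterance_features(segment_probs, thresh=0.2):
    """Build an utterance-level feature vector from segment-level emotion
    posteriors (shape: n_segments x n_emotions): per-class max, min, mean and
    the fraction of segments whose posterior exceeds a threshold."""
    mx = segment_probs.max(axis=0)
    mn = segment_probs.min(axis=0)
    mean = segment_probs.mean(axis=0)
    frac = (segment_probs > thresh).mean(axis=0)
    return np.concatenate([mx, mn, mean, frac])

rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(4), size=50)   # toy posteriors: 50 segments, 4 emotion classes
print(utterance_features(probs).shape)        # (16,)
```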
{
"docid": "15886d83be78940609c697b30eb73b13",
"text": "Why is corruption—the misuse of public office for private gain— perceived to be more widespread in some countries than others? Different theories associate this with particular historical and cultural traditions, levels of economic development, political institutions, and government policies. This article analyzes several indexes of “perceived corruption” compiled from business risk surveys for the 1980s and 1990s. Six arguments find support. Countries with Protestant traditions, histories of British rule, more developed economies, and (probably) higher imports were less \"corrupt\". Federal states were more \"corrupt\". While the current degree of democracy was not significant, long exposure to democracy predicted lower corruption.",
"title": ""
},
{
"docid": "b83a0341f2ead9c72eda4217e0f31ea2",
"text": "Time-series classification has attracted considerable research attention due to the various domains where time-series data are observed, ranging from medicine to econometrics. Traditionally, the focus of time-series classification has been on short time-series data composed of a few patterns exhibiting variabilities, while recently there have been attempts to focus on longer series composed of multiple local patrepeating with an arbitrary irregularity. The primary contribution of this paper relies on presenting a method which can detect local patterns in repetitive time-series via fitting local polynomial functions of a specified degree. We capture the repetitiveness degrees of time-series datasets via a new measure. Furthermore, our method approximates local polynomials in linear time and ensures an overall linear running time complexity. The coefficients of the polynomial functions are converted to symbolic words via equi-area discretizations of the coefficients' distributions. The symbolic polynomial words enable the detection of similar local patterns by assigning the same word to similar polynomials. Moreover, a histogram of the frequencies of the words is constructed from each time-series' bag of words. Each row of the histogram enables a new representation for the series and symbolizes the occurrence of local patterns and their frequencies. In an experimental comparison against state-of-the-art baselines on repetitive datasets, our method demonstrates significant improvements in terms of prediction accuracy.",
"title": ""
}
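An illustrative sketch of the symbolic-polynomial-word idea from the entry above: fit a low-degree polynomial in each sliding window, discretize the coefficients into an alphabet, and histogram the resulting words. Equal-width bins are used here for brevity, whereas the paper uses equi-area discretization; the window, step, degree, and alphabet sizes are arbitrary.

```python
import numpy as np
from collections import Counter

def polynomial_words(series, win=20, step=10, degree=2, alphabet=4):
    """Slide a window over the series, fit a degree-`degree` polynomial in each
    window, discretize each coefficient dimension into `alphabet` bins and emit
    one symbolic word per window; return the histogram (bag of words)."""
    coefs = []
    x = np.arange(win)
    for start in range(0, len(series) - win + 1, step):
        coefs.append(np.polyfit(x, series[start:start + win], degree))
    coefs = np.array(coefs)

    # equal-width bin edges per coefficient dimension (paper: equi-area)
    bins = [np.linspace(c.min(), c.max(), alphabet + 1)[1:-1] for c in coefs.T]
    words = []
    for row in coefs:
        symbols = [int(np.digitize(v, b)) for v, b in zip(row, bins)]
        words.append("".join(str(s) for s in symbols))
    return Counter(words)

t = np.linspace(0, 8 * np.pi, 400)
series = np.sin(t) + 0.1 * np.random.default_rng(0).normal(size=t.size)
print(polynomial_words(series).most_common(5))
```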
] |
scidocsrr
|
54fd15410274577ba4b9cacddba0060e
|
Compare, Compress and Propagate: Enhancing Neural Architectures with Alignment Factorization for Natural Language Inference
|
[
{
"docid": "ab525baa5aef2bedd87307aa76736045",
"text": "For years, recursive neural networks (RvNNs) have shown to be suitable for representing text into fixed-length vectors and achieved good performance on several natural language processing tasks. However, the main drawback of RvNN is that it requires explicit tree structure (e.g. parse tree), which makes data preparation and model implementation hard. In this paper, we propose a novel tree-structured long short-term memory (Tree-LSTM) architecture that efficiently learns how to compose task-specific tree structures only from plain text data. To achieve this property, our model uses Straight-Through (ST) Gumbel-Softmax estimator to decide the parent node among candidates and to calculate gradients of the discrete decision. We evaluate the proposed model on natural language interface and sentiment analysis and show that our model outperforms or at least comparable to previous Tree-LSTM-based works. Especially in the natural language interface task, our model establishes the new state-of-the-art accuracy of 85.4%. We also find that our model converges significantly faster and needs less memory than other models of complex structures.",
"title": ""
},
{
"docid": "0201a5f0da2430ec392284938d4c8833",
"text": "Natural language sentence matching is a fundamental technology for a variety of tasks. Previous approaches either match sentences from a single direction or only apply single granular (wordby-word or sentence-by-sentence) matching. In this work, we propose a bilateral multi-perspective matching (BiMPM) model. Given two sentences P and Q, our model first encodes them with a BiLSTM encoder. Next, we match the two encoded sentences in two directions P against Q and Q against P . In each matching direction, each time step of one sentence is matched against all timesteps of the other sentence from multiple perspectives. Then, another BiLSTM layer is utilized to aggregate the matching results into a fixed-length matching vector. Finally, based on the matching vector, a decision is made through a fully connected layer. We evaluate our model on three tasks: paraphrase identification, natural language inference and answer sentence selection. Experimental results on standard benchmark datasets show that our model achieves the state-of-the-art performance on all tasks.",
"title": ""
},
{
"docid": "b4ab51818d868b2f9796540c71a7bd17",
"text": "We propose a simple neural architecture for natural language inference. Our approach uses attention to decompose the problem into subproblems that can be solved separately, thus making it trivially parallelizable. On the Stanford Natural Language Inference (SNLI) dataset, we obtain state-of-the-art results with almost an order of magnitude fewer parameters than previous work and without relying on any word-order information. Adding intra-sentence attention that takes a minimum amount of order into account yields further improvements.",
"title": ""
}
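The attend-compare-aggregate data flow of the decomposable attention model above can be shown in a few lines. The sketch below substitutes identity maps for the feed-forward networks F, G, and H, so it is a structural skeleton rather than the published model; the embedding dimensions are toy values.

```python
import numpy as np

def softmax(x, axis=-1):
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attend_compare_aggregate(A, B):
    """Skeleton of the decomposable-attention pipeline on word vectors
    A (len_a x d) and B (len_b x d).  The published model passes the vectors
    through small feed-forward networks F, G, H; identity maps are used here
    so the attend -> compare -> aggregate data flow stays visible."""
    E = A @ B.T                              # attend: unnormalized alignment scores
    beta = softmax(E, axis=1) @ B            # soft sub-phrase of B aligned to each word of A
    alpha = softmax(E.T, axis=1) @ A         # soft sub-phrase of A aligned to each word of B

    v1 = np.concatenate([A, beta], axis=1)   # compare (G would act on these pairs)
    v2 = np.concatenate([B, alpha], axis=1)

    return np.concatenate([v1.sum(axis=0), v2.sum(axis=0)])   # aggregate -> classifier input

rng = np.random.default_rng(0)
premise, hypothesis = rng.normal(size=(7, 50)), rng.normal(size=(5, 50))
print(attend_compare_aggregate(premise, hypothesis).shape)     # (200,)
```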
] |
[
{
"docid": "b6d5849d7950438716e31880860f835c",
"text": "The promotion of reflective capacity within the teaching of clinical skills and professionalism is posited as fostering the development of competent health practitioners. An innovative approach combines structured reflective writing by medical students and individualized faculty feedback to those students to augment instruction on reflective practice. A course for preclinical students at the Warren Alpert Medical School of Brown University, entitled \"Doctoring,\" combined reflective writing assignments (field notes) with instruction in clinical skills and professionalism and early clinical exposure in a small-group format. Students generated multiple e-mail field notes in response to structured questions on course topics. Individualized feedback from a physician-behavioral scientist dyad supported the students' reflective process by fostering critical-thinking skills, highlighting appreciation of the affective domain, and providing concrete recommendations. The development and implementation of this innovation are presented, as is an analysis of the written evaluative comments of students taking the Doctoring course. Theoretical and clinical rationales for features of the innovation and supporting evidence of their effectiveness are presented. Qualitative analyses of students' evaluations yielded four themes of beneficial contributions to their learning experience: promoting deeper and more purposeful reflection, the value of (interdisciplinary) feedback, the enhancement of group process, and personal and professional development. Evaluation of the innovation was the fifth theme; some limitations are described, and suggestions for improvement are provided. Issues of the quality of the educational paradigm, generalizability, and sustainability are addressed.",
"title": ""
},
{
"docid": "88bd6fe890ed385ae60ace44ab71db3e",
"text": "Background: While concerns about adverse health outcomes of unintended pregnancies for the mother have been expressed, there has only been limited research on the outcomes of unintended pregnancies. This review provides an overview of antecedents and maternal health outcomes of unintended pregnancies (UIPs) carried to term live",
"title": ""
},
{
"docid": "919ee3a62e28c1915d0be556a2723688",
"text": "Bayesian data analysis includes but is not limited to Bayesian inference (Gelman et al., 2003; Kerman, 2006a). Here, we take Bayesian inference to refer to posterior inference (typically, the simulation of random draws from the posterior distribution) given a fixed model and data. Bayesian data analysis takes Bayesian inference as a starting point but also includes fitting a model to different datasets, altering a model, performing inferential and predictive summaries (including prior or posterior predictive checks), and validation of the software used to fit the model. The most general programs currently available for Bayesian inference are WinBUGS (BUGS Project, 2004) and OpenBugs, which can be accessed from R using the packages R2WinBUGS (Sturtz et al., 2005) and BRugs. In addition, various R packages exist that directly fit particular Bayesian models (e.g. MCMCPack, Martin and Quinn (2005)). In this note, we describe our own entry in the “inference engine” sweepstakes but, perhaps more importantly, describe the ongoing development of some R packages that perform other aspects of Bayesian data analysis.",
"title": ""
},
{
"docid": "abfc35847be162ff8744c6e5d8d67d74",
"text": "With the rapid growth of the amount of information, cloud computing servers need to process and analyze large amounts of high-dimensional and unstructured data timely and accurately. This usually requires many query operations. Due to simplicity and ease of use, cuckoo hashing schemes have been widely used in real-world cloud-related applications. However, due to the potential hash collisions, the cuckoo hashing suffers from endless loops and high insertion latency, even high risks of re-construction of entire hash table. In order to address these problems, we propose a cost-efficient cuckoo hashing scheme, called MinCounter. The idea behind MinCounter is to alleviate the occurrence of endless loops in the data insertion by selecting unbusy kicking-out routes. MinCounter selects the “cold” (infrequently accessed), rather than random, buckets to handle hash collisions. We further improve the concurrency of the MinCounter scheme to pursue higher performance and adapt to concurrent applications. MinCounter has the salient features of offering efficient insertion and query services and delivering high performance of cloud servers, as well as enhancing the experiences for cloud users. We have implemented MinCounter in a large-scale cloud testbed and examined the performance by using three real-world traces. Extensive experimental results demonstrate the efficacy and efficiency of MinCounter.",
"title": ""
},
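A toy sketch of the MinCounter idea as described in the abstract above: a two-choice cuckoo table that, on a collision, evicts from the less frequently accessed ("cold") candidate bucket instead of a random one. The hashing scheme, table size, and kick limit are simplifications, not the paper's implementation.

```python
class MinCounterCuckoo:
    """Two-choice cuckoo hash table that kicks out the item in the least
    frequently accessed ("cold") candidate bucket rather than a random one."""

    def __init__(self, capacity=101, max_kicks=50):
        self.cap = capacity
        self.max_kicks = max_kicks
        self.slots = [None] * capacity          # each slot: (key, value) or None
        self.counters = [0] * capacity          # per-bucket access counters

    def _buckets(self, key):
        return hash(key) % self.cap, hash((key, "salt")) % self.cap

    def get(self, key):
        for b in self._buckets(key):
            if self.slots[b] and self.slots[b][0] == key:
                self.counters[b] += 1           # record the access ("hot" bucket)
                return self.slots[b][1]
        return None

    def put(self, key, value):
        item = (key, value)
        for _ in range(self.max_kicks):
            b1, b2 = self._buckets(item[0])
            for b in (b1, b2):
                if self.slots[b] is None or self.slots[b][0] == item[0]:
                    self.slots[b] = item
                    return True
            # both candidates occupied: evict from the colder bucket
            victim = b1 if self.counters[b1] <= self.counters[b2] else b2
            self.slots[victim], item = item, self.slots[victim]
        return False   # insertion failed; a real implementation would resize

table = MinCounterCuckoo()
table.put("flow:10.0.0.1", 42)
print(table.get("flow:10.0.0.1"))   # 42
```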
{
"docid": "8ea08d331deff938cddbe10f16a25b9d",
"text": "High-throughput RNA sequencing is an increasingly accessible method for studying gene structure and activity on a genome-wide scale. A critical step in RNA-seq data analysis is the alignment of partial transcript reads to a reference genome sequence. To assess the performance of current mapping software, we invited developers of RNA-seq aligners to process four large human and mouse RNA-seq data sets. In total, we compared 26 mapping protocols based on 11 programs and pipelines and found major performance differences between methods on numerous benchmarks, including alignment yield, basewise accuracy, mismatch and gap placement, exon junction discovery and suitability of alignments for transcript reconstruction. We observed concordant results on real and simulated RNA-seq data, confirming the relevance of the metrics employed. Future developments in RNA-seq alignment methods would benefit from improved placement of multimapped reads, balanced utilization of existing gene annotation and a reduced false discovery rate for splice junctions.",
"title": ""
},
{
"docid": "cb0368397f1d8590516fe6f6d4296225",
"text": "In this paper, an ultra-wideband circular printed monopole antenna is presented. The antenna performance will be studied using the two-readymade software package IE3D and CST. The regular monopole antenna will be tuning by adding two C-shaped conductors near the antenna feeder to control the rejection band inside the WLAN frequency range (5.15–5.825GHz). The simulation results using the two software packages will be compared to each other. An empirical formula for the variation of the rejection band frequency against the mean length of C-shaped conductors is derived. The effect of the variation of the C-shaped conductor mean lengths on the center frequency of the rejection band will be discussed. The WLAN band rejection will be controlled by using two groups of PIN diodes. The circuit will be designed on RT/Duriod substrate (εr=2.2, h=1.57 mm, tanδ = 0.00019), where the simulations results using IE3D and CST are in good agreement with measurement results.",
"title": ""
},
{
"docid": "3da4b3ec70a371b4748e552a5752305c",
"text": "In big cities, taxi service is imbalanced. In some areas, passengers wait too long for a taxi, while in others, many taxis roam without passengers. Knowledge of where a taxi will become available can help us solve the taxi demand imbalance problem. In this paper, we employ a holistic approach to predict taxi demand at high spatial resolution. We showcase our techniques using two real-world data sets, yellow cabs and Uber trips in New York City, and perform an evaluation over 9,940 building blocks in Manhattan. Our approach consists of two key steps. First, we use entropy and the temporal correlation of human mobility to measure the demand uncertainty at the building block level. Second, to identify which predictive algorithm can approach the theoretical maximum predictability, we implement and compare three predictors: the Markov predictor (a probability-based predictive algorithm), the Lempel-Ziv-Welch predictor (a sequence-based predictive algorithm), and the Neural Network predictor (a predictive algorithm that uses machine learning). The results show that predictability varies by building block and, on average, the theoretical maximum predictability can be as high as 83%. The performance of the predictors also vary: the Neural Network predictor provides better accuracy for blocks with low predictability, and the Markov predictor provides better accuracy for blocks with high predictability. In blocks with high maximum predictability, the Markov predictor is able to predict the taxi demand with an 89% accuracy, 11% better than the Neural Network predictor, while requiring only 0.03% computation time. These findings indicate that the maximum predictability can be a good metric for selecting prediction algorithms.",
"title": ""
},
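Of the three predictors compared above, the Markov predictor is the simplest to sketch: discretize demand into levels and learn a first-order transition table. The level count, smoothing, and discretization rule below are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

class MarkovDemandPredictor:
    """First-order Markov predictor over discretized taxi-demand levels for one
    building block: learn P(next level | current level) from the history and
    predict the most probable next level."""

    def __init__(self, n_levels=5):
        self.n = n_levels
        self.counts = np.ones((n_levels, n_levels))   # Laplace-smoothed transition counts

    def discretize(self, demand, max_demand):
        # map a raw demand count to one of n equal-width levels
        return min(int(demand / (max_demand / self.n)), self.n - 1)

    def fit(self, level_sequence):
        for cur, nxt in zip(level_sequence[:-1], level_sequence[1:]):
            self.counts[cur, nxt] += 1

    def predict(self, current_level):
        return int(np.argmax(self.counts[current_level]))

# hourly demand for one block, already discretized into 5 levels (toy data)
history = [0, 1, 1, 2, 4, 4, 3, 2, 1, 0, 0, 1, 2, 4, 4, 3]
model = MarkovDemandPredictor()
model.fit(history)
print(model.predict(current_level=4))   # most likely level in the next hour
```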
{
"docid": "ceacf326d627d1bea50da2e265cb4829",
"text": "The threat of malicious attacks against the security of the Smart Grid infrastructure cannot be overlooked. The ever-expanding nature of smart grid user base implies that a larger set of vulnerabilities are exploitable by the adversary class to launch malicious attacks. Extensive research has been conducted to identify various threat types against the smart grid, and to propose counter-measures against these. Work has also been done to measure the significance of threats and how attacks can be perpetrated in a smart grid environment. Through this paper, we categorize these smart grid threats, and how they can transpire into attacks. In particular, we provide five different categories of attack types, and also perform an analysis of the various countermeasures thereof proposed in the literature. ",
"title": ""
},
{
"docid": "85005da88291fd72616073d434357bb5",
"text": "INTRODUCTION\nIt has been suggested that the application of penile-extender devices increases penile length and circumference. However, there are a few scientific studies in this field.\n\n\nAIMS\nThe aim of this study was to assess the efficacy of a penile-extender (Golden Erect(®) , Ronas Tajhiz Teb, Tehran, Iran) in increasing penile size.\n\n\nMETHODS\nThis prospective study was performed on subjects complaining about \"short penis\" who were presented to our clinic between September 15, 2008 and December 15, 2008. After measuring the penile length in flaccid and stretched forms and penile circumference, patients were instructed to wear Golden Erect(®) , 4-6 hours per day during the first 2 weeks and then 9 hours per day until the end of the third month. The subjects were also trained how to increase the force of the device during determined intervals. The patients were visited at the end of the first and third months, and penile length and circumference were measured and compared with baseline.\n\n\nMAIN OUTCOME MEASURES\nThe primary end point of the study was changes in flaccid and stretched penile lengths compared with the baseline size during the 3 months follow-up.\n\n\nRESULTS\nTwenty-three cases with a mean age of 26.5 ± 8.1 years entered the study. The mean flaccid penile length increased from 8.8 ± 1.2 cm to 10.1 ± 1.2 cm and 10.5 ± 1.2 cm, respectively, in the first and third months of follow-up, which was statistically significant (P < 0.05). Mean stretched penile length also significantly increased from 11.5 ± 1.0 cm to, respectively, 12.4 ± 1.3 cm and 13.2 ± 1.4 cm during the first and second follow-up (P < 0.05). No significant difference was found regarding proximal penile girth. However, it was not the same regarding the circumference of the glans penis (9.3 ± 0.86 cm vs. 8.8 ± 0.66 cm, P < 0.05).\n\n\nCONCLUSION\nOur findings supported the efficacy of the device in increasing penile length. Our result also suggested the possibility of glans penis girth enhancement using penile extender. Performing more studies is recommended.",
"title": ""
},
{
"docid": "e28ba2ea209537cf9867428e3cf7fdd7",
"text": "People take their mobile phones everywhere they go. In Saudi Arabia, the mobile penetration is very high and students use their phones for different reasons in the classroom. The use of mobile devices in classroom triggers an alert of the impact it might have on students’ learning. This study investigates the association between the use of mobile phones during classroom and the learners’ performance and satisfaction. Results showed that students get distracted, and that this diversion of their attention is reflected in their academic success. However, this is not applicable for all. Some students received high scores even though they declared using mobile phones in classroom, which triggers a request for a deeper study.",
"title": ""
},
{
"docid": "737bc68c51d2ae7665c47a060da3e25f",
"text": "Self-regulatory strategies of goal setting and goal striving are analyzed in three experiments. Experiment 1 uses fantasy realization theory (Oettingen, in: J. Brandstätter, R.M. Lerner (Eds.), Action and Self Development: Theory and Research through the Life Span, Sage Publications Inc, Thousand Oaks, CA, 1999, pp. 315-342) to analyze the self-regulatory processes of turning free fantasies about a desired future into binding goals. School children 8-12 years of age who had to mentally elaborate a desired academic future as well as present reality standing in its way, formed stronger goal commitments than participants solely indulging in the desired future or merely dwelling on present reality (Experiment 1). Effective implementation of set goals is addressed in the second and third experiments (Gollwitzer, Am. Psychol. 54 (1999) 493-503). Adolescents who had to furnish a set educational goal with relevant implementation intentions (specifying where, when, and how they would start goal pursuit) were comparatively more successful in meeting the goal (Experiment 2). Linking anticipated si tuations with goal-directed behaviors (i.e., if-then plans) rather than the mere thinking about good opportunities to act makes implementation intentions facilitate action initiation (Experiment 3). ©2001 Elsevier Science Ltd. All rights reserved. _____________________________________________________________________________________ Successful goal attainment demands completing two different tasks. People have to first turn their desires into binding goals, and second they have to attain the set goal. Both tasks benefit from selfregulatory strategies. In this article we describe a series of experiments with children, adolescents, and young adults that investigate self-regulatory processes facilitating effective goal setting and successful goal striving. The experimental studies investigate (1) different routes to goal setting depending on how",
"title": ""
},
{
"docid": "e6332297afd2883e41888be243b27d1d",
"text": "The 2018 Nucleic Acids Research Database Issue contains 181 papers spanning molecular biology. Among them, 82 are new and 84 are updates describing resources that appeared in the Issue previously. The remaining 15 cover databases most recently published elsewhere. Databases in the area of nucleic acids include 3DIV for visualisation of data on genome 3D structure and RNArchitecture, a hierarchical classification of RNA families. Protein databases include the established SMART, ELM and MEROPS while GPCRdb and the newcomer STCRDab cover families of biomedical interest. In the area of metabolism, HMDB and Reactome both report new features while PULDB appears in NAR for the first time. This issue also contains reports on genomics resources including Ensembl, the UCSC Genome Browser and ENCODE. Update papers from the IUPHAR/BPS Guide to Pharmacology and DrugBank are highlights of the drug and drug target section while a number of proteomics databases including proteomicsDB are also covered. The entire Database Issue is freely available online on the Nucleic Acids Research website (https://academic.oup.com/nar). The NAR online Molecular Biology Database Collection has been updated, reviewing 138 entries, adding 88 new resources and eliminating 47 discontinued URLs, bringing the current total to 1737 databases. It is available at http://www.oxfordjournals.org/nar/database/c/.",
"title": ""
},
{
"docid": "092239f41a6e216411174e5ed9dceee2",
"text": "In this paper, we propose a simple but effective specular highlight removal method using a single input image. Our method is based on a key observation the maximum fraction of the diffuse color component (so called maximum diffuse chromaticity in the literature) in local patches in color images changes smoothly. Using this property, we can estimate the maximum diffuse chromaticity values of the specular pixels by directly applying low-pass filter to the maximum fraction of the color components of the original image, such that the maximum diffuse chromaticity values can be propagated from the diffuse pixels to the specular pixels. The diffuse color at each pixel can then be computed as a nonlinear function of the estimated maximum diffuse chromaticity. Our method can be directly extended for multi-color surfaces if edge-preserving filters (e.g., bilateral filter) are used such that the smoothing can be guided by the maximum diffuse chromaticity. But maximum diffuse chromaticity is to be estimated. We thus present an approximation and demonstrate its effectiveness. Recent development in fast bilateral filtering techniques enables our method to run over 200× faster than the state-of-the-art on a standard CPU and differentiates our method from previous work.",
"title": ""
},
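The passage above compresses the whole pipeline into a few sentences, so a small illustrative sketch may help make it concrete. This is a minimal NumPy/OpenCV rendering of the idea, not the authors' code: the per-pixel maximum chromaticity is smoothed with a plain Gaussian low-pass filter standing in for the paper's edge-preserving/bilateral filtering, and the diffuse layer is recovered under an assumed white-illumination dichromatic model. Function names, parameter values, and the file path are illustrative.

```python
# Sketch of the smoothing-based specular highlight removal idea described above.
import cv2
import numpy as np

def remove_highlights(img_bgr, sigma=5):
    """Estimate a diffuse-only image from a single BGR image
    (dichromatic model, white illumination assumed)."""
    img = img_bgr.astype(np.float64)
    total = img.sum(axis=2) + 1e-6                 # per-pixel sum of channels
    max_chroma = img.max(axis=2) / total           # per-pixel maximum chromaticity

    # Propagate the maximum *diffuse* chromaticity from diffuse to specular pixels
    # by low-pass filtering (the paper uses edge-preserving filtering instead).
    est = cv2.GaussianBlur(max_chroma, (0, 0), sigma)
    est = np.maximum(est, 1.0 / 3.0 + 1e-3)        # keep the denominator away from zero

    # With white illumination the specular term adds equally to all channels, so
    # from I_max = Lambda_max * sum(D) + S and sum(I) = sum(D) + 3S it follows:
    specular = (img.max(axis=2) - est * total) / (1.0 - 3.0 * est)
    specular = np.clip(specular, 0, None)          # purely achromatic pixels stay ambiguous
    diffuse = np.clip(img - specular[..., None], 0, 255)
    return diffuse.astype(np.uint8)

# Example usage (path is illustrative):
# cv2.imwrite("diffuse.png", remove_highlights(cv2.imread("input.png")))
```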
{
"docid": "4ca4ccd53064c7a9189fef3e801612a0",
"text": "workflows, data warehousing, business intelligence Process design and automation technologies are being increasingly used by both traditional and newly-formed, Internet-based enterprises in order to improve the quality and efficiency of their administrative and production processes, to manage e-commerce transactions, and to rapidly and reliably deliver services to businesses and individual customers.",
"title": ""
},
{
"docid": "8641df504b9f8c55c1951294e47875e4",
"text": "3D ultrasound (US) acquisition acquires volumetric images, thus alleviating a classical US imaging bottleneck that requires a highly-trained sonographer to operate the US probe. However, this opportunity has not been explored in practice, since 3D US machines are only suitable for hospital usage in terms of cost, size and power requirements. In this work we propose the first fully-digital, single-chip 3D US imager on FPGA. The proposed design is a complete processing pipeline that includes pre-processing, image reconstruction, and post-processing. It supports up to 1024 input channels, which matches or exceeds state of the art, in an unprecedented estimated power budget of 6.1 W. The imager exploits a highly scalable architecture which can be either downscaled for 2D imaging, or further upscaled on a larger FPGA. Our platform supports both real-time inputs over an optical cable, or test data feeds sent by a laptop running Matlab and custom tools over an Ethernet connection. Additionally, the design allows HDMI video output on a screen.",
"title": ""
},
{
"docid": "e315a7e8e83c4130f9a53dec21598ae6",
"text": "Modern techniques for data analysis and machine learning are so called kernel methods. The most famous and successful one is represented by the support vector machine (SVM) for classification or regression tasks. Further examples are kernel principal component analysis for feature extraction or other linear classifiers like the kernel perceptron. The fundamental ingredient in these methods is the choice of a kernel function, which computes a similarity measure between two input objects. For good generalization abilities of a learning algorithm it is indispensable to incorporate problem-specific a-priori knowledge into the learning process. The kernel function is an important element for this. This thesis focusses on a certain kind of a-priori knowledge namely transformation knowledge. This comprises explicit knowledge of pattern variations that do not or only slightly change the pattern’s inherent meaning e.g. rigid movements of 2D/3D objects or transformations like slight stretching, shifting, rotation of characters in optical character recognition etc. Several methods for incorporating such knowledge in kernel functions are presented and investigated. 1. Invariant distance substitution kernels (IDS-kernels): In many practical questions the transformations are implicitly captured by sophisticated distance measures between objects. Examples are nonlinear deformation models between images. Here an explicit parameterization would require an arbitrary number of parameters. Such distances can be incorporated in distanceand inner-product-based kernels. 2. Tangent distance kernels (TD-kernels): Specific instances of IDS-kernels are investigated in more detail as these can be efficiently computed. We assume differentiable transformations of the patterns. Given such knowledge, one can construct linear approximations of the transformation manifolds and use these efficiently for kernel construction by suitable distance functions. 3. Transformation integration kernels (TI-kernels): The technique of integration over transformation groups for feature extraction can be extended to kernel functions and more general group, non-group, discrete or continuous transformations in a suitable way. Theoretically, these approaches differ in the way the transformations are represented and in the adjustability of the transformation extent. More fundamentally, kernels from category 3 turn out to be positive definite, kernels of types 1 and 2 are not positive definite, which is generally required for being usable in kernel methods. This is the",
"title": ""
},
{
"docid": "24dce115334261ff4561ffd3b40c4fa9",
"text": "Facial expressions play a major role in psychiatric diagnosis, monitoring and treatment adjustment. We recorded 34 schizophrenia patients and matched controls during a clinical interview, and extracted the activity level of 23 facial Action Units (AUs), using 3D structured light cameras and dedicated software. By defining dynamic and intensity AUs activation characteristic features, we found evidence for blunted affect and reduced positive emotional expressions in patients. Further, we designed learning algorithms which achieved up to 85% correct schizophrenia classification rate, and significant correlation with negative symptoms severity. Our results emphasize the clinical importance of facial dynamics, and illustrate the possible advantages of employing affective computing tools in clinical settings.",
"title": ""
},
{
"docid": "545f41e1c94a3198e75801da4c39b0da",
"text": "When attempting to improve the performance of a deep learning system, there are more or less three approaches one can take: the first is to improve the structure of the model, perhaps adding another layer, switching from simple recurrent units to LSTM cells [4], or–in the realm of NLP–taking advantage of syntactic parses (e.g. as in [13, et seq.]); another approach is to improve the initialization of the model, guaranteeing that the early-stage gradients have certain beneficial properties [3], or building in large amounts of sparsity [6], or taking advantage of principles of linear algebra [15]; the final approach is to try a more powerful learning algorithm, such as including a decaying sum over the previous gradients in the update [12], by dividing each parameter update by the L2 norm of the previous updates for that parameter [2], or even by foregoing first-order algorithms for more powerful but more computationally costly second order algorithms [9]. This paper has as its goal the third option—improving the quality of the final solution by using a faster, more powerful learning algorithm.",
"title": ""
},
{
"docid": "ab23f66295574368ccd8fc4e1b166ecc",
"text": "Although the educational level of the Portuguese population has improved in the last decades, the statistics keep Portugal at Europe’s tail end due to its high student failure rates. In particular, lack of success in the core classes of Mathematics and the Portuguese language is extremely serious. On the other hand, the fields of Business Intelligence (BI)/Data Mining (DM), which aim at extracting high-level knowledge from raw data, offer interesting automated tools that can aid the education domain. The present work intends to approach student achievement in secondary education using BI/DM techniques. Recent real-world data (e.g. student grades, demographic, social and school related features) was collected by using school reports and questionnaires. The two core classes (i.e. Mathematics and Portuguese) were modeled under binary/five-level classification and regression tasks. Also, four DM models (i.e. Decision Trees, Random Forest, Neural Networks and Support Vector Machines) and three input selections (e.g. with and without previous grades) were tested. The results show that a good predictive accuracy can be achieved, provided that the first and/or second school period grades are available. Although student achievement is highly influenced by past evaluations, an explanatory analysis has shown that there are also other relevant features (e.g. number of absences, parent’s job and education, alcohol consumption). As a direct outcome of this research, more efficient student prediction tools can be be developed, improving the quality of education and enhancing school resource management.",
"title": ""
},
{
"docid": "d593bf8ce0b340c612dc2a6269b917ff",
"text": "Understanding the intent underlying user queries may help personalize search results and improve user satisfaction. In this paper, we develop a methodology for using ad clickthrough logs, query specific information, and the content of search engine result pages to study characteristics of query intents, specially commercial intent. The findings of our study suggest that ad clickthrough features, query features, and the content of search engine result pages are together effective in detecting query intent. We also study the effect of query type and the number of displayed ads on the average clickthrough rate. As a practical application of our work, we show that modeling query intent can improve the accuracy of predicting ad clickthrough for previously unseen queries.",
"title": ""
}
] |
scidocsrr
|
509d38ceda71f68928cfcc16c6e5e604
|
Protected area needs in a changing climate
|
[
{
"docid": "a28be57b2eb045a525184b67afb14bb2",
"text": "Climate change has already triggered species distribution shifts in many parts of the world. Increasing impacts are expected for the future, yet few studies have aimed for a general understanding of the regional basis for species vulnerability. We projected late 21st century distributions for 1,350 European plants species under seven climate change scenarios. Application of the International Union for Conservation of Nature and Natural Resources Red List criteria to our projections shows that many European plant species could become severely threatened. More than half of the species we studied could be vulnerable or threatened by 2080. Expected species loss and turnover per pixel proved to be highly variable across scenarios (27-42% and 45-63% respectively, averaged over Europe) and across regions (2.5-86% and 17-86%, averaged over scenarios). Modeled species loss and turnover were found to depend strongly on the degree of change in just two climate variables describing temperature and moisture conditions. Despite the coarse scale of the analysis, species from mountains could be seen to be disproportionably sensitive to climate change (approximately 60% species loss). The boreal region was projected to lose few species, although gaining many others from immigration. The greatest changes are expected in the transition between the Mediterranean and Euro-Siberian regions. We found that risks of extinction for European plants may be large, even in moderate scenarios of climate change and despite inter-model variability.",
"title": ""
}
] |
[
{
"docid": "795a4d9f2dc10563dfee28c3b3cd0f08",
"text": "A wide-band probe fed patch antenna with low cross polarization and symmetrical broadside radiation pattern is proposed and studied. By employing a novel meandering probe feed and locating a patch about 0.1/spl lambda//sub 0/ above a ground plane, a patch antenna with 30% impedance bandwidth (SWR<2) and 9 dBi gain is designed. The far field radiation pattern of the antenna is stable across the operating bandwidth. Parametric studies and design guidelines of the proposed feeding structure are provided.",
"title": ""
},
{
"docid": "72c79b86a91f7c8453cd6075314a6b4d",
"text": "This talk aims to introduce LATEX users to XSL-FO. It does not attempt to give an exhaustive view of XSL-FO, but allows a LATEX user to get started. We show the common and different points between these two approaches of word processing.",
"title": ""
},
{
"docid": "888de1004e212e1271758ac35ff9807d",
"text": "We present the design and implementation of iVoLVER, a tool that allows users to create visualizations without textual programming. iVoLVER is designed to enable flexible acquisition of many types of data (text, colors, shapes, quantities, dates) from multiple source types (bitmap charts, webpages, photographs, SVGs, CSV files) and, within the same canvas, supports transformation of that data through simple widgets to construct interactive animated visuals. Aside from the tool, which is web-based and designed for pen and touch, we contribute the design of the interactive visual language and widgets for extraction, transformation, and representation of data. We demonstrate the flexibility and expressive power of the tool through a set of scenarios, and discuss some of the challenges encountered and how the tool fits within the current infovis tool landscape.",
"title": ""
},
{
"docid": "718e31eabfd386768353f9b75d9714eb",
"text": "The mathematical structure of Sudoku puzzles is akin to hard constraint satisfaction problems lying at the basis of many applications, including protein folding and the ground-state problem of glassy spin systems. Via an exact mapping of Sudoku into a deterministic, continuous-time dynamical system, here we show that the difficulty of Sudoku translates into transient chaotic behavior exhibited by this system. We also show that the escape rate κ, an invariant of transient chaos, provides a scalar measure of the puzzle's hardness that correlates well with human difficulty ratings. Accordingly, η = -log₁₀κ can be used to define a \"Richter\"-type scale for puzzle hardness, with easy puzzles having 0 < η ≤ 1, medium ones 1 < η ≤ 2, hard with 2 < η ≤ 3 and ultra-hard with η > 3. To our best knowledge, there are no known puzzles with η > 4.",
"title": ""
},
{
"docid": "b2817d85893a624574381eee4f8648db",
"text": "A coupled-fed antenna design capable of covering eight-band WWAN/LTE operation in a smartphone and suitable to integrate with a USB connector is presented. The antenna comprises an asymmetric T-shaped monopole as a coupling feed and a radiator as well, and a coupled-fed loop strip shorted to the ground plane. The antenna generates a wide lower band to cover (824-960 MHz) for GSM850/900 operation and a very wide upper band of larger than 1 GHz to cover the GPS/GSM1800/1900/UMTS/LTE2300/2500 operation (1565-2690 MHz). The proposed antenna provides wideband operation and exhibits great flexible behavior. The antenna is capable of providing eight-band operation for nine different sizes of PCBs, and enhance impedance matching only by varying a single element length, L. Details of proposed antenna, parameters and performance are presented and discussed in this paper.",
"title": ""
},
{
"docid": "d197875ea8637bf36d2746a2a1861c23",
"text": "There are billions of Internet of things (IoT) devices connecting to the Internet and the number is increasing. As a still ongoing technology, IoT can be used in different fields, such as agriculture, healthcare, manufacturing, energy, retailing and logistics. IoT has been changing our world and the way we live and think. However, IoT has no uniform architecture and there are different kinds of attacks on the different layers of IoT, such as unauthorized access to tags, tag cloning, sybil attack, sinkhole attack, denial of service attack, malicious code injection, and man in middle attack. IoT devices are more vulnerable to attacks because it is simple and some security measures can not be implemented. We analyze the privacy and security challenges in the IoT and survey on the corresponding solutions to enhance the security of IoT architecture and protocol. We should focus more on the security and privacy on IoT and help to promote the development of IoT.",
"title": ""
},
{
"docid": "3d12dea4ae76c5af54578262996fe0bb",
"text": "We introduce a two-layer undirected graphical model, calle d a “Replicated Softmax”, that can be used to model and automatically extract low -dimensional latent semantic representations from a large unstructured collec ti n of documents. We present efficient learning and inference algorithms for thi s model, and show how a Monte-Carlo based method, Annealed Importance Sampling, c an be used to produce an accurate estimate of the log-probability the model a ssigns to test data. This allows us to demonstrate that the proposed model is able to g neralize much better compared to Latent Dirichlet Allocation in terms of b th the log-probability of held-out documents and the retrieval accuracy.",
"title": ""
},
{
"docid": "a58930da8179d71616b8b6ef01ed1569",
"text": "Collecting sensor data results in large temporal data sets which need to be visualized, analyzed, and presented. One-dimensional time-series charts are used, but these present problems when screen resolution is small in comparison to the data. This can result in severe over-plotting, giving rise for the requirement to provide effective rendering and methods to allow interaction with the detailed data. Common solutions can be categorized as multi-scale representations, frequency based, and lens based interaction techniques. In this paper, we comparatively evaluate existing methods, such as Stack Zoom [15] and ChronoLenses [38], giving a graphical overview of each and classifying their ability to explore and interact with data. We propose new visualizations and other extensions to the existing approaches. We undertake and report an empirical study and a field study using these techniques.",
"title": ""
},
{
"docid": "73adcdf18b86ab3598731d75ac655f2c",
"text": "Many individuals exhibit unconscious body movements called mannerisms while speaking. These repeated changes often distract the audience when not relevant to the verbal context. We present an intelligent interface that can automatically extract human gestures using Microsoft Kinect to make speakers aware of their mannerisms. We use a sparsity-based algorithm, Shift Invariant Sparse Coding, to automatically extract the patterns of body movements. These patterns are displayed in an interface with subtle question and answer-based feedback scheme that draws attention to the speaker's body language. Our formal evaluation with 27 participants shows that the users became aware of their body language after using the system. In addition, when independent observers annotated the accuracy of the algorithm for every extracted pattern, we find that the patterns extracted by our algorithm is significantly (p<0.001) more accurate than just random selection. This represents a strong evidence that the algorithm is able to extract human-interpretable body movement patterns. An interactive demo of AutoManner is available at http://tinyurl.com/AutoManner.",
"title": ""
},
{
"docid": "154c40c2fab63ad15ded9b341ff60469",
"text": "ICU mortality risk prediction may help clinicians take effective interventions to improve patient outcome. Existing machine learning approaches often face challenges in integrating a comprehensive panel of physiologic variables and presenting to clinicians interpretable models. We aim to improve both accuracy and interpretability of prediction models by introducing Subgraph Augmented Non-negative Matrix Factorization (SANMF) on ICU physiologic time series. SANMF converts time series into a graph representation and applies frequent subgraph mining to automatically extract temporal trends. We then apply non-negative matrix factorization to group trends in a way that approximates patient pathophysiologic states. Trend groups are then used as features in training a logistic regression model for mortality risk prediction, and are also ranked according to their contribution to mortality risk. We evaluated SANMF against four empirical models on the task of predicting mortality or survival 30 days after discharge from ICU using the observed physiologic measurements between 12 and 24 hours after admission. SANMF outperforms all comparison models, and in particular, demonstrates an improvement in AUC (0.848 vs. 0.827, p<0.002) compared to a state-of-the-art machine learning method that uses manual feature engineering. Feature analysis was performed to illuminate insights and benefits of subgraph groups in mortality risk prediction.",
"title": ""
},
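The SANMF pipeline above combines several stages; the sketch below covers only the last two (grouping trend counts with non-negative matrix factorization and feeding the group activations into a logistic-regression mortality model). It assumes a patient-by-subgraph count matrix has already been produced by a separate frequent-subgraph-mining step, which is not shown. The data are random placeholders and all names and settings are illustrative, not those of the original study.

```python
# Sketch of the NMF grouping + logistic regression stages of the pipeline above.
import numpy as np
from sklearn.decomposition import NMF
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.poisson(1.0, size=(500, 200)).astype(float)  # placeholder subgraph counts per patient
y = rng.integers(0, 2, size=500)                      # placeholder 30-day mortality labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Group correlated trends (subgraphs) into a small number of non-negative factors.
nmf = NMF(n_components=20, init="nndsvda", max_iter=500, random_state=0)
Z_tr = nmf.fit_transform(X_tr)        # per-patient activation of each trend group
Z_te = nmf.transform(X_te)

clf = LogisticRegression(max_iter=1000).fit(Z_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(Z_te)[:, 1]))

# Trend groups can then be ranked by their contribution to predicted risk.
ranking = np.argsort(-np.abs(clf.coef_[0]))
```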
{
"docid": "bfa38fded95303834d487cb27d228ad7",
"text": "Apparel classification encompasses the identification of an outfit in an image. The area has its applications in social media advertising, e-commerce and criminal law. In our work, we introduce a new method for shopping apparels online. This paper describes our approach to classify images using Convolutional Neural Networks. We concentrate mainly on two aspects of apparel classification: (1) Multiclass classification of apparel type and (2) Similar Apparel retrieval based on the query image. This shopping technique relieves the burden of storing a lot of information related to the images and traditional ways of filtering search results can be replaced by image filters",
"title": ""
},
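As a rough illustration of the two tasks named in the passage above (apparel-type classification and similar-apparel retrieval), the following PyTorch sketch shows a tiny convolutional classifier whose penultimate features double as a retrieval embedding ranked by cosine similarity. The architecture, sizes, and class count are invented for illustration; they are not the authors' network.

```python
# Toy CNN for apparel-type classification plus embedding-based retrieval.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ApparelNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.embed = nn.Linear(64, 128)           # embedding used for retrieval
        self.head = nn.Linear(128, num_classes)   # multiclass apparel-type output

    def forward(self, x):
        z = self.embed(self.features(x).flatten(1))
        return self.head(z), z

model = ApparelNet()
logits, emb = model(torch.randn(4, 3, 64, 64))    # random stand-in images

# Retrieval: rank a gallery of embeddings by cosine similarity to the query image.
gallery = F.normalize(torch.randn(100, 128), dim=1)
query = F.normalize(emb[:1], dim=1)
top5 = torch.topk(query @ gallery.T, k=5).indices
```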
{
"docid": "73bf620a97b2eadeb2398dd718b85fe8",
"text": "The Semeval task 5 was an opportunity for experimenting with the key term extraction module of GROBID, a system for extracting and generating bibliographical information from technical and scientific documents. The tool first uses GROBID’s facilities for analyzing the structure of scientific articles, resulting in a first set of structural features. A second set of features captures content properties based on phraseness, informativeness and keywordness measures. Two knowledge bases, GRISP and Wikipedia, are then exploited for producing a last set of lexical/semantic features. Bagged decision trees appeared to be the most efficient machine learning algorithm for generating a list of ranked key term candidates. Finally a post ranking was realized based on statistics of cousage of keywords in HAL, a large Open Access publication repository.",
"title": ""
},
{
"docid": "80ff93b5f2e0ff3cff04c314e28159fc",
"text": "In the past 30 years there has been a growing body of research using different methods (behavioural, electrophysiological, neuropsychological, TMS and imaging studies) asking whether processing words from different grammatical classes (especially nouns and verbs) engage different neural systems. To date, however, each line of investigation has provided conflicting results. Here we present a review of this literature, showing that once we take into account the confounding in most studies between semantic distinctions (objects vs. actions) and grammatical distinction (nouns vs. verbs), and the conflation between studies concerned with mechanisms of single word processing and those studies concerned with sentence integration, the emerging picture is relatively clear-cut: clear neural separability is observed between the processing of object words (nouns) and action words (typically verbs), grammatical class effects emerge or become stronger for tasks and languages imposing greater processing demands. These findings indicate that grammatical class per se is not an organisational principle of knowledge in the brain; rather, all the findings we review are compatible with two general principles described by typological linguistics as underlying grammatical class membership across languages: semantic/pragmatic, and distributional cues in language that distinguish nouns from verbs. These two general principles are incorporated within an emergentist view which takes these constraints into account.",
"title": ""
},
{
"docid": "f8b0dcd771e7e7cf50a05cf7221f4535",
"text": "Studies on monocyte and macrophage biology and differentiation have revealed the pleiotropic activities of these cells. Macrophages are tissue sentinels that maintain tissue integrity by eliminating/repairing damaged cells and matrices. In this M2-like mode, they can also promote tumor growth. Conversely, M1-like macrophages are key effector cells for the elimination of pathogens, virally infected, and cancer cells. Macrophage differentiation from monocytes occurs in the tissue in concomitance with the acquisition of a functional phenotype that depends on microenvironmental signals, thereby accounting for the many and apparently opposed macrophage functions. Many questions arise. When monocytes differentiate into macrophages in a tissue (concomitantly adopting a specific functional program, M1 or M2), do they all die during the inflammatory reaction, or do some of them survive? Do those that survive become quiescent tissue macrophages, able to react as naïve cells to a new challenge? Or, do monocyte-derived tissue macrophages conserve a \"memory\" of their past inflammatory activation? This review will address some of these important questions under the general framework of the role of monocytes and macrophages in the initiation, development, resolution, and chronicization of inflammation.",
"title": ""
},
{
"docid": "f71b1df36ee89cdb30a1dd29afc532ea",
"text": "Finite state machines are a standard tool to model event-based control logic, and dynamic programming is a staple of optimal decision-making. We combine these approaches in the context of radar resource management for Naval surface warfare. There is a friendly (Blue) force in the open sea, equipped with one multi-function radar and multiple ships. The enemy (Red) force consists of missiles that target the Blue force's radar. The mission of the Blue force is to foil the enemy's threat by careful allocation of radar resources. Dynamically composed finite state machines are used to formalize the model of the battle space and dynamic programming is applied to our dynamic state machine model to generate an optimal policy. To achieve this in near-real-time and a changing environment, we use approximate dynamic programming methods. Example scenario illustrating the model and simulation results are presented.",
"title": ""
},
{
"docid": "8bdd02547be77f4c825c9aed8016ddf8",
"text": "Global terrestrial ecosystems absorbed carbon at a rate of 1–4 Pg yr-1 during the 1980s and 1990s, offsetting 10–60 per cent of the fossil-fuel emissions. The regional patterns and causes of terrestrial carbon sources and sinks, however, remain uncertain. With increasing scientific and political interest in regional aspects of the global carbon cycle, there is a strong impetus to better understand the carbon balance of China. This is not only because China is the world’s most populous country and the largest emitter of fossil-fuel CO2 into the atmosphere, but also because it has experienced regionally distinct land-use histories and climate trends, which together control the carbon budget of its ecosystems. Here we analyse the current terrestrial carbon balance of China and its driving mechanisms during the 1980s and 1990s using three different methods: biomass and soil carbon inventories extrapolated by satellite greenness measurements, ecosystem models and atmospheric inversions. The three methods produce similar estimates of a net carbon sink in the range of 0.19–0.26 Pg carbon (PgC) per year, which is smaller than that in the conterminous United States but comparable to that in geographic Europe. We find that northeast China is a net source of CO2 to the atmosphere owing to overharvesting and degradation of forests. By contrast, southern China accounts for more than 65 per cent of the carbon sink, which can be attributed to regional climate change, large-scale plantation programmes active since the 1980s and shrub recovery. Shrub recovery is identified as the most uncertain factor contributing to the carbon sink. Our data and model results together indicate that China’s terrestrial ecosystems absorbed 28–37 per cent of its cumulated fossil carbon emissions during the 1980s and 1990s.",
"title": ""
},
{
"docid": "232bf10d578c823b0cd98a3641ace44a",
"text": "The effect of economic globalization on the number of transnational terrorist incidents within countries is analyzed statistically, using a sample of 112 countries from 1975 to 1997. Results show that trade, foreign direct investment (FDI), and portfolio investment have no direct positive effect on transnational terrorist incidents within countries and that economic developments of a country and its top trading partners reduce the number of terrorist incidents inside the country. To the extent that trade and FDI promote economic development, they have an indirect negative effect on transnational terrorism.",
"title": ""
},
{
"docid": "66fd7de53986e8c4a7ed08ed88f0b45b",
"text": "BACKGROUND\nConcerns regarding the risk of estrogen replacement have resulted in a significant increase in the use of soy products by menopausal women who, despite the lack of evidence of the efficacy of such products, seek alternatives to menopausal hormone therapy. Our goal was to determine the efficacy of soy isoflavone tablets in preventing bone loss and menopausal symptoms.\n\n\nMETHODS\nThe study design was a single-center, randomized, placebo-controlled, double-blind clinical trial conducted from July 1, 2004, through March 31, 2009. Women aged 45 to 60 years within 5 years of menopause and with a bone mineral density T score of -2.0 or higher in the lumbar spine or total hip were randomly assigned, in equal proportions, to receive daily soy isoflavone tablets, 200 mg, or placebo. The primary outcome was changes in bone mineral density in the lumbar spine, total hip, and femoral neck at the 2-year follow-up. Secondary outcomes included changes in menopausal symptoms, vaginal cytologic characteristics, N -telopeptide of type I bone collagen, lipids, and thyroid function.\n\n\nRESULTS\nAfter 2 years, no significant differences were found between the participants receiving soy tablets (n = 122) and those receiving placebo (n = 126) regarding changes in bone mineral density in the spine (-2.0% and -2.3%, respectively), the total hip (-1.2% and -1.4%, respectively), or the femoral neck (-2.2% and -2.1%, respectively). A significantly larger proportion of participants in the soy group experienced hot flashes and constipation compared with the control group. No significant differences were found between groups in other outcomes.\n\n\nCONCLUSIONS\nIn this population, the daily administration of tablets containing 200 mg of soy isoflavones for 2 years did not prevent bone loss or menopausal symptoms.\n\n\nTRIAL REGISTRATION\nclinicaltrials.gov Identifier: NCT00076050.",
"title": ""
},
{
"docid": "a63db4f5e588e23e4832eae581fc1c4b",
"text": "Driver drowsiness is a major cause of mortality in traffic accidents worldwide. Electroencephalographic (EEG) signal, which reflects the brain activities, is more directly related to drowsiness. Thus, many Brain-Machine-Interface (BMI) systems have been proposed to detect driver drowsiness. However, detecting driver drowsiness at its early stage poses a major practical hurdle when using existing BMI systems. This study proposes a context-aware BMI system aimed to detect driver drowsiness at its early stage by enriching the EEG data with the intensity of head-movements. The proposed system is carefully designed for low-power consumption with on-chip feature extraction and low energy Bluetooth connection. Also, the proposed system is implemented using JAVA programming language as a mobile application for on-line analysis. In total, 266 datasets obtained from six subjects who participated in a one-hour monotonous driving simulation experiment were used to evaluate this system. According to a video-based reference, the proposed system obtained an overall detection accuracy of 82.71% for classifying alert and slightly drowsy events by using EEG data alone and 96.24% by using the hybrid data of head-movement and EEG. These results indicate that the combination of EEG data and head-movement contextual information constitutes a robust solution for the early detection of driver drowsiness.",
"title": ""
},
{
"docid": "dba13fea4538f23ea1208087d3e81d6b",
"text": "This paper investigates the effectiveness of using MeSH® in PubMed through its automatic query expansion process: Automatic Term Mapping (ATM). We run Boolean searches based on a collection of 55 topics and about 160,000 MEDLINE® citations used in the 2006 and 2007 TREC Genomics Tracks. For each topic, we first automatically construct a query by selecting keywords from the question. Next, each query is expanded by ATM, which assigns different search tags to terms in the query. Three search tags: [MeSH Terms], [Text Words], and [All Fields] are chosen to be studied after expansion because they all make use of the MeSH field of indexed MEDLINE citations. Furthermore, we characterize the two different mechanisms by which the MeSH field is used. Retrieval results using MeSH after expansion are compared to those solely based on the words in MEDLINE title and abstracts. The aggregate retrieval performance is assessed using both F-measure and mean rank precision. Experimental results suggest that query expansion using MeSH in PubMed can generally improve retrieval performance, but the improvement may not affect end PubMed users in realistic situations.",
"title": ""
}
] |
scidocsrr
|
63f1583140e335657d783d130f935e28
|
Sorted random projections for robust rotation-invariant texture classification
|
[
{
"docid": "5f31e3405af91cd013c3193c7d3cdd8d",
"text": "In this paper, we review most major filtering approaches to texture feature extraction and perform a comparative study. Filtering approaches included are Laws masks, ring/wedge filters, dyadic Gabor filter banks, wavelet transforms, wavelet packets and wavelet frames, quadrature mirror filters, discrete cosine transform, eigenfilters, optimized Gabor filters, linear predictors, and optimized finite impulse response filters. The features are computed as the local energy of the filter responses. The effect of the filtering is highlighted, keeping the local energy function and the classification algorithm identical for most approaches. For reference, comparisons with two classical nonfiltering approaches, co-occurrence (statistical) and autoregressive (model based) features, are given. We present a ranking of the tested approaches based on extensive experiments.",
"title": ""
},
{
"docid": "432fe001ec8f1331a4bd033e9c49ccdf",
"text": "Recently, methods based on local image features have shown promise for texture and object recognition tasks. This paper presents a large-scale evaluation of an approach that represents images as distributions (signatures or histograms) of features extracted from a sparse set of keypoint locations and learns a Support Vector Machine classifier with kernels based on two effective measures for comparing distributions, the Earth Mover’s Distance and the χ2 distance. We first evaluate the performance of our approach with different keypoint detectors and descriptors, as well as different kernels and classifiers. We then conduct a comparative evaluation with several state-of-the-art recognition methods on four texture and five object databases. On most of these databases, our implementation exceeds the best reported results and achieves comparable performance on the rest. Finally, we investigate the influence of background correlations on recognition performance via extensive tests on the PASCAL database, for which ground-truth object localization information is available. Our experiments demonstrate that image representations based on distributions of local features are surprisingly effective for classification of texture and object images under challenging real-world conditions, including significant intra-class variations and substantial background clutter.",
"title": ""
},
{
"docid": "097879c593aa68602564c176b806a74b",
"text": "We study the recognition of surfaces made from different materials such as concrete, rug, marble, or leather on the basis of their textural appearance. Such natural textures arise from spatial variation of two surface attributes: (1) reflectance and (2) surface normal. In this paper, we provide a unified model to address both these aspects of natural texture. The main idea is to construct a vocabulary of prototype tiny surface patches with associated local geometric and photometric properties. We call these 3D textons. Examples might be ridges, grooves, spots or stripes or combinations thereof. Associated with each texton is an appearance vector, which characterizes the local irradiance distribution, represented as a set of linear Gaussian derivative filter outputs, under different lighting and viewing conditions. Given a large collection of images of different materials, a clustering approach is used to acquire a small (on the order of 100) 3D texton vocabulary. Given a few (1 to 4) images of any material, it can be characterized using these textons. We demonstrate the application of this representation for recognition of the material viewed under novel lighting and viewing conditions. We also illustrate how the 3D texton model can be used to predict the appearance of materials under novel conditions.",
"title": ""
}
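The texture passages above both reduce an image to statistics of filter responses (local energy of a filter bank, or clustered "textons"). The toy sketch below illustrates the texton variant: responses from a small Gaussian-derivative filter bank are clustered with k-means to form a vocabulary, and each texture is then summarized by its histogram over that vocabulary. Filter choices, sizes, and the cluster count are illustrative, not those of the cited works.

```python
# Toy texton pipeline: filter-bank responses -> k-means vocabulary -> histograms.
import numpy as np
from scipy import ndimage
from sklearn.cluster import KMeans

def filter_responses(img, sigmas=(1, 2, 4)):
    """Per-pixel stack of Gaussian-derivative responses, shape (H*W, n_filters)."""
    chans = []
    for s in sigmas:
        chans.append(ndimage.gaussian_filter(img, s))                # smoothing
        chans.append(ndimage.gaussian_filter(img, s, order=(0, 1)))  # derivative along x
        chans.append(ndimage.gaussian_filter(img, s, order=(1, 0)))  # derivative along y
        chans.append(ndimage.gaussian_laplace(img, s))               # Laplacian of Gaussian
    return np.stack([c.ravel() for c in chans], axis=1)

rng = np.random.default_rng(0)
train_imgs = [rng.random((64, 64)) for _ in range(5)]   # placeholder texture images

# Build the texton vocabulary by clustering pooled filter responses.
pooled = np.vstack([filter_responses(im) for im in train_imgs])
kmeans = KMeans(n_clusters=32, n_init=4, random_state=0).fit(pooled)

def texton_histogram(img):
    labels = kmeans.predict(filter_responses(img))
    hist = np.bincount(labels, minlength=32).astype(float)
    return hist / hist.sum()

signature = texton_histogram(train_imgs[0])   # compare signatures with chi-squared or EMD
```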
] |
[
{
"docid": "cd1b9bf3086108d858bf8729ea96eb0d",
"text": "Independence is the building methodology in achieving dreams, goals and objectives in life. Visually impaired persons find themselves challenging to go out independently. There are millions of visually impaired or blind people in this world who are always in need of helping hands. For many years the white cane became a well-known attribute to blind person's navigation and later efforts have been made to improve the cane by adding remote sensor. Blind people have big problem when they walk on the street or stairs using white cane, but they have sharp haptic sensitivity. The electronic walking stick will help the blind person by providing more convenient means of life. The main aim of this paper is to contribute our knowledge and services to the people of blind and disable society.",
"title": ""
},
{
"docid": "fdce30fe0beec4e7406e5b0208f855b9",
"text": "Current malware is often transmitted in packed or encrypted form to prevent examination by anti-virus software.To analyze new malware, researchers typically resort to dynamic code analysis techniques to unpack the code for examination.Unfortunately, these dynamic techniques are susceptible to a variety of anti-monitoring defenses, as well as \"time bombs\" or \"logic bombs,\" and can be slow and tedious to identify and disable. This paper discusses an alternative approach that relies on static analysis techniques to automate this process. Alias analysis can be used to identify the existence of unpacking,static slicing can identify the unpacking code, and control flow analysis can be used to identify and neutralize dynamic defenses. The identified unpacking code can be instrumented and transformed, then executed to perform the unpacking.We present a working prototype that can handle a variety of malware binaries, packed with both custom and commercial packers, and containing several examples of dynamic defenses.",
"title": ""
},
{
"docid": "1ffadb09d21cedca89d27450c38b776b",
"text": "OBJECTIVES\nTo investigate speech outcomes in 5- and 10-year-old children with unilateral cleft lip and palate (UCLP) treated according to minimal incision technique (MIT) - a one-stage palatal method.\n\n\nMETHODS\nA retrospective, longitudinal cohort study of a consecutive series of 69 patients born with UCLP, treated with MIT (mean age 13 months) was included. Forty-two children (43%) received a velopharyngeal flap; 12 before 5 years and another 18 before 10 years of age. Cleft speech variables were rated from standardized audio recordings at 5 and 10 years of age, independently by three experienced, external speech-language pathologists, blinded to the material. The prevalences of cleft speech characteristics were determined, and inter- and intra-rater agreement calculated.\n\n\nRESULTS\nMore than mild hypernasality, weak pressure consonants and perceived incompetent velopharyngeal function were present in 19-22% of the children at 5 years, but improved to less than 5% at 10 years. However, audible nasal air leakage, prevalent in 23% at 5 years, did not improve by age 10. Thirty percent had frequent or almost always persistent compensatory articulation at 5 years, and 6% at age 10. The general impression of speech improved markedly, from 57% giving a normal impression at 5 years to 89% at 10 years. A high prevalence of distorted/s/was found at both 5 and 10 years of age.\n\n\nCONCLUSIONS\nA high occurrence of speech deviances at 5 years of age after MIT was markedly reduced at 10 years in this study of children with unilateral cleft lip and palate. The high pharyngeal flap rate presumably accounted for the positive speech development.",
"title": ""
},
{
"docid": "d597d4a1c32256b95524876218d963da",
"text": "E-commerce in today's conditions has the highest dependence on network infrastructure of banking. However, when the possibility of communicating with the Banking network is not provided, business activities will suffer. This paper proposes a new approach of digital wallet based on mobile devices without the need to exchange physical money or communicate with banking network. A digital wallet is a software component that allows a user to make an electronic payment in cash (such as a credit card or a digital coin), and hides the low-level details of executing the payment protocol that is used to make the payment. The main features of proposed architecture are secure awareness, fault tolerance, and infrastructure-less protocol.",
"title": ""
},
{
"docid": "8af7826c809eb3941c2e394899ca83ef",
"text": "The development of interactive rehabilitation technologies which rely on wearable-sensing for upper body rehabilitation is attracting increasing research interest. This paper reviews related research with the aim: 1) To inventory and classify interactive wearable systems for movement and posture monitoring during upper body rehabilitation, regarding the sensing technology, system measurements and feedback conditions; 2) To gauge the wearability of the wearable systems; 3) To inventory the availability of clinical evidence supporting the effectiveness of related technologies. A systematic literature search was conducted in the following search engines: PubMed, ACM, Scopus and IEEE (January 2010–April 2016). Forty-five papers were included and discussed in a new cuboid taxonomy which consists of 3 dimensions: sensing technology, feedback modalities and system measurements. Wearable sensor systems were developed for persons in: 1) Neuro-rehabilitation: stroke (n = 21), spinal cord injury (n = 1), cerebral palsy (n = 2), Alzheimer (n = 1); 2) Musculoskeletal impairment: ligament rehabilitation (n = 1), arthritis (n = 1), frozen shoulder (n = 1), bones trauma (n = 1); 3) Others: chronic pulmonary obstructive disease (n = 1), chronic pain rehabilitation (n = 1) and other general rehabilitation (n = 14). Accelerometers and inertial measurement units (IMU) are the most frequently used technologies (84% of the papers). They are mostly used in multiple sensor configurations to measure upper limb kinematics and/or trunk posture. Sensors are placed mostly on the trunk, upper arm, the forearm, the wrist, and the finger. Typically sensors are attachable rather than embedded in wearable devices and garments; although studies that embed and integrate sensors are increasing in the last 4 years. 16 studies applied knowledge of result (KR) feedback, 14 studies applied knowledge of performance (KP) feedback and 15 studies applied both in various modalities. 16 studies have conducted their evaluation with patients and reported usability tests, while only three of them conducted clinical trials including one randomized clinical trial. This review has shown that wearable systems are used mostly for the monitoring and provision of feedback on posture and upper extremity movements in stroke rehabilitation. The results indicated that accelerometers and IMUs are the most frequently used sensors, in most cases attached to the body through ad hoc contraptions for the purpose of improving range of motion and movement performance during upper body rehabilitation. Systems featuring sensors embedded in wearable appliances or garments are only beginning to emerge. Similarly, clinical evaluations are scarce and are further needed to provide evidence on effectiveness and pave the path towards implementation in clinical settings.",
"title": ""
},
{
"docid": "8abcf3e56e272c06da26a40d66afcfb0",
"text": "As internet use becomes increasingly integral to modern life, the hazards of excessive use are also becoming apparent. Prior research suggests that socially anxious individuals are particularly susceptible to problematic internet use. This vulnerability may relate to the perception of online communication as a safer means of interacting, due to greater control over self-presentation, decreased risk of negative evaluation, and improved relationship quality. To investigate these hypotheses, a general sample of 338 completed an online survey. Social anxiety was confirmed as a significant predictor of problematic internet use when controlling for depression and general anxiety. Social anxiety was associated with perceptions of greater control and decreased risk of negative evaluation when communicating online, however perceived relationship quality did not differ. Negative expectations during face-to-face interactions partially accounted for the relationship between social anxiety and problematic internet use. There was also preliminary evidence that preference for online communication exacerbates face-to-face avoidance.",
"title": ""
},
{
"docid": "f25b9147e67bd8051852142ebd82cf20",
"text": "Fossil fuels currently supply most of the world's energy needs, and however unacceptable their long-term consequences, the supplies are likely to remain adequate for the next few generations. Scientists and policy makers must make use of this period of grace to assess alternative sources of energy and determine what is scientifically possible, environmentally acceptable and technologically promising.",
"title": ""
},
{
"docid": "09623c821f05ffb7840702a5869be284",
"text": "Area-restricted search (ARS) is a foraging strategy used by many animals to locate resources. The behavior is characterized by a time-dependent reduction in turning frequency after the last resource encounter. This maximizes the time spent in areas in which resources are abundant and extends the search to a larger area when resources become scarce. We demonstrate that dopaminergic and glutamatergic signaling contribute to the neural circuit controlling ARS in the nematode Caenorhabditis elegans. Ablation of dopaminergic neurons eliminated ARS behavior, as did application of the dopamine receptor antagonist raclopride. Furthermore, ARS was affected by mutations in the glutamate receptor subunits GLR-1 and GLR-2 and the EAT-4 glutamate vesicular transporter. Interestingly, preincubation on dopamine restored the behavior in worms with defective dopaminergic signaling, but not in glr-1, glr-2, or eat-4 mutants. This suggests that dopaminergic and glutamatergic signaling function in the same pathway to regulate turn frequency. Both GLR-1 and GLR-2 are expressed in the locomotory control circuit that modulates the direction of locomotion in response to sensory stimuli and the duration of forward movement during foraging. We propose a mechanism for ARS in C. elegans in which dopamine, released in response to food, modulates glutamatergic signaling in the locomotory control circuit, thus resulting in an increased turn frequency.",
"title": ""
},
{
"docid": "1a69b777e03d2d2589dd9efb9cda2a10",
"text": "Three-dimensional measurement of joint motion is a promising tool for clinical evaluation and therapeutic treatment comparisons. Although many devices exist for joints kinematics assessment, there is a need for a system that could be used in routine practice. Such a system should be accurate, ambulatory, and easy to use. The combination of gyroscopes and accelerometers (i.e., inertial measurement unit) has proven to be suitable for unrestrained measurement of orientation during a short period of time (i.e., few minutes). However, due to their inability to detect horizontal reference, inertial-based systems generally fail to measure differential orientation, a prerequisite for computing the three-dimentional knee joint angle recommended by the Internal Society of Biomechanics (ISB). A simple method based on a leg movement is proposed here to align two inertial measurement units fixed on the thigh and shank segments. Based on the combination of the former alignment and a fusion algorithm, the three-dimensional knee joint angle is measured and compared with a magnetic motion capture system during walking. The proposed system is suitable to measure the absolute knee flexion/extension and abduction/adduction angles with mean (SD) offset errors of -1 degree (1 degree ) and 0 degrees (0.6 degrees ) and mean (SD) root mean square (RMS) errors of 1.5 degrees (0.4 degrees ) and 1.7 degrees (0.5 degrees ). The system is also suitable for the relative measurement of knee internal/external rotation (mean (SD) offset error of 3.4 degrees (2.7 degrees )) with a mean (SD) RMS error of 1.6 degrees (0.5 degrees ). The method described in this paper can be easily adapted in order to measure other joint angular displacements such as elbow or ankle.",
"title": ""
},
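The passage above frames the knee angle as the relative rotation between thigh- and shank-mounted inertial units. As a hedged sketch of that final step only (sensor fusion and the calibration-movement alignment are assumed to have been applied already), the snippet below composes the two segment orientations and decomposes the result into joint angles. The Euler sequence chosen is one plausible ISB-style convention for illustration, not necessarily the one used in the paper.

```python
# Relative knee orientation from two segment orientations (SciPy Rotation).
import numpy as np
from scipy.spatial.transform import Rotation as R

def knee_angles(q_thigh, q_shank):
    """q_* are unit quaternions (x, y, z, w) of each segment in a common frame.
    Returns flexion/extension, abduction/adduction, internal/external rotation (deg)."""
    rel = R.from_quat(q_thigh).inv() * R.from_quat(q_shank)
    return rel.as_euler("ZXY", degrees=True)   # illustrative joint-angle decomposition

# Synthetic example: thigh upright, shank flexed 30 degrees about the joint axis.
q_thigh = R.identity().as_quat()
q_shank = R.from_euler("Z", 30, degrees=True).as_quat()
print(knee_angles(q_thigh, q_shank))   # approximately [30, 0, 0]
```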
{
"docid": "de1f35d0e19cafc28a632984f0411f94",
"text": "Large-pose face alignment is a very challenging problem in computer vision, which is used as a prerequisite for many important vision tasks, e.g, face recognition and 3D face reconstruction. Recently, there have been a few attempts to solve this problem, but still more research is needed to achieve highly accurate results. In this paper, we propose a face alignment method for large-pose face images, by combining the powerful cascaded CNN regressor method and 3DMM. We formulate the face alignment as a 3DMM fitting problem, where the camera projection matrix and 3D shape parameters are estimated by a cascade of CNN-based regressors. The dense 3D shape allows us to design pose-invariant appearance features for effective CNN learning. Extensive experiments are conducted on the challenging databases (AFLW and AFW), with comparison to the state of the art.",
"title": ""
},
{
"docid": "c26f27dd49598b7f9120f9a31dccb012",
"text": "The effects of music training in relation to brain plasticity have caused excitement, evident from the popularity of books on this topic among scientists and the general public. Neuroscience research has shown that music training leads to changes throughout the auditory system that prime musicians for listening challenges beyond music processing. This effect of music training suggests that, akin to physical exercise and its impact on body fitness, music is a resource that tones the brain for auditory fitness. Therefore, the role of music in shaping individual development deserves consideration.",
"title": ""
},
{
"docid": "a16be992aa947c8c5d2a7c9899dfbcd8",
"text": "The effect of the Eureka Spring (ES) appliance was investigated on 37 consecutively treated, noncompliant patients with bilateral Class II malocclusions. Lateral cephalographs were taken at the start of orthodontic treatment (T1), at insertion of the ES (T2), and at removal of the ES (T3). The average treatment interval between T2 and T3 was four months. The Class II correction occurred almost entirely by dentoalveolar movement and was almost equally distributed between the maxillary and mandibular dentitions. The rate of molar correction was 0.7 mm/mo. There was no change in anterior face height, mandibular plane angle, palatal plane angle, or gonial angle with treatment. There was a 2 degrees change in the occlusal plane resulting from intrusion of the maxillary molar and the mandibular incisor. Based on the results in this sample, the ES appliance was very effective in correcting Class II malocclusions in noncompliant patients without increasing the vertical dimension.",
"title": ""
},
{
"docid": "0286fb17d9ddb18fb25152c7e5b943c4",
"text": "Treemaps are a well known method for the visualization of attributed hierarchical data. Previously proposed treemap layout algorithms are limited to rectangular shapes, which cause problems with the aspect ratio of the rectangles as well as with identifying the visualized hierarchical structure. The approach of Voronoi treemaps presented in this paper eliminates these problems through enabling subdivisions of and in polygons. Additionally, this allows for creating treemap visualizations within areas of arbitrary shape, such as triangles and circles, thereby enabling a more flexible adaptation of treemaps for a wider range of applications.",
"title": ""
},
{
"docid": "3767702e22ac34493bb1c6c2513da9f7",
"text": "The majority of the online reviews are written in free-text format. It is often useful to have a measure which summarizes the content of the review. One such measure can be sentiment which expresses the polarity (positive/negative) of the review. However, a more granular classification of sentiment, such as rating stars, would be more advantageous and would help the user form a better opinion. In this project, we propose an approach which involves a combination of topic modeling and sentiment analysis to achieve this objective and thereby help predict the rating stars.",
"title": ""
},
{
"docid": "675007890407b7e8a7d15c1255e77ec6",
"text": "This study investigated the influence of the completeness of CRM relational information processes on customer-based relational performance and profit performance. In addition, interaction orientation and CRM readiness were adopted as moderators on the relationship between CRM relational information processes and customer-based performance. Both qualitative and quantitative approaches were applied in this study. The results revealed that the completeness of CRM relational information processes facilitates customer-based relational performance (i.e., customer satisfaction, and positive WOM), and in turn enhances profit performance (i.e., efficiency with regard to identifying, acquiring and retaining, and converting unprofitable customers to profitable ones). The alternative model demonstrated that both interaction orientation and CRM readiness play a mediating role in the relationship between information processes and relational performance. Managers should strengthen the completeness and smoothness of CRM information processes, should increase the level of interactional orientation with customers and should maintain firm CRM readiness to service their customers. The implications of this research and suggestions for managers were also discussed.",
"title": ""
},
{
"docid": "b8322d65e61be7fb252b2e418df85d3e",
"text": "the od. cted ly genof 997 Abstract. Algorithms of filtering, edge detection, and extraction of details and their implementation using cellular neural networks (CNN) are developed in this paper. The theory of CNN based on universal binary neurons (UBN) is also developed. A new learning algorithm for this type of neurons is carried out. Implementation of low-pass filtering algorithms using CNN is considered. Separate processing of the binary planes of gray-scale images is proposed. Algorithms of edge detection and impulsive noise filtering based on this approach and their implementation using CNN-UBN are presented. Algorithms of frequency correction reduced to filtering in the spatial domain are considered. These algorithms make it possible to extract details of given sizes. Implementation of such algorithms using CNN is presented. Finally, a general strategy of gray-scale image processing using CNN is considered. © 1997 SPIE and IS&T. [S1017-9909(97)00703-4]",
"title": ""
},
{
"docid": "80bf0f45eb0b9ad4360ce9c93d57c6fe",
"text": "Definitions of innovation such as 'novelty in action' (Altschuler and Zegans, 1997) and 'new ideas that work' (Mulgan and Albury, 2003) emphasise that innovation is not just a new idea but a new practice. This is the difference between invention and innovation (Bessant, 2003). Some writers reserve the notion of innovation for 'radical' or 'breakthrough' novelty, while others emphasise a spectrum of innovation from large-scale dramatic, 'headline-making' innovations to small scale, incremental changes. However, the definition needs to recognize practical impact: Those changes worth recognizing as innovation should be…new to the organization, be large enough, general enough and durable enough to appreciably affect the operations or character of the organization (Moore et al., 1997, p. 276). How extensive, therefore, does the change have to be in order to be classed as innovation (rather than continuous improvement)? Much of the innovation theory and literature has derived from new product development, where an innovation in technology can be observed and broadly agreed, even if its full implications or its impact are not initially known. By contrast, innovations in governance and services are more ambiguous. Here innovation is usually not a physical artefact at all, but a change in the relationships between service providers and their users. In such changes judgements have to be made about processes, impacts and outcomes, as well as product. Greenhalgh et al. (2004) suggest that, for the National Health Service (NHS), innovations have to be 'perceived as new by a proportion of key stakeholders' (p. 40). Such a socially-constructed Jean Hartley is Professor of Organizational Analysis, Institute of Governance and Public Management, Warwick Business School and an ESRC AIM Public Service Fellow. perspective is a useful approach to public sector innovation across a range of services. Innovation may include reinvention or adaption to another context, location or time period. The diffusion of innovations (sometimes called dissemination, or spread of good or promising practices) to other organizations, localities and jurisdictions is particularly important for the public sector (Rashman and Hartley, 2002). This highlights some important differences between public and private sector innovation. Innovation in the latter is driven primarily by competitive advantage—this tends to restrict the sharing of good practice to strategic partners. By contrast, the drivers in the public sector are to achieve widespread improvements in governance and service performance, including efficiencies, in order to increase public value (Moore, 1995). Such public goals can be enhanced through collaborative …",
"title": ""
},
{
"docid": "58ba2ac85d041626d6fe361bd0578c2f",
"text": "This paper concerns open-world classification, where the classifier not only needs to classify test examples into seen classes that have appeared in training but also reject examples from unseen or novel classes that have not appeared in training. Specifically, this paper focuses on discovering the hidden unseen classes of the rejected examples. Clearly, without prior knowledge this is difficult. However, we do have the data from the seen training classes, which can tell us what kind of similarity/difference is expected for examples from the same class or from different classes. It is reasonable to assume that this knowledge can be transferred to the rejected examples and used to discover the hidden unseen classes in them. This paper aims to solve this problem. It first proposes a joint open classification model with a sub-model for classifying whether a pair of examples belongs to the same or different classes. This sub-model can serve as a distance function for clustering to discover the hidden classes of the rejected examples. Experimental results show that the proposed model is highly promising.",
"title": ""
},
{
"docid": "1141a01de74dd684f076a1ba402325cb",
"text": "AIMS\nIn several studies, possible risk factors/predictors for severe alcohol withdrawal syndrome (AWS), i.e. delirium tremens (DT) and/or seizures, have been investigated. We have recently observed that low blood platelet count could be such a risk factor/predictor. We therefore investigated whether such an association could be found using a large number of alcohol-dependent individuals (n = 334).\n\n\nMETHODS\nThis study is a retrospectively conducted cohort study based on data from female and male patients (>20 years of age), consecutively admitted to an alcohol treatment unit. The individuals had to fulfil the discharge diagnoses alcohol dependence and alcohol withdrawal syndrome according to DSM-IV.\n\n\nRESULTS\nDuring the treatment period, 3% of the patients developed DT, 2% seizures and none had co-occurrence of both conditions. Among those with DT, a higher proportion had thrombocytopenia. Those with seizures had lower blood platelet count and a higher proportion of them had thrombocytopenia. The sensitivity and specificity of thrombocytopenia for the development of DT during the treatment period was 70% and 69%, respectively. The positive predictive value (PPV) was 6% and the negative predictive value (NPV) was 99%. For the development of seizures, the figure for sensitivity was 75% and for specificity 69%. The figures for PPV and NPV were similar as those for the development of DT.\n\n\nCONCLUSIONS\nThrombocytopenia is more frequent in patients who develop severe AWS (DT or seizures). The findings, including the high NPV of thrombocytopenia, must be interpreted with caution due to the small number of patients who developed AWS. Further studies replicating the present finding are therefore needed before the clinical usefulness can be considered.",
"title": ""
},
{
"docid": "08d8e372c5ae4eef9848552ee87fbd64",
"text": "What chiefly distinguishes cerebral cortex from other parts of the central nervous system is the great diversity of its cell types and inter-connexions. It would be astonishing if such a structure did not profoundly modify the response patterns of fibres coming into it. In the cat's visual cortex, the receptive field arrangements of single cells suggest that there is indeed a degree of complexity far exceeding anything yet seen at lower levels in the visual system. In a previous paper we described receptive fields of single cortical cells, observing responses to spots of light shone on one or both retinas (Hubel & Wiesel, 1959). In the present work this method is used to examine receptive fields of a more complex type (Part I) and to make additional observations on binocular interaction (Part II). This approach is necessary in order to understand the behaviour of individual cells, but it fails to deal with the problem of the relationship of one cell to its neighbours. In the past, the technique of recording evoked slow waves has been used with great success in studies of functional anatomy. It was employed by Talbot & Marshall (1941) and by Thompson, Woolsey & Talbot (1950) for mapping out the visual cortex in the rabbit, cat, and monkey. Daniel & Whitteiidge (1959) have recently extended this work in the primate. Most of our present knowledge of retinotopic projections, binocular overlap, and the second visual area is based on these investigations. Yet the method of evoked potentials is valuable mainly for detecting behaviour common to large populations of neighbouring cells; it cannot differentiate functionally between areas of cortex smaller than about 1 mm2. To overcome this difficulty a method has in recent years been developed for studying cells separately or in small groups during long micro-electrode penetrations through nervous tissue. Responses are correlated with cell location by reconstructing the electrode tracks from histological material. These techniques have been applied to CAT VISUAL CORTEX 107 the somatic sensory cortex of the cat and monkey in a remarkable series of studies by Mountcastle (1957) and Powell & Mountcastle (1959). Their results show that the approach is a powerful one, capable of revealing systems of organization not hinted at by the known morphology. In Part III of the present paper we use this method in studying the functional architecture of the visual cortex. It helped us attempt to explain on anatomical …",
"title": ""
}
] |
scidocsrr
|
c68f2bb41423359e759cc58604002820
|
Generating Natural Adversarial Examples
|
[
{
"docid": "c81e823de071ae451420326e9fbb2e3d",
"text": "Deep latent variable models, trained using variational autoencoders or generative adversarial networks, are now a key technique for representation learning of continuous structures. However, applying similar methods to discrete structures, such as text sequences or discretized images, has proven to be more challenging. In this work, we propose a flexible method for training deep latent variable models of discrete structures. Our approach is based on the recently-proposed Wasserstein autoencoder (WAE) which formalizes the adversarial autoencoder (AAE) as an optimal transport problem. We first extend this framework to model discrete sequences, and then further explore different learned priors targeting a controllable representation. This adversarially regularized autoencoder (ARAE) allows us to generate natural textual outputs as well as perform manipulations in the latent space to induce change in the output space. Finally we show that the latent representation can be trained to perform unaligned textual style transfer, giving improvements both in automatic/human evaluation compared to existing methods.",
"title": ""
},
{
"docid": "49942573c60fa910369b81c44447a9b1",
"text": "Generic generation and manipulation of text is challenging and has limited success compared to recent deep generative modeling in visual domain. This paper aims at generating plausible text sentences, whose attributes are controlled by learning disentangled latent representations with designated semantics. We propose a new neural generative model which combines variational auto-encoders (VAEs) and holistic attribute discriminators for effective imposition of semantic structures. The model can alternatively be seen as enhancing VAEs with the wake-sleep algorithm for leveraging fake samples as extra training data. With differentiable approximation to discrete text samples, explicit constraints on independent attribute controls, and efficient collaborative learning of generator and discriminators, our model learns interpretable representations from even only word annotations, and produces short sentences with desired attributes of sentiment and tenses. Quantitative experiments using trained classifiers as evaluators validate the accuracy of sentence and attribute generation.",
"title": ""
},
{
"docid": "d310779b1006f90719a0ece3cf2583b2",
"text": "While neural networks have been successfully applied to many natural language processing tasks, they come at the cost of interpretability. In this paper, we propose a general methodology to analyze and interpret decisions from a neural model by observing the effects on the model of erasing various parts of the representation, such as input word-vector dimensions, intermediate hidden units, or input words. We present several approaches to analyzing the effects of such erasure, from computing the relative difference in evaluation metrics, to using reinforcement learning to erase the minimum set of input words in order to flip a neural model’s decision. In a comprehensive analysis of multiple NLP tasks, including linguistic feature classification, sentence-level sentiment analysis, and document level sentiment aspect prediction, we show that the proposed methodology not only offers clear explanations about neural model decisions, but also provides a way to conduct error analysis on neural models.",
"title": ""
}
] |
[
{
"docid": "727c36aac7bd0327f3edb85613dcf508",
"text": "The interpretation of adjective-noun pairs plays a crucial role in tasks such as recognizing textual entailment. Formal semantics often places adjectives into a taxonomy which should dictate adjectives’ entailment behavior when placed in adjective-noun compounds. However, we show experimentally that the behavior of subsective adjectives (e.g. red) versus non-subsective adjectives (e.g. fake) is not as cut and dry as often assumed. For example, inferences are not always symmetric: while ID is generally considered to be mutually exclusive with fake ID, fake ID is considered to entail ID. We discuss the implications of these findings for automated natural language understanding.",
"title": ""
},
{
"docid": "6a6238bb56eacc7d8ecc8f15f753b745",
"text": "Privacy-preservation has emerged to be a major concern in devising a data mining system. But, protecting the privacy of data mining input does not guarantee a privacy-preserved output. This paper focuses on preserving the privacy of data mining output and particularly the output of classification task. Further, instead of static datasets, we consider the classification of continuously arriving data streams: a rapidly growing research area. Due to the challenges of data stream classification such as vast volume, a mixture of labeled and unlabeled instances throughout the stream and timely classifier publication, enforcing privacy-preservation techniques becomes even more challenging. In order to achieve this goal, we propose a systematic method for preserving output-privacy in data stream classification that addresses several applications like loan approval, credit card fraud detection, disease outbreak or biological attack detection. Specifically, we propose an algorithm named Diverse and k-Anonymized HOeffding Tree (DAHOT) that is an amalgamation of popular data stream classification algorithm Hoeffding tree and a variant of k-anonymity and l-diversity principles. The empirical results on real and synthetic data streams verify the effectiveness of DAHOT as compared to its bedrock Hoeffding tree and two other techniques, one that learns sanitized decision trees from sampled data stream and other technique that uses ensemble-based classification. DAHOT guarantees to preserve the private patterns while classifying the data streams accurately.",
"title": ""
},
{
"docid": "0834473b45a9b009da458a8d5009cfa0",
"text": "Popular open-source software projects receive and review contributions from a diverse array of developers, many of whom have little to no prior involvement with the project. A recent survey reported that reviewers consider conformance to the project's code style to be one of the top priorities when evaluating code contributions on Github. We propose to quantitatively evaluate the existence and effects of this phenomenon. To this aim we use language models, which were shown to accurately capture stylistic aspects of code. We find that rejected changesets do contain code significantly less similar to the project than accepted ones; furthermore, the less similar changesets are more likely to be subject to thorough review. Armed with these results we further investigate whether new contributors learn to conform to the project style and find that experience is positively correlated with conformance to the project's code style.",
"title": ""
},
{
"docid": "487d1c9aa22c605d619414ecce3661bd",
"text": "Formation of dental caries is caused by the colonization and accumulation of oral microorganisms and extracellular polysaccharides that are synthesized from sucrose by glucosyltransferase of Streptococcus mutans. The production of glucosyltransferase from oral microorganisms was attempted, and it was found that Streptococcus mutans produced highest activity of the enzyme. Ethanolic extracts of propolis (EEP) were examined whether EEP inhibit the enzyme activity and growth of the bacteria or not. All EEP from various regions in Brazil inhibited both glucosyltransferase activity and growth of S. mutans, but one of the propolis from Rio Grande do Sul (RS2) demonstrated the highest inhibition of the enzyme activity and growth of the bacteria. It was also found that propolis (RS2) contained the highest concentrations of pinocembrin and galangin.",
"title": ""
},
{
"docid": "7d5d2f819a5b2561db31645d534836b8",
"text": "Recent work has suggested enhancing Bloom filters by using a pre-filter, based on applying machine learning to model the data set the Bloom filter is meant to represent. Here we model such learned Bloom filters, clarifying what guarantees can and cannot be associated with such a structure.",
"title": ""
},
{
"docid": "ad69ea9cc0db8bf43e904ff67716a4b3",
"text": "Fragile X syndrome characterized by intellectual disability (ID), facial dysmorphism, and postpubertal macroorchidism is the most common monogenic cause of ID. It is typically induced by an expansion of a CGG repeat in the fragile X mental retardation 1 (FMR1) gene on Xq27 to more than 200 repeats. Only rarely patients have atypical mutations in the FMR1 gene such as point mutations, deletions, or unmethylated/partially methylated full mutations. Most of these patients show a minor phenotype or even appear clinically healthy. Here, we report the dysmorphism and clinical features of a 17-year-old boy with a partially methylated full mutation of approximately 250 repeats. Diagnosis was made subsequently to the evaluation of a FMR1 premutation as the cause for maternal premature ovarian failure. Dysmorphic evaluation revealed no strikingly long face, no prominent forehead/frontal bossing, no prominent mandible, no macroorchidism, and a head circumference in the lower normal range. Acquisition of a driving license for mopeds and unaccompanied rides by public transport in his home province indicate rather mild ID (IQ = 58). Conclusion: This adolescent demonstrates that apart from only minor ID, patients with a partially methylated FMR1 full mutation present less to absent pathognomonic facial dysmorphism, thus emphasizing the impact of family history for a straightforward clinical diagnosis.",
"title": ""
},
{
"docid": "d5fbbd249842b40f3a81f1229213c528",
"text": "In recent years, spatial applications have become more and more important in both scientific research and industry. Spatial query processing is the fundamental functioning component to support spatial applications. However, the state-of-the-art techniques of spatial query processing are facing significant challenges as the data expand and user accesses increase. In this paper we propose and implement a novel scheme (named VegaGiStore) to provide efficient spatial query processing over big spatial data and numerous concurrent user queries. Firstly, a geography-aware approach is proposed to organize spatial data in terms of geographic proximity, and this approach can achieve high aggregate I/O throughput. Secondly, in order to improve data retrieval efficiency, we design a two-tier distributed spatial index for efficient pruning of the search space. Thirdly, we propose an \"indexing + MapReduce'' data processing architecture to improve the computation capability of spatial query. Performance evaluations of the real-deployed VegaGiStore system confirm its effectiveness.",
"title": ""
},
{
"docid": "84646992c6de3b655f8ccd2bda3e6d4c",
"text": "0957-4174/$ see front matter 2012 Elsevier Ltd. A doi:10.1016/j.eswa.2012.02.064 ⇑ Corresponding author. E-mail addresses: raffaele.cappelli@unibo.it (R. C bo.it (M. Ferrara). This paper proposes a novel fingerprint retrieval system that combines level-1 (local orientation and frequencies) and level-2 (minutiae) features. Various scoreand rank-level fusion strategies and a novel hybrid fusion approach are evaluated. Extensive experiments are carried out on six public databases and a systematic comparison is made with eighteen retrieval methods and seventeen exclusive classification techniques published in the literature. The novel approach achieves impressive results: its retrieval accuracy is definitely higher than competing state-of-the-art methods, with error rates that in some cases are even one or two orders of magnitude smaller. 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "d612aeb7f7572345bab8609571f4030d",
"text": "In conventional supervised training, a model is trained to fit all the training examples. However, having a monolithic model may not always be the best strategy, as examples could vary widely. In this work, we explore a different learning protocol that treats each example as a unique pseudo-task, by reducing the original learning problem to a few-shot meta-learning scenario with the help of a domain-dependent relevance function.1 When evaluated on the WikiSQL dataset, our approach leads to faster convergence and achieves 1.1%–5.4% absolute accuracy gains over the non-meta-learning counterparts.",
"title": ""
},
{
"docid": "3e9f98a1aa56e626e47a93b7973f999a",
"text": "This paper presents a sociocultural knowledge ontology (OntoSOC) modeling approach. OntoSOC modeling approach is based on Engeström‟s Human Activity Theory (HAT). That Theory allowed us to identify fundamental concepts and relationships between them. The top-down precess has been used to define differents sub-concepts. The modeled vocabulary permits us to organise data, to facilitate information retrieval by introducing a semantic layer in social web platform architecture, we project to implement. This platform can be considered as a « collective memory » and Participative and Distributed Information System (PDIS) which will allow Cameroonian communities to share an co-construct knowledge on permanent organized activities.",
"title": ""
},
{
"docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
},
{
"docid": "368c769f4427c213c68d1b1d7a0e4ca9",
"text": "The goal of this paper is to perform 3D object detection in the context of autonomous driving. Our method aims at generating a set of high-quality 3D object proposals by exploiting stereo imagery. We formulate the problem as minimizing an energy function that encodes object size priors, placement of objects on the ground plane as well as several depth informed features that reason about free space, point cloud densities and distance to the ground. We then exploit a CNN on top of these proposals to perform object detection. In particular, we employ a convolutional neural net (CNN) that exploits context and depth information to jointly regress to 3D bounding box coordinates and object pose. Our experiments show significant performance gains over existing RGB and RGB-D object proposal methods on the challenging KITTI benchmark. When combined with the CNN, our approach outperforms all existing results in object detection and orientation estimation tasks for all three KITTI object classes. Furthermore, we experiment also with the setting where LIDAR information is available, and show that using both LIDAR and stereo leads to the best result.",
"title": ""
},
{
"docid": "5cc458548f26619b4cc632f25ea2e9f8",
"text": "As a consequence of the popularity of big data, many users with a variety of backgrounds seek to extract high level information from datasets collected from various sources and combined using data integration techniques. A major challenge for research in data management is to develop tools to assist users in explaining observed query outputs. In this paper we introduce a principled approach to provide explanations for answers to SQL queries based on intervention: removal of tuples from the database that significantly affect the query answers. We provide a formal definition of intervention in the presence of multiple relations which can interact with each other through foreign keys. First we give a set of recursive rules to compute the intervention for any given explanation in polynomial time (data complexity). Then we give simple and efficient algorithms based on SQL queries that can compute the top-K explanations by using standard database management systems under certain conditions. We evaluate the quality and performance of our approach by experiments on real datasets.",
"title": ""
},
{
"docid": "869889e8be00663e994631b17061479b",
"text": "In this study we approach the problem of distinguishing general profanity from hate speech in social media, something which has not been widely considered. Using a new dataset annotated specifically for this task, we employ supervised classification along with a set of features that includes n-grams, skip-grams and clustering-based word representations. We apply approaches based on single classifiers as well as more advanced ensemble classifiers and stacked generalization, achieving the best result of 80% accuracy for this 3-class classification task. Analysis of the results reveals that discriminating hate speech and profanity is not a simple task, which may require features that capture a deeper understanding of the text not always possible with surface n-grams. The variability of gold labels in the annotated data, due to differences in the subjective adjudications of the annotators, is also an issue. Other directions for future work are discussed.",
"title": ""
},
{
"docid": "bd33ed4cde24e8ec16fb94cf543aad8e",
"text": "Users' locations are important to many applications such as targeted advertisement and news recommendation. In this paper, we focus on the problem of profiling users' home locations in the context of social network (Twitter). The problem is nontrivial, because signals, which may help to identify a user's location, are scarce and noisy. We propose a unified discriminative influence model, named as UDI, to solve the problem. To overcome the challenge of scarce signals, UDI integrates signals observed from both social network (friends) and user-centric data (tweets) in a unified probabilistic framework. To overcome the challenge of noisy signals, UDI captures how likely a user connects to a signal with respect to 1) the distance between the user and the signal, and 2) the influence scope of the signal. Based on the model, we develop local and global location prediction methods. The experiments on a large scale data set show that our methods improve the state-of-the-art methods by 13%, and achieve the best performance.",
"title": ""
},
{
"docid": "1bcb0d930848fab3e5b8aee3c983e45b",
"text": "BACKGROUND\nLycopodium clavatum (Lyc) is a widely used homeopathic medicine for the liver, urinary and digestive disorders. Recently, acetyl cholinesterase (AchE) inhibitory activity has been found in Lyc alkaloid extract, which could be beneficial in dementia disorder. However, the effect of Lyc has not yet been explored in animal model of memory impairment and on cerebral blood flow.\n\n\nAIM\nThe present study was planned to explore the effect of Lyc on learning and memory function and cerebral blood flow (CBF) in intracerebroventricularly (ICV) administered streptozotocin (STZ) induced memory impairment in rats.\n\n\nMATERIALS AND METHODS\nMemory deficit was induced by ICV administration of STZ (3 mg/kg) in rats on 1st and 3rd day. Male SD rats were treated with Lyc Mother Tincture (MT) 30, 200 and 1000 for 17 days. Learning and memory was evaluated by Morris water maze test on 14th, 15th and 16th day. CBF was measured by Laser Doppler flow meter on 17th day.\n\n\nRESULTS\nSTZ (ICV) treated rats showed impairment in learning and memory along with reduced CBF. Lyc MT and 200 showed improvement in learning and memory. There was increased CBF in STZ (ICV) treated rats at all the potencies of Lyc studied.\n\n\nCONCLUSION\nThe above study suggests that Lyc may be used as a drug of choice in condition of memory impairment due to its beneficial effect on CBF.",
"title": ""
},
{
"docid": "4f63c03e9a4d2049535a48cd7e8835d8",
"text": "This article reports on a histological and morphological study on the induction of in vitro flowering in vegetatively propagated plantlets from different date palm cultivars. The study aimed to further explore the control of in vitro flower induction in relation to the photoperiodic requirements in date palm and to come up with a novel system that may allow for early sex determination through plant cycle reduction. In fact, the in vitro reversion of a shoot meristem from a vegetative to a reproductive state was achieved within 1–5 months depending on the variety considered. This reversion was accompanied by several morphological transformations that affected the apical part of the leafy bud corresponding mainly to a size increase of the prefloral meristem zone followed by the appearance of an inflorescence. The flowers that were produced in vitro were histologically and morphologically similar to those formed in vivo. The histological examination of the in vitro flowering induction process showed that the conversion into inflorescences involved the entire apical vegetative meristem of the plantlet used as a starting material and brought about a change in its anatomical structure without affecting its phyllotaxis and the leaf shape. Through alternating between hormone-free and hormone-containing media under different light/dark conditions, the highest flower induction rates were obtained with a basal Murashige and Skoog medium. A change in the architectural model of date palm was induced because unlike the natural lateral flowering, in vitro flowering was terminal. Such in vitro flower induction allowed a significant reduction in plant cycle and can, therefore, be considered a promising candidate to save time for future improvement and selection programs in date palm.",
"title": ""
},
{
"docid": "340f64ed182a54ef617d7aa2ffeac138",
"text": "Compared with animals, plants generally possess a high degree of developmental plasticity and display various types of tissue or organ regeneration. This regenerative capacity can be enhanced by exogenously supplied plant hormones in vitro, wherein the balance between auxin and cytokinin determines the developmental fate of regenerating organs. Accumulating evidence suggests that some forms of plant regeneration involve reprogramming of differentiated somatic cells, whereas others are induced through the activation of relatively undifferentiated cells in somatic tissues. We summarize the current understanding of how plants control various types of regeneration and discuss how developmental and environmental constraints influence these regulatory mechanisms.",
"title": ""
},
{
"docid": "45cff09810b8741d8be1010aa6ff3000",
"text": "This paper discusses experience in applying time harmonic three-dimensional (3D) finite element (FE) analysis in analyzing an axial-flux (AF) solid-rotor induction motor (IM). The motor is a single rotor - single stator AF IM. The construction presented in this paper has not been analyzed before in any technical documents. The field analysis and the comparison of torque calculation results of the 3D calculations with measured torque results are presented",
"title": ""
}
] |
scidocsrr
|
f01b3bcc1e3f6ba62a91414f97d33d8d
|
Marketplace or Reseller?
|
[
{
"docid": "c7d629a83de44e17a134a785795e26d8",
"text": "How can firms profitably give away free products? This paper provides a novel answer and articulates tradeoffs in a space of information product design. We introduce a formal model of two-sided network externalities based in textbook economics—a mix of Katz & Shapiro network effects, price discrimination, and product differentiation. Externality-based complements, however, exploit a different mechanism than either tying or lock-in even as they help to explain many recent strategies such as those of firms selling operating systems, Internet browsers, games, music, and video. The model presented here argues for three simple but useful results. First, even in the absence of competition, a firm can rationally invest in a product it intends to give away into perpetuity. Second, we identify distinct markets for content providers and end consumers and show that either can be a candidate for a free good. Third, product coupling across markets can increase consumer welfare even as it increases firm profits. The model also generates testable hypotheses on the size and direction of network effects while offering insights to regulators seeking to apply antitrust law to network markets. ACKNOWLEDGMENTS: We are grateful to participants of the 1999 Workshop on Information Systems and Economics, the 2000 Association for Computing Machinery SIG E-Commerce, the 2000 International Conference on Information Systems, the 2002 Stanford Institute for Theoretical Economics (SITE) workshop on Internet Economics, the 2003 Insitut D’Economie Industrielle second conference on “The Economics of the Software and Internet Industries,” as well as numerous participants at university seminars. We wish to thank Tom Noe for helpful observations on oligopoly markets, Lones Smith, Kai-Uwe Kuhn, and Jovan Grahovac for corrections and model generalizations, Jeff MacKie-Mason for valuable feedback on model design and bundling, and Hal Varian for helpful comments on firm strategy and model implications. Frank Fisher provided helpful advice on and knowledge of the Microsoft trial. Jean Tirole provided useful suggestions and examples, particularly in regard to credit card markets. Paul Resnick proposed the descriptive term “internetwork” externality to describe two-sided network externalities. Tom Eisenmann provided useful feedback and examples. We also thank Robert Gazzale, Moti Levi, and Craig Newmark for their many helpful observations. This research has been supported by NSF Career Award #IIS 9876233. For an earlier version of the paper that also addresses bundling and competition, please see “Information Complements, Substitutes, and Strategic Product Design,” November 2000, http://ssrn.com/abstract=249585.",
"title": ""
},
{
"docid": "4a87e61106125ffdd49c42517ce78b87",
"text": "Due to network effects and switching costs, platform providers often become entrenched. To dislodge them, entrants generally must offer revolutionary products. We explore a second path to platform leadership change that does not rely on Schumpeterian creative destruction: platform envelopment. By leveraging common components and shared user relationships, one platform provider can move into another’s market, combining its own functionality with the target’s in a multi-platform bundle. Dominant firms otherwise sheltered from entry by standalone rivals may be vulnerable to an adjacent platform provider’s envelopment attack. We analyze conditions under which envelopment strategies are likely to succeed.",
"title": ""
},
{
"docid": "58c2f9f5f043f87bc51d043f70565710",
"text": "T strategic use of first-party content by two-sided platforms is driven by two key factors: the nature of buyer and seller expectations (favorable versus unfavorable) and the nature of the relationship between first-party content and third-party content (complements or substitutes). Platforms facing unfavorable expectations face an additional constraint: their prices and first-party content investment need to be such that low (zero) participation equilibria are eliminated. This additional constraint typically leads them to invest more (less) in first-party content relative to platforms facing favorable expectations when firstand third-party content are substitutes (complements). These results hold with both simultaneous and sequential entry of the two sides. With two competing platforms—incumbent facing favorable expectations and entrant facing unfavorable expectations— and multi-homing on one side of the market, the incumbent always invests (weakly) more in first-party content relative to the case in which it is a monopolist.",
"title": ""
}
] |
[
{
"docid": "14e5e95ae4422120f5f1bb8cccb2b186",
"text": "We describe an approach to understand the peculiar and counterintuitive generalization properties of deep neural networks. The approach involves going beyond worst-case theoretical capacity control frameworks that have been popular in machine learning in recent years to revisit old ideas in the statistical mechanics of neural networks. Within this approach, we present a prototypical Very Simple Deep Learning (VSDL) model, whose behavior is controlled by two control parameters, one describing an effective amount of data, or load, on the network (that decreases when noise is added to the input), and one with an effective temperature interpretation (that increases when algorithms are early stopped). Using this model, we describe how a very simple application of ideas from the statistical mechanics theory of generalization provides a strong qualitative description of recently-observed empirical results regarding the inability of deep neural networks not to overfit training data, discontinuous learning and sharp transitions in the generalization properties of learning algorithms, etc.",
"title": ""
},
{
"docid": "8bcda11934a1eaff4b41cbe695bbfc4f",
"text": "Back-propagation has been the workhorse of recent successes of deep learning but it relies on infinitesimal effects (partial derivatives) in order to perform credit assignment. This could become a serious issue as one considers deeper and more non-linear functions, e.g., consider the extreme case of non-linearity where the relation between parameters and cost is actually discrete. Inspired by the biological implausibility of back-propagation, a few approaches have been proposed in the past that could play a similar credit assignment role as backprop. In this spirit, we explore a novel approach to credit assignment in deep networks that we call target propagation. The main idea is to compute targets rather than gradients, at each layer. Like gradients, they are propagated backwards. In a way that is related but different from previously proposed proxies for back-propagation which rely on a backwards network with symmetric weights, target propagation relies on auto-encoders at each layer. Unlike back-propagation, it can be applied even when units exchange stochastic bits rather than real numbers. We show that a linear correction for the imperfectness of the auto-encoders is very effective to make target propagation actually work, along with adaptive learning rates.",
"title": ""
},
{
"docid": "a9e27b52ed31b47c23b1281c28556487",
"text": "Nuclear receptors are integrators of hormonal and nutritional signals, mediating changes to metabolic pathways within the body. Given that modulation of lipid and glucose metabolism has been linked to diseases including type 2 diabetes, obesity and atherosclerosis, a greater understanding of pathways that regulate metabolism in physiology and disease is crucial. The liver X receptors (LXRs) and the farnesoid X receptors (FXRs) are activated by oxysterols and bile acids, respectively. Mounting evidence indicates that these nuclear receptors have essential roles, not only in the regulation of cholesterol and bile acid metabolism but also in the integration of sterol, fatty acid and glucose metabolism.",
"title": ""
},
{
"docid": "77b1e7b6f91cf5e2d4380a9d117ae7d9",
"text": "This paper theoretically introduces and develops a new operation diagram (OPD) and parameter estimator for the synchronous reluctance machine (SynRM). The OPD demonstrates the behavior of the machine's main performance parameters, such as torque, current, voltage, frequency, flux, power factor (PF), and current angle, all in one graph. This diagram can easily be used to describe different control strategies, possible operating conditions, both below- and above-rated speeds, etc. The saturation effect is also discussed with this diagram by finite-element-method calculations. A prototype high-performance SynRM is designed for experimental studies, and then, both machines' [corresponding induction machine (IM)] performances at similar loading and operation conditions are tested, measured, and compared to demonstrate the potential of SynRM. The laboratory measurements (on a standard 15-kW Eff1 IM and its counterpart SynRM) show that SynRM has higher efficiency, torque density, and inverter rating and lower rotor temperature and PF in comparison to IM at the same winding-temperature-rise condition. The measurements show that the torque capability of SynRM closely follows that of IM.",
"title": ""
},
{
"docid": "30740e33cdb2c274dbd4423e8f56405e",
"text": "A conspicuous ability of the brain is to seamlessly assimilate and process spatial and temporal features of sensory stimuli. This ability is indispensable for the recognition of natural stimuli. Yet, a general computational framework for processing spatiotemporal stimuli remains elusive. Recent theoretical and experimental work suggests that spatiotemporal processing emerges from the interaction between incoming stimuli and the internal dynamic state of neural networks, including not only their ongoing spiking activity but also their 'hidden' neuronal states, such as short-term synaptic plasticity.",
"title": ""
},
{
"docid": "9adf653a332e07b8aa055b62449e1475",
"text": "False-belief task have mainly been associated with the explanatory notion of the theory of mind and the theory-theory. However, it has often been pointed out that this kind of highlevel reasoning is computational and time expensive. During the last decades, the idea of embodied intelligence, i.e. complex behavior caused by sensorimotor contingencies, has emerged in both the fields of neuroscience, psychology and artificial intelligence. Viewed from this perspective, the failing in a false-belief test can be the result of the impairment to recognize and track others’ sensorimotor contingencies and affordances. Thus, social cognition is explained in terms of lowlevel signals instead of high-level reasoning. In this work, we present a generative model for optimal action selection which simultaneously can be employed to make predictions of others’ actions. As we base the decision making on a hidden state representation of sensorimotor signals, this model is in line with the ideas of embodied intelligence. We demonstrate how the tracking of others’ hidden states can give rise to correct falsebelief inferences, while a lack thereof leads to failing. With this work, we want to emphasize the importance of sensorimotor contingencies in social cognition, which might be a key to artificial, socially intelligent systems.",
"title": ""
},
{
"docid": "3e43ee5513a0bd8bea8b1ea5cf8cefec",
"text": "Hans-Juergen Boehm Computer Science Department, Rice University, Houston, TX 77251-1892, U.S.A. Mark Weiser Xerox Corporation, Palo Alto Research Center, 3333 Coyote Hill Road, Palo Alto, CA 94304, U.S.A. A later version of this paper appeared in Software Practice and Experience 18, 9, pp. 807-820. Copyright 1988 by John Wiley and Sons, Ld. The publishers rules appear to allow posting of preprints, but only on the author’s web site.",
"title": ""
},
{
"docid": "4107fe17e6834f96a954e13cbb920f78",
"text": "Non-orthogonal multiple access (NOMA) can support more users than OMA techniques using the same wireless resources, which is expected to support massive connectivity for Internet of Things in 5G. Furthermore, in order to reduce the transmission latency and signaling overhead, grant-free transmission is highly expected in the uplink NOMA systems, where user activity has to be detected. In this letter, by exploiting the temporal correlation of active user sets, we propose a dynamic compressive sensing (DCS)-based multi-user detection (MUD) to realize both user activity and data detection in several continuous time slots. In particular, as the temporal correlation of the active user sets between adjacent time slots exists, we can use the estimated active user set in the current time slot as the prior information to estimate the active user set in the next time slot. Simulation results show that the proposed DCS-based MUD can achieve much better performance than that of the conventional CS-based MUD in NOMA systems.",
"title": ""
},
{
"docid": "4afbb5f877f3920dccdf60f6f4dfbf91",
"text": "Handling degenerate rotation-only camera motion is a challenge for keyframe-based simultaneous localization and mapping with six degrees of freedom. Existing systems usually filter corresponding keyframe candidates, resulting in mapping starvation and tracking failure. We propose to employ these otherwise discarded keyframes to build up local panorama maps registered in the 3D map. Thus, the system is able to maintain tracking during rotational camera motions. Additionally, we seek to actively associate panoramic and 3D map data for improved 3D mapping through the triangulation of more new 3D map features. We demonstrate the efficacy of our approach in several evaluations that show how the combined system handles rotation only camera motion while creating larger and denser maps compared to a standard SLAM system.",
"title": ""
},
{
"docid": "8a6b9930a9dccb0555980140dd6c4ae4",
"text": "The mass shooting at Sandy Hook elementary school on December 14, 2012 catalyzed a year of active debate and legislation on gun control in the United States. Social media hosted an active public discussion where people expressed their support and opposition to a variety of issues surrounding gun legislation. In this paper, we show how a contentbased analysis of Twitter data can provide insights and understanding into this debate. We estimate the relative support and opposition to gun control measures, along with a topic analysis of each camp by analyzing over 70 million gun-related tweets from 2013. We focus on spikes in conversation surrounding major events related to guns throughout the year. Our general approach can be applied to other important public health and political issues to analyze the prevalence and nature of public opinion.",
"title": ""
},
{
"docid": "725e92f13cc7c03b890b5d2e7380b321",
"text": "Developing algorithms for solving high-dimensional partial differential equations (PDEs) has been an exceedingly difficult task for a long time, due to the notoriously difficult problem known as “the curse of dimensionality”. This paper presents a deep learning-based approach that can handle general high-dimensional parabolic PDEs. To this end, the PDEs are reformulated as a control theory problem and the gradient of the unknown solution is approximated by neural networks, very much in the spirit of deep reinforcement learning with the gradient acting as the policy function. Numerical results on examples including the nonlinear Black-Scholes equation, the Hamilton-Jacobi-Bellman equation, and the Allen-Cahn equation suggest that the proposed algorithm is quite effective in high dimensions, in terms of both accuracy and speed. This opens up new possibilities in economics, finance, operational research, and physics, by considering all participating agents, assets, resources, or particles together at the same time, instead of making ad hoc assumptions on their inter-relationships.",
"title": ""
},
{
"docid": "8b158bfaf805974c1f8478c7ac051426",
"text": "BACKGROUND AND AIMS\nThe analysis of large-scale genetic data from thousands of individuals has revealed the fact that subtle population genetic structure can be detected at levels that were previously unimaginable. Using the Human Genome Diversity Panel as reference (51 populations - 650,000 SNPs), this works describes a systematic evaluation of the resolution that can be achieved for the inference of genetic ancestry, even when small panels of genetic markers are used.\n\n\nMETHODS AND RESULTS\nA comprehensive investigation of human population structure around the world is undertaken by leveraging the power of Principal Components Analysis (PCA). The problem is dissected into hierarchical steps and a decision tree for the prediction of individual ancestry is proposed. A complete leave-one-out validation experiment demonstrates that, using all available SNPs, assignment of individuals to their self-reported populations of origin is essentially perfect. Ancestry informative genetic markers are selected using two different metrics (In and correlation with PCA scores). A thorough cross-validation experiment indicates that, in most cases here, the number of SNPs needed for ancestry inference can be successfully reduced to less than 0.1% of the original 650,000 while retaining close to 100% accuracy. This reduction can be achieved using a novel clustering-based redundancy removal algorithm that is also introduced here. Finally, the applicability of our suggested SNP panels is tested on HapMap Phase 3 populations.\n\n\nCONCLUSION\nThe proposed methods and ancestry informative marker panels, in combination with the increasingly more comprehensive databases of human genetic variation, open new horizons in a variety of fields, ranging from the study of human evolution and population history, to medical genetics and forensics.",
"title": ""
},
{
"docid": "2052d056e4f4831ebd9992882e8e4015",
"text": "Soccer video semantic analysis has attracted a lot of researchers in the last few years. Many methods of machine learning have been applied to this task and have achieved some positive results, but the neural network method has not yet been used to this task from now. Taking into account the advantages of Convolution Neural Network(CNN) in fully exploiting features and the ability of Recurrent Neural Network(RNN) in dealing with the temporal relation, we construct a deep neural network to detect soccer video event in this paper. First we determine the soccer video event boundary which we used Play-Break(PB) segment by the traditional method. Then we extract the semantic features of key frames from PB segment by pre-trained CNN, and at last use RNN to map the semantic features of PB to soccer event types, including goal, goal attempt, card and corner. Because there is no suitable and effective dataset, we classify soccer frame images into nine categories according to their different semantic views and then construct a dataset called Soccer Semantic Image Dataset(SSID) for training CNN. The sufficient experiments evaluated on 30 soccer match videos demonstrate the effectiveness of our method than state-of-art methods.",
"title": ""
},
{
"docid": "7a6181a65121ce577bc77711ce7a095c",
"text": "We present a new, general, and real-time technique for soft global illumination in low-frequency environmental lighting. It accumulates over relatively few spherical proxies which approximate the light blocking and re-radiating effect of dynamic geometry. Soft shadows are computed by accumulating log visibility vectors for each sphere proxy as seen by each receiver point. Inter-reflections are computed by accumulating vectors representing the proxy's unshadowed radiance when illuminated by the environment. Both vectors capture low-frequency directional dependence using the spherical harmonic basis. We also present a new proxy accumulation strategy that splats each proxy to receiver pixels in image space to collect its shadowing and indirect lighting contribution. Our soft GI rendering pipeline unifies direct and indirect soft effects with a simple accumulation strategy that maps entirely to the GPU and outperforms previous vertex-based methods.",
"title": ""
},
{
"docid": "2d98a90332278049d61a6eb431317216",
"text": "Feature extraction is a method of capturing visual content of an image. The feature extraction is the process to represent raw image in its reduced form to facilitate decision making such as pattern classification. We have tried to address the problem of classification MRI brain images by creating a robust and more accurate classifier which can act as an expert assistant to medical practitioners. The objective of this paper is to present a novel method of feature selection and extraction. This approach combines the Intensity, Texture, shape based features and classifies the tumor as white matter, Gray matter, CSF, abnormal and normal area. The experiment is performed on 140 tumor contained brain MR images from the Internet Brain Segmentation Repository. The proposed technique has been carried out over a larger database as compare to any previous work and is more robust and effective. PCA and Linear Discriminant Analysis (LDA) were applied on the training sets. The Support Vector Machine (SVM) classifier served as a comparison of nonlinear techniques Vs linear ones. PCA and LDA methods are used to reduce the number of features used. The feature selection using the proposed technique is more beneficial as it analyses the data according to grouping class variable and gives reduced feature set with high classification accuracy.",
"title": ""
},
{
"docid": "b4a2c3679fe2490a29617c6a158b9dbc",
"text": "We present a general approach to automating ethical decisions, drawing on machine learning and computational social choice. In a nutshell, we propose to learn a model of societal preferences, and, when faced with a specific ethical dilemma at runtime, efficiently aggregate those preferences to identify a desirable choice. We provide a concrete algorithm that instantiates our approach; some of its crucial steps are informed by a new theory of swap-dominance efficient voting rules. Finally, we implement and evaluate a system for ethical decision making in the autonomous vehicle domain, using preference data collected from 1.3 million people through the Moral Machine website.",
"title": ""
},
{
"docid": "61e460c93d82acf80983f5947154b139",
"text": "The Internet has many benefits, some of them are to gain knowledge and gain the latest information. The internet can be used by anyone and can contain any information, including negative content such as pornographic content, radicalism, racial intolerance, violence, fraud, gambling, security and drugs. Those contents cause the number of children victims of pornography on social media increasing every year. Based on that, it needs a system that detects pornographic content on social media. This study aims to determine the best model to detect the pornographic content. Model selection is determined based on unigram and bigram features, classification algorithm, k-fold cross validation. The classification algorithm used is Support Vector Machine and Naive Bayes. The highest F1-score is yielded by the model with combination of Support Vector Machine, most common words, and combination of unigram and bigram, which returns F1-Score value of 91.14%.",
"title": ""
},
{
"docid": "85c32427a1a6c04e3024d22b03b26725",
"text": "Monte Carlo tree search (MCTS) is extremely popular in computer Go which determines each action by enormous simulations in a broad and deep search tree. However, human experts select most actions by pattern analysis and careful evaluation rather than brute search of millions of future interactions. In this paper, we propose a computer Go system that follows experts way of thinking and playing. Our system consists of two parts. The first part is a novel deep alternative neural network (DANN) used to generate candidates of next move. Compared with existing deep convolutional neural network (DCNN), DANN inserts recurrent layer after each convolutional layer and stacks them in an alternative manner. We show such setting can preserve more contexts of local features and its evolutions which are beneficial for move prediction. The second part is a long-term evaluation (LTE) module used to provide a reliable evaluation of candidates rather than a single probability from move predictor. This is consistent with human experts nature of playing since they can foresee tens of steps to give an accurate estimation of candidates. In our system, for each candidate, LTE calculates a cumulative reward after several future interactions when local variations are settled. Combining criteria from the two parts, our system determines the optimal choice of next move. For more comprehensive experiments, we introduce a new professional Go dataset (PGD), consisting of 253, 233 professional records. Experiments on GoGoD and PGD datasets show the DANN can substantially improve performance of move prediction over pure DCNN. When combining LTE, our system outperforms most relevant approaches and open engines based on",
"title": ""
},
{
"docid": "b3556499bf5d788de7c4d46100ac3a9f",
"text": "Reuse has been proposed as a microarchitecture-level mechanism to reduce the amount of executed instructions, collapsing dependencies and freeing resources for other instructions. Previous works have used reuse domains such as memory accesses, integer or not floating point, based on the reusability rate. However, these works have not studied the specific contribution of reusing different subsets of instructions for performance. In this work, we analysed the sensitivity of trace reuse to instruction subsets, comparing their efficiency to their complementary subsets. We also studied the amount of reuse that can be extracted from loops. Our experiments show that disabling trace reuse outside loops does not harm performance but reduces in 12% the number of accesses to the reuse table. Our experiments with reuse subsets show that most of the speedup can be retained even when not reusing all types of instructions previously found in the reuse domain. 1 ar X iv :1 71 1. 06 67 2v 1 [ cs .A R ] 1 7 N ov 2 01 7",
"title": ""
},
{
"docid": "8921cffb633b0ea350b88a57ef0d4437",
"text": "This paper addresses the problem of identifying likely topics of texts by their position in the text. It describes the automated training and evaluation of an Optimal Position Policy, a method of locating the likely positions of topic-bearing sentences based on genre-speci c regularities of discourse structure. This method can be used in applications such as information retrieval, routing, and text summarization.",
"title": ""
}
] |
scidocsrr
|
4d97dc47536dbc6f296ac0e89fb309cf
|
An open-source navigation system for micro aerial vehicles
|
[
{
"docid": "c12d534d219e3d249ba3da1c0956c540",
"text": "Within the research on Micro Aerial Vehicles (MAVs), the field on flight control and autonomous mission execution is one of the most active. A crucial point is the localization of the vehicle, which is especially difficult in unknown, GPS-denied environments. This paper presents a novel vision based approach, where the vehicle is localized using a downward looking monocular camera. A state-of-the-art visual SLAM algorithm tracks the pose of the camera, while, simultaneously, building an incremental map of the surrounding region. Based on this pose estimation a LQG/LTR based controller stabilizes the vehicle at a desired setpoint, making simple maneuvers possible like take-off, hovering, setpoint following or landing. Experimental data show that this approach efficiently controls a helicopter while navigating through an unknown and unstructured environment. To the best of our knowledge, this is the first work describing a micro aerial vehicle able to navigate through an unexplored environment (independently of any external aid like GPS or artificial beacons), which uses a single camera as only exteroceptive sensor.",
"title": ""
},
{
"docid": "cff9a7f38ca6699b235c774232a56f54",
"text": "This paper presents a Miniature Aerial Vehicle (MAV) capable of handsoff autonomous operation within indoor environments. Our prototype is a Quadrotor weighing approximately 600g, with a diameter of 550mm, which carries the necessary electronics for stability control, altitude control, collision avoidance and anti-drift control. This MAV is equipped with three rate gyroscopes, three accelerometers, one ultrasonic sensor, four infrared sensors, a high-speed motor controller and a flight computer. Autonomous flight tests have been carried out in a 7x6-m room.",
"title": ""
}
] |
[
{
"docid": "14bb62c02192f837303dcc2e327475a6",
"text": "In this paper, we have proposed three kinds of network security situation awareness (NSSA) models. In the era of big data, the traditional NSSA methods cannot analyze the problem effectively. Therefore, the three models are designed for big data. The structure of these models are very large, and they are integrated into the distributed platform. Each model includes three modules: network security situation detection (NSSD), network security situation understanding (NSSU), and network security situation projection (NSSP). Each module comprises different machine learning algorithms to realize different functions. We conducted a comprehensive study of the safety of these models. Three models compared with each other. The experimental results show that these models can improve the efficiency and accuracy of data processing when dealing with different problems. Each model has its own advantages and disadvantages.",
"title": ""
},
{
"docid": "b1ef897890df4c719d85dd339f8dee70",
"text": "Repositories of health records are collections of events with varying number and sparsity of occurrences within and among patients. Although a large number of predictive models have been proposed in the last decade, they are not yet able to simultaneously capture cross-attribute and temporal dependencies associated with these repositories. Two major streams of predictive models can be found. On one hand, deterministic models rely on compact subsets of discriminative events to anticipate medical conditions. On the other hand, generative models offer a more complete and noise-tolerant view based on the likelihood of the testing arrangements of events to discriminate a particular outcome. However, despite the relevance of generative predictive models, they are not easily extensible to deal with complex grids of events. In this work, we rely on the Markov assumption to propose new predictive models able to deal with cross-attribute and temporal dependencies. Experimental results hold evidence for the utility and superior accuracy of generative models to anticipate health conditions, such as the need for surgeries. Additionally, we show that the proposed generative models are able to decode temporal patterns of interest (from the learned lattices) with acceptable completeness and precision levels, and with superior efficiency for voluminous repositories.",
"title": ""
},
{
"docid": "9164bd704cdb8ca76d0b5f7acda9d4ef",
"text": "In this paper we present a deep neural network topology that incorporates a simple to implement transformationinvariant pooling operator (TI-POOLING). This operator is able to efficiently handle prior knowledge on nuisance variations in the data, such as rotation or scale changes. Most current methods usually make use of dataset augmentation to address this issue, but this requires larger number of model parameters and more training data, and results in significantly increased training time and larger chance of under-or overfitting. The main reason for these drawbacks is that that the learned model needs to capture adequate features for all the possible transformations of the input. On the other hand, we formulate features in convolutional neural networks to be transformation-invariant. We achieve that using parallel siamese architectures for the considered transformation set and applying the TI-POOLING operator on their outputs before the fully-connected layers. We show that this topology internally finds the most optimal \"canonical\" instance of the input image for training and therefore limits the redundancy in learned features. This more efficient use of training data results in better performance on popular benchmark datasets with smaller number of parameters when comparing to standard convolutional neural networks with dataset augmentation and to other baselines.",
"title": ""
},
{
"docid": "f5cf9268c2d3ddf04d840f5f1b68f238",
"text": "The ribosomal uL10 protein, formerly known as P0, is an essential element of the ribosomal GTPase-associated center responsible for the interplay with translational factors during various stages of protein synthesis. In eukaryotic cells, uL10 binds two P1/P2 protein heterodimers to form a pentameric P-stalk, described as uL10-(P1-P2)2, which represents the functional form of these proteins on translating ribosomes. Unlike most ribosomal proteins, which are incorporated into pre-ribosomal particles during early steps of ribosome biogenesis in the nucleus, P-stalk proteins are attached to the 60S subunit in the cytoplasm. Although the primary role of the P-stalk is related to the process of translation, other extraribosomal functions of its constituents have been proposed, especially for the uL10 protein; however, the list of its activities beyond the ribosome is still an open question. Here, by the combination of biochemical and advanced fluorescence microscopy techniques, we demonstrate that upon nucleolar stress induction the uL10 protein accumulates in the cytoplasm of mammalian cells as a free, ribosome-unbound protein. Importantly, using a novel approach, FRAP-AC (FRAP after photoConversion), we have shown that the ribosome-free pool of uL10 represents a population of proteins released from pre-existing ribosomes. Taken together, our data indicate that the presence of uL10 on the ribosomes is affected in stressed cells, thus it might be considered as a regulatory element responding to environmental fluctuations.",
"title": ""
},
{
"docid": "f68f259523b2ec08448de3c0f9d7d23a",
"text": "A comprehensive computational fluid-dynamics-based study of a pleated wing section based on the wing of Aeshna cyanea has been performed at ultra-low Reynolds numbers corresponding to the gliding flight of these dragonflies. In addition to the pleated wing, simulations have also been carried out for its smoothed counterpart (called the 'profiled' airfoil) and a flat plate in order to better understand the aerodynamic performance of the pleated wing. The simulations employ a sharp interface Cartesian-grid-based immersed boundary method, and a detailed critical assessment of the computed results was performed giving a high measure of confidence in the fidelity of the current simulations. The simulations demonstrate that the pleated airfoil produces comparable and at times higher lift than the profiled airfoil, with a drag comparable to that of its profiled counterpart. The higher lift and moderate drag associated with the pleated airfoil lead to an aerodynamic performance that is at least equivalent to and sometimes better than the profiled airfoil. The primary cause for the reduction in the overall drag of the pleated airfoil is the negative shear drag produced by the recirculation zones which form within the pleats. The current numerical simulations therefore clearly demonstrate that the pleated wing is an ingenious design of nature, which at times surpasses the aerodynamic performance of a more conventional smooth airfoil as well as that of a flat plate. For this reason, the pleated airfoil is an excellent candidate for a fixed wing micro-aerial vehicle design.",
"title": ""
},
{
"docid": "da61524899080951ea8453e7bb7c5ec6",
"text": "StressSense is smart clothing made of fabric sensors that monitor the stress level of the wearers. The fabric sensors are comfortable, allowing for long periods of monitoring and the electronic components are waterproof and detachable for ease of care. This design project is expected to be beneficial for people who have a lot of stress in their daily life and who care about their mental health. It can be also used for people who need to control their stress level critically, such as analysts, stock managers, athletes, and patients with chronic diseases and disorders.",
"title": ""
},
{
"docid": "31f1079ac79278eaf5fbcd5ef11482e7",
"text": "Data from two studies describe the development of an implicit measure of humility and support the idea that dispositional humility is a positive quality with possible benefits. In Study 1, 135 college students completed Humility and Self-Esteem Implicit Association Tests (IATs) and several self-report measures of personality self-concept. Fifty-four participants also completed the Humility IAT again approximately 2 weeks later and their humility was rated by close acquaintances. The Humility IAT was found to be internally and temporally consistent. Implicit humility correlated with self-reported humility relative to arrogance, implicit self-esteem, and narcissism (inversely). Humility was not associated with self-reported low selfesteem, pessimism, or depression. In fact, self-reported humility relative to arrogance correlated positively with self-reported self-esteem, gratitude, forgiveness, spirituality, and general health. In addition, self-reported humility and acquaintancerated humility correlated positively; however, implicit humility and acquaintance-rated humility were not strongly associated. In Study 2, to examine the idea that humility might be associated with increased academic performance, we examined actual course grades of 55 college students who completed Humility and Self-Esteem IATs. Implicit humility correlated positively with higher actual course grades when narcissism, conscientiousness, and implicit self-esteem were simultaneously controlled. Implications and future research directions are discussed.",
"title": ""
},
{
"docid": "7dcc7cdff8a9196c716add8a1faf0203",
"text": "Power modulators for compact, repetitive systems are continually faced with new requirements as the corresponding system objectives increase. Changes in pulse rate frequency or number of pulses significantly impact the design of the power conditioning system. In order to meet future power supply requirements, we have developed several high voltage (HV) capacitor charging power supplies (CCPS). This effort focuses on a volume of 6\" x 6\" x 14\" and a weight of 25 lbs. The primary focus was to increase the effective capacitor charge rate, or power output, for the given size and weight. Although increased power output was the principal objective, efficiency and repeatability were also considered. A number of DC-DC converter topologies were compared to determine the optimal design. In order to push the limits of output power, numerous resonant converter parameters were examined. Comparisons of numerous topologies, HV transformers and rectifiers, and switching frequency ranges are presented. The impacts of the control system and integration requirements are also considered.",
"title": ""
},
{
"docid": "c182bef2a20bb9c13d0b2b89e7adf5ce",
"text": "Endocannabinoids are neuromodulators that act as retrograde synaptic messengers inhibiting the release of different neurotransmitters in cerebral areas such as hippocampus, cortex, and striatum. However, little is known about other roles of the endocannabinoid system in brain. In the present work we provide substantial evidence that the endocannabinoid anandamide (AEA) regulates neuronal differentiation both in culture and in vivo. Thus AEA, through the CB(1) receptor, inhibited cortical neuron progenitor differentiation to mature neuronal phenotype. In addition, human neural stem cell differentiation and nerve growth factor-induced PC12 cell differentiation were also inhibited by cannabinoid challenge. AEA decreased PC12 neuronal-like generation via CB(1)-mediated inhibition of sustained extracellular signal-regulated kinase (ERK) activation, which is responsible for nerve growth factor action. AEA thus inhibited TrkA-induced Rap1/B-Raf/ERK activation. Finally, immunohistochemical analyses by confocal microscopy revealed that adult neurogenesis in dentate gyrus was significantly decreased by the AEA analogue methanandamide and increased by the CB(1) antagonist SR141716. These data indicate that endocannabinoids inhibit neuronal progenitor cell differentiation through attenuation of the ERK pathway and suggest that they constitute a new physiological system involved in the regulation of neurogenesis.",
"title": ""
},
{
"docid": "115fab034391b2003dc0365460f5bbf1",
"text": "Polymyalgia rheumatica (PMR) is a chronic inflammatory disorder of unknown cause characterised by the subacute onset of shoulder and pelvic girdle pain, and early morning stiffness in men and women over the age of 50 years. Due to the lack of a gold standard investigation, diagnosis is based on a clinical construct and laboratory evidence of inflammation. Heterogeneity in the clinical presentation and disease course of PMR has long been recognised. Aside from the evolution of alternative diagnoses, such as late-onset rheumatoid arthritis, concomitant giant cell arteritis is also recognised in 16-21% of cases. In 2012, revised classification criteria were released by the European League Against Rheumatism and American College of Rheumatology in order to identify a more homogeneous population upon which future studies could be based. In this article, we aim to provide an updated perspective on the pathogenesis and diagnosis of PMR, with particular focus on imaging modalities, such as ultrasound and whole body positron emission tomography/computed tomography, which have advanced our current understanding of this disease. Future treatment directions, based on recognition of the key cytokines involved in PMR, will also be explored.",
"title": ""
},
{
"docid": "a239e75cb06355884f65f041e215b902",
"text": "BACKGROUND\nNecrotizing enterocolitis (NEC) and nosocomial sepsis are associated with increased morbidity and mortality in preterm infants. Through prevention of bacterial migration across the mucosa, competitive exclusion of pathogenic bacteria, and enhancing the immune responses of the host, prophylactic enteral probiotics (live microbial supplements) may play a role in reducing NEC and associated morbidity.\n\n\nOBJECTIVES\nTo compare the efficacy and safety of prophylactic enteral probiotics administration versus placebo or no treatment in the prevention of severe NEC and/or sepsis in preterm infants.\n\n\nSEARCH STRATEGY\nFor this update, searches were made of MEDLINE (1966 to October 2010), EMBASE (1980 to October 2010), the Cochrane Central Register of Controlled Trials (CENTRAL, The Cochrane Library, Issue 2, 2010), and abstracts of annual meetings of the Society for Pediatric Research (1995 to 2010).\n\n\nSELECTION CRITERIA\nOnly randomized or quasi-randomized controlled trials that enrolled preterm infants < 37 weeks gestational age and/or < 2500 g birth weight were considered. Trials were included if they involved enteral administration of any live microbial supplement (probiotics) and measured at least one prespecified clinical outcome.\n\n\nDATA COLLECTION AND ANALYSIS\nStandard methods of the Cochrane Collaboration and its Neonatal Group were used to assess the methodologic quality of the trials, data collection and analysis.\n\n\nMAIN RESULTS\nSixteen eligible trials randomizing 2842 infants were included. Included trials were highly variable with regard to enrollment criteria (i.e. birth weight and gestational age), baseline risk of NEC in the control groups, timing, dose, formulation of the probiotics, and feeding regimens. Data regarding extremely low birth weight infants (ELBW) could not be extrapolated. In a meta-analysis of trial data, enteral probiotics supplementation significantly reduced the incidence of severe NEC (stage II or more) (typical RR 0.35, 95% CI 0.24 to 0.52) and mortality (typical RR 0.40, 95% CI 0.27 to 0.60). There was no evidence of significant reduction of nosocomial sepsis (typical RR 0.90, 95% CI 0.76 to 1.07). The included trials reported no systemic infection with the probiotics supplemental organism. The statistical test of heterogeneity for NEC, mortality and sepsis was insignificant.\n\n\nAUTHORS' CONCLUSIONS\nEnteral supplementation of probiotics prevents severe NEC and all cause mortality in preterm infants. Our updated review of available evidence supports a change in practice. More studies are needed to assess efficacy in ELBW infants and assess the most effective formulation and dose to be utilized.",
"title": ""
},
{
"docid": "06a1d90991c5a9039c6758a66205e446",
"text": "In this paper, we study how to improve the domain adaptability of a deletion-based Long Short-Term Memory (LSTM) neural network model for sentence compression. We hypothesize that syntactic information helps in making such models more robust across domains. We propose two major changes to the model: using explicit syntactic features and introducing syntactic constraints through Integer Linear Programming (ILP). Our evaluation shows that the proposed model works better than the original model as well as a traditional non-neural-network-based model in a cross-domain setting.",
"title": ""
},
{
"docid": "a1915a869616b9c8c2547f66ec89de13",
"text": "The harvest yield in vineyards can vary significantly from year to year and also spatially within plots due to variations in climate, soil conditions and pests. Fine grained knowledge of crop yields can allow viticulturists to better manage their vineyards. The current industry practice for yield prediction is destructive, expensive and spatially sparse - during the growing season sparse samples are taken and extrapolated to determine overall yield. We present an automated method that uses computer vision to detect and count grape berries. The method could potentially be deployed across large vineyards taking measurements at every vine in a non-destructive manner. Our berry detection uses both shape and visual texture and we can demonstrate detection of green berries against a green leaf background. Berry detections are counted and the eventual harvest yield is predicted. Results are presented for 224 vines (over 450 meters) of two different grape varieties and compared against the actual harvest yield as groundtruth. We calibrate our berry count to yield and find that we can predict yield of individual vineyard rows to within 9.8% of actual crop weight.",
"title": ""
},
{
"docid": "72b080856124d39b62d531cb52337ce9",
"text": "Experimental and clinical studies have identified a crucial role of microcirculation impairment in severe infections. We hypothesized that mottling, a sign of microcirculation alterations, was correlated to survival during septic shock. We conducted a prospective observational study in a tertiary teaching hospital. All consecutive patients with septic shock were included during a 7-month period. After initial resuscitation, we recorded hemodynamic parameters and analyzed their predictive value on mortality. The mottling score (from 0 to 5), based on mottling area extension from the knees to the periphery, was very reproducible, with an excellent agreement between independent observers [kappa = 0.87, 95% CI (0.72–0.97)]. Sixty patients were included. The SOFA score was 11.5 (8.5–14.5), SAPS II was 59 (45–71) and the 14-day mortality rate 45% [95% CI (33–58)]. Six hours after inclusion, oliguria [OR 10.8 95% CI (2.9, 52.8), p = 0.001], arterial lactate level [<1.5 OR 1; between 1.5 and 3 OR 3.8 (0.7–29.5); >3 OR 9.6 (2.1–70.6), p = 0.01] and mottling score [score 0–1 OR 1; score 2–3 OR 16, 95% CI (4–81); score 4–5 OR 74, 95% CI (11–1,568), p < 0.0001] were strongly associated with 14-day mortality, whereas the mean arterial pressure, central venous pressure and cardiac index were not. The higher the mottling score was, the earlier death occurred (p < 0.0001). Patients whose mottling score decreased during the resuscitation period had a better prognosis (14-day mortality 77 vs. 12%, p = 0.0005). The mottling score is reproducible and easy to evaluate at the bedside. The mottling score as well as its variation during resuscitation is a strong predictor of 14-day survival in patients with septic shock.",
"title": ""
},
{
"docid": "ad2655aaed8a4f3379cb206c6e405f16",
"text": "Lesions of the orbital frontal lobe, particularly its medial sectors, are known to cause deficits in empathic ability, whereas the role of this region in theory of mind processing is the subject of some controversy. In a functional magnetic resonance imaging study with healthy participants, emotional perspective-taking was contrasted with cognitive perspective-taking in order to examine the role of the orbital frontal lobe in subcomponents of theory of mind processing. Subjects responded to a series of scenarios presented visually in three conditions: emotional perspective-taking, cognitive perspective-taking and a control condition that required inferential reasoning, but not perspective-taking. Group results demonstrated that the medial orbitofrontal lobe, defined as Brodmann's areas 11 and 25, was preferentially involved in emotional as compared to cognitive perspective-taking. This finding is both consistent with the lesion literature, and resolves the inconsistency of orbital frontal findings in the theory of mind literature.",
"title": ""
},
{
"docid": "a423435c1dc21c33b93a262fa175f5c5",
"text": "The study investigated several teacher characteristics, with a focus on two measures of teaching experience, and their association with second grade student achievement gains in low performing, high poverty schools in a Mid-Atlantic state. Value-added models using three-level hierarchical linear modeling were used to analyze the data from 1,544 students, 154 teachers, and 53 schools. Results indicated that traditional teacher qualification characteristics such as licensing status and educational attainment were not statistically significant in producing student achievement gains. Total years of teaching experience was also not a significant predictor but a more specific measure, years of teaching experience at a particular grade level, was significantly associated with increased student reading achievement. We caution researchers and policymakers when interpreting results from studies that have used only a general measure of teacher experience as effects are possibly underestimated. Policy implications are discussed.",
"title": ""
},
{
"docid": "739aaf487d6c5a7b7fe9d0157d530382",
"text": "A blockchain framework is presented for addressing the privacy and security challenges associated with the Big Data in smart mobility. It is composed of individuals, companies, government and universities where all the participants collect, own, and control their data. Each participant shares their encrypted data to the blockchain network and can make information transactions with other participants as long as both party agrees to the transaction rules (smart contract) issued by the owner of the data. Data ownership, transparency, auditability and access control are the core principles of the proposed blockchain for smart mobility Big Data.",
"title": ""
},
{
"docid": "e3acdb12bf902aeee1d6619fd1bd13cc",
"text": "The development of spiking neural network simulation software is a critical component enabling the modeling of neural systems and the development of biologically inspired algorithms. Existing software frameworks support a wide range of neural functionality, software abstraction levels, and hardware devices, yet are typically not suitable for rapid prototyping or application to problems in the domain of machine learning. In this paper, we describe a new Python package for the simulation of spiking neural networks, specifically geared toward machine learning and reinforcement learning. Our software, called BindsNET, enables rapid building and simulation of spiking networks and features user-friendly, concise syntax. BindsNET is built on the PyTorch deep neural networks library, facilitating the implementation of spiking neural networks on fast CPU and GPU computational platforms. Moreover, the BindsNET framework can be adjusted to utilize other existing computing and hardware backends; e.g., TensorFlow and SpiNNaker. We provide an interface with the OpenAI gym library, allowing for training and evaluation of spiking networks on reinforcement learning environments. We argue that this package facilitates the use of spiking networks for large-scale machine learning problems and show some simple examples by using BindsNET in practice.",
"title": ""
},
{
"docid": "833786dcf2288f21343d60108819fe49",
"text": "This paper describes an audio event detection system which automatically classifies an audio event as ambient noise, scream or gunshot. The classification system uses two parallel GMM classifiers for discriminating screams from noise and gunshots from noise. Each classifier is trained using different features, appropriately chosen from a set of 47 audio features, which are selected according to a 2-step process. First, feature subsets of increasing size are assembled using filter selection heuristics. Then, a classifier is trained and tested with each feature subset. The obtained classification performance is used to determine the optimal feature vector dimension. This allows a noticeable speed-up w.r.t. wrapper feature selection methods. In order to validate the proposed detection algorithm, we carried out extensive experiments on a rich set of gunshots and screams mixed with ambient noise at different SNRs. Our results demonstrate that the system is able to guarantee a precision of 90% at a false rejection rate of 8%.",
"title": ""
}
] |
scidocsrr
|
cec3f15a0ef158a6c2aa4ab26edba8bf
|
Index modulation techniques for 5G wireless networks
|
[
{
"docid": "aa40633b4f06b6bb882c77d7d9241949",
"text": "This paper proposes a new multiple-input-multiple-output (MIMO) technique called quadrature spatial modulation (QSM). QSM enhances the overall throughput of conventional SM systems by using an extra modulation spatial dimension. The current SM technique uses only the real part of the SM constellation, and the proposed method in this paper extends this to in-phase and quadrature dimensions. It is shown that significant performance enhancements can be achieved at the expense of synchronizing the transmit antennas. Additionally, a closed-form expression for the pairwise error probability (PEP) of generic QSM system is derived and used to calculate a tight upper bound of the average bit error probability (ABEP) over Rayleigh fading channels. Moreover, a simple and general asymptotic expression is derived and analyzed. Obtained Monte Carlo simulation results corroborate the accuracy of the conducted analysis and show the significant enhancements of the proposed QSM scheme.",
"title": ""
},
{
"docid": "3b6e50d7f6389f109da2b1ba125cc64b",
"text": "A new class of low-complexity, yet energy-efficient Multiple-Input Multiple-Output (MIMO) transmission techniques, namely, the family of Spatial Modulation (SM) aided MIMOs (SM-MIMO), has emerged. These systems are capable of exploiting the spatial dimensions (i.e., the antenna indices) as an additional dimension invoked for transmitting information, apart from the traditional Amplitude and Phase Modulation (APM). SM is capable of efficiently operating in diverse MIMO configurations in the context of future communication systems. It constitutes a promising transmission candidate for large-scale MIMO design and for the indoor optical wireless communication while relying on a single-Radio Frequency (RF) chain. Moreover, SM may be also viewed as an entirely new hybrid modulation scheme, which is still in its infancy. This paper aims for providing a general survey of the SM design framework as well as of its intrinsic limits. In particular, we focus our attention on the associated transceiver design, on spatial constellation optimization, on link adaptation techniques, on distributed/cooperative protocol design issues, and on their meritorious variants.",
"title": ""
}
] |
[
{
"docid": "a280c56578d96797b1b7dc2e934b0c3e",
"text": "The Perspective-n-Point (PnP) problem seeks to estimate the pose of a calibrated camera from n 3D-to-2D point correspondences. There are situations, though, where PnP solutions are prone to fail because feature point correspondences cannot be reliably estimated (e.g. scenes with repetitive patterns or with low texture). In such scenarios, one can still exploit alternative geometric entities, such as lines, yielding the so-called Perspective-n-Line (PnL) algorithms. Unfortunately, existing PnL solutions are not as accurate and efficient as their point-based counterparts. In this paper we propose a novel approach to introduce 3D-to-2D line correspondences into a PnP formulation, allowing to simultaneously process points and lines. For this purpose we introduce an algebraic line error that can be formulated as linear constraints on the line endpoints, even when these are not directly observable. These constraints can then be naturally integrated within the linear formulations of two state-of-the-art point-based algorithms, the OPnP [45] and the EPnP [24], allowing them to indistinctly handle points, lines, or a combination of them. Exhaustive experiments show that the proposed formulation brings remarkable boost in performance compared to only point or only line based solutions, with a negligible computational overhead compared to the original OPnP and EPnP.",
"title": ""
},
{
"docid": "55928e118303b080d49a399da1f9dba3",
"text": "This paper describes a customized database and a comprehensive set of queries that can be used for systematic benchmarking of relational database systems. Designing this database and a set of carefully tuned benchmarks represents a first attempt in developing a scientific methodology for performance evaluation of database management systems. We have used this database to perform a comparative evaluation of the database machine DIRECT, the \"university\" and \"commercial\" versions of the INGRES database system, the relational database system ORACLE, and the IDM 500 database machine. We present a subset of our measurements (for the single user case only), that constitute a preliminary performance evaluation of these systems.",
"title": ""
},
{
"docid": "a01333e16abb503cf6d26c54ac24d473",
"text": "Topic models could have a huge impact on improving the ways users find and discover content in digital libraries and search interfaces through their ability to automatically learn and apply subject tags to each and every item in a collection, and their ability to dynamically create virtual collections on the fly. However, much remains to be done to tap this potential, and empirically evaluate the true value of a given topic model to humans. In this work, we sketch out some sub-tasks that we suggest pave the way towards this goal, and present methods for assessing the coherence and interpretability of topics learned by topic models. Our large-scale user study includes over 70 human subjects evaluating and scoring almost 500 topics learned from collections from a wide range of genres and domains. We show how scoring model -- based on pointwise mutual information of word-pair using Wikipedia, Google and MEDLINE as external data sources - performs well at predicting human scores. This automated scoring of topics is an important first step to integrating topic modeling into digital libraries",
"title": ""
},
{
"docid": "01c53962a4aebd75eb68860ee28447bd",
"text": "A power-scalable 2 Byte I/O operating at 12 Gb/s per lane is reported. The source-synchronous I/O includes controllable TX driver amplitude, flexible RX equalization, and multiple deskew modes. This allows power reduction when operating over low-loss, low-skew interconnects, while at the same time supporting higher-loss channels without loss of bandwidth. Transceiver circuit innovations are described including a low-skew transmission-line clock distribution, a 4:1 serializer with quadrature quarter-rate clocks, and a phase rotator based on current-integrating phase interpolators. Measurements of a test chip fabricated in 32 nm SOI CMOS technology demonstrate 1.4 pJ/b efficiency over 0.75” Megtron-6 PCB traces, and 1.9 pJ/b efficiency over 20” Megtron-6 PCB traces.",
"title": ""
},
{
"docid": "86e4b8f3f1608292437968b1165ccac5",
"text": "Activation of oncogenes and loss of tumour suppressors promote metabolic reprogramming in cancer, resulting in enhanced nutrient uptake to supply energetic and biosynthetic pathways. However, nutrient limitations within solid tumours may require that malignant cells exhibit metabolic flexibility to sustain growth and survival. Here, we highlight these adaptive mechanisms and also discuss emerging approaches to probe tumour metabolism in vivo and their potential to expand the metabolic repertoire of malignant cells even further.",
"title": ""
},
{
"docid": "ee617dacdb47fd02a797f2968aaa784f",
"text": "The Internet of Things (IoT) is defined as a paradigm in which objects equipped with sensors, actuators, and processors communicate with each other to serve a meaningful purpose. In this paper, we survey state-of-the-art methods, protocols, and applications in this new emerging area. This survey paper proposes a novel taxonomy for IoT technologies, highlights some of the most important technologies, and profiles some applications that have the potential to make a striking difference in human life, especially for the differently abled and the elderly. As compared to similar survey papers in the area, this paper is far more comprehensive in its coverage and exhaustively covers most major technologies spanning from sensors to applications.",
"title": ""
},
{
"docid": "df1ea45a4b20042abd99418ff6d1f44e",
"text": "This paper combines wavelet transforms with basic detection theory to develop a new unsupervised method for robustly detecting and localizing spikes in noisy neural recordings. The method does not require the construction of templates, or the supervised setting of thresholds. We present extensive Monte Carlo simulations, based on actual extracellular recordings, to show that this technique surpasses other commonly used methods in a wide variety of recording conditions. We further demonstrate that falsely detected spikes corresponding to our method resemble actual spikes more than the false positives of other techniques such as amplitude thresholding. Moreover, the simplicity of the method allows for nearly real-time execution.",
"title": ""
},
{
"docid": "9fc869c7e7d901e418b1b69d636cbd33",
"text": "Selecting optimal parameters for a neural network architecture can often make the difference between mediocre and state-of-the-art performance. However, little is published which parameters and design choices should be evaluated or selected making the correct hyperparameter optimization often a “black art that requires expert experiences” (Snoek et al., 2012). In this paper, we evaluate the importance of different network design choices and hyperparameters for five common linguistic sequence tagging tasks (POS, Chunking, NER, Entity Recognition, and Event Detection). We evaluated over 50.000 different setups and found, that some parameters, like the pre-trained word embeddings or the last layer of the network, have a large impact on the performance, while other parameters, for example the number of LSTM layers or the number of recurrent units, are of minor importance. We give a recommendation on a configuration that performs well among different tasks. The optimized implementation of our BiLSTM-CRF architecture is publicly available.1 This publication explains in detail the experimental setup and discusses the results. A condensed version of this paper was presented at EMNLP 2017 (Reimers and Gurevych, 2017).2",
"title": ""
},
{
"docid": "9d6a0b31bf2b64f1ec624222a2222e2a",
"text": "This is the translation of a paper by Marc Prensky, the originator of the famous metaphor digital natives digital immigrants. Here, ten years after the birth of that successful metaphor, Prensky outlines that, while the distinction between digital natives and immigrants will progressively become less important, new concepts will be needed to represent the continuous evolution of the relationship between man and digital technologies. In this paper Prensky introduces the concept of digital wisdom, a human quality which develops as a result of the empowerment that the natural human skills can receive through a creative and clever use of digital technologies. KEY-WORDS Digital natives, digital immigrants, digital wisdom, digital empowerment. Prensky M. (2010). H. Sapiens Digitale: dagli Immigrati digitali e nativi digitali alla saggezza digitale. TD-Tecnologie Didattiche, 50, pp. 17-24 17 I problemi del mondo d’oggi non possono essere risolti facendo ricorso allo stesso tipo di pensiero che li ha creati",
"title": ""
},
{
"docid": "41b305c49b74063f16e5eb07bcb905d9",
"text": "Many neural network classifiers provide outputs which estimate Bayesian a posteriori probabilities. When the estimation is accurate, network outputs can be treated as probabilities and sum to one. Simple proofs show that Bayesian probabilities are estimated when desired network outputs are 2 of M (one output unity, all others zero) and a squarederror or cross-entropy cost function is used. Results of Monte Carlo simulations performed using multilayer perceptron (MLP) networks trained with backpropagation, radial basis function (RBF) networks, and high-order polynomial networks graphically demonstrate that network outputs provide good estimates of Bayesian probabilities. Estimation accuracy depends on network complexity, the amount of training data, and the degree to which training data reflect true likelihood distributions and u priori class probabilities. Interpretation of network outputs as Bayesian probabilities allows outputs from multiple networks to be combined for higher level decision making, simplifies creation of rejection thresholds, makes it possible to compensate for differences between pattern class probabilities in training and test data, allows outputs to be used to minimize alternative risk functions, and suggests alternative measures of network performance.",
"title": ""
},
{
"docid": "e099186ceed71e03276ab168ecf79de7",
"text": "Twelve patients with deafferentation pain secondary to central nervous system lesions were subjected to chronic motor cortex stimulation. The motor cortex was mapped as carefully as possible and the electrode was placed in the region where muscle twitch of painful area can be observed with the lowest threshold. 5 of the 12 patients reported complete absence of previous pain with intermittent stimulation at 1 year following the initiation of this therapy. Improvements in hemiparesis was also observed in most of these patients. The pain of these patients was typically barbiturate-sensitive and morphine-resistant. Another 3 patients had some degree of residual pain but considerable reduction of pain was still obtained by stimulation. Thus, 8 of the 12 patients (67%) had continued effect of this therapy after 1 year. In 3 patients, revisions of the electrode placement were needed because stimulation became incapable of inducing muscle twitch even with higher stimulation intensity. The effect of stimulation on pain and capability of producing muscle twitch disappeared simultaneously in these cases and the effect reappeared after the revisions, indicating that appropriate stimulation of the motor cortex is definitely necessary for obtaining satisfactory pain control in these patients. None of the patients subjected to this therapy developed neither observable nor electroencephalographic seizure activity.",
"title": ""
},
{
"docid": "73e6082c387eab6847b8ca853f38c6f3",
"text": "OBJECTIVES\nThis study explored the effectiveness of group music intervention against agitated behavior in elderly persons with dementia.\n\n\nMETHODS\nThis was an experimental study using repeated measurements. Subjects were elderly persons who suffered from dementia and resided in nursing facilities. In total, 104 participants were recruited by permuted block randomization and of the 100 subjects who completed this study, 49 were in the experimental group and 51 were in the control group. The experimental group received a total of twelve 30-min group music intervention sessions, conducted twice a week for six consecutive weeks, while the control group participated in normal daily activities. In order to measure the effectiveness of the therapeutic sessions, assessments were conducted before the intervention, at the 6th and 12th group sessions, and at 1 month after cessation of the intervention. Longitudinal effects were analyzed by means of generalized estimating equations (GEEs).\n\n\nRESULTS\nAfter the group music therapy intervention, the experimental group showed better performance at the 6th and 12th sessions, and at 1 month after cessation of the intervention based on reductions in agitated behavior in general, physically non-aggressive behavior, verbally non-aggressive behavior, and physically aggressive behavior, while a reduction in verbally aggressive behavior was shown only at the 6th session.\n\n\nCONCLUSIONS\nGroup music intervention alleviated agitated behavior in elderly persons with dementia. We suggest that nursing facilities for demented elderly persons incorporate group music intervention in routine activities in order to enhance emotional relaxation, create inter-personal interactions, and reduce future agitated behaviors.",
"title": ""
},
{
"docid": "f44fad35f68957ff27e9cfb97758cc2d",
"text": "Boosting combines weak classifiers to form highly accurate predictors. Although the case of binary classification is well understood, in the multiclass setting, the “correct” requirements on the weak classifier, or the notion of the most efficient boosting algorithms are missing. In this paper, we create a broad and general framework, within which we make precise and identify the optimal requirements on the weak-classifier, as well as design the most effective, in a certain sense, boosting algorithms that assume such requirements.",
"title": ""
},
{
"docid": "5a4c9b6626d2d740246433972ad60f16",
"text": "We propose a new approach to the problem of neural network expressivity, which seeks to characterize how structural properties of a neural network family affect the functions it is able to compute. Our approach is based on an interrelated set of measures of expressivity, unified by the novel notion of trajectory length, which measures how the output of a network changes as the input sweeps along a one-dimensional path. Our findings can be summarized as follows:",
"title": ""
},
{
"docid": "95e1d5dc90f7fc6ece51f61585842f3d",
"text": "This paper investigates how the splitting cri teria and pruning methods of decision tree learning algorithms are in uenced by misclas si cation costs or changes to the class distri bution Splitting criteria that are relatively insensitive to costs class distributions are found to perform as well as or better than in terms of expected misclassi cation cost splitting criteria that are cost sensitive Con sequently there are two opposite ways of deal ing with imbalance One is to combine a cost insensitive splitting criterion with a cost in sensitive pruning method to produce a deci sion tree algorithm little a ected by cost or prior class distribution The other is to grow a cost independent tree which is then pruned in a cost sensitive manner",
"title": ""
},
{
"docid": "87276bf7802a209a9e8fae2a95ff93c2",
"text": "Traditional two wheels differential drive normally used on mobile robots have manoeuvrability limitations and take time to sort out. Most teams use two driving wheels (with one or two cast wheels), four driving wheels and even three driving wheels. A three wheel drive with omni-directional wheel has been tried with success, and was implemented on fast moving autonomous mobile robots. This paper deals with the mathematical kinematics description of such mobile platform, it describes the advantages and also the type of control used.",
"title": ""
},
{
"docid": "13cb793ca9cdf926da86bb6fc630800a",
"text": "In this paper, we present the first formal study of how mothers of young children (aged three and under) use social networking sites, particularly Facebook and Twitter, including mothers' perceptions of which SNSes are appropriate for sharing information about their children, changes in post style and frequency after birth, and the volume and nature of child-related content shared in these venues. Our findings have implications for improving the utility and usability of SNS tools for mothers of young children, as well as for creating and improving sociotechnical systems related to maternal and child health.",
"title": ""
},
{
"docid": "1c11c14bcc1e83a3fba3ef5e4c52d69b",
"text": "Ontologies have become the de-facto modeling tool of choice, employed in many applications and prominently in the semantic web. Nevertheless, ontology construction remains a daunting task. Ontological bootstrapping, which aims at automatically generating concepts and their relations in a given domain, is a promising technique for ontology construction. Bootstrapping an ontology based on a set of predefined textual sources, such as web services, must address the problem of multiple, largely unrelated concepts. In this paper, we propose an ontology bootstrapping process for web services. We exploit the advantage that web services usually consist of both WSDL and free text descriptors. The WSDL descriptor is evaluated using two methods, namely Term Frequency/Inverse Document Frequency (TF/IDF) and web context generation. Our proposed ontology bootstrapping process integrates the results of both methods and applies a third method to validate the concepts using the service free text descriptor, thereby offering a more accurate definition of ontologies. We extensively validated our bootstrapping method using a large repository of real-world web services and verified the results against existing ontologies. The experimental results indicate high precision. Furthermore, the recall versus precision comparison of the results when each method is separately implemented presents the advantage of our integrated bootstrapping approach.",
"title": ""
},
{
"docid": "0580342f7efb379fc417d2e5e48c4b73",
"text": "The use of System Dynamics Modeling in Supply Chain Management has only recently re-emerged after a lengthy slack period. Current research on System Dynamics Modelling in supply chain management focuses on inventory decision and policy development, time compression, demand amplification, supply chain design and integration, and international supply chain management. The paper first gives an overview of recent research work in these areas, followed by a discussion of research issues that have evolved, and presents a taxonomy of research and development in System Dynamics Modelling in supply chain management.",
"title": ""
}
] |
scidocsrr
|