| query_id (string, 32 chars) | query (string, 6–5.38k chars) | positive_passages (list, 1–22 items) | negative_passages (list, 9–100 items) | subset (7 classes) |
---|---|---|---|---|
8139c1836d07c91269fed4fc6cb7ad61
|
Auto-Detect: Data-Driven Error Detection in Tables
|
[
{
"docid": "ba1cbd5fcd98158911f4fb6f677863f9",
"text": "Classical approaches to clean data have relied on using integrity constraints, statistics, or machine learning. These approaches are known to be limited in the cleaning accuracy, which can usually be improved by consulting master data and involving experts to resolve ambiguity. The advent of knowledge bases KBs both general-purpose and within enterprises, and crowdsourcing marketplaces are providing yet more opportunities to achieve higher accuracy at a larger scale. We propose KATARA, a knowledge base and crowd powered data cleaning system that, given a table, a KB, and a crowd, interprets table semantics to align it with the KB, identifies correct and incorrect data, and generates top-k possible repairs for incorrect data. Experiments show that KATARA can be applied to various datasets and KBs, and can efficiently annotate data and suggest possible repairs.",
"title": ""
},
{
"docid": "46921a173ee1ed2a379da869060637d4",
"text": "Given a table of data, existing systems can often detect basic atomic types (e.g., strings vs. numbers) for each column. A new generation of data-analytics and data-preparation systems are starting to automatically recognize rich semantic types such as date-time, email address, etc., for such metadata can bring an array of benefits including better table understanding, improved search relevance, precise data validation, and semantic data transformation. However, existing approaches only detect a limited number of types using regular-expression-like patterns, which are often inaccurate, and cannot handle rich semantic types such as credit card and ISBN numbers that encode semantic validations (e.g., checksum).\n We developed AUTOTYPE from open-source repositories like GitHub. Users only need to provide a set of positive examples for a target data type and a search keyword, our system will automatically identify relevant code, and synthesize type-detection functions using execution traces. We compiled a benchmark with 112 semantic types, out of which the proposed system can synthesize code to detect 84 such types at a high precision. Applying the synthesized type-detection logic on web table columns have also resulted in a significant increase in data types discovered compared to alternative approaches.",
"title": ""
},
{
"docid": "79833f074b2e06d5c56898ca3f008c00",
"text": "Regular expressions have served as the dominant workhorse of practical information extraction for several years. However, there has been little work on reducing the manual effort involved in building high-quality, complex regular expressions for information extraction tasks. In this paper, we propose ReLIE, a novel transformation-based algorithm for learning such complex regular expressions. We evaluate the performance of our algorithm on multiple datasets and compare it against the CRF algorithm. We show that ReLIE, in addition to being an order of magnitude faster, outperforms CRF under conditions of limited training data and cross-domain data. Finally, we show how the accuracy of CRF can be improved by using features extracted by ReLIE.",
"title": ""
}
] |
[
{
"docid": "a5ed1ebf973e3ed7ea106e55795e3249",
"text": "The variable reluctance (VR) resolver is generally used instead of an optical encoder as a position sensor on motors for hybrid electric vehicles or electric vehicles owing to its reliability, low cost, and ease of installation. The commonly used conventional winding method for the VR resolver has disadvantages, such as complicated winding and unsuitability for mass production. This paper proposes an improved winding method that leads to simpler winding and better suitability for mass production than the conventional method. In this paper, through the design and finite element analysis for two types of output winding methods, the advantages and disadvantages of each method are presented, and the validity of the proposed winding method is verified. In addition, experiments with the VR resolver using the proposed winding method have been performed to verify its performance.",
"title": ""
},
{
"docid": "e72ce7617cc941543a07059bc3a1a4a2",
"text": "Ensemble learning strategies, especially boosting and bagging decision trees, have demonstrated impressive capacities to improve the prediction accuracy of base learning algorithms. Further gains have been demonstrated by strategies that combine simple ensemble formation approaches. We investigate the hypothesis that the improvement in accuracy of multistrategy approaches to ensemble learning is due to an increase in the diversity of ensemble members that are formed. In addition, guided by this hypothesis, we develop three new multistrategy ensemble learning techniques. Experimental results in a wide variety of natural domains suggest that these multistrategy ensemble learning techniques are, on average, more accurate than their component ensemble learning techniques.",
"title": ""
},
{
"docid": "2a30aa44df358be7bb27afd0014a07ff",
"text": "The adoption of Smart Grid devices throughout utility networks will effect tremendous change in grid operations and usage of electricity over the next two decades. The changes in ways to control loads, coupled with increased penetration of renewable energy sources, offer a new set of challenges in balancing consumption and generation. Increased deployment of energy storage devices in the distribution grid will help make this process happen more effectively and improve system performance. This paper addresses the new types of storage being utilized for grid support and the ways they are integrated into the grid.",
"title": ""
},
{
"docid": "c08518b806c93dde1dd04fdf3c9c45bb",
"text": "Purpose – The objectives of this article are to develop a multiple-item scale for measuring e-service quality and to study the influence of perceived quality on consumer satisfaction levels and the level of web site loyalty. Design/methodology/approach – First, there is an explanation of the main attributes of the concepts examined, with special attention being paid to the multi-dimensional nature of the variables and the relationships between them. This is followed by an examination of the validation processes of the measuring instruments. Findings – The validation process of scales suggested that perceived quality is a multidimensional construct: web design, customer service, assurance and order management; that perceived quality influences on satisfaction; and that satisfaction influences on consumer loyalty. Moreover, no differences in these conclusions were observed if the total sample is divided between buyers and information searchers. Practical implications – First, the need to develop user-friendly web sites which ease consumer purchasing and searching, thus creating a suitable framework for the generation of higher satisfaction and loyalty levels. Second, the web site manager should enhance service loyalty, customer sensitivity, personalised service and a quick response to complaints. Third, the web site should uphold sufficient security levels in communications and meet data protection requirements regarding the privacy. Lastly, the need for correct product delivery and product manipulation or service is recommended. Originality/value – Most relevant studies about perceived quality in the internet have focused on web design aspects. Moreover, the existing literature regarding internet consumer behaviour has not fully analysed profits generated by higher perceived quality in terms of user satisfaction and loyalty.",
"title": ""
},
{
"docid": "cea92cadacce42ed8db1d3d14370f838",
"text": "Domestic dogs are unusually skilled at reading human social and communicative behavior--even more so than our nearest primate relatives. For example, they use human social and communicative behavior (e.g. a pointing gesture) to find hidden food, and they know what the human can and cannot see in various situations. Recent comparisons between canid species suggest that these unusual social skills have a heritable component and initially evolved during domestication as a result of selection on systems mediating fear and aggression towards humans. Differences in chimpanzee and human temperament suggest that a similar process may have been an important catalyst leading to the evolution of unusual social skills in our own species. The study of convergent evolution provides an exciting opportunity to gain further insights into the evolutionary processes leading to human-like forms of cooperation and communication.",
"title": ""
},
{
"docid": "5f068a11901763af752df9480b97e0c0",
"text": "Beginning with a brief review of CMOS scaling trends from 1 m to 0.1 m, this paper examines the fundamental factors that will ultimately limit CMOS scaling and considers the design issues near the limit of scaling. The fundamental limiting factors are electron thermal energy, tunneling leakage through gate oxide, and 2D electrostatic scale length. Both the standby power and the active power of a processor chip will increase precipitously below the 0.1m or 100-nm technology generation. To extend CMOS scaling to the shortest channel length possible while still gaining significant performance benefit, an optimized, vertically and laterally nonuniform doping design (superhalo) is presented. It is projected that room-temperature CMOS will be scaled to 20-nm channel length with the superhalo profile. Low-temperature CMOS allows additional design space to further extend CMOS scaling to near 10 nm.",
"title": ""
},
{
"docid": "204f70a01af01e29e6f20b4c8784a7d0",
"text": "This paper discusses software development using the Test Driven Development (TDD) methodology in two different environments (Windows and MSN divisions) at Microsoft. In both these case studies we measure the various context, product and outcome measures to compare and evaluate the efficacy of TDD. We observed a significant increase in quality of the code (greater than two times) for projects developed using TDD compared to similar projects developed in the same organization in a non-TDD fashion. The projects also took at least 15% extra upfront time for writing the tests. Additionally, the unit tests have served as auto documentation for the code when libraries/APIs had to be used as well as for code maintenance.",
"title": ""
},
{
"docid": "3e24b0fbe188a3371df5985b05f69291",
"text": "The prediction of a stock market direction may serve as an ear ly recommendation system for short-term investors and as an early financial distress warning system for long-term shareholde rs. In this paper, we propose an empirical study on the Korean and Hong Kong stock market with an integrated machine learning f ramework that employs Principal Component Analysis (PCA) a nd Support Vector Machine (SVM). We try to predict the upward or d wnward direction of stock market index and stock price. In the proposed framework, PCA, as a feature selection method, ide ntifies principal components in the stock market movement an d SVM, as a classifier for future stock market movement, processes t hem along with other economic factors in training and foreca sting. We present the results of an extensive empirical study of the proposed method on the Korean composite stock price index (K OSPI) and Hangseng index (HSI), as well as the individual constitu ents included in the indices. In our experiment, ten years da ta (from January 1st, 2002 to January 1st, 2012) are collected and sch emed by rolling windows to predict one-day-ahead direction s. The experimental results show notably high hit ratios in predic ting the movements of the individual constituents in the KOS PI and HSI. The results also varify the co-movement effect between the Korean (Hong Kong) stock market and the Ameri can stock market. c © 2013 Published by Elsevier Ltd.",
"title": ""
},
{
"docid": "6f0283efa932663c83cc2c63d19fd6cf",
"text": "Most research that explores the emotional state of users of spoken dialog systems does not fully utilize the contextual nature that the dialog structure provides. This paper reports results of machine learning experiments designed to automatically classify the emotional state of user turns using a corpus of 5,690 dialogs collected with the “How May I Help You” spoken dialog system. We show that augmenting standard lexical and prosodic features with contextual features that exploit the structure of spoken dialog and track user state increases classification accuracy by 2.6%.",
"title": ""
},
{
"docid": "06fdd2dae0aa83ec3697342d831da39f",
"text": "Traditionally, nostalgia has been conceptualized as a medical disease and a psychiatric disorder. Instead, we argue that nostalgia is a predominantly positive, self-relevant, and social emotion serving key psychological functions. Nostalgic narratives reflect more positive than negative affect, feature the self as the protagonist, and are embedded in a social context. Nostalgia is triggered by dysphoric states such as negative mood and loneliness. Finally, nostalgia generates positive affect, increases selfesteem, fosters social connectedness, and alleviates existential threat. KEYWORDS—nostalgia; positive affect; self-esteem; social connectedness; existential meaning The term nostalgia was inadvertedly inspired by history’s most famous itinerant. Emerging victoriously from the Trojan War, Odysseus set sail for his native island of Ithaca to reunite with his faithful wife, Penelope. For 3 years, our wandering hero fought monsters, assorted evildoers, and mischievous gods. For another 7 years, he took respite in the arms of the beautiful sea nymph Calypso. Possessively, she offered to make him immortal if he stayed with her on the island of Ogygia. ‘‘Full well I acknowledge,’’ Odysseus replied to his mistress, ‘‘prudent Penelope cannot compare with your stature or beauty, for she is only a mortal, and you are immortal and ageless. Nevertheless, it is she whom I daily desire and pine for. Therefore I long for my home and to see the day of returning’’ (Homer, 1921, Book V, pp. 78–79). This romantic declaration, along with other expressions of Odyssean longing in the eponymous Homeric epic, gave rise to the term nostalgia. It is a compound word, consisting of nostos (return) and algos (pain). Nostalgia, then, is literally the suffering due to relentless yearning for the homeland. The term nostalgia was coined in the 17th century by the Swiss physician Johaness Hofer (1688/1934), but references to the emotion it denotes can be found in Hippocrates, Caesar, and the Bible. HISTORICAL AND MODERN CONCEPTIONS OF NOSTALGIA From the outset, nostalgia was equated with homesickness. It was also considered a bad omen. In the 17th and 18th centuries, speculation about nostalgia was based on observations of Swiss mercenaries in the service of European monarchs. Nostalgia was regarded as a medical disease confined to the Swiss, a view that persisted through most of the 19th century. Symptoms— including bouts of weeping, irregular heartbeat, and anorexia— were attributed variously to demons inhabiting the middle brain, sharp differentiation in atmospheric pressure wreaking havoc in the brain, or the unremitting clanging of cowbells in the Swiss Alps, which damaged the eardrum and brain cells. By the beginning of the 20th century, nostalgia was regarded as a psychiatric disorder. Symptoms included anxiety, sadness, and insomnia. By the mid-20th century, psychodynamic approaches considered nostalgia a subconscious desire to return to an earlier life stage, and it was labeled as a repressive compulsive disorder. Soon thereafter, nostalgia was downgraded to a variant of depression, marked by loss and grief, though still equated with homesickness (for a historical review of nostalgia, see Sedikides, Wildschut, & Baden, 2004). By the late 20th century, there were compelling reasons for nostalgia and homesickness to finally part ways. Adult participants regard nostalgia as different from homesickness. 
For example, they associate the words warm, old times, childhood, and yearning more frequently with nostalgia than with homesickness (Davis, 1979). Furthermore, whereas homesickness research focused on the psychological problems (e.g., separation anxiety) that can arise when young people transition beyond the home environment, nostalgia transcends social groups and age. For example, nostalgia is found cross-culturally and among wellfunctioning adults, children, and dementia patients (Sedikides et al., 2004; Sedikides, Wildschut, Routledge, & Arndt, 2008; Zhou, Sedikides, Wildschut, & Gao, in press). Finally, although homesickness refers to one’s place of origin, nostalgia can refer Address correspondence to Constantine Sedikides, Center for Research on Self and Identity, School of Psychology, University of Southampton, Southampton SO17 1BJ, England, U.K.; e-mail: cs2@soton.ac.uk. CURRENT DIRECTIONS IN PSYCHOLOGICAL SCIENCE 304 Volume 17—Number 5 Copyright r 2008 Association for Psychological Science to a variety of objects (e.g., persons, events, places; Wildschut, Sedikides, Arndt, & Routledge, 2006). It is in this light that we note the contemporary definition of nostalgia as a sentimental longing for one’s past. It is, moreover, a sentimentality that is pervasively experienced. Over 80% of British undergraduates reported experiencing nostalgia at least once a week (Wildschut et al., 2006). Given this apparent ubiquity, the time has come for an empirical foray into the content, causes, and functions of this emotion. THE EMPIRICAL BASIS FOR UNDERSTANDING NOSTALGIA The Canvas of Nostalgia What is the content of the nostalgic experience? Wildschut et al. (2006) analyzed the content of narratives submitted voluntarily by (American and Canadian) readers to the periodical Nostalgia. Also, Wildschut et al. asked British undergraduates to write a narrative account of a nostalgic experience. These narratives were also analyzed for content. Across both studies, the most frequently listed objects of nostalgic reverie were close others (family members, friends, partners), momentous events (birthdays, vacations), and settings (sunsets, lakes). Nostalgia has been conceptualized variously as a negative, ambivalent, or positive emotion (Sedikides et al., 2004). These conceptualizations were put to test. In a study by Wildschut, Stephan, Sedikides, Routledge, and Arndt (2008), British and American undergraduates wrote narratives about a ‘‘nostalgic event’’ (vs. an ‘‘ordinary event’’) in their lives and reflected briefly upon the event and how it made them feel. Content analysis revealed that the simultaneous expression of happiness and sadness was more common in narratives of nostalgic events than in narratives of ordinary events. Also in Wildschut et al., British undergraduates wrote about a nostalgic (vs. ordinary vs. simply positive) event in their lives and then rated their happiness and sadness. Although the recollection of ordinary and positive events rarely gave rise to both happiness and sadness, such coactivation occurred much more frequently following the recollection of a nostalgic event. Yet, nostalgic events featured more frequent expressions of happiness than of sadness and induced higher levels of happiness than of sadness. Wildschut et al. (2006) obtained additional evidence that nostalgia is mostly a positively toned emotion: The narratives included far more expressions of positive than negative affect. At the same time, though, there was evidence of bittersweetness. 
Many narratives contained descriptions of disappointments and losses, and some touched on such issues as separation and even the death of loved ones. Nevertheless, positive and negative elements were often juxtaposed to create redemption, a narrative pattern that progresses from a negative or undesirable state (e.g., suffering, pain, exclusion) to a positive or desirable state (e.g., acceptance, euphoria, triumph; McAdams, 2001). For example, although a family reunion started badly (e.g., an uncle insulting the protagonist), it nevertheless ended well (e.g., the family singing together after dinner). The strength of the redemption theme may explain why, despite the descriptions of sorrow, the overall affective signature of the nostalgic narratives was positive. Moreover, Wildschut et al. (2006) showed that nostalgia is a self-relevant and social emotion: The self almost invariably figured as the protagonist in the narratives and was almost always surrounded by close others. In all, the canvas of nostalgia is rich, reflecting themes of selfhood, sociality, loss, redemption, and ambivalent, yet mostly positive, affectivity. The Triggers of Nostalgia Wildschut et al. (2006) asked participants to describe when they become nostalgic. The most frequently reported trigger was negative affect (‘‘I think of nostalgic experiences when I am sad as they often make me feel better’’), and, within this category, loneliness was the most frequently reported discrete affective state (‘‘If I ever feel lonely or sad I tend to think of my friends or family who I haven’t seen in a long time’’). Given these initial reports, Wildschut et al. proceeded to test whether indeed negative mood and loneliness qualify as nostalgia triggers. British undergraduates read one of three news stories, each based on actual events, that were intended to influence their mood. In the negative-mood condition, they read about the Tsunami that struck coastal regions in Asia and Africa in December 2004. In the neutral-mood condition, they read about the January 2005 landing of the Huygens probe on Titan. In the positive-mood condition, they read about the November 2004 birth of a polar bear, ostensibly in the London Zoo (actually in the Detroit Zoo). Then they completed a measure of nostalgia, rating the extent to which they missed 18 aspects of their past (e.g., ‘‘holidays I went on,’’ ‘‘past TV shows, movies,’’ ‘‘someone I loved’’). Participants in the negativemood condition were more nostalgic (i.e., missed more aspects of their past) than were participants in the other two conditions. In another study, loneliness was successfully induced by giving participants false (high vs. low) feedback on a ‘‘loneliness’’ test (i.e., they were led to believe they were either lonely or not lonely based on the feedback). Subsequently, participants rated how much they missed 18 aspects of their past. Participants in the high-loneliness condition were more nostalgic than those in the low-loneliness condition. These findings were re",
"title": ""
},
{
"docid": "e8abf8e4cd087cf3b77ae6a024e95971",
"text": "Cloud computing has been emerged in the last decade to enable utility-based computing resource management without purchasing hardware equipment. Cloud providers run multiple data centers in various locations to manage and provision the Cloud resources to their customers. More recently, the introduction of Software-Defined Networking (SDN) and Network Function Virtualization (NFV) opens more opportunities in Clouds which enables dynamic and autonomic configuration and provisioning of the resources in Cloud data centers. This paper proposes architectural framework and principles for Programmable Network Clouds hosting SDNs and NFVs for geographically distributed MultiCloud computing environments. Cost and SLA-aware resource provisioning and scheduling that minimizes the operating cost without violating the negotiated SLAs are investigated and discussed in regards of techniques for autonomic and timely VNF composition, deployment and management across multiple Clouds. We also discuss open challenges and directions for creating auto-scaling solutions for performance optimization of VNFs using analytics and monitoring techniques, algorithms for SDN controller for scalable traffic and deployment management. The simulation platform and the proof-of-concept prototype are presented with initial evaluation results.",
"title": ""
},
{
"docid": "1592dc2c81d9d6b9c58cc1a5b530c923",
"text": "We propose a cloudlet network architecture to bring the computing resources from the centralized cloud to the edge. Thus, each User Equipment (UE) can communicate with its Avatar, a software clone located in a cloudlet, and can thus lower the end-to-end (E2E) delay. However, UEs are moving over time, and so the low E2E delay may not be maintained if UEs' Avatars stay in their original cloudlets. Thus, live Avatar migration (i.e., migrating a UE's Avatar to a suitable cloudlet based on the UE's location) is enabled to maintain the low E2E delay between each UE and its Avatar. On the other hand, the migration itself incurs extra overheads in terms of resources of the Avatar, which compromise the performance of applications running in the Avatar. By considering the gain (i.e., the E2E delay reduction) and the cost (i.e., the migration overheads) of the live Avatar migration, we propose a PRofIt Maximization Avatar pLacement (PRIMAL) strategy for the cloudlet network in order to optimize the tradeoff between the migration gain and the migration cost by selectively migrating the Avatars to their optimal locations. Simulation results demonstrate that as compared to the other two strategies (i.e., Follow Me Avatar and Static), PRIMAL maximizes the profit in terms of maintaining the low average E2E delay between UEs and their Avatars and minimizing the migration cost simultaneously.",
"title": ""
},
{
"docid": "f9468884fd24ff36b81fc2016a519634",
"text": "We study a new variant of Arikan's successive cancellation decoder (SCD) for polar codes. We first propose a new decoding algorithm on a new decoder graph, where the various stages of the graph are permuted. We then observe that, even though the usage of the permuted graph doesn't affect the encoder, it can significantly affect the decoding performance of a given polar code. The new permuted successive cancellation decoder (PSCD) typically exhibits a performance degradation, since the polar code is optimized for the standard SCD. We then present a new polar code construction rule matched to the PSCD and show their performance in simulations. For all rates we observe that the polar code matched to a given PSCD performs the same as the original polar code with the standard SCD. We also see that a PSCD with a reversal permutation can lead to a natural decoding order, avoiding the standard bit-reversal decoding order in SCD without any loss in performance.",
"title": ""
},
{
"docid": "c62a2280367b4d7c6a715c92a9696bae",
"text": "OBJECTIVES\nPain assessment is essential to tailor intensive care of neonates. The present focus is on acute procedural pain; assessment of pain of longer duration remains a challenge. We therefore tested a modified version of the COMFORT-behavior scale-named COMFORTneo-for its psychometric qualities in the Neonatal Intensive Care Unit setting.\n\n\nMETHODS\nIn a clinical observational study, nurses assessed patients with COMFORTneo and Numeric Rating Scales (NRS) for pain and distress, respectively. Interrater reliability, concurrent validity, and sensitivity to change were calculated as well as sensitivity and specificity for different cut-off scores for subsets of patients.\n\n\nRESULTS\nInterrater reliability was good: median linearly weighted Cohen kappa 0.79. Almost 3600 triple ratings were obtained for 286 neonates. Internal consistency was good (Cronbach alpha 0.84 and 0.88). Concurrent validity was demonstrated by adequate and good correlations, respectively, with NRS-pain and NRS-distress: r=0.52 (95% confidence interval 0.44-0.59) and r=0.70 (95% confidence interval 0.64-0.75). COMFORTneo cut-off scores of 14 or higher (score range is 6 to 30) had good sensitivity and specificity (0.81 and 0.90, respectively) using NRS-pain or NRS-distress scores of 4 or higher as criterion.\n\n\nDISCUSSION\nThe COMFORTneo showed preliminary reliability. No major differences were found in cut-off values for low birth weight, small for gestational age, neurologic impairment risk levels, or sex. Multicenter studies should focus on establishing concurrent validity with other instruments in a patient group with a high probability of ongoing pain.",
"title": ""
},
{
"docid": "d730fb49b7b6f971593e7e116e0c48bf",
"text": "Modern image and video compression techniques today offer the possibility to store or transmit the vast amount of data necessary to represent digital images and video in an efficient and robust way. New audio visual applications in the field of communication, multimedia and broadcasting became possible based on digital video coding technology. As manifold as applications for image coding are today, as manifold are the different approaches and algorithms and were the first hardware implementations and even systems in the commercial field, such as private teleconferencing systems [chen, hal]. However, with the advances in VLSI-technology it became possible to open more application fields to a larger number of users and therefore the necessity for video coding standards arose. Commercially, international standardization of video communication systems and protocols aims to serve two important purposes: interoperability and economy of scale. Interworking between video communication equipment from different vendors is a desirable feature for users and equipment manufactures alike. It increases the attractiveness for buying and using video",
"title": ""
},
{
"docid": "f175e9c17aa38a17253de2663c4999f1",
"text": "As we increasingly rely on computers to process and manage our personal data, safeguarding sensitive information from malicious hackers is a fast growing concern. Among many forms of information leakage, covert timing channels operate by establishing an illegitimate communication channel between two processes and through transmitting information via timing modulation, thereby violating the underlying system's security policy. Recent studies have shown the vulnerability of popular computing environments, such as cloud computing, to these covert timing channels. In this work, we propose a new micro architecture-level framework, CC-Hunter, that detects the possible presence of covert timing channels on shared hardware. Our experiments demonstrate that Chanter is able to successfully detect different types of covert timing channels at varying bandwidths and message patterns.",
"title": ""
},
{
"docid": "abda350daca4705e661d8e59a6946e08",
"text": "Concept definition is important in language understanding (LU) adaptation since literal definition difference can easily lead to data sparsity even if different data sets are actually semantically correlated. To address this issue, in this paper, a novel concept transfer learning approach is proposed. Here, substructures within literal concept definition are investigated to reveal the relationship between concepts. A hierarchical semantic representation for concepts is proposed, where a semantic slot is represented as a composition of atomic concepts. Based on this new hierarchical representation, transfer learning approaches are developed for adaptive LU. The approaches are applied to two tasks: value set mismatch and domain adaptation, and evaluated on two LU benchmarks: ATIS and DSTC 2&3. Thorough empirical studies validate both the efficiency and effectiveness of the proposed method. In particular, we achieve state-ofthe-art performance (F1-score 96.08%) on ATIS by only using lexicon features.",
"title": ""
},
{
"docid": "ac43f790e48424bece26439799654624",
"text": "A scheme of evaluating an impact of a given scientific paper based on importance of papers quoting it is investigated. Introducing a weight of a given citation, dependent on the previous scientific achievements of the author of the citing paper, we define the weighting factor of a given scientist. Technically the weighting factors are defined by the components of the normalized leading eigenvector of the matrix describing the citation graph. The weighting factor of a given scientist, reflecting the scientific output of other researchers quoting his work, allows us to define weighted number of citation of a given paper, weighted impact factor of a journal and weighted Hirsch index of an individual scientist or of an entire scientific institution.",
"title": ""
},
{
"docid": "bd88c04b8862f699e122e248ef416963",
"text": "Optic ataxia is a high-order deficit in reaching to visual goals that occurs with posterior parietal cortex (PPC) lesions. It is a component of Balint's syndrome that also includes attentional and gaze disorders. Aspects of optic ataxia are misreaching in the contralesional visual field, difficulty preshaping the hand for grasping, and an inability to correct reaches online. Recent research in nonhuman primates (NHPs) suggests that many aspects of Balint's syndrome and optic ataxia are a result of damage to specific functional modules for reaching, saccades, grasp, attention, and state estimation. The deficits from large lesions in humans are probably composite effects from damage to combinations of these functional modules. Interactions between these modules, either within posterior parietal cortex or downstream within frontal cortex, may account for more complex behaviors such as hand-eye coordination and reach-to-grasp.",
"title": ""
},
{
"docid": "e668eddaa2cec83540a992e09e0be368",
"text": "The increasing number of attacks on internet-based systems calls for security measures on behalf those systems’ operators. Beside classical methods and tools for penetration testing, there exist additional approaches using publicly available search engines. We present an alternative approach using contactless vulnerability analysis with both classical and subject-specific search engines. Based on an extension and combination of their functionality, this approach provides a method for obtaining promising results for audits of IT systems, both quantitatively and qualitatively. We evaluate our approach and confirm its suitability for a timely determination of vulnerabilities in large-scale networks. In addition, the approach can also be used to perform vulnerability analyses of network areas or domains in unclear legal situations.",
"title": ""
}
] |
scidocsrr
|
7a19961d855f60ec771d26910b2c92e5
|
Names and Similarities on the Web: Fact Extraction in the Fast Lane
|
[
{
"docid": "55891ffb1281d3215f6d36e2a9f6ff0b",
"text": "The TREC-8 Question Answering (QA) Track was the first large-scale evaluation of domain-independent question answering systems. In addition to fostering research on the QA task, the track was used to investigate whether the evaluation methodology used for document retrieval is appropriate for a different natural language processing task. As with document relevance judging, assessors had legitimate differences of opinions as to whether a response actually answers a question, but comparative evaluation of QA systems was stable despite these differences. Creating a reusable QA test collection is fundamentally more difficult than creating a document retrieval test collection since the QA task has no equivalent to document identifiers.",
"title": ""
}
] |
[
{
"docid": "9c30ef5826b413bab262b7a0884eb119",
"text": "In this survey paper, we review recent uses of convolution neural networks (CNNs) to solve inverse problems in imaging. It has recently become feasible to train deep CNNs on large databases of images, and they have shown outstanding performance on object classification and segmentation tasks. Motivated by these successes, researchers have begun to apply CNNs to the resolution of inverse problems such as denoising, deconvolution, super-resolution, and medical image reconstruction, and they have started to report improvements over state-of-the-art methods, including sparsity-based techniques such as compressed sensing. Here, we review the recent experimental work in these areas, with a focus on the critical design decisions: Where does the training data come from? What is the architecture of the CNN? and How is the learning problem formulated and solved? We also bring together a few key theoretical papers that offer perspective on why CNNs are appropriate for inverse problems and point to some next steps in the field.",
"title": ""
},
{
"docid": "b5e0faba5be394523d10a130289514c2",
"text": "Child neglect results from either acts of omission or of commission. Fatalities from neglect account for 30% to 40% of deaths caused by child maltreatment. Deaths may occur from failure to provide the basic needs of infancy such as food or medical care. Medical care may also be withheld because of parental religious beliefs. Inadequate supervision may contribute to a child's injury or death through adverse events involving drowning, fires, and firearms. Recognizing the factors contributing to a child's death is facilitated by the action of multidisciplinary child death review teams. As with other forms of child maltreatment, prevention and early intervention strategies are needed to minimize the risk of injury and death to children.",
"title": ""
},
{
"docid": "0187dd662caa70268e1d147c20344716",
"text": "Heterotaxy is a disorder of left–right body patterning, or laterality, that is associated with major congenital heart disease. The aetiology and mechanisms underlying most cases of human heterotaxy are poorly understood. In vertebrates, laterality is initiated at the embryonic left–right organizer, where motile cilia generate leftward flow that is detected by immotile sensory cilia, which transduce flow into downstream asymmetric signals. The mechanism that specifies these two cilia types remains unknown. Here we show that the N-acetylgalactosamine-type O-glycosylation enzyme GALNT11 is crucial to such determination. We previously identified GALNT11 as a candidate disease gene in a patient with heterotaxy, and now demonstrate, in Xenopus tropicalis, that galnt11 activates Notch signalling. GALNT11 O-glycosylates human NOTCH1 peptides in vitro, thereby supporting a mechanism of Notch activation either by increasing ADAM17-mediated ectodomain shedding of the Notch receptor or by modification of specific EGF repeats. We further developed a quantitative live imaging technique for Xenopus left–right organizer cilia and show that Galnt11-mediated Notch1 signalling modulates the spatial distribution and ratio of motile and immotile cilia at the left–right organizer. galnt11 or notch1 depletion increases the ratio of motile cilia at the expense of immotile cilia and produces a laterality defect reminiscent of loss of the ciliary sensor Pkd2. By contrast, Notch overexpression decreases this ratio, mimicking the ciliopathy primary ciliary dyskinesia. Together our data demonstrate that Galnt11 modifies Notch, establishing an essential balance between motile and immotile cilia at the left–right organizer to determine laterality, and reveal a novel mechanism for human heterotaxy.",
"title": ""
},
{
"docid": "6ae739344034410a570b12a57db426e3",
"text": "In recent times we tend to use a number of surveillance systems for monitoring the targeted area. This requires an enormous amount of storage space along with a lot of human power in order to implement and monitor the area under surveillance. This is supposed to be costly and not a reliable process. In this paper we propose an intelligent surveillance system that continuously monitors the targeted area and detects motion in each and every frame. If the system detects motion in the targeted area then a notification is automatically sent to the user by sms and the video starts getting recorded till the motion is stopped. Using this method the required memory space for storing the video is reduced since it doesn't store the entire video but stores the video only when a motion is detected. This is achieved by using real time video processing using open CV (computer vision / machine vision) technology and raspberry pi system.",
"title": ""
},
{
"docid": "3663322ebe405b5e9d588ccdf305da02",
"text": "In this demonstration paper, we present gRecs, a system for group recommendations that follows a collaborative strategy. We enhance recommendations with the notion of support to model the confidence of the recommendations. Moreover, we propose partitioning users into clusters of similar ones. This way, recommendations for users are produced with respect to the preferences of their cluster members without extensively searching for similar users in the whole user base. Finally, we leverage the power of a top-k algorithm for locating the top-k group recommendations.",
"title": ""
},
{
"docid": "70a69feceaeeef1c622669047e6b1ab9",
"text": "Internet of things (IoT) that integrate a variety of devices into networks to provide advanced and intelligent services have to protect user privacy and address attacks such as spoofing attacks, denial of service attacks, jamming and eavesdropping. In this article, we investigate the attack model for IoT systems, and review the IoT security solutions based on machine learning techniques including supervised learning, unsupervised learning and reinforcement learning. We focus on the machine learning based IoT authentication, access control, secure offloading and malware detection schemes to protect data privacy. In this article, we discuss the challenges that need to be addressed to implement these machine learning based security schemes in practical IoT systems.",
"title": ""
},
{
"docid": "ce5c0f59953e8672da5e413230c4d8d2",
"text": "Multivariate volumetric datasets are often encountered in results generated by scientific simulations. Compared to univariate datasets, analysis and visualization of multivariate datasets are much more challenging due to the complex relationships among the variables. As an effective way to visualize and analyze multivariate datasets, volume rendering has been frequently used, although designing good multivariate transfer functions is still non-trivial. In this paper, we present an interactive workflow to allow users to design multivariate transfer functions. To handle large scale datasets, in the preprocessing stage we reduce the number of data points through data binning and aggregation, and then a new set of data points with a much smaller size are generated. The relationship between all pairs of variables is presented in a matrix juxtaposition view, where users can navigate through the different subspaces. An entropy based method is used to help users to choose which subspace to explore. We proposed two weights: scatter weight and size weight that are associated with each projected point in those different subspaces. Based on those two weights, data point filter and kernel density estimation operations are employed to assist users to discover interesting features. For each user-selected feature, a Gaussian function is constructed and updated incrementally. Finally, all those selected features are visualized through multivariate volume rendering to reveal the structure of data. With our system, users can interactively explore different subspaces and specify multivariate transfer functions in an effective way. We demonstrate the effectiveness of our system with several multivariate volumetric datasets.",
"title": ""
},
{
"docid": "0224c1abc7084ce3e68f1c6ceb5d5ece",
"text": "A useful way of understanding personality traits is to examine the motivational nature of a trait because motives drive behaviors and influence attitudes. In two cross-sectional, self-report studies (N=942), we examined the relationships between fundamental social motives and dark personality traits (i.e., narcissism, psychopathy, sadism, spitefulness, and Machiavellianism) and examined the role of childhood socio-ecological conditions (Study 2 only). For example, we found that Machiavellianism and psychopathy were negatively associated with motivations that involved developing and maintaining good relationships with others. Sex differences in the darker aspects of personality were a function of, at least in part, fundamental social motives such as the desire for status. Fundamental social motives mediated the associations that childhood socio-ecological conditions had with the darker aspects of personality. Our results showed how motivational tendencies in men and women may provide insights into alternative life history strategies reflected in dark personality traits.",
"title": ""
},
{
"docid": "f0813fe6b6324e1056dc19a5259d9538",
"text": "Plant disease detection is emerging field in India as agriculture is important sector in Economy and Social life. Earlier unscientific methods were in existence. Gradually with technical and scientific advancement, more reliable methods through lowest turnaround time are developed and proposed for early detection of plant disease. Such techniques are widely used and proved beneficial to farmers as detection of plant disease is possible with minimal time span and corrective actions are carried out at appropriate time. In this paper, we studied and evaluated existing techniques for detection of plant diseases to get clear outlook about the techniques and methodologies followed. The detection of plant disease is significantly based on type of family plants and same is carried out in two phases as segmentation and classification. Here, we have discussed existing segmentation method along with classifiers for detection of diseases in Monocot and Dicot family plant.",
"title": ""
},
{
"docid": "ee58216dd7e3a0d8df8066703b763187",
"text": "Extraction of discriminative features from salient facial patches plays a vital role in effective facial expression recognition. The accurate detection of facial landmarks improves the localization of the salient patches on face images. This paper proposes a novel framework for expression recognition by using appearance features of selected facial patches. A few prominent facial patches, depending on the position of facial landmarks, are extracted which are active during emotion elicitation. These active patches are further processed to obtain the salient patches which contain discriminative features for classification of each pair of expressions, thereby selecting different facial patches as salient for different pair of expression classes. One-against-one classification method is adopted using these features. In addition, an automated learning-free facial landmark detection technique has been proposed, which achieves similar performances as that of other state-of-art landmark detection methods, yet requires significantly less execution time. The proposed method is found to perform well consistently in different resolutions, hence, providing a solution for expression recognition in low resolution images. Experiments on CK+ and JAFFE facial expression databases show the effectiveness of the proposed system.",
"title": ""
},
{
"docid": "09a8aee1ff3315562c73e5176a870c37",
"text": "In a sparse-representation-based face recognition scheme, the desired dictionary should have good representational power (i.e., being able to span the subspace of all faces) while supporting optimal discrimination of the classes (i.e., different human subjects). We propose a method to learn an over-complete dictionary that attempts to simultaneously achieve the above two goals. The proposed method, discriminative K-SVD (D-KSVD), is based on extending the K-SVD algorithm by incorporating the classification error into the objective function, thus allowing the performance of a linear classifier and the representational power of the dictionary being considered at the same time by the same optimization procedure. The D-KSVD algorithm finds the dictionary and solves for the classifier using a procedure derived from the K-SVD algorithm, which has proven efficiency and performance. This is in contrast to most existing work that relies on iteratively solving sub-problems with the hope of achieving the global optimal through iterative approximation. We evaluate the proposed method using two commonly-used face databases, the Extended YaleB database and the AR database, with detailed comparison to 3 alternative approaches, including the leading state-of-the-art in the literature. The experiments show that the proposed method outperforms these competing methods in most of the cases. Further, using Fisher criterion and dictionary incoherence, we also show that the learned dictionary and the corresponding classifier are indeed better-posed to support sparse-representation-based recognition.",
"title": ""
},
{
"docid": "049f780d2d9aacb7fc48eb9ea6c49331",
"text": "The application of a dynamic voltage restorer (DVR) connected to a wind-turbine-driven doubly fed induction generator (DFIG) is investigated. The setup allows the wind turbine system an uninterruptible fault ride-through of voltage dips. The DVR can compensate the faulty line voltage, while the DFIG wind turbine can continue its nominal operation as demanded in actual grid codes. Simulation results for a 2 MW wind turbine and measurement results on a 22 kW laboratory setup are presented, especially for asymmetrical grid faults. They show the effectiveness of the DVR in comparison to the low-voltage ride-through of the DFIG using a crowbar that does not allow continuous reactive power production.",
"title": ""
},
{
"docid": "0f9b073461047d698b6bba8d9ee7bff2",
"text": "Different psychotherapeutic theories provide contradictory accounts of adult narcissism as the product of either parental coldness or excessive parental admiration during childhood. Yet, none of these theories has been tested systematically in a nonclinical sample. The authors compared four structural equation models predicting overt and covert narcissism among 120 United Kingdom adults. Both forms of narcissism were predicted by both recollections of parental coldness and recollections of excessive parental admiration. Moreover, a suppression relationship was detected between these predictors: The effects of each were stronger when modeled together than separately. These effects were found after controlling for working models of attachment; covert narcissism was predicted also by attachment anxiety. This combination of childhood experiences may help to explain the paradoxical combination of grandiosity and fragility in adult narcissism.",
"title": ""
},
{
"docid": "f07acc25bbe54043dc0ecaec30a787c6",
"text": "The Alzheimer's Disease Neuroimaging Initiative (ADNI) is an ongoing, longitudinal, multicenter study designed to develop clinical, imaging, genetic, and biochemical biomarkers for the early detection and tracking of Alzheimer's disease (AD). The study aimed to enroll 400 subjects with early mild cognitive impairment (MCI), 200 subjects with early AD, and 200 normal control subjects; $67 million funding was provided by both the public and private sectors, including the National Institute on Aging, 13 pharmaceutical companies, and 2 foundations that provided support through the Foundation for the National Institutes of Health. This article reviews all papers published since the inception of the initiative and summarizes the results as of February 2011. The major accomplishments of ADNI have been as follows: (1) the development of standardized methods for clinical tests, magnetic resonance imaging (MRI), positron emission tomography (PET), and cerebrospinal fluid (CSF) biomarkers in a multicenter setting; (2) elucidation of the patterns and rates of change of imaging and CSF biomarker measurements in control subjects, MCI patients, and AD patients. CSF biomarkers are consistent with disease trajectories predicted by β-amyloid cascade (Hardy, J Alzheimers Dis 2006;9(Suppl 3):151-3) and tau-mediated neurodegeneration hypotheses for AD, whereas brain atrophy and hypometabolism levels show predicted patterns but exhibit differing rates of change depending on region and disease severity; (3) the assessment of alternative methods of diagnostic categorization. Currently, the best classifiers combine optimum features from multiple modalities, including MRI, [(18)F]-fluorodeoxyglucose-PET, CSF biomarkers, and clinical tests; (4) the development of methods for the early detection of AD. CSF biomarkers, β-amyloid 42 and tau, as well as amyloid PET may reflect the earliest steps in AD pathology in mildly symptomatic or even nonsymptomatic subjects, and are leading candidates for the detection of AD in its preclinical stages; (5) the improvement of clinical trial efficiency through the identification of subjects most likely to undergo imminent future clinical decline and the use of more sensitive outcome measures to reduce sample sizes. Baseline cognitive and/or MRI measures generally predicted future decline better than other modalities, whereas MRI measures of change were shown to be the most efficient outcome measures; (6) the confirmation of the AD risk loci CLU, CR1, and PICALM and the identification of novel candidate risk loci; (7) worldwide impact through the establishment of ADNI-like programs in Europe, Asia, and Australia; (8) understanding the biology and pathobiology of normal aging, MCI, and AD through integration of ADNI biomarker data with clinical data from ADNI to stimulate research that will resolve controversies about competing hypotheses on the etiopathogenesis of AD, thereby advancing efforts to find disease-modifying drugs for AD; and (9) the establishment of infrastructure to allow sharing of all raw and processed data without embargo to interested scientific investigators throughout the world. The ADNI study was extended by a 2-year Grand Opportunities grant in 2009 and a renewal of ADNI (ADNI-2) in October 2010 through to 2016, with enrollment of an additional 550 participants.",
"title": ""
},
{
"docid": "5e71dbae22dabf2f6c25e5db46fb01ed",
"text": "A Hamiltonian walk of a connected graph is a shortest closed walk that passes through every vertex at least once, and the length of a Hamiltonian walk is the total number of edges traversed by the walk. We show that every maximal planar graph with p ( 2 3) vertices has a Hamiltonian cycle or a Hamiltonian walk of length 5 3(p 3)/2.",
"title": ""
},
{
"docid": "013325b5f83e73efdbaa2d0b9ac14afb",
"text": "Electricity prices are known to be very volatile and subject to frequent jumps due to system breakdown, demand shocks, and inelastic supply. Appropriate pricing, portfolio, and risk management models should incorporate these spikes. We develop a framework to price European-style options that are consistent with the possibility of market spikes. The pricing framework is based on a regime jump model that disentangles mean-reversion from the spikes. In the model the spikes are truly time-specific events and therefore independent from the meanreverting price process. This closely resembles the characteristics of electricity prices, as we show with Dutch APX spot price data in the period January 2001 till June 2002. Thanks to the independence of the two price processes in the model, we break derivative prices down in a mean-reverting value and a spike value. We use this result to show how the model can be made consistent with forward prices in the market and present closed-form formulas for European-style options. 5001-6182 Business 5601-5689 4001-4280.7 Accountancy, Bookkeeping Finance Management, Business Finance, Corporation Finance Library of Congress Classification (LCC) HG 6024+ Options M Business Administration and Business Economics M 41 G 3 Accounting Corporate Finance and Governance Journal of Economic Literature (JEL) G 19 General Financial Markets: Other 85 A Business General 225 A 220 A Accounting General Financial Management European Business Schools Library Group (EBSLG) 220 R Options market Gemeenschappelijke Onderwerpsontsluiting (GOO) 85.00 Bedrijfskunde, Organisatiekunde: algemeen 85.25 85.30 Accounting Financieel management, financiering Classification GOO 85.30 Financieel management, financiering Bedrijfskunde / Bedrijfseconomie Accountancy, financieel management, bedrijfsfinanciering, besliskunde",
"title": ""
},
{
"docid": "b6d8e6b610eff993dfa93f606623e31d",
"text": "Data journalism designates journalistic work inspired by digital data sources. A particularly popular and active area of data journalism is concerned with fact-checking. The term was born in the journalist community and referred the process of verifying and ensuring the accuracy of published media content; since 2012, however, it has increasingly focused on the analysis of politics, economy, science, and news content shared in any form, but first and foremost on the Web (social and otherwise). These trends have been noticed by computer scientists working in the industry and academia. Thus, a very lively area of digital content management research has taken up these problems and works to propose foundations (models), algorithms, and implement them through concrete tools. Our tutorial: (i) Outlines the current state of affairs in the area of digital (or computational) fact-checking in newsrooms, by journalists, NGO workers, scientists and IT companies; (ii) Shows which areas of digital content management research, in particular those relying on the Web, can be leveraged to help fact-checking, and gives a comprehensive survey of efforts in this area; (iii) Highlights ongoing trends, unsolved problems, and areas where we envision future scientific and practical advances. PVLDB Reference Format: S. Cazalens, J. Leblay, P. Lamarre, I. Manolescu, X. Tannier. Computational Fact Checking: A Content Management Perspective. PVLDB, 11 (12): 2110-2113, 2018. DOI: https://doi.org/10.14778/3229863.3229880 This work is licensed under the Creative Commons AttributionNonCommercial-NoDerivatives 4.0 International License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-nd/4.0/. For any use beyond those covered by this license, obtain permission by emailing info@vldb.org. Proceedings of the VLDB Endowment, Vol. 11, No. 12 Copyright 2018 VLDB Endowment 2150-8097/18/8. DOI: https://doi.org/10.14778/3229863.3229880 1. OUTLINE In Section 1.1, we provide a short history of journalistic fact-checking and presents its most recent and visible actors, from the media and/or NGO communities. Section 1.2 discusses the scientific content management areas which bring useful tools for computational fact-checking. 1.1 Data journalism and fact-checking While data of some form is a natural ingredient of all reporting, the increasing volumes and complexity of digital data lead to a qualitative jump, where technical skills, and in particular data science skills, are stringently needed in journalistic work. A particularly popular and active area of data journalism is concerned with fact-checking. The term was born in the journalist community; it referred to the task of identifying and checking factual claims present in media content, which dedicated newsroom personnel would then check for factual accuracy. The goal of such checking was to avoid misinformation, to protect the journal reputation and avoid legal actions. Starting around 2012, first in the United States (FactCheck.org), then in Europe, and soon after in all areas of the world, journalists have started to take advantage of modern technologies for processing content, such as text, video, structured and unstructured data, in order to automate, at least partially, the knowledge finding, reasoning, and analysis tasks which had been previously performed completely by humans. Over time, the focus of fact-checking shifted from verifying claims made by media outlets, toward the claims made by politicians and other public figures. 
This trend coincided with the parallel (but distinct) evolution toward asking for Government Open Data, that is, the idea that governing bodies should share with the public precise information describing their functioning, so that people have the means to assess the quality of their elected representatives. Government Open Data quickly became available in large volumes, e.g., through data.gov in the US, data.gov.uk in the UK, data.gouv.fr in France, etc.; journalists turned out to be the missing link between the newly available data and comprehension by the public.",
"title": ""
},
{
"docid": "76375aa50ebe8388d653241ba481ecd2",
"text": "Sequential learning of tasks using gradient descent leads to an unremitting decline in the accuracy of tasks for which training data is no longer available, termed catastrophic forgetting. Generative models have been explored as a means to approximate the distribution of old tasks and bypass storage of real data. Here we propose a cumulative closed-loop generator and embedded classifier using an AC-GAN architecture provided with external regularization by a small buffer. We evaluate incremental learning using a notoriously hard paradigm, “single headed learning,” in which each task is a disjoint subset of classes in the overall dataset, and performance is evaluated on all previous classes. First, we show that the variability contained in a small percentage of a dataset (memory buffer) accounts for a significant portion of the reported accuracy, both in multi-task and continual learning settings. Second, we show that using a generator to continuously output new images while training provides an up-sampling of the buffer, which prevents catastrophic forgetting and yields superior performance when compared to a fixed buffer. We achieve an average accuracy for all classes of 92.26% in MNIST and 76.15% in FASHION-MNIST after 5 tasks using GAN sampling with a buffer of only 0.17% of the entire dataset size. We compare to a network with regularization (EWC) which shows a deteriorated average performance of 29.19% (MNIST) and 26.5% (FASHION). The baseline of no regularization (plain gradient descent) performs at 99.84% (MNIST) and 99.79% (FASHION) for the last task, but below 3% for all previous tasks. Our method has very low long-term memory cost, the buffer, as well as negligible intermediate memory storage.",
"title": ""
},
{
"docid": "28cf177349095e7db4cdaf6c9c4a6cb1",
"text": "Neural Architecture Search aims at automatically finding neural architectures that are competitive with architectures designed by human experts. While recent approaches have achieved state-of-the-art predictive performance for image recognition, they are problematic under resource constraints for two reasons: (1) the neural architectures found are solely optimized for high predictive performance, without penalizing excessive resource consumption; (2) most architecture search methods require vast computational resources. We address the first shortcoming by proposing LEMONADE, an evolutionary algorithm for multi-objective architecture search that allows approximating the entire Pareto-front of architectures under multiple objectives, such as predictive performance and number of parameters, in a single run of the method. We address the second shortcoming by proposing a Lamarckian inheritance mechanism for LEMONADE which generates children networks that are warmstarted with the predictive performance of their trained parents. This is accomplished by using (approximate) network morphism operators for generating children. The combination of these two contributions allows finding models that are on par or even outperform both hand-crafted as well as automatically-designed networks.",
"title": ""
},
{
"docid": "e56e6fd8620ab8c76abc73c379d1fdd5",
"text": "Article history: Received 7 August 2015 Received in revised form 26 January 2016 Accepted 1 April 2016 Available online 7 April 2016 The emergence of social commerce has brought substantial changes to both businesses and consumers. Hence, understanding consumer behavior in the context of social commerce has become critical for companies that aim to better influence consumers and harness the power of their social ties. Given that research on this issue is new and largely fragmented, it will be theoretically important to evaluate what has been studied and derive meaningful insights through a structured review of the literature. In this study, we conduct a systematic review of social commerce studies to explicate how consumers behave on social networking sites. We classify these studies, discuss noteworthy theories, and identify important research methods. More importantly, we draw upon the stimulus–organism–response model and the five-stage consumer decision-making process to propose an integrative framework for understanding consumer behavior in this context. We believe that this framework can provide a useful basis for future social commerce research. © 2016 Elsevier B.V. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
356e43814fbc7d56ff24b4e399dae0cd
|
An Investigation of Gamification Typologies for Enhancing Learner Motivation
|
[
{
"docid": "372ab07026a861acd50e7dd7c605881d",
"text": "This paper reviews peer-reviewed empirical studies on gamification. We create a framework for examining the effects of gamification by drawing from the definitions of gamification and the discussion on motivational affordances. The literature review covers results, independent variables (examined motivational affordances), dependent variables (examined psychological/behavioral outcomes from gamification), the contexts of gamification, and types of studies performed on the gamified systems. The paper examines the state of current research on the topic and points out gaps in existing literature. The review indicates that gamification provides positive effects, however, the effects are greatly dependent on the context in which the gamification is being implemented, as well as on the users using it. The findings of the review provide insight for further studies as well as for the design of gamified systems.",
"title": ""
},
{
"docid": "84647b51dbbe755534e1521d9d9cf843",
"text": "Social Mediator is a forum exploring the ways that HCI research and principles interact---or might interact---with practices in the social media world.<br /><b><i>Joe McCarthy, Editor</i></b>",
"title": ""
}
] |
[
{
"docid": "9e3d3783aa566b50a0e56c71703da32b",
"text": "Heterogeneous networks are widely used to model real-world semi-structured data. The key challenge of learning over such networks is the modeling of node similarity under both network structures and contents. To deal with network structures, most existing works assume a given or enumerable set of meta-paths and then leverage them for the computation of meta-path-based proximities or network embeddings. However, expert knowledge for given meta-paths is not always available, and as the length of considered meta-paths increases, the number of possible paths grows exponentially, which makes the path searching process very costly. On the other hand, while there are often rich contents around network nodes, they have hardly been leveraged to further improve similarity modeling. In this work, to properly model node similarity in content-rich heterogeneous networks, we propose to automatically discover useful paths for pairs of nodes under both structural and content information. To this end, we combine continuous reinforcement learning and deep content embedding into a novel semi-supervised joint learning framework. Specifically, the supervised reinforcement learning component explores useful paths between a small set of example similar pairs of nodes, while the unsupervised deep embedding component captures node contents and enables inductive learning on the whole network. The two components are jointly trained in a closed loop to mutually enhance each other. Extensive experiments on three real-world heterogeneous networks demonstrate the supreme advantages of our algorithm.",
"title": ""
},
{
"docid": "36c3bd9e1203b9495d92a40c5fa5f2c0",
"text": "A 14-year-old boy presented with asymptomatic right hydronephrosis detected on routine yearly ultrasound examination. Previously, he had at least two normal renal ultrasonograms, 4 years after remission of acute myeloblastic leukemia, treated by AML-BFM-93 protocol. A function of the right kidney and no damage on the left was confirmed by a DMSA scan. Right retroperitoneoscopic nephrectomy revealed 3 renal arteries with the lower pole artery lying on the pelviureteric junction. Histologically chronic tubulointerstitial nephritis was detected. In the pathogenesis of this severe unilateral renal damage, we suspect the exacerbation of deleterious effects of cytostatic therapy on kidneys with intermittent hydronephrosis.",
"title": ""
},
{
"docid": "4b0cf6392d84a0cc8ab80c6ed4796853",
"text": "This paper introduces the Finite-State TurnTaking Machine (FSTTM), a new model to control the turn-taking behavior of conversational agents. Based on a non-deterministic finite-state machine, the FSTTM uses a cost matrix and decision theoretic principles to select a turn-taking action at any time. We show how the model can be applied to the problem of end-of-turn detection. Evaluation results on a deployed spoken dialog system show that the FSTTM provides significantly higher responsiveness than previous approaches.",
"title": ""
},
{
"docid": "0f1f3dc24dda58837db83817bca53c58",
"text": "Deep neural networks have been successfully applied to numerous machine learning tasks because of their impressive feature abstraction capabilities. However, conventional deep networks assume that the training and test data are sampled from the same distribution, and this assumption is often violated in real-world scenarios. To address the domain shift or data bias problems, we introduce layer-wise domain correction (LDC), a new unsupervised domain adaptation algorithm which adapts an existing deep network through additive correction layers spaced throughout the network. Through the additive layers, the representations of source and target domains can be perfectly aligned. The corrections that are trained via maximum mean discrepancy, adapt to the target domain while increasing the representational capacity of the network. LDC requires no target labels, achieves state-of-the-art performance across several adaptation benchmarks, and requires significantly less training time than existing adaptation methods.",
"title": ""
},
{
"docid": "e7e24b5c2a7f1b9ec49099ec1abd2969",
"text": "In this paper, we propose a novel junction detection method in handwritten images, which uses the stroke-length distribution in every direction around a reference point inside the ink of texts. Our proposed junction detection method is simple and efficient, and yields a junction feature in a natural manner, which can be considered as a local descriptor. We apply our proposed junction detector to writer identification by Junclets which is a codebook-based representation trained from the detected junctions. A new challenging data set which contains multiple scripts (English and Chinese) written by the same writers is introduced to evaluate the performance of the proposed junctions for cross-script writer identification. Furthermore, two other common data sets are used to evaluate our junction-based descriptor. Experimental results show that our proposed junction detector is stable under rotation and scale changes, and the performance of writer identification indicates that junctions are important atomic elements to characterize the writing styles. The proposed junction detector is applicable to both historical documents and modern handwritings, and can be used as well for junction retrieval. & 2015 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "c678ea5e9bc8852ec80a8315a004c7f0",
"text": "Educators, researchers, and policy makers have advocated student involvement for some time as an essential aspect of meaningful learning. In the past twenty years engineering educators have implemented several means of better engaging their undergraduate students, including active and cooperative learning, learning communities, service learning, cooperative education, inquiry and problem-based learning, and team projects. This paper focuses on classroom-based pedagogies of engagement, particularly cooperative and problem-based learning. It includes a brief history, theoretical roots, research support, summary of practices, and suggestions for redesigning engineering classes and programs to include more student engagement. The paper also lays out the research ahead for advancing pedagogies aimed at more fully enhancing students’ involvement in their learning.",
"title": ""
},
{
"docid": "290b56471b64e150e40211f7a51c1237",
"text": "Industrial robots are flexible machines that can be equipped with various sensors and tools to perform complex tasks. However, current robot programming languages are reaching their limits. They are not flexible and powerful enough to master the challenges posed by the intended future application areas. In the research project SoftRobot, a consortium of science and industry partners developed a software architecture that enables object-oriented software development for industrial robot systems using general-purpose programming languages. The requirements of current and future applications of industrial robots have been analysed and are reflected in the developed architecture. In this paper, an overview is given about this architecture as well as the goals that guided its development. A special focus is put on the design of the object-oriented Robotics API, which serves as a framework for developing complex robotic applications. It allows specifying real-time critical operations of robots and tools, including advanced concepts like sensor-based motions and multi-robot synchronization. The power and usefulness of the architecture is illustrated by several application examples. Its extensibility and reusability is evaluated and a comparison to other robotics frameworks is drawn.",
"title": ""
},
{
"docid": "9cb567317559ada8baec5b6a611e68d0",
"text": "Fungal bioactive polysaccharides deriving mainly from the Basidiomycetes family (and some from the Ascomycetes) and medicinal mushrooms have been well known and widely used in far Asia as part of traditional diet and medicine, and in the last decades have been the core of intense research for the understanding and the utilization of their medicinal properties in naturally produced pharmaceuticals. In fact, some of these biopolymers (mainly β-glucans or heteropolysaccharides) have already made their way to the market as antitumor, immunostimulating or prophylactic drugs. The fact that many of these biopolymers are produced by edible mushrooms makes them also very good candidates for the formulation of novel functional foods and nutraceuticals without any serious safety concerns, in order to make use of their immunomodulating, anticancer, antimicrobial, hypocholesterolemic, hypoglycemic and health-promoting properties. This article summarizes the most important properties and applications of bioactive fungal polysaccharides and discusses the latest developments on the utilization of these biopolymers in human nutrition.",
"title": ""
},
{
"docid": "8ee3d3200ed95cad5ff4ed77c08bb608",
"text": "We present a rare case of a non-fatal impalement injury of the brain. A 13-year-old boy was found in his classroom unconsciously lying on floor. His classmates reported that they had been playing, and throwing building bricks, when suddenly the boy collapsed. The emergency physician did not find significant injuries. Upon admission to a hospital, CT imaging revealed a \"blood path\" through the brain. After clinical forensic examination, an impalement injury was diagnosed, with the entry wound just below the left eyebrow. Eventually, the police presented a variety of pointers that were suspected to have caused the injury. Forensic trace analysis revealed human blood on one of the pointers, and subsequent STR analysis linked the blood to the injured boy. Confronted with the results of the forensic examination, the classmates admitted that they had been playing \"sword fights\" using the pointers, and that the boy had been hit during the game. The case illustrates the difficulties of diagnosing impalement injuries, and identifying the exact cause of the injury.",
"title": ""
},
{
"docid": "52b46bda93c5d426d59132110d78830c",
"text": "We introduce in a unitary way the paradigm of radiofrequency identification (RFID) merged with the technology of Unmanned Aerial Vehicles (UAV) giving rise to RFIDrone devices. Such family comprises the READER-Drone, which is a suitable UAV integrated with an autonomous RFID reader to act as mobile scanner of the environment, and the TAG-Drone, a UAV only equipped with an RFID sensor tag that hence becomes a mobile and automatically re-positioned sensor. We shows some handy electromagnetic models to identify the upper-bound communication performance of RFIDrone in close proximity of a scattering surface and we resume the results of some preliminary open-air experimentation corroborating the theoretical analysis.",
"title": ""
},
{
"docid": "6fa5a58e0f0af633f56418fb4b4808e9",
"text": "We report a low-temperature process for covalent bonding of thermal SiO2 to plasma-enhanced chemical vapor deposited (PECVD) SiO2 for Si-compound semiconductor integration. A record-thin interfacial oxide layer of 60 nm demonstrates sufficient capability for gas byproduct diffusion and absorption, leading to a high surface energy of 2.65 J/m after a 2-h 300 C anneal. O2 plasma treatment and surface chemistry optimization in dilute hydrofluoric (HF) solution and NH4OH vapor efficiently suppress the small-size interfacial void density down to 2 voids/cm, dramatically increasing the wafer-bonded device yield. Bonding-induced strain, as determined by x-ray diffraction measurements, is negligible. The demonstration of a 50 mm InP epitaxial layer transferred to a silicon-on-insulator (SOI) substrate shows the promise of the method for wafer-scale applications.",
"title": ""
},
{
"docid": "301bc00e99607569dcba6317ebb2f10d",
"text": "Bandwidth and gain enhancement of microstrip patch antennas (MPAs) is proposed using reflective metasurface (RMS) as a superstrate. Two different types of the RMS, namelythe double split-ring resonator (DSR) and double closed-ring resonator (DCR) are separately investigated. The two antenna prototypes were manufactured, measured and compared. The experimental results confirm that the RMS loaded MPAs achieve high-gain as well as bandwidth improvement. The desinged antenna using the RMS as a superstrate has a high-gain of over 9.0 dBi and a wide impedance bandwidth of over 13%. The RMS is also utilized to achieve a thin antenna with a cavity height of 6 mm, which is equivalent to λ/21 at the center frequency of 2.45 GHz. At the same time, the cross polarization level and front-to-back ratio of these antennas are also examined. key words: wideband, high-gain, metamaterial, Fabry-Perot cavity (FPC), frequency selective surface (FSS)",
"title": ""
},
{
"docid": "a3c011d846fed4f910cd3b112767ccc1",
"text": "Tooth morphometry is known to be influenced by cultural, environmental and racial factors. Tooth size standards can be used in age and sex determination. One hundred models (50 males & 50 females) of normal occlusion were evaluated and significant correlations (p<0.001) were found to exist between the combined maxillary incisor widths and the maxillary intermolar and interpremolar arch widths. The study establishes the morphometric criterion for premolar and molar indices and quantifies the existence of a statistically significant sexual dimorphism in arch widths (p<0.02). INTRODUCTION Teeth are an excellent material in living and non-living populations for anthropological, genetic, odontologic and forensic investigations 1 .Their morphometry is known to be influenced by cultural, environmental and racial factors. The variations in tooth form are a common occurrence & these can be studied by measurements. Out of the two proportionswidth and length, the former is considered to be more important 2 . Tooth size standards can be used in age and sex determination 3 . Whenever it is possible to predict the sex, identification is simplified because then only missing persons of one sex need to be considered. In this sense identification of sex takes precedence over age 4 . Various features like tooth morphology and crown size are characteristic for males and females 5 .The present study on the maxillary arch takes into account the premolar arch width, molar arch width and the combined width of the maxillary central incisors in both the sexes. Pont's established constant ratio's between tooth sizes and arch widths in French population which came to be known as premolar and molar indices 6 .In the ideal dental arch he concluded that the ratio of combined incisor width to transverse arch width was .80 in the premolar area and .64 in the molar area. There has been a recent resurgence of interest in the clinical use of premolar and molar indices for establishing dental arch development objectives 7 . The present study was conducted to ascertain whether or not Pont's Index can be used reliably on north Indians and to establish the norms for the same. MATERIAL AND METHODS SELECTION CRITERIA One hundred subjects, fifty males and fifty females in the age group of 17-21 years were selected for the study as attrition is considered to be minimal for this age group. The study was conducted on the students of Sudha Rustagi College of Dental Sciences & Research, Faridabad, Haryana. INCLUSION CRITERIA Healthy state of gingival and peridontium.",
"title": ""
},
{
"docid": "d229c679dcd4fa3dd84c6040b95fc99c",
"text": "This paper reviews the supervised learning versions of the no-free-lunch theorems in a simpli ed form. It also discusses the signi cance of those theorems, and their relation to other aspects of supervised learning.",
"title": ""
},
{
"docid": "771834bc4bfe8231fe0158ec43948bae",
"text": "Semantic image segmentation has recently witnessed considerable progress by training deep convolutional neural networks (CNNs). The core issue of this technique is the limited capacity of CNNs to depict visual objects. Existing approaches tend to utilize approximate inference in a discrete domain or additional aides and do not have a global optimum guarantee. We propose the use of the multi-label manifold ranking (MR) method in solving the linear objective energy function in a continuous domain to delineate visual objects and solve these problems. We present a novel embedded single stream optimization method based on the MR model to avoid approximations without sacrificing expressive power. In addition, we propose a novel network, which we refer to as dual multi-scale manifold ranking (DMSMR) network, that combines the dilated, multi-scale strategies with the single stream MR optimization method in the deep learning architecture to further improve the performance. Experiments on high resolution images, including close-range and remote sensing datasets, demonstrate that the proposed approach can achieve competitive accuracy without additional aides in an end-to-end manner.",
"title": ""
},
{
"docid": "0520c57f2cd13ce423e656d89c7f3cc0",
"text": "The term ‘‘urban stream syndrome’’ describes the consistently observed ecological degradation of streams draining urban land. This paper reviews recent literature to describe symptoms of the syndrome, explores mechanisms driving the syndrome, and identifies appropriate goals and methods for ecological restoration of urban streams. Symptoms of the urban stream syndrome include a flashier hydrograph, elevated concentrations of nutrients and contaminants, altered channel morphology, and reduced biotic richness, with increased dominance of tolerant species. More research is needed before generalizations can be made about urban effects on stream ecosystem processes, but reduced nutrient uptake has been consistently reported. The mechanisms driving the syndrome are complex and interactive, but most impacts can be ascribed to a few major large-scale sources, primarily urban stormwater runoff delivered to streams by hydraulically efficient drainage systems. Other stressors, such as combined or sanitary sewer overflows, wastewater treatment plant effluents, and legacy pollutants (long-lived pollutants from earlier land uses) can obscure the effects of stormwater runoff. Most research on urban impacts to streams has concentrated on correlations between instream ecological metrics and total catchment imperviousness. Recent research shows that some of the variance in such relationships can be explained by the distance between the stream reach and urban land, or by the hydraulic efficiency of stormwater drainage. The mechanisms behind such patterns require experimentation at the catchment scale to identify the best management approaches to conservation and restoration of streams in urban catchments. Remediation of stormwater impacts is most likely to be achieved through widespread application of innovative approaches to drainage design. Because humans dominate urban ecosystems, research on urban stream ecology will require a broadening of stream ecological research to integrate with social, behavioral, and economic research.",
"title": ""
},
{
"docid": "ef55f11664a16933166e55548598b939",
"text": "In the paper, we present a new method for classifying documents with rigid geometry. Our approach is based on the fast and robust Viola-Jones object detection algorithm. The advantages of our proposed method are high speed, the possibility of automatic model construction using a training set, and processing of raw source images without any pre-processing steps such as draft recognition, layout analysis or binarisation. Furthermore, our algorithm allows not only to classify documents, but also to detect the placement and orientation of documents within an image.",
"title": ""
},
{
"docid": "99a728e8b9a351734db9b850fe79bd61",
"text": "Predicting anchor links across social networks has important implications to an array of applications, including cross-network information diffusion and cross-domain recommendation. One challenging problem is: whether and to what extent we can address the anchor link prediction problem, if only structural information of networks is available. Most existing methods, unsupervised or supervised, directly work on networks themselves rather than on their intrinsic structural regularities, and thus their effectiveness is sensitive to the high dimension and sparsity of networks. To offer a robust method, we propose a novel supervised model, called PALE, which employs network embedding with awareness of observed anchor links as supervised information to capture the major and specific structural regularities and further learns a stable cross-network mapping for predicting anchor links. Through extensive experiments on two realistic datasets, we demonstrate that PALE significantly outperforms the state-of-the-art methods.",
"title": ""
},
{
"docid": "7e93c570c957a24ff4eb2132d691a8f1",
"text": "Most of video-surveillance based applications use a foreground extraction algorithm to detect interest objects from videos provided by static cameras. This paper presents a benchmark dataset and evaluation process built from both synthetic and real videos, used in the BMC workshop (Background Models Challenge). This dataset focuses on outdoor situations with weather variations such as wind, sun or rain. Moreover, we propose some evaluation criteria and an associated free software to compute them from several challenging testing videos. The evaluation process has been applied for several state of the art algorithms like gaussian mixture models or codebooks.",
"title": ""
},
{
"docid": "b7969a0c307b51dc563a165f267f1c8f",
"text": "This study examined the overlap in teen dating violence and bullying perpetration and victimization, with regard to acts of physical violence, psychological abuse, and-for the first time ever-digitally perpetrated cyber abuse. A total of 5,647 youth (51% female, 74% White) from 10 schools participated in a cross-sectional anonymous survey. Results indicated substantial co-occurrence of all types of teen dating violence and bullying. Youth who perpetrated and/or experienced physical, psychological, and cyber bullying were likely to have also perpetrated/experienced physical and sexual dating violence, and psychological and cyber dating abuse.",
"title": ""
}
] |
scidocsrr
|
a97966858719eff8599ad5fbb8b7286a
|
LineNet: a Zoomable CNN for Crowdsourced High Definition Maps Modeling in Urban Environments
|
[
{
"docid": "830f36268b9220d378d9aafaf52f5144",
"text": "Deep Convolutional Neural Networks (DCNNs) achieve invariance to domain transformations (deformations) by using multiple `max-pooling' (MP) layers. In this work we show that alternative methods of modeling deformations can improve the accuracy and efficiency of DCNNs. First, we introduce epitomic convolution as an alternative to the common convolution-MP cascade of DCNNs, that comes with the same computational cost but favorable learning properties. Second, we introduce a Multiple Instance Learning algorithm to accommodate global translation and scaling in image classification, yielding an efficient algorithm that trains and tests a DCNN in a consistent manner. Third we develop a DCNN sliding window detector that explicitly, but efficiently, searches over the object's position, scale, and aspect ratio. We provide competitive image classification and localization results on the ImageNet dataset and object detection results on Pascal VOC2007.",
"title": ""
},
{
"docid": "d01fe3897f0f09fc023d943ece518e6e",
"text": "In this paper, we propose an efficient lane detection algorithm for lane departure detection; this algorithm is suitable for low computing power systems like automobile black boxes. First, we extract candidate points, which are support points, to extract a hypotheses as two lines. In this step, Haar-like features are used, and this enables us to use an integral image to remove computational redundancy. Second, our algorithm verifies the hypothesis using defined rules. These rules are based on the assumption that the camera is installed at the center of the vehicle. Finally, if a lane is detected, then a lane departure detection step is performed. As a result, our algorithm has achieved 90.16% detection rate; the processing time is approximately 0.12 milliseconds per frame without any parallel computing.",
"title": ""
},
{
"docid": "b9b194410824bd769b708baef7953aaf",
"text": "Road and lane detection play an important role in autonomous driving and commercial driver-assistance systems. Vision-based road detection is an essential step towards autonomous driving, yet a challenging task due to illumination and complexity of the visual scenery. Urban scenes may present additional challenges such as intersections, multi-lane scenarios, or clutter due to heavy traffic. This paper presents an integrative approach to ego-lane detection that aims to be as simple as possible to enable real-time computation while being able to adapt to a variety of urban and rural traffic scenarios. The approach at hand combines and extends a road segmentation method in an illumination-invariant color image, lane markings detection using a ridge operator, and road geometry estimation using RANdom SAmple Consensus (RANSAC). Employing the segmented road region as a prior for lane markings extraction significantly improves the execution time and success rate of the RANSAC algorithm, and makes the detection of weakly pronounced ridge structures computationally tractable, thus enabling ego-lane detection even in the absence of lane markings. Segmentation performance is shown to increase when moving from a color-based to a histogram correlation-based model. The power and robustness of this algorithm has been demonstrated in a car simulation system as well as in the challenging KITTI data base of real-world urban traffic scenarios.",
"title": ""
},
{
"docid": "5b4e2380172b90c536eb974268a930b6",
"text": "This paper addresses the problem of road scene segmentation in conventional RGB images by exploiting recent advances in semantic segmentation via convolutional neural networks (CNNs). Segmentation networks are very large and do not currently run at interactive frame rates. To make this technique applicable to robotics we propose several architecture refinements that provide the best trade-off between segmentation quality and runtime. This is achieved by a new mapping between classes and filters at the expansion side of the network. The network is trained end-to-end and yields precise road/lane predictions at the original input resolution in roughly 50ms. Compared to the state of the art, the network achieves top accuracies on the KITTI dataset for road and lane segmentation while providing a 20× speed-up. We demonstrate that the improved efficiency is not due to the road segmentation task. Also on segmentation datasets with larger scene complexity, the accuracy does not suffer from the large speed-up.",
"title": ""
}
] |
[
{
"docid": "f5d9d701bcc3b629dc90db57448c443c",
"text": "IoT is a driving force for the next generation of cyber-physical manufacturing systems. The construction and operation of these systems is a big challenge. In this paper, a framework that exploits model driven engineering to address the increasing complexity in this kind of systems is presented. The framework utilizes the model driven engineering paradigm to define a domain specific development environment that allows the control engineer, a) to transform the mechanical units of the plant to Industrial Automation Things (IAT), i.e., to IoT-compliant manufacturing cyber-physical components, and, b) to specify the cyber components, which implement the plant processes, as physical mashups, i.e., compositions of plant services provided by IATs. The UML4IoT profile is extended to address the requirements of the framework. The approach was successfully applied on a laboratory case study to demonstrate its effectiveness in terms of flexibility and responsiveness.",
"title": ""
},
{
"docid": "5e9e62b69b0e98e81f5eec77bbcc0f73",
"text": "The Conners' Parent Rating Scale (CPRS) is a popular research and clinical tool for obtaining parental reports of childhood behavior problems. The present study introduces a revised CPRS (CPRS-R) which has norms derived from a large, representative sample of North American children, uses confirmatory factor analysis to develop a definitive factor structure, and has an updated item content to reflect recent knowledge and developments concerning childhood behavior problems. Exploratory and confirmatory factor-analytic results revealed a seven-factor model including the following factors: Cognitive Problems, Oppositional, Hyperactivity-Impulsivity, Anxious-Shy, Perfectionism, Social Problems, and Psychosomatic. The psychometric properties of the revised scale appear adequate as demonstrated by good internal reliability coefficients, high test-retest reliability, and effective discriminatory power. Advantages of the CPRS-R include a corresponding factor structure with the Conners' Teacher Rating Scale-Revised and comprehensive symptom coverage for attention deficit hyperactivity disorder (ADHD) and related disorders. Factor congruence with the original CPRS as well as similarities with other parent rating scales are discussed.",
"title": ""
},
{
"docid": "a28c91e46099d49f45360501969d6514",
"text": "Mobile forensics is an exciting new field of research. An increasing number of Open source and commercial digital forensics tools are focusing on less time during digital forensic examination. There is a major issue affecting some mobile forensic tools that allow the tools to spend much time during the forensic examination. It is caused by implementation of poor file searching algorithms by some forensic tool developers. This research is focusing on reducing the time taken to search for a file by proposing a novel, multi-pattern signature matching algorithm called M-Aho-Corasick which is adapted from the original Aho-Corasick algorithm. Experiments are conducted on five different datasets which one of the data sets is obtained from Digital Forensic Research Workshop (DFRWS 2010). Comparisons are made between M-Aho-Corasick using M_Triage with Dec0de, Lifter, XRY, and Xaver. The result shows that M-Aho-Corasick using M_Triage has reduced the searching time by 75% as compared to Dec0de, 36% as compared to Lifter, 28% as compared to XRY, and 71% as compared to Xaver. Thus, M-Aho-Corasick using M_Triage tool is more efficient than Dec0de, Lifter, XRY, and Xaver in avoiding the extraction of high number of false positive results. Keywords—mobile forensics; Images; Videos; M-AhoCorasick; (File Signature Pattern Matching)",
"title": ""
},
{
"docid": "bc06b540765ddf762dc8cb72cae7ad41",
"text": "We present a method to produce free, enormous corpora to train taggers for Named Entity Recognition (NER), the task of identifying and classifying names in text, often solved by statistical learning systems. Our approach utilises the text of Wikipedia, a free online encyclopedia, transforming links between Wikipedia articles into entity annotations. Having derived a baseline corpus, we found that altering Wikipedia’s links and identifying classes of capitalised non-entity terms would enable the corpus to conform more closely to gold-standard annotations, increasing performance by up to 32% F score. The evaluation of our method is novel since the training corpus is not usually a variable in NER experimentation. We therefore develop a number of methods for analysing and comparing training corpora. Gold-standard training corpora for NER perform poorly (F score up to 32% lower) when evaluated on test data from a different gold-standard corpus. Our Wikipedia-derived data can outperform manually-annotated corpora on this cross-corpus evaluation task by up to 7% on held-out test data. These experimental results show that Wikipedia is viable as a source of automatically-annotated training corpora, which have wide domain coverage applicable to a broad range of NLP applications.",
"title": ""
},
{
"docid": "5abcd733dce7e8ced901830cbcaad56b",
"text": "Stored-value cards, or prepaid cards, are increasingly popular. Like credit cards, their use is vulnerable to fraud, costing merchants and card processors millions of dollars. Prior techniques to automate fraud detection rely on a priori rules or specialized learned models associated with the customer. Mostly, these techniques do not consider fraud sequences or changing behavior, which can lead to false alarms. This study demonstrates how a transaction model can be dynamically created and updated, and fraud can be automatically detected for prepaid cards. A card processing company creates models of the store terminals rather than the customers, in part, because of the anonymous nature of prepaid cards. The technique automatically creates, updates, and compares hidden Markov models (HMM) of merchant terminals. We present fraud detection and experiments on real transactional data, showing the efficiency and effectiveness of the approach. In the fraud test cases, derived from known fraud cases, the technique has a good F-score. The technique can detect fraud in real-time for merchants, as card transactions are processed by a modern transaction processing system. © 2017 Published by Elsevier Ltd.",
"title": ""
},
{
"docid": "c406d734f32cc4b88648c037d9d10e46",
"text": "In this paper, we review the state-of-the-art technologies for driver inattention monitoring, which can be classified into the following two main categories: 1) distraction and 2) fatigue. Driver inattention is a major factor in most traffic accidents. Research and development has actively been carried out for decades, with the goal of precisely determining the drivers' state of mind. In this paper, we summarize these approaches by dividing them into the following five different types of measures: 1) subjective report measures; 2) driver biological measures; 3) driver physical measures; 4) driving performance measures; and 5) hybrid measures. Among these approaches, subjective report measures and driver biological measures are not suitable under real driving conditions but could serve as some rough ground-truth indicators. The hybrid measures are believed to give more reliable solutions compared with single driver physical measures or driving performance measures, because the hybrid measures minimize the number of false alarms and maintain a high recognition rate, which promote the acceptance of the system. We also discuss some nonlinear modeling techniques commonly used in the literature.",
"title": ""
},
{
"docid": "c65050bb98a071fa8b60fa262536a476",
"text": "Proliferative periostitis is a pathologic lesion that displays an osteo-productive and proliferative inflammatory response of the periosteum to infection or other irritation. This lesion is a form of chronic osteomyelitis that is often asymptomatic, occurring primarily in children, and found only in the mandible. The lesion can be odontogenic or non-odontogenic in nature. A 12 year-old boy presented with an unusual odontogenic proliferative periostitis that originated from the lower left first molar, however, the radiographic radiolucent area and proliferative response were discovered at the apices of the lower left second molar. The periostitis was treated by single-visit non-surgical endodontic treatment of lower left first molar without antibiotic therapy. The patient has been recalled regularly; the lesion had significantly reduced in size 3-months postoperatively. Extraoral symmetry occurred at approximately one year recall. At the last visit, 2 years after initial treatment, no problems or signs of complications have occurred; the radiographic examination revealed complete resolution of the apical lesion and apical closure of the lower left second molar. Odontogenic proliferative periostitis can be observed at the adjacent normal tooth. Besides, this case demonstrates that non-surgical endodontics is a viable treatment option for management of odontogenic proliferative periostitis.",
"title": ""
},
{
"docid": "ba39f3a2b5ed9af6cdf4530176039e05",
"text": "Survival analysis can be applied to build models fo r time to default on debt. In this paper we report an application of survival analysis to model default o n a large data set of credit card accounts. We exp lore the hypothesis that probability of default is affec ted by general conditions in the economy over time. These macroeconomic variables cannot readily be inc luded in logistic regression models. However, survival analysis provides a framework for their in clusion as time-varying covariates. Various macroeconomic variables, such as interest rate and unemployment rate, are included in the analysis. We show that inclusion of these indicators improves model fit and affects probability of default yielding a modest improvement in predictions of def ault on an independent test set.",
"title": ""
},
{
"docid": "c4e80fd8e2c5b1795c016c9542f8f33e",
"text": "Duckweeds, plants of the Lemnaceae family, have the distinction of being the smallest angiosperms in the world with the fastest doubling time. Together with its naturally ability to thrive on abundant anthropogenic wastewater, these plants hold tremendous potential to helping solve critical water, climate and fuel issues facing our planet this century. With the conviction that rapid deployment and optimization of the duckweed platform for biomass production will depend on close integration between basic and applied research of these aquatic plants, the first International Conference on Duckweed Research and Applications (ICDRA) was organized and took place in Chengdu, China, from October 7th to 10th of 2011. Co-organized with Rutgers University of New Jersey (USA), this Conference attracted participants from Germany, Denmark, Japan, Australia, in addition to those from the US and China. The following are concise summaries of the various oral presentations and final discussions over the 2.5 day conference that serve to highlight current research interests and applied research that are paving the way for the imminent deployment of this novel aquatic crop. We believe the sharing of this information with the broad Plant Biology community is an important step toward the renaissance of this excellent plant model that will have important impact on our quest for sustainable development of the world.",
"title": ""
},
{
"docid": "519b0dbeb1193a14a06ba212790f49d4",
"text": "In recent years, sign language recognition has attracted much attention in computer vision . A sign language is a means of conveying the message by using hand, arm, body, and face to convey thoughts and meanings. Like spoken languages, sign languages emerge and evolve naturally within hearing-impaired communities. However, sign languages are not universal. There is no internationally recognized and standardized sign language for all deaf people. As is the case in spoken language, every country has got its own sign language with high degree of grammatical variations. The sign language used in India is commonly known as Indian Sign Language (henceforth called ISL).",
"title": ""
},
{
"docid": "88486271f9e455bdba5d02c99dcc19c3",
"text": "TextCNN, the convolutional neural network for text, is a useful deep learning algorithm for sentence classification tasks such as sentiment analysis and question classification[2]. However, neural networks have long been known as black boxes because interpreting them is a challenging task. Researchers have developed several tools to understand a CNN for image classification by deep visualization[6], but research about deep TextCNNs is still insufficient. In this paper, we are trying to understand what a TextCNN learns on two classical NLP datasets. Our work focuses on functions of different convolutional kernels and correlations between convolutional kernels.",
"title": ""
},
{
"docid": "c24550119d4251d6d7ce1219b8aa0ee4",
"text": "This article considers the delivery of efficient and effective dental services for patients whose disability and/or medical condition may not be obvious and which consequently can present a hidden challenge in the dental setting. Knowing that the patient has a particular condition, what its features are and how it impacts on dental treatment and oral health, and modifying treatment accordingly can minimise the risk of complications. The taking of a careful medical history that asks the right questions in a manner that encourages disclosure is key to highlighting hidden hazards and this article offers guidance for treating those patients who have epilepsy, latex sensitivity, acquired or inherited bleeding disorders and patients taking oral or intravenous bisphosphonates.",
"title": ""
},
{
"docid": "207d3e95d3f04cafa417478ed9133fcc",
"text": "Urban growth is a worldwide phenomenon but the rate of urbanization is very fast in developing country like Egypt. It is mainly driven by unorganized expansion, increased immigration, rapidly increasing population. In this context, land use and land cover change are considered one of the central components in current strategies for managing natural resources and monitoring environmental changes. In Egypt, urban growth has brought serious losses of agricultural land and water bodies. Urban growth is responsible for a variety of urban environmental issues like decreased air quality, increased runoff and subsequent flooding, increased local temperature, deterioration of water quality, etc. Egypt possessed a number of fast growing cities. Mansoura and Talkha cities in Daqahlia governorate are expanding rapidly with varying growth rates and patterns. In this context, geospatial technologies and remote sensing methodology provide essential tools which can be applied in the analysis of land use change detection. This paper is an attempt to assess the land use change detection by using GIS in Mansoura and Talkha from 1985 to 2010. Change detection analysis shows that built-up area has been increased from 28 to 255 km by more than 30% and agricultural land reduced by 33%. Future prediction is done by using the Markov chain analysis. Information on urban growth, land use and land cover change study is very useful to local government and urban planners for the betterment of future plans of sustainable development of the city. 2015 The Gulf Organisation for Research and Development. Production and hosting by Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "5cf396e42e8708d768235f95bc8f227f",
"text": "This thesis examines how artificial neural networks can benefit a large vocabulary, speaker independent, continuous speech recognition system. Currently, most speech recognition systems are based on hidden Markov models (HMMs), a statistical framework that supports both acoustic and temporal modeling. Despite their state-of-the-art performance, HMMs make a number of suboptimal modeling assumptions that limit their potential effectiveness. Neural networks avoid many of these assumptions, while they can also learn complex functions, generalize effectively, tolerate noise, and support parallelism. While neural networks can readily be applied to acoustic modeling, it is not yet clear how they can be used for temporal modeling. Therefore, we explore a class of systems called NN-HMM hybrids, in which neural networks perform acoustic modeling, and HMMs perform temporal modeling. We argue that a NN-HMM hybrid has several theoretical advantages over a pure HMM system, including better acoustic modeling accuracy, better context sensitivity, more natural discrimination, and a more economical use of parameters. These advantages are confirmed experimentally by a NN-HMM hybrid that we developed, based on context-independent phoneme models, that achieved 90.5% word accuracy on the Resource Management database, in contrast to only 86.0% accuracy achieved by a pure HMM under similar conditions. In the course of developing this system, we explored two different ways to use neural networks for acoustic modeling: prediction and classification. We found that predictive networks yield poor results because of a lack of discrimination, but classification networks gave excellent results. We verified that, in accordance with theory, the output activations of a classification network form highly accurate estimates of the posterior probabilities P(class|input), and we showed how these can easily be converted to likelihoods P(input|class) for standard HMM recognition algorithms. Finally, this thesis reports how we optimized the accuracy of our system with many natural techniques, such as expanding the input window size, normalizing the inputs, increasing the number of hidden units, converting the network’s output activations to log likelihoods, optimizing the learning rate schedule by automatic search, backpropagating error from word level outputs, and using gender dependent networks.",
"title": ""
},
{
"docid": "22d7464aaf0ad46e3bd04a30312ee659",
"text": "Cities are drivers of economic development, providing infrastructure to support countless activities and services. Today, the world’s 750 biggest cities account for more than 57% of the global GDP and this number is expected to increase to 61% by 2030. More than half of the world’s population lives in cities, or urban areas, and this share will continue to growth. Rapid urban growth has posed both challenges and opportunities for city planners, not in the least when it comes to the design of transportation and logistic systems for freight. But urbanization also fosters innovation and sharing, which have led to new models for organizing movement of goods within the city. In this chapter, we highlight one of these new models: Crowd Logistics. We define the characterizing features of crowd logistics, review applications of crowd-based services within urban environments, and discuss research opportunities in the area of crowd logistics.",
"title": ""
},
{
"docid": "03ce79214eb7e7f269464574b1e5c208",
"text": "Variable draft is shown to be an essential feature for a research and survey SWATH ship large enough for unrestricted service worldwide. An ongoing semisubmerged (variable draft) SWATH can be designed for access to shallow harbors. Speed at transit (shallow) draft can be comparable to monohulls of the same power while assuring equal or better seakeeping characteristics. Seakeeping with the ship at deeper drafts can be superior to an equivalent SWATH that is designed for all operations at a single draft. The lower hulls of the semisubmerged SWATH ship can be devoid of fins. A practical target for interior clear spacing between the lower hulls is about 50 feet. Access to the sea surface for equipment can be provided astern, over the side, or from within a centerwell amidships. One of the lower hulls can be optimized to carry acoustic sounding equipment. A design is presented in this paper for a semisubmerged ship with a trial speed in excess of 15 knots, a scientific mission payload of 300 tons, and accommodations for 50 personnel. 1. SEMISUBMERGED SWATH TECHNOLOGY A single draft for the full range of operating conditions is a comon feature of typical SWATH ship designs. This constant draft characteristic is found in the SWATH ships built by Mitsuil” , most notably the KAIY03, and the SWATH T-AGOS4 which is now under construction for the U.S. Navy. The constant draft design for ships of this size (about 3,500 tons displacement) poses two significant drawbacks. One is that the draft must be at least 25 feet to satisfy seakeeping requirements. This draft is restrictive for access to many harbors that would be useful for research and survey functions. The second is that hull and column (strut) hydrodynamics generally result in the SWATH being a larger ship and having greater power requirements than for an equivalent monohull. The ship size and hull configuration, together with the necessity for a. President, Blue Sea Corporation b. President, Alan C. McClure Associates, Inc. stabilizing fins, usually leads to a higher capital cost than for a rougher riding, but otherwise equivalent, monohull. The distinguishing feature of the semisubmerged SWATH ship is variable draft. Sufficient allowance for ballast transfer is made to enable the ship to vary its draft under all load conditions. The shallowest draft is well within usual harbor limits and gives the lower hulls a slight freeboard. It also permits transit in low to moderate sea conditions using less propulsion power than is needed by a constant draft SWATH. The semisubmerged SWATH gives more design flexibility to provide for deep draft conditions that strike a balance between operating requirements and seakeeping characteristics. Intermediate “storm” drafts can be selected that are a compromise between seakeeping, speed, and upper hull clearance to avoid slamming. A discussion of these and other tradeoffs in semisubmerged SWATH ship design for oceanographic applications is given in a paper by Gaul and McClure’ . A more general discussion of design tradeoffs is given in a later paper6. The semisubmerged SWATH technology gives rise to some notable contrasts with constant draft SWATH ships. For any propulsion power applied, the semisubmerged SWATH has a range of speed that depends on draft. Highest speeds are obtained at minimum (transit) draft. Because the lower hull freeboard is small at transit draft, seakeeping at service speed can be made equal to or better than an equivalent monohull. 
The ship is designed for maximum speed at transit draft, so the lower hull form is more akin to a surface craft than a submarine. This allows use of a nearly rectangular cross section for the lower hulls, which provides damping of vertical motion. For moderate speeds at deeper drafts with the highly damped lower hull form, the ship need not be equipped with stabilizing fins. Since maximum speed is achieved with the columns (struts) out of the water, it is practical to use two columns, rather than one, on each lower hull. The four column configuration at deep drafts minimizes the variation of ship motion response with change in course relative to surface wave direction. The width of the ship and the lack of appendages on the lower hulls increase the utility of a large underside deck opening (moonpool) amidship. The basic Semisubmerged SWATH Research and Survey Ship design has evolved from requirements first stated by the Institute for Geophysics of the University of Texas (UTIG) in 1984. Blue Sea McClure provided the only SWATH configuration in a set of five conceptual designs procured competitively by the University. Woods Hole Oceanographic Institution, on behalf of the University-National Oceanographic Laboratory System, subsequently contracted for a revision of the UTIG design to meet requirements for an oceanographic research ship. The design was further refined to meet requirements posed by the U.S. Navy for an oceanographic research ship. The intent of this paper is to use this generic design to illustrate the main features of semisubmerged SWATH ships.",
"title": ""
},
{
"docid": "c1a6b9df700226212dca8857e7001896",
"text": "Knowing the location of a social media user and their posts is important for various purposes, such as the recommendation of location-based items/services, and locality detection of crisis/disasters. This paper describes our submission to the shared task “Geolocation Prediction in Twitter” of the 2nd Workshop on Noisy User-generated Text. In this shared task, we propose an algorithm to predict the location of Twitter users and tweets using a multinomial Naive Bayes classifier trained on Location Indicative Words and various textual features (such as city/country names, #hashtags and @mentions). We compared our approach against various baselines based on Location Indicative Words, city/country names, #hashtags and @mentions as individual feature sets, and experimental results show that our approach outperforms these baselines in terms of classification accuracy, mean and median error distance.",
"title": ""
},
{
"docid": "3dbedb4539ac6438e9befbad366d1220",
"text": "The main focus of this paper is to propose integration of dynamic and multiobjective algorithms for graph clustering in dynamic environments under multiple objectives. The primary application is to multiobjective clustering in social networks which change over time. Social networks, typically represented by graphs, contain information about the relations (or interactions) among online materials (or people). A typical social network tends to expand over time, with newly added nodes and edges being incorporated into the existing graph. We reflect these characteristics of social networks based on real-world data, and propose a suitable dynamic multiobjective evolutionary algorithm. Several variants of the algorithm are proposed and compared. Since social networks change continuously, the immigrant schemes effectively used in previous dynamic optimisation give useful ideas for new algorithms. An adaptive integration of multiobjective evolutionary algorithms outperformed other algorithms in dynamic social networks.",
"title": ""
},
{
"docid": "653b44b98c78bed426c0e5630145c2ba",
"text": "In the field of non-monotonic logics, the notion of rational closure is acknowledged as a landmark, and we are going to see that such a construction can be characterised by means of a simple method in the context of propositional logic. We then propose an application of our approach to rational closure in the field of Description Logics, an important knowledge representation formalism, and provide a simple decision procedure for this case.",
"title": ""
},
{
"docid": "ab68f5a8b6a48423c8d8d01758cbd47d",
"text": "Typical recommender systems use the root mean squared error (RMSE) between the predicted and actual ratings as the evaluation metric. We argue that RMSE is not an optimal choice for this task, especially when we will only recommend a few (top) items to any user. Instead, we propose using a ranking metric, namely normalized discounted cumulative gain (NDCG), as a better evaluation metric for this task. Borrowing ideas from the learning to rank community for web search, we propose novel models which approximately optimize NDCG for the recommendation task. Our models are essentially variations on matrix factorization models where we also additionally learn the features associated with the users and the items for the ranking task. Experimental results on a number of standard collaborative filtering data sets validate our claims. The results also show the accuracy and efficiency of our models and the benefits of learning features for ranking.",
"title": ""
}
] |
scidocsrr
|
9c573fb5fef95e93027b5e3f953883d9
|
Rumor source detection with multiple observations: fundamental limits and algorithms
|
[
{
"docid": "1b2cdbc2e87fccef66aff9e67347cc73",
"text": "We provide a systematic study of the problem of finding the source of a rumor in a network. We model rumor spreading in a network with the popular susceptible-infected (SI) model and then construct an estimator for the rumor source. This estimator is based upon a novel topological quantity which we term rumor centrality. We establish that this is a maximum likelihood (ML) estimator for a class of graphs. We find the following surprising threshold phenomenon: on trees which grow faster than a line, the estimator always has nontrivial detection probability, whereas on trees that grow like a line, the detection probability will go to 0 as the network grows. Simulations performed on synthetic networks such as the popular small-world and scale-free networks, and on real networks such as an internet AS network and the U.S. electric power grid network, show that the estimator either finds the source exactly or within a few hops of the true source across different network topologies. We compare rumor centrality to another common network centrality notion known as distance centrality. We prove that on trees, the rumor center and distance center are equivalent, but on general networks, they may differ. Indeed, simulations show that rumor centrality outperforms distance centrality in finding rumor sources in networks which are not tree-like.",
"title": ""
}
] |
[
{
"docid": "e8216c275a20be6706f5c2792bc6fd92",
"text": "Robust and reliable vehicle detection from images acquired by a moving vehicle is an important problem with numerous applications including driver assistance systems and self-guided vehicles. Our focus in this paper is on improving the performance of on-road vehicle detection by employing a set of Gabor filters specifically optimized for the task of vehicle detection. This is essentially a kind of feature selection, a critical issue when designing any pattern classification system. Specifically, we propose a systematic and general evolutionary Gabor filter optimization (EGFO) approach for optimizing the parameters of a set of Gabor filters in the context of vehicle detection. The objective is to build a set of filters that are capable of responding stronger to features present in vehicles than to nonvehicles, therefore improving class discrimination. The EGFO approach unifies filter design with filter selection by integrating genetic algorithms (GAs) with an incremental clustering approach. Filter design is performed using GAs, a global optimization approach that encodes the Gabor filter parameters in a chromosome and uses genetic operators to optimize them. Filter selection is performed by grouping filters having similar characteristics in the parameter space using an incremental clustering approach. This step eliminates redundant filters, yielding a more compact optimized set of filters. The resulting filters have been evaluated using an application-oriented fitness criterion based on support vector machines. We have tested the proposed framework on real data collected in Dearborn, MI, in summer and fall 2001, using Ford's proprietary low-light camera.",
"title": ""
},
{
"docid": "f0c0bbb0282d76da7146e05f4a371843",
"text": "We have proposed a claw pole type half-wave rectified variable field flux motor (CP-HVFM) with special self-excitation method. The claw pole rotor needs the 3D magnetic path core. This paper reports an analysis method with experimental BH and loss data of the iron powder core for FEM. And it shows a designed analysis model and characteristics such as torque, efficiency and loss calculation results.",
"title": ""
},
{
"docid": "2800046ff82a5bc43b42c1d2e2dc6777",
"text": "We develop a novel, fundamental and surprisingly simple randomized iterative method for solving consistent linear systems. Our method has six different but equivalent interpretations: sketch-and-project, constrain-and-approximate, random intersect, random linear solve, random update and random fixed point. By varying its two parameters—a positive definite matrix (defining geometry), and a random matrix (sampled in an i.i.d. fashion in each iteration)—we recover a comprehensive array of well known algorithms as special cases, including the randomized Kaczmarz method, randomized Newton method, randomized coordinate descent method and random Gaussian pursuit. We naturally also obtain variants of all these methods using blocks and importance sampling. However, our method allows for a much wider selection of these two parameters, which leads to a number of new specific methods. We prove exponential convergence of the expected norm of the error in a single theorem, from which existing complexity results for known variants can be obtained. However, we also give an exact formula for the evolution of the expected iterates, which allows us to give lower bounds on the convergence rate.",
"title": ""
},
{
"docid": "3df9bacf95281fc609ee7fd2d4724e91",
"text": "The deleterious effects of plastic debris on the marine environment were reviewed by bringing together most of the literature published so far on the topic. A large number of marine species is known to be harmed and/or killed by plastic debris, which could jeopardize their survival, especially since many are already endangered by other forms of anthropogenic activities. Marine animals are mostly affected through entanglement in and ingestion of plastic litter. Other less known threats include the use of plastic debris by \"invader\" species and the absorption of polychlorinated biphenyls from ingested plastics. Less conspicuous forms, such as plastic pellets and \"scrubbers\" are also hazardous. To address the problem of plastic debris in the oceans is a difficult task, and a variety of approaches are urgently required. Some of the ways to mitigate the problem are discussed.",
"title": ""
},
{
"docid": "914daf0fd51e135d6d964ecbe89a5b29",
"text": "Large-scale parallel programming environments and algorithms require efficient group-communication on computing systems with failing nodes. Existing reliable broadcast algorithms either cannot guarantee that all nodes are reached or are very expensive in terms of the number of messages and latency. This paper proposes Corrected-Gossip, a method that combines Monte Carlo style gossiping with a deterministic correction phase, to construct a Las Vegas style reliable broadcast that guarantees reaching all the nodes at low cost. We analyze the performance of this method both analytically and by simulations and show how it reduces the latency and network load compared to existing algorithms. Our method improves the latency by 20% and the network load by 53% compared to the fastest known algorithm on 4,096 nodes. We believe that the principle of corrected-gossip opens an avenue for many other reliable group communication operations.",
"title": ""
},
{
"docid": "52c160736ae0c82f3bdd9d4519fe320c",
"text": "OBJECT\nThere continues to be confusion over how best to preserve the branches of the facial nerve to the frontalis muscle when elevating a frontotemporal (pterional) scalp flap. The object of this study was to examine the full course of the branches of the facial nerve that must be preserved to maintain innervation of the frontalis muscle during elevation of a frontotemporal scalp flap.\n\n\nMETHODS\nDissection was performed to follow the temporal branches of facial nerves along their course in 5 adult, cadaveric heads (n = 10 extracranial facial nerves).\n\n\nRESULTS\nPreserving the nerves to the frontalis muscle requires an understanding of the course of the nerves in 3 areas. The first area is on the outer surface of the temporalis muscle lateral to the superior temporal line (STL) where the interfascial or subfascial approaches are applied, the second is in the area medial to the STL where subpericranial dissection is needed, and the third is along the STL. Preserving the nerves crossing the STL requires an understanding of the complex fascial relationships at this line. It is important to preserve the nerves crossing the lateral and medial parts of the exposure, and the continuity of the nerves as they pass across the STL. Prior descriptions have focused largely on the area superficial to the temporalis muscle lateral to the STL.\n\n\nCONCLUSIONS\nUsing the interfascial-subpericranial flap and the subfascial-subpericranial flap avoids opening the layer of loose areolar tissue between the temporal fascia and galea in the area lateral to the STL and between the galea and frontal pericranium in the area medial to the STL. It also preserves the continuity of the nerve crossing the STL. This technique allows for the preservation of the nerves to the frontalis muscle along their entire trajectory, from the uppermost part of the parotid gland to the frontalis muscle.",
"title": ""
},
{
"docid": "5e756f85b15812daf80221c8b9ae6a96",
"text": "PURPOSE\nRural-dwelling cancer survivors (CSs) are at risk for decrements in health and well-being due to decreased access to health care and support resources. This study compares the impact of cancer in rural- and urban-dwelling adult CSs living in 2 regions of the Pacific Northwest.\n\n\nMETHODS\nA convenience sample of posttreatment adult CSs (N = 132) completed the Impact of Cancer version 2 (IOCv2) and the Memorial Symptom Assessment Scale-short form. High and low scorers on the IOCv2 participated in an in-depth interview (n = 19).\n\n\nFINDINGS\nThe sample was predominantly middle-aged (mean age 58) and female (84%). Mean time since treatment completion was 6.7 years. Cancer diagnoses represented included breast (56%), gynecologic (9%), lymphoma (8%), head and neck (6%), and colorectal (5%). Comparisons across geographic regions show statistically significant differences in body concerns, worry, negative impact, and employment concerns. Rural-urban differences from interview data include access to health care, care coordination, connecting/community, thinking about death and dying, public/private journey, and advocacy.\n\n\nCONCLUSION\nThe insights into the differences and similarities between rural and urban CSs challenge the prevalent assumptions about rural-dwelling CSs and their risk for negative outcomes. A common theme across the study findings was community. Access to health care may not be the driver of the survivorship experience. Findings can influence health care providers and survivorship program development, building on the strengths of both rural and urban living and the engagement of the survivorship community.",
"title": ""
},
{
"docid": "5d7dced0ed875fed0f11440dc26fffd1",
"text": "Different from conventional mobile networks designed to optimize the transmission efficiency of one particular service (e.g., streaming voice/ video) primarily, the industry and academia are reaching an agreement that 5G mobile networks are projected to sustain manifold wireless requirements, including higher mobility, higher data rates, and lower latency. For this purpose, 3GPP has launched the standardization activity for the first phase 5G system in Release 15 named New Radio (NR). To fully understand this crucial technology, this article offers a comprehensive overview of the state-of-the-art development of NR, including deployment scenarios, numerologies, frame structure, new waveform, multiple access, initial/random access procedure, and enhanced carrier aggregation (CA) for resource requests and data transmissions. The provided insights thus facilitate knowledge of design and practice for further features of NR.",
"title": ""
},
{
"docid": "cebcd53ef867abb158445842cd0f4daf",
"text": "Let [ be a random variable over a finite set with an arbitrary probability distribution. In this paper we make improvements to a fast method of generating sample values for ( in constant time.",
"title": ""
},
{
"docid": "6393d61b229e7230e256922445534bdb",
"text": "Recently, region based methods for estimating the 3D pose of an object from a 2D image have gained increasing popularity. They do not require prior knowledge of the object’s texture, making them particularity attractive when the object’s texture is unknown a priori. Region based methods estimate the 3D pose of an object by finding the pose which maximizes the image segmentation in to foreground and background regions. Typically the foreground and background regions are described using global appearance models, and an energy function measuring their fit quality is optimized with respect to the pose parameters. Applying a region based approach on standard 2D-3D pose estimation databases shows its performance is strongly dependent on the scene complexity. In simple scenes, where the statistical properties of the foreground and background do not spatially vary, it performs well. However, in more complex scenes, where the statistical properties of the foreground or background vary, the performance strongly degrades. The global appearance models used to segment the image do not sufficiently capture the spatial variation. Inspired by ideas from local active contours, we propose a framework for simultaneous image segmentation and pose estimation using multiple local appearance models. The local appearance models are capable of capturing spatial variation in statistical properties, where global appearance models are limited. We derive an energy function, measuring the image segmentation, using multiple local regions and optimize it with respect to the pose parameters. Our experiments show a substantially higher probability of estimating the correct pose for heterogeneous objects, whereas for homogeneous objects there is minor improvement.",
"title": ""
},
{
"docid": "f267b329f52628d3c52a8f618485ae95",
"text": "We present an approach to continuous American Sign Language (ASL) recognition, which uses as input three-dimensional data of arm motions. We use computer vision methods for three-dimensional object shape and motion parameter extraction and an Ascension Technologies Flock of Birds interchangeably to obtain accurate three-dimensional movement parameters of ASL sentences, selected from a 53-sign vocabulary and a widely varied sentence structure. These parameters are used as features for Hidden Markov Models (HMMs). To address coarticulation effects and improve our recognition results, we experimented with two different approaches. The first consists of training context-dependent HMMs and is inspired by speech recognition systems. The second consists of modeling transient movements between signs and is inspired by the characteristics of ASL phonology. Our experiments verified that the second approach yields better recognition results.",
"title": ""
},
{
"docid": "8814d6589ecea87015017feb3ba18b01",
"text": "Although pneumatic robots are expected to be physically friendly to humans and human-environments, large and heavy air sources and reservoir tanks are a problem to build a self-contained pneumatic robot. This paper proposes a compressor-embedded pneumatic-driven humanoid system consisting of a very small distributed compressors and hollow bones as air reservoir tanks as well as the structural parts. Musculoskeletal systems have possibility of doing dynamic motions using physical elasticity of muscles and tendons, coupled-driven systems of multi-articular muscles, and so on. We suppose a pneumatic driven flexible spine will be contribute to dynamic motions as well as physical adaptivity to environments. This paper presents the concept, design, and implementation of the compressor-embedded pneumatic-driven musculoskeletal humanoid robot named “buEnwa.” We have developed the pneumatic robot which embeds very small compressors and reservoir tanks, and has a multi-joint spine in which physically elastic elements such as rubber bands are attached, and the coupled-driving system of the spine and the shoulder. This paper also shows preliminary experiments of the real robot.",
"title": ""
},
{
"docid": "920748fbdcaf91346a40e3bf5ae53d42",
"text": "This sketch presents an improved formalization of automatic caricature that extends a standard approach to account for the population variance of facial features. Caricature is generally considered a rendering that emphasizes the distinctive features of a particular face. A formalization of this idea, which we term “Exaggerating the Difference from the Mean” (EDFM), is widely accepted among caricaturists [Redman 1984] and was first implemented in a groundbreaking computer program by [Brennan 1985]. Brennan’s “Caricature generator” program produced caricatures by manually defining a polyline drawing with topology corresponding to a frontal, mean, face-shape drawing, and then displacing the vertices by a constant factor away from the mean shape. Many psychological studies have applied the “Caricature Generator” or EDFM idea to investigate caricaturerelated issues in face perception [Rhodes 1997].",
"title": ""
},
{
"docid": "c27d0db50a555d30f8e994cd72114d33",
"text": "We present a novel approach to generating photo-realistic images of a face with accurate lip sync, given an audio input. By using a recurrent neural network, we achieved mouth landmarks based on audio features. We exploited the power of conditional generative adversarial networks to produce highly-realistic face conditioned on a set of landmarks. These two networks together are capable of producing sequence of natural faces in sync with an input audio track.",
"title": ""
},
{
"docid": "4ceeffb061aed60299d4153bf48e2ad4",
"text": "Enhancing on line analytical processing through efficient cube computation plays a key role in Data Warehouse management. Hashing, grouping and mining techniques are commonly used to improve cube pre-computation. BitCube, a fast cubing method which uses bitmaps as inverted indexes for grouping, is presented. It horizontally partitions data according to the values of one dimension and for each resulting fragment it performs grouping following bottom-up criteria. BitCube allows also partial materialization based on iceberg conditions to treat large datasets for which a full cube pre-computation is too expensive. Space requirement of bitmaps is optimized by applying an adaption of the WAH compression technique. Experimental analysis, on both synthetic and real datasets, shows that BitCube outperforms previous algorithms for full cube computation and results comparable on iceberg cubing.",
"title": ""
},
{
"docid": "1cbaabb7514b7323aac7f0648dff6260",
"text": "While traditional database systems optimize for performance on one-shot query processing, emerging large-scale monitoring applications require continuous tracking of complex data-analysis queries over collections of physically distributed streams. Thus, effective solutions have to be simultaneously space/time efficient (at each remote monitor site), communication efficient (across the underlying communication network), and provide continuous, guaranteed-quality approximate query answers. In this paper, we propose novel algorithmic solutions for the problem of continuously tracking a broad class of complex aggregate queries in such a distributed-streams setting. Our tracking schemes maintain approximate query answers with provable error guarantees, while simultaneously optimizing the storage space and processing time at each remote site, and the communication cost across the network. In a nutshell, our algorithms rely on tracking general-purpose randomized sketch summaries of local streams at remote sites along with concise prediction models of local site behavior in order to produce highly communication- and space/time-efficient solutions. The end result is a powerful approximate query tracking framework that readily incorporates several complex analysis queries (including distributed join and multi-join aggregates, and approximate wavelet representations), thus giving the first known low-overhead tracking solution for such queries in the distributed-streams model. Experiments with real data validate our approach, revealing significant savings over naive solutions as well as our analytical worst-case guarantees.",
"title": ""
},
{
"docid": "27ea4d25d672b04632c53c711afe0ceb",
"text": "Many advancements have been taking place in unmanned aerial vehicle (UAV) technology lately. This is leading towards the design and development of UAVs with various sizes that possess increased on-board processing, memory, storage, and communication capabilities. Consequently, UAVs are increasingly being used in a vast amount of commercial, military, civilian, agricultural, and environmental applications. However, to take full advantages of their services, these UAVs must be able to communicate efficiently with each other using UAV-to-UAV (U2U) communication and with existing networking infrastructures using UAV-to-Infrastructure (U2I) communication. In this paper, we identify the functions, services and requirements of UAV-based communication systems. We also present networking architectures, underlying frameworks, and data traffic requirements in these systems as well as outline the various protocols and technologies that can be used at different UAV communication links and networking layers. In addition, the paper discusses middleware layer services that can be provided in order to provide seamless communication and support heterogeneous network interfaces. Furthermore, we discuss a new important area of research, which involves the use of UAVs in collecting data from wireless sensor networks (WSNs). We discuss and evaluate several approaches that can be used to collect data from different types of WSNs including topologies such as linear sensor networks (LSNs), geometric and clustered WSNs. We outline the benefits of using UAVs for this function, which include significantly decreasing sensor node energy consumption, lower interference, and offers considerably increased flexibility in controlling the density of the deployed nodes since the need for the multihop approach for sensor-tosink communication is either eliminated or significantly reduced. Consequently, UAVs can provide good connectivity to WSN clusters.",
"title": ""
},
{
"docid": "87c3c488f027ef96b1c2a096c122d1b4",
"text": "We study the label complexity of pool-based active learning in the agnostic PAC model. Specifically, we derive general bounds on the number of label requests made by the A2 algorithm proposed by Balcan, Beygelzimer & Langford (Balcan et al., 2006). This represents the first nontrivial general-purpose upper bound on label complexity in the agnostic PAC model.",
"title": ""
},
{
"docid": "1e6ea96d9aafb244955ff38423562a1c",
"text": "Many statistical methods rely on numerical optimization to estimate a model’s parameters. Unfortunately, conventional algorithms sometimes fail. Even when they do converge, there is no assurance that they have found the global, rather than a local, optimum. We test a new optimization algorithm, simulated annealing, on four econometric problems and compare it to three common conventional algorithms. Not only can simulated annealing find the global optimum, it is also less likely to fail on difficult functions because it is a very robust algorithm. The promise of simulated annealing is demonstrated on the four econometric problems.",
"title": ""
},
{
"docid": "717dd8e3c699d6cc22ba483002ab0a6f",
"text": "Our analysis of many real-world event based applications has revealed that existing Complex Event Processing technology (CEP), while effective for efficient pattern matching on event stream, is limited in its capability of reacting in realtime to opportunities and risks detected or environmental changes. We are the first to tackle this problem by providing active rule support embedded directly within the CEP engine, henceforth called Active Complex Event Processing technology, or short, Active CEP. We design the Active CEP model and associated rule language that allows rules to be triggered by CEP system state changes and correctly executed during the continuous query process. Moreover we design an Active CEP infrastructure, that integrates the active rule component into the CEP kernel, allowing finegrained and optimized rule processing. We demonstrate the power of Active CEP by applying it to the development of a collaborative project with UMass Medical School, which detects potential threads of infection and reminds healthcare workers to perform hygiene precautions in real-time. 1. BACKGROUND AND MOTIVATION Complex patterns of events often capture exceptions, threats or opportunities occurring across application space and time. Complex Event Processing (CEP) technology has thus increasingly gained popularity for efficiently detecting such event patterns in real-time. For example CEP has been employed by diverse applications ranging from healthcare systems , financial analysis , real-time business intelligence to RFID based surveillance. However, existing CEP technologies [3, 7, 2, 5], while effective for pattern matching, are limited in their capability of supporting active rules. We motivate the need for such capability based on our experience with the development of a real-world hospital infection control system, called HygieneReminder, or short HyReminder. Application: HyReminder. According to the U.S. Centers for Disease Control and Prevention [8], healthcareassociated infections hit 1.7 million people a year in the Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Articles from this volume were presented at The 36th International Conference on Very Large Data Bases, September 13-17, 2010, Singapore. Proceedings of the VLDB Endowment, Vol. 3, No. 2 Copyright 2010 VLDB Endowment 2150-8097/10/09... $ 10.00. United States, causing an estimated 99,000 deaths. HyReminder is a collaborated project between WPI and University of Massachusetts Medical School (UMMS) that uses advanced CEP technologies to solve this long-standing public health problem. HyReminder system aims to continuously track healthcare workers (HCW) for hygiene compliance (for example cleansing hands before entering a H1N1 patient’s room), and remind the HCW at the appropriate moments to perform hygiene precautions thus preventing spread of infections. CEP technologies are adopted to efficiently monitor event patterns, such as the sequence that a HCW left a patient room (this behavior is measured by a sensor reading and modeled as “exit” event), did not sanitize his hands (referred as “!sanitize”, where ! 
represents negation), and then entered another patient’s room (referred as “enter”). Such a sequence of behaviors, i.e. SEQ(exit,!sanitize,enter), would be deemed as a violation of hand hygiene regulations. Besides detecting complex events, the HyReminder system requires the ability to specify logic rules reminding HCWs to perform the respective appropriate hygiene upon detection of an imminent hand hygiene violation or an actual observed violation. A condensed version of example logic rules derived from HyReminder and modeled using CEP semantics is depicted in Figure 1. In the figure, the edge marked “Q1.1” expresses the logic that “if query Q1.1 is satisfied for a HCW, then change his hygiene status to warning and change his badge light to yellow”. This logic rule in fact specifies how the system should react to the observed change, here meaning the risk being detected by the continuous pattern matching query Q1.1, during the long running query process. The system’s streaming environment requires that such reactions be executed in a timely fashion. An additional complication arises in that the HCW status changed by this logic rule must be used as a condition by other continuous queries at run time, like Q2.1 and Q2.2. We can see that active rules and continuous queries over streaming data are tightly-coupled: continuous queries are monitoring the world while active rules are changing the world, both in real-time. Yet contrary to traditional databases, data is not persistently stored in a DSMS, but rather streamed through the system in fluctuating arrival rate. Thus processing active rules in CEP systems requires precise synchronization between queries and rules and careful consideration of latency and resource utilization. Limitations of Existing CEP Technology. In summary, the following active functionalities are needed by many event stream applications, but not supported by the existing",
"title": ""
}
] |
scidocsrr
|
33817226d171c1c9c82f36374d801fdc
|
Parallel graph analytics
|
[
{
"docid": "666b9e88e881bbaa70037ba6f2548acf",
"text": "Since the early 1990s, there has been a significant research activity in efficient parallel algorithms and novel computer architectures for problems that have been already solved sequentially (sorting, maximum flow, searching, etc). In this handout, we are interested in parallel algorithms and we avoid particular hardware details. The primary architectural model for our algorithms is a simplified machine called Parallel RAM (or PRAM). In essence, the PRAM model consists of a number p of processors that can read and/or write on a shared “global” memory in parallel (i.e., at the same time). The processors can also perform various arithmetic and logical operations in parallel.",
"title": ""
}
] |
[
{
"docid": "141b333f0c7b256be45c478a79e8f8eb",
"text": "Communications regulators over the next decade will spend increasing time on conflicts between the private interests of broadband providers and the public’s interest in a competitive innovation environment centered on the Internet. As the policy questions this conflict raises are basic to communications policy, they are likely to reappear in many different forms. So far, the first major appearance has come in the ‘‘open access’’ (or ‘‘multiple access’’) debate, over the desirability of allowing vertical integration between Internet Service Providers and cable operators. Proponents of open access see it as a structural remedy to guard against an erosion of the ‘‘neutrality’’ of the network as between competing content and applications. Critics, meanwhile, have taken open-access regulation as unnecessary and likely to slow the pace of broadband deployment.",
"title": ""
},
{
"docid": "01809d609802d949aa8c1604db29419d",
"text": "Do convolutional networks really need a fixed feed-forward structure? What if, after identifying the high-level concept of an image, a network could move directly to a layer that can distinguish finegrained differences? Currently, a network would first need to execute sometimes hundreds of intermediate layers that specialize in unrelated aspects. Ideally, the more a network already knows about an image, the better it should be at deciding which layer to compute next. In this work, we propose convolutional networks with adaptive inference graphs (ConvNet-AIG) that adaptively define their network topology conditioned on the input image. Following a high-level structure similar to residual networks (ResNets), ConvNet-AIG decides for each input image on the fly which layers are needed. In experiments on ImageNet we show that ConvNet-AIG learns distinct inference graphs for different categories. Both ConvNet-AIG with 50 and 101 layers outperform their ResNet counterpart, while using 20% and 33% less computations respectively. By grouping parameters into layers for related classes and only executing relevant layers, ConvNet-AIG improves both efficiency and overall classification quality. Lastly, we also study the effect of adaptive inference graphs on the susceptibility towards adversarial examples. We observe that ConvNet-AIG shows a higher robustness than ResNets, complementing other known defense mechanisms.",
"title": ""
},
{
"docid": "2752c235aea735a04b70272deb042ea6",
"text": "Psychophysiological studies with music have not examined what exactly in the music might be responsible for the observed physiological phenomena. The authors explored the relationships between 11 structural features of 16 musical excerpts and both self-reports of felt pleasantness and arousal and different physiological measures (respiration, skin conductance, heart rate). Overall, the relationships between musical features and experienced emotions corresponded well with those known between musical structure and perceived emotions. This suggests that the internal structure of the music played a primary role in the induction of the emotions in comparison to extramusical factors. Mode, harmonic complexity, and rhythmic articulation best differentiated between negative and positive valence, whereas tempo, accentuation, and rhythmic articulation best discriminated high arousal from low arousal. Tempo, accentuation, and rhythmic articulation were the features that most strongly correlated with physiological measures. Music that induced faster breathing and higher minute ventilation, skin conductance, and heart rate was fast, accentuated, and staccato. This finding corroborates the contention that rhythmic aspects are the major determinants of physiological responses to music.",
"title": ""
},
{
"docid": "bf71f7f57def7633a5390b572e983bc9",
"text": "With the development of the Internet, cyber-attacks are changing rapidly and the cyber security situation is not optimistic. This survey report describes key literature surveys on machine learning (ML) and deep learning (DL) methods for network analysis of intrusion detection and provides a brief tutorial description of each ML/DL method. Papers representing each method were indexed, read, and summarized based on their temporal or thermal correlations. Because data are so important in ML/DL methods, we describe some of the commonly used network datasets used in ML/DL, discuss the challenges of using ML/DL for cybersecurity and provide suggestions for research directions.",
"title": ""
},
{
"docid": "ddc37b3b8c9de07f155d4dd569fe122d",
"text": "Heparin-induced thrombocytopenia, or HIT, represents an antibodymediated adverse drug reaction characterized by platelet-activating immunoglobulin G (IgG) that recognize platelet factor 4 (PF4)/heparin complexes. Because HIT is a clinical-pathological disorder, diagnosis requires integrating clinical and laboratory features. HIT is a profound hypercoagulability state strongly associated with thrombosis.Moreover,HIT is relativelycommon: although it occurs in only ;0.2% of hospitalized patients undergoing any heparin exposure, it is more common in certain high-risk patient populations. For example, the frequency of HIT is;5% in postorthopedic surgery patients receiving unfractionated heparin (UFH) for 10 to 14 days. In cardiac surgery patients who are given intraoperative UFH (for cardiopulmonary bypass [CPB]) and who receive postoperative UFH thromboprophylaxis, the frequency is;1% to 2%. The ubiquity of heparin and the relatively common occurrence of HIT explain the increasing numbers of patients with a history of HIT. Several issues are relevant when considering anticoagulant management of a patient with a history of HIT. First, there are 3 conditions—cardiac surgery, vascular surgery, hemodialysis—for which UFH is the clear anticoagulant of choice. Second, there are different laboratory tests to detect anti-PF4/heparin antibodies, for which implications regarding antibody pathogenicity vary, particularly for immunoassays versus platelet activation assays. Third, anti-PF4/ heparin antibodies are among the most transient in clinical medicine: a patient with HIT will generally have antibodies detectable by both PF4-dependent immunoassay and platelet activation assay at the time of acute HIT, but these antibodies can be difficult to detect as little as a few weeks or months later. Fourth, a minimum of 5 days is required to generate, or to regenerate, pathogenic platelet-activating antibodies. Thus, for certain patient populations, deliberate and planned reexposure to heparin has a rational basis. Indeed, this strategy was pioneered by Pötzsch et al, who described unremarkable outcomes of heparin reexposure for CPB in 10 patients with previous HIT who were antibody negative at reexposure (and who remained antibody negative 10 days after reexposure). For many clinical situations, alternative non-heparin anticoagulation is well established (eg, treatment and/or prevention of venous thromboembolismormanagement of acute coronary syndrome). Thus, one can usually identify 1 or more appropriate options among the large panoply of agents, such as direct oral anticoagulants, either factor Xa (rivaroxaban, apixaban, edoxaban) or thrombin (dabigatran) inhibiting, or a parenteral agent (anti-Xa [fondaparinux, danaparoid] or anti-IIa [bivalirudin, argatroban]). For these clinical settings, we do not discuss UFH and low molecular weight heparin (LMWH) as management options for patients with a previous history of HIT.",
"title": ""
},
{
"docid": "c258ca8e7c9d351fc8e380b0af0a529e",
"text": "Pervasive technology devices that intend to be worn must not only meet our functional requirements but also our social, emotional, and aesthetic needs. Current pervasive devices such as the PDA or cell phone are more portable than wearable, yet still they elicit strong consumer demand for intuitive interfaces and well-designed forms. Looking to the future of wearable pervasive devices, we can imagine an even greater demand for meaningful forms for objects nestled so close to our bodies. They will need to reflect our tastes and moods, and allow us to express our personalities, cultural beliefs, and values. Digital Jewelry explores a new wearable technology form that is based in jewelry design, not in technology. Through prototypes and meaningful scenarios, digital jewelry offers new ideas to consider in the design of wearable devices.",
"title": ""
},
{
"docid": "29e5afc2780455b398e7b7451c08e39f",
"text": "The recent special report, “Indiana’s Vision of Response to Intervention” issued by the Center for Evaluation & Education Policy (CEEP) was the first of a three-part series aimed to build a fundamental understanding of a Response-to-Intervention (RTI) framework in Indiana’s schools to aid in the prevention and intervention of both academic and behavioral problems for all students. The report also discussed the impetus for implementation of RTI, as well as what the state of Indiana is currently doing to respond to and guide schools through this new initiative. Specifically, Indiana’s Department of Education (IDOE) has developed a framework of RTI that addresses six core components on which to focus: (1) evidence-based curriculum, instruction, intervention and extension; (2) assessment and progress monitoring; (3) data-based decision making; (4) leadership; (5) family, school, and community partnerships; and (6) cultural responsivity.",
"title": ""
},
{
"docid": "2eff84064f1d9d183eddc7e048efa8e6",
"text": "Rupinder Kaur, Dr. Jyotsna Sengupta Abstract— The software process model consists of a set of activities undertaken to design, develop and maintain software systems. A variety of software process models have been designed to structure, describe and prescribe the software development process. The software process models play a very important role in software development, so it forms the core of the software product. Software project failure is often devastating to an organization. Schedule slips, buggy releases and missing features can mean the end of the project or even financial ruin for a company. Oddly, there is disagreement over what it means for a project to fail. In this paper, discussion is done on current process models and analysis on failure of software development, which shows the need of new research.",
"title": ""
},
{
"docid": "fec4f80f907d65d4b73480b9c224d98a",
"text": "This paper presents a novel finite position set-phase locked loop (FPS-PLL) for sensorless control of surface-mounted permanent-magnet synchronous generators (PMSGs) in variable-speed wind turbines. The proposed FPS-PLL is based on the finite control set-model predictive control concept, where a finite number of rotor positions are used to estimate the back electromotive force of the PMSG. Then, the estimated rotor position, which minimizes a certain cost function, is selected to be the optimal rotor position. This eliminates the need of a fixed-gain proportional-integral controller, which is commonly utilized in the conventional PLL. The performance of the proposed FPS-PLL has been experimentally investigated and compared with that of the conventional one using a 14.5 kW PMSG with a field-oriented control scheme utilized as the generator control strategy. Furthermore, the robustness of the proposed FPS-PLL is investigated against PMSG parameters variations.",
"title": ""
},
{
"docid": "17a364d7dedd5e503446d02242b3dda7",
"text": "Paratracheal air cysts are collections of air adjacent to the trachea. These lesions are usually an incidental finding at routine chest computed tomography (CT) scan, and their frequency is probably underestimated, because almost all patients are asymptomatic.1 Differential diagnosis of paratracheal air cysts includes tracheal diverticulum, pharyngocele, laryngocele, Zenker diverticulum, apical lung hernia, blebs and bulla, and pneumomediastinum.1-4 We present here a case of chronic obstructive pulmonary disease (COPD) with incidental finding of a large tracheal diverticulum.",
"title": ""
},
{
"docid": "35812bda0819769efb1310d1f6d5defd",
"text": "Distributed Denial-of-Service (DDoS) attacks are increasing in frequency and volume on the Internet, and there is evidence that cyber-criminals are turning to Internet-of-Things (IoT) devices such as cameras and vending machines as easy launchpads for large-scale attacks. This paper quantifies the capability of consumer IoT devices to participate in reflective DDoS attacks. We first show that household devices can be exposed to Internet reflection even if they are secured behind home gateways. We then evaluate eight household devices available on the market today, including lightbulbs, webcams, and printers, and experimentally profile their reflective capability, amplification factor, duration, and intensity rate for TCP, SNMP, and SSDP based attacks. Lastly, we demonstrate reflection attacks in a real-world setting involving three IoT-equipped smart-homes, emphasising the imminent need to address this problem before it becomes widespread.",
"title": ""
},
{
"docid": "8bb0a1b97222c065fe1e3c4738ca969d",
"text": "\"Explicit concurrency should be abolished from all higher-level programming languages (i.e. everything except - perhaps- plain machine code.).\" Dijkstra [1] (paraphrased). A promising class of concurrency abstractions replaces explicit concurrency mechanisms with a single linguistic mechanism that combines state and control and uses asynchronous messages for communications, e.g. active objects or actors, but that doesn't remove the hurdle of understanding non-local control transfer. What if the programming model enabled programmers to simply do what they do best, that is, to describe a system in terms of its modular structure and write sequential code to implement the operations of those modules and handles details of concurrency? In a recently sponsored NSF project we are developing such a model that we call capsule-oriented programming and its realization in the Panini project. This model favors modularity over explicit concurrency, encourages concurrency correctness by construction, and exploits modular structure of programs to expose implicit concurrency.",
"title": ""
},
{
"docid": "c0650814388c7e1de19ee6e668d40e69",
"text": "In this paper we consider persuasion in the context of practical reasoning, and discuss the problems associated with construing reasoning about actions in a manner similar to reasoning about beliefs. We propose a perspective on practical reasoning as presumptive justification of a course of action, along with critical questions of this justification, building on the account of Walton. From this perspective, we articulate an interaction protocol, which we call PARMA, for dialogues over proposed actions based on this theory. We outline an axiomatic semantics for the PARMA Protocol, and discuss two implementations which use this protocol to mediate a discussion between humans. We then show how our proposal can be made computational within the framework of agents based on the Belief-Desire-Intention model, and illustrate this proposal with an example debate within a multi agent system.",
"title": ""
},
{
"docid": "9ddc451ee5509f69ffab3f3485ba5870",
"text": "GOAL\nThe aims are to establish the prevalence of newfound, unidentified cases of depressive disorder by screening with the Becks Depression scale; To establish a comparative relationship with self-identified cases of depression in the patients in the family medicine; To assess the significance of the BDI in screening practice of family medicine.\n\n\nPATIENTS AND METHODS\nA prospective study was conducted anonymously by Beck's Depression scale (Beck Depression Questionnaire org.-BDI) and specially created short questionnaire. The study included 250 randomly selected patients (20-60 years), users of services in family medicine in \"Dom Zdravlja\" Zenica, and the final number of respondents with included in the study was 126 (51 male, 75 female; response or response rate 50.4%). Exclusion factor was previously diagnosed and treated mental disorder. Participation was voluntary and respondents acknowledge the validity of completing the questionnaire. BDI consists of 21 items. Answers to questions about symptoms were ranked according to the Likert type scale responses from 0-4 (from irrelevant to very much). Respondents expressed themselves on personal perception of depression, whether are or not depressed.\n\n\nRESULTS\nDepression was observed in 48% of patients compared to 31% in self estimate depression analyzed the questionnaires. The negative trend in the misrecognition of depression is -17% (48:31). Depression was significantly more frequent in unemployed compared to employed respondents (p = 0.001). The leading symptom in both sexes is the perception of lost hope (59% of cases).\n\n\nCONCLUSION\nAll respondents in family medicine care in Zenica showed a high percentage of newly detected (17%) patients with previously unrecognized depression. BDI is a really simple and effective screening tool for the detection and identification of persons with symptoms of depression.",
"title": ""
},
{
"docid": "7c4c33097c12f55a08f8a7cc3634c5cb",
"text": "Pattern queries are widely used in complex event processing (CEP) systems. Existing pattern matching techniques, however, can provide only limited performance for expensive queries in real-world applications, which may involve Kleene closure patterns, flexible event selection strategies, and events with imprecise timestamps. To support these expensive queries with high performance, we begin our study by analyzing the complexity of pattern queries, with a focus on the fundamental understanding of which features make pattern queries more expressive and at the same time more computationally expensive. This analysis allows us to identify performance bottlenecks in processing those expensive queries, and provides key insights for us to develop a series of optimizations to mitigate those bottlenecks. Microbenchmark results show superior performance of our system for expensive pattern queries while most state-of-the-art systems suffer from poor performance. A thorough case study on Hadoop cluster monitoring further demonstrates the efficiency and effectiveness of our proposed techniques.",
"title": ""
},
{
"docid": "16124de7e93f0541eed8a3dceed82f7a",
"text": "Modern deep learning methods achieve state-ofthe-art results in many computer vision tasks. While these methods perform well when trained on large datasets, deep learning methods suffer from overfitting and lack of generalization given smaller datasets. Especially in medical image analysis, acquisition of both imaging data and corresponding ground-truth annotations (e.g. pixel-wise segmentation masks) as required for supervised tasks, is time consuming and costly, since experts are needed to manually annotate data. In this work we study this problem by proposing a new variant of Generative Adversarial Networks (GANs), which, in addition to synthesized medical images, also generates segmentation masks for the use in supervised medical image analysis applications. We evaluate our approach on a lung segmentation task involving thorax X-ray images, and show that GANs have the potential to be used for synthesizing training data in this specific application.",
"title": ""
},
{
"docid": "84dbffa5a04f442027a5fd25a6f62d96",
"text": "Data streams mining have become a novel research topic of growing interest in knowledge discovery. The data streams which are generated from applications, such as network analysis, real time surveillance systems, sensor networks and financial generate huge data streams. These data streams consist of millions or billions of updates and must be processed to extract the useful information. Because of the high speed and huge size of data set in data streams, the traditional classification technologies are no longer applicable. In recent years a great deal of research has been done on this problem, most intends to efficiently solve the data streams mining problem with concept drift. This paper presents a novel approach for data stream classification which handles concept drift. This approach uses weighted majority approach with adaptive sliding window strategies. The experimental result shows that this novel approach works better than other methods.",
"title": ""
},
{
"docid": "6b4e1e45ef1b91b7694c62bd5d3cd9fc",
"text": "Recently, academia and law enforcement alike have shown a strong demand for data that is collected from online social networks. In this work, we present a novel method for harvesting such data from social networking websites. Our approach uses a hybrid system that is based on a custom add-on for social networks in combination with a web crawling component. The datasets that our tool collects contain profile information (user data, private messages, photos, etc.) and associated meta-data (internal timestamps and unique identifiers). These social snapshots are significant for security research and in the field of digital forensics. We implemented a prototype for Facebook and evaluated our system on a number of human volunteers. We show the feasibility and efficiency of our approach and its advantages in contrast to traditional techniques that rely on application-specific web crawling and parsing. Furthermore, we investigate different use-cases of our tool that include consensual application and the use of sniffed authentication cookies. Finally, we contribute to the research community by publishing our implementation as an open-source project.",
"title": ""
},
{
"docid": "23d7eb4d414e4323c44121040c3b2295",
"text": "BACKGROUND\nThe use of clinical decision support systems to facilitate the practice of evidence-based medicine promises to substantially improve health care quality.\n\n\nOBJECTIVE\nTo describe, on the basis of the proceedings of the Evidence and Decision Support track at the 2000 AMIA Spring Symposium, the research and policy challenges for capturing research and practice-based evidence in machine-interpretable repositories, and to present recommendations for accelerating the development and adoption of clinical decision support systems for evidence-based medicine.\n\n\nRESULTS\nThe recommendations fall into five broad areas--capture literature-based and practice-based evidence in machine--interpretable knowledge bases; develop maintainable technical and methodological foundations for computer-based decision support; evaluate the clinical effects and costs of clinical decision support systems and the ways clinical decision support systems affect and are affected by professional and organizational practices; identify and disseminate best practices for work flow-sensitive implementations of clinical decision support systems; and establish public policies that provide incentives for implementing clinical decision support systems to improve health care quality.\n\n\nCONCLUSIONS\nAlthough the promise of clinical decision support system-facilitated evidence-based medicine is strong, substantial work remains to be done to realize the potential benefits.",
"title": ""
},
{
"docid": "6d3b9e3f51e45cb5ade883254c7844d8",
"text": "In this paper we have focused on the carbon nano tube field effect transistor technology. The advantages of CNTFET over MOS technology are also discussed. The structure and types of CNTFET are given in detail along with the variation of threshold voltage with respect to the alteration in CNT diameter. The characteristics curve between gate to source current and drain to source voltage is plotted. Various fixed and variable parameters of CNT are also focused.",
"title": ""
}
] |
scidocsrr
|
67c3b0a730893d241af4c6b7a2db6a7b
|
A digital controlled PV-inverter with grid impedance estimation for ENS detection
|
[
{
"docid": "819f6b62eb3f8f9d60437af28c657935",
"text": "The global electrical energy consumption is rising and there is a steady increase of the demand on the power capacity, efficient production, distribution and utilization of energy. The traditional power systems are changing globally, a large number of dispersed generation (DG) units, including both renewable and nonrenewable energy sources such as wind turbines, photovoltaic (PV) generators, fuel cells, small hydro, wave generators, and gas/steam powered combined heat and power stations, are being integrated into power systems at the distribution level. Power electronics, the technology of efficiently processing electric power, play an essential part in the integration of the dispersed generation units for good efficiency and high performance of the power systems. This paper reviews the applications of power electronics in the integration of DG units, in particular, wind power, fuel cells and PV generators.",
"title": ""
}
] |
[
{
"docid": "4ee078123815eff49cc5d43550021261",
"text": "Generalized anxiety and major depression have become increasingly common in the United States, affecting 18.6 percent of the adult population. Mood disorders can be debilitating, and are often correlated with poor general health, life dissatisfaction, and the need for disability benefits due to inability to work. Recent evidence suggests that some mood disorders have a circadian component, and disruptions in circadian rhythms may even trigger the development of these disorders. However, the molecular mechanisms of this interaction are not well understood. Polymorphisms in a circadian clock-related gene, PER3, are associated with behavioral phenotypes (extreme diurnal preference in arousal and activity) and sleep/mood disorders, including seasonal affective disorder (SAD). Here we show that two PER3 mutations, a variable number tandem repeat (VNTR) allele and a single-nucleotide polymorphism (SNP), are associated with diurnal preference and higher Trait-Anxiety scores, supporting a role for PER3 in mood modulation. In addition, we explore a potential mechanism for how PER3 influences mood by utilizing a comprehensive circadian clock model that accurately predicts the changes in circadian period evident in knock-out phenotypes and individuals with PER3-related clock disorders.",
"title": ""
},
{
"docid": "fd2450f5b02a2599be29b90a599ad31d",
"text": "Male genital injuries, demand prompt management to prevent long-term sexual and psychological damage. Injuries to the scrotum and contents may produce impaired fertility.We report our experience in diagnosing and managing a case of a foreign body in the scrotum following a boat engine blast accident. This case report highlights the need for a good history and thorough general examination to establish the mechanism of injury in order to distinguish between an embedded penetrating projectile injury and an injury with an exit wound. Prompt surgical exploration with hematoma evacuation limits complications.",
"title": ""
},
{
"docid": "bf623afcf45d449bbfaa87c8fd41a7f6",
"text": "A noise power spectral density (PSD) estimation is an indispensable component of speech spectral enhancement systems. In this paper we present a noise PSD tracking algorithm, which employs a noise presence probability estimate delivered by a deep neural network (DNN). The algorithm provides a causal noise PSD estimate and can thus be used in speech enhancement systems for communication purposes. An extensive performance comparison has been carried out with ten causal state-of-the-art noise tracking algorithms taken from the literature and categorized acc. to applied techniques. The experiments showed that the proposed DNN-based noise PSD tracker outperforms all competing methods with respect to all tested performance measures, which include the noise tracking performance and the performance of a speech enhancement system employing the noise tracking component.",
"title": ""
},
{
"docid": "d521b14ee04dbf69656240ef47c3319c",
"text": "This paper presents a computationally efficient approach for temporal action detection in untrimmed videos that outperforms state-of-the-art methods by a large margin. We exploit the temporal structure of actions by modeling an action as a sequence of sub-actions. A novel and fully automatic sub-action discovery algorithm is proposed, where the number of sub-actions for each action as well as their types are automatically determined from the training videos. We find that the discovered sub-actions are semantically meaningful. To localize an action, an objective function combining appearance, duration and temporal structure of sub-actions is optimized as a shortest path problem in a network flow formulation. A significant benefit of the proposed approach is that it enables real-time action localization (40 fps) in untrimmed videos. We demonstrate state-of-the-art results on THUMOS’14 and MEXaction2 datasets.",
"title": ""
},
{
"docid": "ab1b9b18163d3e732a2f8fc8b4e04ab1",
"text": "We measure the knowledge flows between countries by analysing publication and citation data, arguing that not all citations are equally important. Therefore, in contrast to existing techniques that utilize absolute citation counts to quantify knowledge flows between different entities, our model employs a citation context analysis technique, using a machine-learning approach to distinguish between important and non-important citations. We use 14 novel features (including context-based, cue words-based and text-based) to train a Support Vector Machine (SVM) and Random Forest classifier on an annotated dataset of 20,527 publications downloaded from the Association for Computational Linguistics anthology (http://allenai.org/data.html). Our machine-learning models outperform existing state-of-the-art citation context approaches, with the SVM model reaching up to 61% and the Random Forest model up to a very encouraging 90% Precision–Recall Area Under the Curve, with 10-fold cross-validation. Finally, we present a case study to explain our deployed method for datasets of PLoS ONE full-text publications in the field of Computer and Information Sciences. Our results show that a significant volume of knowledge flows from the United States, based on important citations, are consumed by the international scientific community. Of the total knowledge flow from China, we find a relatively smaller proportion (only 4.11%) falling into the category of knowledge flow based on important citations, while The Netherlands and Germany show the highest proportions of knowledge flows based on important citations, at 9.06 and 7.35% respectively. Among the institutions, interestingly, the findings show that at the University of Malaya more than 10% of the knowledge produced falls into the category of important. We believe that such analyses are helpful to understand the dynamics of the relevant knowledge flows across nations and institutions.",
"title": ""
},
{
"docid": "7cc3da275067df8f6c017da37025856c",
"text": "A simple, green method is described for the synthesis of Gold (Au) and Silver (Ag) nanoparticles (NPs) from the stem extract of Breynia rhamnoides. Unlike other biological methods for NP synthesis, the uniqueness of our method lies in its fast synthesis rates (~7 min for AuNPs) and the ability to tune the nanoparticle size (and subsequently their catalytic activity) via the extract concentration used in the experiment. The phenolic glycosides and reducing sugars present in the extract are largely responsible for the rapid reduction rates of Au(3+) ions to AuNPs. Efficient reduction of 4-nitrophenol (4-NP) to 4-aminophenol (4-AP) in the presence of AuNPs (or AgNPs) and NaBH(4) was observed and was found to depend upon the nanoparticle size or the stem extract concentration used for synthesis.",
"title": ""
},
{
"docid": "4ecf150613d45ae0f92485b8faa0deef",
"text": "Query optimizers in current database systems are designed to pick a single efficient plan for a given query based on current statistical properties of the data. However, different subsets of the data can sometimes have very different statistical properties. In such scenarios it can be more efficient to process different subsets of the data for a query using different plans. We propose a new query processing technique called content-based routing (CBR) that eliminates the single-plan restriction in current systems. We present low-overhead adaptive algorithms that partition input data based on statistical properties relevant to query execution strategies, and efficiently route individual tuples through customized plans based on their partition. We have implemented CBR as an extension to the Eddies query processor in the TelegraphCQ system, and we present an extensive experimental evaluation showing the significant performance benefits of CBR.",
"title": ""
},
{
"docid": "fe8386f75bb68d7cde398aab59cfb543",
"text": "Nutrition educators research, teach, and conduct outreach within the field of community food security (CFS), yet no clear consensus exists concerning what the field encompasses. Nutrition education needs to be integrated into the CFS movement for the fundamental reason that optimal health, well-being, and sustainability are at the core of both nutrition education and CFS. Establishing commonalities at the intersection of academic research, public policy development, and distinctive nongovernmental organizations expands opportunities for professional participation. Entry points for nutrition educators' participation are provided, including efforts dedicated to education, research, policy, programs and projects, and human rights.",
"title": ""
},
{
"docid": "556c0c1662a64f484aff9d7556b2d0b5",
"text": "In this paper, we investigate the Chinese calligraphy synthesis problem: synthesizing Chinese calligraphy images with specified style from standard font(eg. Hei font) images (Fig. 1(a)). Recent works mostly follow the stroke extraction and assemble pipeline which is complex in the process and limited by the effect of stroke extraction. In this work we treat the calligraphy synthesis problem as an image-to-image translation problem and propose a deep neural network based model which can generate calligraphy images from standard font images directly. Besides, we also construct a large scale benchmark that contains various styles for Chinese calligraphy synthesis. We evaluate our method as well as some baseline methods on the proposed dataset, and the experimental results demonstrate the effectiveness of our proposed model.",
"title": ""
},
{
"docid": "5dde43ab080f516c0b485fcd951bf9e1",
"text": "Differential privacy is a framework to quantify to what extent individual privacy in a statistical database is preserved while releasing useful aggregate information about the database. In this paper, within the classes of mechanisms oblivious of the database and the queriesqueries beyond the global sensitivity, we characterize the fundamental tradeoff between privacy and utility in differential privacy, and derive the optimal ϵ-differentially private mechanism for a single realvalued query function under a very general utility-maximization (or cost-minimization) framework. The class of noise probability distributions in the optimal mechanism has staircase-shaped probability density functions which are symmetric (around the origin), monotonically decreasing and geometrically decaying. The staircase mechanism can be viewed as a geometric mixture of uniform probability distributions, providing a simple algorithmic description for the mechanism. Furthermore, the staircase mechanism naturally generalizes to discrete query output settings as well as more abstract settings. We explicitly derive the parameter of the optimal staircase mechanism for ℓ<sup>1</sup> and ℓ<sup>2</sup> cost functions. Comparing the optimal performances with those of the usual Laplacian mechanism, we show that in the high privacy regime (ϵ is small), the Laplacian mechanism is asymptotically optimal as ϵ → 0; in the low privacy regime (ϵ is large), the minimum magnitude and second moment of noise are Θ(Δe<sup>(-ϵ/2)</sup>) and Θ(Δ<sup>2</sup>e<sup>(-2ϵ/3)</sup>) as ϵ → +∞, respectively, while the corresponding figures when using the Laplacian mechanism are Δ/ϵ and 2Δ<sup>2</sup>/ϵ<sup>2</sup>, where Δ is the sensitivity of the query function. We conclude that the gains of the staircase mechanism are more pronounced in the moderate-low privacy regime.",
"title": ""
},
{
"docid": "37e936c375d34f356e195f844125ae84",
"text": "LEARNING OBJECTIVES\nThe reader is presumed to have a basic understanding of facial anatomy and facial rejuvenation procedures. After reading this article, the reader should also be able to: 1. Identify the essential anatomy of the face as it relates to facelift surgery. 2. Describe the common types of facelift procedures, including their strengths and weaknesses. 3. Apply appropriate preoperative and postoperative management for facelift patients. 4. Describe common adjunctive procedures. Physicians may earn 1.0 AMA PRA Category 1 Credit by successfully completing the examination based on material covered in this article. This activity should take one hour to complete. The examination begins on page 464. As a measure of the success of the education we hope you will receive from this article, we encourage you to log on to the Aesthetic Society website and take the preexamination before reading this article. Once you have completed the article, you may then take the examination again for CME credit. The Aesthetic Society will be able to compare your answers and use these data for future reference as we attempt to continually improve the CME articles we offer. ASAPS members can complete this CME examination online by logging on to the ASAPS members-only website (http://www.surgery.org/members) and clicking on \"Clinical Education\" in the menu bar. Modern aesthetic surgery of the face began in the first part of the 20th century in the United States and Europe. Initial limited excisions gradually progressed to skin undermining and eventually to a variety of methods for contouring the subcutaneous facial tissue. This particular review focuses on the cheek and neck. While the lid-cheek junction, eyelids, and brow must also be considered to obtain a harmonious appearance, those elements are outside the scope of this article. Overall patient management, including patient selection, preoperative preparation, postoperative care, and potential complications are discussed.",
"title": ""
},
{
"docid": "47b9da2d6f741419536879da699f7456",
"text": "We consider the problem of scientific literature search, and we suggest that citation relations between publications can be very helpful in the systematic retrieval of scientific literature. We introduce a new software tool called CitNetExplorer that can be used for citation-based scientific literature retrieval. To demonstrate the use of CitNetExplorer, we employ the tool to identify publications dealing with the topic of community detection in networks. Citationbased scientific literature retrieval can be especially helpful in situations in which one needs to obtain a comprehensive overview of the literature on a certain research topic, for instance in the preparation of a review article.",
"title": ""
},
{
"docid": "08f766ca84fc4cb70b0fc288e2f12a5a",
"text": "The authors present a unified account of 2 neural systems concerned with the development and expression of adaptive behaviors: a mesencephalic dopamine system for reinforcement learning and a \"generic\" error-processing system associated with the anterior cingulate cortex. The existence of the error-processing system has been inferred from the error-related negativity (ERN), a component of the event-related brain potential elicited when human participants commit errors in reaction-time tasks. The authors propose that the ERN is generated when a negative reinforcement learning signal is conveyed to the anterior cingulate cortex via the mesencephalic dopamine system and that this signal is used by the anterior cingulate cortex to modify performance on the task at hand. They provide support for this proposal using both computational modeling and psychophysiological experimentation.",
"title": ""
},
{
"docid": "3a6197322da0e5fe2c2d98a8fcba7a42",
"text": "The amygdala and hippocampal complex, two medial temporal lobe structures, are linked to two independent memory systems, each with unique characteristic functions. In emotional situations, these two systems interact in subtle but important ways. Specifically, the amygdala can modulate both the encoding and the storage of hippocampal-dependent memories. The hippocampal complex, by forming episodic representations of the emotional significance and interpretation of events, can influence the amygdala response when emotional stimuli are encountered. Although these are independent memory systems, they act in concert when emotion meets memory.",
"title": ""
},
{
"docid": "7846c66aa411507d44ff935607cdb3ab",
"text": "The orphan, membrane-bound estrogen receptor (GPER) is expressed at high levels in a large fraction of breast cancer patients and its expression is favorable for patients’ survival. We investigated the role of GPER as a potential tumor suppressor in triple-negative breast cancer cells MDA-MB-231 and MDA-MB-468 using cell cycle analysis and apoptosis assay. The constitutive activity of GPER was investigated. GPER-specific activation with G-1 agonist inhibited breast cancer cell growth in concentration-dependent manner via induction of the cell cycle arrest in G2/M phase, enhanced phosphorylation of histone H3 and caspase-3-mediated apoptosis. Analysis of the methylation status of the GPER promoter in the triple-negative breast cancer cells and in tissues derived from breast cancer patients revealed that GPER amount is regulated by epigenetic mechanisms and GPER expression is inactivated by promoter methylation. Furthermore, GPER expression was induced by stress factors, such as radiation, and GPER amount inversely correlated with the p53 expression level. Overall, our results establish the protective role in breast cancer tumorigenesis, and the cell surface expression of GPER makes it an excellent potential therapeutic target for triple-negative breast cancer.",
"title": ""
},
{
"docid": "83e897a37aca4c349b4a910c9c0787f4",
"text": "Computational imaging methods that can exploit multiple modalities have the potential to enhance the capabilities of traditional sensing systems. In this paper, we propose a new method that reconstructs multimodal images from their linear measurements by exploiting redundancies across different modalities. Our method combines a convolutional group-sparse representation of images with total variation (TV) regularization for high-quality multimodal imaging. We develop an online algorithm that enables the unsupervised learning of convolutional dictionaries on large-scale datasets that are typical in such applications. We illustrate the benefit of our approach in the context of joint intensity-depth imaging.",
"title": ""
},
{
"docid": "462a0746875e35116f669b16d851f360",
"text": "We previously have applied deep autoencoder (DAE) for noise reduction and speech enhancement. However, the DAE was trained using only clean speech. In this study, by using noisyclean training pairs, we further introduce a denoising process in learning the DAE. In training the DAE, we still adopt greedy layer-wised pretraining plus fine tuning strategy. In pretraining, each layer is trained as a one-hidden-layer neural autoencoder (AE) using noisy-clean speech pairs as input and output (or transformed noisy-clean speech pairs by preceding AEs). Fine tuning was done by stacking all AEs with pretrained parameters for initialization. The trained DAE is used as a filter for speech estimation when noisy speech is given. Speech enhancement experiments were done to examine the performance of the trained denoising DAE. Noise reduction, speech distortion, and perceptual evaluation of speech quality (PESQ) criteria are used in the performance evaluations. Experimental results show that adding depth of the DAE consistently increase the performance when a large training data set is given. In addition, compared with a minimum mean square error based speech enhancement algorithm, our proposed denoising DAE provided superior performance on the three objective evaluations.",
"title": ""
},
{
"docid": "5431514a65d66d40e55b87a5d326d3b5",
"text": "The authors describe a theoretical framework for understanding when people interacting with a member of a stereotyped group activate that group's stereotype and apply it to that person. It is proposed that both stereotype activation and stereotype application during interaction depend on the strength of comprehension and self-enhancement goals that can be satisfied by stereotyping one's interaction partner and on the strength of one's motivation to avoid prejudice. The authors explain how these goals can promote and inhibit stereotype activation and application, and describe diverse chronic and situational factors that can influence the intensity of these goals during interaction and, thereby, influence stereotype activation and application. This approach permits integration of a broad range of findings on stereotype activation and application.",
"title": ""
},
{
"docid": "49f955fb928955da09a3bfe08efe78bc",
"text": "A novel macro model approach for modeling ESD MOS snapback is introduced. The macro model consists of standard components only. It includes a MOS transistor modeled by BSIM3v3, a bipolar transistor modeled by VBIC, and a resistor for substrate resistance. No external current source, which is essential in most publicly reported macro models, is included since both BSIM3vs and VBIC have formulations built in to model the relevant effects. The simplicity of the presented macro model makes behavior languages, such as Verilog-A, and special ESD equations not necessary in model implementation. This offers advantages of high simulation speed, wider availability, and less convergence issues. Measurement and simulation of the new approach indicates that good silicon correlation can be achieved.",
"title": ""
}
] |
scidocsrr
|
a61863ed5eb35a663276a1a23e705585
|
A Field-Based Representation of Surrounding Vehicle Motion from a Monocular Camera
|
[
{
"docid": "3fa8b8a93716a85f8573bd1cb8d215f2",
"text": "Vision-based research for intelligent vehicles have traditionally focused on specific regions around a vehicle, such as a front looking camera for, e.g., lane estimation. Traffic scenes are complex and vital information could be lost in unobserved regions. This paper proposes a framework that uses four visual sensors for a full surround view of a vehicle in order to achieve an understanding of surrounding vehicle behaviors. The framework will assist the analysis of naturalistic driving studies by automating the task of data reduction of the observed trajectories. To this end, trajectories are estimated using a vehicle detector together with a multiperspective optimized tracker in each view. The trajectories are transformed to a common ground plane, where they are associated between perspectives and analyzed to reveal tendencies around the ego-vehicle. The system is tested on sequences from 2.5 h of drive on US highways. The multiperspective tracker is tested in each view as well as for the ability to associate vehicles bet-ween views with a 92% recall score. A case study of vehicles approaching from the rear shows certain patterns in behavior that could potentially influence the ego-vehicle.",
"title": ""
},
{
"docid": "fc2c995d20c83a72ea46f5055d1847a1",
"text": "In this paper, we present a novel probabilistic compact representation of the on-road environment, i.e., the dynamic probabilistic drivability map (DPDM), and demonstrate its utility for predictive lane change and merge (LCM) driver assistance during highway and urban driving. The DPDM is a flexible representation and readily accepts data from a variety of sensor modalities to represent the on-road environment as a spatially coded data structure, encapsulating spatial, dynamic, and legal information. Using the DPDM, we develop a general predictive system for LCMs. We formulate the LCM assistance system to solve for the minimum-cost solution to merge or change lanes, which is solved efficiently using dynamic programming over the DPDM. Based on the DPDM, the LCM system recommends the required acceleration and timing to safely merge or change lanes with minimum cost. System performance has been extensively validated using real-world on-road data, including urban driving, on-ramp merges, and both dense and free-flow highway conditions.",
"title": ""
}
] |
[
{
"docid": "cdc77cc0dfb4dc9c91e20c3118b1d1ee",
"text": "Maximum entropy models are considered by many to be one of the most promising avenues of language modeling research. Unfortunately, long training times make maximum entropy research difficult. We present a novel speedup technique: we change the form of the model to use classes. Our speedup works by creating two maximum entropy models, the first of which predicts the class of each word, and the second of which predicts the word itself. This factoring of the model leads to fewer nonzero indicator functions, and faster normalization, achieving speedups of up to a factor of 35 over one of the best previous techniques. It also results in typically slightly lower perplexities. The same trick can be used to speed training of other machine learning techniques, e.g. neural networks, applied to any problem with a large number of outputs, such as language modeling.",
"title": ""
},
{
"docid": "7ebff2391401cef25b27d510675e9acd",
"text": "We present a new approach for modeling multi-modal data sets, focusing on the specific case of segmented images with associated text. Learning the joint distribution of image regions and words has many applications. We consider in detail predicting words associated with whole images (auto-annotation) and corresponding to particular image regions (region naming). Auto-annotation might help organize and access large collections of images. Region naming is a model of object recognition as a process of translating image regions to words, much as one might translate from one language to another. Learning the relationships between image regions and semantic correlates (words) is an interesting example of multi-modal data mining, particularly because it is typically hard to apply data mining techniques to collections of images. We develop a number of models for the joint distribution of image regions and words, including several which explicitly learn the correspondence between regions and words. We study multi-modal and correspondence extensions to Hofmann’s hierarchical clustering/aspect model, a translation model adapted from statistical machine translation (Brown et al.), and a multi-modal extension to mixture of latent Dirichlet allocation (MoM-LDA). All models are assessed using a large collection of annotated images of real c ©2003 Kobus Barnard, Pinar Duygulu, David Forsyth, Nando de Freitas, David Blei and Michael Jordan. BARNARD, DUYGULU, FORSYTH, DE FREITAS, BLEI AND JORDAN scenes. We study in depth the difficult problem of measuring performance. For the annotation task, we look at prediction performance on held out data. We present three alternative measures, oriented toward different types of task. Measuring the performance of correspondence methods is harder, because one must determine whether a word has been placed on the right region of an image. We can use annotation performance as a proxy measure, but accurate measurement requires hand labeled data, and thus must occur on a smaller scale. We show results using both an annotation proxy, and manually labeled data.",
"title": ""
},
{
"docid": "94f23b8710342512c84da0c7ab9492d8",
"text": "Transferring knowledge across a sequence of related tasks is an important challenge in reinforcement learning. Despite much encouraging empirical evidence that shows benefits of transfer, there has been very little theoretical analysis. In this paper, we study a class of lifelong reinforcementlearning problems: the agent solves a sequence of tasks modeled as finite Markov decision processes (MDPs), each of which is from a finite set of MDPs with the same state/action spaces and different transition/reward functions. Inspired by the need for cross-task exploration in lifelong learning, we formulate a novel online discovery problem and give an optimal learning algorithm to solve it. Such results allow us to develop a new lifelong reinforcement-learning algorithm, whose overall sample complexity in a sequence of tasks is much smaller than that of single-task learning, with high probability, even if the sequence of tasks is generated by an adversary. Benefits of the algorithm are demonstrated in a simulated problem.",
"title": ""
},
{
"docid": "4661b378eda6cd44c95c40ebf06b066b",
"text": "Speech signal degradation in real environments mainly results from room reverberation and concurrent noise. While human listening is robust in complex auditory scenes, current speech segregation algorithms do not perform well in noisy and reverberant environments. We treat the binaural segregation problem as binary classification, and employ deep neural networks (DNNs) for the classification task. The binaural features of the interaural time difference and interaural level difference are used as the main auditory features for classification. The monaural feature of gammatone frequency cepstral coefficients is also used to improve classification performance, especially when interference and target speech are collocated or very close to one another. We systematically examine DNN generalization to untrained spatial configurations. Evaluations and comparisons show that DNN-based binaural classification produces superior segregation performance in a variety of multisource and reverberant conditions.",
"title": ""
},
{
"docid": "a00201271997f398ec8e5eb4160fbe2e",
"text": "We present a hybrid algorithm for detection and tracking of text in natural scenes that goes beyond the full-detection approaches in terms of time performance optimization. A state-of-the-art scene text detection module based on Maximally Stable Extremal Regions (MSER) is used to detect text asynchronously, while on a separate thread detected text objects are tracked by MSER propagation. The cooperation of these two modules yields real time video processing at high frame rates even on low-resource devices.",
"title": ""
},
{
"docid": "6620aa5b1ecaac765112f0f1f15ef920",
"text": "In this paper we present the tangible 3D tabletop and discuss the design potential of this novel interface. The tangible 3D tabletop combines tangible tabletop interaction with 3D projection in such a way that the tangible objects may be augmented with visual material corresponding to their physical shapes, positions, and orientation on the tabletop. In practice, this means that both the tabletop and the tangibles can serve as displays. We present the basic design principles for this interface, particularly concerning the interplay between 2D on the tabletop and 3D for the tangibles, and present examples of how this kind of interface might be used in the domain of maps and geolocalized data. We then discuss three central design considerations concerning 1) the combination and connection of content and functions of the tangibles and tabletop surface, 2) the use of tangibles as dynamic displays and input devices, and 3) the visual effects facilitated by the combination of the 2D tabletop surface and the 3D tangibles.",
"title": ""
},
{
"docid": "52d31aa77302bbf50fa193759f37d393",
"text": "Nonnegative matrix factorization (NMF) has been widely used for discovering physically meaningful latent components in audio signals to facilitate source separation. Most of the existing NMF algorithms require that the number of latent components is provided a priori, which is not always possible. In this paper, we leverage developments from the Bayesian nonparametrics and compressive sensing literature to propose a probabilistic Beta Process Sparse NMF (BP-NMF) model, which can automatically infer the proper number of latent components based on the data. Unlike previous models, BP-NMF explicitly assumes that these latent components are often completely silent. We derive a novel mean-field variational inference algorithm for this nonconjugate model and evaluate it on both synthetic data and real recordings on various tasks.",
"title": ""
},
{
"docid": "19cb14825c6654101af1101089b66e16",
"text": "Critical infrastructures, such as power grids and transportation systems, are increasingly using open networks for operation. The use of open networks poses many challenges for control systems. The classical design of control systems takes into account modeling uncertainties as well as physical disturbances, providing a multitude of control design methods such as robust control, adaptive control, and stochastic control. With the growing level of integration of control systems with new information technologies, modern control systems face uncertainties not only from the physical world but also from the cybercomponents of the system. The vulnerabilities of the software deployed in the new control system infrastructure will expose the control system to many potential risks and threats from attackers. Exploitation of these vulnerabilities can lead to severe damage as has been reported in various news outlets [1], [2]. More recently, it has been reported in [3] and [4] that a computer worm, Stuxnet, was spread to target Siemens supervisory control and data acquisition (SCADA) systems that are configured to control and monitor specific industrial processes.",
"title": ""
},
{
"docid": "c77a3fcd6c689a58a8eebfef9a89af70",
"text": "Previously, neural methods in grammatical error correction (GEC) did not reach state-ofthe-art results compared to phrase-based statistical machine translation (SMT) baselines. We demonstrate parallels between neural GEC and low-resource neural MT and successfully adapt several methods from low-resource MT to neural GEC. We further establish guidelines for trustable results in neural GEC and propose a set of model-independent methods for neural GEC that can be easily applied in most GEC settings. Proposed methods include adding source-side noise, domain-adaptation techniques, a GEC-specific training-objective, transfer learning with monolingual data, and ensembling of independently trained GEC models and language models. The combined effects of these methods result in better than state-of-the-art neural GEC models that outperform previously best neural GEC systems by more than 10% M on the CoNLL-2014 benchmark and 5.9% on the JFLEG test set. Non-neural state-of-the-art systems are outperformed by more than 2% on the CoNLL-2014 benchmark and by 4% on JFLEG.",
"title": ""
},
{
"docid": "e04bc357c145c38ed555b3c1fa85c7da",
"text": "This paper presents Hybrid (RSA & AES) encryption algorithm to safeguard data security in Cloud. Security being the most important factor in cloud computing has to be dealt with great precautions. This paper mainly focuses on the following key tasks: 1. Secure Upload of data on cloud such that even the administrator is unaware of the contents. 2. Secure Download of data in such a way that the integrity of data is maintained. 3. Proper usage and sharing of the public, private and secret keys involved for encryption and decryption. The use of a single key for both encryption and decryption is very prone to malicious attacks. But in hybrid algorithm, this problem is solved by the use of three separate keys each for encryption as well as decryption. Out of the three keys one is the public key, which is made available to all, the second one is the private key which lies only with the user. In this way, both the secure upload as well as secure download of the data is facilitated using the two respective keys. Also, the key generation technique used in this paper is unique in its own way. This has helped in avoiding any chances of repeated or redundant key.",
"title": ""
},
{
"docid": "2a7c77985e3fca58ee8a69dd9b6f36d2",
"text": "New types of machine learning hardware in development and entering the market hold the promise of revolutionizing deep learning in a manner as profound as GPUs. However, existing software frameworks and training algorithms for deep learning have yet to evolve to fully leverage the capability of the new wave of silicon. We already see the limitations of existing algorithms for models that exploit structured input via complex and instancedependent control flow, which prohibits minibatching. We present an asynchronous model-parallel (AMP) training algorithm that is specifically motivated by training on networks of interconnected devices. Through an implementation on multi-core CPUs, we show that AMP training converges to the same accuracy as conventional synchronous training algorithms in a similar number of epochs, but utilizes the available hardware more efficiently even for small minibatch sizes, resulting in significantly shorter overall training times. Our framework opens the door for scaling up a new class of deep learning models that cannot be efficiently trained today.",
"title": ""
},
{
"docid": "2af670323d2857cd79ac967bd71c61c1",
"text": "This paper describes a new architecture for synthetic aperture radar (SAR) automatic target recognition (ATR) based on the premise that the pose of the target is estimated within a high degree of precision. The advantage of our classifier design is that the input space complexity is decreased with the pose information, which enables fewer features to classify targets with a higher degree of accuracy. Moreover, the training of the classifier can be done discriminantely, which also improves performance and decreases the complexity of the classifier. Three strategies of learning and representation to build the pattern space and discriminant functions are compared: Vapnik's support vector machine (SVM), a newly developed quadratic mutual information (QMI) cost function for neural networks, and a principal component analysis extended recently with multi-resolution (PCA-M). Experimental results obtained in the MSTAR database show that the performance of our classifiers is better than that of standard template matching in the same dataset. We also rate the quality of the classifiers for detection using confusers, and show significant improvement in rejection.",
"title": ""
},
{
"docid": "1b7d19d41164bda14c688224cce700d5",
"text": "Urethral duplication is a rare congenital malformation affecting mainly boys. The authors report a case in a Cameroonian child who was diagnosed and managed at the Gynaeco-Obstetric and Paediatric Hospital, Yaounde. The malformation was characterized by the presence of an incontinent epispadic urethra and a normal apical urethra. We describe the difficulties faced in the management of this disorder in a developing country.",
"title": ""
},
{
"docid": "5636a228fea893cd48cebe15f72c0bb0",
"text": "A familicide is a multiple-victim homicide incident in which the killer’s spouse and one or more children are slain. National archives of Canadian and British homicides, containing 109 familicide incidents, permit some elucidation of the characteristic and epidemiology of this crime. Familicides were almost exclusively perpetrated by men, unlike other spouse-killings and other filicides. Half the familicidal men killed themselves as well, a much higher rate of suicide than among other uxoricidal or filicidal men. De facto unions were overrepresented, compared to their prevalence in the populations-atlarge, but to a much lesser extent in familicides than in other uxoricides. Stepchildren were overrepresented as familicide victims, compared to their numbers in the populations-at-large, but to a much lesser extent than in other filicides; unlike killers of their genetic offspring, men who killed their stepchildren were rarely suicidal. An initial binary categorization of familicides as accusatory versus despondent is tentatively proposed. @ 19% wiley-Liss, Inc.",
"title": ""
},
{
"docid": "0368fdfe05918134e62e0f7b106130ee",
"text": "Scientific charts are an effective tool to visualize numerical data trends. They appear in a wide range of contexts, from experimental results in scientific papers to statistical analyses in business reports. The abundance of scientific charts in the web has made it inevitable for search engines to include them as indexed content. However, the queries based on only the textual data used to tag the images can limit query results. Many studies exist to address the extraction of data from scientific diagrams in order to improve search results. In our approach to achieving this goal, we attempt to enhance the semantic labeling of the charts by using the original data values that these charts were designed to represent. In this paper, we describe a method to extract data values from a specific class of charts, bar charts. The extraction process is fully automated using image processing and text recognition techniques combined with various heuristics derived from the graphical properties of bar charts. The extracted information can be used to enrich the indexing content for bar charts and improve search results. We evaluate the effectiveness of our method on bar charts drawn from the web as well as charts embedded in digital documents.",
"title": ""
},
{
"docid": "e0d553cc4ca27ce67116c62c49c53d23",
"text": "We estimate a vehicle's speed, its wheelbase length, and tire track length by jointly estimating its acoustic wave pattern with a single passive acoustic sensor that records the vehicle's drive-by noise. The acoustic wave pattern is determined using the vehicle's speed, the Doppler shift factor, the sensor's distance to the vehicle's closest-point-of-approach, and three envelope shape (ES) components, which approximate the shape variations of the received signal's power envelope. We incorporate the parameters of the ES components along with estimates of the vehicle engine RPM, the number of cylinders, and the vehicle's initial bearing, loudness and speed to form a vehicle profile vector. This vector provides a fingerprint that can be used for vehicle identification and classification. We also provide possible reasons why some of the existing methods are unable to provide unbiased vehicle speed estimates using the same framework. The approach is illustrated using vehicle speed estimation and classification results obtained with field data.",
"title": ""
},
{
"docid": "262302228a88025660c0add90d500518",
"text": "Social network analysis provides meaningful information about behavior of network members that can be used for diverse applications such as classification, link prediction. However, network analysis is computationally expensive because of feature learning for different applications. In recent years, many researches have focused on feature learning methods in social networks. Network embedding represents the network in a lower dimensional representation space with the same properties which presents a compressed representation of the network. In this paper, we introduce a novel algorithm named “CARE” for network embedding that can be used for different types of networks including weighted, directed and complex. Current methods try to preserve local neighborhood information of nodes, whereas the proposed method utilizes local neighborhood and community information of network nodes to cover both local and global structure of social networks. CARE builds customized paths, which are consisted of local and global structure of network nodes, as a basis for network embedding and uses the Skip-gram model to learn representation vector of nodes. Subsequently, stochastic gradient descent is applied to optimize our objective function and learn the final representation of nodes. Our method can be scalable when new nodes are appended to network without information loss. Parallelize generation of customized random walks is also used for speeding up CARE. We evaluate the performance of CARE on multi label classification and link prediction tasks. Experimental results on various networks indicate that the proposed method outperforms others in both Micro and Macro-f1 measures for different size of training data.",
"title": ""
},
{
"docid": "e86ce9f0a1beb982f8358930e8ef776d",
"text": "We study the function g(n, y) := i≤n P (i)≤y gcd(i, n), where P (n) denotes the largest prime factor of n, and we derive some estimates for its summatory function.",
"title": ""
},
{
"docid": "9953909d2e520abf8227fd9025260d55",
"text": "Silicones are used in the plastics industry as additives for improving the processing and surface properties of plastics, as well as the rubber phase in a novel family of thermoplastic vulcanizate (TPV) materials. As additives, silicones, and in particular polydimethylsiloxane (PDMS), are used to improve mold filling, surface appearance, mold release, surface lubricity and wear resistance. As the rubber portion of a TPV, the cross-linked silicone rubber imparts novel properties, such as lower hardness, reduced coefficient of friction and improved low and high temperature properties.",
"title": ""
},
{
"docid": "1839d9e6ef4bad29381105f0a604b731",
"text": "Our focus is on the effects that dated ideas about the nature of science (NOS) have on curriculum, instruction and assessments. First we examine historical developments in teaching about NOS, beginning with the seminal ideas of James Conant. Next we provide an overview of recent developments in philosophy and cognitive sciences that have shifted NOS characterizations away from general heuristic principles toward cognitive and social elements. Next, we analyze two alternative views regarding ‘explicitly teaching’ NOS in pre-college programs. Version 1 is grounded in teachers presenting ‘Consensus-based Heuristic Principles’ in science lessons and activities. Version 2 is grounded in learners experience of ‘Building and Refining Model-Based Scientific Practices’ in critique and communication enactments that occur in longer immersion units and learning progressions. We argue that Version 2 is to be preferred over Version 1 because it develops the critical epistemic cognitive and social practices that scientists and science learners use when (1) developing and evaluating scientific evidence, explanations and knowledge and (2) critiquing and communicating scientific ideas and information; thereby promoting science literacy. 1 NOS and Science Education When and how did knowledge about science, as opposed to scientific content knowledge, become a targeted outcome of science education? From a US perspective, the decades of interest are the 1940s and 1950s when two major post-war developments in science education policy initiatives occurred. The first, in post secondary education, was the GI Bill An earlier version of this paper was presented as a plenary session by the first author at the ‘How Science Works—And How to Teach It’ workshop, Aarhus University, 23–25 June, 2011, Denmark. R. A. Duschl (&) The Pennsylvania State University, University Park, PA, USA e-mail: rad19@psu.edu R. Grandy Rice University, Houston, TX, USA 123 Sci & Educ DOI 10.1007/s11191-012-9539-4",
"title": ""
}
] |
scidocsrr
|
17a6c77c9c98ac4baca278b03b0b58c0
|
URLNet: Learning a URL Representation with Deep Learning for Malicious URL Detection
|
[
{
"docid": "2af711baba40a79b259c8d9c1f14518c",
"text": "Twitter can suffer from malicious tweets containing suspicious URLs for spam, phishing, and malware distribution. Previous Twitter spam detection schemes have used account features such as the ratio of tweets containing URLs and the account creation date, or relation features in the Twitter graph. Malicious users, however, can easily fabricate account features. Moreover, extracting relation features from the Twitter graph is time and resource consuming. Previous suspicious URL detection schemes have classified URLs using several features including lexical features of URLs, URL redirection, HTML content, and dynamic behavior. However, evading techniques exist, such as time-based evasion and crawler evasion. In this paper, we propose WARNINGBIRD, a suspicious URL detection system for Twitter. Instead of focusing on the landing pages of individual URLs in each tweet, we consider correlated redirect chains of URLs in a number of tweets. Because attackers have limited resources and thus have to reuse them, a portion of their redirect chains will be shared. We focus on these shared resources to detect suspicious URLs. We have collected a large number of tweets from the Twitter public timeline and trained a statistical classifier with features derived from correlated URLs and tweet context information. Our classifier has high accuracy and low false-positive and falsenegative rates. We also present WARNINGBIRD as a realtime system for classifying suspicious URLs in the Twitter stream. ∗This research was supported by the MKE (The Ministry of Knowledge Economy), Korea, under the ITRC (Information Technology Research Center) support program supervised by the NIPA (National IT Industry Promotion Agency) (NIPA-2011-C1090-1131-0009) and World Class University program funded by the Ministry of Education, Science and Technology through the National Research Foundation of Korea(R31-10100).",
"title": ""
},
{
"docid": "da7d45d2cbac784d31e4d3957f4799e6",
"text": "Malicious Uniform Resource Locator (URL) detection is an important problem in web search and mining, which plays a critical role in internet security. In literature, many existing studies have attempted to formulate the problem as a regular supervised binary classification task, which typically aims to optimize the prediction accuracy. However, in a real-world malicious URL detection task, the ratio between the number of malicious URLs and legitimate URLs is highly imbalanced, making it very inappropriate for simply optimizing the prediction accuracy. Besides, another key limitation of the existing work is to assume a large amount of training data is available, which is impractical as the human labeling cost could be potentially quite expensive. To solve these issues, in this paper, we present a novel framework of Cost-Sensitive Online Active Learning (CSOAL), which only queries a small fraction of training data for labeling and directly optimizes two cost-sensitive measures to address the class-imbalance issue. In particular, we propose two CSOAL algorithms and analyze their theoretical performance in terms of cost-sensitive bounds. We conduct an extensive set of experiments to examine the empirical performance of the proposed algorithms for a large-scale challenging malicious URL detection task, in which the encouraging results showed that the proposed technique by querying an extremely small-sized labeled data (about 0.5% out of 1-million instances) can achieve better or highly comparable classification performance in comparison to the state-of-the-art cost-insensitive and cost-sensitive online classification algorithms using a huge amount of labeled data.",
"title": ""
}
] |
[
{
"docid": "ab5f79671bcd56a733236b089bd5e955",
"text": "Conversational modeling is an important task in natural language processing as well as machine learning. Like most important tasks, it’s not easy. Previously, conversational models have been focused on specific domains, such as booking hotels or recommending restaurants. They were built using hand-crafted rules, like ChatScript [11], a popular rule-based conversational model. In 2014, the sequence to sequence model being used for translation opened the possibility of phrasing dialogues as a translation problem: translating from an utterance to its response. The systems built using this principle, while conversing fairly fluently, aren’t very convincing because of their lack of personality and inconsistent persona [10] [5]. In this paper, we experiment building open-domain response generator with personality and identity. We built chatbots that imitate characters in popular TV shows: Barney from How I Met Your Mother, Sheldon from The Big Bang Theory, Michael from The Office, and Joey from Friends. A successful model of this kind can have a lot of applications, such as allowing people to speak with their favorite celebrities, creating more life-like AI assistants, or creating virtual alter-egos of ourselves. The model was trained end-to-end without any hand-crafted rules. The bots talk reasonably fluently, have distinct personalities, and seem to have learned certain aspects of their identity. The results of standard automated translation model evaluations yielded very low scores. However, we designed an evaluation metric with a human judgment element, for which the chatbots performed well. We are able to show that for a bot’s response, a human is more than 50% likely to believe that the response actually came from the real character. Keywords—Seq2seq, attentional mechanism, chatbot, dialogue system.",
"title": ""
},
{
"docid": "d7310e830f85541aa1d4b94606c1be0c",
"text": "We present a practical framework to automatically detect shadows in real world scenes from a single photograph. Previous works on shadow detection put a lot of effort in designing shadow variant and invariant hand-crafted features. In contrast, our framework automatically learns the most relevant features in a supervised manner using multiple convolutional deep neural networks (ConvNets). The 7-layer network architecture of each ConvNet consists of alternating convolution and sub-sampling layers. The proposed framework learns features at the super-pixel level and along the object boundaries. In both cases, features are extracted using a context aware window centered at interest points. The predicted posteriors based on the learned features are fed to a conditional random field model to generate smooth shadow contours. Our proposed framework consistently performed better than the state-of-the-art on all major shadow databases collected under a variety of conditions.",
"title": ""
},
{
"docid": "7f76401d9d635460bde256bbd6c8e84e",
"text": "This article presents a review of the methods used in recognition and analysis of the human gait from three different approaches: image processing, floor sensors and sensors placed on the body. Progress in new technologies has led the development of a series of devices and techniques which allow for objective evaluation, making measurements more efficient and effective and providing specialists with reliable information. Firstly, an introduction of the key gait parameters and semi-subjective methods is presented. Secondly, technologies and studies on the different objective methods are reviewed. Finally, based on the latest research, the characteristics of each method are discussed. 40% of the reviewed articles published in late 2012 and 2013 were related to non-wearable systems, 37.5% presented inertial sensor-based systems, and the remaining 22.5% corresponded to other wearable systems. An increasing number of research works demonstrate that various parameters such as precision, conformability, usability or transportability have indicated that the portable systems based on body sensors are promising methods for gait analysis.",
"title": ""
},
{
"docid": "3380a9a220e553d9f7358739e3f28264",
"text": "We present a multi-instance object segmentation algorithm to tackle occlusions. As an object is split into two parts by an occluder, it is nearly impossible to group the two separate regions into an instance by purely bottomup schemes. To address this problem, we propose to incorporate top-down category specific reasoning and shape prediction through exemplars into an intuitive energy minimization framework. We perform extensive evaluations of our method on the challenging PASCAL VOC 2012 segmentation set. The proposed algorithm achieves favorable results on the joint detection and segmentation task against the state-of-the-art method both quantitatively and qualitatively.",
"title": ""
},
{
"docid": "26142d27adc7a682d7e6698532578811",
"text": "X-ray imaging has been developed not only for its use in medical imaging for human beings, but also for materials or objects, where the aim is to analyze (nondestructively) those inner parts that are undetectable to the naked eye. Thus, X-ray testing is used to determine if a test object deviates from a given set of specifications. Typical applications are analysis of food products, screening of baggage, inspection of automotive parts, and quality control of welds. In order to achieve efficient and effective X-ray testing, automated and semi-automated systems are being developed to execute this task. In this paper, we present a general overview of computer vision methodologies that have been used in X-ray testing. In addition, we review some techniques that have been applied in certain relevant applications, and we introduce a public database of X-ray images that can be used for testing and evaluation of image analysis and computer vision algorithms. Finally, we conclude that the following: that there are some areas -like casting inspection- where automated systems are very effective, and other application areas -such as baggage screening- where human inspection is still used, there are certain application areas -like weld and cargo inspections- where the process is semi-automatic, and there is some research in areas -including food analysis- where processes are beginning to be characterized by the use of X-ray imaging.",
"title": ""
},
{
"docid": "69d296d1302d9e0acd7fb576f551118d",
"text": "Event detection is a research area that attracted attention during the last years due to the widespread availability of social media data. The problem of event detection has been examined in multiple social media sources like Twitter, Flickr, YouTube and Facebook. The task comprises many challenges including the processing of large volumes of data and high levels of noise. In this article, we present a wide range of event detection algorithms, architectures and evaluation methodologies. In addition, we extensively discuss on available datasets, potential applications and open research issues. The main objective is to provide a compact representation of the recent developments in the field and aid the reader in understanding the main challenges tackled so far as well as identifying interesting future research directions.",
"title": ""
},
{
"docid": "ae80dd046027bcefc8aaa6d4d3a06f59",
"text": "We present the results of an evaluation of the performance of the Leap Motion Controller with the aid of a professional, high-precision, fast motion tracking system. A set of static and dynamic measurements was performed with different numbers of tracking objects and configurations. For the static measurements, a plastic arm model simulating a human arm was used. A set of 37 reference locations was selected to cover the controller's sensory space. For the dynamic measurements, a special V-shaped tool, consisting of two tracking objects maintaining a constant distance between them, was created to simulate two human fingers. In the static scenario, the standard deviation was less than 0.5 mm. The linear correlation revealed a significant increase in the standard deviation when moving away from the controller. The results of the dynamic scenario revealed the inconsistent performance of the controller, with a significant drop in accuracy for samples taken more than 250 mm above the controller's surface. The Leap Motion Controller undoubtedly represents a revolutionary input device for gesture-based human-computer interaction; however, due to its rather limited sensory space and inconsistent sampling frequency, in its current configuration it cannot currently be used as a professional tracking system.",
"title": ""
},
{
"docid": "799573bf08fb91b1ac644c979741e7d2",
"text": "This short paper reports the method and the evaluation results of Casio and Shinshu University joint team for the ISBI Challenge 2017 – Skin Lesion Analysis Towards Melanoma Detection – Part 3: Lesion Classification hosted by ISIC. Our online validation score was 0.958 with melanoma classifier AUC 0.924 and seborrheic keratosis classifier AUC 0.993.",
"title": ""
},
{
"docid": "955ae6e1dffbe580217b812f943b4339",
"text": "Successful applications of reinforcement learning in realworld problems often require dealing with partially observable states. It is in general very challenging to construct and infer hidden states as they often depend on the agent’s entire interaction history and may require substantial domain knowledge. In this work, we investigate a deep-learning approach to learning the representation of states in partially observable tasks, with minimal prior knowledge of the domain. In particular, we study reinforcement learning with deep neural networks, including RNN and LSTM, which are equipped with the desired property of being able to capture long-term dependency on history, and thus providing an effective way of learning the representation of hidden states. We further develop a hybrid approach that combines the strength of both supervised learning (for representing hidden states) and reinforcement learning (for optimizing control) with joint training. Extensive experiments based on a KDD Cup 1998 direct mailing campaign problem demonstrate the effectiveness and advantages of the proposed approach, which performs the best across the board.",
"title": ""
},
{
"docid": "0df56ee771c5ddaafd01f63a151b11fe",
"text": "Genes play a central role in all biological processes. DNA microarray technology has made it possible to study the expression behavior of thousands of genes in one go. Often, gene expression data is used to generate features for supervised and unsupervised learning tasks. At the same time, advances in the field of deep learning have made available a plethora of architectures. In this paper, we use deep architectures pre-trained in an unsupervised manner using denoising autoencoders as a preprocessing step for a popular unsupervised learning task. Denoising autoencoders (DA) can be used to learn a compact representation of input, and have been used to generate features for further supervised learning tasks. We propose that our deep architectures can be treated as empirical versions of Deep Belief Networks (DBNs). We use our deep architectures to regenerate gene expression time series data for two different data sets. We test our hypothesis on two popular datasets for the unsupervised learning task of clustering and find promising improvements in performance.",
"title": ""
},
{
"docid": "5a11ab9ece5295d4d1d16401625ab3d4",
"text": "The hardware implementation of deep neural networks (DNNs) has recently received tremendous attention since many applications require high-speed operations. However, numerous processing elements and complex interconnections are usually required, leading to a large area occupation and a high power consumption. Stochastic computing has shown promising results for area-efficient hardware implementations, even though existing stochastic algorithms require long streams that exhibit long latency. In this paper, we propose an integer form of stochastic computation and introduce some elementary circuits. We then propose an efficient implementation of a DNN based on integral stochastic computing. The proposed architecture uses integer stochastic streams and a modified Finite State Machine-based tanh function to improve the performance and reduce the latency compared to existing stochastic architectures for DNN. The simulation results show the negligible performance loss of the proposed integer stochastic DNN for different network sizes compared to their floating point versions.",
"title": ""
},
{
"docid": "2abd75766d4875921edd4d6d63d5d617",
"text": "Wireless sensor networks typically consist of a large number of sensor nodes embedded in a physical space. Such sensors are low-power devices that are primarily used for monitoring several physical phenomena, potentially in remote harsh environments. Spatial and temporal dependencies between the readings at these nodes highly exist in such scenarios. Statistical contextual information encodes these spatio-temporal dependencies. It enables the sensors to locally predict their current readings based on their own past readings and the current readings of their neighbors. In this paper, we introduce context-aware sensors. Specifically, we propose a technique for modeling and learning statistical contextual information in sensor networks. Our approach is based on Bayesian classifiers; we map the problem of learning and utilizing contextual information to the problem of learning the parameters of a Bayes classifier, and then making inferences, respectively. We propose a scalable and energy-efficient procedure for online learning of these parameters in-network, in a distributed fashion. We discuss applications of our approach in discovering outliers and detection of faulty sensors, approximation of missing values, and in-network sampling. We experimentally analyze our approach in two applications, tracking and monitoring.",
"title": ""
},
{
"docid": "198ad1ba78ac0aa315dac6f5730b4f88",
"text": "Life history theory posits that behavioral adaptation to various environmental (ecological and/or social) conditions encountered during childhood is regulated by a wide variety of different traits resulting in various behavioral strategies. Unpredictable and harsh conditions tend to produce fast life history strategies, characterized by early maturation, a higher number of sexual partners to whom one is less attached, and less parenting of offspring. Unpredictability and harshness not only affects dispositional social and emotional functioning, but may also promote the development of personality traits linked to higher rates of instability in social relationships or more self-interested behavior. Similarly, detrimental childhood experiences, such as poor parental care or high parent-child conflict, affect personality development and may create a more distrustful, malicious interpersonal style. The aim of this brief review is to survey and summarize findings on the impact of negative early-life experiences on the development of personality and fast life history strategies. By demonstrating that there are parallels in adaptations to adversity in these two domains, we hope to lend weight to current and future attempts to provide a comprehensive insight of personality traits and functions at the ultimate and proximate levels.",
"title": ""
},
{
"docid": "9eccf674ee3b3826b010bc142ed24ef0",
"text": "We present an architecture of a recurrent neural network (RNN) with a fullyconnected deep neural network (DNN) as its feature extractor. The RNN is equipped with both causal temporal prediction and non-causal look-ahead, via auto-regression (AR) and moving-average (MA), respectively. The focus of this paper is a primal-dual training method that formulates the learning of the RNN as a formal optimization problem with an inequality constraint that provides a sufficient condition for the stability of the network dynamics. Experimental results demonstrate the effectiveness of this new method, which achieves 18.86% phone recognition error on the TIMIT benchmark for the core test set. The result approaches the best result of 17.7%, which was obtained by using RNN with long short-term memory (LSTM). The results also show that the proposed primal-dual training method produces lower recognition errors than the popular RNN methods developed earlier based on the carefully tuned threshold parameter that heuristically prevents the gradient from exploding.",
"title": ""
},
{
"docid": "3b2c9aebbf8f08b08b7630661f8ccfe7",
"text": "This study investigated the convergent, discriminant, and incremental validity of one ability test of emotional intelligence (EI)--the Mayer-Salovey-Caruso-Emotional Intelligence Test (MSCEIT)--and two self-report measures of EI--the Emotional Quotient Inventory (EQ-i) and the self-report EI test (SREIT). The MSCEIT showed minimal relations to the EQ-i and SREIT, whereas the latter two measures were moderately interrelated. Among EI measures, the MSCEIT was discriminable from well-studied personality and well-being measures, whereas the EQ-i and SREIT shared considerable variance with these measures. After personality and verbal intelligence were held constant, the MSCEIT was predictive of social deviance, the EQ-i was predictive of alcohol use, and the SREIT was inversely related to academic achievement. In general, results showed that ability EI and self-report EI are weakly related and yield different measurements of the same person.",
"title": ""
},
{
"docid": "f6899520472f9a5513ca5d1e0c16ad7c",
"text": "The high volume of monitoring information generated by large-scale cloud infrastructures poses a challenge to the capacity of cloud providers in detecting anomalies in the infrastructure. Traditional anomaly detection methods are resource-intensive and computationally complex for training and/or detection, what is undesirable in very dynamic and large-scale environment such as clouds. Isolation-based methods have the advantage of low complexity for training and detection and are optimized for detecting failures. In this work, we explore the feasibility of Isolation Forest, an isolation-based anomaly detection method, to detect anomalies in large-scale cloud data centers. We propose a method to code time-series information as extra attributes that enable temporal anomaly detection and establish its feasibility to adapt to seasonality and trends in the time-series and to be applied on-line and in real-time. Copyright c © 2017 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "fb4fcc4d5380c4123b24467c1ca2a8e3",
"text": "Deep neural networks are traditionally trained using humandesigned stochastic optimization algorithms, such as SGD and Adam. Recently, the approach of learning to optimize network parameters has emerged as a promising research topic. However, these learned black-box optimizers sometimes do not fully utilize the experience in human-designed optimizers, therefore have limitation in generalization ability. In this paper, a new optimizer, dubbed as HyperAdam, is proposed that combines the idea of “learning to optimize” and traditional Adam optimizer. Given a network for training, its parameter update in each iteration generated by HyperAdam is an adaptive combination of multiple updates generated by Adam with varying decay rates. The combination weights and decay rates in HyperAdam are adaptively learned depending on the task. HyperAdam is modeled as a recurrent neural network with AdamCell, WeightCell and StateCell. It is justified to be state-of-the-art for various network training, such as multilayer perceptron, CNN and LSTM.",
"title": ""
},
{
"docid": "7515938d82cf5f9e6682cdf4793ac27d",
"text": "Glioblastoma is an immunosuppressive, fatal brain cancer that contains glioblastoma stem-like cells (GSCs). Oncolytic herpes simplex virus (oHSV) selectively replicates in cancer cells while inducing anti-tumor immunity. oHSV G47Δ expressing murine IL-12 (G47Δ-mIL12), antibodies to immune checkpoints (CTLA-4, PD-1, PD-L1), or dual combinations modestly extended survival of a mouse glioma model. However, the triple combination of anti-CTLA-4, anti-PD-1, and G47Δ-mIL12 cured most mice in two glioma models. This treatment was associated with macrophage influx and M1-like polarization, along with increased T effector to T regulatory cell ratios. Immune cell depletion studies demonstrated that CD4+ and CD8+ T cells as well as macrophages are required for synergistic curative activity. This combination should be translatable to the clinic and other immunosuppressive cancers.",
"title": ""
},
{
"docid": "60c03017f7254c28ba61348d301ae612",
"text": "Code flaws or vulnerabilities are prevalent in software systems and can potentially cause a variety of problems including deadlock, information loss, or system failure. A variety of approaches have been developed to try and detect the most likely locations of such code vulnerabilities in large code bases. Most of them rely on manually designing features (e.g. complexity metrics or frequencies of code tokens) that represent the characteristics of the code. However, all suffer from challenges in sufficiently capturing both semantic and syntactic representation of source code, an important capability for building accurate prediction models. In this paper, we describe a new approach, built upon the powerful deep learning Long Short Term Memory model, to automatically learn both semantic and syntactic features in code. Our evaluation on 18 Android applications demonstrates that the prediction power obtained from our learned features is equal or even superior to what is achieved by state of the art vulnerability prediction models: 3%–58% improvement for within-project prediction and 85% for cross-project prediction.",
"title": ""
},
{
"docid": "3bc897662b39bcd59b7c7831fb1df091",
"text": "The proliferation of wearable devices has contributed to the emergence of mobile crowdsensing, which leverages the power of the crowd to collect and report data to a third party for large-scale sensing and collaborative learning. However, since the third party may not be honest, privacy poses a major concern. In this paper, we address this concern with a two-stage privacy-preserving scheme called RG-RP: the first stage is designed to mitigate maximum a posteriori (MAP) estimation attacks by perturbing each participant's data through a nonlinear function called repeated Gompertz (RG); while the second stage aims to maintain accuracy and reduce transmission energy by projecting high-dimensional data to a lower dimension, using a row-orthogonal random projection (RP) matrix. The proposed RG-RP scheme delivers better recovery resistance to MAP estimation attacks than most state-of-the-art techniques on both synthetic and real-world datasets. For collaborative learning, we proposed a novel LSTM-CNN model combining the merits of Long Short-Term Memory (LSTM) and Convolutional Neural Networks (CNN). Our experiments on two representative movement datasets captured by wearable sensors demonstrate that the proposed LSTM-CNN model outperforms standalone LSTM, CNN and Deep Belief Network. Together, RG+RP and LSTM-CNN provide a privacy-preserving collaborative learning framework that is both accurate and privacy-preserving.",
"title": ""
}
] |
scidocsrr
|
d3c9ae59fe1571fe921fa5369dfd3aa8
|
Interactive cluster analysis of diverse types of spatiotemporal data
|
[
{
"docid": "56d8fe382c30c19b8b700a2509e0edd8",
"text": "In exploratory data analysis, the choice of tools depends on the data to be analyzed and the analysis tasks, i.e. the questions to be answered. The same applies to design of new analysis tools. In this paper, we consider a particular type of data: data that describe transient events having spatial and temporal references, such as earthquakes, traffic incidents, or observations of rare plants or animals. We focus on the task of detecting spatio-temporal patterns in event occurrences. We demonstrate the insufficiency of the existing techniques and approaches to event exploration and substantiate the need in a new exploratory tool. The technique of space-time cube, which has been earlier proposed for the visualization of movement in geographical space, possesses the required properties. However, it must be implemented so as to allow particular interactive manipulations: changing the viewing perspective, temporal focusing, and dynamic linking with a map display through simultaneous highlighting of corresponding symbols. We describe our implementation of the space-time cube technique and demonstrate by an example how it can be used for detecting spatio-temporal clusters of events.",
"title": ""
}
] |
[
{
"docid": "4f7bcfbbc49a974ebb1c58d35e8c7f99",
"text": "Several studies have highlighted that the IEEE 802.15.4 standard presents a number of limitations such as low reliability, unbounded packet delays and no protection against interference/fading, that prevent its adoption in applications with stringent requirements in terms of reliability and latency. Recently, the IEEE has released the 802.15.4e amendment that introduces a number of enhancements/modifications to the MAC layer of the original standard in order to overcome such limitations. In this paper we provide a clear and structured overview of all the new 802.15.4e mechanisms. After a general introduction to the 802.15.4e standard, we describe the details of the main 802.15.4e MAC behavior modes, namely Time Slotted Channel Hopping (TSCH), Deterministic and Synchronous Multichannel Extension (DSME), and Low Latency Deterministic Network (LLDN). For each of them, we provide a detailed description and highlight the main features and possible application domains. Also, we survey the current literature and summarize open research issues.",
"title": ""
},
{
"docid": "2b6b1fef68dede7066dddb4b111e1828",
"text": "Collecting labeling information of time-to-event analysis is naturally very time consuming, i.e., one has to wait for the occurrence of the event of interest, which may not always be observed for every instance. By taking advantage of censored instances, survival analysis methods internally consider more samples than standard regression methods, which partially alleviates this data insufficiency problem. Whereas most existing survival analysis models merely focus on a single survival prediction task, when there are multiple related survival prediction tasks, we may benefit from the tasks relatedness. Simultaneously learning multiple related tasks, multi-task learning (MTL) provides a paradigm to alleviate data insufficiency by bridging data from all tasks and improves generalization performance of all tasks involved. Even though MTL has been extensively studied, there is no existing work investigating MTL for survival analysis. In this paper, we propose a novel multi-task survival analysis framework that takes advantage of both censored instances and task relatedness. Specifically, based on two common used task relatedness assumptions, i.e., low-rank assumption and cluster structure assumption, we formulate two concrete models, COX-TRACE and COX-cCMTL, under the proposed framework, respectively. We develop efficient algorithms and demonstrate the performance of the proposed multi-task survival analysis models on the The Cancer Genome Atlas (TCGA) dataset. Our results show that the proposed approaches can significantly improve the prediction performance in survival analysis and can also discover some inherent relationships among different cancer types.",
"title": ""
},
{
"docid": "e31fd6ce6b78a238548e802d21b05590",
"text": "Machine learning techniques have long been used for various purposes in software engineering. This paper provides a brief overview of the state of the art and reports on a number of novel applications I was involved with in the area of software testing. Reflecting on this personal experience, I draw lessons learned and argue that more research should be performed in that direction as machine learning has the potential to significantly help in addressing some of the long-standing software testing problems.",
"title": ""
},
{
"docid": "9404d1fd58dbd1d83c2d503e54ffd040",
"text": "This work examines the association between the Big Five personality dimensions, the most relevant demographic factors (sex, age and relationship status), and subjective well-being. A total of 236 nursing professionals completed the NEO Five Factor Inventory (NEO-FFI) and the Affect-Balance Scale (ABS). Regression analysis showed personality as one of the most important correlates of subjective well-being, especially through Extraversion and Neuroticism. There was a positive association between Openness to experience and the positive and negative components of affect. Likewise, the most basic demographic variables (sex, age and relationship status) are found to be differentially associated with the different elements of subjective well-being, and the explanation for these associations is highly likely to be found in the links between demographic variables and personality. In the same way as control of the effect of demographic variables is necessary for isolating the effect of personality on subjective well-being, control of personality should permit more accurate analysis of the role of demographic variables in relation to the subjective well-being construct. 2004 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "c2425f6cf45858f04bf51d7066e244e1",
"text": "Healthcare is a natural arena for the application of machine learning, especially as modern electronic health records (EHRs) provide increasingly large amounts of data to answer clinically meaningful questions. However, clinical data and practice present unique challenges that complicate the use of common methodologies. This article serves as a primer on addressing these challenges and highlights opportunities for members of the machine learning and data science communities to contribute to this growing domain.",
"title": ""
},
{
"docid": "7d23d8d233a3fc7ff75edf361acbe642",
"text": "The diagnosis and treatment of chronic patellar instability caused by trochlear dysplasia can be challenging. A dysplastic trochlea leads to biomechanical and kinematic changes that often require surgical correction when symptomatic. In the past, trochlear dysplasia was classified using the 4-part Dejour classification system. More recently, new classification systems have been proposed. Future studies are needed to investigate long-term outcomes after trochleoplasty.",
"title": ""
},
{
"docid": "6cd9df79a38656597b124b139746462e",
"text": "Load balancing is a technique which allows efficient parallelization of irregular workloads, and a key component of many applications and parallelizing runtimes. Work-stealing is a popular technique for implementing load balancing, where each parallel thread maintains its own work set of items and occasionally steals items from the sets of other threads.\n The conventional semantics of work stealing guarantee that each inserted task is eventually extracted exactly once. However, correctness of a wide class of applications allows for relaxed semantics, because either: i) the application already explicitly checks that no work is repeated or ii) the application can tolerate repeated work.\n In this paper, we introduce idempotent work tealing, and present several new algorithms that exploit the relaxed semantics to deliver better performance. The semantics of the new algorithms guarantee that each inserted task is eventually extracted at least once-instead of exactly once.\n On mainstream processors, algorithms for conventional work stealing require special atomic instructions or store-load memory ordering fence instructions in the owner's critical path operations. In general, these instructions are substantially slower than regular memory access instructions. By exploiting the relaxed semantics, our algorithms avoid these instructions in the owner's operations.\n We evaluated our algorithms using common graph problems and micro-benchmarks and compared them to well-known conventional work stealing algorithms, the THE Cilk and Chase-Lev algorithms. We found that our best algorithm (with LIFO extraction) outperforms existing algorithms in nearly all cases, and often by significant margins.",
"title": ""
},
{
"docid": "9218a87b0fba92874e5f7917c925843a",
"text": "For sophisticated reinforcement learning (RL) systems to interact usefully with real-world environments, we need to communicate complex goals to these systems. In this work, we explore goals defined in terms of (non-expert) human preferences between pairs of trajectory segments. We show that this approach can effectively solve complex RL tasks without access to the reward function, including Atari games and simulated robot locomotion, while providing feedback on less than 1% of our agent’s interactions with the environment. This reduces the cost of human oversight far enough that it can be practically applied to state-of-the-art RL systems. To demonstrate the flexibility of our approach, we show that we can successfully train complex novel behaviors with about an hour of human time. These behaviors and environments are considerably more complex than any which have been previously learned from human feedback.",
"title": ""
},
{
"docid": "a4dd8ab8b45a8478ca4ac7e19debf777",
"text": "Most sensory, cognitive and motor functions depend on the interactions of many neurons. In recent years, there has been rapid development and increasing use of technologies for recording from large numbers of neurons, either sequentially or simultaneously. A key question is what scientific insight can be gained by studying a population of recorded neurons beyond studying each neuron individually. Here, we examine three important motivations for population studies: single-trial hypotheses requiring statistical power, hypotheses of population response structure and exploratory analyses of large data sets. Many recent studies have adopted dimensionality reduction to analyze these populations and to find features that are not apparent at the level of individual neurons. We describe the dimensionality reduction methods commonly applied to population activity and offer practical advice about selecting methods and interpreting their outputs. This review is intended for experimental and computational researchers who seek to understand the role dimensionality reduction has had and can have in systems neuroscience, and who seek to apply these methods to their own data.",
"title": ""
},
{
"docid": "44d8cb42bd4c2184dc226cac3adfa901",
"text": "Several descriptions of redundancy are presented in the literature , often from widely dif ferent perspectives . Therefore , a discussion of these various definitions and the salient points would be appropriate . In particular , any definition and redundancy needs to cover the following issues ; the dif ference between multiple solutions and an infinite number of solutions ; degenerate solutions to inverse kinematics ; task redundancy ; and the distinction between non-redundant , redundant and highly redundant manipulators .",
"title": ""
},
{
"docid": "cca61271fe31513cb90c2ac7ecb0b708",
"text": "This paper deals with the synthesis of fuzzy state feedback controller of induction motor with optimal performance. First, the Takagi-Sugeno (T-S) fuzzy model is employed to approximate a non linear system in the synchronous d-q frame rotating with electromagnetic field-oriented. Next, a fuzzy controller is designed to stabilise the induction motor and guaranteed a minimum disturbance attenuation level for the closed-loop system. The gains of fuzzy control are obtained by solving a set of Linear Matrix Inequality (LMI). Finally, simulation results are given to demonstrate the controller’s effectiveness. Keywords—Rejection disturbance, fuzzy modelling, open-loop control, Fuzzy feedback controller, fuzzy observer, Linear Matrix Inequality (LMI)",
"title": ""
},
{
"docid": "3e7e4b5c2a73837ac5fa111a6dc71778",
"text": "Merging the best features of RBAC and attribute-based systems can provide effective access control for distributed and rapidly changing applications.",
"title": ""
},
{
"docid": "b9e299d5909211488bd7b068d22df718",
"text": "Fast alignment is essential for many natural language tasks. But in the setting of monolingual alignment, previous work has not been able to align more than one sentence pair per second. We describe a discriminatively trained monolingual word aligner that uses a Conditional Random Field to globally decode the best alignment with features drawn from source and target sentences. Using just part-of-speech tags and WordNet as external resources, our aligner gives state-of-the-art result, while being an order-of-magnitude faster than the previous best performing system.",
"title": ""
},
{
"docid": "df3e7333a8eac87bc828bd80e8f72ace",
"text": "In this paper, we propose a multimodal biometrics system that combines fingerprint and palmprint features to overcome several limitations of unimodal biometrics—such as the inability to tolerate noise, distorted data and etc.—and thus able to improve the performance of biometrics for personal verification. The quality of fingerprint and palmprint images are first enhanced using a series of pre-processing techniques. Following, a bank of 2D Gabor filters is used to independently extract fingerprint and palmprint features, which are then concatenated into a single feature vector. We conclude that the proposed methodology has better performance and is more reliable compared to unimodal approaches using solely fingerprint or palmprint biometrics. This is supported by our experiments which are able to achieve equal error rate (EER) as low as 0.91% using the combined biometrics features.",
"title": ""
},
{
"docid": "7a82c189c756e9199ae0d394ed9ade7f",
"text": "Since the late 1970s, globalization has become a phenomenon that has elicited polarizing responses from scholars, politicians, activists, and the business community. Several scholars and activists, such as labor unions, see globalization as an anti-democratic movement that would weaken the nation-state in favor of the great powers. There is no doubt that globalization, no matter how it is defined, is here to stay, and is causing major changes on the globe. Given the rapid proliferation of advances in technology, communication, means of production, and transportation, globalization is a challenge to health and well-being worldwide. On an international level, the average human lifespan is increasing primarily due to advances in medicine and technology. The trends are a reflection of increasing health care demands along with the technological advances needed to prevent, diagnose, and treat disease (IOM, 1997). Along with this increase in longevity comes the concern of finding commonalities in the treatment of health disparities for all people. In a seminal work by Friedman (2005), it is posited that the connecting of knowledge into a global network will result in eradication of most of the healthcare translational barriers we face today. Since healthcare is a knowledge-driven profession, it is reasonable to presume that global healthcare will become more than just a buzzword. This chapter looks at all aspects or components of globalization but focuses specifically on how the movement impacts the health of the people and the nations of the world. The authors propose to use the concept of health as a measuring stick of the claims made on behalf of globalization.",
"title": ""
},
{
"docid": "27fb2d589c7296a8b7f11c81fd93e8bf",
"text": "Coarse-to-Fine Natural Language Processing",
"title": ""
},
{
"docid": "89f85a4a20735222867c5f0b4623f0a1",
"text": "Arabic is one of the major languages in the world. Unfortunately not so much research in Arabic speaker recognition has been done. One main reason for this lack of research is the unavailability of rich Arabic speech databases. In this paper, we present a rich and comprehensive Arabic speech database that we developed for the Arabic speaker / speech recognition research and/or applications. The database is rich in different aspects: (a) it has 752 speakers; (b) the speakers are from different ethnic groups: Saudis, Arabs, and non-Arabs; (c) utterances are both read text and spontaneous; (d) scripts are of different dimensions, such as, isolated words, digits, phonetically rich words, sentences, phonetically balanced sentences, paragraphs, etc.; (e) different sets of microphones with medium and high quality; (f) telephony and non-telephony speech; (g) three different recording environments: office, sound proof room, and cafeteria; (h) three different sessions, where the recording sessions are scheduled at least with 2 weeks interval. Because of the richness of this database, it can be used in many Arabic, and non-Arabic, speech processing researches, such as speaker / speech recognition, speech analysis, accent identification, ethnic groups / nationality recognition, etc. The richness of the database makes it a valuable resource for research in Arabic speech processing in particular and for research in speech processing in general. The database was carefully manually verified. The manual verification was complemented with automatic verification. Validation was performed on a subset of the database where the recognition rate reached 100% for Saudi speakers and 96% for non-Saudi speakers by using a system with 12 Mel frequency Cepstral coefficients, and 32 Gaussian mixtures.",
"title": ""
},
{
"docid": "875e98c4bd34e8c4131467a632b7d68f",
"text": "Human activity recognition is a challenging task, especially when its background is unknown or changing, and when scale or illumination differs in each video. Approaches utilizing spatio-temporal local features have proved that they are able to cope with such difficulties, but they mainly focused on classifying short videos of simple periodic actions. In this paper, we present a new activity recognition methodology that overcomes the limitations of the previous approaches using local features. We introduce a novel matching, spatio-temporal relationship match, which is designed to measure structural similarity between sets of features extracted from two videos. Our match hierarchically considers spatio-temporal relationships among feature points, thereby enabling detection and localization of complex non-periodic activities. In contrast to previous approaches to ‘classify’ videos, our approach is designed to ‘detect and localize’ all occurring activities from continuous videos where multiple actors and pedestrians are present. We implement and test our methodology on a newly-introduced dataset containing videos of multiple interacting persons and individual pedestrians. The results confirm that our system is able to recognize complex non-periodic activities (e.g. ‘push’ and ‘hug’) from sets of spatio-temporal features even when multiple activities are present in the scene",
"title": ""
},
{
"docid": "3840b8c709a8b2780b3d4a1b56bd986b",
"text": "A new scheme to resolve the intra-cell pilot collision for machine-to-machine (M2M) communication in crowded massive multiple-input multiple-output (MIMO) systems is proposed. The proposed scheme permits those failed user equipments (UEs), judged by a strongest-user collision resolution (SUCR) protocol, to contend for the idle pilots, i.e., the pilots that are not selected by any UE in the initial step. This scheme is called as SUCR combined idle pilots access (SUCR-IPA). To analyze the performance of the SUCR-IPA scheme, we develop a simple method to compute the access success probability of the UEs in each random access slot. The simulation results coincide well with the analysis. It is also shown that, compared with the SUCR protocol, the proposed SUCR-IPA scheme increases the throughput of the system significantly, and thus decreases the number of access attempts dramatically.",
"title": ""
}
] |
scidocsrr
|
9889875b9d6840a54f0714e27f95c2c2
|
Evidence-based interventions for myofascial trigger points
|
[
{
"docid": "6318c9d0e62f1608c105b114c6395e6f",
"text": "Myofascial pain associated with myofascial trigger points (MTrPs) is a common cause of nonarticular musculoskeletal pain. Although the presence of MTrPs can be determined by soft tissue palpation, little is known about the mechanisms and biochemical milieu associated with persistent muscle pain. A microanalytical system was developed to measure the in vivo biochemical milieu of muscle in near real time at the subnanogram level of concentration. The system includes a microdialysis needle capable of continuously collecting extremely small samples (approximately 0.5 microl) of physiological saline after exposure to the internal tissue milieu across a 105-microm-thick semi-permeable membrane. This membrane is positioned 200 microm from the tip of the needle and permits solutes of <75 kDa to diffuse across it. Three subjects were selected from each of three groups (total 9 subjects): normal (no neck pain, no MTrP); latent (no neck pain, MTrP present); active (neck pain, MTrP present). The microdialysis needle was inserted in a standardized location in the upper trapezius muscle. Due to the extremely small sample size collected by the microdialysis system, an established microanalytical laboratory, employing immunoaffinity capillary electrophoresis and capillary electrochromatography, performed analysis of selected analytes. Concentrations of protons, bradykinin, calcitonin gene-related peptide, substance P, tumor necrosis factor-alpha, interleukin-1beta, serotonin, and norepinephrine were found to be significantly higher in the active group than either of the other two groups (P < 0.01). pH was significantly lower in the active group than the other two groups (P < 0.03). In conclusion, the described microanalytical technique enables continuous sampling of extremely small quantities of substances directly from soft tissue, with minimal system perturbation and without harmful effects on subjects. The measured levels of analytes can be used to distinguish clinically distinct groups.",
"title": ""
}
] |
[
{
"docid": "bcf4f735cd0a3269adb8e65fba4d21b1",
"text": "An optimal &OHgr;(<italic>n</italic><supscrpt>2</supscrpt>) lower bound is shown for the time-space product of any <italic>R</italic>-way branching program that determines those values which occur exactly once in a list of <italic>n</italic> integers in the range [1, <italic>R</italic>] where <italic>R</italic> ≥ <italic>n</italic>. This &OHgr;(<italic>n</italic><supscrpt>2</supscrpt>) tradeoff also applies to the sorting problem and thus improves the previous time-space tradeoffs for sorting. Because the <italic>R</italic>-way branching program is a such a powerful model these time-space product tradeoffs also apply to all models of sequential computation that have a fair measure of space such as off-line multi-tape Turing machines and off-line log-cost RAMs.",
"title": ""
},
{
"docid": "da3876613301b46645408e474c1f5247",
"text": "The Strength Pareto Evolutionary Algorithm (SPEA) (Zitzle r and Thiele 1999) is a relatively recent technique for finding or approximatin g the Pareto-optimal set for multiobjective optimization problems. In different st udies (Zitzler and Thiele 1999; Zitzler, Deb, and Thiele 2000) SPEA has shown very good performance in comparison to other multiobjective evolutionary algorith ms, and therefore it has been a point of reference in various recent investigations, e.g., (Corne, Knowles, and Oates 2000). Furthermore, it has been used in different a pplic tions, e.g., (Lahanas, Milickovic, Baltas, and Zamboglou 2001). In this pap er, an improved version, namely SPEA2, is proposed, which incorporates in cont rast o its predecessor a fine-grained fitness assignment strategy, a density estima tion technique, and an enhanced archive truncation method. The comparison of SPEA 2 with SPEA and two other modern elitist methods, PESA and NSGA-II, on diffe rent test problems yields promising results.",
"title": ""
},
{
"docid": "b8f6411673d866c6464509b6fa7e9498",
"text": "In computer vision there has been increasing interest in learning hashing codes whose Hamming distance approximates the data similarity. The hashing functions play roles in both quantizing the vector space and generating similarity-preserving codes. Most existing hashing methods use hyper-planes (or kernelized hyper-planes) to quantize and encode. In this paper, we present a hashing method adopting the k-means quantization. We propose a novel Affinity-Preserving K-means algorithm which simultaneously performs k-means clustering and learns the binary indices of the quantized cells. The distance between the cells is approximated by the Hamming distance of the cell indices. We further generalize our algorithm to a product space for learning longer codes. Experiments show our method, named as K-means Hashing (KMH), outperforms various state-of-the-art hashing encoding methods.",
"title": ""
},
{
"docid": "609fa8716f97a1d30683997d778e4279",
"text": "The role of behavior for the acquisition of sensory representations has been underestimated in the past. We study this question for the task of learning vergence eye movements allowing proper fixation of objects. We model the development of this skill with an artificial neural network based on reinforcement learning. A biologically plausible reward mechanism that is responsible for driving behavior and learning of the representation of disparity is proposed. The network learns to perform vergence eye movements between natural images of objects by receiving a reward whenever an object is fixated with both eyes. Disparity tuned neurons emerge robustly in the hidden layer during development. The characteristics of the cells' tuning curves depend strongly on the task: if mostly small vergence movements are to be performed, tuning curves become narrower at small disparities, as has been measured experimentally in barn owls. Extensive training to discriminate between small disparities leads to an effective enhancement of sensitivity of the tuning curves.",
"title": ""
},
{
"docid": "825888e4befcbf6b492143a13928a34e",
"text": "Sentiment analysis is one of the prominent fields of data mining that deals with the identification and analysis of sentimental contents generally available at social media. Twitter is one of such social medias used by many users about some topics in the form of tweets. These tweets can be analyzed to find the viewpoints and sentiments of the users by using clustering-based methods. However, due to the subjective nature of the Twitter datasets, metaheuristic-based clustering methods outperforms the traditional methods for sentiment analysis. Therefore, this paper proposes a novel metaheuristic method (CSK) which is based on K-means and cuckoo search. The proposed method has been used to find the optimum cluster-heads from the sentimental contents of Twitter dataset. The efficacy of proposed method has been tested on different Twitter datasets and compared with particle swarm optimization, differential evolution, cuckoo search, improved cuckoo search, gauss-based cuckoo search, and two n-grams methods. Experimental results and statistical analysis validate that the proposed method outperforms the existing methods. The proposed method has theoretical implications for the future research to analyze the data generated through social networks/medias. This method has also very generalized practical implications for designing a system that can provide conclusive reviews on any social issues. © 2017 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "5916147ceb3e0bb236798abb394d1106",
"text": "One of the fundamental questions of enzymology is how catalytic power is derived. This review focuses on recent developments in the structure--function relationships of chorismate-utilizing enzymes involved in siderophore biosynthesis to provide insight into the biocatalysis of pericyclic reactions. Specifically, salicylate synthesis by the two-enzyme pathway in Pseudomonas aeruginosa is examined. The isochorismate-pyruvate lyase is discussed in the context of its homologues, the chorismate mutases, and the isochorismate synthase is compared to its homologues in the MST family (menaquinone, siderophore, or tryptophan biosynthesis) of enzymes. The tentative conclusion is that the activities observed cannot be reconciled by inspection of the active site participants alone. Instead, individual activities must arise from unique dynamic properties of each enzyme that are tuned to promote specific chemistries.",
"title": ""
},
{
"docid": "7604942913928dfb0e0ef486eccbcf8b",
"text": "We connect two scenarios in structured learning: adapting a parser trained on one corpus to another annotation style, and projecting syntactic annotations from one language to another. We propose quasisynchronous grammar (QG) features for these structured learning tasks. That is, we score a aligned pair of source and target trees based on local features of the trees and the alignment. Our quasi-synchronous model assigns positive probability to any alignment of any trees, in contrast to a synchronous grammar, which would insist on some form of structural parallelism. In monolingual dependency parser adaptation, we achieve high accuracy in translating among multiple annotation styles for the same sentence. On the more difficult problem of cross-lingual parser projection, we learn a dependency parser for a target language by using bilingual text, an English parser, and automatic word alignments. Our experiments show that unsupervised QG projection improves on parses trained using only highprecision projected annotations and far outperforms, by more than 35% absolute dependency accuracy, learning an unsupervised parser from raw target-language text alone. When a few target-language parse trees are available, projection gives a boost equivalent to doubling the number of target-language trees. ∗The first author would like to thank the Center for Intelligent Information Retrieval at UMass Amherst. We would also like to thank Noah Smith and Rebecca Hwa for helpful discussions and the anonymous reviewers for their suggestions for improving the paper.",
"title": ""
},
{
"docid": "212619e09ee7dfe0f32d90e2da25c8f0",
"text": "This paper tackles anomaly detection in videos, which is an extremely challenging task because anomaly is unbounded. We approach this task by leveraging a Convolutional Neural Network (CNN or ConvNet) for appearance encoding for each frame, and leveraging a Convolutional Long Short Term Memory (ConvLSTM) for memorizing all past frames which corresponds to the motion information. Then we integrate ConvNet and ConvLSTM with Auto-Encoder, which is referred to as ConvLSTM-AE, to learn the regularity of appearance and motion for the ordinary moments. Compared with 3D Convolutional Auto-Encoder based anomaly detection, our main contribution lies in that we propose a ConvLSTM-AE framework which better encodes the change of appearance and motion for normal events, respectively. To evaluate our method, we first conduct experiments on a synthesized Moving-MNIST dataset under controlled settings, and results show that our method can easily identify the change of appearance and motion. Extensive experiments on real anomaly datasets further validate the effectiveness of our method for anomaly detection.",
"title": ""
},
{
"docid": "339c367d71b4b51ad24aa59799b13416",
"text": "One of the biggest challenges of the current big data landscape is our inability to process vast amounts of information in a reasonable time. In this work, we explore and compare two distributed computing frameworks implemented on commodity cluster architectures: MPI/OpenMP on Beowulf that is high-performance oriented and exploits multi-machine/multicore infrastructures, and Apache Spark on Hadoop which targets iterative algorithms through in-memory computing. We use the Google Cloud Platform service to create virtual machine clusters, run the frameworks, and evaluate two supervised machine learning algorithms: KNN and Pegasos SVM. Results obtained from experiments with a particle physics data set show MPI/OpenMP outperforms Spark by more than one order of magnitude in terms of processing speed and provides more consistent performance. However, Spark shows better data management infrastructure and the possibility of dealing with other aspects such as node failure and data replication.",
"title": ""
},
{
"docid": "643599f9b0dcfd270f9f3c55567ed985",
"text": "OBJECTIVES\nTo describe a new first-trimester sonographic landmark, the retronasal triangle, which may be useful in the early screening for cleft palate.\n\n\nMETHODS\nThe retronasal triangle, i.e. the three echogenic lines formed by the two frontal processes of the maxilla and the palate visualized in the coronal view of the fetal face posterior to the nose, was evaluated prospectively in 100 consecutive normal fetuses at the time of routine first-trimester sonographic screening at 11 + 0 to 13 + 6 weeks' gestation. In a separate study of five fetuses confirmed postnatally as having a cleft palate, ultrasound images, including multiplanar three-dimensional views, were analyzed retrospectively to review the retronasal triangle.\n\n\nRESULTS\nNone of the fetuses evaluated prospectively was affected by cleft lip and palate. During their first-trimester scan, the retronasal triangle could not be identified in only two fetuses. Reasons for suboptimal visualization of this area included early gestational age at scanning (11 weeks) and persistent posterior position of the fetal face. Of the five cases with postnatal diagnosis of cleft palate, an abnormal configuration of the retronasal triangle was documented in all cases on analysis of digitally stored three-dimensional volumes.\n\n\nCONCLUSIONS\nThis study demonstrates the feasibility of incorporating evaluation of the retronasal triangle into the routine evaluation of the fetal anatomy at 11 + 0 to 13 + 6 weeks' gestation. Because fetuses with cleft palate have an abnormal configuration of the retronasal triangle, focused examination of the midface, looking for this area at the time of the nuchal translucency scan, may facilitate the early detection of cleft palate in the first trimester.",
"title": ""
},
{
"docid": "4746703f20b8fd902c451e658e44f49b",
"text": "This paper describes the development of a Latvian speech-to-text (STT) system at LIMSI within the Quaero project. One of the aims of the speech processing activities in the Quaero project is to cover all official European languages. However, for some of the languages only very limited, if any, training resources are available via corpora agencies such as LDC and ELRA. The aim of this study was to show the way, taking Latvian as example, an STT system can be rapidly developed without any transcribed training data. Following the scheme proposed in this paper, the Latvian STT system was developed in about a month and obtained a word error rate of 20% on broadcast news and conversation data in the Quaero 2012 evaluation campaign.",
"title": ""
},
{
"docid": "73f24b296deb64f2477fe54f9071f14f",
"text": "Intersection-collision warning systems use vehicle-to-infrastructure communication to avoid accidents at urban intersections. However, they are costly because additional roadside infrastructure must be installed, and they suffer from problems related to real-time information delivery. In this paper, an intersection-collision warning system based on vehicle-to-vehicle communication is proposed in order to solve such problems. The distance to the intersection is computed to evaluate the risk that the host vehicle will collide at the intersection, and a time-to-intersection index is computed to establish the risk of a collision. The proposed system was verified through simulations, confirming its potential as a new intersection-collision warning system based on vehicle-to-vehicle communication.",
"title": ""
},
{
"docid": "30d723478faf6ef20776e057c666f3e1",
"text": "India has 790+ million active mobile connections and 80.57 million smartphone users. However, as per Reserve Bank of India, the number of transactions performed using smartphone based mobile banking applicationsis less than 12% of the overall banking transactions. One of the major reasons for such low numbers is the usability of the mobile banking app. In this paper, we focus on usability issues related tomobile banking apps and propose a Mobile App Usability Index (MAUI) for enhancing the usability of a mobile banking app. The proposed Index has been validatedwith mobile banking channel managers, chief information security officers, etc.",
"title": ""
},
{
"docid": "f8fc4910745911ae369fe625997de128",
"text": "A new 17-Watt, 8.4 GHz, solid state power amplifier (SSPA) has been developed for the Jet Propulsion Laboratory's Mars Exploration Rover mission. The SSPA consists of a power amplifier microwave module and a highly efficient DC-DC power converter module integrated into a compact package that can be installed near the spacecraft antenna to minimize downlink transmission loss. The SSPA output power is 17 Watts nominal with an input DC power of 59 Watts and nominal input signal of +1 dBm. The unit is qualified to operate over a temperature range of -40/spl deg/C to +70/spl deg/C in vacuum or Martian atmosphere.",
"title": ""
},
{
"docid": "3c8cc4192ee6ddd126e53c8ab242f396",
"text": "There are several approaches for automated functional web testing and the choice among them depends on a number of factors, including the tools used for web testing and the costs associated with their adoption. In this paper, we present an empirical cost/benefit analysis of two different categories of automated functional web testing approaches: (1) capture-replay web testing (in particular, using Selenium IDE); and, (2) programmable web testing (using Selenium WebDriver). On a set of six web applications, we evaluated the costs of applying these testing approaches both when developing the initial test suites from scratch and when the test suites are maintained, upon the release of a new software version. Results indicate that, on the one hand, the development of the test suites is more expensive in terms of time required (between 32% and 112%) when the programmable web testing approach is adopted, but on the other hand, test suite maintenance is less expensive when this approach is used (with a saving between 16% and 51%). We found that, in the majority of the cases, after a small number of releases (from one to three), the cumulative cost of programmable web testing becomes lower than the cost involved with capture-replay web testing and the cost saving gets amplified over the successive releases.",
"title": ""
},
{
"docid": "7b8dffab502fae2abbea65464e2727aa",
"text": "Bone tissue is continuously remodeled through the concerted actions of bone cells, which include bone resorption by osteoclasts and bone formation by osteoblasts, whereas osteocytes act as mechanosensors and orchestrators of the bone remodeling process. This process is under the control of local (e.g., growth factors and cytokines) and systemic (e.g., calcitonin and estrogens) factors that all together contribute for bone homeostasis. An imbalance between bone resorption and formation can result in bone diseases including osteoporosis. Recently, it has been recognized that, during bone remodeling, there are an intricate communication among bone cells. For instance, the coupling from bone resorption to bone formation is achieved by interaction between osteoclasts and osteoblasts. Moreover, osteocytes produce factors that influence osteoblast and osteoclast activities, whereas osteocyte apoptosis is followed by osteoclastic bone resorption. The increasing knowledge about the structure and functions of bone cells contributed to a better understanding of bone biology. It has been suggested that there is a complex communication between bone cells and other organs, indicating the dynamic nature of bone tissue. In this review, we discuss the current data about the structure and functions of bone cells and the factors that influence bone remodeling.",
"title": ""
},
{
"docid": "bd7841688d039371f85d34f982130105",
"text": "Behavioral skills or policies for autonomous agents are conventionally learned from reward functions, via reinforcement learning, or from demonstrations, via imitation learning. However, both modes of task specification have their disadvantages: reward functions require manual engineering, while demonstrations require a human expert to be able to actually perform the task in order to generate the demonstration. Instruction following from natural language instructions provides an appealing alternative: in the same way that we can specify goals to other humans simply by speaking or writing, we would like to be able to specify tasks for our machines. However, a single instruction may be insufficient to fully communicate our intent or, even if it is, may be insufficient for an autonomous agent to actually understand how to perform the desired task. In this work, we propose an interactive formulation of the task specification problem, where iterative language corrections are provided to an autonomous agent, guiding it in acquiring the desired skill. Our proposed language-guided policy learning algorithm can integrate an instruction and a sequence of corrections to acquire new skills very quickly. In our experiments, we show that this method can enable a policy to follow instructions and corrections for simulated navigation and manipulation tasks, substantially outperforming direct, non-interactive instruction following.",
"title": ""
},
{
"docid": "4f49a5cc49f1eeb864b4a6f347263710",
"text": "Future wireless applications will take advantage of rapidly deployable, self-configuring multihop ad hoc networks. Because of the difficulty of obtaining IEEE 802.11 feedback about link connectivity in real networks, many multihop ad hoc networks utilize hello messages to determine local connectivity. This paper uses an implementation of the Ad hoc On-demand Distance Vector (AODV) routing protocol to examine the effectiveness of hello messages for monitoring link status. In this study, it is determined that many factors influence the utility of hello messages, including allowed hello message loss settings, discrepancy between data and hello message size and 802.11b packet handling. This paper examines these factors and experimentally evaluates a variety of approaches for improving the accuracy of hello messages as an indicator of local connectivity.",
"title": ""
},
{
"docid": "155411fe242dd4f3ab39649d20f5340f",
"text": "Two studies are presented that investigated 'fear of movement/(re)injury' in chronic musculoskeletal pain and its relation to behavioral performance. The 1st study examines the relation among fear of movement/(re)injury (as measured with the Dutch version of the Tampa Scale for Kinesiophobia (TSK-DV)) (Kori et al. 1990), biographical variables (age, pain duration, gender, use of supportive equipment, compensation status), pain-related variables (pain intensity, pain cognitions, pain coping) and affective distress (fear and depression) in a group of 103 chronic low back pain (CLBP) patients. In the 2nd study, motoric, psychophysiologic and self-report measures of fear are taken from 33 CLBP patients who are exposed to a single and relatively simple movement. Generally, findings demonstrated that the fear of movement/(re)injury is related to gender and compensation status, and more closely to measures of catastrophizing and depression, but in a much lesser degree to pain coping and pain intensity. Furthermore, subjects who report a high degree of fear of movement/(re)injury show more fear and escape/avoidance when exposed to a simple movement. The discussion focuses on the clinical relevance of the construct of fear of movement/(re)injury and research questions that remain to be answered.",
"title": ""
},
{
"docid": "b16f7a4242a9ff353d7726e66669ba97",
"text": "The ARPA MT Evaluation methodology effort is intended to provide a basis for measuring and thereby facilitating the progress of MT systems of the ARPAsponsored research program. The evaluation methodologies have the further goal of being useful for identifying the context of that progress among developed, production MT systems in use today. Since 1991, the evaluations have evolved as we have discovered more about what properties are valuable to measure, what properties are not, and what elements of the tests/evaluations can be adjusted to enhance significance of the results while still remaining relatively portable. This paper describes this evolutionary process, along with measurements of the most recent MT evaluation (January 1994) and the current evaluation process now underway.",
"title": ""
}
] |
scidocsrr
|
39585e28426c98d401ffd3a38dd2b403
|
Proof Protocol for a Machine Learning Technique Making Longitudinal Predictions in Dynamic Contexts
|
[
{
"docid": "6a2584657154d6c9fd0976c30469349a",
"text": "A major challenge for managers in turbulent environments is to make sound decisions quickly. Dynamic capabilities have been proposed as a means for addressing turbulent environments by helping managers extend, modify, and reconfigure existing operational capabilities into new ones that better match the environment. However, because dynamic capabilities have been viewed as an elusive black box, it is difficult for managers to make sound decisions in turbulent environments if they cannot effectively measure dynamic capabilities. Therefore, we first seek to propose a measurable model of dynamic capabilities by conceptualizing, operationalizing, and measuring dynamic capabilities. Specifically, drawing upon the dynamic capabilities literature, we identify a set of capabilities—sensing the environment, learning, coordinating, and integrating— that help reconfigure existing operational capabilities into new ones that better match the environment. Second, we propose a structural model where dynamic capabilities influence performance by reconfiguring existing operational capabilities in the context of new product development (NPD). Data from 180 NPD units support both the measurable model of dynamic capabilities and also the structural model by which dynamic capabilities influence performance in NPD by reconfiguring operational capabilities, particularly in higher levels of environmental turbulence. The study’s implications for managerial decision making in turbulent environments by capturing the elusive black box of dynamic capabilities are discussed. Subject Areas: Decision Making in Turbulent Environments, Dynamic Capabilities, Environmental Turbulence, New Product Development, and Operational Capabilities.",
"title": ""
}
] |
[
{
"docid": "7ac6fea42fc232ea8effd09521da32a0",
"text": "There appears to be growing consensus that Small Business Enterprises (SBEs) exert a major influence on the economy of Trinidad and Tobago. This study investigated how and to what extent small businesses influenced macroeconomic variables such as employment, growth and productivity in the important sectors of manufacturing and services. The paper used a methodology that traverses the reader though a combination of various literatures, and theories coupled with relevant statistics on small business. This process is aimed at accessing SBEs’ impact on the Trinidad & Tobago’s non-petroleum economy, especially as it relates to economic diversification. Although, there exists significant room for both improvement and expansion of these entities especially in light of the country’s dependence on hydro-carbons – this study provides collaborating evidence that small business enterprises perform an essential role in the future of Trinidad and Tobago’s economy. E c o n o m i c I m p a c t o f S B E s | 2",
"title": ""
},
{
"docid": "fb5f04974fe6cf406ed955fb9ef0cac0",
"text": "We motivate the need for a new requirements engineering methodology for systematically helping businesses and users to adopt cloud services and for mitigating risks in such transition. The methodology is grounded in goal oriented approaches for requirements engineering. We argue that Goal Oriented Requirements Engineering (GORE) is a promising paradigm to adopt for goals that are generic and flexible statements of users' requirements, which could be refined, elaborated, negotiated, mitigated for risks and analysed for economics considerations. We describe the steps of the proposed process and exemplify the use of the methodology through an example. The methodology can be used by small to large scale organisations to inform crucial decisions related to cloud adoption.",
"title": ""
},
{
"docid": "6c8e1e77efea6fd82f9ec6146689a011",
"text": "BACKGROUND\nHigh incidences of neck pain morbidity are challenging in various situations for populations based on their demographic, physiological and pathological characteristics. Chinese proprietary herbal medicines, as Complementary and Alternative Medicine (CAM) products, are usually developed from well-established and long-standing recipes formulated as tablets or capsules. However, good quantification and strict standardization are still needed for implementation of individualized therapies. The Qishe pill was developed and has been used clinically since 2009. The Qishe pill's personalized medicine should be documented and administered to various patients according to the ancient TCM system, a classification of personalized constitution types, established to determine predisposition and prognosis to diseases as well as therapy and life-style administration. Therefore, we describe the population pharmacokinetic profile of the Qishe pill and compare its metabolic rate in the three major constitution types (Qi-Deficiency, Yin-Deficiency and Blood-Stasis) to address major challenges to individualized standardized TCM.\n\n\nMETHODS/DESIGN\nHealthy subjects (N = 108) selected based on constitutional types will be assessed, and standardized pharmacokinetic protocol will be used for assessing demographic, physiological, and pathological data. Laboratory biomarkers will be evaluated and blood samples collected for pharmacokinetics(PK) analysis and second-generation gene sequencing. In single-dose administrations, subjects in each constitutional type cohort (N = 36) will be randomly divided into three groups to receive different Qishe pill doses (3.75, 7.5 and 15 grams). Multiomics, including next generation sequencing, metabolomics, and proteomics, will complement the Qishe pill's multilevel assessment, with cytochrome P450 genes as targets. In a comparison with the general population, a systematic population pharmacokinetic (PopPK) model for the Qishe pill will be established and verified.\n\n\nTRIAL REGISTRATION\nThis study is registered at ClinicalTrials.gov, NCT02294448 .15 November 2014.",
"title": ""
},
{
"docid": "72b2bb4343c81576e208c2f678dae153",
"text": "We propose a novel class of statistical divergences called Relaxed Wasserstein (RW) divergence. RW divergence generalizes Wasserstein divergence and is parametrized by a class of strictly convex and differentiable functions. We establish for RW divergence several probabilistic properties, which are critical for the success of Wasserstein divergence. In particular, we show that RW divergence is dominated by Total Variation (TV) and Wasserstein-L divergence, and that RW divergence has continuity, differentiability and duality representation. Finally, we provide a non-asymptotic moment estimate and a concentration inequality for RW divergence. Our experiments on image generation demonstrate that RW divergence is a suitable choice for GANs. The performance of RWGANs with Kullback-Leibler (KL) divergence is competitive with other state-of-the-art GANs approaches. Moreover, RWGANs possess better convergence properties than the existing WGANs with competitive inception scores. To the best of our knowledge, this new conceptual framework is the first to provide not only the flexibility in designing effective GANs scheme, but also the possibility in studying different loss functions under a unified mathematical framework.",
"title": ""
},
{
"docid": "46a0a6e652d9a2dd684bd790db7ca4d5",
"text": "An extreme learning machine (ELM) is a recently proposed learning algorithm for a single-layer feed forward neural network. In this paper we studied the ensemble of ELM by using a bagging algorithm for facial expression recognition (FER). Facial expression analysis is widely used in the behavior interpretation of emotions, for cognitive science, and social interactions. This paper presents a method for FER based on the histogram of orientation gradient (HOG) features using an ELM ensemble. First, the HOG features were extracted from the face image by dividing it into a number of small cells. A bagging algorithm was then used to construct many different bags of training data and each of them was trained by using separate ELMs. To recognize the expression of the input face image, HOG features were fed to each trained ELM and the results were combined by using a majority voting scheme. The ELM ensemble using bagging improves the generalized capability of the network significantly. The two available datasets (JAFFE and CK+) of facial expressions were used to evaluate the performance of the proposed classification system. Even the performance of individual ELM was smaller and the ELM ensemble using a bagging algorithm improved the recognition performance significantly. Keywords—Bagging, Ensemble Learning, Extreme Learning Machine, Facial Expression Recognition, Histogram of Orientation Gradient",
"title": ""
},
{
"docid": "8f25b3b36031653311eee40c6c093768",
"text": "This paper provides a survey of the applications of computers in music teaching. The systems are classified by musical activity rather than by technical approach. The instructional strategies involved and the type of knowledge represented are highlighted and areas for future research are identified.",
"title": ""
},
{
"docid": "99f0826db209b29fbbd38a0ec157954d",
"text": "While Physics-Based Simulation (PBS) can highly accurately drape a 3D garment model on a 3D body, it remains too costly for real-time applications, such as virtual try-on. By contrast, inference in a deep network, that is, a single forward pass, is typically quite fast. In this paper, we leverage this property and introduce a novel architecture to fit a 3D garment template to a 3D body model. Specifically, we build upon the recent progress in 3D point-cloud processing with deep networks to extract garment features at varying levels of detail, including point-wise, patch-wise and global features. We then fuse these features with those extracted in parallel from the 3D body, so as to model the cloth-body interactions. The resulting two-stream architecture is trained with a loss function inspired by physics-based modeling, and delivers realistic garment shapes whose 3D points are, on average, less than 1.5cm away from those of a PBS method, while running 40 times faster.",
"title": ""
},
{
"docid": "df80b751fa78e0631ca51f6199cc822c",
"text": "OBJECTIVE\nHumane treatment and care of mentally ill people can be viewed from a historical perspective. Intramural (the institution) and extramural (the community) initiatives are not mutually exclusive.\n\n\nMETHOD\nThe evolution of the psychiatric institution in Canada as the primary method of care is presented from an historical perspective. A province-by-province review of provisions for mentally ill people prior to asylum construction reveals that humanitarian motives and a growing sensitivity to social and medical problems gave rise to institutional psychiatry. The influence of Great Britain, France, and, to a lesser extent, the United States in the construction of asylums in Canada is highlighted. The contemporary redirection of the Canadian mental health system toward \"dehospitalization\" is discussed and delineated.\n\n\nRESULTS\nEarly promoters of asylums were genuinely concerned with alleviating human suffering, which led to the separation of mental health services from the community and from those proffered to the criminal and indigent populations. While the results of the past institutional era were mixed, it is hoped that the \"care\" cycle will not repeat itself in the form of undesirable community alternatives.\n\n\nCONCLUSION\nSeverely psychiatrically disabled individuals can be cared for in the community if appropriate services exist.",
"title": ""
},
{
"docid": "32ce2215040d6315f1442719b0fc353a",
"text": "Introduction. Internal nasal valve incompetence (INVI) has been treated with various surgical methods. Large, single surgeon case series are lacking, meaning that the evidence supporting a particular technique has been deficient. We present a case series using alar batten grafts to reconstruct the internal nasal valve, all performed by the senior author. Methods. Over a 7-year period, 107 patients with nasal obstruction caused by INVI underwent alar batten grafting. Preoperative assessment included the use of nasal strips to evaluate symptom improvement. Visual analogue scale (VAS) assessment of nasal blockage (NB) and quality of life (QOL) both pre- and postoperatively were performed and analysed with the Wilcoxon signed rank test. Results. Sixty-seven patients responded to both pre- and postoperative questionnaires. Ninety-one percent reported an improvement in NB and 88% an improvement in QOL. The greatest improvement was seen at 6 months (median VAS 15 mm and 88 mm resp., with a P value of <0.05 for both). Nasal strips were used preoperatively and are a useful tool in predicting patient operative success in both NB and QOL (odds ratio 2.15 and 2.58, resp.). Conclusions. Alar batten graft insertion as a single technique is a valid technique in treating INVI and produces good outcomes.",
"title": ""
},
{
"docid": "38a5b1d2e064228ec498cf64d29d80e5",
"text": "Model-free deep reinforcement learning (RL) algorithms have been successfully applied to a range of challenging sequential decision making and control tasks. However, these methods typically suffer from two major challenges: high sample complexity and brittleness to hyperparameters. Both of these challenges limit the applicability of such methods to real-world domains. In this paper, we describe Soft Actor-Critic (SAC), our recently introduced off-policy actor-critic algorithm based on the maximum entropy RL framework. In this framework, the actor aims to simultaneously maximize expected return and entropy. That is, to succeed at the task while acting as randomly as possible. We extend SAC to incorporate a number of modifications that accelerate training and improve stability with respect to the hyperparameters, including a constrained formulation that automatically tunes the temperature hyperparameter. We systematically evaluate SAC on a range of benchmark tasks, as well as real-world challenging tasks such as locomotion for a quadrupedal robot and robotic manipulation with a dexterous hand. With these improvements, SAC achieves state-of-the-art performance, outperforming prior on-policy and off-policy methods in sample-efficiency and asymptotic performance. Furthermore, we demonstrate that, in contrast to other off-policy algorithms, our approach is very stable, achieving similar performance across different random seeds. These results suggest that SAC is a promising candidate for learning in real-world robotics tasks.",
"title": ""
},
{
"docid": "9ee7bba03c4875ee7947b5e13e3d6bfb",
"text": "Connectivity has an important role in different discipline s of computer science including computer network. In the des ign of a network, it is important to analyze connections by the le v ls. The structural properties of bipolar fuzzy graphs pro vide a tool that allows for the solution of operations research problems. In this paper, we introduce various types of bipolar fuzzy brid ges, bipolar fuzzy cut-vertices, bipolar fuzzy cycles and bipolar fuzzy trees in bipolar fuzzy graphs, and investigate some of their prope rties. Most of these various types are defined in terms of levels. We also describe omparison of these types.",
"title": ""
},
{
"docid": "837803a140450d594d5693a06ba3be4b",
"text": "Allocation of very scarce medical interventions such as organs and vaccines is a persistent ethical challenge. We evaluate eight simple allocation principles that can be classified into four categories: treating people equally, favouring the worst-off, maximising total benefits, and promoting and rewarding social usefulness. No single principle is sufficient to incorporate all morally relevant considerations and therefore individual principles must be combined into multiprinciple allocation systems. We evaluate three systems: the United Network for Organ Sharing points systems, quality-adjusted life-years, and disability-adjusted life-years. We recommend an alternative system-the complete lives system-which prioritises younger people who have not yet lived a complete life, and also incorporates prognosis, save the most lives, lottery, and instrumental value principles.",
"title": ""
},
{
"docid": "3953a1a05e064b8211fe006af4595e70",
"text": "Sentiment analysis is a common task in natural language processing that aims to detect polarity of a text document (typically a consumer review). In the simplest settings, we discriminate only between positive and negative sentiment, turning the task into a standard binary classification problem. We compare several machine learning approaches to this problem, and combine them to achieve a new state of the art. We show how to use for this task the standard generative language models, which are slightly complementary to the state of the art techniques. We achieve strong results on a well-known dataset of IMDB movie reviews. Our results are easily reproducible, as we publish also the code needed to repeat the experiments. This should simplify further advance of the state of the art, as other researchers can combine their techniques with ours with little effort.",
"title": ""
},
{
"docid": "e75669b68e8736ee6044443108c00eb1",
"text": "UNLABELLED\nThe evolution in adhesive dentistry has broadened the indication of esthetic restorative procedures especially with the use of resin composite material. Depending on the clinical situation, some restorative techniques are best indicated. As an example, indirect adhesive restorations offer many advantages over direct techniques in extended cavities. In general, the indirect technique requires two appointments and a laboratory involvement, or it can be prepared chairside in a single visit either conventionally or by the use of computer-aided design/computer-aided manufacturing systems. In both cases, there will be an extra cost as well as the need of specific materials. This paper describes the clinical procedures for the chairside semidirect technique for composite onlay fabrication without the use of special equipments. The use of this technique combines the advantages of the direct and the indirect restoration.\n\n\nCLINICAL SIGNIFICANCE\nThe semidirect technique for composite onlays offers the advantages of an indirect restoration and low cost, and can be the ideal treatment option for extended cavities in case of financial limitations.",
"title": ""
},
{
"docid": "dfe56abd5b8fcd1dc0e3a0dba02832b6",
"text": "This paper presents a zero-voltage switching (ZVS) forward-flyback DC-DC converter, which is able to process and deliver power efficiently over very wide input voltage variation. The proposed ZVS forward flyback DC/DC converter is part of a Micro-inverter to perform input voltage regulation to achieving maximum power point tracking for Photo-voltaic panel. The converter operates at boundary between current continuous and discontinuous mode to achieve ZVS. Variable frequency with fixed off time is used for reducing core losses of the transformer, achieving high efficiency. In addition, non-dissipative LC snubber circuit is used to get both benefits: 1) the voltage spike is restrained effectively while the switch is turned off with high current in primary side; 2) the main switch still keeps ZVS feature. Finally, experiment results provided from a 200W prototype (30Vdc-50Vdc input, 230Vdc output) validate the feasibility and superior performance of the proposed converter.",
"title": ""
},
{
"docid": "29fc090c5d1e325fd28e6bbcb690fb8d",
"text": "Many forensic computing practitioners work in a high workload and low resource environment. With the move by the discipline to seek ISO 17025 laboratory accreditation, practitioners are finding it difficult to meet the demands of validation and verification of their tools and still meet the demands of the accreditation framework. Many agencies are ill-equipped to reproduce tests conducted by organizations such as NIST since they cannot verify the results with their equipment and in many cases rely solely on an independent validation study of other peoples' equipment. This creates the issue of tools in reality never being tested. Studies have shown that independent validation and verification of complex forensic tools is expensive and time consuming, and many practitioners also use tools that were not originally designed for forensic purposes. This paper explores the issues of validation and verification in the accreditation environment and proposes a paradigm that will reduce the time and expense required to validate and verify forensic software tools",
"title": ""
},
{
"docid": "1d9b50bf7fa39c11cca4e864bbec5cf3",
"text": "FPGA-based embedded soft vector processors can exceed the performance and energy-efficiency of embedded GPUs and DSPs for lightweight deep learning applications. For low complexity deep neural networks targeting resource constrained platforms, we develop optimized Caffe-compatible deep learning library routines that target a range of embedded accelerator-based systems between 4 -- 8 W power budgets such as the Xilinx Zedboard (with MXP soft vector processor), NVIDIA Jetson TK1 (GPU), InForce 6410 (DSP), TI EVM5432 (DSP) as well as the Adapteva Parallella board (custom multi-core with NoC). For MNIST (28×28 images) and CIFAR10 (32×32 images), the deep layer structure is amenable to MXP-enhanced FPGA mappings to deliver 1.4 -- 5× higher energy efficiency than all other platforms. Not surprisingly, embedded GPU works better for complex networks with large image resolutions.",
"title": ""
},
{
"docid": "2746379baa4c59fae63dc92a9c8057bc",
"text": "Twenty-five Semantic Web and Database researchers met at the 2011 STI Semantic Summit in Riga, Latvia July 6-8, 2011[1] to discuss the opportunities and challenges posed by Big Data for the Semantic Web, Semantic Technologies, and Database communities. The unanimous conclusion was that the greatest shared challenge was not only engineering Big Data, but also doing so meaningfully. The following are four expressions of that challenge from different perspectives.",
"title": ""
},
{
"docid": "8bb5794d38528ab459813ab1fa484a69",
"text": "We introduce the ACL Anthology Network (AAN), a manually curated networked database of citations, collaborations, and summaries in the field of Computational Linguistics. We also present a number of statistics about the network including the most cited authors, the most central collaborators, as well as network statistics about the paper citation, author citation, and author collaboration networks.",
"title": ""
}
] |
scidocsrr
|
c1cf294f54daffe255b8869a73d4f9ac
|
Tunneling Field-Effect Transistors (TFETs) With Subthreshold Swing (SS) Less Than 60 mV/dec
|
[
{
"docid": "002acd845aa9776840dfe9e8755d7732",
"text": "A detailed study on the mechanism of band-to-band tunneling in carbon nanotube field-effect transistors (CNFETs) is presented. Through a dual-gated CNFET structure tunneling currents from the valence into the conduction band and vice versa can be enabled or disabled by changing the gate potential. Different from a conventional device where the Fermi distribution ultimately limits the gate voltage range for switching the device on or off, current flow is controlled here by the valence and conduction band edges in a bandpass-filter-like arrangement. We discuss how the structure of the nanotube is the key enabler of this particular one-dimensional tunneling effect.",
"title": ""
}
] |
[
{
"docid": "9b2e025c6bb8461ddb076301003df0e4",
"text": "People are sharing their opinions, stories and reviews through online video sharing websites every day. Studying sentiment and subjectivity in these opinion videos is experiencing a growing attention from academia and industry. While sentiment analysis has been successful for text, it is an understudied research question for videos and multimedia content. The biggest setbacks for studies in this direction are lack of a proper dataset, methodology, baselines and statistical analysis of how information from different modality sources relate to each other. This paper introduces to the scientific community the first opinion-level annotated corpus of sentiment and subjectivity analysis in online videos called Multimodal Opinionlevel Sentiment Intensity dataset (MOSI). The dataset is rigorously annotated with labels for subjectivity, sentiment intensity, per-frame and per-opinion annotated visual features, and per-milliseconds annotated audio features. Furthermore, we present baselines for future studies in this direction as well as a new multimodal fusion approach that jointly models spoken words and visual gestures.",
"title": ""
},
{
"docid": "67825e84cb2e636deead618a0868fa4a",
"text": "Image compression is used specially for the compression of images where tolerable degradation is required. With the wide use of computers and consequently need for large scale storage and transmission of data, efficient ways of storing of data have become necessary. With the growth of technology and entrance into the Digital Age, the world has found itself amid a vast amount of information. Dealing with such enormous information can often present difficulties. Image compression is minimizing the size in bytes of a graphics file without degrading the quality of the image to an unacceptable level. The reduction in file size allows more images to be stored in a given amount of disk or memory space. It also reduces the time required for images to be sent over the Internet or downloaded from Web pages.JPEG and JPEG 2000 are two important techniques used for image compression. In this paper, we discuss about lossy image compression techniques and reviews of different basic lossy image compression methods are considered. The methods such as JPEG and JPEG2000 are considered. A conclusion is derived on the basis of these methods Keywords— Data compression, Lossy image compression, JPEG, JPEG2000, DCT, DWT",
"title": ""
},
{
"docid": "ed0b269f861775550edd83b1eb420190",
"text": "The continuous innovation process of the Information and Communication Technology (ICT) sector shape the way businesses redefine their business models. Though, current drivers of innovation processes focus solely on a technical dimension, while disregarding social and environmental drivers. However, examples like Nokia, Yahoo or Hewlett-Packard show that even though a profitable business model exists, a sound strategic innovation process is needed to remain profitable in the long term. A sustainable business model innovation demands the incorporation of all dimensions of the triple bottom line. Nevertheless, current management processes do not take the responsible steps to remain sustainable and keep being in denial of the evolutionary direction in which the markets develop, because the effects are not visible in short term. The implications are of substantial effect and can bring the foundation of the company’s business model in danger. This work evaluates the decision process that lets businesses decide in favor of un-sustainable changes and points out the barriers that prevent the development towards a sustainable business model that takes the new balance of forces into account.",
"title": ""
},
{
"docid": "81e0b85a142a81f9e2012f050c43fb43",
"text": "The activation of under frequency load shedding (UFLS) is the last automated action against the severe frequency drops in order to rebalance the system. In this paper, the setting parameters of a multistage load shedding plan are obtained and optimized using a discretized model of dynamic system frequency response. The uncertainties of system parameters including inertia time constant, load damping, and generation deficiency are taken into account. The proposed UFLS model is formulated as a mixed-integer linear programming optimization problem to minimize the expected amount of load shedding. The activation of rate-of-change-of-frequency relays as the anti-islanding protection of distributed generators is considered. The Monte Carlo simulation method is utilized for modeling the uncertainties of system parameters. The results of probabilistic UFLS are then utilized to design four different UFLS strategies. The proposed dynamic UFLS plans are simulated over the IEEE 39-bus and the large-scale practical Iranian national grid.",
"title": ""
},
{
"docid": "e90b29baf65216807d80360083912cd4",
"text": "Software maintenance claims a large proportion of organizational resources. It is thought that many maintenance problems derive from inadequate software design and development practices. Poor design choices can result in complex software that is costly to support and difficult to change. However, it is difficult to assess the actual maintenance performance effects of software development practices because their impact is realized over the software life cycle. To estimate the impact of development activities in a more practical time frame, this research develops a two stage model in which software complexity is a key intermediate variable that links design and development decisions to their downstream effects on software maintenance. The research analyzes data collected from a national mass merchandising retailer on twenty-nine software enhancement projects and twenty-three software applications in a large IBM COBOL environment. Results indicate that the use of a code generator in development is associated with increased software complexity and software enhancement project effort. The use of packaged software is associated with decreased software complexity and software enhancement effort. These results suggest an important link between software development practices and maintenance performance.",
"title": ""
},
{
"docid": "b82c7c8f36ea16c29dfc5fa00a58b229",
"text": "Green cloud computing has become a major concern in both industry and academia, and efficient scheduling approaches show promising ways to reduce the energy consumption of cloud computing platforms while guaranteeing QoS requirements of tasks. Existing scheduling approaches are inadequate for realtime tasks running in uncertain cloud environments, because those approaches assume that cloud computing environments are deterministic and pre-computed schedule decisions will be statically followed during schedule execution. In this paper, we address this issue. We introduce an interval number theory to describe the uncertainty of the computing environment and a scheduling architecture to mitigate the impact of uncertainty on the task scheduling quality for a cloud data center. Based on this architecture, we present a novel scheduling algorithm (PRS) that dynamically exploits proactive and reactive scheduling methods, for scheduling real-time, aperiodic, independent tasks. To improve energy efficiency, we propose three strategies to scale up and down the system’s computing resources according to workload to improve resource utilization and to reduce energy consumption for the cloud data center. We conduct extensive experiments to compare PRS with four typical baseline scheduling algorithms. The experimental results show that PRS performs better than those algorithms, and can effectively improve the performance of a cloud data center.",
"title": ""
},
{
"docid": "d8d102c3d6ac7d937bb864c69b4d3cd9",
"text": "Question Answering (QA) systems are becoming the inspiring model for the future of search engines. While recently, underlying datasets for QA systems have been promoted from unstructured datasets to structured datasets with highly semantic-enriched metadata, but still question answering systems involve serious challenges which cause to be far beyond desired expectations. In this paper, we raise the challenges for building a Question Answering (QA) system especially with the focus of employing structured data (i.e. knowledge graph). This paper provide an exhaustive insight of the known challenges, so far. Thus, it helps researchers to easily spot open rooms for the future research agenda.",
"title": ""
},
{
"docid": "ad5a8c3ee37219868d056b341300008e",
"text": "The challenges of 4G are multifaceted. First, 4G requires multiple-input, multiple-output (MIMO) technology, and mobile devices supporting MIMO typically have multiple antennas. To obtain the benefits of MIMO communications systems, antennas typically must be properly configured to take advantage of the independent signal paths that can exist in the communications channel environment. [1] With proper design, one antenna’s radiation is prevented from traveling into the neighboring antenna and being absorbed by the opposite load circuitry. Typically, a combination of antenna separation and polarization is used to achieve the required signal isolation and independence. However, when the area inside devices such as smartphones, USB modems, and tablets is extremely limited, this approach often is not effective in meeting industrial design and performance criteria. Second, new LTE networks are expected to operate alongside all the existing services, such as 3G voice/data, Wi-Fi, Bluetooth, etc. Third, this problem gets even harder in the 700 MHz LTE band because the typical handset is not large enough to properly resonate at that frequency.",
"title": ""
},
{
"docid": "7780162b3418d4c76300129ef4ee81bf",
"text": "A 39.8-44.6 Gb/s transmitter and receiver chipset designed in 40 nm CMOS is presented. The line-side TX implements a 2-tap FIR filter with delay-based pre-emphasis. The line-side RX uses a quarter-rate CDR architecture. The TX output shows 0.9 pspp ISI and 0.2 psrms RJ at 0.87 W. The RX achieves a jitter tolerance of 0.6 UIpp at 100 MHz and an input sensitivity of 20 mV pp\\mathchar\"702D diff at 1.05 W. The combined transmitter/receiver equalization enables 44.6 Gb/s data transmission using 231-1 PRBS at BER 10-12 over a channel with >21 dB loss at Nyquist frequency.",
"title": ""
},
{
"docid": "5350ffea7a4187f0df11fd71562aba43",
"text": "The presence of buried landmines is a serious threat in many areas around the World. Despite various techniques have been proposed in the literature to detect and recognize buried objects, automatic and easy to use systems providing accurate performance are still under research. Given the incredible results achieved by deep learning in many detection tasks, in this paper we propose a pipeline for buried landmine detection based on convolutional neural networks (CNNs) applied to ground-penetrating radar (GPR) images. The proposed algorithm is capable of recognizing whether a B-scan profile obtained from GPR acquisitions contains traces of buried mines. Validation of the presented system is carried out on real GPR acquisitions, albeit system training can be performed simply relying on synthetically generated data. Results show that it is possible to reach 95% of detection accuracy without training in real acquisition of landmine profiles.",
"title": ""
},
{
"docid": "176386fd6f456d818d7ebf81f65d5030",
"text": "Event-driven architecture is gaining momentum in research and application areas as it promises enhanced responsiveness and asynchronous communication. The combination of event-driven and service-oriented architectural paradigms and web service technologies provide a viable possibility to achieve these promises. This paper outlines an architectural design and accompanying implementation technologies for its realization as a web services-based event-driven SOA.",
"title": ""
},
{
"docid": "d6bd475e9929748bbb71ac0d82e4f067",
"text": "We present an approach for answering questions that span multiple sentences and exhibit sophisticated cross-sentence anaphoric phenomena, evaluating on a rich source of such questions – the math portion of the Scholastic Aptitude Test (SAT). By using a tree transducer cascade as its basic architecture, our system (called EUCLID) propagates uncertainty from multiple sources (e.g. coreference resolution or verb interpretation) until it can be confidently resolved. Experiments show the first-ever results (43% recall and 91% precision) on SAT algebra word problems. We also apply EUCLID to the public Dolphin algebra question set, and improve the state-of-the-art F1-score from 73.9% to 77.0%.",
"title": ""
},
{
"docid": "b6c0228cce65009d4d56ce8fcebe083c",
"text": "In this tutorial, we give an introduction to the field of and state of the art in music information retrieval (MIR). The tutorial particularly spotlights the question of music similarity, which is an essential aspect in music retrieval and recommendation. Three factors play a central role in MIR research: (1) the music content, i.e., the audio signal itself, (2) the music context, i.e., metadata in the widest sense, and (3) the listeners and their contexts, manifested in user-music interaction traces. We review approaches that extract features from all three data sources and combinations thereof and show how these features can be used for (large-scale) music indexing, music description, music similarity measurement, and recommendation. These methods are further showcased in a number of popular music applications, such as automatic playlist generation and personalized radio stationing, location-aware music recommendation, music search engines, and intelligent browsing interfaces. Additionally, related topics such as music identification, automatic music accompaniment and score following, and search and retrieval in the music production domain are discussed.",
"title": ""
},
{
"docid": "7f71e539817c80aaa0a4fe3b68d76948",
"text": "We propose to help weakly supervised object localization for classes where location annotations are not available, by transferring things and stuff knowledge from a source set with available annotations. The source and target classes might share similar appearance (e.g. bear fur is similar to cat fur) or appear against similar background (e.g. horse and sheep appear against grass). To exploit this, we acquire three types of knowledge from the source set: a segmentation model trained on both thing and stuff classes; similarity relations between target and source classes; and cooccurrence relations between thing and stuff classes in the source. The segmentation model is used to generate thing and stuff segmentation maps on a target image, while the class similarity and co-occurrence knowledge help refining them. We then incorporate these maps as new cues into a multiple instance learning framework (MIL), propagating the transferred knowledge from the pixel level to the object proposal level. In extensive experiments, we conduct our transfer from the PASCAL Context dataset (source) to the ILSVRC, COCO and PASCAL VOC 2007 datasets (targets). We evaluate our transfer across widely different thing classes, including some that are not similar in appearance, but appear against similar background. The results demonstrate significant improvement over standard MIL, and we outperform the state-of-the-art in the transfer setting.",
"title": ""
},
{
"docid": "bcbcb23a0681ef063a37b94ccc26b00c",
"text": "Race and racism persist online in ways that are both new and unique to the Internet, alongside vestiges of centuries-old forms that reverberate significantly both offline and on. As we mark 15 years into the field of Internet studies, it becomes necessary to assess what the extant research tells us about race and racism. This paper provides an analysis of the literature on race and racism in Internet studies in the broad areas of (1) race and the structure of the Internet, (2) race and racism matters in what we do online, and (3) race, social control and Internet law. Then, drawing on a range of theoretical perspectives, including Hall’s spectacle of the Other and DuBois’s view of white culture, the paper offers an analysis and critique of the field, in particular the use of racial formation theory. Finally, the paper points to the need for a critical understanding of whiteness in Internet studies.",
"title": ""
},
{
"docid": "19ea9b23f8757804c23c21293834ff3f",
"text": "We try to address the problem of document layout understanding using a simple algorithm which generalizes across multiple domains while training on just few examples per domain. We approach this problem via supervised object detection method and propose a methodology to overcome the requirement of large datasets. We use the concept of transfer learning by pre-training our object detector on a simple artificial (source) dataset and fine-tuning it on a tiny domain specific (target) dataset. We show that this methodology works for multiple domains with training samples as less as 10 documents. We demonstrate the effect of each component of the methodology in the end result and show the superiority of this methodology over simple object detectors.",
"title": ""
},
{
"docid": "5ea59255b0ffd15285477fe5b997d48d",
"text": "Gastric cancer in humans arises in the setting of oxyntic atrophy (parietal cell loss) and attendant hyperplastic and metaplastic lineage changes within the gastric mucosa. Helicobacter infection in mice and humans leads to spasmolytic polypeptide-expressing metaplasia (SPEM). In a number of mouse models, SPEM arises after oxyntic atrophy. In mice treated with the parietal cell toxic protonophore DMP-777, SPEM appears to arise from the transdifferentiation of chief cells. These results support the concept that intrinsic mucosal influences regulate and modulate the appearance of gastric metaplasia even in the absence of significant inflammation, whereas chronic inflammation is required for the further neoplastic transition.",
"title": ""
},
{
"docid": "ebb43198da619d656c068f2ab1bfe47f",
"text": "Remote data integrity checking (RDIC) enables a server to prove to an auditor the integrity of a stored file. It is a useful technology for remote storage such as cloud storage. The auditor could be a party other than the data owner; hence, an RDIC proof is based usually on publicly available information. To capture the need of data privacy against an untrusted auditor, Hao et al. formally defined “privacy against third party verifiers” as one of the security requirements and proposed a protocol satisfying this definition. However, we observe that all existing protocols with public verifiability supporting data update, including Hao et al.’s proposal, require the data owner to publish some meta-data related to the stored data. We show that the auditor can tell whether or not a client has stored a specific file and link various parts of those files based solely on the published meta-data in Hao et al.’s protocol. In other words, the notion “privacy against third party verifiers” is not sufficient in protecting data privacy, and hence, we introduce “zero-knowledge privacy” to ensure the third party verifier learns nothing about the client’s data from all available information. We enhance the privacy of Hao et al.’s protocol, develop a prototype to evaluate the performance and perform experiment to demonstrate the practicality of our proposal.",
"title": ""
},
{
"docid": "3f207c3c622d1854a7ad6c5365354db1",
"text": "The field of Music Information Retrieval has always acknowledged the need for rigorous scientific evaluations, and several efforts have set out to develop and provide the infrastructure, technology and methodologies needed to carry out these evaluations. The community has enormously gained from these evaluation forums, but we have reached a point where we are stuck with evaluation frameworks that do not allow us to improve as much and as well as we want. The community recently acknowledged this problem and showed interest in addressing it, though it is not clear what to do to improve the situation. We argue that a good place to start is again the Text IR field. Based on a formalization of the evaluation process, this paper presents a survey of past evaluation work in the context of Text IR, from the point of view of validity, reliability and efficiency of the experiments. We show the problems that our community currently has in terms of evaluation, point to several lines of research to improve it and make various proposals in that line.",
"title": ""
}
] |
scidocsrr
|
e83088bb506326187a151acf48534dcf
|
Construal Levels and Psychological Distance: Effects on Representation, Prediction, Evaluation, and Behavior.
|
[
{
"docid": "e992ffd4ebbf9d096de092caf476e37d",
"text": "If self-regulation conforms to an energy or strength model, then self-control should be impaired by prior exertion. In Study 1, trying to regulate one's emotional response to an upsetting movie was followed by a decrease in physical stamina. In Study 2, suppressing forbidden thoughts led to a subsequent tendency to give up quickly on unsolvable anagrams. In Study 3, suppressing thoughts impaired subsequent efforts to control the expression of amusement and enjoyment. In Study 4, autobiographical accounts of successful versus failed emotional control linked prior regulatory demands and fatigue to self-regulatory failure. A strength model of self-regulation fits the data better than activation, priming, skill, or constant capacity models of self-regulation.",
"title": ""
}
] |
[
{
"docid": "50442aa4ef1d7c89822d77a5b3a0ee85",
"text": "The utilization of an AC induction motor (ACIM) ranges from consumer to automotive applications, with a variety of power and sizes. From the multitude of possible applications, some require the achievement of high speed while having a high torque value only at low speeds. Two applications needing this requirement are washing machines in consumer applications and traction in powertrain applications. These requirements impose a certain type of approach for induction motor control, which is known as “field weakening.”",
"title": ""
},
{
"docid": "49a2202592071a07109bd347563e4d6b",
"text": "To model deformation of anatomical shapes, non-linear statistics are required to take into account the non-linear structure of the data space. Computer implementations of non-linear statistics and differential geometry algorithms often lead to long and complex code sequences. The aim of the paper is to show how the Theano framework can be used for simple and concise implementation of complex differential geometry algorithms while being able to handle complex and high-dimensional data structures. We show how the Theano framework meets both of these requirements. The framework provides a symbolic language that allows mathematical equations to be directly translated into Theano code, and it is able to perform both fast CPU and GPU computations on highdimensional data. We show how different concepts from non-linear statistics and differential geometry can be implemented in Theano, and give examples of the implemented theory visualized on landmark representations of Corpus Callosum shapes.",
"title": ""
},
{
"docid": "6d2667dd550e14d4d46b24d9c8580106",
"text": "Deficits in gratification delay are associated with a broad range of public health problems, such as obesity, risky sexual behavior, and substance abuse. However, 6 decades of research on the construct has progressed less quickly than might be hoped, largely because of measurement issues. Although past research has implicated 5 domains of delay behavior, involving food, physical pleasures, social interactions, money, and achievement, no published measure to date has tapped all 5 components of the content domain. Existing measures have been criticized for limitations related to efficiency, reliability, and construct validity. Using an innovative Internet-mediated approach to survey construction, we developed the 35-item 5-factor Delaying Gratification Inventory (DGI). Evidence from 4 studies and a large, diverse sample of respondents (N = 10,741) provided support for the psychometric properties of the measure. Specifically, scores on the DGI demonstrated strong internal consistency and test-retest reliability for the 35-item composite, each of the 5 domains, and a 10-item short form. The 5-factor structure fit the data well and had good measurement invariance across subgroups. Construct validity was supported by correlations with scores on closely related self-control measures, behavioral ratings, Big Five personality trait measures, and measures of adjustment and psychopathology, including those on the Minnesota Multiphasic Personality Inventory-2-Restructured Form. DGI scores also showed incremental validity in accounting for well-being and health-related variables. The present investigation holds implications for improving public health, accelerating future research on gratification delay, and facilitating survey construction research more generally by demonstrating the suitability of an Internet-mediated strategy.",
"title": ""
},
{
"docid": "cf9d3c47ee93299f269484ffdbe44453",
"text": "As the complexity and variety of computer system hardware increases, its suitability as a pedagogical tool in computer organization/architecture courses diminishes. As a consequence, many instructors are turning to simulators as teaching aids, often using valuable teaching/research time to construct them. Many of these simulators have been made freely available on the Internet, providing a useful and time-saving resource for other instructors. However, finding the right simulator for a particular course or topic can itself be a time-consuming process. The goal of this paper is to provide an easy-to-use survey of free and Internet-accessible computer system simulators as a resource for all instructors of computer organization and computer architecture courses.",
"title": ""
},
{
"docid": "290869845a0ce3d1bf3722bfba7dd1c5",
"text": "Supplier selection is an important and widely studied topic since it has significant impact on purchasing management in supply chain. Recently, support vector machine has received much more attention from researchers, while studies on supplier selection based on it are few. In this paper, a new support vector machine technology, potential support vector machine, is introduced and then combined with decision tree to address issues on supplier selection including feature selection, multiclass classification and so on. So, hierarchical potential support vector machine and hierarchical system of features are put forward in the paper, and experiments show the proposed methodology has much better generalization performance and less computation consumptions than standard support vector machine. 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "0d5ca0e11363cae0b4d7f335cf832e24",
"text": "This paper presents an investigation into two fuzzy association rule mining models for enhancing prediction performance. The first model (the FCM-Apriori model) integrates Fuzzy C-Means (FCM) and the Apriori approach for road traffic performance prediction. FCM is used to define the membership functions of fuzzy sets and the Apriori approach is employed to identify the Fuzzy Association Rules (FARs). The proposed model extracts knowledge from a database for a Fuzzy Inference System (FIS) that can be used in prediction of a future value. The knowledge extraction process and the performance of the model are demonstrated through two case studies of road traffic data sets with different sizes. The experimental results show the merits and capability of the proposed KD model in FARs based knowledge extraction. The second model (the FCM-MSapriori model) integrates FCM and a Multiple Support Apriori (MSapriori) approach to extract the FARs. These FARs provide the knowledge base to be utilized within the FIS for prediction evaluation. Experimental results have shown that the FCM-MSapriori model predicted the future values effectively and outperformed the FCM-Apriori model and other models reported in the literature.",
"title": ""
},
{
"docid": "7834f32e3d6259f92f5e0beb3a53cc04",
"text": "An educational institution needs to have an approximate prior knowledge of enrolled students to predict their performance in future academics. This helps them to identify promising students and also provides them an opportunity to pay attention to and improve those who would probably get lower grades. As a solution, we have developed a system which can predict the performance of students from their previous performances using concepts of data mining techniques under Classification. We have analyzed the data set containing information about students, such as gender, marks scored in the board examinations of classes X and XII, marks and rank in entrance examinations and results in first year of the previous batch of students. By applying the ID3 (Iterative Dichotomiser 3) and C4.5 classification algorithms on this data, we have predicted the general and individual performance of freshly admitted students in future examinations.",
"title": ""
},
{
"docid": "e1885f9c373c355a4df9307c6d90bf83",
"text": "Ricinulei possess movable, slender pedipalps with small chelae. When ricinuleids walk, they occasionally touch the soil surface with the tips of their pedipalps. This behavior is similar to the exploration movements they perform with their elongated second legs. We studied the distal areas of the pedipalps of the cavernicolous Mexican species Pseudocellus pearsei with scanning and transmission electron microscopy. Five different surface structures are characteristic for the pedipalps: (1) slender sigmoidal setae with smooth shafts resembling gustatory terminal pore single-walled (tp-sw) sensilla; (2) conspicuous long, mechanoreceptive slit sensilla; (3) a single, short, clubbed seta inside a deep pit representing a no pore single walled (np-sw) sensillum; (4) a single pore organ containing one olfactory wall pore single-walled (wp-sw) sensillum; and (5) gustatory terminal pore sensilla in the fingers of the pedipalp chela. Additionally, the pedipalps bear sensilla which also occur on the other appendages. With this sensory equipment, the pedipalps are highly effective multimodal short range sensory organs which complement the long range sensory function of the second legs. In order to present the complete sensory equipment of all appendages of the investigated Pseudocellus a comparative overview is provided.",
"title": ""
},
{
"docid": "995376c324ff12a0be273e34f44056df",
"text": "Conventional Gabor representation and its extracted features often yield a fairly poor performance in retrieving the rotated and scaled versions of the texture image under query. To address this issue, existing methods exploit multiple stages of transformations for making rotation and/or scaling being invariant at the expense of high computational complexity and degraded retrieval performance. The latter is mainly due to the lost of image details after multiple transformations. In this paper, a rotation-invariant and a scale-invariant Gabor representations are proposed, where each representation only requires few summations on the conventional Gabor filter impulse responses. The optimum setting of the orientation parameter and scale parameter is experimentally determined over the Brodatz and MPEG-7 texture databases. Features are then extracted from these new representations for conducting rotation-invariant or scale-invariant texture image retrieval. Since the dimension of the new feature space is much reduced, this leads to a much smaller metadata storage space and faster on-line computation on the similarity measurement. Simulation results clearly show that our proposed invariant Gabor representations and their extracted invariant features significantly outperform the conventional Gabor representation approach for rotation-invariant and scale-invariant texture image retrieval. 2007 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "a5614379a447180fe0ab5ab83770dafb",
"text": "This paper presents a novel method for performing an efficient cost aggregation in stereo matching. The cost aggregation problem is re-formulated with a perspective of a histogram, and it gives us a potential to reduce the complexity of the cost aggregation significantly. Different from the previous methods which have tried to reduce the complexity in terms of the size of an image and a matching window, our approach focuses on reducing the computational redundancy which exists among the search range, caused by a repeated filtering for all disparity hypotheses. Moreover, we also reduce the complexity of the window-based filtering through an efficient sampling scheme inside the matching window. The trade-off between accuracy and complexity is extensively investigated into parameters used in the proposed method. Experimental results show that the proposed method provides high-quality disparity maps with low complexity. This work provides new insights into complexity-constrained stereo matching algorithm design.",
"title": ""
},
{
"docid": "c5a36e3b8196815fea6b5db825c09133",
"text": "In this paper, solutions for developing low cost electronics for antenna transceivers that take advantage of the stable electrical properties of the organic substrate liquid crystal polymer (LCP) has been presented. Three important ingredients in RF wireless transceivers namely embedded passives, a dual band filter and a RFid antenna have been designed and fabricated on LCP. Test results of all 3 of the structures show good agreement between the simulated and measured results over their respective bandwidths, demonstrating stable performance of the LCP substrate.",
"title": ""
},
{
"docid": "9faa8b39898eaa4ca0a0c23d29e7a0ff",
"text": "Highly emphasized in entrepreneurial practice, business models have received limited attention from researchers. No consensus exists regarding the definition, nature, structure, and evolution of business models. Still, the business model holds promise as a unifying unit of analysis that can facilitate theory development in entrepreneurship. This article synthesizes the literature and draws conclusions regarding a number of these core issues. Theoretical underpinnings of a firm's business model are explored. A sixcomponent framework is proposed for characterizing a business model, regardless of venture type. These components are applied at three different levels. The framework is illustrated using a successful mainstream company. Suggestions are made regarding the manner in which business models might be expected to emerge and evolve over time. a c Purchase Export",
"title": ""
},
{
"docid": "9ff22294cf279d757a84ae00d4e29473",
"text": "We usually endow the investigated objects with pairwise relationships, which can be illustrated as graphs. In many real-world problems, however, relationships among the objects of our interest are more complex than pairwise. Naively squeezing the complex relationships into pairwise ones will inevitably lead to loss of information which can be expected valuable for our learning tasks however. Therefore we consider using hypergraphs instead to completely represent complex relationships among the objects of our interest, and thus the problem of learning with hypergraphs arises. Our main contribution in this paper is to generalize the powerful methodology of spectral clustering which originally operates on undirected graphs to hypergraphs, and further develop algorithms for hypergraph embedding and transductive classification on the basis of the spectral hypergraph clustering approach. Our experiments on a number of benchmarks showed the advantages of hypergraphs over usual graphs.",
"title": ""
},
{
"docid": "dc33e4c6352c885fb27e08fa1c310fb3",
"text": "Association rule mining algorithm is used to extract relevant information from database and transmit into simple and easiest form. Association rule mining is used in large set of data. It is used for mining frequent item sets in the database or in data warehouse. It is also one type of data mining procedure. In this paper some of the association rule mining algorithms such as apriori, partition, FP-growth, genetic algorithm etc., can be analyzed for generating frequent itemset in an effective manner. These association rule mining algorithms may differ depend upon their performance and effective pattern generation. So, this paper may concentrate on some of the algorithms used to generate efficient frequent itemset using some of association rule mining algorithms.",
"title": ""
},
{
"docid": "7d713780dd3f7ad0abc5ec02f2a5d8f2",
"text": "Pelvic discontinuity is a challenging complication encountered during revision total hip arthroplasty. Pelvic discontinuity is defined as a separation of the ilium superiorly from the ischiopubic segment inferiorly and is typically a chronic condition in failed total hip arthroplasties in the setting of bone loss. After a history and a physical examination have been completed and infection has been ruled out, appropriate imaging must be obtained, including plain hip radiographs, oblique Judet radiographs, and often a CT scan. The main management options are a hemispheric acetabular component with posterior column plating, a cup-cage construct, pelvic distraction, and a custom triflange construct. The techniques have unique pros and cons, but the goals are to obtain stable and durable acetabular component fixation and a healed or unitized pelvis while minimizing complications.",
"title": ""
},
{
"docid": "81ec51ca319ab957c0e951c9de31859c",
"text": "Photography has been striving to capture an ever increasing amount of visual information in a single image. Digital sensors, however, are limited to recording a small subset of the desired information at each pixel. A common approach to overcoming the limitations of sensing hardware is the optical multiplexing of high-dimensional data into a photograph. While this is a well-studied topic for imaging with color filter arrays, we develop a mathematical framework that generalizes multiplexed imaging to all dimensions of the plenoptic function. This framework unifies a wide variety of existing approaches to analyze and reconstruct multiplexed data in either the spatial or the frequency domain. We demonstrate many practical applications of our framework including high-quality light field reconstruction, the first comparative noise analysis of light field attenuation masks, and an analysis of aliasing in multiplexing applications.",
"title": ""
},
{
"docid": "f43d024b61620a19cfbc3d76b6253332",
"text": "Equipped with sensors that are capable of collecting physiological and environmental data continuously, wearable technologies have the potential to become a valuable component of personalized healthcare and health management. However, in addition to the potential benefits of wearable devices, the widespread and continuous use of wearables also poses many privacy challenges. In some instances, users may not be aware of the risks associated with wearable devices, while in other cases, users may be aware of the privacy-related risks, but may beunable to negotiate complicated privacy settings to meet their needs and preferences. This lack of awareness could have an adverse impact on users in the future, even becoming a \"skeleton in the closet.\" In this work, we conducted 32 semi-structured interviews to understand how users perceive privacy in wearable computing. Results suggest that user concerns toward wearable privacy have different levels of variety ranging from no concern to highly concerned. In addition, while user concerns and benefits are similar among participants in our study, these variablesshould be investigated more extensively for the development of privacy enhanced wearable technologies.",
"title": ""
},
{
"docid": "bc90b1e4d456ca75b38105cc90d7d51d",
"text": "Choosing a cloud storage system and specific operations for reading and writing data requires developers to make decisions that trade off consistency for availability and performance. Applications may be locked into a choice that is not ideal for all clients and changing conditions. Pileus is a replicated key-value store that allows applications to declare their consistency and latency priorities via consistency-based service level agreements (SLAs). It dynamically selects which servers to access in order to deliver the best service given the current configuration and system conditions. In application-specific SLAs, developers can request both strong and eventual consistency as well as intermediate guarantees such as read-my-writes. Evaluations running on a worldwide test bed with geo-replicated data show that the system adapts to varying client-server latencies to provide service that matches or exceeds the best static consistency choice and server selection scheme.",
"title": ""
},
{
"docid": "e7bfafee5cfaaa1a6a41ae61bdee753d",
"text": "Borderline personality disorder (BPD) has been shown to be a valid and reliable diagnosis in adolescents and associated with a decrease in both general and social functioning. With evidence linking BPD in adolescents to poor prognosis, it is important to develop a better understanding of factors and mechanisms contributing to the development of BPD. This could potentially enhance our knowledge and facilitate the design of novel treatment programs and interventions for this group. In this paper, we outline a theoretical model of BPD in adolescents linking the original mentalization-based theory of BPD, with recent extensions of the theory that focuses on hypermentalizing and epistemic trust. We then provide clinical case vignettes to illustrate this extended theoretical model of BPD. Furthermore, we suggest a treatment approach to BPD in adolescents that focuses on the reduction of hypermentalizing and epistemic mistrust. We conclude with an integration of theory and practice in the final section of the paper and make recommendations for future work in this area. (PsycINFO Database Record",
"title": ""
},
{
"docid": "8f7368daec71ccb4b5c5a2daebda07be",
"text": "This paper presents a novel inkjet-printed humidity sensor tag for passive radio-frequency identification (RFID) systems operating at ultrahigh frequencies (UHFs). During recent years, various humidity sensors have been developed by researchers around the world for HF and UHF RFID systems. However, to our best knowledge, the humidity sensor presented in this paper is one of the first passive UHF RFID humidity sensor tags fabricated using inkjet technology. This paper describes the structure and operation principle of the sensor tag as well as discusses the method of performing humidity measurements in practice. Furthermore, measurement results are presented, which include air humidity-sensitivity characterization and tag identification performance measurements.",
"title": ""
}
] |
scidocsrr
|
9bde3ea71000d86a220bf8ce8bcb40c2
|
Earthquake detection through computationally efficient similarity search
|
[
{
"docid": "62376954e4974ea2d52e96b373c67d8a",
"text": "Imagine the following situation. You’re in your car, listening to the radio and suddenly you hear a song that catches your attention. It’s the best new song you have heard for a long time, but you missed the announcement and don’t recognize the artist. Still, you would like to know more about this music. What should you do? You could call the radio station, but that’s too cumbersome. Wouldn’t it be nice if you could push a few buttons on your mobile phone and a few seconds later the phone would respond with the name of the artist and the title of the music you’re listening to? Perhaps even sending an email to your default email address with some supplemental information. In this paper we present an audio fingerprinting system, which makes the above scenario possible. By using the fingerprint of an unknown audio clip as a query on a fingerprint database, which contains the fingerprints of a large library of songs, the audio clip can be identified. At the core of the presented system are a highly robust fingerprint extraction method and a very efficient fingerprint search strategy, which enables searching a large fingerprint database with only limited computing resources.",
"title": ""
}
] |
[
{
"docid": "ede29bc41058b246ceb451d5605cce2c",
"text": "Knowledge graphs have challenged the existing embedding-based approaches for representing their multifacetedness. To address some of the issues, we have investigated some novel approaches that (i) capture the multilingual transitions on different language-specific versions of knowledge, and (ii) encode the commonly existing monolingual knowledge with important relational properties and hierarchies. In addition, we propose the use of our approaches in a wide spectrum of NLP tasks that have not been well explored by related works.",
"title": ""
},
{
"docid": "8de601698db75c865bb84f69e48b399c",
"text": "Increasingly, software systems should self-adapt to satisfy new requirements and environmental conditions that may arise after deployment. Due to their high complexity, adaptive programs are difficult to specify, design, verify, and validate. Moreover, the current lack of reusable design expertise that can be leveraged from one adaptive system to another further exacerbates the problem. We studied over thirty adaptation-related research and project implementations available from the literature and open sources to harvest adaptation-oriented design patterns that support the development of adaptive systems. These adaptation-oriented patterns facilitate the separate development of the functional and adaptive logic. In order to support the assurance of adaptive systems, each design pattern includes templates that formally specify invariant properties of adaptive systems. To demonstrate their usefulness, we have applied a subset of our adaptation-oriented patterns to the design and implementation of ZAP.com, an adaptive news web server.",
"title": ""
},
{
"docid": "e929f0dfd36463c2ab251684c6cbfda1",
"text": "Lignocellulose—a major component of biomass available on earth is a renewable and abundantly available with great potential for bioconversion to value-added bio-products. The review aims at physio-chemical features of lignocellulosic biomass and composition of different lignocellulosic materials. This work is an overview about the conversion of lignocellulosic biomass into bio-energy products such as bio-ethanol, 1-butanol, bio-methane, bio-hydrogen, organic acids including citric acid, succinic acid and lactic acid, microbial polysaccharides, single cell protein and xylitol. The biotechnological aspect of bio-transformation of lignocelluloses research and its future prospects are also discussed.",
"title": ""
},
{
"docid": "f8fe22b2801a250a52e3d19ae23804e9",
"text": "Human movements contribute to the transmission of malaria on spatial scales that exceed the limits of mosquito dispersal. Identifying the sources and sinks of imported infections due to human travel and locating high-risk sites of parasite importation could greatly improve malaria control programs. Here, we use spatially explicit mobile phone data and malaria prevalence information from Kenya to identify the dynamics of human carriers that drive parasite importation between regions. Our analysis identifies importation routes that contribute to malaria epidemiology on regional spatial scales.",
"title": ""
},
{
"docid": "d3c059d0889fc390a91d58aa82980fcc",
"text": "In recent trends industries, organizations and many companies are using personal identification strategies like finger print identification, RFID for tracking attendance and etc. Among of all these personal identification strategies face recognition is most natural, less time taken and high efficient one. It’s has several applications in attendance management systems and security systems. The main strategy involve in this paper is taking attendance in organizations, industries and etc. using face detection and recognition technology. A time period is settled for taking the attendance and after completion of time period attendance will directly stores into storage device mechanically without any human intervention. A message will send to absent student parent mobile using GSM technology. This attendance will be uploaded into web server using Ethernet. This raspberry pi 2 module is used in this system to achieve high speed of operation. Camera is interfaced to one USB port of raspberry pi 2. Eigen faces algorithm is used for face detection and recognition technology. Eigen faces algorithm is less time taken and high effective than other algorithms like viola-jones algorithm etc. the attendance will directly stores in storage device like pen drive that is connected to one of the USB port of raspberry pi 2. This system is most effective, easy and less time taken for tracking attendance in organizations with period wise without any human intervention.",
"title": ""
},
{
"docid": "bf1de492c30bd667711fe99bab58fb63",
"text": "The healthcare milieu of most developing countries is often characterized by multiplicity of health programs supported by myriad of donors geared towards reversing disease trends in these countries. However, donor policies tend to support implementation of vertical programs which maintain their own management structures and information systems. The emerging picture overtime is proliferation of multiple and uncoordinated health information systems (HIS), that are often in conflict with the primary health care goals of integrated district based health information systems. As a step towards HIS strengthening, most countries are pursuing an integration strategy of the vertical HIS. Nevertheless, the challenges presented by the vertical reporting HIS reinforced by funds from the donors renders the integration initiatives ineffective, some ending up as total failure or as mere pilot projects. The failure of the systems after implementation transcends technical fixes. This paper drew on an empirical case to analyze the challenges associated with the effort to integrate the HIS in a context characterized by multiple vertical health programs. The study revealed the tensions that exists between the ministry of health which strived to standardize and integrate the HIS and the vertical programs which pushed the agenda to maintain their systems alongside the national HIS. However, as implied from the study, attaining integration entails the ability to strike a balance between the two forces, which can be achieved by strengthening communication and collaboration linkages between the stakeholders.",
"title": ""
},
{
"docid": "381e173e41b085ad7a4a30e84b1d37dc",
"text": "Monarch butterfly optimization (MBO) is a new metaheuristic algorithm mimics the migration of butterflies from northern USA to Mexico. In MBO, there are mainly two processes. In the first process, the algorithm emulates how some of the butterflies move from the current position to the new position by the migration operator. In the latter process, the algorithm tunes the position of other butterflies by adjusting operator. In order to enhance the search ability of MBO, an innovation method called MBHS is introduced to tackle the optimization problem. In MBHS, the harmony search (HS) adds mutation operators to the process of adjusting operator to enhance the exploitation, exploration ability, and speed up the convergence rate of MBO. For the purpose to validate the performance of MBHS, 14 benchmark functions are used, and the performance is compared with well-known search algorithms. The experimental results demonstrate that MBHS performs better than the basic MBO and other algorithms.",
"title": ""
},
{
"docid": "10d14531df9190f5ffb217406fe8eb49",
"text": "Web technology has enabled e-commerce. However, in our review of the literature, we found little research on how firms can better position themselves when adopting e-commerce for revenue generation. Drawing upon technology diffusion theory, we developed a conceptual model for assessing e-commerce adoption and migration, incorporating six factors unique to e-commerce. A series of propositions were then developed. Survey data of 1036 firms in a broad range of industries were collected and used to test our model. Our analysis based on multi-nominal logistic regression demonstrated that technology integration, web functionalities, web spending, and partner usage were significant adoption predictors. The model showed that these variables could successfully differentiate non-adopters from adopters. Further, the migration model demonstrated that web functionalities, web spending, and integration of externally oriented inter-organizational systems tend to be the most influential drivers in firms’ migration toward e-commerce, while firm size, partner usage, electronic data interchange (EDI) usage, and perceived obstacles were found to negatively affect ecommerce migration. This suggests that large firms, as well as those that have been relying on outsourcing or EDI, tended to be slow to migrate to the internet platform. # 2005 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "1501d5173376a06a3b9c30c617abfe31",
"text": "^^ir jEdmund Hillary of Mount Everest \\ fajne liked to tell a story about one of ^J Captain Robert Falcon Scott's earlier attempts, from 1901 to 1904, to reach the South Pole. Scott led an expedition made up of men from thb Royal Navy and the merchant marine, as jwell as a group of scientists. Scott had considel'able trouble dealing with the merchant n|arine personnel, who were unaccustomed ip the rigid discipline of Scott's Royal Navy. S|:ott wanted to send one seaman home because he would not take orders, but the seaman refused, arguing that he had signed a contract and knew his rights. Since the seaman wds not subject to Royal Navy disciplinary action, Scott did not know what to do. Then Ernest Shackleton, a merchant navy officer in $cott's party, calmly informed the seaman th^t he, the seaman, was returning to Britain. Again the seaman refused —and Shackle^on knocked him to the ship's deck. After ar^other refusal, followed by a second flooring, the seaman decided he would retuijn home. Scott later became one of the victims of his own inadequacies as a leader in his 1911 race to the South Pole. Shackleton went qn to lead many memorable expeditions; once, seeking help for the rest of his party, who were stranded on the Antarctic Coast, he journeyed with a small crew in a small open boat from the edge of Antarctica to Souilh Georgia Island.",
"title": ""
},
{
"docid": "b72d0d187fe12d1f006c8e17834af60e",
"text": "Pseudoangiomatous stromal hyperplasia (PASH) is a rare benign mesenchymal proliferative lesion of the breast. In this study, we aimed to show a case of PASH with mammographic and sonographic features, which fulfill the criteria for benign lesions and to define its recently discovered elastography findings. A 49-year-old premenopausal female presented with breast pain in our outpatient surgery clinic. In ultrasound images, a hypoechoic solid mass located at the 3 o'clock position in the periareolar region of the right breast was observed. Due to it was not detected on earlier mammographies, the patient underwent a tru-cut biopsy, although the mass fulfilled the criteria for benign lesions on mammography, ultrasound, and elastography. Elastography is a new technique differentiating between benign and malignant lesions. It is also useful to determine whether a biopsy is necessary or follow-up is sufficient.",
"title": ""
},
{
"docid": "7c106fc6fc05ec2d35b89a1dec8e2ca2",
"text": "OBJECTIVE\nCurrent estimates of the prevalence of depression during pregnancy vary widely. A more precise estimate is required to identify the level of disease burden and develop strategies for managing depressive disorders. The objective of this study was to estimate the prevalence of depression during pregnancy by trimester, as detected by validated screening instruments (ie, Beck Depression Inventory, Edinburgh Postnatal Depression Score) and structured interviews, and to compare the rates among instruments.\n\n\nDATA SOURCES\nObservational studies and surveys were searched in MEDLINE from 1966, CINAHL from 1982, EMBASE from 1980, and HealthSTAR from 1975.\n\n\nMETHODS OF STUDY SELECTION\nA validated study selection/data extraction form detailed acceptance criteria. Numbers and percentages of depressed patients, by weeks of gestation or trimester, were reported.\n\n\nTABULATION, INTEGRATION, AND RESULTS\nTwo reviewers independently extracted data; a third party resolved disagreement. Two raters assessed quality by using a 12-point checklist. A random effects meta-analytic model produced point estimates and 95% confidence intervals (CIs). Heterogeneity was examined with the chi(2) test (no systematic bias detected). Funnel plots and Begg-Mazumdar test were used to assess publication bias (none found). Of 714 articles identified, 21 (19,284 patients) met the study criteria. Quality scores averaged 62%. Prevalence rates (95% CIs) were 7.4% (2.2, 12.6), 12.8% (10.7, 14.8), and 12.0% (7.4, 16.7) for the first, second, and third trimesters, respectively. Structured interviews found lower rates than the Beck Depression Inventory but not the Edinburgh Postnatal Depression Scale.\n\n\nCONCLUSION\nRates of depression, especially during the second and third trimesters of pregnancy, are substantial. Clinical and economic studies to estimate maternal and fetal consequences are needed.",
"title": ""
},
{
"docid": "d1756aa5f0885157bdad130d96350cd3",
"text": "In this paper, we describe the winning approach for the RecSys Challenge 2015. Our key points are (1) two-stage classification, (2) massive usage of categorical features, (3) strong classifiers built by gradient boosting and (4) threshold optimization based directly on the competition score. We describe our approach and discuss how it can be used to build scalable personalization systems.",
"title": ""
},
{
"docid": "5635f52c3e02fd9e9ea54c9ea1ff0329",
"text": "As a digital version of word-of-mouth, online review has become a major information source for consumers and has very important implications for a wide range of management activities. While some researchers focus their studies on the impact of online product review on sales, an important assumption remains unexamined, that is, can online product review reveal the true quality of the product? To test the validity of this key assumption, this paper first empirically tests the underlying distribution of online reviews with data from Amazon. The results show that 53% of the products have a bimodal and non-normal distribution. For these products, the average score does not necessarily reveal the product's true quality and may provide misleading recommendations. Then this paper derives an analytical model to explain when the mean can serve as a valid representation of a product's true quality, and discusses its implication on marketing practices.",
"title": ""
},
{
"docid": "9b6a16b84d4aadf582c16a8adb4e4830",
"text": "This paper presents a new in-vehicle real-time vehicle detection strategy which hypothesizes the presence of vehicles in rectangular sub-regions based on the robust classification of features vectors result of a combination of multiple morphological vehicle features. One vector is extracted for each region of the image likely containing vehicles as a multidimensional likelihood measure with respect to a simplified vehicle model. A supervised training phase set the representative vectors of the classes vehicle and non-vehicle, so that the hypothesis is verified or not according to the Mahalanobis distance between the feature vector and the representative vectors. Excellent results have been obtained in several video sequences accurately detecting vehicles with very different aspect-ratio, color, size, etc, while minimizing the number of missing detections and false alarms.",
"title": ""
},
{
"docid": "a429888416cd5c175f3fb2ac90350a06",
"text": "Recent years, Software Defined Routers (SDRs) (programmable routers) have emerged as a viable solution to provide a cost-effective packet processing platform with easy extensibility and programmability. Multi-core platforms significantly promote SDRs’ parallel computing capacities, enabling them to adopt artificial intelligent techniques, i.e., deep learning, to manage routing paths. In this paper, we explore new opportunities in packet processing with deep learning to inexpensively shift the computing needs from rule-based route computation to deep learning based route estimation for high-throughput packet processing. Even though deep learning techniques have been extensively exploited in various computing areas, researchers have, to date, not been able to effectively utilize deep learning based route computation for high-speed core networks. We envision a supervised deep learning system to construct the routing tables and show how the proposed method can be integrated with programmable routers using both Central Processing Units (CPUs) and Graphics Processing Units (GPUs). We demonstrate how our uniquely characterized input and output traffic patterns can enhance the route computation of the deep learning based SDRs through both analysis and extensive computer simulations. In particular, the simulation results demonstrate that our proposal outperforms the benchmark method in terms of delay, throughput, and signaling overhead.",
"title": ""
},
{
"docid": "0a63bb79988efa4cc26dcb66647617a0",
"text": "Physical activity is one of the most promising nonpharmacological, noninvasive, and cost-effective methods of health-promotion, yet statistics show that only a small percentage of middle-aged and older adults engage in the recommended amount of regular exercise. This state of affairs is less likely due to a lack of knowledge about the benefits of exercise than to failures of motivation and self-regulatory mechanisms. Many types of intervention programs target exercise in later life, but they typically do not achieve sustained behavior change, and there has been very little increase in the exercise rate in the population over the last decade. The goal of this paper is to consider the use of effective low-cost motivational and behavioral strategies for increasing physical activity, which could have far-reaching benefits at the individual and population levels. We present a multicomponent framework to guide development of behavior change interventions to increase and maintain physical activity among sedentary adults and others at risk for health problems. This involves a personalized approach to motivation and behavior change, which includes social support, goal setting, and positive affect coupled with cognitive restructuring of negative and self-defeating attitudes and misconceptions. These strategies can lead to increases in exercise self-efficacy and control beliefs as well as self- management skills such as self-regulation and action planning, which in turn are expected to lead to long-term increases in activity. These changes in activity frequency and intensity can ultimately lead to improvements in physical and psychological well-being among middle-aged and older adults, including those from underserved, vulnerable populations. Even a modest increase in physical activity can have a significant impact on health and quality of life. Recommendations for future interventions include a focus on ways to achieve personalized approaches, broad outreach, and maintenance of behavior changes.",
"title": ""
},
{
"docid": "1c603902fb684005869d19be91970dd4",
"text": "Topic: A study to assess the knowledge of Cardiac Nurses about commonly administered drugs in Cardiac Surgical ICU. Nurses are responsible for preparing and administering potent drugs that affects the patient's cardiovascular functions. Nurses should be competent enough in medicine administration to prevent medication errors. Each nurse should be aware of indication, action, contraindications, adverse reactions and interactions of drugs. OBJECTIVES: -1. To identify knowledge about commonly administered drugs in Cardiac Surgical ICU among Cardiac Nurses. 2. To identify the relationship between knowledge level about commonly administered drugs in Cardiac Surgical ICU and selected variables. METHODS: -Pilot study was done in 5 cardiac speciality nursing students, then 25 cardiac nurses were selected randomly from the CSICU including permanent @ temporary registered nurses for the study; Convenient sampling technique was used for selecting the sample. Total period of study was from August 2011 to October 2011. A self-administered questionnaire was used in the form of multiple choices. RESULTS: Study shows that 3% of the sample had poor knowledge, 23% had average knowledge, 57% had fair knowledge and 17% had good knowledge about commonly administered drugs in CSICU. There was no statistically significant difference when comparing the mean knowledge score with age, professional qualification, year of experience and CPCR training programme attended. There was statistically significant higher knowledge score in nurses with increase in ICU experience. CONCLUSION: -Majority of cardiac nurses have above average knowledge about commonly administered drugs in CSICU.",
"title": ""
},
{
"docid": "3815a705e2bb17300e29f08d3ad12657",
"text": "We introduce an efficient preprocessing algorithm to reduce the number of cells in a filtered cell complex while preserving its persistent homology groups. The technique is based on an extension of combinatorial Morse theory from complexes to filtrations.",
"title": ""
},
{
"docid": "998fe25641f4f6dc6649b02226c5e86a",
"text": "We present the malicious administrator problem, in which one or more network administrators attempt to damage routing, forwarding, or network availability by misconfiguring controllers. While this threat vector has been acknowledged in previous work, most solutions have focused on enforcing specific policies for forwarding rules. We present a definition of this problem and a controller design called Fleet that makes a first step towards addressing this problem. We present two protocols that can be used with the Fleet controller, and argue that its lower layer deployed on top of switches eliminates many problems of using multiple controllers in SDNs. We then present a prototype simulation and show that as long as a majority of non-malicious administrators exists, we can usually recover from link failures within several seconds (a time dominated by failure detection speed and inter-administrator latency).",
"title": ""
}
] |
scidocsrr
|
6970436fc7413a5cf5b1ee436a820561
|
BabelRelate! A Joint Multilingual Approach to Computing Semantic Relatedness
|
[
{
"docid": "86820c43e63066930120fa5725b5b56d",
"text": "We introduce Wiktionary as an emerging lexical semantic resource that can be used as a substitute for expert-made resources in AI applications. We evaluate Wiktionary on the pervasive task of computing semantic relatedness for English and German by means of correlation with human rankings and solving word choice problems. For the first time, we apply a concept vector based measure to a set of different concept representations like Wiktionary pseudo glosses, the first paragraph of Wikipedia articles, English WordNet glosses, and GermaNet pseudo glosses. We show that: (i) Wiktionary is the best lexical semantic resource in the ranking task and performs comparably to other resources in the word choice task, and (ii) the concept vector based approach yields the best results on all datasets in both evaluations.",
"title": ""
}
] |
[
{
"docid": "a2622b1e0c1c58a535ec11a5075d1222",
"text": "The condition of a machine can automatically be identified by creating and classifying features that summarize characteristics of measured signals. Currently, experts, in their respective fields, devise these features based on their knowledge. Hence, the performance and usefulness depends on the expert's knowledge of the underlying physics or statistics. Furthermore, if new and additional conditions should be detectable, experts have to implement new feature extraction methods. To mitigate the drawbacks of feature engineering, a method from the subfield of feature learning, i.e., deep learning (DL), more specifically convolutional neural networks (NNs), is researched in this paper. The objective of this paper is to investigate if and how DL can be applied to infrared thermal (IRT) video to automatically determine the condition of the machine. By applying this method on IRT data in two use cases, i.e., machine-fault detection and oil-level prediction, we show that the proposed system is able to detect many conditions in rotating machinery very accurately (i.e., 95 and 91.67% accuracy for the respective use cases), without requiring any detailed knowledge about the underlying physics, and thus having the potential to significantly simplify condition monitoring using complex sensor data. Furthermore, we show that by using the trained NNs, important regions in the IRT images can be identified related to specific conditions, which can potentially lead to new physical insights.",
"title": ""
},
{
"docid": "2fd06457db3dfb09af108d22607a923d",
"text": "An analysis of an on-chip buck converter is presented in this paper. A high switching frequency is the key design parameter that simultaneously permits monolithic integration and high efficiency. A model of the parasitic impedances of a buck converter is developed. With this model, a design space is determined that allows integration of active and passive devices on the same die for a target technology. An efficiency of 88.4% at a switching frequency of 477 MHz is demonstrated for a voltage conversion from 1.2–0.9 volts while supplying 9.5 A average current. The area occupied by the buck converter is 12.6 mm assuming an 80-nm CMOS technology. An estimate of the efficiency is shown to be within 2.4% of simulation at the target design point. Full integration of a high-efficiency buck converter on the same die with a dualmicroprocessor is demonstrated to be feasible.",
"title": ""
},
{
"docid": "6ee0c9832d82d6ada59025d1c7bb540e",
"text": "Advances in computational linguistics and discourse processing have made it possible to automate many language- and text-processing mechanisms. We have developed a computer tool called Coh-Metrix, which analyzes texts on over 200 measures of cohesion, language, and readability. Its modules use lexicons, part-of-speech classifiers, syntactic parsers, templates, corpora, latent semantic analysis, and other components that are widely used in computational linguistics. After the user enters an English text, CohMetrix returns measures requested by the user. In addition, a facility allows the user to store the results of these analyses in data files (such as Text, Excel, and SPSS). Standard text readability formulas scale texts on difficulty by relying on word length and sentence length, whereas Coh-Metrix is sensitive to cohesion relations, world knowledge, and language and discourse characteristics.",
"title": ""
},
{
"docid": "41c35407c55878910f5dfc2dfe083955",
"text": "This work deals with several aspects concerning the formal verification of SN P systems and the computing power of some variants. A methodology based on the information given by the transition diagram associated with an SN P system is presented. The analysis of the diagram cycles codifies invariants formulae which enable us to establish the soundness and completeness of the system with respect to the problem it tries to resolve. We also study the universality of asynchronous and sequential SN P systems and the capability these models have to generate certain classes of languages. Further, by making a slight modification to the standard SN P systems, we introduce a new variant of SN P systems with a special I/O mode, called SN P modules, and study their computing power. It is demonstrated that, as string language acceptors and transducers, SN P modules can simulate several types of computing devices such as finite automata, a-finite transducers, and systolic trellis automata.",
"title": ""
},
{
"docid": "0b18f7966a57e266487023d3a2f3549d",
"text": "A clear andpowerfulformalism for describing languages, both natural and artificial, follows f iom a method for expressing grammars in logic due to Colmerauer and Kowalski. This formalism, which is a natural extension o f context-free grammars, we call \"definite clause grammars\" (DCGs). A DCG provides not only a description of a language, but also an effective means for analysing strings o f that language, since the DCG, as it stands, is an executable program o f the programming language Prolog. Using a standard Prolog compiler, the DCG can be compiled into efficient code, making it feasible to implement practical language analysers directly as DCGs. This paper compares DCGs with the successful and widely used augmented transition network (ATN) formalism, and indicates how ATNs can be translated into DCGs. It is argued that DCGs can be at least as efficient as ATNs, whilst the DCG formalism is clearer, more concise and in practice more powerful",
"title": ""
},
{
"docid": "08255cbafcf9a3dd9dd9d084c1de543e",
"text": "The sustained growth of data traffic volume calls for an introduction of an efficient and scalable transport platform for links of 100 Gb/s and beyond in the future optical network. In this article, after briefly reviewing the existing major technology options, we propose a novel, spectrum- efficient, and scalable optical transport network architecture called SLICE. The SLICE architecture enables sub-wavelength, superwavelength, and multiple-rate data traffic accommodation in a highly spectrum-efficient manner, thereby providing a fractional bandwidth service. Dynamic bandwidth variation of elastic optical paths provides network operators with new business opportunities offering cost-effective and highly available connectivity services through time-dependent bandwidth sharing, energy-efficient network operation, and highly survivable restoration with bandwidth squeezing. We also discuss an optical orthogonal frequency-division multiplexing-based flexible-rate transponder and a bandwidth-variable wavelength cross-connect as the enabling technologies of SLICE concept. Finally, we present the performance evaluation and technical challenges that arise in this new network architecture.",
"title": ""
},
{
"docid": "7e2c5184ca6c738f3db3c0ada7cdf37a",
"text": "DNA microarray technology has led to an explosion of oncogenomic analyses, generating a wealth of data and uncovering the complex gene expression patterns of cancer. Unfortunately, due to the lack of a unifying bioinformatic resource, the majority of these data sit stagnant and disjointed following publication, massively underutilized by the cancer research community. Here, we present ONCOMINE, a cancer microarray database and web-based data-mining platform aimed at facilitating discovery from genome-wide expression analyses. To date, ONCOMINE contains 65 gene expression datasets comprising nearly 48 million gene expression measurements form over 4700 microarray experiments. Differential expression analyses comparing most major types of cancer with respective normal tissues as well as a variety of cancer subtypes and clinical-based and pathology-based analyses are available for exploration. Data can be queried and visualized for a selected gene across all analyses or for multiple genes in a selected analysis. Furthermore, gene sets can be limited to clinically important annotations including secreted, kinase, membrane, and known gene-drug target pairs to facilitate the discovery of novel biomarkers and therapeutic targets.",
"title": ""
},
{
"docid": "66f3db25d6cb91556b6dbfd5c0d2bf41",
"text": "Many real-world applications wish to collect tamperevident logs for forensic purposes. This paper considers the case of an untrusted logger, serving a number of clients who wish to store their events in the log, and kept honest by a number of auditors who will challenge the logger to prove its correct behavior. We propose semantics of tamper-evident logs in terms of this auditing process. The logger must be able to prove that individual logged events are still present, and that the log, as seen now, is consistent with how it was seen in the past. To accomplish this efficiently, we describe a tree-based data structure that can generate such proofs with logarithmic size and space, improving over previous linear constructions. Where a classic hash chain might require an 800 MB trace to prove that a randomly chosen event is in a log with 80 million events, our prototype returns a 3 KB proof with the same semantics. We also present a flexible mechanism for the log server to present authenticated and tamper-evident search results for all events matching a predicate. This can allow large-scale log servers to selectively delete old events, in an agreed-upon fashion, while generating efficient proofs that no inappropriate events were deleted. We describe a prototype implementation and measure its performance on an 80 million event syslog trace at 1,750 events per second using a single CPU core. Performance improves to 10,500 events per second if cryptographic signatures are offloaded, corresponding to 1.1 TB of logging throughput per week.",
"title": ""
},
{
"docid": "699f4b29e480d89b158326ec4c778f7b",
"text": "Much attention is currently being paid in both the academic and practitioner literatures to the value that organisations could create through the use of big data and business analytics (Gillon et al, 2012; Mithas et al, 2013). For instance, Chen et al (2012, p. 1166–1168) suggest that business analytics and related technologies can help organisations to ‘better understand its business and markets’ and ‘leverage opportunities presented by abundant data and domain-specific analytics’. Similarly, LaValle et al (2011, p. 22) report that topperforming organisations ‘make decisions based on rigorous analysis at more than double the rate of lower performing organisations’ and that in such organisations analytic insight is being used to ‘guide both future strategies and day-to-day operations’. We argue here that while there is some evidence that investments in business analytics can create value, the thesis that ‘business analytics leads to value’ needs deeper analysis. In particular, we argue here that the roles of organisational decision-making processes, including resource allocation processes and resource orchestration processes (Helfat et al, 2007; Teece, 2009), need to be better understood in order to understand how organisations can create value from the use of business analytics. Specifically, we propose that the firstorder effects of business analytics are likely to be on decision-making processes and that improvements in organisational performance are likely to be an outcome of superior decision-making processes enabled by business analytics. This paper is set out as follows. Below, we identify prior research traditions in the Information Systems (IS) literature that discuss the potential of data and analytics to create value. This is to put into perspective the current excitement around ‘analytics’ and ‘big data’, and to position those topics within prior research traditions. We then draw on a number of existing literatures to develop a research agenda to understand the relationship between business analytics, decision-making processes and organisational performance. Finally, we discuss how the three papers in this Special Issue advance the research agenda. Disciplines Engineering | Science and Technology Studies Publication Details Sharma, R., Mithas, S. and Kankanhalli, A. (2014). Transforming decision-making processes: a research agenda for understanding the impact of business analytics on organisations. European Journal of Information Systems, 23 (4), 433-441. This journal article is available at Research Online: http://ro.uow.edu.au/eispapers/3231 EJISEditorialFinal 16 May 2014 RS.docx 1 of 17",
"title": ""
},
{
"docid": "7babd48cd74c959c6630a7bc8d1150d7",
"text": "This paper discusses a novel hybrid approach for text categorization that combines a machine learning algorithm, which provides a base model trained with a labeled corpus, with a rule-based expert system, which is used to improve the results provided by the previous classifier, by filtering false positives and dealing with false negatives. The main advantage is that the system can be easily fine-tuned by adding specific rules for those noisy or conflicting categories that have not been successfully trained. We also describe an implementation based on k-Nearest Neighbor and a simple rule language to express lists of positive, negative and relevant (multiword) terms appearing in the input text. The system is evaluated in several scenarios, including the popular Reuters-21578 news corpus for comparison to other approaches, and categorization using IPTC metadata, EUROVOC thesaurus and others. Results show that this approach achieves a precision that is comparable to top ranked methods, with the added value that it does not require a demanding human expert workload to train.",
"title": ""
},
{
"docid": "1979fa5a3384477602c0e81ba62199da",
"text": "Language style transfer is the problem of migrating the content of a source sentence to a target style. In many of its applications, parallel training data are not available and source sentences to be transferred may have arbitrary and unknown styles. Under this problem setting, we propose an encoder-decoder framework. First, each sentence is encoded into its content and style latent representations. Then, by recombining the content with the target style, we decode a sentence aligned in the target domain. To adequately constrain the encoding and decoding functions, we couple them with two loss functions. The first is a style discrepancy loss, enforcing that the style representation accurately encodes the style information guided by the discrepancy between the sentence style and the target style. The second is a cycle consistency loss, which ensures that the transferred sentence should preserve the content of the original sentence disentangled from its style. We validate the effectiveness of our model in three tasks: sentiment modification of restaurant reviews, dialog response revision with a romantic style, and sentence rewriting with a Shakespearean style.",
"title": ""
},
{
"docid": "627b14801c8728adf02b75e8eb62896f",
"text": "In the 45 years since Cattell used English trait terms to begin the formulation of his \"description of personality,\" a number of investigators have proposed an alternative structure based on 5 orthogonal factors. The generality of this 5-factor model is here demonstrated across unusually comprehensive sets of trait terms. In the first of 3 studies, 1,431 trait adjectives grouped into 75 clusters were analyzed; virtually identical structures emerged in 10 replications, each based on a different factor-analytic procedure. A 2nd study of 479 common terms grouped into 133 synonym clusters revealed the same structure in 2 samples of self-ratings and in 2 samples of peer ratings. None of the factors beyond the 5th generalized across the samples. In the 3rd study, analyses of 100 clusters derived from 339 trait terms suggest their potential utility as Big-Five markers in future studies.",
"title": ""
},
{
"docid": "a79d4b0a803564f417236f2450658fe0",
"text": "Dimensionality reduction has attracted increasing attention, because high-dimensional data have arisen naturally in numerous domains in recent years. As one popular dimensionality reduction method, nonnegative matrix factorization (NMF), whose goal is to learn parts-based representations, has been widely studied and applied to various applications. In contrast to the previous approaches, this paper proposes a novel semisupervised NMF learning framework, called robust structured NMF, that learns a robust discriminative representation by leveraging the block-diagonal structure and the <inline-formula> <tex-math notation=\"LaTeX\">$\\ell _{2,p}$ </tex-math></inline-formula>-norm (especially when <inline-formula> <tex-math notation=\"LaTeX\">$0<p\\leq 1$ </tex-math></inline-formula>) loss function. Specifically, the problems of noise and outliers are well addressed by the <inline-formula> <tex-math notation=\"LaTeX\">$\\ell _{2,p}$ </tex-math></inline-formula>-norm (<inline-formula> <tex-math notation=\"LaTeX\">$0<p\\leq 1$ </tex-math></inline-formula>) loss function, while the discriminative representations of both the labeled and unlabeled data are simultaneously learned by explicitly exploring the block-diagonal structure. The proposed problem is formulated as an optimization problem with a well-defined objective function solved by the proposed iterative algorithm. The convergence of the proposed optimization algorithm is analyzed both theoretically and empirically. In addition, we also discuss the relationships between the proposed method and some previous methods. Extensive experiments on both the synthetic and real-world data sets are conducted, and the experimental results demonstrate the effectiveness of the proposed method in comparison to the state-of-the-art methods.",
"title": ""
},
{
"docid": "ef065f2471d9b940e9167ff8daf1c735",
"text": "Fano’s inequality lower bounds the probability of transmission error through a communication channel. Applied to classification problems, it provides a lower bound on the Bayes error rate and motivates the widely used Infomax principle. In modern machine learning, we are often interested in more than just the error rate. In medical diagnosis, different errors incur different cost; hence, the overall risk is cost-sensitive. Two other popular criteria are balanced error rate (BER) and F-score. In this work, we focus on the two-class problem and use a general definition of conditional entropy (including Shannon’s as a special case) to derive upper/lower bounds on the optimal F-score, BER and cost-sensitive risk, extending Fano’s result. As a consequence, we show that Infomax is not suitable for optimizing F-score or cost-sensitive risk, in that it can potentially lead to low F-score and high risk. For cost-sensitive risk, we propose a new conditional entropy formulation which avoids this inconsistency. In addition, we consider the common practice of using a threshold on the posterior probability to tune performance of a classifier. As is widely known, a threshold of 0.5, where the posteriors cross, minimizes error rate—we derive similar optimal thresholds for F-score and BER.",
"title": ""
},
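As background for the record above: the classical Fano inequality (a standard textbook statement, not quoted from the paper) is the result being extended there. For an estimator $\hat{X}$ of $X$ based on $Y$, with error probability $P_e = \Pr[\hat{X} \neq X]$:

```latex
% Classical Fano inequality over a finite alphabet \mathcal{X}; H_b is the
% binary entropy. For |\mathcal{X}| = 2 (the two-class setting treated in the
% paper) the second term on the left vanishes.
H_b(P_e) + P_e \log\bigl(|\mathcal{X}| - 1\bigr) \;\ge\; H(X \mid Y),
\qquad H_b(q) = -q \log q - (1 - q)\log(1 - q)
```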
{
"docid": "a7123f38dc30813bf82262ae711897a6",
"text": "s Crime is a behavior disorder that is an integrated result of social, economical and environmental factors. Crimes are a social nuisance and cost our society dearly in several ways. Any research that can help in solving crimes faster will pay for itself. In this paper we look at use of missing value and clustering algorithm for crime data using data mining. We will look at MV algorithm and Apriori algorithm with some enhancements to aid in the process of filling the missing value and identification of crime patterns. We applied these techniques to real crime data from a city police department. We also use semi-supervised learning technique here for knowledge discovery from the crime records and to help increase the predictive accuracy.",
"title": ""
},
{
"docid": "1d7b7ea9f0cc284f447c11902bad6685",
"text": "In the last few years the efficiency of secure multi-party computation (MPC) increased in several orders of magnitudes. However, this alone might not be enough if we want MPC protocols to be used in practice. A crucial property that is needed in many applications is that everyone can check that a given (secure) computation was performed correctly – even in the extreme case where all the parties involved in the computation are corrupted, and even if the party who wants to verify the result was not participating. This is especially relevant in the clients-servers setting, where many clients provide input to a secure computation performed by a few servers. An obvious example of this is electronic voting, but also in many types of auctions one may want independent verification of the result. Traditionally, this is achieved by using non-interactive zero-knowledge proofs during the computation. A recent trend in MPC protocols is to have a more expensive preprocessing phase followed by a very efficient online phase, e.g., the recent so-called SPDZ protocol by Damg̊ard et al. Applications such as voting and some auctions are perfect use-case for these protocols, as the parties usually know well in advance when the computation will take place, and using those protocols allows us to use only cheap information-theoretic primitives in the actual computation. Unfortunately no protocol of the SPDZ type supports an audit phase. In this paper, we show how to achieve efficient MPC with a public audit. We formalize the concept of publicly auditable secure computation and provide an enhanced version of the SPDZ protocol where, even if all the servers are corrupted, anyone with access to the transcript of the protocol can check that the output is indeed correct. Most importantly, we do so without significantly compromising the performance of SPDZ i.e. our online phase has complexity approximately twice that of SPDZ.",
"title": ""
},
{
"docid": "e5f2e7b7dfdfaee33a2187a0a7183cfb",
"text": "BACKGROUND\nPossible associations between television viewing and video game playing and children's aggression have become public health concerns. We did a systematic review of studies that examined such associations, focussing on children and young people with behavioural and emotional difficulties, who are thought to be more susceptible.\n\n\nMETHODS\nWe did computer-assisted searches of health and social science databases, gateways, publications from relevant organizations and for grey literature; scanned bibliographies; hand-searched key journals; and corresponded with authors. We critically appraised all studies.\n\n\nRESULTS\nA total of 12 studies: three experiments with children with behavioural and emotional difficulties found increased aggression after watching aggressive as opposed to low-aggressive content television programmes, one found the opposite and two no clear effect, one found such children no more likely than controls to imitate aggressive television characters. One case-control study and one survey found that children and young people with behavioural and emotional difficulties watched more television than controls; another did not. Two studies found that children and young people with behavioural and emotional difficulties viewed more hours of aggressive television programmes than controls. One study on video game use found that young people with behavioural and emotional difficulties viewed more minutes of violence and played longer than controls. In a qualitative study children with behavioural and emotional difficulties, but not their parents, did not associate watching television with aggression. All studies had significant methodological flaws. None was based on power calculations.\n\n\nCONCLUSION\nThis systematic review found insufficient, contradictory and methodologically flawed evidence on the association between television viewing and video game playing and aggression in children and young people with behavioural and emotional difficulties. If public health advice is to be evidence-based, good quality research is needed.",
"title": ""
},
{
"docid": "74da0fe221dd6a578544e6b4896ef60e",
"text": "This paper outlines a new approach to the study of power, that of the sociology of translation. Starting from three principles, those of agnosticism (impartiality between actors engaged in controversy), generalised symmetry (the commitment to explain conflicting viewpoints in the same terms) and free association (the abandonment of all a priori distinctions between the natural and the social), the paper describes a scientific and economic controversy about the causes for the decline in the population of scallops in St. Brieuc Bay and the attempts by three marine biologists to develop a conservation strategy for that population. Four ‘moments’ of translation are discerned in the attempts by these researchers to impose themselves and their definition of the situation on others: (a) problematisation: the researchers sought to become indispensable to other actors in the drama by defining the nature and the problems of the latter and then suggesting that these would be resolved if the actors negotiated the ‘obligatory passage point’ of the researchers’ programme of investigation; (b) interessement: a series of processes by which the researchers sought to lock the other actors into the roles that had been proposed for them in that programme; (c) enrolment: a set of strategies in which the researchers sought to define and interrelate the various roles they had allocated to others; (d) mobilisation: a set of methods used by the researchers to ensure that supposed spokesmen for various relevant collectivities were properly able to represent those collectivities and not betrayed by the latter. In conclusion it is noted that translation is a process, never a completed accomplishment, and it may (as in the empirical case considered) fail.",
"title": ""
},
{
"docid": "e6e91ce66120af510e24a10dee6d64b7",
"text": "AI plays an increasingly prominent role in society since decisions that were once made by humans are now delegated to automated systems. These systems are currently in charge of deciding bank loans, criminals’ incarceration, and the hiring of new employees, and it’s not difficult to envision that they will in the future underpin most of the decisions in society. Despite the high complexity entailed by this task, there is still not much understanding of basic properties of such systems. For instance, we currently cannot detect (neither explain nor correct) whether an AI system is operating fairly (i.e., is abiding by the decision-constraints agreed by society) or it is reinforcing biases and perpetuating a preceding prejudicial practice. Issues of discrimination have been discussed extensively in legal circles, but there exists still not much understanding of the formal conditions that a system must adhere to be deemed fair. In this paper, we use the language of structural causality (Pearl, 2000) to fill in this gap. We start by introducing three new fine-grained measures of transmission of change from stimulus to effect, which we called counterfactual direct (Ctf-DE), indirect (Ctf-IE), and spurious (Ctf-SE) effects. We then derive the causal explanation formula, which allows the AI designer to quantitatively evaluate fairness and explain the total observed disparity of decisions through different discriminatory mechanisms. We apply these results to various discrimination analysis tasks and run extensive simulations, including detection, evaluation, and optimization of decision-making under fairness constraints. We conclude studying the trade-off between different types of fairness criteria (outcome and procedural), and provide a quantitative approach to policy implementation and the design of fair decision-making systems.",
"title": ""
}
] |
scidocsrr
|
c7f4b16c199e00851e8f667598fe4514
|
Force Control of Series Elastic Actuator: Implications for Series Elastic Actuator Design
|
[
{
"docid": "d8ec0c507217500a97c1664c33b2fe72",
"text": "To realize ideal force control of robots that interact with a human, a very precise actuating system with zero impedance is desired. For such applications, a rotary series elastic actuator (RSEA) has been introduced recently. This paper presents the design of RSEA and the associated control algorithms. To generate joint torque as desired, a torsional spring is installed between a motor and a human joint, and the motor is controlled to produce a proper spring deflection for torque generation. When the desired torque is zero, the motor must follow the human joint motion, which requires that the friction and the inertia of the motor be compensated. The human joint and the body part impose the load on the RSEA. They interact with uncertain environments and their physical properties vary with time. In this paper, the disturbance observer (DOB) method is applied to make the RSEA precisely generate the desired torque under such time-varying conditions. Based on the nominal model preserved by the DOB, feedback and feedforward controllers are optimally designed for the desired performance, i.e., the RSEA: (1) exhibits very low impedance and (2) generates the desired torque precisely while interacting with a human. The effectiveness of the proposed design is verified by experiments.",
"title": ""
}
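The torque-generation principle summarized in the record above can be shown with a very small sketch. This is not the paper's disturbance-observer controller; the spring constant, gains, and function names below are hypothetical placeholders, and the point is only that a series elastic actuator turns torque control into spring-deflection tracking (tau = k_s * deflection):

```python
# Minimal illustration (not the paper's DOB-based design): in a series elastic
# actuator the joint torque is tau = k_s * (theta_motor - theta_joint), so
# commanding a torque means tracking a spring-deflection setpoint.
# All names, gains, and the spring constant below are made-up placeholders.

def sea_torque_control_step(tau_des, theta_motor, theta_joint,
                            dtheta_motor, k_s=300.0, kp=50.0, kd=1.0):
    """Return a motor torque command driving the spring deflection toward
    the value that produces the desired joint torque tau_des."""
    deflection_des = tau_des / k_s            # deflection producing tau_des [rad]
    deflection = theta_motor - theta_joint    # measured deflection [rad]
    error = deflection_des - deflection
    # Simple PD law on deflection plus feedforward of the desired torque;
    # with tau_des = 0 the motor just follows the human joint (low impedance).
    return kp * error - kd * dtheta_motor + tau_des

cmd = sea_torque_control_step(tau_des=2.0, theta_motor=0.05,
                              theta_joint=0.04, dtheta_motor=0.0)
```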
] |
[
{
"docid": "090286ed539394be3ee14300772af98c",
"text": "Cryptography is essential to protect and secure data using a key. Different types of cryptographic techniques are found for data security. Genetic Algorithm is essentially used for obtaining optimal solution. Also, it can be efficiently used for random number generation which are very important in cryptography. This paper discusses the application of genetic algorithms for stream ciphers. Key generation is the most important factor in stream ciphers. In this paper Genetic Algorithm is used in the key generation process where key selection depends upon the fitness function. Here genetic algorithm is repeated for key selection. In each iteration, the key having highest fitness value is selected which further be compared with the threshold value. Selected key was unique and non-repeating. Therefore encryption with selected key are highly encrypted because of more randomness of key. This paper shows that the generated keys using GA are unique and more secure for encryption of data.",
"title": ""
},
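As a rough illustration of the key-selection loop described in the record above, here is a toy, mutation-only sketch. The fitness function (bit balance plus transition rate) and the threshold are our own stand-ins, not the paper's criteria, and a full GA would also include crossover:

```python
# Toy sketch: evolve a bit-string key and keep the fittest candidate, stopping
# early if its fitness clears a threshold. Fitness here is an invented
# randomness proxy (bit balance + transition rate), not the paper's function.
import random

def fitness(bits):
    n = len(bits)
    balance = 1.0 - abs(sum(bits) - n / 2) / (n / 2)
    transitions = sum(bits[i] != bits[i + 1] for i in range(n - 1)) / (n - 1)
    return 0.5 * balance + 0.5 * transitions

def evolve_key(key_len=128, pop=50, gens=30, mut_rate=0.02, threshold=0.9):
    population = [[random.randint(0, 1) for _ in range(key_len)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) >= threshold:   # key is "random enough"
            return population[0]
        survivors = population[:pop // 2]         # keep the fitter half
        children = [[b ^ (random.random() < mut_rate) for b in random.choice(survivors)]
                    for _ in range(pop - len(survivors))]
        population = survivors + children
    return max(population, key=fitness)

key = evolve_key()
```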
{
"docid": "7c4ae542eb8809b2c7566898814fb5a1",
"text": "The accurate localization of facial landmarks is at the core of face analysis tasks, such as face recognition and facial expression analysis, to name a few. In this work we propose a novel localization approach based on a Deep Learning architecture that utilizes dual cascaded CNN subnetworks of the same length, where each subnetwork in a cascade refines the accuracy of its predecessor. The first set of cascaded subnetworks estimates heatmaps that encode the landmarks’ locations, while the second set of cascaded subnetworks refines the heatmaps-based localization using regression, and also receives as input the output of the corresponding heatmap estimation subnetwork. The proposed scheme is experimentally shown to compare favorably with contemporary state-of-the-art schemes.",
"title": ""
},
{
"docid": "f3375c52900c245ede8704a2c1cfbc9b",
"text": "In 2000 Hone and Graham [4] published ‘Towards a tool for the subjective assessment of speech system interfaces (SASSI)’. This position paper argues that the time is right to turn the theoretical foundations established in this earlier paper into a fully validated and score-able real world tool which can be applied to the usability measurement of current speech based systems. We call for a collaborative effort to refine the current question set and then collect and share sufficient data using the revised tool to allow establishment of its psychometric properties as a valid and reliable measure of speech system usability.",
"title": ""
},
{
"docid": "54722f4851707c2bf51d18910728a31c",
"text": "Many modern companies wish to maintain knowledge in the form of a corporate knowledge graph and to use and manage this knowledge via a knowledge graph management system (KGMS). We formulate various requirements for a fully-fledged KGMS. In particular, such a system must be capable of performing complex reasoning tasks but, at the same time, achieve efficient and scalable reasoning over Big Data with an acceptable computational complexity. Moreover, a KGMS needs interfaces to corporate databases, the web, and machine-learning and analytics packages. We present KRR formalisms and a system achieving these goals. To this aim, we use specific suitable fragments from the Datalog± family of languages, and we introduce the vadalog system, which puts these swift logics into action. This system exploits the theoretical underpinning of relevant Datalog± languages and combines it with existing and novel techniques from database and AI practice.",
"title": ""
},
{
"docid": "547423c409d466bcb537a7b0ae0e1758",
"text": "Sequential Bayesian estimation fornonlinear dynamic state-space models involves recursive estimation of filtering and predictive distributions of unobserved time varying signals based on noisy observations. This paper introduces a new filter called the Gaussian particle filter1. It is based on the particle filtering concept, and it approximates the posterior distributions by single Gaussians, similar to Gaussian filters like the extended Kalman filter and its variants. It is shown that under the Gaussianity assumption, the Gaussian particle filter is asymptotically optimal in the number of particles and, hence, has much-improved performance and versatility over other Gaussian filters, especially when nontrivial nonlinearities are present. Simulation results are presented to demonstrate the versatility and improved performance of the Gaussian particle filter over conventional Gaussian filters and the lower complexity than known particle filters. The use of the Gaussian particle filter as a building block of more complex filters is addressed in a companion paper.",
"title": ""
},
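A compact numerical sketch of the single update step implied by the description above (sample from the current Gaussian, propagate, weight by the measurement likelihood, refit a Gaussian) may help; the scalar models and noise levels here are invented for illustration:

```python
# One Gaussian particle filter step for a scalar state: the posterior is
# always collapsed back to a single Gaussian. Dynamics f, measurement h,
# and the noise variances q, r are illustrative assumptions only.
import numpy as np

def gpf_step(mean, var, z, n_particles=500,
             f=lambda x: 0.9 * x + 0.1 * np.sin(x),  # assumed nonlinear dynamics
             q=0.05, h=lambda x: x ** 2, r=0.5):     # process / measurement noise
    rng = np.random.default_rng()
    x = rng.normal(mean, np.sqrt(var), n_particles)      # 1. sample the Gaussian
    x = f(x) + rng.normal(0.0, np.sqrt(q), n_particles)  # 2. propagate
    w = np.exp(-0.5 * (z - h(x)) ** 2 / r)               # 3. likelihood weights
    w /= w.sum()
    new_mean = np.sum(w * x)                             # 4. refit a Gaussian
    new_var = np.sum(w * (x - new_mean) ** 2)
    return new_mean, new_var

m, v = gpf_step(mean=1.0, var=0.2, z=1.1)
```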
{
"docid": "2b8efba9363b5f177089534edeb877a9",
"text": "This article presents a methodology that allows the development of new converter topologies for single-input, multiple-output (SIMO) from different basic configurations of single-input, single-output dc-dc converters. These typologies have in common the use of only one power-switching device, and they are all nonisolated converters. Sixteen different topologies are highlighted, and their main features are explained. The 16 typologies include nine twooutput-type, five three-output-type, one four-output-type, and one six-output-type dc-dc converter configurations. In addition, an experimental prototype of a three-output-type configuration with six different output voltages based on a single-ended primary inductance (SEPIC)-Cuk-boost combination converter was developed, and the proposed design methodology for a basic converter combination was experimentally verified.",
"title": ""
},
{
"docid": "c0010c41640a2ecd1ea85f709a3f14c7",
"text": "Due to global climate change as well as economic concern of network operators, energy consumption of the infrastructure of cellular networks, or “Green Cellular Networking,” has become a popular research topic. While energy saving can be achieved by adopting renewable energy resources or improving design of certain hardware (e.g., power amplifier) to make it more energy-efficient, the cost of purchasing, replacing, and installing new equipment (including manpower, transportation, disruption to normal operation, as well as associated energy and direct cost) is often prohibitive. By comparison, approaches that work on the operating protocols of the system do not require changes to current network architecture, making them far less costly and easier for testing and implementation. In this survey, we first present facts and figures that highlight the importance of green mobile networking and then review existing green cellular networking research with particular focus on techniques that incorporate the concept of the “sleep mode” in base stations. It takes advantage of changing traffic patterns on daily or weekly basis and selectively switches some lightly loaded base stations to low energy consumption modes. As base stations are responsible for the large amount of energy consumed in cellular networks, these approaches have the potential to save a significant amount of energy, as shown in various studies. However, it is noticed that certain simplifying assumptions made in the published papers introduce inaccuracies. This review will discuss these assumptions, particularly, an assumption that ignores the effect of traffic-load-dependent factors on energy consumption. We show here that considering this effect may lead to noticeably lower benefit than in models that ignore this effect. Finally, potential future research directions are discussed.",
"title": ""
},
{
"docid": "f31cbd5b8594e27b9aea23bdb2074a24",
"text": "The hyphenation algorithm of OpenOffice.org 2.0.2 is a generalization of TEX’s hyphenation algorithm that allows automatic non-standard hyphenation by competing standard and non-standard hyphenation patterns. With the suggested integration of linguistic tools for compound decomposition and word sense disambiguation, this algorithm would be able to do also more precise non-standard and standard hyphenation for several languages.",
"title": ""
},
{
"docid": "29816f0358cfff1c1dddce203003ad41",
"text": "Increasing volumes of trajectory data require analysis methods which go beyond the visual. Methods for computing trajectory analysis typically assume linear interpolation between quasi-regular sampling points. This assumption, however, is often not realistic, and can lead to a meaningless analysis for sparsely and/or irregularly sampled data. We propose to use the space-time prism model instead, allowing to represent the influence of speed on possible trajectories within a volume. We give definitions for the similarity of trajectories in this model and describe algorithms for its computation using the Fréchet and the equal time distance.",
"title": ""
},
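For reference, the space-time prism mentioned in the record above is conventionally defined as the intersection of two speed-bounded cones between consecutive samples; a standard formulation (our addition, not quoted from the paper) is:

```latex
% Space-time prism ("bead") between samples (x_i, t_i) and (x_{i+1}, t_{i+1})
% under a maximum speed v_max: all space-time points the object could have visited.
P_i = \bigl\{ (x, t) : t_i \le t \le t_{i+1},\;
      \|x - x_i\| \le v_{\max}\,(t - t_i),\;
      \|x - x_{i+1}\| \le v_{\max}\,(t_{i+1} - t) \bigr\}
```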
{
"docid": "00cabf8e41382d8a1b206da952b8633a",
"text": "Autonomous vehicle operations in outdoor environments challenge robotic perception. Construction, mining, agriculture, and planetary exploration environments are examples in which the presence of dust, fog, rain, changing illumination due to low sun angles, and lack of contrast can dramatically degrade conventional stereo and laser sensing. Nonetheless, environment perception can still succeed under compromised visibility through the use of a millimeter-wave radar. Radar also allows for multiple object detection within a single beam, whereas other range sensors are limited to one target return per emission. However, radar has shortcomings as well, such as a large footprint, specularity effects, and limited range resolution, all of which may result in poor environment survey or difficulty in interpretation. This paper presents a novel method for ground segmentation using a millimeter-wave radar mounted on a ground vehicle. Issues relevant to short-range perception in an outdoor environment are described along with field experiments and a quantitative comparison to laser data. The ability to classify the ground is successfully demonstrated in clear and low-visibility conditions, and significant improvement in range accuracy is shown. Finally, conclusions are drawn on the utility of millimeter-wave radar as a robotic sensor for persistent and accurate perception in natural scenarios. C © 2011 Wiley Periodicals, Inc.",
"title": ""
},
{
"docid": "15cde62b96f8c87bedb6f721befa3ae4",
"text": "To investigate the dispersion mechanism(s) of ternary dry powder inhaler (DPI) formulations by comparison of the interparticulate adhesions and in vitro performance of a number of carrier–drug–fines combinations. The relative levels of adhesion and cohesion between a lactose carrier and a number of drugs and fine excipients were quantified using the cohesion–adhesion balance (CAB) approach to atomic force microscopy. The in vitro performance of formulations produced using these materials was quantified and the particle size distribution of the aerosol clouds produced from these formulations determined by laser diffraction. Comparison between CAB ratios and formulation performance suggested that the improvement in performance brought about by the addition of fines to which the drug was more adhesive than cohesive might have been due to the formation of agglomerates of drug and fines particles. This was supported by aerosol cloud particle size data. The mechanism(s) underlying the improved performance of ternary formulations where the drug was more cohesive than adhesive to the fines was unclear. The performance of ternary DPI formulations might be increased by the preferential formation of drug–fines agglomerates, which might be subject to greater deagglomeration forces during aerosolisation than smaller agglomerates, thus producing better formulation performance.",
"title": ""
},
{
"docid": "e29eb914db494aadd140b7b75298f1ef",
"text": "AbstractThe Ainu, a minority ethnic group from the northernmost island of Japan, was investigated for DNA polymorphisms both from maternal (mitochondrial DNA) and paternal (Y chromosome) lineages extensively. Other Asian populations inhabiting North, East, and Southeast Asia were also examined for detailed phylogeographic analyses at the mtDNA sequence type as well as Y-haplogroup levels. The maternal and paternal gene pools of the Ainu contained 25 mtDNA sequence types and three Y-haplogroups, respectively. Eleven of the 25 mtDNA sequence types were unique to the Ainu and accounted for over 50% of the population, whereas 14 were widely distributed among other Asian populations. Of the 14 shared types, the most frequently shared type was found in common among the Ainu, Nivkhi in northern Sakhalin, and Koryaks in the Kamchatka Peninsula. Moreover, analysis of genetic distances calculated from the mtDNA data revealed that the Ainu seemed to be related to both the Nivkhi and other Japanese populations (such as mainland Japanese and Okinawans) at the population level. On the paternal side, the vast majority (87.5%) of the Ainu exhibited the Asian-specific YAP+ lineages (Y-haplogroups D-M55* and D-M125), which were distributed only in the Japanese Archipelago in this analysis. On the other hand, the Ainu exhibited no other Y-haplogroups (C-M8, O-M175*, and O-M122*) common in mainland Japanese and Okinawans. It is noteworthy that the rest of the Ainu gene pool was occupied by the paternal lineage (Y-haplogroup C-M217*) from North Asia including Sakhalin. Thus, the present findings suggest that the Ainu retain a certain degree of their own genetic uniqueness, while having higher genetic affinities with other regional populations in Japan and the Nivkhi among Asian populations.",
"title": ""
},
{
"docid": "d7528de0c00c3d37fa31b8dcb5123fd3",
"text": "We propose and throughly investigate a temporalized version of the popular Massey’s technique for rating actors in sport competitions. The method can be described as a dynamic temporal process in which team ratings are updated at every match according to their performance during the match and the strength of the opponent team. Using the Italian soccer dataset, we empirically show that the method has a good foresight prediction accuracy.",
"title": ""
},
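To make the "update ratings at every match" idea in the record above concrete, here is a deliberately simple dynamic update in that spirit. It is not the authors' actual temporalized Massey rule; the learning rate, initialisation, and example fixtures are invented:

```python
# Each match nudges both ratings by the gap between the observed score margin
# and the margin predicted by the current rating difference. Illustrative only.
def update(ratings, home, away, home_goals, away_goals, lr=0.1):
    r_h, r_a = ratings.get(home, 0.0), ratings.get(away, 0.0)
    error = (home_goals - away_goals) - (r_h - r_a)   # observed minus predicted margin
    ratings[home] = r_h + lr * error
    ratings[away] = r_a - lr * error
    return ratings

ratings = {}
for match in [("Juventus", "Roma", 2, 0), ("Roma", "Milan", 1, 1)]:
    update(ratings, *match)
```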
{
"docid": "8240df0c9498482522ef86b4b1e924ab",
"text": "The advent of the IT-led era and the increased competition have forced companies to react to the new changes in order to remain competitive. Enterprise resource planning (ERP) systems offer distinct advantages in this new business environment as they lower operating costs, reduce cycle times and (arguably) increase customer satisfaction. This study examines, via an exploratory survey of 26 companies, the underlying reasons why companies choose to convert from conventional information systems (IS) to ERP systems and the changes brought in, particularly in the accounting process. The aim is not only to understand the changes and the benefits involved in adopting ERP systems compared with conventional IS, but also to establish the best way forward in future ERP applications. The empirical evidence confirms a number of changes in the accounting process introduced with the adoption of ERP systems.",
"title": ""
},
{
"docid": "e54b9897e79391b86327883164781dff",
"text": "This review paper gives a detailed account of the development of mesh generation techniques on planar regions, over curved surfaces and within volumes for the past years. Emphasis will be on the generation of the unstructured meshes for purpose of complex industrial applications and adaptive refinement finite element analysis. Over planar domains and on curved surfaces, triangular and quadrilateral elements will be used, whereas for three-dimensional structures, tetrahedral and hexahedral elements have to be generated. Recent advances indicate that mesh generation on curved surfaces is quite mature now that elements following closely to surface curvatures could be generated more or less in an automatic manner. As the boundary recovery procedure are getting more and more robust and efficient, discretization of complex solid objects into tetrahedra by means of Delaunay triangulation and other techniques becomes routine work in industrial applications. However, the decomposition of a general object into hexahedral elements in a robust and efficient manner remains as a challenge for researchers in the mesh generation community. Algorithms for the generation of anisotropic meshes on 2D and 3D domains have also been proposed for problems where elongated elements along certain directions are required. A web-site for the latest development in meshing techniques is included for the interested readers.",
"title": ""
},
{
"docid": "f6df414f8f61dbdab32be2f05d921cb8",
"text": "The task of discriminating one object from another is almost trivial for a human being. However, this task is computationally taxing for most modern machine learning methods, whereas, we perform this task at ease given very few examples for learning. It has been proposed that the quick grasp of concept may come from the shared knowledge between the new example and examples previously learned. We believe that the key to one-shot learning is the sharing of common parts as each part holds immense amounts of information on how a visual concept is constructed. We propose an unsupervised method for learning a compact dictionary of image patches representing meaningful components of an objects. Using those patches as features, we build a compositional model that outperforms a number of popular algorithms on a one-shot learning task. We demonstrate the effectiveness of this approach on hand-written digits and show that this model generalizes to multiple datasets.",
"title": ""
},
{
"docid": "4dc50a9c0665b5e2a7dcbc369acefdb0",
"text": "Nature is the principal source for proposing new optimization methods such as genetic algorithms (GA) and simulated annealing (SA) methods. All traditional evolutionary algorithms are heuristic population-based search procedures that incorporate random variation and selection. The main contribution of this study is that it proposes a novel optimization method that relies on one of the theories of the evolution of the universe; namely, the Big Bang and Big Crunch Theory. In the Big Bang phase, energy dissipation produces disorder and randomness is the main feature of this phase; whereas, in the Big Crunch phase, randomly distributed particles are drawn into an order. Inspired by this theory, an optimization algorithm is constructed, which will be called the Big Bang–Big Crunch (BB–BC) method that generates random points in the Big Bang phase and shrinks those points to a single representative point via a center of mass or minimal cost approach in the Big Crunch phase. It is shown that the performance of the new (BB–BC) method demonstrates superiority over an improved and enhanced genetic search algorithm also developed by the authors of this study, and outperforms the classical genetic algorithm (GA) for many benchmark test functions. q 2005 Elsevier Ltd. All rights reserved.",
"title": ""
},
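The two phases named in the record above map onto a very short loop; the sketch below (population size, shrink schedule, and 1/cost weighting are our illustrative choices) minimises a simple test function with it:

```python
# Big Bang: scatter random candidates around the current centre with a radius
# that shrinks each iteration. Big Crunch: contract them to a cost-weighted
# centre of mass. Constants and the shrink schedule are illustrative choices.
import numpy as np

def bbbc_minimize(cost, dim=2, pop=60, iters=100, radius=5.0, seed=1):
    rng = np.random.default_rng(seed)
    center = rng.uniform(-radius, radius, dim)
    for k in range(1, iters + 1):
        points = center + rng.normal(0.0, radius / k, size=(pop, dim))
        costs = np.array([cost(p) for p in points])
        weights = 1.0 / (costs + 1e-12)               # cheaper points weigh more
        center = (weights[:, None] * points).sum(axis=0) / weights.sum()
    return center

best = bbbc_minimize(lambda x: float(np.sum(x ** 2)))  # sphere test function
```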
{
"docid": "c6ad8ac0c8e9c0bd86868128eee6a916",
"text": "Online reviews are a cornerstone of consumer decision making. However, their authenticity and quality has proven hard to control, especially as polluters target these reviews toward promoting products or in degrading competitors. In a troubling direction, the widespread growth of crowdsourcing platforms like Mechanical Turk has created a large-scale, potentially difficult-to-detect workforce of malicious review writers. Hence, this paper tackles the challenge of uncovering crowdsourced manipulation of online reviews through a three-part effort: (i) First, we propose a novel sampling method for identifying products that have been targeted for manipulation and a seed set of deceptive reviewers who have been enlisted through crowdsourcing platforms. (ii) Second, we augment this base set of deceptive reviewers through a reviewer-reviewer graph clustering approach based on a Markov Random Field where we define individual potentials (of single reviewers) and pair potentials (between two reviewers). (iii) Finally, we embed the results of this probabilistic model into a classification framework for detecting crowd-manipulated reviews. We find that the proposed approach achieves up to 0.96 AUC, outperforming both traditional detection methods and a SimRank-based alternative clustering approach.",
"title": ""
},
{
"docid": "e0e00fdfecc4a23994315579938f740e",
"text": "Budget allocation in online advertising deals with distributing the campaign (insertion order) level budgets to different sub-campaigns which employ different targeting criteria and may perform differently in terms of return-on-investment (ROI). In this paper, we present the efforts at Turn on how to best allocate campaign budget so that the advertiser or campaign-level ROI is maximized. To do this, it is crucial to be able to correctly determine the performance of sub-campaigns. This determination is highly related to the action-attribution problem, i.e. to be able to find out the set of ads, and hence the sub-campaigns that provided them to a user, that an action should be attributed to. For this purpose, we employ both last-touch (last ad gets all credit) and multi-touch (many ads share the credit) attribution methodologies. We present the algorithms deployed at Turn for the attribution problem, as well as their parallel implementation on the large advertiser performance datasets. We conclude the paper with our empirical comparison of last-touch and multi-touch attribution-based budget allocation in a real online advertising setting.",
"title": ""
},
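The attribution bookkeeping contrasted in the record above (last-touch vs. multi-touch) can be shown with a toy example; real systems use far richer credit schemes, and the campaign names below are invented:

```python
# Credit assignment for one converting user given the ordered sub-campaigns
# whose ads touched them: last-touch gives all credit to the final ad, the
# equal-credit multi-touch variant splits it across every touchpoint.
from collections import defaultdict

def last_touch(touchpoints, action_value=1.0):
    credit = defaultdict(float)
    credit[touchpoints[-1]] += action_value
    return dict(credit)

def multi_touch_equal(touchpoints, action_value=1.0):
    credit = defaultdict(float)
    for sub_campaign in touchpoints:
        credit[sub_campaign] += action_value / len(touchpoints)
    return dict(credit)

path = ["display_retargeting", "video_prospecting", "display_retargeting"]
print(last_touch(path))         # {'display_retargeting': 1.0}
print(multi_touch_equal(path))  # retargeting ~0.67, prospecting ~0.33
```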
{
"docid": "91066155de090efcd3756f4f98b11e50",
"text": "Recently, the use of XML continues to grow in popularity, large repositories of XML documents are going to emerge, and users are likely to pose increasingly more complex queries on these data sets. In 2001 XQuery is decided by the World Wide Web Consortium (W3C) as the standard XML query language. In this article, we describe the design and implementation of an efficient and scalable purely relational XQuery processor which translates expressions of the XQuery language into their equivalent SQL evaluation scripts. The experiments of this article demonstrated the efficiency and scalability of our purely relational approach in comparison to the native XML/XQuery functionality supported by conventional RDBMSs and has shown that our purely relational approach for implementing XQuery processor deserves to be pursued further.",
"title": ""
}
] |
scidocsrr
|
77094e488d966e909bfbe54679c7923a
|
Investigating learners' attitudes toward virtual reality learning environments: Based on a constructivist approach
|
[
{
"docid": "00e13bca1066e54907394b75cb40d0c0",
"text": "This paper explores educational uses of virtual learning environment (VLE) concerned with issues of learning, training and entertainment. We analyze the state-of-art research of VLE based on virtual reality and augmented reality. Some examples for the purpose of education and simulation are described. These applications show that VLE can be means of enhancing, motivating and stimulating learners’ understanding of certain events, especially those for which the traditional notion of instructional learning have proven inappropriate or difficult. Furthermore, the users can learn in a quick and happy mode by playing in the virtual environments. r 2005 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "d7a31875b0d05c2bbd3522248d45ffbb",
"text": "The trend of using e-learning as a learning and/or teaching tool is now rapidly expanding into education. Although e-learning environments are popular, there is minimal research on instructors’ and learners’ attitudes toward these kinds of learning environments. The purpose of this study is to explore instructors’ and learners’ attitudes toward e-learning usage. Accordingly, 30 instructors and 168 college students are asked to answer two different questionnaires for investigating their perceptions. After statistical analysis, the results demonstrate that instructors have very positive perceptions toward using e-learning as a teaching assisted tool. Furthermore, behavioral intention to use e-learning is influenced by perceived usefulness and self-efficacy. Regarding to learners’ attitudes, self-paced, teacher-led, and multimedia instruction are major factors to affect learners’ attitudes toward e-learning as an effective learning tool. Based on the findings, this research proposes guidelines for developing e-learning environments. 2006 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "01b73e9e8dbaf360baad38b63e5eae82",
"text": "Received: 29 September 2009 Revised: 19 April 2010 2nd Revision: 5 July 2010 3rd Revision: 30 November 2010 Accepted: 8 December 2010 Abstract Throughout the world, sensitive personal information is now protected by regulatory requirements that have translated into significant new compliance oversight responsibilities for IT managers who have a legal mandate to ensure that individual employees are adequately prepared and motivated to observe policies and procedures designed to ensure compliance. This research project investigates the antecedents of information privacy policy compliance efficacy by individuals. Using Health Insurance Portability and Accountability Act compliance within the healthcare industry as a practical proxy for general organizational privacy policy compliance, the results of this survey of 234 healthcare professionals indicate that certain social conditions within the organizational setting (referred to as external cues and comprising situational support, verbal persuasion, and vicarious experience) contribute to an informal learning process. This process is distinct from the formal compliance training procedures and is shown to influence employee perceptions of efficacy to engage in compliance activities, which contributes to behavioural intention to comply with information privacy policies. Implications for managers and researchers are discussed. European Journal of Information Systems (2011) 20, 267–284. doi:10.1057/ejis.2010.72; published online 25 January 2011",
"title": ""
},
{
"docid": "2e21f67f01a37394a9f208f7e6d8696e",
"text": "We present a new neural sequence-tosequence model for extractive summarization called SWAP-NET (Sentences and Words from Alternating Pointer Networks). Extractive summaries comprising a salient subset of input sentences, often also contain important key words. Guided by this principle, we design SWAP-NET that models the interaction of key words and salient sentences using a new twolevel pointer network based architecture. SWAP-NET identifies both salient sentences and key words in an input document, and then combines them to form the extractive summary. Experiments on large scale benchmark corpora demonstrate the efficacy of SWAP-NET that outperforms state-of-the-art extractive summarizers.",
"title": ""
},
{
"docid": "6c1317ef88110756467a10c4502851bb",
"text": "Deciding query equivalence is an important problem in data management with many practical applications. Solving the problem, however, is not an easy task. While there has been a lot of work done in the database research community in reasoning about the semantic equivalence of SQL queries, prior work mainly focuses on theoretical limitations. In this paper, we present COSETTE, a fully automated prover that can determine the equivalence of SQL queries. COSETTE leverages recent advances in both automated constraint solving and interactive theorem proving, and returns a counterexample (in terms of input relations) if two queries are not equivalent, or a proof of equivalence otherwise. Although the problem of determining equivalence for arbitrary SQL queries is undecidable, our experiments show that COSETTE can determine the equivalences of a wide range of queries that arise in practice, including conjunctive queries, correlated queries, queries with outer joins, and queries with aggregates. Using COSETTE, we have also proved the validity of magic set rewrites, and confirmed various real-world query rewrite errors, including the famous COUNT bug. We are unaware of any prior tool that can automatically determine the equivalences of a broad range of queries as COSETTE, and believe that our tool represents a major step towards building provably-correct query optimizers for real-world database systems.",
"title": ""
},
{
"docid": "0a0e4219aa1e20886e69cb1421719c4e",
"text": "A wearable two-antenna system to be integrated on a life jacket and connected to Personal Locator Beacons (PLBs) of the Cospas-Sarsat system is presented. Each radiating element is a folded meandered dipole resonating at 406 MHz and includes a planar reflector realized by a metallic foil. The folded dipole and the metallic foil are attached on the opposite sides of the floating elements of the life jacket itself, so resulting in a mechanically stable antenna. The metallic foil improves antenna radiation properties even when the latter is close to the sea surface, shields the human body from EM radiation and makes the radiating system less sensitive to the human body movements. Prototypes have been realized and a measurement campaign has been carried out. The antennas show satisfactory performance also when the life jacket is worn by a user. The proposed radiating elements are intended for the use in a two-antenna scheme in which the transmitter can switch between them in order to meet Cospas-Sarsat system specifications. Indeed, the two antennas provide complementary radiation patterns so that Cospas-Sarsat requirements (satellite constellation coverage and EIRP profile) are fully satisfied.",
"title": ""
},
{
"docid": "8c2b0e93eae23235335deacade9660f0",
"text": "We design and implement a simple zero-knowledge argument protocol for NP whose communication complexity is proportional to the square-root of the verification circuit size. The protocol can be based on any collision-resistant hash function. Alternatively, it can be made non-interactive in the random oracle model, yielding concretely efficient zk-SNARKs that do not require a trusted setup or public-key cryptography.\n Our protocol is attractive not only for very large verification circuits but also for moderately large circuits that arise in applications. For instance, for verifying a SHA-256 preimage in zero-knowledge with 2-40 soundness error, the communication complexity is roughly 44KB (or less than 34KB under a plausible conjecture), the prover running time is 140 ms, and the verifier running time is 62 ms. This proof is roughly 4 times shorter than a similar proof of ZKB++ (Chase et al., CCS 2017), an optimized variant of ZKBoo (Giacomelli et al., USENIX 2016).\n The communication complexity of our protocol is independent of the circuit structure and depends only on the number of gates. For 2-40 soundness error, the communication becomes smaller than the circuit size for circuits containing roughly 3 million gates or more. Our efficiency advantages become even bigger in an amortized setting, where several instances need to be proven simultaneously.\n Our zero-knowledge protocol is obtained by applying an optimized version of the general transformation of Ishai et al. (STOC 2007) to a variant of the protocol for secure multiparty computation of Damgard and Ishai (Crypto 2006). It can be viewed as a simple zero-knowledge interactive PCP based on \"interleaved\" Reed-Solomon codes.",
"title": ""
},
{
"docid": "6f26f4409d418fe69b1d43ec9b4f8b39",
"text": "Automatic understanding of human affect using visual signals is of great importance in everyday human–machine interactions. Appraising human emotional states, behaviors and reactions displayed in real-world settings, can be accomplished using latent continuous dimensions (e.g., the circumplex model of affect). Valence (i.e., how positive or negative is an emotion) and arousal (i.e., power of the activation of the emotion) constitute popular and effective representations for affect. Nevertheless, the majority of collected datasets this far, although containing naturalistic emotional states, have been captured in highly controlled recording conditions. In this paper, we introduce the Aff-Wild benchmark for training and evaluating affect recognition algorithms. We also report on the results of the First Affect-in-the-wild Challenge (Aff-Wild Challenge) that was recently organized in conjunction with CVPR 2017 on the Aff-Wild database, and was the first ever challenge on the estimation of valence and arousal in-the-wild. Furthermore, we design and extensively train an end-to-end deep neural architecture which performs prediction of continuous emotion dimensions based on visual cues. The proposed deep learning architecture, AffWildNet, includes convolutional and recurrent neural network layers, exploiting the invariant properties of convolutional features, while also modeling temporal dynamics that arise in human behavior via the recurrent layers. The AffWildNet produced state-of-the-art results on the Aff-Wild Challenge. We then exploit the AffWild database for learning features, which can be used as priors for achieving best performances both for dimensional, as well as categorical emotion recognition, using the RECOLA, AFEW-VA and EmotiW 2017 datasets, compared to all other methods designed for the same goal. The database and emotion recognition models are available at http://ibug.doc.ic.ac.uk/resources/first-affect-wild-challenge .",
"title": ""
},
{
"docid": "4added2e0e6ba286a1ef4bed1dfd6614",
"text": "Estimating the mechanisms that connect explanatory variables with the explained variable, also known as “mediation analysis,” is central to a variety of social-science fields, especially psychology, and increasingly to fields like epidemiology. Recent work on the statistical methodology behind mediation analysis points to limitations in earlier methods. We implement in Stata computational approaches based on recent developments in the statistical methodology of mediation analysis. In particular, we provide functions for the correct calculation of causal mediation effects using several different types of parametric models, as well as the calculation of sensitivity analyses for violations to the key identifying assumption required for interpreting mediation results causally.",
"title": ""
},
{
"docid": "29ce9730d55b55b84e195983a8506e5c",
"text": "In situ Raman spectroscopy is an extremely valuable technique for investigating fundamental reactions that occur inside lithium rechargeable batteries. However, specialized in situ Raman spectroelectrochemical cells must be constructed to perform these experiments. These cells are often quite different from the cells used in normal electrochemical investigations. More importantly, the number of cells is usually limited by construction costs; thus, routine usage of in situ Raman spectroscopy is hampered for most laboratories. This paper describes a modification to industrially available coin cells that facilitates routine in situ Raman spectroelectrochemical measurements of lithium batteries. To test this strategy, in situ Raman spectroelectrochemical measurements are performed on Li//V2O5 cells. Various phases of Li(x)V2O5 could be identified in the modified coin cells with Raman spectroscopy, and the electrochemical cycling performance between in situ and unmodified cells is nearly identical.",
"title": ""
},
{
"docid": "9cedc3f1a04fa51fb8ce1cf0cf01fbc3",
"text": "OBJECTIVES:The objective of this study was to provide updated explicit and relevant consensus statements for clinicians to refer to when managing hospitalized adult patients with acute severe ulcerative colitis (UC).METHODS:The Canadian Association of Gastroenterology consensus group of 23 voting participants developed a series of recommendation statements that addressed pertinent clinical questions. An iterative voting and feedback process was used to do this in conjunction with systematic literature reviews. These statements were brought to a formal consensus meeting held in Toronto, Ontario (March 2010), when each statement was discussed, reformulated, voted upon, and subsequently revised until group consensus (at least 80% agreement) was obtained. The modified GRADE (Grading of Recommendations Assessment, Development, and Evaluation) criteria were used to rate the strength of recommendations and the quality of evidence.RESULTS:As a result of the iterative process, consensus was reached on 21 statements addressing four themes (General considerations and nutritional issues, Steroid use and predictors of steroid failure, Cyclosporine and infliximab, and Surgical issues).CONCLUSIONS:Key recommendations for the treatment of hospitalized patients with severe UC include early escalation to second-line medical therapy with either infliximab or cyclosporine in individuals in whom parenteral steroids have failed after 72 h. These agents should be used in experienced centers where appropriate support is available. Sequential therapy with cyclosporine and infliximab is not recommended. Surgery is an option when first-line steroid therapy fails, and is indicated when second-line medical therapy fails and/or when complications arise during the hospitalization.",
"title": ""
},
{
"docid": "97c6914243c061491bc27837d2fdae2d",
"text": "During the last two years, the METIS project (\"Mobile and wireless communications Enablers for the Twenty-twenty Information Society\") has been conducting research on 5G-enabling technology components. This paper provides a summary of METIS work on 5G architectures. The architecture description is presented from different viewpoints. First, a functional architecture is presented that may lay a foundation for development of first novel 5G network functions. It is based on functional decomposition of most relevant 5G technology components provided by METIS. The logical orchestration & control architecture depicts the realization of flexibility, scalability and service orientation needed to fulfil diverse 5G requirements. Finally, a third viewpoint reveals deployment aspects and function placement options for 5G.",
"title": ""
},
{
"docid": "e4dbca720626a29f60a31ed9d22c30aa",
"text": "Text classification is the process of classifying documents into predefined categories based on their content. It is the automated assignment of natural language texts to predefined categories. Text classification is the primary requirement of text retrieval systems, which retrieve texts in response to a user query, and text understanding systems, which transform text in some way such as producing summaries, answering questions or extracting data. Existing supervised learning algorithms to automatically classify text need sufficient documents to learn accurately. This paper presents a new algorithm for text classification using data mining that requires fewer documents for training. Instead of using words, word relation i.e. association rules from these words is used to derive feature set from pre-classified text documents. The concept of Naïve Bayes classifier is then used on derived features and finally only a single concept of Genetic Algorithm has been added for final classification. A system based on the proposed algorithm has been implemented and tested. The experimental results show that the proposed system works as a successful text classifier.",
"title": ""
},
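A loose sketch of the feature idea in the record above: frequently co-occurring word pairs (a crude stand-in for the mined association rules) replace single words as features for a Naive-Bayes-style scorer. The tiny corpus, support threshold, and smoothing are ours, and the paper's genetic-algorithm step is omitted:

```python
# Word-pair features + Naive-Bayes-style scoring on a toy corpus. The corpus,
# min_support value, and Laplace smoothing are invented for illustration;
# the final GA refinement described in the abstract is not reproduced here.
from collections import Counter
from itertools import combinations

docs = [("win cash prize now", "spam"), ("project meeting now", "ham"),
        ("cash prize waiting", "spam"), ("meeting notes attached", "ham"),
        ("claim cash prize", "spam")]

def pair_features(text):
    return set(combinations(sorted(set(text.split())), 2))

min_support = 2
pair_counts = Counter(p for text, _ in docs for p in pair_features(text))
frequent_pairs = {p for p, c in pair_counts.items() if c >= min_support}

class_docs = Counter(label for _, label in docs)
class_pair = {label: Counter() for label in class_docs}
for text, label in docs:
    class_pair[label].update(pair_features(text) & frequent_pairs)

def classify(text):
    feats = pair_features(text) & frequent_pairs
    def score(label):
        s = class_docs[label] / len(docs)                    # class prior
        denom = sum(class_pair[label].values()) + len(frequent_pairs)
        for f in feats:
            s *= (class_pair[label][f] + 1) / denom          # smoothed likelihood
        return s
    return max(class_docs, key=score)

print(classify("cash prize now"))  # expected: 'spam'
```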
{
"docid": "82bb1d74e1e2d4b7b412b2a921f5eaad",
"text": "This paper addresses the topic of community crime prevention. As in many other areas of public policy, there are widely divergent approaches that might be taken to crime prevention focused on local neighbourhoods. In what follows four major forms of prevention relevant to the Australian context will be discussed, with particular emphasis being placed on an approach to crime prevention which enhances the ability of a community to bring together, in an integrative way, divergent groups which can easily become isolated from each other as a result of contemporary economic and urban forces.",
"title": ""
},
{
"docid": "fea4f7992ec61eaad35872e3a800559c",
"text": "The ways in which an individual characteristically acquires, retains, and retrieves information are collectively termed the individual’s learning style. Mismatches often occur between the learning styles of students in a language class and the teaching style of the instructor, with unfortunate effects on the quality of the students’ learning and on their attitudes toward the class and the subject. This paper defines several dimensions of learning style thought to be particularly relevant to foreign and second language education, outlines ways in which certain learning styles are favored by the teaching styles of most language instructors, and suggests steps to address the educational needs of all students in foreign language classes. Students learn in many ways—by seeing and hearing; reflecting and acting; reasoning logically and intuitively; memorizing and visualizing. Teaching methods also vary. Some instructors lecture, others demonstrate or discuss; some focus on rules and others on examples; some emphasize memory and others understanding. How much a given student learns in a class is governed in part by that student’s native ability and prior preparation but also by the compatibility of his or her characteristic approach to learning and the instructor’s characteristic approach to teaching. The ways in which an individual characteristically acquires, retains, and retrieves information are collectively termed the individual’s learning style. Learning styles have been extensively discussed in the educational psychology literature (Claxton & Murrell 1987; Schmeck 1988) and specifically in the context Richard M. Felder (Ph.D., Princeton University) is the Hoechst Celanese Professor of Chemical Engineering at North Carolina State University,",
"title": ""
},
{
"docid": "326b8d8d5d128706796d3107a6c2c941",
"text": "Capturing security and privacy requirements in the early stages of system development is essential for creating sufficient public confidence in order to facilitate the adaption of novel systems such as the Internet of Things (IoT). However, security and privacy requirements are often not handled properly due to their wide variety of facets and aspects which make them difficult to formulate. In this study, security-related requirements of IoT heterogeneous systems are decomposed into a taxonomy of quality attributes, and existing security mechanisms and policies are proposed to alleviate the identified forms of security attacks and to reduce the vulnerabilities in the future development of the IoT systems. Finally, the taxonomy is applied on an IoT smart grid scenario.",
"title": ""
},
{
"docid": "42fd940e239ed3748b007fde8b583b25",
"text": "The ImageCLEF’s plant identification task provides a testbed for the system-oriented evaluation of plant identification, more precisely on the 126 tree species identification based on leaf images. Three types of image content are considered: Scan, Scan-like (leaf photographs with a white uniform background), and Photograph (unconstrained leaf with natural background). The main originality of this data is that it was specifically built through a citizen sciences initiative conducted by Tela Botanica, a French social network of amateur and expert botanists. This makes the task closer to the conditions of a real-world application. This overview presents more precisely the resources and assessments of task, summarizes the retrieval approaches employed by the participating groups, and provides an analysis of the main evaluation results. With a total of eleven groups from eight countries and with a total of 30 runs submitted, involving distinct and original methods, this second year pilot task confirms Image Retrieval community interest for biodiversity and botany, and highlights further challenging studies in plant identification.",
"title": ""
},
{
"docid": "48019a3106c6d74e4cfcc5ac596d4617",
"text": "Despite a variety of new communication technologies, loneliness is prevalent in Western countries. Boosting emotional communication through intimate connections has the potential to reduce loneliness. New technologies might exploit biosignals as intimate emotional cues because of their strong relationship to emotions. Through two studies, we investigate the possibilities of heartbeat communication as an intimate cue. In the first study (N = 32), we demonstrate, using self-report and behavioral tracking in an immersive virtual environment, that heartbeat perception influences social behavior in a similar manner as traditional intimate signals such as gaze and interpersonal distance. In the second study (N = 34), we demonstrate that a sound of the heartbeat is not sufficient to cause the effect; the stimulus must be attributed to the conversational partner in order to have influence. Together, these results show that heartbeat communication is a promising way to increase intimacy. Implications and possibilities for applications are discussed.",
"title": ""
},
{
"docid": "5c5c21bd0c50df31c6ccec63d864568c",
"text": "Intellectual Property issues (IP) is a concern that refrains companies to cooperate in whatever of Open Innovation (OI) processes. Particularly, SME consider open innovation as uncertain, risky processes. Despite the opportunities that online OI platforms offer, SMEs have so far failed to embrace them, and proved reluctant to OI. We intend to find whether special collaborative spaces that facilitate a sort of preventive idea claiming, explicit claiming evolution of defensive publication, as so far patents and publications for prevailing innovation, can be the right complementary instruments in OI as to when stronger IP protection regimes might drive openness by SME in general. These spaces, which we name NIR (Networking Innovation Rooms), are a practical, smart paradigm to boost OI for SME. There users sign smart contracts as NDA which takes charge of timestamping any IP disclosure or creation and declares what corrective actions (if they might apply) might be taken for unauthorised IP usage or disclosure of any of the NDA signers. With Blockchain, a new technology emerges which enables decentralised, fine-grained IP management for OI.",
"title": ""
},
{
"docid": "7e6474de31f7d9cdee552a50a09bbeae",
"text": "BACKGROUND Demographics in America are beginning to shift toward an older population, with the number of patients aged 65 years or older numbering approximately 41.4 million in 2011, which represents an increase of 18% since 2000. Within the aging population, the incidence of vocal disorders is estimated to be between 12% and 35%. In a series reported by Davids et al., 25% of patients over age 65 years presenting with a voice complaint were found to have vocal fold atrophy (presbylarynges), where the hallmark physical signs are vocal fold bowing with an increased glottic gap and prominent vocal processes. The epithelial and lamina propria covering of the vocal folds begin to exhibit changes due to aging. In older adults, the collagen of the vocal folds lose their “wicker basket” type of organization, which leads to more disarrayed segments throughout all the layers of the lamina propria, and there is also a loss of hyaluronic acid and elastic fibers. With this loss of the viscoelastic properties and subsequent vocal fold thinning, along with thyroarytenoid muscle atrophy, this leads to the classic bowed membranous vocal fold. Physiologically, these anatomical changes to the vocal folds leads to incomplete glottal closure, air escape, changes in vocal fold tension, altered fundamental frequency, and decreased vocal endurance. Women’s voices will often become lower pitched initially and then gradually higher pitched and shrill, whereas older men’s voices will gradually become more high pitched as the vocal folds lengthen to try and achieve approximation. LITERATURE REVIEW The literature documents that voice therapy is a useful tool in the treatment of presbyphonia and improves voice-related quality of life. The goal of therapy is based on a causal model that suggests targeting the biological basis of the condition—degenerative respiratory and laryngeal changes—as a result of sarcopenia. Specifically, the voice therapy protocol should capitalize on high-intensity phonatory exercises to overload the respiratory and laryngeal system and improve vocal loudness, reduce vocal effort, and increase voice-related quality of life (VRQoL). In a small prospective, randomized, controlled trial, Ziegler et al. demonstrated that patients with vocal atrophy undergoing therapy—phonation resistance training exercise (PhoRTE) or vocal function exercise (VFE)—had a significantly improved VRQoL score preand post-therapy (88.5–95.0, P 5.049 for PhoRTE and 80.8–87.5, P 5.054 for VFE), whereas patients in the nonintervention group saw no improvement (87.5–91.5, P 5.70). Patients in the PhoRTE group exhibited a significant decrease in perceived phonatory effort, but not patients undergoing VFE or no therapy. Injection laryngoplasty (IL), initially developed for restoration of glottic competence in vocal fold paralysis, has also been increasingly used in treatment of the aging voice. A number of materials have been used over the years including Teflon, silicone, fat, Gelfoam, collagen, hyaluronic acid, carboxymethylcellulose, and calcium hydroxylapatite. Some of these are limited by safety or efficacy concerns, and some of them are not long lasting. With the growing use of in-office IL, the ease of use has made this technique more popular because of the ability to avoid general anesthesia in a sometimes already frail patient population. Davids et al. also examined changes in VRQoL scores for patients undergoing IL and demonstrated a significant improvement preand post-therapy (34.8 vs. 22, P<.0001). 
Due to a small sample size, however, the authors were unable to make any direct comparisons between patients undergoing voice therapy versus IL. Medialization thyroplasty (MT) remains the otolaryngologist’s permanent technique for addressing the glottal insufficiency found in the aging larynx. In the same fashion as IL, the technique was developed as a way to address the paralytic vocal fold and can use either Silastic or Gore-Tex implants.",
"title": ""
},
{
"docid": "4571c73ba3182ad93d1fbcb9b5827dfc",
"text": "The use of consumer IT at work, known as \"IT Consumerization\", is changing the dynamic between employees and IT departments. Employees are empowered and now have greater freedom but also responsibility as to what technologies to use at work and how. At the same time, organizational factors such as rules on technology use still exert considerable influence on employees' actions. Drawing on Structuration Theory, we frame this interaction between organization and employee as one of structure and agency. In the process, we pursue an explorative approach and rely on qualitative data from interviews with public-sector employees. We identify four organizational structures that influence people's behavior with respect to IT Consumerization: policies, equipment, tasks and authority. By spotlighting the mutual influence of these organizational structures and Consumerization-related behavior, we show Giddens's duality of structure in action and demonstrate its relevance to the study of IT Consumerization.",
"title": ""
}
] |
scidocsrr
|
cc47ef4cd325b8aed4f114ed2257586f
|
Integrating Programming by Example and Natural Language Programming
|
[
{
"docid": "7d8dcb65acd5e0dc70937097ded83013",
"text": "This paper addresses the problem of mapping natural language sentences to lambda–calculus encodings of their meaning. We describe a learning algorithm that takes as input a training set of sentences labeled with expressions in the lambda calculus. The algorithm induces a grammar for the problem, along with a log-linear model that represents a distribution over syntactic and semantic analyses conditioned on the input sentence. We apply the method to the task of learning natural language interfaces to databases and show that the learned parsers outperform previous methods in two benchmark database domains.",
"title": ""
},
{
"docid": "eb79d012c63ac7904c30a89f62349393",
"text": "Learning programs is a timely and interesting challenge. In Programming by Example (PBE), a system attempts to infer a program from input and output examples alone, by searching for a composition of some set of base functions. We show how machine learning can be used to speed up this seemingly hopeless search problem, by learning weights that relate textual features describing the provided input-output examples to plausible sub-components of a program. This generic learning framework lets us address problems beyond the scope of earlier PBE systems. Experiments on a prototype implementation show that learning improves search and ranking on a variety of text processing tasks found on help forums.",
"title": ""
}
] |
[
{
"docid": "119ba393df80bc197fda2bd893db1bc7",
"text": "Traditional electricity meters are replaced by Smart Meters in customers’ households. Smart Meters collect fine-grained utility consumption profiles from customers, which in turn enables the introduction of dynamic, time-of-use tariffs. However, the fine-grained usage data that is compiled in this process also allows to infer the inhabitant’s personal schedules and habits. We propose a privacy-preserving protocol that enables billing with time-of-use tariffs without disclosing the actual consumption profile to the supplier. Our approach relies on a zero-knowledge proof based on Pedersen Commitments performed by a plug-in privacy component that is put into the communication link between Smart Meter and supplier’s back-end system. We require no changes to the Smart Meter hardware and only small changes to the software of Smart Meter and back-end system. In this paper we describe the functional and privacy requirements, the specification and security proof of our solution and give a performance evaluation of a prototypical implementation.",
"title": ""
},
{
"docid": "8621fff78e92e1e0e9ba898d5e2433ca",
"text": "This paper aims at providing insight on the transferability of deep CNN features to unsupervised problems. We study the impact of different pretrained CNN feature extractors on the problem of image set clustering for object classification as well as fine-grained classification. We propose a rather straightforward pipeline combining deep-feature extraction using a CNN pretrained on ImageNet and a classic clustering algorithm to classify sets of images. This approach is compared to state-of-the-art algorithms in image-clustering and provides better results. These results strengthen the belief that supervised training of deep CNN on large datasets, with a large variability of classes, extracts better features than most carefully designed engineering approaches, even for unsupervised tasks. We also validate our approach on a robotic application, consisting in sorting and storing objects smartly based on clustering.",
"title": ""
},
{
"docid": "3ebe9aecd4c84e9b9ed0837bd294b4ed",
"text": "A bond graph model of a hybrid electric vehicle (HEV) powertrain test cell is proposed. The test cell consists of a motor/generator coupled to a HEV powertrain and powered by a bidirectional power converter. Programmable loading conditions, including positive and negative resistive and inertial loads of any magnitude are modeled, avoiding the use of mechanical inertial loads involved in conventional test cells. The dynamics and control equations of the test cell are derived directly from the bond graph models. The modeling and simulation results of the dynamics of the test cell are validated through experiments carried out on a scaled-down system.",
"title": ""
},
{
"docid": "873aa095401a4f57359f27fcbac88fdd",
"text": "We present an algorithm for estimating the pose of a rigid object in real-time under challenging conditions. Our method effectively handles poorly textured objects in cluttered, changing environments, even when their appearance is corrupted by large occlusions, and it relies on grayscale images to handle metallic environments on which depth cameras would fail. As a result, our method is suitable for practical Augmented Reality applications including industrial environments. At the core of our approach is a novel representation for the 3D pose of object parts: We predict the 3D pose of each part in the form of the 2D projections of a few control points. The advantages of this representation is three-fold: We can predict the 3D pose of the object even when only one part is visible; when several parts are visible, we can easily combine them to compute a better pose of the object; the 3D pose we obtain is usually very accurate, even when only few parts are visible. We show how to use this representation in a robust 3D tracking framework. In addition to extensive comparisons with the state-of-the-art, we demonstrate our method on a practical Augmented Reality application for maintenance assistance in the ATLAS particle detector at CERN.",
"title": ""
},
{
"docid": "c25a59a97870c9296ebf2196d1d10cc7",
"text": "(Background) We proposed a novel computer-aided diagnosis (CAD) system based on the hybridization of biogeography-based optimization (BBO) and particle swarm optimization (PSO), with the goal of detecting pathological brains in MRI scanning. (Method) The proposed method used wavelet entropy (WE) to extract features from MR brain images, followed by feed-forward neural network (FNN) with training method of a Hybridization of BBO and PSO (HBP), which combined the exploration ability of BBO and exploitation ability of PSO. (Results) The 10 repetition of k-fold cross validation result showed that the proposed HBP outperformed existing FNN training methods and that the proposed WE + HBP-FNN outperformed fourteen state-of-the-art CAD systems of MR brain classification in terms of classification accuracy. The proposed method achieved accuracy of 100%, 100%, and 99.49% over Dataset-66, Dataset-160, and Dataset-255, respectively. The offline learning cost 208.2510 s for Dataset-255, and merely 0.053s for online prediction. (Conclusion) The proposed WE + HBP-FNN method achieves nearly perfect detection pathological brains in MRI scanning.",
"title": ""
},
{
"docid": "60f9a34771b844228e1d8da363e89359",
"text": "3-mercaptopyruvate sulfurtransferase (3-MST) was a novel hydrogen sulfide (H2S)-synthesizing enzyme that may be involved in cyanide degradation and in thiosulfate biosynthesis. Over recent years, considerable attention has been focused on the biochemistry and molecular biology of H2S-synthesizing enzyme. In contrast, there have been few concerted attempts to investigate the changes in the expression of the H2S-synthesizing enzymes with disease states. To investigate the changes of 3-MST after traumatic brain injury (TBI) and its possible role, mice TBI model was established by controlled cortical impact system, and the expression and cellular localization of 3-MST after TBI was investigated in the present study. Western blot analysis revealed that 3-MST was present in normal mice brain cortex. It gradually increased, reached a peak on the first day after TBI, and then reached a valley on the third day. Importantly, 3-MST was colocalized with neuron. In addition, Western blot detection showed that the first day post injury was also the autophagic peak indicated by the elevated expression of LC3. Importantly, immunohistochemistry analysis revealed that injury-induced expression of 3-MST was partly colabeled by LC3. However, there was no colocalization of 3-MST with propidium iodide (cell death marker) and LC3 positive cells were partly colocalized with propidium iodide. These data suggested that 3-MST was mainly located in living neurons and may be implicated in the autophagy of neuron and involved in the pathophysiology of brain after TBI.",
"title": ""
},
{
"docid": "dfe502f728d76f9b4294f725eca78413",
"text": "SUMMARY This paper reports work being carried out under the AMODEUS project (BRA 3066). The goal of the project is to develop interdisciplinary approaches to studying human-computer interaction and to move towards applying the results to the practicalities of design. This paper describes one of the approaches the project is taking to represent design-Design Space Analysis. One of its goals is help us bridge from relatively theoretical concerns to the practicalities of design. Design Space Analysis is a central component of a framework for representing the design rationale for designed artifacts. Our current work focusses more specifically on the design of user interfaces. A Design Space Analysis is represented using the QOC notation, which consists of Questions identifying key design issues, Options providing possible answers to the Questions, and Criteria for assessing and comparing the Options. In this paper we give an overview of our approach, some examples of the research issues we are currently tackling and an illustration of its role in helping to integrate the work of some of our project partners with design considerations.",
"title": ""
},
{
"docid": "ff7db3cca724a06c594a525b1f229024",
"text": "At the heart of emotion, mood, and any other emotionally charged event are states experienced as simply feeling good or bad, energized or enervated. These states--called core affect--influence reflexes, perception, cognition, and behavior and are influenced by many causes internal and external, but people have no direct access to these causal connections. Core affect can therefore be experienced as free-floating (mood) or can be attributed to some cause (and thereby begin an emotional episode). These basic processes spawn a broad framework that includes perception of the core-affect-altering properties of stimuli, motives, empathy, emotional meta-experience, and affect versus emotion regulation; it accounts for prototypical emotional episodes, such as fear and anger, as core affect attributed to something plus various nonemotional processes.",
"title": ""
},
{
"docid": "c3ba6fea620b410d5b6d9b07277d431e",
"text": "Nanonetworks, i.e., networks of nano-sized devices, are the enabling technology of long-awaited applications in the biological, industrial and military fields. For the time being, the size and power constraints of nano-devices limit the applicability of classical wireless communication in nanonetworks. Alternatively, nanomaterials can be used to enable electromagnetic (EM) communication among nano-devices. In this paper, a novel graphene-based nano-antenna, which exploits the behavior of Surface Plasmon Polariton (SPP) waves in semi-finite size Graphene Nanoribbons (GNRs), is proposed, modeled and analyzed. First, the conductivity of GNRs is analytically and numerically studied by starting from the Kubo formalism to capture the impact of the electron lateral confinement in GNRs. Second, the propagation of SPP waves in GNRs is analytically and numerically investigated, and the SPP wave vector and propagation length are computed. Finally, the nano-antenna is modeled as a resonant plasmonic cavity, and its frequency response is determined. The results show that, by exploiting the high mode compression factor of SPP waves in GNRs, graphene-based plasmonic nano-antennas are able to operate at much lower frequencies than their metallic counterparts, e.g., the Terahertz Band for a one-micrometer-long ten-nanometers-wide antenna. This result has the potential to enable EM communication in nanonetworks.",
"title": ""
},
{
"docid": "a6fbd3f79105fd5c9edfc4a0292a3729",
"text": "The widespread use of templates on the Web is considered harmful for two main reasons. Not only do they compromise the relevance judgment of many web IR and web mining methods such as clustering and classification, but they also negatively impact the performance and resource usage of tools that process web pages. In this paper we present a new method that efficiently and accurately removes templates found in collections of web pages. Our method works in two steps. First, the costly process of template detection is performed over a small set of sample pages. Then, the derived template is removed from the remaining pages in the collection. This leads to substantial performance gains when compared to previous approaches that combine template detection and removal. We show, through an experimental evaluation, that our approach is effective for identifying terms occurring in templates - obtaining F-measure values around 0.9, and that it also boosts the accuracy of web page clustering and classification methods.",
"title": ""
},
{
"docid": "bbd378407abb1c2a9a5016afee40c385",
"text": "One approach to the generation of natural-sounding synthesized speech waveforms is to select and concatenate units from a large speech database. Units (in the current work, phonemes) are selected to produce a natural realisation of a target phoneme sequence predicted from text which is annotated with prosodic and phonetic context information. We propose that the units in a synthesis database can be considered as a state transition network in which the state occupancy cost is the distance between a database unit and a target, and the transition cost is an estimate of the quality of concatenation of two consecutive units. This framework has many similarities to HMM-based speech recognition. A pruned Viterbi search is used to select the best units for synthesis from the database. This approach to waveform synthesis permits training from natural speech: two methods for training from speech are presented which provide weights which produce more natural speech than can be obtained by hand-tuning.",
"title": ""
},
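The unit-selection formulation above (a target cost for each candidate unit plus a join cost between consecutive units, minimized with a Viterbi search over the lattice) can be illustrated with a minimal Python sketch. This is not the authors' system; the cost values, the `join_cost` signature, and the toy candidate lists below are hypothetical placeholders.

```python
# Minimal Viterbi search over a lattice of candidate units.
# target_costs[t][i] : cost of using candidate i for target position t
# join_cost(a, b)    : cost of concatenating unit a=(t, i) with unit b=(t+1, j)
def viterbi_select(target_costs, join_cost):
    n = len(target_costs)
    # best[t][i] = (cheapest cost of a path ending in candidate i at step t, backpointer)
    best = [[(c, None) for c in target_costs[0]]]
    for t in range(1, n):
        row = []
        for i, tc in enumerate(target_costs[t]):
            scores = [best[t - 1][j][0] + join_cost((t - 1, j), (t, i))
                      for j in range(len(target_costs[t - 1]))]
            j_best = min(range(len(scores)), key=scores.__getitem__)
            row.append((scores[j_best] + tc, j_best))
        best.append(row)
    # pick the cheapest final state and backtrack
    i = min(range(len(best[-1])), key=lambda k: best[-1][k][0])
    total = best[-1][i][0]
    path = [i]
    for t in range(n - 1, 0, -1):
        i = best[t][i][1]
        path.append(i)
    return path[::-1], total

# Toy lattice: 3 target positions, 2 candidate units each, flat join cost.
costs = [[1.0, 2.0], [0.5, 0.1], [0.3, 0.7]]
print(viterbi_select(costs, lambda a, b: 0.2 if a[1] != b[1] else 0.0))
```

A real system would replace the toy numbers with prosodic/phonetic target distances and spectral join distances, and would prune the lattice for speed.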
{
"docid": "7392769dae1e2859bb264774778860a0",
"text": "Abstract form only given. Communications is becoming increasingly important to the operation of protection and control schemes. Although offering many benefits, using standards-based communications, particularly IEC 61850, in the course of the research and development of novel schemes can be complex. This paper describes an open source platform which enables the rapid-prototyping of communications-enhanced schemes. The platform automatically generates the data model and communications code required for an Intelligent Electronic Device (IED) to implement publisher-subscriber Generic Object-Oriented Substation Event (GOOSE) and Sampled Value (SV) messaging. The generated code is tailored to a particular System Configuration Description (SCD) file, and is therefore extremely efficient at run-time. It is shown how a model-centric tool, such as the open source Eclipse Modeling Framework, can be used to manage the complexity of the IEC 61850 standard, by providing a framework for validating SCD files and by automating parts of the code generation process.",
"title": ""
},
{
"docid": "80bf80719a1751b16be2420635d34455",
"text": "Mood disorders are inherently related to emotion. In particular, the behaviour of people suffering from mood disorders such as unipolar depression shows a strong temporal correlation with the affective dimensions valence, arousal and dominance. In addition to structured self-report questionnaires, psychologists and psychiatrists use in their evaluation of a patient's level of depression the observation of facial expressions and vocal cues. It is in this context that we present the fourth Audio-Visual Emotion recognition Challenge (AVEC 2014). This edition of the challenge uses a subset of the tasks used in a previous challenge, allowing for more focussed studies. In addition, labels for a third dimension (Dominance) have been added and the number of annotators per clip has been increased to a minimum of three, with most clips annotated by 5. The challenge has two goals logically organised as sub-challenges: the first is to predict the continuous values of the affective dimensions valence, arousal and dominance at each moment in time. The second is to predict the value of a single self-reported severity of depression indicator for each recording in the dataset. This paper presents the challenge guidelines, the common data used, and the performance of the baseline system on the two tasks.",
"title": ""
},
{
"docid": "806eb562d4e2f1c8c45a08d7a8e7ce31",
"text": "We study admissibility of inference rules and unification with parameters in transitive modal logics (extensions of K4), in particular we generalize various results on parameterfree admissibility and unification to the setting with parameters. Specifically, we give a characterization of projective formulas generalizing Ghilardi’s characterization in the parameter-free case, leading to new proofs of Rybakov’s results that admissibility with parameters is decidable and unification is finitary for logics satisfying suitable frame extension properties (called cluster-extensible logics in this paper). We construct explicit bases of admissible rules with parameters for cluster-extensible logics, and give their semantic description. We show that in the case of finitely many parameters, these logics have independent bases of admissible rules, and determine which logics have finite bases. As a sideline, we show that cluster-extensible logics have various nice properties: in particular, they are finitely axiomatizable, and have an exponential-size model property. We also give a rather general characterization of logics with directed (filtering) unification. In the sequel, we will use the same machinery to investigate the computational complexity of admissibility and unification with parameters in cluster-extensible logics, and we will adapt the results to logics with unique top cluster (e.g., S4.2) and superintuitionistic logics.",
"title": ""
},
{
"docid": "d96373920011674bbb6b2008e9d4eec2",
"text": "Social networking site users must decide what content to share and with whom. Many social networks, including Facebook, provide tools that allow users to selectively share content or block people from viewing content. However, sometimes instead of targeting a particular audience, users will self-censor, or choose not to share. We report the results from an 18-participant user study designed to explore self-censorship behavior as well as the subset of unshared content participants would have potentially shared if they could have specifically targeted desired audiences. We asked participants to report all content they thought about sharing but decided not to share on Facebook and interviewed participants about why they made sharing decisions and with whom they would have liked to have shared or not shared. Participants reported that they would have shared approximately half the unshared content if they had been able to exactly target their desired audiences.",
"title": ""
},
{
"docid": "019f4534383668216108a456ac086610",
"text": "Cloud computing is an emerging paradigm for large scale infrastructures. It has the advantage of reducing cost by sharing computing and storage resources, combined with an on-demand provisioning mechanism relying on a pay-per-use business model. These new features have a direct impact on the budgeting of IT budgeting but also affect traditional security, trust and privacy mechanisms. Many of these mechanisms are no longer adequate, but need to be rethought to fit this new paradigm. In this paper we assess how security, trust and privacy issues occur in the context of cloud computing and discuss ways in which they may be addressed.",
"title": ""
},
{
"docid": "745562de56499ff0030f35afa8d84b7f",
"text": "This paper will show how the accuracy and security of SCADA systems can be improved by using anomaly detection to identify bad values caused by attacks and faults. The performance of invariant induction and ngram anomaly-detectors will be compared and this paper will also outline plans for taking this work further by integrating the output from several anomalydetecting techniques using Bayesian networks. Although the methods outlined in this paper are illustrated using the data from an electricity network, this research springs from a more general attempt to improve the security and dependability of SCADA systems using anomaly detection.",
"title": ""
},
{
"docid": "440b6eb0db7d28e85b74fd92c17dd818",
"text": "Recent advances in health and life sciences have led to generation of a large amount of data. To facilitate access to its desired parts, such a big mass of data has been represented in structured forms, like biomedical ontologies. On the other hand, representing ontologies in a formal language, constructing them independently from each other and storing them at different locations have brought about many challenges for answering queries about the knowledge represented in these ontologies. One of the challenges for the users is to be able represent a complex query in a natural language, and get its answers in an understandable form: Currently, such queries are answered by software systems in a formal language, however, the majority of the users lack the necessary knowledge of a formal query language to represent a query; moreover, none of these systems can provide informative explanations about the answers. Another challenge is to be able to answer complex queries that require appropriate integration of relevant knowledge stored in different places and in various forms. In this work, we address the first challenge by developing an intelligent user interface that allows users to enter biomedical queries in a natural language, and that presents the answers (possibly with explanations if requested) in a natural language. We address the second challenge by developing a rule layer over biomedical ontologies and databases, and use automated reasoners to answer queries considering relevant parts of the rule layer. The main contributions of our work can be summarized as follows:",
"title": ""
}
] |
scidocsrr
|
5985d7fc2cbc3bb182c513981b3fd821
|
Cross-Age LFW: A Database for Studying Cross-Age Face Recognition in Unconstrained Environments
|
[
{
"docid": "4143ba04659bf7b46c1733ae42b08956",
"text": "Recent face recognition experiments on the LFW [13] benchmark show that face recognition is performing stunningly well, surpassing human recognition rates. In this paper, we study face recognition at scale. Specifically, we have collected from Flickr a Million faces and evaluated state of the art face recognition algorithms on this dataset. We found that the performance of algorithms varies–while all perform great on LFW, once evaluated at scale recognition rates drop drastically for most algorithms. Interestingly, deep learning based approach by [23] performs much better, but still gets less robust at scale. We consider both verification and identification problems, and evaluate how pose affects recognition at scale. Moreover, we ran an extensive human study on Mechanical Turk to evaluate human recognition at scale, and report results. All the photos are creative commons photos and are released for research and further experiments on http://megaface. cs.washington.edu.",
"title": ""
}
] |
[
{
"docid": "f7e779114a0eb67fd9e3dfbacf5110c9",
"text": "Online game is an increasingly popular source of entertainment for all ages, with relatively prevalent negative consequences. Addiction is a problem that has received much attention. This research aims to develop a measure of online game addiction for Indonesian children and adolescents. The Indonesian Online Game Addiction Questionnaire draws from earlier theories and research on the internet and game addiction. Its construction is further enriched by including findings from qualitative interviews and field observation to ensure appropriate expression of the items. The measure consists of 7 items with a 5-point Likert Scale. It is validated by testing 1,477 Indonesian junior and senior high school students from several schools in Manado, Medan, Pontianak, and Yogyakarta. The validation evidence is shown by item-total correlation and criterion validity. The Indonesian Online Game Addiction Questionnaire has good item-total correlation (ranging from 0.29 to 0.55) and acceptable reliability (α = 0.73). It is also moderately correlated with the participant's longest time record to play online games (r = 0.39; p<0.01), average days per week in playing online games (ρ = 0.43; p<0.01), average hours per days in playing online games (ρ = 0.41; p<0.01), and monthly expenditure for online games (ρ = 0.30; p<0.01). Furthermore, we created a clinical cut-off estimate by combining criteria and population norm. The clinical cut-off estimate showed that the score of 14 to 21 may indicate mild online game addiction, and the score of 22 and above may indicate online game addiction. Overall, the result shows that Indonesian Online Game Addiction Questionnaire has sufficient psychometric property for research use, as well as limited clinical application.",
"title": ""
},
{
"docid": "a1ce51b0d9c54ef4b2bd3d797cb7425c",
"text": "Classification and segmentation of 3D point clouds are important tasks in computer vision. Because of the irregular nature of point clouds, most of the existing methods convert point clouds into regular 3D voxel grids before they are used as input for ConvNets. Unfortunately, voxel representations are highly insensitive to the geometrical nature of 3D data. More recent methods encode point clouds to higher dimensional features to cover the global 3D space. However, these models are not able to sufficiently capture the local structures of point clouds. Therefore, in this paper, we propose a method that exploits both local and global contextual cues imposed by the k-d tree. The method is designed to learn representation vectors progressively along the tree structure. Experiments on challenging benchmarks show that the proposed model provides discriminative point set features. For the task of 3D scene semantic segmentation, our method significantly outperforms the state-of-the-art on the Stanford Large-Scale 3D Indoor Spaces Dataset (S3DIS).",
"title": ""
},
{
"docid": "4828e830d440cb7a2c0501952033da2f",
"text": "This paper presents a current-mode control non-inverting buck-boost converter. The proposed circuit is controlled by the current mode and operated in three operation modes which are buck, buck-boost, and boost mode. The operation mode is automatically determined by the ratio between the input and output voltages. The proposed circuit is simulated by HSPICE with 0.5 um standard CMOS parameters. Its input voltage range is 2.5–5 V, and the output voltage range is 1.5–5 V. The maximum efficiency is 92% when it operates in buck mode.",
"title": ""
},
{
"docid": "3853fdb51a5e66c9fe83288c37bdad12",
"text": "We report a case of a young girl with Turner syndrome presenting with a pulsatile left-sided supraclavicular swelling since birth, which proved to be the rare anomaly of a cervical aortic arch. Though elongation of the transverse aortic arch is well known in Turner syndrome, to the best of our knowledge, a cervical aortic arch has not been described in the literature.",
"title": ""
},
{
"docid": "7ca61d792514de258fe0140cf833552d",
"text": "The design of a high-voltage output driver in a digital 0.25-/spl mu/m 2.5-V technology is presented. The use of stacked devices with a self-biased cascode topology allows the driver to operate at three times the nominal supply voltage. Oxide stress and hot carrier degradation is minimized since the driver operates within the voltage limits imposed by the design rules of a mainstream CMOS technology. The proposed high-voltage architecture uses a switching output stage. The realized prototype delivers an output swing of 6.46 V to a 50-/spl Omega/ load with a 7.5-V supply and an input square wave of 10 MHz. A PWM signal with a dual-tone sinusoid at 70 kHz and 250 kHz results in an IM3 of -65 dB and an IM2 of -67 dB. The on-resistance is 5.9 /spl Omega/.",
"title": ""
},
{
"docid": "2511dfd2f00448125ef1ea28d84a7439",
"text": "Libraries and other institutions are interested in providing access to scanned versions of their large collections of handwritten historical manuscripts on electronic media. Convenient access to a collection requires an index, which is manually created at great labour and expense. Since current handwriting recognizers do not perform well on historical documents, a technique called word spotting has been developed: clusters with occurrences of the same word in a collection are established using image matching. By annotating “interesting” clusters, an index can be built automatically. We present an algorithm for matching handwritten words in noisy historical documents. The segmented word images are preprocessed to create sets of 1-dimensional features, which are then compared using dynamic time warping. We present experimental results on two different data sets from the George Washington collection. Our experiments show that this algorithm performs better and is faster than competing matching techniques.",
"title": ""
},
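As a rough illustration of the matching step described in the word-spotting abstract above, the following sketch computes a dynamic-time-warping distance between two 1-D feature sequences. It is not the authors' implementation; the column-wise features shown are hypothetical stand-ins for the profile features extracted from segmented word images.

```python
# Minimal dynamic time warping (DTW) between two 1-D feature sequences,
# as used to compare segmented word images after feature extraction.
def dtw_distance(a, b):
    n, m = len(a), len(b)
    INF = float("inf")
    # D[i][j] = cost of the best warping path aligning a[:i] with b[:j]
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])          # local distance
            D[i][j] = cost + min(D[i - 1][j],        # insertion
                                 D[i][j - 1],        # deletion
                                 D[i - 1][j - 1])    # match
    # normalise by sequence lengths so words of different widths are comparable
    return D[n][m] / (n + m)

# Hypothetical column-wise features (e.g. projection profiles) of two word images.
word_a = [0.1, 0.4, 0.9, 0.8, 0.3]
word_b = [0.1, 0.5, 0.8, 0.9, 0.7, 0.3]
print(dtw_distance(word_a, word_b))
```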
{
"docid": "4930fa19f6374774a5f4575b56159e50",
"text": "We present a study of the correlation between the extent to which the cluster hypothesis holds, as measured by various tests, and the relative effectiveness of cluster-based retrieval with respect to document-based retrieval. We show that the correlation can be affected by several factors, such as the size of the result list of the most highly ranked documents that is analyzed. We further show that some cluster hypothesis tests are often negatively correlated with one another. Moreover, in several settings, some of the tests are also negatively correlated with the relative effectiveness of cluster-based retrieval.",
"title": ""
},
{
"docid": "f61d5c1b0c17de6aab8a0eafedb46311",
"text": "The use of social media creates the opportunity to turn organization-wide knowledge sharing in the workplace from an intermittent, centralized knowledge management process to a continuous online knowledge conversation of strangers, unexpected interpretations and re-uses, and dynamic emergence. We theorize four affordances of social media representing different ways to engage in this publicly visible knowledge conversations: metavoicing, triggered attending, network-informed associating, and generative role-taking. We further theorize mechanisms that affect how people engage in the knowledge conversation, finding that some mechanisms, when activated, will have positive effects on moving the knowledge conversation forward, but others will have adverse consequences not intended by the organization. These emergent tensions become the basis for the implications we draw.",
"title": ""
},
{
"docid": "64160c1842b00377b07da7797f6002d0",
"text": "The macaque monkey ventral intraparietal area (VIP) contains neurons with aligned visual-tactile receptive fields anchored to the face and upper body. Our previous fMRI studies using standard head coils found a human parietal face area (VIP+ complex; putative macaque VIP homologue) containing superimposed topological maps of the face and near-face visual space. Here, we construct high signal-to-noise surface coils and used phase-encoded air puffs and looming stimuli to map topological organization of the parietal face area at higher resolution. This area is consistently identified as a region extending between the superior postcentral sulcus and the upper bank of the anterior intraparietal sulcus (IPS), avoiding the fundus of IPS. Using smaller voxel sizes, our surface coils picked up strong fMRI signals in response to tactile and visual stimuli. By analyzing tactile and visual maps in our current and previous studies, we constructed a set of topological models illustrating commonalities and differences in map organization across subjects. The most consistent topological feature of the VIP+ complex is a central-anterior upper face (and upper visual field) representation adjoined by lower face (and lower visual field) representations ventrally (laterally) and/or dorsally (medially), potentially forming two subdivisions VIPv (ventral) and VIPd (dorsal). The lower visual field representations typically extend laterally into the anterior IPS to adjoin human area AIP, and medially to overlap with the parietal body areas at the superior parietal ridge. Significant individual variations are then illustrated to provide an accurate and comprehensive view of the topological organization of the parietal face area.",
"title": ""
},
{
"docid": "cd31be485b4b914508a5a9e7c5445459",
"text": "Deep learning has become increasingly popular in both academic and industrial areas in the past years. Various domains including pattern recognition, computer vision, and natural language processing have witnessed the great power of deep networks. However, current studies on deep learning mainly focus on data sets with balanced class labels, while its performance on imbalanced data is not well examined. Imbalanced data sets exist widely in real world and they have been providing great challenges for classification tasks. In this paper, we focus on the problem of classification using deep network on imbalanced data sets. Specifically, a novel loss function called mean false error together with its improved version mean squared false error are proposed for the training of deep networks on imbalanced data sets. The proposed method can effectively capture classification errors from both majority class and minority class equally. Experiments and comparisons demonstrate the superiority of the proposed approach compared with conventional methods in classifying imbalanced data sets on deep neural networks.",
"title": ""
},
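A minimal sketch of how a mean-false-error-style loss differs from plain MSE on imbalanced data is given below. The exact formulation used here (sum of per-class mean squared errors, and the sum of their squares for the squared variant) is an assumption for illustration rather than necessarily the paper's precise definition, and the toy labels and predictions are made up.

```python
import numpy as np

def per_class_errors(y_true, y_pred):
    """Mean squared error computed separately for each class (binary labels)."""
    y_true = np.asarray(y_true, float)
    y_pred = np.asarray(y_pred, float)
    errs = []
    for c in (0.0, 1.0):
        mask = y_true == c
        if mask.any():
            errs.append(np.mean((y_true[mask] - y_pred[mask]) ** 2))
    return errs

def mfe(y_true, y_pred):
    # mean false error: each class contributes its *average* error
    return sum(per_class_errors(y_true, y_pred))

def msfe(y_true, y_pred):
    # mean squared false error: squares the per-class errors before summing
    return sum(e ** 2 for e in per_class_errors(y_true, y_pred))

# 95 majority-class samples predicted well, 5 minority-class samples predicted badly:
y_true = np.array([0] * 95 + [1] * 5)
y_pred = np.concatenate([np.full(95, 0.05), np.full(5, 0.40)])
print("plain MSE:", np.mean((y_true - y_pred) ** 2))  # dominated by the majority class
print("MFE      :", mfe(y_true, y_pred))              # minority errors weigh equally
print("MSFE     :", msfe(y_true, y_pred))
```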
{
"docid": "f8a32f8ccbc14ce1f4e4f5029ef122b8",
"text": "Content-based image retrieval (CBIR) is one of the most important applications of computer vision. In recent years, there have been many important advances in the development of CBIR systems, especially Convolutional Neural Networks (CNNs) and other deep-learning techniques. On the other hand, current CNN-based CBIR systems suffer from high computational complexity of CNNs. This problem becomes more severe as mobile applications become more and more popular. The current practice is to deploy the entire CBIR systems on the server side while the client side only serves as an image provider. This architecture can increase the computational burden on the server side, which needs to process thousands of requests per second. Moreover, sending images have the potential of personal information leakage. As the need of mobile search expands, concerns about privacy are growing. In this article, we propose a fast image search framework, named DeepSearch, which makes complex image search based on CNNs feasible on mobile phones. To implement the huge computation of CNN models, we present a tensor Block Term Decomposition (BTD) approach as well as a nonlinear response reconstruction method to accelerate the CNNs involving in object detection and feature extraction. The extensive experiments on the ImageNet dataset and Alibaba Large-scale Image Search Challenge dataset show that the proposed accelerating approach BTD can significantly speed up the CNN models and further makes CNN-based image search practical on common smart phones.",
"title": ""
},
{
"docid": "4f7bdbd70810be61870a9096f6d7ab13",
"text": "The limited screen size and resolution of current mobile devices can still be problematic for map, multimedia and browsing applications. In this paper we present Touch & Interact: an interaction technique in which a mobile phone is able to touch a display, at any position, to perform selections. Through the combination of the output capabilities of the mobile phone and display, applications can share the entire display space. Moreover, there is potential to realize new interaction techniques between the phone and display. For example, select & pick and select & drop are interactions whereby entities can be picked up onto the phone or dropped onto the display. We report the implementation of Touch & Interact, its usage for a tourist guide application and experimental comparison. The latter shows that the performance of Touch & Interact is comparable to approaches based on a touch screen; it also shows the advantages of our system regarding ease of use, intuitiveness and enjoyment.",
"title": ""
},
{
"docid": "06a1d90991c5a9039c6758a66205e446",
"text": "In this paper, we study how to improve the domain adaptability of a deletion-based Long Short-Term Memory (LSTM) neural network model for sentence compression. We hypothesize that syntactic information helps in making such models more robust across domains. We propose two major changes to the model: using explicit syntactic features and introducing syntactic constraints through Integer Linear Programming (ILP). Our evaluation shows that the proposed model works better than the original model as well as a traditional non-neural-network-based model in a cross-domain setting.",
"title": ""
},
{
"docid": "d61283cb0873485e94213825b51880eb",
"text": "Influenza type A virus (Influenza virus A), an old foe of mankind, is presently the most significant pathogen causing both pandemics and epizootics, worldwide. The proliferative husbandry of poultry and pigs, primarily, constitutes a key factor in ongoing generation of pandemic and pre-pandemic strains, which is fueled by remarkable wild aquatic bird permissiveness of the virus. Those attributes are here thoroughly inquired into, so as to profile and rate threat and usability. Also, various human interventions and misuses, including human experimental infections, undesirable vaccinations, as well as unauthorized and unskillful operations, led to bad corollaries and may rather be reassessed and modified or disallowed. Diversified interfaces between influenza and human manners are thereby brought out and elucidated, along with their lessons.",
"title": ""
},
{
"docid": "25c41bdba8c710b663cb9ad634b7ae5d",
"text": "Massive data streams are now fundamental to many data processing applications. For example, Internet routers produce large scale diagnostic data streams. Such streams are rarely stored in traditional databases, and instead must be processed “on the fly” as they are produced. Similarly, sensor networks produce multiple data streams of observations from their sensors. There is growing focus on manipulating data streams, and hence, there is a need to identify basic operations of interest in managing data streams, and to support them efficiently. We propose computation of the Hamming norm as a basic operation of interest. The Hamming norm formalises ideas that are used throughout data processing. When applied to a single stream, the Hamming norm gives the number of distinct items that are present in that data stream, which is a statistic of great interest in databases. When applied to a pair of streams, the Hamming norm gives an important measure of (dis)similarity: the number of unequal item counts in the two streams. Hamming norms have many uses in comparing data streams. We present a novel approximation technique for estimating the Hamming norm for massive data streams; this relies on what we call the “ l0 sketch” and we prove its accuracy. We test our approximation method on a large quantity of synthetic and real stream data, and show that the estimation is accurate to within a few percentage points. Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the VLDB copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Very Large Data Base Endowment. To copy otherwise, or to republish, requires a fee and/or special permission from the Endowment. Proceedings of the 28th VLDB Conference, Hong Kong, China, 2002",
"title": ""
},
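To make concrete the quantity being approximated in the abstract above, the sketch below computes the exact Hamming norm of a stream (the number of distinct items) and the Hamming distance between two streams (the number of items whose counts differ). The l0-sketch approximation itself is not reproduced here, and the example streams are hypothetical.

```python
from collections import Counter

def hamming_norm(stream):
    """Exact Hamming (L0) norm of one stream: number of items with a non-zero count."""
    return sum(1 for v in Counter(stream).values() if v != 0)

def hamming_distance(stream_a, stream_b):
    """Number of items whose counts differ between the two streams."""
    ca, cb = Counter(stream_a), Counter(stream_b)
    return sum(1 for key in set(ca) | set(cb) if ca[key] != cb[key])

a = ["x", "x", "y", "z"]
b = ["x", "y", "y", "z"]
print(hamming_norm(a))         # 3 distinct items
print(hamming_distance(a, b))  # counts of "x" and "y" differ -> 2
```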
{
"docid": "4859e7f8bfc31401e19e360386867ae2",
"text": "Health data is important as it provides an individual with knowledge of the factors needed to be improved for oneself. The development of fitness trackers and their associated software aid consumers to understand the manner in which they may improve their physical wellness. These devices are capable of collecting health data for a consumer such sleeping patterns, heart rate readings or the number of steps taken by an individual. Although, this information is very beneficial to guide a consumer to a better healthier state, it has been identified that they have privacy and security concerns. Privacy and Security are of great concern for fitness trackers and their associated applications as protecting health data is of critical importance. This is so, as health data is one of the highly sort after information by cyber criminals. Fitness trackers and their associated applications have been identified to contain privacy and security concerns that places the health data of consumers at risk to intruders. As the study of Consumer Health continues to grow it is vital to understand the elements that are needed to better protect the health information of a consumer. This research paper therefore provides a conceptual threat assessment framework that can be used to identify the elements needed to better secure Consumer Health Wearables. These elements consist of six core elements from the CIA triad and Microsoft STRIDE framework. Fourteen vulnerabilities were further discovered that were classified within these six core elements. Through this, better guidance can be achieved to improve the privacy and security of Consumer Health Wearables.",
"title": ""
},
{
"docid": "e974154b7e25cb14c6bac2b0698299ac",
"text": "We study a stylized dynamic assortment planning problem during a selling season of finite length T . At each time spot, the seller offers an arriving customer an assortment of substitutable products and the customer makes the purchase among offered products according to a discrete choice model. The goal of the seller is to maximize the expected revenue, or equivalently, to minimize the worst-case expected regret. One key challenge is that utilities of products are unknown to the seller, which need to be learned. Although dynamic assortment planning problem has received an increasing attention in revenue management, most existing work is based on the multinomial logit choice models (MNL). In this paper, we study the problem of dynamic assortment planning under a more general choice model— nested logit model, which models hierarchical choice behavior and is “the most widely used member of the GEV (generalized extreme value) family” (Train, 2009). By leveraging the revenue-ordered structure of the optimal assortment within each nest, we develop a novel upper confidence bound (UCB) policy with an aggregated estimation scheme. Our policy simultaneously learns customers’ choice behavior and makes dynamic decisions on assortments based on the current knowledge. It achieves the regret at the order of Õ( √ MNT + MN), where M is the number of nests and N is the number of products in each nest. We further provide a lower bound result of Ω( √ MT ), which shows the optimality of the upper bound when T > M and N is small. However, the N term in the upper bound is not ideal for applications where N is large as compared to T . To address this issue, we further generalize our first policy by introducing a discretization technique, which leads to a regret of Õ( √ MT 2/3 + MNT ) with a specific choice of discretization granularity. It improves the previous regret bound whenever N > T . We provide numerical results to demonstrate the empirical performance of both proposed policies.",
"title": ""
},
{
"docid": "1bfe0c412abc11eeb664ad741a4239fa",
"text": "The Medium Access Control (MAC) protocol through which mobile stations can share a common broadcast channel is essential in an ad-hoc network. Due to the existence of hidden terminal problem, partially-connected network topology and lack of central administration, existing popular MAC protocols like IEEE 802.11 Distributed Foundation Wireless Medium Access Control (DFWMAC) [1] may lead to \"capture\" effects which means that some stations grab the shared channel and other stations suffer from starvation. This is also known as the \"fairness problem\". This paper reviews some related work in the literature and proposes a general approach to address the problem. This paper borrows the idea of fair queueing from wireline networks and defines the \"fairness index\" for ad-hoc network to quantify the fairness, so that the goal of achieving fairness becomes equivalent to minimizing the fairness index. Then this paper proposes a different backoff scheme for IEEE 802.11 DFWMAC, instead of the original binary exponential backoff scheme. Simulation results show that the new backoff scheme can achieve far better fairness without loss of simplicity.",
"title": ""
},
{
"docid": "95a376ec68ac3c4bd6b0fd236dca5bcd",
"text": "Long-term suppression of postprandial glucose concentration is an important dietary strategy for the prevention and treatment of type 2 diabetes. Because previous reports have suggested that seaweed may exert anti-diabetic effects in animals, the effects of Wakame or Mekabu intake with 200 g white rice, 50 g boiled soybeans, 60 g potatoes, and 40 g broccoli on postprandial glucose, insulin and free fatty acid levels were investigated in healthy subjects. Plasma glucose levels at 30 min and glucose area under the curve (AUC) at 0-30 min after the Mekabu meal were significantly lower than that after the control meal. Plasma glucose and glucose AUC were not different between the Wakame and control meals. Postprandial serum insulin and its AUC and free fatty acid concentration were not different among the three meals. In addition, fullness, satisfaction, and wellness scores were not different among the three meals. Thus, consumption of 70 g Mekabu with a white rice-based breakfast reduces postprandial glucose concentration.",
"title": ""
},
{
"docid": "9ff1b40d45182124af3788cd9f55935a",
"text": "with increase in the size of Web, the search engine relies on Web Crawlers to build and maintain the index of billions of pages for efficient searching. The creation and maintenance of Web indices is done by Web crawlers, the crawlers recursively traverses and downloads Web pages on behalf of search engines. The exponential growth of Web poses many challenges for crawlers.This paper makes an attempt to classify all the existing crawlers on certain parameters and also identifies the various challenges to web crawlers. Keywords— WWW, URL, Mobile Crawler, Mobile Agents, Web Crawler.",
"title": ""
}
] |
scidocsrr
|
15f23f09085e0dae423253cfe45ca814
|
A fuzzy model for wind speed prediction and power generation in wind parks using spatial correlation
|
[
{
"docid": "00b8207e783aed442fc56f7b350307f6",
"text": "A mathematical tool to build a fuzzy model of a system where fuzzy implications and reasoning are used is presented. The premise of an implication is the description of fuzzy subspace of inputs and its consequence is a linear input-output relation. The method of identification of a system using its input-output data is then shown. Two applications of the method to industrial processes are also discussed: a water cleaning process and a converter in a steel-making process.",
"title": ""
},
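A minimal sketch of Takagi-Sugeno-style inference, in which each rule pairs a fuzzy premise with a linear input-output consequent and the output is the firing-strength-weighted average of the consequents, is shown below. The Gaussian membership functions and the two rules are hypothetical, and the identification procedure from input-output data described in the abstract is not implemented.

```python
import numpy as np

def gaussian_mf(x, center, width):
    """Membership degree of x in a Gaussian fuzzy set."""
    return np.exp(-0.5 * ((x - center) / width) ** 2)

def ts_inference(x, rules):
    """Takagi-Sugeno style inference for a single scalar input x.

    Each rule is (center, width, (a, b)): 'IF x is around `center`
    THEN y = a * x + b'.  The crisp output is the firing-strength-weighted
    average of the linear consequents.
    """
    weights = np.array([gaussian_mf(x, c, w) for c, w, _ in rules])
    outputs = np.array([a * x + b for _, _, (a, b) in rules])
    return float(np.dot(weights, outputs) / np.sum(weights))

# Two hypothetical rules, as might be identified from input-output data:
rules = [
    (0.0, 1.0, (0.5, 0.0)),   # near x = 0: y ~ 0.5 x
    (5.0, 1.0, (2.0, -3.0)),  # near x = 5: y ~ 2 x - 3
]
for x in (0.0, 2.5, 5.0):
    print(x, ts_inference(x, rules))
```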
{
"docid": "338a8efaaf4a790b508705f1f88872b2",
"text": "During the past several years, fuzzy control has emerged as one of the most active and fruitful areas for research in the applications of fuzzy set theory, especially in the realm of industrial processes, which do not lend themselves to control by conventional methods because of a lack of quantitative data regarding the input-output relations. Fuzzy control is based on fuzzy logic-a logical system that is much closer in spirit to human thinking and natural language than traditional logical systems. The fuzzy logic controller (FLC) based on fuzzy logic provides a means of converting a linguistic control strategy based on expert knowledge into an automatic control strategy. A survey of the FLC is presented ; a general methodology for constructing an FLC and assessing its performance is described; and problems that need further research are pointed out. In particular, the exposition includes a discussion of fuzzification and defuzzification strategies, the derivation of the database and fuzzy control rules, the definition of fuzzy implication, and an analysis of fuzzy reasoning mechanisms. A may be regarded as a means of emulating a skilled human operator. More generally, the use of an FLC may be viewed as still another step in the direction of model-ing human decisionmaking within the conceptual framework of fuzzy logic and approximate reasoning. In this context, the forward data-driven inference (generalized modus ponens) plays an especially important role. In what follows, we shall investigate fuzzy implication functions, the sentence connectives and and also, compositional operators, inference mechanisms, and other concepts that are closely related to the decisionmaking logic of an FLC. In general, a fuzzy control rule is a fuzzy relation which is expressed as a fuzzy implication. In fuzzy logic, there are many ways in which a fuzzy implication may be defined. The definition of a fuzzy implication may be expressed as a fuzzy implication function. The choice of a fuzzy implication function reflects not only the intuitive criteria for implication but also the effect of connective also. I) Basic Properties of a Fuuy Implication Function: The choice of a fuzzy implication function involves a number of criteria, which are discussed in considered the following basic characteristics of a fuzzy implication function: fundamental property, smoothness property, unrestricted inference, symmetry of generalized modus ponens and generalized modus tollens, and a measure of propagation of fuzziness. All of these properties are justified on purely intuitive grounds. We prefer to say …",
"title": ""
}
] |
[
{
"docid": "f370a8ff8722d341d6e839ec2c7217c1",
"text": "We give the first O(mpolylog(n)) time algorithms for approximating maximum flows in undirected graphs and constructing polylog(n)-quality cut-approximating hierarchical tree decompositions. Our algorithm invokes existing algorithms for these two problems recursively while gradually incorporating size reductions. These size reductions are in turn obtained via ultra-sparsifiers, which are key tools in solvers for symmetric diagonally dominant (SDD) linear systems.",
"title": ""
},
{
"docid": "5d3738f554cbcba51d59ac18087795e0",
"text": "This study examined the role of monoand biarticular muscles in control of countermovement jumps (CMJ) in different directions. It was hypothesized that monoarticular muscles would demonstrate the same activity regardless of jump direction, based on previous studies which suggest their role is to generate energy to maximize center-of-mass (CM) velocity. In contrast, biarticular activity patterns were expected to change to control the direction of the ground reaction force (GRF) and CM velocity vectors. Twelve participants performed maximal CMJs in four directions: vertical, forward, intermediate forward, and backward. Electromyographical data from 4 monoarticular and 3 biarticular lower extremity muscles were analyzed with respect to segmental kinematics and kinetics during the jumps. The biarticular rectus femoris (RF), hamstrings (HA), and gastrocnemius all exhibited changes in activity magnitude and pattern as a function of jump angle. In particular, HA and RF demonstrated reciprocal trends, with HA activity increasing as jump angle changed from backward to forward, while RF activity was reduced in the forward jump condition. The vastus lateralis and gluteus maximus both demonstrated changes in activity patterns, although the former was the only monoarticular muscle to change activity level with jump direction. Monoand biarticular muscle activities therefore did not fit with their hypothesized roles. CM and segmental kinematics suggest that jump direction was initiated early in the countermovement, and that in each jump direction the propulsion phase began from a different position with unique angular and linear momentum. Issues that dictated the muscle activity patterns in each jump direction were the early initiation of appropriate forward momentum, the transition from countermovement to propulsion, the control of individual segment rotations, the control of GRF location and direction, and the influence of the subsequent landing.",
"title": ""
},
{
"docid": "c5b7fc20ec1f53390fbee7815e334c63",
"text": "In this paper, we propose a novel optimization framework for Roadside Unit (RSU) deployment and configuration in a vehicular network. We formulate the problem of placement of RSUs and selecting their configurations (e.g. power level, types of antenna and wired/wireless back haul network connectivity) as a linear program. The objective function is to minimize the total cost to deploy and maintain the network of RSU's. A user specified constraint on the minimum coverage provided by the RSU is also incorporated into the optimization framework. Further, the framework also supports the option of specifying selected regions of higher importance such as locations of frequently occurring accidents and incorporating constraints requiring stricter coverage in those areas. Simulation results are presented to demonstrate the feasibility of deployment on the campus map of Southern Methodist University (SMU). The efficiency and scalability of the optimization procedure for large scale problems are also studied and results shows that optimization over an area with the size of Cambridge, Massachusetts is completed in under 2 minutes. Finally, the effects of variation in several key parameters on the resulting design are studied.",
"title": ""
},
{
"docid": "f0f432edbfd66ae86621c9888d04249d",
"text": "Facial retouching is widely used in media and entertainment industry. Professional software usually require a minimum level of user expertise to achieve the desirable results. In this paper, we present an algorithm to detect facial wrinkles/imperfection. We believe that any such algorithm would be amenable to facial retouching applications. The detection of wrinkles/imperfections can allow these skin features to be processed differently than the surrounding skin without much user interaction. For detection, Gabor filter responses along with texture orientation field are used as image features. A bimodal Gaussian mixture model (GMM) represents distributions of Gabor features of normal skin versus skin imperfections. Then, a Markov random field model is used to incorporate the spatial relationships among neighboring pixels for their GMM distributions and texture orientations. An expectation-maximization algorithm then classifies skin versus skin wrinkles/imperfections. Once detected automatically, wrinkles/imperfections are removed completely instead of being blended or blurred. We propose an exemplar-based constrained texture synthesis algorithm to inpaint irregularly shaped gaps left by the removal of detected wrinkles/imperfections. We present results conducted on images downloaded from the Internet to show the efficacy of our algorithms.",
"title": ""
},
{
"docid": "0c6ad036e4136034d515c8eab4d414e2",
"text": "This paper presents Social MatchUP, a multiplayer Virtual Reality game for children with Neurodevelopmental Disorders (NDD). Shared virtual reality environments (SVREs) allow NDD children to interact in the same virtual space, but without the possible discomfort or fear caused by having a real person in front of them. Social MatchUP is a simple Concentration-like game, run on smartphones, where players should communicate to match up all the pairs of images they are given. Because every player can only interact with half of the pictures, but can see what his companion is doing, the game improves social and communication skill, and can be used also as a learning tool. A simple and easy-to-use customization tool was also developed to let therapists and teachers adapt the game context to the needs of the children they take care of.",
"title": ""
},
{
"docid": "c201eec6ee2b2a9dee62d56eae9ebe17",
"text": "In modeling system response to security threats, researchers have made extensive use of state space models, notable instances including the partially observable stochastic game model proposed by Zonouz et.al. The drawback of these state space models is that they may suffer from state space explosion. Our approach in modeling defense makes use of a combinatorial model which helps avert this problem. We propose a new attack-tree (AT) model named attack-countermeasure trees (ACT) based on combinatorial modeling technique for modeling attacks and countermeasures. ACT enables one to (i) place defense mechanisms in the form of detection and mitigation techniques at any node of the tree, not just at the leaf nodes as in defense trees (DT) (ii) automate the generation of attack scenarios from the ACT using its mincuts and (iii) perform probabilistic analysis (e.g. probability of attack, attack and security investment cost, impact of an attack, system risk, return on attack (ROA) and return on investment (ROI)) in an integrated manner (iv) select an optimal countermeasure set from the pool of defense mechanisms using a method which is much less expensive compared to the state-space based approach (v) perform analysis for trees with both repeated and non-repeat events. For evaluation purposes, we suggest suitable algorithms and implement an ACT module in SHARPE. We demonstrate the utility of ACT using a practical case study (BGP attacks).",
"title": ""
},
{
"docid": "a9314b036f107c99545349ccdeb30781",
"text": "The development and implementation of language teaching programs can be approached in several different ways, each of which has different implications for curriculum design. Three curriculum approaches are described and compared. Each differs with respect to when issues related to input, process, and outcomes, are addressed. Forward design starts with syllabus planning, moves to methodology, and is followed by assessment of learning outcomes. Resolving issues of syllabus content and sequencing are essential starting points with forward design, which has been the major tradition in language curriculum development. Central design begins with classroom processes and methodology. Issues of syllabus and learning outcomes are not specified in detail in advance and are addressed as the curriculum is implemented. Many of the ‘innovative methods’ of the 1980s and 90s reflect central design. Backward design starts from a specification of learning outcomes and decisions on methodology and syllabus are developed from the learning outcomes. The Common European Framework of Reference is a recent example of backward design. Examples will be given to suggest how the distinction between forward, central and backward design can clarify the nature of issues and trends that have emerged in language teaching in recent years.",
"title": ""
},
{
"docid": "2b491f3c06f91e62e07b43c68bec0801",
"text": "Sissay M.M., 2007. Helminth parasites of sheep and goats in eastern Ethiopia: Epidemiology, and anthelmintic resistance and its management. Doctoral thesis, Swedish University of Agricultural Sciences, Uppsala, Sweden. ISSN 1652-6880, ISBN 978-91-576-7351-0 A two-year epidemiology study of helminths of small ruminants involved the collection of viscera from 655 sheep and 632 goats from 4 abattoirs in eastern Ethiopia. A further more detailed epidemiology study of gastro-intestinal nematode infections used the Haramaya University (HU) flock of 60 Black Head Ogaden sheep. The parasitological data included numbers of nematode eggs per gram of faeces (EPG), faecal culture L3 larvae, packed red cell volume (PCV), adult worm and early L4 counts, and FAMACHA eye-colour score estimates, along with animal performance (body weight change). There were 13 species of nematodes and 4 species of flukes present in the sheep and goats, with Haemonchus contortus being the most prevalent (65–80%), followed by Trichostrongylus spp. The nematode infection levels of both sheep and goats followed the bi-modal annual rainfall pattern, with the highest worm burdens occurring during the two rain seasons (peaks in May and September). There were significant differences in worm burdens between the 4 geographic locations for both sheep and goats. Similar seasonal but not geographical variations occurred in the prevalence of flukes. There were significant correlations between EPG and PCV, EPG and FAMACHA scores, and PCV and FAMACHA scores. Moreover, H. contortus showed an increased propensity for arrested development during the dry seasons. Faecal egg count reduction tests (FECRT) conducted on the HU flocks, and flocks in surrounding small-holder communities, evaluated the efficacy of commonly used anthelmintics, including albendazole (ABZ), tetramisole (TET), a combination (ABZ + TET) and ivermectin (IVM). Initially, high levels of resistance to all of the anthelmintics were found in the HU goat flock but not in the sheep. In an attempt to restore the anthelmintic efficacy a new management system was applied to the HU goat flock, including: eliminating the existing parasite infections in the goats, exclusion from the traditional goat pastures, and initiation of communal grazing of the goats with the HU sheep and animals of the local small-holder farmers. Subsequent FECRTs revealed high levels of efficacy of all three drugs in the goat and sheep flocks, demonstrating that anthelmintic efficacy can be restored by exploiting refugia. Individual FECRTs were also conducted on 8 sheep and goat flocks owned by neighbouring small-holder farmers, who received breeding stock from the HU. In each FECRT, 50 local breed sheep and goats, 6–9 months old, were divided into 5 treatment groups: ABZ, TET, ABZ + TET, IVM and untreated control. There was no evidence of anthelmintic resistance in the nematodes, indicating that dilution of resistant parasites, which are likely to be imported with introduced breeding goats, and the low selection pressure imposed by the small-holder farmers, had prevented anthelmintic resistance from emerging.",
"title": ""
},
{
"docid": "338b6f6cd30f16ebfc991215e7ea5931",
"text": "Distance learning, electronic learning, and mobile learning offer content, methods, and technologies that decrease the limitations of traditional education. Mobile learning (m-learning) is an extension of distance education, supported by mobile devices equipped with wireless technologies. It is an emerging learning model and process that requires new forms of teaching, learning, contents, and dynamics between actors. In order to ascertain the current state of knowledge and research, an extensive review of the literature in m-learning has been undertaken to identify and harness potential factors and gaps in implementation. This article provides a critical analysis of m-learning projects and related literature, presenting the findings of this aforementioned analysis. It seeks to facilitate the inquiry into the following question: “What is possible in m-learning using recent technologies?” The analysis will be divided into two main parts: applications from the recent online mobile stores and operating system standalone applications.",
"title": ""
},
{
"docid": "515519cc7308477e1c38a74c4dd720f0",
"text": "The objective of cosmetic surgery is increased patient self-esteem and confidence. Most patients undergoing a procedure report these results post-operatively. The success of any procedure is measured in patient satisfaction. In order to optimize patient satisfaction, literature suggests careful pre-operative patient preparation including a discussion of the risks, benefits, limitations and expected results for each procedure undertaken. As a general rule, the patients that are motivated to surgery by a desire to align their outward appearance to their body-image tend to be the most satisfied. There are some psychiatric conditions that can prevent a patient from being satisfied without regard aesthetic success. The most common examples are minimal defect/Body Dysmorphic Disorder, the patient in crisis, the multiple revision patient, and loss of identity. This paper will familiarize the audience with these conditions, symptoms and related illnesses. Case examples are described and then explored in terms of the conditions presented. A discussion of the patient’s motivation for surgery, goals pertaining to specific attributes, as well as an evaluation of the patient’s understanding of the risks, benefits, and limitations of the procedure can help the physician determine if a patient is capable of being satisfied with a cosmetic plastic surgery procedure. Plastic surgeons can screen patients suffering from these conditions relatively easily, as psychiatry is an integral part of medical school education. If a psychiatric referral is required, then the psychiatrist needs to be aware of the nuances of each of these conditions.",
"title": ""
},
{
"docid": "ff345d732a273577ca0f965b92e1bbbd",
"text": "Integrated circuit (IC) testing for quality assurance is approaching 50% of the manufacturing costs for some complex mixed-signal IC’s. For many years the market growth and technology advancements in digital IC’s were driving the developments in testing. The increasing trend to integrate information acquisition and digital processing on the same chip has spawned increasing attention to the test needs of mixed-signal IC’s. The recent advances in wireless communications indicate a trend toward the integration of the RF and baseband mixed signal technologies. In this paper we examine the developments in IC testing form the historic, current status and future view points. In separate sections we address the testing developments for digital, mixed signal and RF IC’s. With these reviews as context, we relate new test paradigms that have the potential to fundamentally alter the methods used to test mixed-signal and RF parts.",
"title": ""
},
{
"docid": "822fdafcb1cec1c0f54e82fb79900ff3",
"text": "Chlorophyll fluorescence imaging was used to follow infections of Nicotiana benthamiana with the hemibiotrophic fungus, Colletotrichum orbiculare. Based on Fv/Fm images, infected leaves were divided into: healthy tissue with values similar to non-inoculated leaves; water-soaked/necrotic tissue with values near zero; and non-necrotic disease-affected tissue with intermediate values, which preceded or surrounded water-soaked/necrotic tissue. Quantification of Fv/Fm images showed that there were no changes until late in the biotrophic phase when spots of intermediate Fv/Fm appeared in visibly normal tissue. Those became water-soaked approx. 24 h later and then turned necrotic. Later in the necrotrophic phase, there was a rapid increase in affected and necrotic tissue followed by a slower increase as necrotic areas merged. Treatment with the induced systemic resistance activator, 2R, 3R-butanediol, delayed affected and necrotic tissue development by approx. 24 h. Also, the halo of affected tissue was narrower indicating that plant cells retained a higher photosystem II efficiency longer prior to death. While chlorophyll fluorescence imaging can reveal much about the physiology of infected plants, this study demonstrates that it is also a practical tool for quantifying hemibiotrophic fungal infections, including affected tissue that is appears normal visually but is damaged by infection.",
"title": ""
},
{
"docid": "1121e6d94c1e545e0fa8b0d8b0ef5997",
"text": "Research is a continuous phenomenon. It is recursive in nature. Every research is based on some earlier research outcome. A general approach in reviewing the literature for a problem is to categorize earlier work for the same problem as positive and negative citations. In this paper, we propose a novel automated technique, which classifies whether an earlier work is cited as sentiment positive or sentiment negative. Our approach first extracted the portion of the cited text from citing paper. Using a sentiment lexicon we classify the citation as positive or negative by picking a window of at most five (5) sentences around the cited place (corpus). We have used Naïve-Bayes Classifier for sentiment analysis. The algorithm is evaluated on a manually annotated and class labelled collection of 150 research papers from the domain of computer science. Our preliminary results show an accuracy of 80%. We assert that our approach can be generalized to classification of scientific research papers in different disciplines.",
"title": ""
},
{
"docid": "c731c1fb8a1b1a8bd6ab8b9165de5498",
"text": "Video Game Software Development is a promising area of empirical research because our first observations in industry environment identified a lack of a systematic process and method support and rarely conducted/documented studies. Nevertheless, video games specific types of software products focus strongly on user interface and game design. Thus, engineering processes, methods for game construction and verification/validation, and best-practices, derived from traditional software engineering, might be applicable in context of video game development. We selected the Austrian games industry as a manageable and promising starting point for systematically capturing the state-of-the practice in Video game development. In this paper we present the survey design and report on the first results of a national survey in the Austrian games industry. The results of the survey showed that the Austrian games industry is organized in a set of small and young studios with the trend to ad-hoc and flexible development processes and limitations in systematic method support.",
"title": ""
},
{
"docid": "9bb8a69b500d7d3ab5299262c8f17726",
"text": "Collecting training images for all visual categories is not only expensive but also impractical. Zero-shot learning (ZSL), especially using attributes, offers a pragmatic solution to this problem. However, at test time most attribute-based methods require a full description of attribute associations for each unseen class. Providing these associations is time consuming and often requires domain specific knowledge. In this work, we aim to carry out attribute-based zero-shot classification in an unsupervised manner. We propose an approach to learn relations that couples class embeddings with their corresponding attributes. Given only the name of an unseen class, the learned relationship model is used to automatically predict the class-attribute associations. Furthermore, our model facilitates transferring attributes across data sets without additional effort. Integrating knowledge from multiple sources results in a significant additional improvement in performance. We evaluate on two public data sets: Animals with Attributes and aPascal/aYahoo. Our approach outperforms state-of the-art methods in both predicting class-attribute associations and unsupervised ZSL by a large margin.",
"title": ""
},
{
"docid": "8eb0edd6d378a627c61f9228745ef36e",
"text": "Unlike radial flux machines, slotted axial flux machine has a particular airgap flux distribution which is a function of the machine diameter. Due to the rectangular slot geometry, stator teeth present a trapezoidal geometry with small tooth width close to the shaft, increasing as the diameter becomes larger. This fact introduces an uneven airgap flux distribution if a constant flux source, such as rectangular PM, is utilized to magnetize the machine. As a result, flux density over the stator tooth becomes irregular, inductance parameters are a function of the stator diameter and saliency varies according to machine load. All these effects degrade machine power capability for low and rated load. In this paper, a novel axial flux PM machine with tangential magnetization is presented. An analytic and numerical study is carried out to consider stator tooth geometry and its effect over machine saliency ratio.",
"title": ""
},
{
"docid": "da19fd683e64b0192bd52eadfade33a2",
"text": "For professional users such as firefighters and other first responders, GNSS positioning technology (GPS, assisted GPS) can satisfy outdoor positioning requirements in many instances. However, there is still a need for high-performance deep indoor positioning for use by these same professional users. This need has already been clearly expressed by various communities of end users in the context of WearIT@Work, an R&D project funded by the European Community's Sixth Framework Program. It is known that map matching can help for indoor pedestrian navigation. In most previous research, it was assumed that detailed building plans are available. However, in many emergency / rescue scenarios, only very limited building plan information may be at hand. For example a building outline might be obtained from aerial photographs or cataster databases. Alternatively, an escape plan posted at the entrances to many building would yield only approximate exit door and stairwell locations as well as hallway and room orientation. What is not known is how much map information is really required for a USAR mission and how much each level of map detail might help to improve positioning accuracy. Obviously, the geometry of the building and the course through will be factors consider. The purpose of this paper is to show how a previously published Backtracking Particle Filter (BPF) can be combined with different levels of building plan detail to improve PDR performance. A new in/out scenario that might be typical of a reconnaissance mission during a fire in a two-story office building was evaluated. Using only external wall information, the new scenario yields positioning performance (2.56 m mean 2D error) that is greatly superior to the PDR-only, no map base case (7.74 m mean 2D error). This result has a substantial practical significance since this level of building plan detail could be quickly and easily generated in many emergency instances. The technique could be used to mitigate heading errors that result from exposing the IMU to extreme operating conditions. It is hoped that this mitigating effect will also occur for more irregular paths and in larger traversed spaces such as parking garages and warehouses.",
"title": ""
},
{
"docid": "a280f710b0e41d844f1b9c76e7404694",
"text": "Self-determination theory posits that the degree to which a prosocial act is volitional or autonomous predicts its effect on well-being and that psychological need satisfaction mediates this relation. Four studies tested the impact of autonomous and controlled motivation for helping others on well-being and explored effects on other outcomes of helping for both helpers and recipients. Study 1 used a diary method to assess daily relations between prosocial behaviors and helper well-being and tested mediating effects of basic psychological need satisfaction. Study 2 examined the effect of choice on motivation and consequences of autonomous versus controlled helping using an experimental design. Study 3 examined the consequences of autonomous versus controlled helping for both helpers and recipients in a dyadic task. Finally, Study 4 manipulated motivation to predict helper and recipient outcomes. Findings support the idea that autonomous motivation for helping yields benefits for both helper and recipient through greater need satisfaction. Limitations and implications are discussed.",
"title": ""
},
{
"docid": "698ff874df9ec0ee7a2b45f1ef52a09e",
"text": "a lot of studies provide strong evidence that traditional predictive regression models face significant challenges in out-of sample predictability tests due to model uncertainty and parameter instability. Recent studies introduce particular strategies that overcome these problems. Support Vector Machine (SVM) is a relatively new learning algorithm that has the desirable characteristics of the control of the decision function, the use of the kernel method, and the sparsely of the solution. In this paper, we present a theoretical and empirical framework to apply the Support Vector Machines strategy to predict the stock market. Firstly, four company-specific and six macroeconomic factors that may influence the stock trend are selected for further stock multivariate analysis. Secondly, Support Vector Machine is used in analyzing the relationship of these factors and predicting the stock performance. Our results suggest that SVM is a powerful predictive tool for stock predictions in the financial market.",
"title": ""
},
{
"docid": "cf428835fa19d39c9c4488ab9c715fbb",
"text": "Principle Component Analysis (PCA) is a mathematical procedure widely used in exploratory data analysis, signal processing, etc. However, it is often considered a black box operation whose results and procedures are difficult to understand. The goal of this paper is to provide a detailed explanation of PCA based on a designed visual analytics tool that visualizes the results of principal component analysis and supports a rich set of interactions to assist the user in better understanding and utilizing PCA. The paper begins by describing the relationship between PCA and single vector decomposition (SVD), the method used in our visual analytics tool. Then a detailed explanation of the interactive visual analytics tool, including advantages and limitations, is provided.",
"title": ""
}
] |
scidocsrr
|
bc06b9197d20496c46869cf310c831d8
|
How did the discussion go: Discourse act classification in social media conversations
|
[
{
"docid": "8f9af064f348204a71f0e542b2b98e7b",
"text": "It is often useful to classify email according to the intent of the sender (e.g., \"propose a meeting\", \"deliver information\"). We present experimental results in learning to classify email in this fashion, where each class corresponds to a verbnoun pair taken from a predefined ontology describing typical “email speech acts”. We demonstrate that, although this categorization problem is quite different from “topical” text classification, certain categories of messages can nonetheless be detected with high precision (above 80%) and reasonable recall (above 50%) using existing text-classification learning methods. This result suggests that useful task-tracking tools could be constructed based on automatic classification into this taxonomy.",
"title": ""
},
{
"docid": "59af45fa33fd70d044f9749e59ba3ca7",
"text": "Retweeting is the key mechanism for information diffusion in Twitter. It emerged as a simple yet powerful way of disseminating useful information. Even though a lot of information is shared via its social network structure in Twitter, little is known yet about how and why certain information spreads more widely than others. In this paper, we examine a number of features that might affect retweetability of tweets. We gathered content and contextual features from 74M tweets and used this data set to identify factors that are significantly associated with retweet rate. We also built a predictive retweet model. We found that, amongst content features, URLs and hashtags have strong relationships with retweetability. Amongst contextual features, the number of followers and followees as well as the age of the account seem to affect retweetability, while, interestingly, the number of past tweets does not predict retweetability of a user’s tweet. We believe that this research would inform the design of sensemaking tools for Twitter streams as well as other general social media collections. Keywords-Twitter; retweet; tweet; follower; social network; social media; factor analysis",
"title": ""
}
] |
[
{
"docid": "039044aaa25f047e28daba08237c0de5",
"text": "BI technologies are essential to running today's businesses and this technology is going through sea changes.",
"title": ""
},
{
"docid": "79d5cb45b36a707727ecfcae0a091498",
"text": "We use 810 versions of the Linux kernel, released over a perio d of 14 years, to characterize the system’s evolution, using Lehman’s laws of software evolut i n as a basis. We investigate different possible interpretations of these laws, as reflected by diff erent metrics that can be used to quantify them. For example, system growth has traditionally been qua tified using lines of code or number of functions, but functional growth of an operating system l ike Linux can also be quantified using the number of system calls. In addition we use the availabili ty of the source code to track metrics, such as McCabe’s cyclomatic complexity, that have not been tr acked across so many versions previously. We find that the data supports several of Lehman’ s l ws, mainly those concerned with growth and with the stability of the process. We also make som e novel observations, e.g. that the average complexity of functions is decreasing with time, bu t this is mainly due to the addition of many small functions.",
"title": ""
},
{
"docid": "affa4a43b68f8c158090df3a368fe6b6",
"text": "The purpose of this study is to evaluate the impact of modulated light projections perceived through the eyes on the autonomic nervous system (ANS). Three types of light projections, each containing both specific colors and specific modulations in the brainwaves frequency range, were tested, in addition to a placebo projection consisting of non-modulated white light. Evaluation was done using a combination of physiological measures (HR, HRV, SC) and psychological tests (Amen, POMS). Significant differences were found in the ANS effects of each of the colored light projections, and also between the colored and white projections.",
"title": ""
},
{
"docid": "de9767297368dffbdbae4073338bdb15",
"text": "An increasing number of applications rely on 3D geoinformation. In addition to 3D geometry, these applications particularly require complex semantic information. In the context of spatial data infrastructures the needed data are drawn from distributed sources and often are thematically and spatially fragmented. Straight forward joining of 3D objects would inevitably lead to geometrical inconsistencies such as cracks, permeations, or other inconsistencies. Semantic information can help to reduce the ambiguities for geometric integration, if it is coherently structured with respect to geometry. The paper discusses these problems with special focus on virtual 3D city models and the semantic data model CityGML, an emerging standard for the representation and the exchange of 3D city models based on ISO 191xx standards and GML3. Different data qualities are analyzed with respect to their semantic and spatial structure leading to the distinction of six categories regarding the spatio-semantic coherence of 3D city models. Furthermore, it is shown how spatial data with complex object descriptions support the integration process. The derived categories will help in the future development of automatic integration methods for complex 3D geodata.",
"title": ""
},
{
"docid": "7fc3dfcc8fa43c36938f41877a65bed7",
"text": "We propose a real-time RGB-based pipeline for object detection and 6D pose estimation. Our novel 3D orientation estimation is based on a variant of the Denoising Autoencoder that is trained on simulated views of a 3D model using Domain Randomization. This so-called Augmented Autoencoder has several advantages over existing methods: It does not require real, pose-annotated training data, generalizes to various test sensors and inherently handles object and view symmetries. Instead of learning an explicit mapping from input images to object poses, it provides an implicit representation of object orientations defined by samples in a latent space. Experiments on the T-LESS and LineMOD datasets show that our method outperforms similar modelbased approaches and competes with state-of-the art approaches that require real pose-annotated images. 1",
"title": ""
},
{
"docid": "1a5183d8e0a0a7a52935e357e9b525ed",
"text": "Embedded systems, as opposed to traditional computers, bring an incredible diversity. The number of devices manufactured is constantly increasing and each has a dedicated software, commonly known as firmware. Full firmware images are often delivered as multiple releases, correcting bugs and vulnerabilities, or adding new features. Unfortunately, there is no centralized or standardized firmware distribution mechanism. It is therefore difficult to track which vendor or device a firmware package belongs to, or to identify which firmware version is used in deployed embedded devices. At the same time, discovering devices that run vulnerable firmware packages on public and private networks is crucial to the security of those networks. In this paper, we address these problems with two different, yet complementary approaches: firmware classification and embedded web interface fingerprinting. We use supervised Machine Learning on a database subset of real world firmware files. For this, we first tell apart firmware images from other kind of files and then we classify firmware images per vendor or device type. Next, we fingerprint embedded web interfaces of both physical and emulated devices. This allows recognition of web-enabled devices connected to the network. In some cases, this complementary approach allows to logically link web-enabled online devices with the corresponding firmware package that is running on the devices. Finally, we test the firmware classification approach on 215 images with an accuracy of 93.5%, and the device fingerprinting approach on 31 web interfaces with 89.4% accuracy.",
"title": ""
},
{
"docid": "0d60045d58a4fbad2a3a30bd8b9483a8",
"text": "We present R2G, a tool for the automatic migration of databases from a relational to a Graph Database Management System (GDBMS). GDBMSs provide a flexible and efficient solution to the management of graph-based data (e.g., social and semantic Web data) and, in this context, the conversion of the persistent layer of an application from a relational to a graph format can be very beneficial. R2G provides a thorough solution to this problem with a minimal impact to the application layer: it transforms a relational database r into a graph database g and any conjunctive query over r into a graph query over g. Constraints defined over r are suitably used in the translation to minimize the number of data access required by graph queries. The approach refers to an abstract notion of graph database and this allows R2G to map relational database into different GDBMSs. The demonstration of R2G allows the direct comparison of the relational and the graph approaches to data management.",
"title": ""
},
{
"docid": "b150e9aef47001e1b643556f64c5741d",
"text": "BACKGROUND\nMany adolescents have poor mental health literacy, stigmatising attitudes towards people with mental illness, and lack skills in providing optimal Mental Health First Aid to peers. These could be improved with training to facilitate better social support and increase appropriate help-seeking among adolescents with emerging mental health problems. teen Mental Health First Aid (teen MHFA), a new initiative of Mental Health First Aid International, is a 3 × 75 min classroom based training program for students aged 15-18 years.\n\n\nMETHODS\nAn uncontrolled pilot of the teen MHFA course was undertaken to examine the feasibility of providing the program in Australian secondary schools, to test relevant measures of student knowledge, attitudes and behaviours, and to provide initial evidence of program effects.\n\n\nRESULTS\nAcross four schools, 988 students received the teen MHFA program. 520 students with a mean age of 16 years completed the baseline questionnaire, 345 completed the post-test and 241 completed the three-month follow-up. Statistically significant improvements were found in mental health literacy, confidence in providing Mental Health First Aid to a peer, help-seeking intentions and student mental health, while stigmatising attitudes significantly reduced.\n\n\nCONCLUSIONS\nteen MHFA appears to be an effective and feasible program for training high school students in Mental Health First Aid techniques. Further research is required with a randomized controlled design to elucidate the causal role of the program in the changes observed.",
"title": ""
},
{
"docid": "65d84bb6907a34f8bc8c4b3d46706e53",
"text": "This study analyzes the correlation between video game usage and academic performance. Scholastic Aptitude Test (SAT) and grade-point average (GPA) scores were used to gauge academic performance. The amount of time a student spends playing video games has a negative correlation with students' GPA and SAT scores. As video game usage increases, GPA and SAT scores decrease. A chi-squared analysis found a p value for video game usage and GPA was greater than a 95% confidence level (0.005 < p < 0.01). This finding suggests that dependence exists. SAT score and video game usage also returned a p value that was significant (0.01 < p < 0.05). Chi-squared results were not significant when comparing time spent studying and an individual's SAT score. This research suggests that video games may have a detrimental effect on an individual's GPA and possibly on SAT scores. Although these results show statistical dependence, proving cause and effect remains difficult, since SAT scores represent a single test on a given day. The effects of video games maybe be cumulative; however, drawing a conclusion is difficult because SAT scores represent a measure of general knowledge. GPA versus video games is more reliable because both involve a continuous measurement of engaged activity and performance. The connection remains difficult because of the complex nature of student life and academic performance. Also, video game usage may simply be a function of specific personality types and characteristics.",
"title": ""
},
{
"docid": "2d32062668cb4b010f69267911124718",
"text": "Interfascial plane blocks have becomevery popular in recent years. A novel interfascial plane block, erector spinae plane (ESP) block can target the dorsal and ventral rami of the thoracic spinal nerves but its effect in neuropathic pain is unclear [1]. If acute pain management for herpes zoster is not done aggressively, it can turn into chronic pain. However; ESP block is first described as inject local anesthetics around the erector spinae muscle at the level of T5 spinous process for thoracic region, if the block is performed at lower levels it could be effective for abdominal and lumbar region [2]. There have been no reports on the efficacy of ESP block over the herpes zoster pain. Here it we report the successful management of acute herpes zoster pain using low thoracic ESP block. Awritten consent formwasobtained from thepatient for this report. The patient was an 72-year-oldmanwho presentedwith severe painful vesicles (9/10 VAS intensity) over posterior lumbar and lateral abdominal region (Fig. 1A). The patient received amitriptyline 10 mg, non-",
"title": ""
},
{
"docid": "2b8305c10f1105905f2a2f9651cb7c9f",
"text": "Many distributed collective decision-making processes must balance diverse individual preferences with a desire for collective unity. We report here on an extensive session of behavioral experiments on biased voting in networks of individuals. In each of 81 experiments, 36 human subjects arranged in a virtual network were financially motivated to reach global consensus to one of two opposing choices. No payments were made unless the entire population reached a unanimous decision within 1 min, but different subjects were paid more for consensus to one choice or the other, and subjects could view only the current choices of their network neighbors, thus creating tensions between private incentives and preferences, global unity, and network structure. Along with analyses of how collective and individual performance vary with network structure and incentives generally, we find that there are well-studied network topologies in which the minority preference consistently wins globally; that the presence of \"extremist\" individuals, or the awareness of opposing incentives, reliably improve collective performance; and that certain behavioral characteristics of individual subjects, such as \"stubbornness,\" are strongly correlated with earnings.",
"title": ""
},
{
"docid": "4249c95fcd869434312524f05c013c55",
"text": "The demands on visual recognition systems do not end with the complexity offered by current large-scale image datasets, such as ImageNet. In consequence, we need curious and continuously learning algorithms that actively acquire knowledge about semantic concepts which are present in available unlabeled data. As a step towards this goal, we show how to perform continuous active learning and exploration, where an algorithm actively selects relevant batches of unlabeled examples for annotation. These examples could either belong to already known or to yet undiscovered classes. Our algorithm is based on a new generalization of the Expected Model Output Change principle for deep architectures and is especially tailored to deep neural networks. Furthermore, we show easy-to-implement approximations that yield efficient techniques for active selection. Empirical experiments show that our method outperforms currently used heuristics.",
"title": ""
},
{
"docid": "6572c7d33fcb3f1930a41b4b15635ffe",
"text": "Neurons in area MT (V5) are selective for the direction of visual motion. In addition, many are selective for the motion of complex patterns independent of the orientation of their components, a behavior not seen in earlier visual areas. We show that the responses of MT cells can be captured by a linear-nonlinear model that operates not on the visual stimulus, but on the afferent responses of a population of nonlinear V1 cells. We fit this cascade model to responses of individual MT neurons and show that it robustly predicts the separately measured responses to gratings and plaids. The model captures the full range of pattern motion selectivity found in MT. Cells that signal pattern motion are distinguished by having convergent excitatory input from V1 cells with a wide range of preferred directions, strong motion opponent suppression and a tuned normalization that may reflect suppressive input from the surround of V1 cells.",
"title": ""
},
{
"docid": "3c2b68ac95f1a9300585b73ca4b83122",
"text": "The success of various applications including robotics, digital content creation, and visualization demand a structured and abstract representation of the 3D world from limited sensor data. Inspired by the nature of human perception of 3D shapes as a collection of simple parts, we explore such an abstract shape representation based on primitives. Given a single depth image of an object, we present 3DPRNN, a generative recurrent neural network that synthesizes multiple plausible shapes composed of a set of primitives. Our generative model encodes symmetry characteristics of common man-made objects, preserves long-range structural coherence, and describes objects of varying complexity with a compact representation. We also propose a method based on Gaussian Fields to generate a large scale dataset of primitive-based shape representations to train our network. We evaluate our approach on a wide range of examples and show that it outperforms nearest-neighbor based shape retrieval methods and is on-par with voxelbased generative models while using a significantly reduced parameter space.",
"title": ""
},
{
"docid": "fbbd24318caac8a8a2a63670f6a624cd",
"text": "We show that elliptic-curve cryptography implementations on mobile devices are vulnerable to electromagnetic and power side-channel attacks. We demonstrate full extraction of ECDSA secret signing keys from OpenSSL and CoreBitcoin running on iOS devices, and partial key leakage from OpenSSL running on Android and from iOS's CommonCrypto. These non-intrusive attacks use a simple magnetic probe placed in proximity to the device, or a power probe on the phone's USB cable. They use a bandwidth of merely a few hundred kHz, and can be performed cheaply using an audio card and an improvised magnetic probe.",
"title": ""
},
{
"docid": "ef64da59880750872e056822c17ab00e",
"text": "The efficient cooling is very important for a light emitting diode (LED) module because both the energy efficiency and lifespan decrease significantly as the junction temperature increases. The fin heat sink is commonly used for cooling LED modules with natural convection conditions. This work proposed a new design method for high-power LED lamp cooling by combining plate fins with pin fins and oblique fins. Two new types of fin heat sinks called the pin-plate fin heat sink (PPF) and the oblique-plate fin heat sink (OPF) were designed and their heat dissipation performances were compared with three conventional fin heat sinks, the plate fin heat sink, the pin fin heat sink and the oblique fin heat sink. The LED module was assumed to be operated under 1 atmospheric pressure and its heat input is set to 4 watts. The PPF and OPF models show lower junction temperatures by about 6°C ~ 12°C than those of three conventional models. The PPF with 8 plate fins inside (PPF-8) and the OPF with 7 plate fins inside (OPF-7) showed the best thermal performance among all the PPF and OPF designs, respectively. The total thermal resistances of the PPF-8 and OPF-7 models decreased by 9.0% ~ 15.6% compared to those of three conventional models.",
"title": ""
},
{
"docid": "0ee2dff9fb026b5c117d39fa537ab1b3",
"text": "Motor Imagery (MI) is a highly supervised method nowadays for the disabled patients to give them hope. This paper proposes a differentiation method between imagery left and right hands movement using Daubechies wavelet of Discrete Wavelet Transform (DWT) and Levenberg-Marquardt back propagation training algorithm of Neural Network (NN). DWT decomposes the raw EEG data to extract significant features that provide feature vectors precisely. Levenberg-Marquardt Algorithm (LMA) based neural network uses feature vectors as input for classification of the two class data and outcomes overall classification accuracy of 92%. Previously various features and methods used but this recommended method exemplifies that statistical features provide better accuracy for EEG classification. Variation among features indicates differences between neural activities of two brain hemispheres due to two imagery hands movement. Results from the classifier are used to interface human brain with machine for better performance that requires high precision and accuracy scheme.",
"title": ""
},
{
"docid": "a2891655fbb08c584c6efe07ee419fb7",
"text": "Forecasting the flow of crowds is of great importance to traffic management and public safety, and very challenging as it is affected by many complex factors, such as inter-region traffic, events, and weather. We propose a deep-learning-based approach, called ST-ResNet, to collectively forecast the inflow and outflow of crowds in each and every region of a city. We design an end-to-end structure of ST-ResNet based on unique properties of spatio-temporal data. More specifically, we employ the residual neural network framework to model the temporal closeness, period, and trend properties of crowd traffic. For each property, we design a branch of residual convolutional units, each of which models the spatial properties of crowd traffic. ST-ResNet learns to dynamically aggregate the output of the three residual neural networks based on data, assigning different weights to different branches and regions. The aggregation is further combined with external factors, such as weather and day of the week, to predict the final traffic of crowds in each and every region. Experiments on two types of crowd flows in Beijing and New York City (NYC) demonstrate that the proposed ST-ResNet outperforms six well-known methods.",
"title": ""
},
{
"docid": "2f012c2941f8434b9d52ae1942b64aff",
"text": "Classification of plants based on a multi-organ approach is very challenging. Although additional data provide more information that might help to disambiguate between species, the variability in shape and appearance in plant organs also raises the degree of complexity of the problem. Despite promising solutions built using deep learning enable representative features to be learned for plant images, the existing approaches focus mainly on generic features for species classification, disregarding the features representing plant organs. In fact, plants are complex living organisms sustained by a number of organ systems. In our approach, we introduce a hybrid generic-organ convolutional neural network (HGO-CNN), which takes into account both organ and generic information, combining them using a new feature fusion scheme for species classification. Next, instead of using a CNN-based method to operate on one image with a single organ, we extend our approach. We propose a new framework for plant structural learning using the recurrent neural network-based method. This novel approach supports classification based on a varying number of plant views, capturing one or more organs of a plant, by optimizing the contextual dependencies between them. We also present the qualitative results of our proposed models based on feature visualization techniques and show that the outcomes of visualizations depict our hypothesis and expectation. Finally, we show that by leveraging and combining the aforementioned techniques, our best network outperforms the state of the art on the PlantClef2015 benchmark. The source code and models are available at https://github.com/cs-chan/Deep-Plant.",
"title": ""
},
{
"docid": "c03de8afcb5a6fce6c22e9394367f54d",
"text": "Thus the Gestalt domain with its three operations forms a general algebra. J. N. Wilson, Handbook of Computer Vision Algorithms in Image Algebra, 2nd ed. (1072), Computational Techniques and Algorithms for Image Processing (S. (1047), Universal Algebra and Coalgebra (Klaus Denecke, Shelly L. Wismath), World (986), Handbook of Mathematical Models in Computer Vision, (N. Paragios, (985), Numerical Optimization, second edition (Jorge Nocedal, Stephen J.",
"title": ""
}
] |
scidocsrr
|
d6686bafd8c017f9547c74af1ca30435
|
Time to challenge the spurious hierarchy of systematic over narrative reviews?
|
[
{
"docid": "0448b076548e9ada3529292741ac1a29",
"text": "Evidence based medicine, whose philosophical origins extend back to mid-19th century Paris and earlier, remains a hot topic for clinicians, public health practitioners, purchasers, planners, and the public. There are now frequent workshops in how to practice and teach it (one sponsored by the BMJ will be held in London on 24 April); undergraduate and postgraduate training programmes are incorporating it (or pondering how to do so); British centres for evidence based practice have been established or planned in adult medicine, child health, surgery, pathology, pharmacotherapy, nursing, general practice, and dentistry; the Cochrane Collaboration and Britain's Centre for Review and Dissemination in York are providing systematic reviews of the effects of health care; new evidence based practice journals are being launched; and it has become a common topic in the lay media. But enthusiasm has been mixed with some negative reaction. 5 6 Criticism has ranged from evidence based medicine being old hat to it being a dangerous innovation, perpetrated by the arrogant to serve cost cutters and suppress clinical freedom. As evidence based medicine continues to evolve and adapt, now is a useful time to refine the discussion of what it is and what it is not.",
"title": ""
}
] |
[
{
"docid": "39a4a7ac64b05811984d2782381314b7",
"text": "Recently there has been a growing concern that many published research findings do not hold up in attempts to replicate them. We argue that this problem may originate from a culture of 'you can publish if you found a significant effect'. This culture creates a systematic bias against the null hypothesis which renders meta-analyses questionable and may even lead to a situation where hypotheses become difficult to falsify. In order to pinpoint the sources of error and possible solutions, we review current scientific practices with regard to their effect on the probability of drawing a false-positive conclusion. We explain why the proportion of published false-positive findings is expected to increase with (i) decreasing sample size, (ii) increasing pursuit of novelty, (iii) various forms of multiple testing and researcher flexibility, and (iv) incorrect P-values, especially due to unaccounted pseudoreplication, i.e. the non-independence of data points (clustered data). We provide examples showing how statistical pitfalls and psychological traps lead to conclusions that are biased and unreliable, and we show how these mistakes can be avoided. Ultimately, we hope to contribute to a culture of 'you can publish if your study is rigorous'. To this end, we highlight promising strategies towards making science more objective. Specifically, we enthusiastically encourage scientists to preregister their studies (including a priori hypotheses and complete analysis plans), to blind observers to treatment groups during data collection and analysis, and unconditionally to report all results. Also, we advocate reallocating some efforts away from seeking novelty and discovery and towards replicating important research findings of one's own and of others for the benefit of the scientific community as a whole. We believe these efforts will be aided by a shift in evaluation criteria away from the current system which values metrics of 'impact' almost exclusively and towards a system which explicitly values indices of scientific rigour.",
"title": ""
},
{
"docid": "44582f087f9bb39d6e542ff7b600d1c7",
"text": "We propose a new deterministic approach to coreference resolution that combines the global information and precise features of modern machine-learning models with the transparency and modularity of deterministic, rule-based systems. Our sieve architecture applies a battery of deterministic coreference models one at a time from highest to lowest precision, where each model builds on the previous model's cluster output. The two stages of our sieve-based architecture, a mention detection stage that heavily favors recall, followed by coreference sieves that are precision-oriented, offer a powerful way to achieve both high precision and high recall. Further, our approach makes use of global information through an entity-centric model that encourages the sharing of features across all mentions that point to the same real-world entity. Despite its simplicity, our approach gives state-of-the-art performance on several corpora and genres, and has also been incorporated into hybrid state-of-the-art coreference systems for Chinese and Arabic. Our system thus offers a new paradigm for combining knowledge in rule-based systems that has implications throughout computational linguistics.",
"title": ""
},
{
"docid": "d0b16a75fb7b81c030ab5ce1b08d8236",
"text": "It is unquestionable that successive hardware generations have significantly improved GPU computing workload performance over the last several years. Moore's law and DRAM scaling have respectively increased single-chip peak instruction throughput by 3X and off-chip bandwidth by 2.2X from NVIDIA's GeForce 8800 GTX in November 2006 to its GeForce GTX 580 in November 2010. However, raw capability numbers typically underestimate the improvements in real application performance over the same time period, due to significant architectural feature improvements. To demonstrate the effects of architecture features and optimizations over time, we conducted experiments on a set of benchmarks from diverse application domains for multiple GPU architecture generations to understand how much performance has truly been improving for those workloads. First, we demonstrate that certain architectural features make a huge difference in the performance of unoptimized code, such as the inclusion of a general cache which can improve performance by 2-4× in some situations. Second, we describe what optimization patterns have been most essential and widely applicable for improving performance for GPU computing workloads across all architecture generations. Some important optimization patterns included data layout transformation, converting scatter accesses to gather accesses, GPU workload regularization, and granularity coarsening, each of which improved performance on some benchmark by over 20%, sometimes by a factor of more than 5×. While hardware improvements to baseline unoptimized code can reduce the speedup magnitude, these patterns remain important for even the most recent GPUs. Finally, we identify which added architectural features created significant new optimization opportunities, such as increased register file capacity or reduced bandwidth penalties for misaligned accesses, which increase performance by 2× or more in the optimized versions of relevant benchmarks.",
"title": ""
},
{
"docid": "da14a995b0a061a7045497be46eab411",
"text": "Fully homomorphic encryption supports meaningful computations on encrypted data, and hence, is widely used in cloud computing and big data environments. Recently, Li et al. constructed an efficient symmetric fully homomorphic encryption scheme and utilized it to design a privacy-preserving-outsourced association rule mining scheme. Their proposal allows multiple data owners to jointly mine some association rules without sacrificing the data privacy. The security of the homomorphic encryption scheme against the known-plaintext attacks was established by examining the hardness of solving nonlinear systems. However, in this paper, we illustrate that the security of Li et al.’s homomorphic encryption is overvalued. First, we show that we can recover the first part of the secret key from several known plaintext/ciphertext pairs with the continued fraction algorithm. Second, we find that we can retrieve the second part of the secret key through the Euclidean algorithm for the greatest common divisor problem. Experiments on the suggested parameters demonstrate that in case of more than two homomorphic multiplications, all the secret keys of the randomly instantiated Li et al.’s encryption schemes can be very efficiently recovered, and the success probability is at least 98% for one homomorphic multiplication.",
"title": ""
},
{
"docid": "e4f96495e32e1cf7e3c92d6344251b66",
"text": "Self-transcendence has been found to be an important correlate of mental health in older adults and adults facing the end of life. This study extends current theory by examining the relationship of transcendence and other transcendence variables to depression in middle-age adults (N = 133). Reed's Self-Transcendence Scale, the Center for Epidemiological Studies-Depression Scale, and measures of parenting, acceptance and spirituality were administered. Findings indicating significant inverse correlations between self-transcendence and depression, as well as between other measures of transcendence and depression support Reed's (1991b) theory. Multiple regression analysis indicated that acceptance may be another significant correlate of depression. Significant gender differences and age-related patterns of increased levels of self-transcendence were found. Study results illuminate the need to continue research into developmentally based transcendence variables related to various experiences of health and well-being across the life span.",
"title": ""
},
{
"docid": "059aed9f2250d422d76f3e24fd62bed8",
"text": "Single case studies led to the discovery and phenomenological description of Gelotophobia and its definition as the pathological fear of appearing to social partners as a ridiculous object (Titze 1995, 1996, 1997). The aim of the present study is to empirically examine the core assumptions about the fear of being laughed at in a sample comprising a total of 863 clinical and non-clinical participants. Discriminant function analysis yielded that gelotophobes can be separated from other shame-based neurotics, non-shamebased neurotics, and controls. Separation was best for statements specifically describing the gelotophobic symptomatology and less potent for more general questions describing socially avoidant behaviors. Factor analysis demonstrates that while Gelotophobia is composed of a set of correlated elements in homogenous samples, overall the concept is best conceptualized as unidimensional. Predicted and actual group membership converged well in a cross-classification (approximately 69% of correctly classified cases). Overall, it can be concluded that the fear of being laughed at varies tremendously among adults and might hold a key to understanding certain forms",
"title": ""
},
{
"docid": "263e8b756862ab28d313578e3f6acbb1",
"text": "Goal posts detection is a critical robot soccer ability which is needed to be accurate, robust and efficient. A goal detection method using Hough transform to get the detailed goal features is presented in this paper. In the beginning, the image preprocessing and Hough transform implementation are described in detail. A new modification on the θ parameter range in Hough transform is explained and applied to speed up the detection process. Line processing algorithm is used to classify the line detected, and then the goal feature extraction method, including the line intersection calculation, is done. Finally, the goal distance from the robot body is estimated using triangle similarity. The experiment is performed on our university humanoid robot with the goal dimension of 225 cm in width and 110 cm in height, in yellow color. The result shows that the goal detection method, including the modification in Hough transform, is able to extract the goal features seen by the robot correctly, with the lowest speed of 5 frames per second. Additionally, the goal distance estimation is accomplished with maximum error of 20 centimeters.",
"title": ""
},
{
"docid": "0949f7e3f4f1c8c1d1ff1d5b56ae8ce4",
"text": "Advancement in information and communication technology (ICT) has given rise to explosion of data in every field of operations. Working with the enormous volume of data (or Big Data, as it is popularly known as) for extraction of useful information to support decision making is one of the sources of competitive advantage for organizations today. Enterprises are leveraging the power of analytics in formulating business strategy in every facet of their operations to mitigate business risk. Volatile global market scenario has compelled the organizations to redefine their supply chain management (SCM). In this paper, we have delineated the relevance of Big Data and its importance in managing end to end supply chains for achieving business excellence. A Big Data-centric architecture for SCM has been proposed that exploits the current state of the art technology of data management, analytics and visualization. The security and privacy requirements of a Big Data system have also been highlighted and several mechanisms have been discussed to implement these features in a real world Big Data system deployment in the context of SCM. Some future scope of work has also been pointed out. Keyword: Big Data, Analytics, Cloud, Architecture, Protocols, Supply Chain Management, Security, Privacy.",
"title": ""
},
{
"docid": "2024b4274587bd00f480894f504aa614",
"text": "We describe a set of experiments using machine learning techniques for the task of extractive summarisation. The research is part of a summarisation project for which we use a corpus of judgments of the UK House of Lords. We present classification results for naïve Bayes and maximum entropy and we explore methods for scoring the summary-worthiness of a sentence. We present sample output from the system, illustrating the utility of rhetorical status information, which provides a means for structuring summaries and tailoring them to different types of users.",
"title": ""
},
{
"docid": "584de328ade02c34e36e2006f3e66332",
"text": "The HP-ASD technology has experienced a huge development in the last decade. This can be appreciated by the large number of recently introduced drive configurations on the market. In addition, many industrial applications are reaching MV operation and megawatt range or have experienced changes in requirements on efficiency, performance, and power quality, making the use of HP-ASDs more attractive. It can be concluded that, HP-ASDs is an enabling technology ready to continue powering the future of industry for the decades to come.",
"title": ""
},
{
"docid": "f7fa80456b0fb479bc694cb89fbd84e5",
"text": "In the past two decades, social capital in its various forms and contexts has emerged as one of the most salient concepts in social sciences. While much excitement has been generated, divergent views, perspectives, and expectations have also raised the serious question : is it a fad or does it have enduring qualities that will herald a new intellectual enterprise? This presentation's purpose is to review social capital as discussed in the literature, identify controversies and debates, consider some critical issues, and propose conceptual and research strategies in building a theory. I will argue that such a theory and the research enterprise must be based on the fundamental understanding that social capital is captured from embedded resources in social networks . Deviations from this understanding in conceptualization and measurement lead to confusion in analyzing causal mechanisms in the macroand microprocesses. It is precisely these mechanisms and processes, essential for an interactive theory about structure and action, to which social capital promises to make contributions .",
"title": ""
},
{
"docid": "89f71867aeb71aaa3748ad6cdf7dbbf0",
"text": "Banks have used early fraud warning systems for some years. Improved fraud detection thus has become essential to maintain the viability of the payment system. Outlier mining in data mining is an important functionality of the existing algorithms which can be divided into methods based on statistical, distance based methods, density based methods and deviation based methods. In this article I propose the concept of credit card fraud detection by using a data stream outlier detection algorithm which is based on reverse k-nearest neighbors (SODRNN). The distinct quality of SODRNN algorithm is it needs only one pass of scan. Whereas traditional methods need to scan the database many times, it is not suitable for data stream environment.",
"title": ""
},
{
"docid": "6e9064fa15335f3f9013533b8770d297",
"text": "The last decade has witnessed a renaissance of empirical and psychological approaches to art study, especially regarding cognitive models of art processing experience. This new emphasis on modeling has often become the basis for our theoretical understanding of human interaction with art. Models also often define areas of focus and hypotheses for new empirical research, and are increasingly important for connecting psychological theory to discussions of the brain. However, models are often made by different researchers, with quite different emphases or visual styles. Inputs and psychological outcomes may be differently considered, or can be under-reported with regards to key functional components. Thus, we may lose the major theoretical improvements and ability for comparison that can be had with models. To begin addressing this, this paper presents a theoretical assessment, comparison, and new articulation of a selection of key contemporary cognitive or information-processing-based approaches detailing the mechanisms underlying the viewing of art. We review six major models in contemporary psychological aesthetics. We in turn present redesigns of these models using a unified visual form, in some cases making additions or creating new models where none had previously existed. We also frame these approaches in respect to their targeted outputs (e.g., emotion, appraisal, physiological reaction) and their strengths within a more general framework of early, intermediate, and later processing stages. This is used as a basis for general comparison and discussion of implications and future directions for modeling, and for theoretically understanding our engagement with visual art.",
"title": ""
},
{
"docid": "81efe31163aaa1b1006325cad08676ad",
"text": "Big Data trepidations enormous tome, intricate, emergent records sets with several, self-directed bases. With the fast growth of networking, data storage, and the data pool capacity, Big Data is now quickly escalating in all learning and engineering purviews, including physical, organic and biomedical sciences. In order to get use of these huge and heterogeneous data, data mining is a technique to get utilization of such data. Data mining is also called as facts discovery in databases which is the non-trivial process of detecting the valid, novel, hypothetically beneficial and eventually comprehensible acquaintance in outsized scale data. But the existing data mining techniques are not suitable for such heterogeneous data. In this paper we are proposing a map-reduced based fuzzy based data mining technique.",
"title": ""
},
{
"docid": "36da9ac0c2a111a7bfed3b9f1df845e2",
"text": "This paper has the purpose of describing a new approach on firmware update on automotive ECUs. The firmware update process for certain automotive ECUs requires excessive time through the CAN bus. By using the delta flashing concept, firmware update time can be greatly reduced by reducing the quantity of the data that is transmitted over the network.",
"title": ""
},
{
"docid": "44bebd3c18e1f8929b470f0dbfd7251b",
"text": "In this paper model for analysis electric DC drive made in Matlab Simulink and Matlab SimPower Systems is given. Basic mathematical formulation which describes DC motor is given. Existing laboratory motor is described. Simulation model of DC motor drive and model of discontinuous load is made. Comparison of model made in Matlab Simulink and existing model in SimPower Systems is given. Essential parameters for starting simulation of used DC motor drive is given. Dynamical characteristics of DC motor drive as results of both simulation are shown. Practical use of simulation model is proposed. Keywords— analysis, DC drive, Matlab, SimPower Systems, model, simulation.",
"title": ""
},
{
"docid": "382aac30f231b98aec07106fd458e525",
"text": "New proposals for prosthetic hands fabricated by means of 3D printing are either body-powered for partial hand amputees or myoelectric powered prostheses for transradial amputees. There are no current studies to develop powered 3D printed prostheses for transmetacarpal, probably because at this level of amputation there is little space to fit actuators and their associated electronics. In this work, a design of a 3D-printed hand prosthesis for transmetacarpal amputees and powered by DC micromotors is presented. Four-bar linkage mechanisms were used for the index, middle, ring and little fingers flexion movements, while a mechanism of cylindrical gears and worm drive were used for the thumb. Additionally, a method for customizing prosthetic fingers to match a user specific anthropometry is proposed. Sensors and actuators' selection is explained, and a position control algorithm was developed for each local controller by modeling the DC motors and transmission mechanisms. Finally, a basic control scheme was tested on the prototype for velocity and force evaluation.",
"title": ""
},
{
"docid": "f1fa371e9e17ee136a101c8e69376bd4",
"text": "Many tools allow programmers to develop applications in high-level languages and deploy them in web browsers via compilation to JavaScript. While practical and widely used, these compilers are ad hoc: no guarantee is provided on their correctness for whole programs, nor their security for programs executed within arbitrary JavaScript contexts. This paper presents a compiler with such guarantees. We compile an ML-like language with higher-order functions and references to JavaScript, while preserving all source program properties. Relying on type-based invariants and applicative bisimilarity, we show full abstraction: two programs are equivalent in all source contexts if and only if their wrapped translations are equivalent in all JavaScript contexts. We evaluate our compiler on sample programs, including a series of secure libraries.",
"title": ""
},
{
"docid": "103f432e237567c2954490e8ef257fe7",
"text": "Pierre Bourdieu holds the Chair in Sociology at the prestigious College de France, Paris. He is Directeur d'Etudes at l'Ecole des Hautes Etudes en Sciences Sociales, where he is also Director of the Center for European Sociology, and Editor of the influential journal Actes de la recherche en sciences sociales. Professor Bourdieu is the author or coauthor of approximately twenty books. A number of these have been published in English translation: The Algerians, 1962; Reproduction in Education, Society and Culture (with Jean-Claude Passeron), 1977; Outline of a Theory of Practice, 1977; Algeria I960, 1979; The Inheritors: French Students and their Relations to Culture, 1979; Distinction: A Social Critique of the Judgment of Taste, 1984. The essay below analyzes what Bourdieu terms the \"juridical field.\" In Bourdieu's conception, a \"field\" is an area of structured, socially patterned activity or \"practice,\" in this case disciplinarily and professionally defined. The \"field\" and its \"practices\" have special senses in",
"title": ""
},
{
"docid": "072d187f56635ebc574f2eedb8a91d14",
"text": "With the development of location-based social networks, an increasing amount of individual mobility data accumulate over time. The more mobility data are collected, the better we can understand the mobility patterns of users. At the same time, we know a great deal about online social relationships between users, providing new opportunities for mobility prediction. This paper introduces a noveltyseeking driven predictive framework for mining location-based social networks that embraces not only a bunch of Markov-based predictors but also a series of location recommendation algorithms. The core of this predictive framework is the cooperation mechanism between these two distinct models, determining the propensity of seeking novel and interesting locations.",
"title": ""
}
] |
scidocsrr
|
77fd25b366b086ddab1f4785b3252608
|
Re-architecting the on-chip memory sub-system of machine-learning accelerator for embedded devices
|
[
{
"docid": "d716725f2a5d28667a0746b31669bbb7",
"text": "This work observes that a large fraction of the computations performed by Deep Neural Networks (DNNs) are intrinsically ineffectual as they involve a multiplication where one of the inputs is zero. This observation motivates Cnvlutin (CNV), a value-based approach to hardware acceleration that eliminates most of these ineffectual operations, improving performance and energy over a state-of-the-art accelerator with no accuracy loss. CNV uses hierarchical data-parallel units, allowing groups of lanes to proceed mostly independently enabling them to skip over the ineffectual computations. A co-designed data storage format encodes the computation elimination decisions taking them off the critical path while avoiding control divergence in the data parallel units. Combined, the units and the data storage format result in a data-parallel architecture that maintains wide, aligned accesses to its memory hierarchy and that keeps its data lanes busy. By loosening the ineffectual computation identification criterion, CNV enables further performance and energy efficiency improvements, and more so if a loss in accuracy is acceptable. Experimental measurements over a set of state-of-the-art DNNs for image classification show that CNV improves performance over a state-of-the-art accelerator from 1.24× to 1.55× and by 1.37× on average without any loss in accuracy by removing zero-valued operand multiplications alone. While CNV incurs an area overhead of 4.49%, it improves overall EDP (Energy Delay Product) and ED2P (Energy Delay Squared Product) on average by 1.47× and 2.01×, respectively. The average performance improvements increase to 1.52× without any loss in accuracy with a broader ineffectual identification policy. Further improvements are demonstrated with a loss in accuracy.",
"title": ""
}
] |
[
{
"docid": "15c0f63bb4ab47e47d2bb9789cf404f4",
"text": "This review provides an account of the Study of Mathematically Precocious Youth (SMPY) after 35 years of longitudinal research. Findings from recent 20-year follow-ups from three cohorts, plus 5- or 10-year findings from all five SMPY cohorts (totaling more than 5,000 participants), are presented. SMPY has devoted particular attention to uncovering personal antecedents necessary for the development of exceptional math-science careers and to developing educational interventions to facilitate learning among intellectually precocious youth. Along with mathematical gifts, high levels of spatial ability, investigative interests, and theoretical values form a particularly promising aptitude complex indicative of potential for developing scientific expertise and of sustained commitment to scientific pursuits. Special educational opportunities, however, can markedly enhance the development of talent. Moreover, extraordinary scientific accomplishments require extraordinary commitment both in and outside of school. The theory of work adjustment (TWA) is useful in conceptualizing talent identification and development and bridging interconnections among educational, counseling, and industrial psychology. The lens of TWA can clarify how some sex differences emerge in educational settings and the world of work. For example, in the SMPY cohorts, although more mathematically precocious males than females entered math-science careers, this does not necessarily imply a loss of talent because the women secured similar proportions of advanced degrees and high-level careers in areas more correspondent with the multidimensionality of their ability-preference pattern (e.g., administration, law, medicine, and the social sciences). By their mid-30s, the men and women appeared to be happy with their life choices and viewed themselves as equally successful (and objective measures support these subjective impressions). Given the ever-increasing importance of quantitative and scientific reasoning skills in modern cultures, when mathematically gifted individuals choose to pursue careers outside engineering and the physical sciences, it should be seen as a contribution to society, not a loss of talent.",
"title": ""
},
{
"docid": "b4554b814d889806df0a5ff50fb0e0f8",
"text": "Recent work on searching the Semantic Web has yielded a wide range of approaches with respect to the underlying search mechanisms, results management and presentation, and style of input. Each approach impacts upon the quality of the information retrieved and the user’s experience of the search process. However, despite the wealth of experience accumulated from evaluating Information Retrieval (IR) systems, the evaluation of Semantic Web search systems has largely been developed in isolation from mainstream IR evaluation with a far less unified approach to the design of evaluation activities. This has led to slow progress and low interest when compared to other established evaluation series, such as TREC for IR or OAEI for Ontology Matching. In this paper, we review existing approaches to IR evaluation and analyse evaluation activities for Semantic Web search systems. Through a discussion of these, we identify their weaknesses and highlight the future need for a more comprehensive evaluation framework that addresses current limitations.",
"title": ""
},
{
"docid": "379144f80332b2e81375b3d72c5c3389",
"text": "Due to the fact much of today’s data can be represented as graphs, there has been a demand for generalizing neural network models for graph data. One recent direction that has shown fruitful results, and therefore growing interest, is the usage of graph convolutional neural networks (GCNs). They have been shown to provide a significant improvement on a wide range of tasks in network analysis, one of which being node representation learning. The task of learning low-dimensional node representations has shown to increase performance on a plethora of other tasks from link prediction and node classification, to community detection and visualization. Simultaneously, signed networks (or graphs having both positive and negative links) have become ubiquitous with the growing popularity of social media. However, since previous GCN models have primarily focused on unsigned networks (or graphs consisting of only positive links), it is unclear how they could be applied to signed networks due to the challenges presented by negative links. The primary challenges are based on negative links having not only a different semantic meaning as compared to positive links, but their principles are inherently different and they form complex relations with positive links. Therefore we propose a dedicated and principled effort that utilizes balance theory to correctly aggregate and propagate the information across layers of a signed GCN model. We perform empirical experiments comparing our proposed signed GCN against state-of-the-art baselines for learning node representations in signed networks. More specifically, our experiments are performed on four realworld datasets for the classical link sign prediction problem that is commonly used as the benchmark for signed network embeddings algorithms.",
"title": ""
},
{
"docid": "5897b87a82d5bc11757e33a8a46b1f21",
"text": "BACKGROUND\nProspective data from over 10 years of follow-up were used to examine neighbourhood deprivation, social fragmentation and trajectories of health.\n\n\nMETHODS\nFrom the third phase (1991-93) of the Whitehall II study of British civil servants, SF-36 health functioning was measured on up to five occasions for 7834 participants living in 2046 census wards. Multilevel linear regression models assessed the Townsend deprivation index and social fragmentation index as predictors of initial health and health trajectories.\n\n\nRESULTS\nIndependent of individual socioeconomic factors, deprivation was inversely associated with initial SF-36 physical component summary (PCS) score. Social fragmentation was not associated with PCS scores. Deprivation and social fragmentation were inversely associated with initial mental component summary (MCS) score. Neighbourhood characteristics were not associated with trajectories of PCS score or MCS score for the whole set. However, restricted analysis on longer term residents revealed that residents in deprived or socially fragmented neighbourhoods had lowest initial and smallest improvements in MCS score.\n\n\nCONCLUSIONS\nThis longitudinal study provides evidence that residence in a deprived or fragmented neighbourhood is associated with poorer mental health and that longer exposure to such neighbourhood environments has incremental effects. Associations between physical health functioning and neighbourhood characteristics were less clear. Mindful of the importance of individual socioeconomic factors, the findings warrant more detailed examination of materially and socially deprived neighbourhoods and their consequences for health.",
"title": ""
},
{
"docid": "78437d8aafd3bf09522993447b0a4d50",
"text": "Over the past 30 years, policy makers and professionals who provide services to older adults with chronic conditions and impairments have placed greater emphasis on conceptualizing aging in place as an attainable and worthwhile goal. Little is known, however, of the changes in how this concept has evolved in aging research. To track trends in aging in place, we examined scholarly articles published from 1980 to 2010 that included the concept in eleven academic gerontology journals. We report an increase in the absolute number and proportion of aging-in-place manuscripts published during this period, with marked growth in the 2000s. Topics related to the environment and services were the most commonly examined during 2000-2010 (35% and 31%, resp.), with a substantial increase in manuscripts pertaining to technology and health/functioning. This underscores the increase in diversity of topics that surround the concept of aging-in-place literature in gerontological research.",
"title": ""
},
{
"docid": "8bb30efa3f14fa0860d1e5bc1265c988",
"text": "The introduction of microgrids in distribution networks based on power electronics facilitates the use of renewable energy resources, distributed generation (DG) and storage systems while improving the quality of electric power and reducing losses thus increasing the performance and reliability of the electrical system, opens new horizons for microgrid applications integrated into electrical power systems. The hierarchical control structure consists of primary, secondary, and tertiary levels for microgrids that mimic the behavior of the mains grid is reviewed. The main objective of this paper is to give a description of state of the art for the distributed power generation systems (DPGS) based on renewable energy and explores the power converter connected in parallel to the grid which are distinguished by their contribution to the formation of the grid voltage and frequency and are accordingly classified in three classes. This analysis is extended focusing mainly on the three classes of configurations grid-forming, grid-feeding, and gridsupporting. The paper ends up with an overview and a discussion of the control structures and strategies to control distribution power generation system (DPGS) units connected to the network. Keywords— Distributed power generation system (DPGS); hierarchical control; grid-forming; grid-feeding; grid-supporting. Nomenclature Symbols id − iq Vd − Vq P Q ω E f U",
"title": ""
},
{
"docid": "9c2609adae64ec8d0b4e2cc987628c05",
"text": "We propose a novel method capable of retrieving clips from untrimmed videos based on natural language queries. This cross-modal retrieval task plays a key role in visual-semantic understanding, and requires localizing clips in time and computing their similarity to the query sentence. Current methods generate sentence and video embeddings and then compare them using a late fusion approach, but this ignores the word order in queries and prevents more fine-grained comparisons. Motivated by the need for fine-grained multi-modal feature fusion, we propose a novel early fusion embedding approach that combines video and language information at the word level. Furthermore, we use the inverse task of dense video captioning as a side-task to improve the learned embedding. Our full model combines these components with an efficient proposal pipeline that performs accurate localization of potential video clips. We present a comprehensive experimental validation on two large-scale text-to-clip datasets (Charades-STA and DiDeMo) and attain state-ofthe-art retrieval results with our model.",
"title": ""
},
{
"docid": "8e3d30eebcd6e255be682157f6f2ccd5",
"text": "X-ray crystallography shows the myosin cross-bridge to exist in two conformations, the beginning and end of the \"power stroke.\" A long lever-arm undergoes a 60 degrees to 70 degrees rotation between the two states. This rotation is coupled with changes in the active site (OPEN to CLOSED) and phosphate release. Actin binding mediates the transition from CLOSED to OPEN. Kinetics shows that the binding of myosin to actin is a two-step process which affects ATP and ADP affinity. The structural basis of these effects is not explained by the presently known conformers of myosin. Therefore, other states of the myosin cross-bridge must exist. Moreover, cryoelectronmicroscopy has revealed other angles of the cross-bridge lever arm induced by ADP binding. These structural states are presently being characterized by site-directed mutagenesis coupled with kinetic analysis.",
"title": ""
},
{
"docid": "ee7e519f678bc8e13738ec20a21dd78f",
"text": "A compact high PSR Low Drop-Out (LDO) voltage regulator providing a peak load-current (IL) of 100μA is realized in 0.13μm CMOS 1P6M process. Ultra low-power operation is achieved for the power block by realizing a nano-power bandgap reference circuit whose total power consumption including LDO is only just 95nW for 1.2Vsupply. The resistor-less reference circuit with no external capacitor for LDO stability results in a very compact design occupying just 0.033 mm2. The proposed post-layout reference and LDO block consumes only 38nA and 41nA respectively, regulating output at 0.9V with a 1.2V supply.",
"title": ""
},
{
"docid": "47e84cacb4db05a30bedfc0731dd2717",
"text": "Although short-range wireless communication explicitly targets local and regional applications, range continues to be a highly important issue. The range directly depends on the so-called link budget, which can be increased by the choice of modulation and coding schemes. The recent transceiver generation in particular comes with extensive and flexible support for software-defined radio (SDR). The SX127× family from Semtech Corp. is a member of this device class and promises significant benefits for range, robust performance, and battery lifetime compared to competing technologies. This contribution gives a short overview of the technologies to support Long Range (LoRa™) and the corresponding Layer 2 protocol (LoRaWAN™). It particularly describes the possibility to combine the Internet Protocol, i.e. IPv6, into LoRaWAN™, so that it can be directly integrated into a full-fledged Internet of Things (IoT). The proposed solution, which we name 6LoRaWAN, has been implemented and tested; results of the experiments are also shown in this paper.",
"title": ""
},
{
"docid": "f18dc5d572f60da7c85d50e6a42de2c9",
"text": "Recent developments in remote sensing are offering a promising opportunity to rethink conventional control strategies of wind turbines. With technologies such as LIDAR, the information about the incoming wind field - the main disturbance to the system - can be made available ahead of time. Feedforward control can be easily combined with traditional collective pitch feedback controllers and has been successfully tested on real systems. Nonlinear model predictive controllers adjusting both collective pitch and generator torque can further reduce structural loads in simulations but have higher computational times compared to feedforward or linear model predictive controller. This paper compares a linear and a commercial nonlinear model predictive controller to a baseline controller. On the one hand simulations show that both controller have significant improvements if used along with the preview of the rotor effective wind speed. On the other hand the nonlinear model predictive controller can achieve better results compared to the linear model close to the rated wind speed.",
"title": ""
},
{
"docid": "e519d705cd52b4eb24e4e936b849b3ce",
"text": "Computer manufacturers spend a huge amount of time, resources, and money in designing new systems and newer configurations, and their ability to reduce costs, charge competitive prices and gain market share depends on how good these systems perform. In this work, we develop predictive models for estimating the performance of systems by using performance numbers from only a small fraction of the overall design space. Specifically, we first develop three models, two based on artificial neural networks and another based on linear regression. Using these models, we analyze the published Standard Performance Evaluation Corporation (SPEC) benchmark results and show that by using the performance numbers of only 2% and 5% of the machines in the design space, we can estimate the performance of all the systems within 9.1% and 4.6% on average, respectively. Then, we show that the performance of future systems can be estimated with less than 2.2% error rate on average by using the data of systems from a previous year. We believe that these tools can accelerate the design space exploration significantly and aid in reducing the corresponding research/development cost and time-to-market.",
"title": ""
},
{
"docid": "5bef975924d427c3ae186d92a93d4f74",
"text": "The Voronoi diagram of a set of sites partitions space into regions, one per site; the region for a site s consists of all points closer to s than to any other site. The dual of the Voronoi diagram, the Delaunay triangulation, is the unique triangulation such that the circumsphere of every simplex contains no sites in its interior. Voronoi diagrams and Delaunay triangulations have been rediscovered or applied in many areas of mathematics and the natural sciences; they are central topics in computational geometry, with hundreds of papers discussing algorithms and extensions. Section 27.1 discusses the definition and basic properties in the usual case of point sites in R with the Euclidean metric, while Section 27.2 gives basic algorithms. Some of the many extensions obtained by varying metric, sites, environment, and constraints are discussed in Section 27.3. Section 27.4 finishes with some interesting and nonobvious structural properties of Voronoi diagrams and Delaunay triangulations.",
"title": ""
},
{
"docid": "2e4829d97aa013040ee0f069cfa2845e",
"text": "The presence of noise in data is a common problem that produces several negative consequences in classification problems. In multi-class problems, these consequences are aggravated in terms of accuracy, building time, and complexity of the classifiers. In these cases, an interesting approach to reduce the effect of noise is to decompose the problem into several binary subproblems, reducing the complexity and, consequently, dividing the effects caused by noise into each of these subproblems. This paper analyzes the usage of decomposition strategies, and more specifically the One-vs-One scheme, to deal with noisy multi-class datasets. In order to investigate whether the decomposition is able to reduce the effect of noise or not, a large number of datasets are created introducing different levels and types of noise, as suggested in the literature. Several well-known classification algorithms, with or without decomposition, are trained on them in order to check when decomposition is advantageous. The results obtained show that methods using the One-vs-One strategy lead to better performances and more robust classifiers when dealing with noisy data, especially with the most disruptive noise schemes.",
"title": ""
},
{
"docid": "e1d7f0ce42121f7bb4e408637caa2160",
"text": "A Game Theoretical Method for Cost-Benefit Analysis of Malware Dissemination Prevention Theodoros Spyridopoulos, Konstantinos Maraslis, Alexios Mylonas, Theo Tryfonas & George Oikonomou To cite this article: Theodoros Spyridopoulos, Konstantinos Maraslis, Alexios Mylonas, Theo Tryfonas & George Oikonomou (2015): A Game Theoretical Method for Cost-Benefit Analysis of Malware Dissemination Prevention, Information Security Journal: A Global Perspective, DOI: 10.1080/19393555.2015.1092186 To link to this article: http://dx.doi.org/10.1080/19393555.2015.1092186",
"title": ""
},
{
"docid": "c2203ebede0e1a578db16beea6c0a57e",
"text": "There have been many advances in vitreoretinal surgery since Machemer introduced the concept of pars plana vitrectomy, in 1971. Of particular interest are the changes in the vitrectomy cutters, their fluidics interaction, the wide-angle viewing systems and the evolution of endoillumination through the past decade and notably in the last few years. The indications of 27-gauge surgery have expanded, including more complex cases. Cut rates of up to 16,000 cuts per minute are already available. New probe designs and pump technology have allowed duty cycle performances of near 100% and improved flow control. The smaller vitrectomy diameter can be positioned between narrow spaces, allowing membrane dissection and serving as a multifunctional instrument. Enhanced endoillumination safety can be achieved by changing the light source, adding light filters, increasing the working distance and understanding the potential interactions between light and vital dyes commonly used to stain the retina. Wide-angle viewing systems (contact, non-contact or a combination of both) provide a panoramic view of the retina. Non-contact systems are assistant-independent, while contact systems may be associated with better image resolution. This review will cover some current aspects on vitrectomy procedures, mainly assessing vitrectomy cutters, as well as the importance of endoillumination and the use of wide-angle viewing systems.",
"title": ""
},
{
"docid": "528aa41f48405f2aead0e4a26671b942",
"text": "Spoofing detection, which discriminates the spoofed speech from the natural speech, has gained much attention recently. Low-dimensional features that are used in speaker recognition/verification are also used in spoofing detection. Unfortunately, they don't capture sufficient information required for spoofing detection. In this work, we investigate the use of high-dimensional features for spoofing detection, that maybe more sensitive to the artifacts in the spoofed speech. Six types of high-dimensional feature are employed. For each kind of feature, four different representations are extracted, i.e. the original high-dimensional feature, corresponding low-dimensional feature, the low- and the high-frequency regions of the original high-dimensional feature. Dynamic features are also calculated to assess the effectiveness of the temporal information to detect the artifacts across frames. A neural network-based classifier is adopted to handle the high-dimensional features. Experimental results on the standard ASVspoof 2015 corpus suggest that high-dimensional features and dynamic features are useful for spoofing attack detection. A fusion of them has been shown to achieve 0.0% the equal error rates for nine of ten attack types.",
"title": ""
},
{
"docid": "7f2cc3419b33590cf535085c2c648e41",
"text": "Depth information has been shown to affect identification of visually salient regions in images. In this paper, we investigate the role of depth in saliency detection in the presence of (i) competing saliencies due to appearance, (ii) depth-induced blur and (iii) centre-bias. Having established through experiments that depth continues to be a significant contributor to saliency in the presence of these cues, we propose a 3D-saliency formulation that takes into account structural features of objects in an indoor setting to identify regions at salient depth levels. Computed 3D-saliency is used in conjunction with 2D-saliency models through non-linear regression using SVM to improve saliency maps. Experiments on benchmark datasets containing depth information show that the proposed fusion of 3D-saliency with 2D-saliency models results in an average improvement in ROC scores of about 9% over state-of-the-art 2D saliency models. The main contributions of this paper are: (i) The development of a 3D-saliency model that integrates depth and geometric features of object surfaces in indoor scenes. (ii) Fusion of appearance (RGB) saliency with depth saliency through non-linear regression using SVM. (iii) Experiments to support the hypothesis that depth improves saliency detection in the presence of blur and centre-bias. The effectiveness of the 3D-saliency model and its fusion with RGB-saliency is illustrated through experiments on two benchmark datasets that contain depth information. Current stateof-the-art saliency detection algorithms perform poorly on these datasets that depict indoor scenes due to the presence of competing saliencies in the form of color contrast. For example in Fig. 1, saliency maps of [1] is shown for different scenes, along with its human eye fixations and our proposed saliency map after fusion. It is seen from the first scene of Fig. 1, that illumination plays spoiler role in RGB-saliency map. In second scene of Fig. 1, the RGB-saliency is focused on the cap though multiple salient objects are present in the scene. Last scene at the bottom of Fig. 1, shows the limitation of the RGB-saliency when the object is similar in appearance with the background. Effect of depth on Saliency: In [4], it is shown that depth is an important cue for saliency. In this paper we go further and verify if the depth alone influences the saliency. Different scenes were captured for experimentation using Kinect sensor. Observations resulted out of these experiments are (i) Humans fixate on the objects at closer depth, in the presence of visually competing salient objects in the background, (ii) Early attention happens on the objects at closer depth, (iii) Effective fixations are high at the low contrast foreground compared to the high contrast objects in the background which are blurred, (iv) Low contrast object placed at the center of the field of view, gets more attention compared to other locations. As a result of all these observations, we develop a 3D-saliency that captures the depth information of the regions in the scene. 3D-Saliency: We adapt the region based contrast method from Cheng et al. [1] in computing contrast strengths for the segmented 3D surfaces or regions. Each segmented region is assigned a contrast score using surface normals as the feature. Structure of the surface can be described based on the distribution of normals in the region. We compute a histogram of angular distances formed by every pair of normals in the region. 
Every region Rk is associated with a histogram Hk. The contrast score Ck of a region Rk is computed as the sum of the dot products of its histogram with the histograms of the other regions in the scene. Since the depth of a region influences visual attention, the contrast score is scaled by a value Zk, which is the depth of the region Rk from the sensor. In order to define the saliency, the sizes of the regions, i.e. the number of points in each region, have to be considered: we take the ratio of the region dimension to half of the scene dimension and, with nk denoting the number of 3D points in the region Rk, weight the contrast score by this size ratio. Figure 1 shows four different scenes and their saliency maps; for each scene, from top left: (i) original image, (ii) RGB-saliency map using RC [1], (iii) human fixations from an eye-tracker, and (iv) the fused RGBD-saliency map.",
"title": ""
},
{
"docid": "488fa6801070af3056912d9296cc2151",
"text": "Current evaluation metrics to question answering based machine reading comprehension (MRC) systems generally focus on the lexical overlap between candidate and reference answers, such as ROUGE and BLEU. However, bias may appear when these metrics are used for specific question types, especially questions inquiring yes-no opinions and entity lists. In this paper, we make adaptations on the metrics to better correlate n-gram overlap with the human judgment for answers to these two question types. Statistical analysis proves the effectiveness of our approach. Our adaptations may provide positive guidance for the development of realscene MRC systems.",
"title": ""
},
{
"docid": "723bfb5acef53d78a05660e5d9710228",
"text": "Cheap micro-controllers, such as the Arduino or other controllers based on the Atmel AVR CPUs are being deployed in a wide variety of projects, ranging from sensors networks to robotic submarines. In this paper, we investigate the feasibility of using the Arduino as a true random number generator (TRNG). The Arduino Reference Manual recommends using it to seed a pseudo random number generator (PRNG) due to its ability to read random atmospheric noise from its analog pins. This is an enticing application since true bits of entropy are hard to come by. Unfortunately, we show by statistical methods that the atmospheric noise of an Arduino is largely predictable in a variety of settings, and is thus a weak source of entropy. We explore various methods to extract true randomness from the micro-controller and conclude that it should not be used to produce randomness from its analog pins.",
"title": ""
}
] |
scidocsrr
|
b87edf4e2b15aaee500496823e3da2c5
|
Pelvic Discontinuity Associated With Total Hip Arthroplasty: Evaluation and Management.
|
[
{
"docid": "fa691b72e61685d0fa89bf7a821373da",
"text": "BACKGROUND\nStabilization of a pelvic discontinuity with a posterior column plate with or without an associated acetabular cage sometimes results in persistent micromotion across the discontinuity with late fatigue failure and component loosening. Acetabular distraction offers an alternative technique for reconstruction in cases of severe bone loss with an associated pelvic discontinuity.\n\n\nQUESTIONS/PURPOSES\nWe describe the acetabular distraction technique with porous tantalum components and evaluate its survival, function, and complication rate in patients undergoing revision for chronic pelvic discontinuity.\n\n\nMETHODS\nBetween 2002 and 2006, we treated 28 patients with a chronic pelvic discontinuity with acetabular reconstruction using acetabular distraction. A porous tantalum elliptical acetabular component was used alone or with an associated modular porous tantalum augment in all patients. Three patients died and five were lost to followup before 2 years. The remaining 20 patients were followed semiannually for a minimum of 2 years (average, 4.5 years; range, 2-7 years) with clinical (Merle d'Aubigné-Postel score) and radiographic (loosening, migration, failure) evaluation.\n\n\nRESULTS\nOne of the 20 patients required rerevision for aseptic loosening. Fifteen patients remained radiographically stable at last followup. Four patients had early migration of their acetabular component but thereafter remained clinically asymptomatic and radiographically stable. At latest followup, the average improvement in the patients not requiring rerevision using the modified Merle d'Aubigné-Postel score was 6.6 (range, 3.3-9.6). There were no postoperative dislocations; however, one patient had an infection, one a vascular injury, and one a bowel injury.\n\n\nCONCLUSIONS\nAcetabular distraction with porous tantalum components provides predictable pain relief and durability at 2- to 7-year followup when reconstructing severe acetabular defects with an associated pelvic discontinuity.\n\n\nLEVEL OF EVIDENCE\nLevel IV, therapeutic study. See Instructions for Authors for a complete description of levels of evidence.",
"title": ""
}
] |
[
{
"docid": "312e6ee352df4cb88bcae54f51ac5404",
"text": "Cloud Computing adoption has experienced a considerable rate of growth since its emergence in 2006. By 2011, it had become the top technology priority for organizations worldwide and according to some leading industry reports the cloud computing market is estimated to reach $241 billion by 2020. Reasons for adoption are multi-fold, including for example the expected realisation of benefits pertaining to cost reduction, improved scalability, improved resource utilization, worker mobility and collaboration, and business continuity, among others. Research into cloud computing adoption has to date primarily focused on the larger, multinational enterprises. However, one key area where cloud computing is expected to hold considerable promise is for the Small and Medium Sized Enterprise (SME). SMEs are recognized as being inherently different from their large firm counterparts, not least from a resource constraint perspective and for this reason, cloud computing is reported to offer significant benefits for SMEs through, for example, facilitating a reduction in the financial burden associated with new technology adoption. This paper reports findings from a recent exploratory study into Cloud Computing adoption among Irish SMEs. Despite its purported importance, this study found that almost half of the respondents had not migrated any services or processes to the cloud environment. Further, with respect to those who had transitioned to the cloud, the data suggests that many of these SMEs did not rigorously assess their readiness for adopting cloud computing technology or did not adopt in-depth approaches for managing their engagement with cloud. While the study is of an exploratory nature, nevertheless the findings have important implications for the development/ improvement of national strategies or policies to support the successful adoption of Cloud Computing technology among the SME market. This research has implications for academic research in this area as well as proposing a number of practical recommendations to support the SME cloud adoption journey.",
"title": ""
},
{
"docid": "30d7f140a5176773611b3c1f8ec4953e",
"text": "The healthcare environment is generally perceived as being ‘information rich’ yet ‘knowledge poor’. There is a wealth of data available within the healthcare systems. However, there is a lack of effective analysis tools to discover hidden relationships and trends in data. Knowledge discovery and data mining have found numerous applications in business and scientific domain. Valuable knowledge can be discovered from application of data mining techniques in healthcare system. In this study, we briefly examine the potential use of classification based data mining techniques such as Rule based, decision tree and Artificial Neural Network to massive volume of healthcare data. In particular we consider a case study using classification techniques on a medical data set of diabetic patients.",
"title": ""
},
{
"docid": "5d23af3f778a723b97690f8bf54dfa41",
"text": "Software engineering techniques have been employed for many years to create software products. The selections of appropriate software development methodologies for a given project, and tailoring the methodologies to a specific requirement have been a challenge since the establishment of software development as a discipline. In the late 1990’s, the general trend in software development techniques has changed from traditional waterfall approaches to more iterative incremental development approaches with different combination of old concepts, new concepts, and metamorphosed old concepts. Nowadays, the aim of most software companies is to produce software in short time period with minimal costs, and within unstable, changing environments that inspired the birth of Agile. Agile software development practice have caught the attention of software development teams and software engineering researchers worldwide during the last decade but scientific research and published outcomes still remains quite scarce. Every agile approach has its own development cycle that results in technological, managerial and environmental changes in the software companies. This paper explains the values and principles of ten agile practices that are becoming more and more dominant in the software development industry. Agile processes are not always beneficial, they have some limitations as well, and this paper also discusses the advantages and disadvantages of Agile processes.",
"title": ""
},
{
"docid": "c647b0b28c61da096b781b4aa3c89f03",
"text": "This article concerns the real-world importance of leadership for the success or failure of organizations and social institutions. The authors propose conceptualizing leadership and evaluating leaders in terms of the performance of the team or organization for which they are responsible. The authors next offer a taxonomy of the dependent variables used as criteria in leadership studies. A review of research using this taxonomy suggests that the vast empirical literature on leadership may tell us more about the success of individual managerial careers than the success of these people in leading groups, teams, and organizations. The authors then summarize the evidence showing that leaders do indeed affect the performance of organizations--for better or for worse--and conclude by describing the mechanisms through which they do so.",
"title": ""
},
{
"docid": "ce1ba5311ce3dfc94b2bf0271df1a2ae",
"text": "A fundamental artificial intelligence challenge is how to design agents that intelligently trade off exploration and exploitation while quickly learning about an unknown environment. However, in order to learn quickly, we must somehow generalize experience across states. One promising approach is to use Bayesian methods to simultaneously cluster dynamics and control exploration; unfortunately, these methods tend to require computationally intensive MCMC approximation techniques which lack guarantees. We propose Thompson Clustering for Reinforcement Learning (TCRL), a family of Bayesian clustering algorithms for reinforcement learning that leverage structure in the state space to remain computationally efficient while controlling both exploration and generalization. TCRL-Theoretic achieves near-optimal Bayesian regret bounds while consistently improving over a standard Bayesian exploration approach. TCRLRelaxed is guaranteed to converge to acting optimally, and empirically outperforms state-of-the-art Bayesian clustering algorithms across a variety of simulated domains, even in cases where no states are similar.",
"title": ""
},
{
"docid": "f8548ed0045a0306c594e1b3786f617f",
"text": "The goal of having networks of seamlessly connected people, software agents and IT systems remains elusive. Early integration efforts focused on connectivity at the physical and syntactic layers. Great strides were made; there are many commercial tools available, for example to assist with enterprise application integration. It is now recognized that physical and syntactic connectivity is not adequate. A variety of research systems have been developed addressing some of the semantic issues. In this paper, we argue that ontologies in particular and semantics-based technologies in general will play a key role in achieving seamless connectivity. We give a detailed introduction to ontologies, summarize the current state of the art for applying ontologies to achieve semantic connectivity and highlight some key challenges.",
"title": ""
},
{
"docid": "65aed4d07ba558da05d3458884d8b67b",
"text": "This paper proposes an input voltage sensorless control algorithm for three-phase active boost rectifiers. Using this approach, the input ac-phase voltages can be accurately estimated from the fluctuations of other measured state variables and preceding switching state information from converter dynamics. Furthermore, the proposed control strategy reduces the input current harmonics of an ac–dc three-phase boost power factor correction (PFC) converter by injecting an additional common-mode duty ratio term to the feedback controllers’ outputs. This additional duty compensation term cancels the unwanted input harmonics, caused by the floating potential between ac source neutral and dc link negative, without requiring any access to the neutral point. A 6-kW (continuous power)/10-kW (peak power) three-phase boost PFC prototype using SiC-based semiconductor switching devices is designed and developed to validate the proposed control algorithm. The experimental results show that an input power factor of 0.999 with a conversion efficiency of 98.3%, total harmonic distortion as low as 4%, and a tightly regulated dc-link voltage with 1% ripple can be achieved.",
"title": ""
},
{
"docid": "1ac124cd7f8f4c92693ee959b5b39425",
"text": "The intestinal microbiota plays a fundamental role in maintaining immune homeostasis. In controlled clinical trials probiotic bacteria have demonstrated a benefit in treating gastrointestinal diseases, including infectious diarrhea in children, recurrent Clostridium difficile-induced infection, and some inflammatory bowel diseases. This evidence has led to the proof of principle that probiotic bacteria can be used as a therapeutic strategy to ameliorate human diseases. The precise mechanisms influencing the crosstalk between the microbe and the host remain unclear but there is growing evidence to suggest that the functioning of the immune system at both a systemic and a mucosal level can be modulated by bacteria in the gut. Recent compelling evidence has demonstrated that manipulating the microbiota can influence the host. Several new mechanisms by which probiotics exert their beneficial effects have been identified and it is now clear that significant differences exist between different probiotic bacterial species and strains; organisms need to be selected in a more rational manner to treat disease. Mechanisms contributing to altered immune function in vivo induced by probiotic bacteria may include modulation of the microbiota itself, improved barrier function with consequent reduction in immune exposure to microbiota, and direct effects of bacteria on different epithelial and immune cell types. These effects are discussed with an emphasis on those organisms that have been used to treat human inflammatory bowel diseases in controlled clinical trials.",
"title": ""
},
{
"docid": "4a3f7e89874c76f62aa97ef6a114d574",
"text": "A robust approach to solving linear optimization problems with uncertain data was proposed in the early 1970s and has recently been extensively studied and extended. Under this approach, we are willing to accept a suboptimal solution for the nominal values of the data in order to ensure that the solution remains feasible and near optimal when the data changes. A concern with such an approach is that it might be too conservative. In this paper, we propose an approach that attempts to make this trade-off more attractive; that is, we investigate ways to decrease what we call the price of robustness. In particular, we flexibly adjust the level of conservatism of the robust solutions in terms of probabilistic bounds of constraint violations. An attractive aspect of our method is that the new robust formulation is also a linear optimization problem. Thus we naturally extend our methods to discrete optimization problems in a tractable way. We report numerical results for a portfolio optimization problem, a knapsack problem, and a problem from the Net Lib library.",
"title": ""
},
{
"docid": "b68a54ec0f401c2e590f313f13abcd4d",
"text": "Automated sleep apnea detection and severity identification has largely focused on multivariate sensor data in the past two decades. Clinically too, sleep apnea is identified using a combination of markers including blood oxygen saturation, respiration rate etc. More recently, scientists have begun to investigate the use of instantaneous heart rates for detection and severity measurement of sleep apnea. However, the best-known techniques that use heart rate and its derivatives have been able to achieve less than 85% accuracy in classifying minute-to-minute apnea data. In our research reported in this paper, we apply a deep learning technique called LSTM-RNN (long short-term memory recurrent neural network) for identification of sleep apnea and its severity based only on instantaneous heart rates. We have tested this model on multiple sleep apnea datasets and obtained perfect accuracy. Furthermore, we have also tested its robustness on an arrhythmia dataset (that is highly probable in mimicking sleep apnea heart rate variability) and found that the model is highly accurate in distinguishing between the two.",
"title": ""
},
{
"docid": "8dfd91ceadfcceea352975f9b5958aaf",
"text": "The bag-of-words representation commonly used in text analysis can be analyzed very efficiently and retains a great deal of useful information, but it is also troublesome because the same thought can be expressed using many different terms or one term can have very different meanings. Dimension reduction can collapse together terms that have the same semantics, to identify and disambiguate terms with multiple meanings and to provide a lower-dimensional representation of documents that reflects concepts instead of raw terms. In this chapter, we survey two influential forms of dimension reduction. Latent semantic indexing uses spectral decomposition to identify a lower-dimensional representation that maintains semantic properties of the documents. Topic modeling, including probabilistic latent semantic indexing and latent Dirichlet allocation, is a form of dimension reduction that uses a probabilistic model to find the co-occurrence patterns of terms that correspond to semantic topics in a collection of documents. We describe the basic technologies in detail and expose the underlying mechanism. We also discuss recent advances that have made it possible to apply these techniques to very large and evolving text collections and to incorporate network structure or other contextual information.",
"title": ""
},
{
"docid": "01f9b07bc5c6ca47a6181deb908445e8",
"text": "This paper deals with deep neural networks for predicting accurate dense disparity map with Semi-global matching (SGM). SGM is a widely used regularization method for real scenes because of its high accuracy and fast computation speed. Even though SGM can obtain accurate results, tuning of SGMs penalty-parameters, which control a smoothness and discontinuity of a disparity map, is uneasy and empirical methods have been proposed. We propose a learning based penalties estimation method, which we call SGM-Nets that consist of Convolutional Neural Networks. A small image patch and its position are input into SGMNets to predict the penalties for the 3D object structures. In order to train the networks, we introduce a novel loss function which is able to use sparsely annotated disparity maps such as captured by a LiDAR sensor in real environments. Moreover, we propose a novel SGM parameterization, which deploys different penalties depending on either positive or negative disparity changes in order to represent the object structures more discriminatively. Our SGM-Nets outperformed state of the art accuracy on KITTI benchmark datasets.",
"title": ""
},
{
"docid": "25f67b19daa65a8c7ade4cabe1153c60",
"text": "This paper deals with feedback controller synthesis for Timed Event Graphs in dioids. We discuss here the existence and the computation of a controller which leads to a closed-loop system whose behavior is as close as possible to the one of a given reference model and which delays as much as possible the input of tokens inside the (controlled) system. The synthesis presented here is mainly based on residuation theory results and some Kleene star properties.",
"title": ""
},
{
"docid": "77749f228ebcadfbff9202ee17225752",
"text": "Temporal object detection has attracted significant attention, but most popular detection methods cannot leverage rich temporal information in videos. Very recently, many algorithms have been developed for video detection task, yet very few approaches can achieve real-time online object detection in videos. In this paper, based on the attention mechanism and convolutional long short-term memory (ConvLSTM), we propose a temporal single-shot detector (TSSD) for real-world detection. Distinct from the previous methods, we take aim at temporally integrating pyramidal feature hierarchy using ConvLSTM, and design a novel structure, including a low-level temporal unit as well as a high-level one for multiscale feature maps. Moreover, we develop a creative temporal analysis unit, namely, attentional ConvLSTM, in which a temporal attention mechanism is specially tailored for background suppression and scale suppression, while a ConvLSTM integrates attention-aware features across time. An association loss and a multistep training are designed for temporal coherence. Besides, an online tubelet analysis (OTA) is exploited for identification. Our framework is evaluated on ImageNet VID dataset and 2DMOT15 dataset. Extensive comparisons on the detection and tracking capability validate the superiority of the proposed approach. Consequently, the developed TSSD-OTA achieves a fast speed and an overall competitive performance in terms of detection and tracking. Finally, a real-world maneuver is conducted for underwater object grasping.",
"title": ""
},
{
"docid": "b12defb3d9d7c5ccda8c3e0b0858f55f",
"text": "We investigate a simple yet effective method to introduce inhibitory and excitatory interactions between units in the layers of a deep neural network classifier. The method is based on the greedy layer-wise procedure of deep learning algorithms and extends the denoising autoencoder (Vincent et al., 2008) by adding asymmetric lateral connections between its hidden coding units, in a manner that is much simpler and computationally more efficient than previously proposed approaches. We present experiments on two character recognition problems which show for the first time that lateral connections can significantly improve the classification performance of deep networks.",
"title": ""
},
{
"docid": "c9b6f91a7b69890db88b929140f674ec",
"text": "Pedestrian detection is a key problem in computer vision, with several applications that have the potential to positively impact quality of life. In recent years, the number of approaches to detecting pedestrians in monocular images has grown steadily. However, multiple data sets and widely varying evaluation protocols are used, making direct comparisons difficult. To address these shortcomings, we perform an extensive evaluation of the state of the art in a unified framework. We make three primary contributions: 1) We put together a large, well-annotated, and realistic monocular pedestrian detection data set and study the statistics of the size, position, and occlusion patterns of pedestrians in urban scenes, 2) we propose a refined per-frame evaluation methodology that allows us to carry out probing and informative comparisons, including measuring performance in relation to scale and occlusion, and 3) we evaluate the performance of sixteen pretrained state-of-the-art detectors across six data sets. Our study allows us to assess the state of the art and provides a framework for gauging future efforts. Our experiments show that despite significant progress, performance still has much room for improvement. In particular, detection is disappointing at low resolutions and for partially occluded pedestrians.",
"title": ""
},
{
"docid": "41f14f975d8c757404112f918027bd50",
"text": "Providing a plausible explanation for the relationship between two related entities is an important task in some applications of knowledge graphs, such as in search engines. However, most existing methods require a large number of manually labeled training data, which cannot be applied in large-scale knowledge graphs due to the expensive data annotation. In addition, these methods typically rely on costly handcrafted features. In this paper, we propose an effective pairwise ranking model by leveraging clickthrough data of a Web search engine to address these two problems. We first construct large-scale training data by leveraging the query-title pairs derived from clickthrough data of a Web search engine. Then, we build a pairwise ranking model which employs a convolutional neural network to automatically learn relevant features. The proposed model can be easily trained with backpropagation to perform the ranking task. The experiments show that our method significantly outperforms several strong baselines.",
"title": ""
},
{
"docid": "6a0d404aff5059fc482671b497b2b8d0",
"text": "OBJECTIVE\nTo identify the effects of laryngeal surgical treatment in the voice of transgender women, especially on the fundamental frequency (f0).\n\n\nSTUDY DESIGN\nWe performed a systematic review in PubMed and Scopus in July 2016, covering the period between 2005 and 2016.\n\n\nMETHODS\nInclusion criteria were studies in English or Portuguese about the laryngeal surgical treatment in transgender women, featuring experimental design, title, year of publication, country of origin, journal of publication, participants, intervention, results. For the meta-analysis, only studies that had control group were selected. Exclusion criteria were articles that mentioned the use of surgical techniques but did not use the procedure in research, animal studies, studies of revision, and postmortem studies.\n\n\nRESULTS\nFour hundred and twenty-three articles were identified in the initial search; 94 were selected for analysis by two referees, independently. After applying all the selection criteria, five studies remained in the meta-analysis. The surgical procedures that were identified included laryngoplasty with or without thyrohyoid approximation, Wendler glottoplasty, cricothyroid approximation, laser glottoplasty reduction and the vocal fold shortening and retrodisplacement of anterior commissure. There was no significant difference between the experimental group and the control group in relation to f0.\n\n\nCONCLUSION\nNo randomized clinical trials and prospective cohort studies are available, and a small number of retrospective cohort and case-control studies of surgical techniques reveal an increase in the f0. The evidence produced is not conclusive regarding which surgical technique would be better for vocal treatment of transgender women.\n\n\nLEVEL OF EVIDENCE\nNA Laryngoscope, 127:2596-2603, 2017.",
"title": ""
},
{
"docid": "468462cec7766673675a4a0becda5a72",
"text": "An innovative medical curriculum at the University of New South Wales (UNSW) has been developed through a highly collaborative process aimed at building faculty ownership and ongoing sustainability. The result is a novel capability-based program that features early clinical experience and small-group teaching, which offers students considerable flexibility and achieves a high degree of alignment between graduate outcomes, learning activities and assessments. Graduate capabilities that focus student learning on generic outcomes are described (critical evaluation, reflection, communication and teamwork) along with traditional outcomes in biomedical science, social aspects, clinical performance and ethics. Each two-year phase promotes a distinctive learning process to support and develop autonomous learning across six years. The approaches emphasize important adult education themes: student autonomy; learning from experience; collaborative learning; and adult teacher-learner relationships. Teaching in each phase draws on stages of the human life cycle to provide an explicit organization for the vertical integration of knowledge and skills. A learning environment that values the social nature of learning is fostered through the program's design and assessment system, which supports interdisciplinary integration and rewards students who exhibit self-direction. Assessment incorporates criterion referencing, interdisciplinary examinations, a balance between continuous and barrier assessments, peer feedback and performance assessments of clinical competence. A portfolio examination in each phase, in which students submit evidence of reflection and achievement for each capability, ensures overall alignment.",
"title": ""
},
{
"docid": "cfacd2faab806c1d7ea59708affe2ef6",
"text": "This paper proposes an isolated three-phase rectifier power-factor correction using two single-phase buck preregulators in continuous conduction mode. The use of the Scott transformer renders a simple and robust rectifier to operate with unity power factor. With only two active switches, the rectifier is able to generate symmetrical currents in the line and a regulated voltage output without any necessary synchronous switches. The proposed control technique with sinusoidal pulse width modulation uses a feedforward of the output inductor current and only one voltage control regulates the output. Complete simulation results under closed-loop operation are given and a 12-kW prototype has been implemented in the laboratory, which demonstrated to operate successfully with excellent performance, and thus can feasibly be implemented in higher power applications.",
"title": ""
}
] |
scidocsrr
|
1fd54736ccbc4324e0cf0b3a313ce530
|
Temporal Convolutional Networks for Action Segmentation and Detection
|
[
{
"docid": "f4cd7a70a257aea595bf4a26142127ff",
"text": "Recent state-of-the-art performance on human-body pose estimation has been achieved with Deep Convolutional Networks (ConvNets). Traditional ConvNet architectures include pooling and sub-sampling layers which reduce computational requirements, introduce invariance and prevent over-training. These benefits of pooling come at the cost of reduced localization accuracy. We introduce a novel architecture which includes an efficient `position refinement' model that is trained to estimate the joint offset location within a small region of the image. This refinement model is jointly trained in cascade with a state-of-the-art ConvNet model [21] to achieve improved accuracy in human joint location estimation. We show that the variance of our detector approaches the variance of human annotations on the FLIC [20] dataset and outperforms all existing approaches on the MPII-human-pose dataset [1].",
"title": ""
},
{
"docid": "19a1f9c9f3dec6f90d08479f0669d0dc",
"text": "We present a multi-stream bi-directional recurrent neural network for fine-grained action detection. Recently, twostream convolutional neural networks (CNNs) trained on stacked optical flow and image frames have been successful for action recognition in videos. Our system uses a tracking algorithm to locate a bounding box around the person, which provides a frame of reference for appearance and motion and also suppresses background noise that is not within the bounding box. We train two additional streams on motion and appearance cropped to the tracked bounding box, along with full-frame streams. Our motion streams use pixel trajectories of a frame as raw features, in which the displacement values corresponding to a moving scene point are at the same spatial position across several frames. To model long-term temporal dynamics within and between actions, the multi-stream CNN is followed by a bi-directional Long Short-Term Memory (LSTM) layer. We show that our bi-directional LSTM network utilizes about 8 seconds of the video sequence to predict an action label. We test on two action detection datasets: the MPII Cooking 2 Dataset, and a new MERL Shopping Dataset that we introduce and make available to the community with this paper. The results demonstrate that our method significantly outperforms state-of-the-art action detection methods on both datasets.",
"title": ""
},
{
"docid": "15ce175cc7aa263ded19c0ef344d9a61",
"text": "This work explores conditional image generation with a new image density model based on the PixelCNN architecture. The model can be conditioned on any vector, including descriptive labels or tags, or latent embeddings created by other networks. When conditioned on class labels from the ImageNet database, the model is able to generate diverse, realistic scenes representing distinct animals, objects, landscapes and structures. When conditioned on an embedding produced by a convolutional network given a single image of an unseen face, it generates a variety of new portraits of the same person with different facial expressions, poses and lighting conditions. We also show that conditional PixelCNN can serve as a powerful decoder in an image autoencoder. Additionally, the gated convolutional layers in the proposed model improve the log-likelihood of PixelCNN to match the state-ofthe-art performance of PixelRNN on ImageNet, with greatly reduced computational cost.",
"title": ""
},
{
"docid": "9ad1acc78312d66f3e37dfb39f4692df",
"text": "This work targets human action recognition in video. While recent methods typically represent actions by statistics of local video features, here we argue for the importance of a representation derived from human pose. To this end we propose a new Pose-based Convolutional Neural Network descriptor (P-CNN) for action recognition. The descriptor aggregates motion and appearance information along tracks of human body parts. We investigate different schemes of temporal aggregation and experiment with P-CNN features obtained both for automatically estimated and manually annotated human poses. We evaluate our method on the recent and challenging JHMDB and MPII Cooking datasets. For both datasets our method shows consistent improvement over the state of the art.",
"title": ""
}
] |
[
{
"docid": "a10804fe3d5648a014a164c92ffa0c25",
"text": "OBJECTIVES\nThe aim of this study was to compare the long-term outcomes of implants placed in patients treated for periodontitis periodontally compromised patients (PCP) and in periodontally healthy patients (PHP) in relation to adhesion to supportive periodontal therapy (SPT).\n\n\nMATERIAL AND METHODS\nOne hundred and twelve partially edentulous patients were consecutively enrolled in private specialist practice and divided into three groups according to their initial periodontal condition: PHP, moderate PCP and severe PCP. Perio and implant treatment was carried out as needed. Solid screws (S), hollow screws (HS) and hollow cylinders (HC) were installed to support fixed prostheses, after successful completion of initial periodontal therapy (full-mouth plaque score <25% and full-mouth bleeding score <25%). At the end of treatment, patients were asked to follow an individualized SPT program. At 10 years, clinical measures and radiographic bone changes were recorded by two calibrated operators, blinded to the initial patient classification.\n\n\nRESULTS\nEleven patients were lost to follow-up. During the period of observation, 18 implants were removed because of biological complications. The implant survival rate was 96.6%, 92.8% and 90% for all implants and 98%, 94.2% and 90% for S-implants only, respectively, for PHP, moderate PCP and severe PCP. The mean bone loss was 0.75 (+/- 0.88) mm in PHP, 1.14 (+/- 1.11) mm in moderate PCP and 0.98 (+/- 1.22) mm in severe PCP, without any statistically significant difference. The percentage of sites, with bone loss > or =3 mm, was, respectively, 4.7% for PHP, 11.2% for moderate PCP and 15.1% for severe PCP, with a statistically significant difference between PHP and severe PCP (P<0.05). Lack of adhesion to SPT was correlated with a higher incidence of bone loss and implant loss.\n\n\nCONCLUSION\nPatients with a history of periodontitis presented a lower survival rate and a statistically significantly higher number of sites with peri-implant bone loss. Furthermore, PCP, who did not completely adhere to the SPT, were found to present a higher implant failure rate. This underlines the value of the SPT in enhancing the long-term outcomes of implant therapy, particularly in subjects affected by periodontitis, in order to control reinfection and limit biological complications.",
"title": ""
},
{
"docid": "12ee117f58c5bd5b6794de581bfcacdb",
"text": "The visualization of complex network traffic involving a large number of communication devices is a common yet challenging task. Traditional layout methods create the network graph with overwhelming visual clutter, which hinders the network understanding and traffic analysis tasks. The existing graph simplification algorithms (e.g. community-based clustering) can effectively reduce the visual complexity, but lead to less meaningful traffic representations. In this paper, we introduce a new method to the traffic monitoring and anomaly analysis of large networks, namely Structural Equivalence Grouping (SEG). Based on the intrinsic nature of the computer network traffic, SEG condenses the graph by more than 20 times while preserving the critical connectivity information. Computationally, SEG has a linear time complexity and supports undirected, directed and weighted traffic graphs up to a million nodes. We have built a Network Security and Anomaly Visualization (NSAV) tool based on SEG and conducted case studies in several real-world scenarios to show the effectiveness of our technique.",
"title": ""
},
{
"docid": "bc6bc98e683fe4bbd7978d59ecd91a7a",
"text": "The explosion of enhanced applications such as live video streaming, video gaming and Virtual Reality calls for efforts to optimize transport protocols to manage the increasing amount of data traffic on future 5G networks. Through bandwidth aggregation over multiple paths, the Multi-Path Transmission Control Protocol (MPTCP) can enhance the performance of network applications. MPTCP can split a large multimedia flow into subflows and apply a congestion control mechanism on each subflow. Segment Routing (SR), a promising source routing approach, has emerged to provide advanced packet forwarding over 5G networks. In this paper, we explore the utilization of MPTCP and SR in SDN-based networks to improve network resources utilization and end- user's QoE for delivering multimedia services over 5G networks. We propose a novel QoE-aware, SDN- based MPTCP/SR approach for service delivery. In order to demonstrate the feasibility of our approach, we implemented an intelligent QoE- centric Multipath Routing Algorithm (QoMRA) on an SDN source routing platform using Mininet and POX controller. We carried out experiments on Dynamic Adaptive video Steaming over HTTP (DASH) applications over various network conditions. The preliminary results show that, our QoE-aware SDN- based MPTCP/SR scheme performs better compared to the conventional TCP approach in terms of throughput, link utilization and the end-user's QoE.",
"title": ""
},
{
"docid": "74381f9602374af5ad0775a69163d1b9",
"text": "This paper discusses some of the basic formulation issues and solution procedures for solving oneand twodimensional cutting stock problems. Linear programming, sequential heuristic and hybrid solution procedures are described. For two-dimensional cutting stock problems with rectangular shapes, we also propose an approach for solving large problems with limits on the number of times an ordered size may appear in a pattern.",
"title": ""
},
{
"docid": "3c41bdaeaaa40481c8e68ad00426214d",
"text": "Image captioning is an important task, applicable to virtual assistants, editing tools, image indexing, and support of the disabled. In recent years significant progress has been made in image captioning, using Recurrent Neural Networks powered by long-short-term-memory (LSTM) units. Despite mitigating the vanishing gradient problem, and despite their compelling ability to memorize dependencies, LSTM units are complex and inherently sequential across time. To address this issue, recent work has shown benefits of convolutional networks for machine translation and conditional image generation [9, 34, 35]. Inspired by their success, in this paper, we develop a convolutional image captioning technique. We demonstrate its efficacy on the challenging MSCOCO dataset and demonstrate performance on par with the LSTM baseline [16], while having a faster training time per number of parameters. We also perform a detailed analysis, providing compelling reasons in favor of convolutional language generation approaches.",
"title": ""
},
{
"docid": "c09256d7daaff6e2fc369df0857a3829",
"text": "Violence is a serious problems for cities like Chicago and has been exacerbated by the use of social media by gang-involved youths for taunting rival gangs. We present a corpus of tweets from a young and powerful female gang member and her communicators, which we have annotated with discourse intention, using a deep read to understand how and what triggered conversations to escalate into aggression. We use this corpus to develop a part-of-speech tagger and phrase table for the variant of English that is used, as well as a classifier for identifying tweets that express grieving and aggression.",
"title": ""
},
{
"docid": "f49364d463c3225e52e22c8c043e9590",
"text": "Palpation is a physical examination technique where objects, e.g., organs or body parts, are touched with fingers to determine their size, shape, consistency and location. Many medical procedures utilize palpation as a supplementary interaction technique and it can be therefore considered as an essential basic method. However, palpation is mostly neglected in medical training simulators, with the exception of very specialized simulators that solely focus on palpation, e.g., for manual cancer detection. In this article we propose a novel approach to enable haptic palpation interaction for virtual reality-based medical simulators. The main contribution is an extensive user study conducted with a large group of medical experts. To provide a plausible simulation framework for this user study, we contribute a novel and detailed interaction algorithm for palpation with tissue dragging, which utilizes a multi-object force algorithm to support multiple layers of anatomy and a pulse force algorithm for simulation of an arterial pulse. Furthermore, we propose a modification for an off-the-shelf haptic device by adding a lightweight palpation pad to support a more realistic finger grip configuration for palpation tasks. The user study itself has been conducted on a medical training simulator prototype with a specific procedure from regional anesthesia, which strongly depends on palpation. The prototype utilizes a co-rotational finite-element approach for soft tissue simulation and provides bimanual interaction by combining the aforementioned techniques with needle insertion for the other hand. The results of the user study suggest reasonable face validity of the simulator prototype and in particular validate medical plausibility of the proposed palpation interaction algorithm.",
"title": ""
},
{
"docid": "9fa8133dcb3baef047ee887fea1ed5a3",
"text": "In this paper, we present an effective hierarchical shot classification scheme for broadcast soccer video. We first partition a video into replay and non-replay shots with replay logo detection. Then, non-replay shots are further classified into Long, Medium, Close-up or Out-field types with color and texture features based on a decision tree. We tested the method on real broadcast FIFA soccer videos, and the experimental results demonstrate its effectiveness..",
"title": ""
},
{
"docid": "45c917e024842ff7e087e4c46a05be25",
"text": "A centrifugal pump that employs a bearingless motor with 5-axis active control has been developed. In this paper, a novel bearingless canned motor pump is proposed, and differences from the conventional structure are explained. A key difference between the proposed and conventional bearingless canned motor pumps is the use of passive magnetic bearings; in the proposed pump, the amount of permanent magnets (PMs) is reduced by 30% and the length of the rotor is shortened. Despite the decrease in the total volume of PMs, the proposed structure can generate large suspension forces and high torque compared with the conventional design by the use of the passive magnetic bearings. In addition, levitation and rotation experiments demonstrated that the proposed motor is suitable for use as a bearingless canned motor pump.",
"title": ""
},
{
"docid": "b63e88701018a80a7815ee43b62e90fd",
"text": "Educational data mining and learning analytics promise better understanding of student behavior and knowledge, as well as new information on the tacit factors that contribute to student actions. This knowledge can be used to inform decisions related to course and tool design and pedagogy, and to further engage students and guide those at risk of failure. This working group report provides an overview of the body of knowledge regarding the use of educational data mining and learning analytics focused on the teaching and learning of programming. In a literature survey on mining students' programming processes for 2005-2015, we observe a significant increase in work related to the field. However, the majority of the studies focus on simplistic metric analysis and are conducted within a single institution and a single course. This indicates the existence of further avenues of research and a critical need for validation and replication to better understand the various contributing factors and the reasons why certain results occur. We introduce a novel taxonomy to analyse replicating studies and discuss the importance of replicating and reproducing previous work. We describe what is the state of the art in collecting and sharing programming data. To better understand the challenges involved in replicating or reproducing existing studies, we report our experiences from three case studies using programming data. Finally, we present a discussion of future directions for the education and research community.",
"title": ""
},
{
"docid": "e5b05292bee316cbc5cb6da35bd615a2",
"text": "Blockchain technology has emerged as a primary enabler for verification-driven transactions between parties that do not have complete trust among themselves. Bitcoin uses this technology to provide a provenance-driven verifiable ledger that is based on consensus. Nevertheless, the use of blockchain as a transaction service in non-cryptocurrency applications, for example, business networks, is at a very nascent stage. While the blockchain supports transactional provenance, the datamanagement community and other scientific and industrial communities are assessing how blockchain can be used to enable certain key capabilities for business applications. We have reviewed a number of proof of concepts and early adoptions of blockchain solutions that we have been involved spanning diverse use cases to draw common data life cycle, persistence as well as analytics patterns used in real-world applications with the ultimate aim to identify new frontier of exciting research in blockchain data management and analytics. In this paper, we discuss several open topics that researchers could increase focus on: (1) leverage existing capabilities of mature data and information systems, (2) enhance data security and privacy assurances, (3) enable analytics services on blockchain as well as across off-chain data, and (4) make blockchain-based systems active-oriented and intelligent.",
"title": ""
},
{
"docid": "9a356447204de409cde40726c8777c50",
"text": "A fundamental limitation of Bitcoin and its variants is that the movement of coin between addresses can be observed by examining the public block chain. This record enables adversaries to link addresses to individuals, and to identify multiple addresses as belonging to a single participant. Users can try to hide this information by mixing, where a participant exchanges the funds in an address coin-for-coin with another participant and address. In this paper, we describe the weaknesses of extant mixing protocols, and analyze their vulnerability to Sybil-based denial-of-service and inference attacks. As a solution, we propose Xim, a two-party mixing protocol that is compatible with Bitcoin and related virtual currencies. It is the first decentralized protocol to simultaneously address Sybil attackers, denial-of-service attacks, and timing-based inference attacks. Xim is a multi-round protocol with tunably high success rates. It includes a decentralized system for anonymously finding mix partners based on ads placed in the block chain. No outside party can confirm or find evidence of participants that pair up. We show that Xim's design increases attacker costs linearly with the total number of participants, and that its probabilistic approach to mixing mitigates Sybil-based denial-of-service attack effects. We evaluate protocol delays based on our measurements of the Bitcoin network.",
"title": ""
},
{
"docid": "24269f55442c3001f86ac8f69d5bbba0",
"text": "Software Defined Networking (SDN) and cloud automation enable a large number of diverse parties (network operators, application admins, tenants/end-users) and control programs (SDN Apps, network services) to generate network policies independently and dynamically. Yet existing policy abstractions and frameworks do not support natural expression and automatic composition of high-level policies from diverse sources. We tackle the open problem of automatic, correct and fast composition of multiple independently specified network policies. We first develop a high-level Policy Graph Abstraction (PGA) that allows network policies to be expressed simply and independently, and leverage the graph structure to detect and resolve policy conflicts efficiently. Besides supporting ACL policies, PGA also models and composes service chaining policies, i.e., the sequence of middleboxes to be traversed, by merging multiple service chain requirements into conflict-free composed chains. Our system validation using a large enterprise network policy dataset demonstrates practical composition times even for very large inputs, with only sub-millisecond runtime latencies.",
"title": ""
},
{
"docid": "1943e91837f854a6e8e797a5297abed3",
"text": "Counterfactual Regret Minimization and variants (e.g. Public Chance Sampling CFR and Pure CFR) have been known as the best approaches for creating approximate Nash equilibrium solutions for imperfect information games such as poker. This paper introduces CFR, a new algorithm that typically outperforms the previously known algorithms by an order of magnitude or more in terms of computation time while also potentially requiring less memory.",
"title": ""
},
{
"docid": "5f1abca1f9c3244b4f1655e34a7a9765",
"text": "This paper conceptualizes and develops valid measurements of the key dimensions of information systems development project (ISDP) complexity. A conceptual framework is proposed to define four components of ISDP complexity: structural organizational complexity, structural IT complexity, dynamic organizational complexity, and dynamic IT complexity. Measures of ISDP complexity are generated based on literature review, field interviews, focus group discussions and two pilot tests with 76 IS managers. The measures are then tested using both exploratory and confirmatory data analyses with survey responses from managers of 541 ISDPs. Results from both the exploratory and confirmatory analyses support the fourcomponent conceptualization of ISDP complexity. The final 20-item measurements of ISDP complexity are shown to adequately satisfy the criteria for unidimensionality, convergent validity, discriminant validity, reliability, factorial invariance across different types of ISDPs, and nomological validity. Implications of the study results to theory development and practice as well as future research directions are discussed.",
"title": ""
},
{
"docid": "e1dcd8edac8028e958516cbc31ab15ad",
"text": "New air-to-ground wireless datalinks are needed to supplement existing civil aviation technologies. The 960 – 1164 MHz part of the IEEE L band has been identified as a candidate spectrum. EUROCONTROL — the European organization for the Safety of Air Navigation, has funded two parallel projects and developed two proposals called L-DACS1 and L-DACS2. Although, there is a significant amount of literature available on each of the two technologies from the two teams that designed the respective proposals, there is very little independent comparison of the two proposals. The goal of this paper is to provide this comparison. We compare the two proposals in terms of their scalability, spectral efficiency, and interference resistance. Both the technologies have to co-exist with several other aeronautical technologies that use the same L-band. 12",
"title": ""
},
{
"docid": "2b197191cce0bf1fe83a3d40fcec582f",
"text": "BACKGROUND\nPatient delay in seeking medical attention could be a contributing cause in a substantial number of breast cancer deaths. The purpose of this study was to identify factors associated with long delay in order to identify specific groups in need of more intensive education regarding the signs of breast cancer and the importance of early treatment.\n\n\nMETHODS\nA study of 162 women with potential breast cancer symptoms was done in the area of Worcester, MA. Two methods of analysis were used. A case-control approach was used where the outcome variable was categorized into two groups of longer and shorter delay, and a survival analysis was used where the outcome variable was treated as a continuous variable.\n\n\nRESULTS\nIt was found that women with increasing symptoms were more likely to delay than women whose symptoms either decreased or remained the same. Women performing monthly breast self-examination and/or receiving at least bi-annual mammograms were much less likely to delay than women who performed breast self-examination or received mammograms less often. It was also found that women using family practitioners were less likely to delay than women using other types of physicians.\n\n\nCONCLUSIONS\nPatient delay continues to be a major problem in breast cancer, as 16% of the women here delayed at least two months before seeking help. This study presented a new and improved method for defining patient delay, which should be explored further in larger studies.",
"title": ""
},
{
"docid": "729a68e8a81173035240052941498e86",
"text": "A fast approximate nearest neighbor search algorithm for the (binary) Hamming space is proposed. The proposed Error Weighted Hashing (EWH) algorithm is up to 20 times faster than the popular locality sensitive hashing (LSH) algorithm and works well even for large nearest neighbor distances where LSH fails. EWH significantly reduces the number of candidate nearest neighbors by weighing them based on the difference between their hash vectors. EWH can be used for multimedia retrieval and copy detection systems that are based on binary fingerprinting. On a fingerprint database with more than 1,000 videos, for a specific detection accuracy, we demonstrate that EWH is more than 10 times faster than LSH. For the same retrieval time, we show that EWH has a significantly better detection accuracy with a 15 times lower error rate.",
"title": ""
},
{
"docid": "efe279fbc7307bc6a191ebb397b01823",
"text": "Real-time traffic sign detection and recognition has been receiving increasingly more attention in recent years due to the popularity of driver-assistance systems and autonomous vehicles. This paper proposes an accurate and efficient traffic sign detection technique by exploring AdaBoost and support vector regression (SVR) for discriminative detector learning. Different from the reported traffic sign detection techniques, a novel saliency estimation approach is first proposed, where a new saliency model is built based on the traffic sign-specific color, shape, and spatial information. By incorporating the saliency information, enhanced feature pyramids are built to learn an AdaBoost model that detects a set of traffic sign candidates from images. A novel iterative codeword selection algorithm is then designed to generate a discriminative codebook for the representation of sign candidates, as detected by the AdaBoost, and an SVR model is learned to identify the real traffic signs from the detected sign candidates. Experiments on three public data sets show that the proposed traffic sign detection technique is robust and obtains superior accuracy and efficiency.",
"title": ""
},
{
"docid": "3179cc075f08314f62578d06ab15bf67",
"text": "Quite some research has been done on reinforcement learning in continuous environments, but the research on problems where the actions can also be chosen from a continuous space is much more limited. We present a new class of algorithms named continuous actor critic learning automaton (CACLA) that can handle continuous states and actions. The resulting algorithm is straightforward to implement. An experimental comparison is made between this algorithm and other algorithms that can handle continuous action spaces. These experiments show that CACLA performs much better than the other algorithms, especially when it is combined with a Gaussian exploration method",
"title": ""
}
] |
scidocsrr
|
d2ba37d6b8bc224e4449f2995689a633
|
High-Performance Design of Hadoop RPC with RDMA over InfiniBand
|
[
{
"docid": "f10660b168700e38e24110a575b5aafa",
"text": "While the use of MapReduce systems (such as Hadoop) for large scale data analysis has been widely recognized and studied, we have recently seen an explosion in the number of systems developed for cloud data serving. These newer systems address \"cloud OLTP\" applications, though they typically do not support ACID transactions. Examples of systems proposed for cloud serving use include BigTable, PNUTS, Cassandra, HBase, Azure, CouchDB, SimpleDB, Voldemort, and many others. Further, they are being applied to a diverse range of applications that differ considerably from traditional (e.g., TPC-C like) serving workloads. The number of emerging cloud serving systems and the wide range of proposed applications, coupled with a lack of apples-to-apples performance comparisons, makes it difficult to understand the tradeoffs between systems and the workloads for which they are suited. We present the \"Yahoo! Cloud Serving Benchmark\" (YCSB) framework, with the goal of facilitating performance comparisons of the new generation of cloud data serving systems. We define a core set of benchmarks and report results for four widely used systems: Cassandra, HBase, Yahoo!'s PNUTS, and a simple sharded MySQL implementation. We also hope to foster the development of additional cloud benchmark suites that represent other classes of applications by making our benchmark tool available via open source. In this regard, a key feature of the YCSB framework/tool is that it is extensible--it supports easy definition of new workloads, in addition to making it easy to benchmark new systems.",
"title": ""
}
] |
[
{
"docid": "b02cfc336a6e1636dbcba46d4ee762e8",
"text": "Peter C. Verhoef a,∗, Katherine N. Lemon b, A. Parasuraman c, Anne Roggeveen d, Michael Tsiros c, Leonard A. Schlesinger d a University of Groningen, Faculty of Economics and Business, P.O. Box 800, NL-9700 AV Groningen, The Netherlands b Boston College, Carroll School of Management, Fulton Hall 510, 140 Commonwealth Avenue, Chestnut Hill, MA 02467 United States c University of Miami, School of Business Administration, P.O. Box 24814, Coral Gables, FL 33124, United States d Babson College, 231 Forest Street, Wellesley, Massachusetts, United States",
"title": ""
},
{
"docid": "6d589aaae8107bf6b71c0f06f7a49a28",
"text": "1. INTRODUCTION The explosion of digital connectivity, the significant improvements in communication and information technologies and the enforced global competition are revolutionizing the way business is performed and the way organizations compete. A new, complex and rapidly changing economic order has emerged based on disruptive innovation, discontinuities, abrupt and seditious change. In this new landscape, knowledge constitutes the most important factor, while learning, which emerges through cooperation, together with the increased reliability and trust, is the most important process (Lundvall and Johnson, 1994). The competitive survival and ongoing sustenance of an organisation primarily depend on its ability to redefine and adopt continuously goals, purposes and its way of doing things (Malhotra, 2001). These trends suggest that private and public organizations have to reinvent themselves through 'continuous non-linear innovation' in order to sustain themselves and achieve strategic competitive advantage. The extant literature highlights the great potential of ICT tools for operational efficiency, cost reduction, quality of services, convenience, innovation and learning in private and public sectors. However, scholarly investigations have focused primarily on the effects and outcomes of ICTs (Information & Communication Technology) for the private sector. The public sector has been sidelined because it tends to lag behind in the process of technology adoption and business reinvention. Only recently has the public sector come to recognize the potential importance of ICT and e-business models as a means of improving the quality and responsiveness of the services they provide to their citizens, expanding the reach and accessibility of their services and public infrastructure and allowing citizens to experience a faster and more transparent form of access to government services. The initiatives of government agencies and departments to use ICT tools and applications, Internet and mobile devices to support good governance, strengthen existing relationships and build new partnerships within civil society, are known as eGovernment initiatives. As with e-commerce, eGovernment represents the introduction of a great wave of technological innovation as well as government reinvention. It represents a tremendous impetus to move forward in the 21 st century with higher quality, cost effective government services and a better relationship between citizens and government (Fang, 2002). Many government agencies in developed countries have taken progressive steps toward the web and ICT use, adding coherence to all local activities on the Internet, widening local access and skills, opening up interactive services for local debates, and increasing the participation of citizens on promotion and management …",
"title": ""
},
{
"docid": "fe3003714cf89aa7fcc5490eda25f933",
"text": "While convolutional neural networks (CNN) have been excellent for object recognition, the greater spatial variability in scene images typically meant that the standard full-image CNN features are suboptimal for scene classification. In this paper, we investigate a framework allowing greater spatial flexibility, in which the Fisher vector (FV) encoded distribution of local CNN features, obtained from a multitude of region proposals per image, is considered instead. The CNN features are computed from an augmented pixel-wise representation comprising multiple modalities of RGB, HHA and surface normals, as extracted from RGB-D data. More significantly, we make two postulates: (1) component sparsity - that only a small variety of region proposals and their corresponding FV GMM components contribute to scene discriminability, and (2) modal non-sparsity - within these discriminative components, all modalities have important contribution. In our framework, these are implemented through regularization terms applying group lasso to GMM components and exclusive group lasso across modalities. By learning and combining regressors for both proposal-based FV features and global CNN features, we were able to achieve state-of-the-art scene classification performance on the SUNRGBD Dataset and NYU Depth Dataset V2.",
"title": ""
},
{
"docid": "aac326acf267f3299f03b9b426c8c9ac",
"text": "Recently, Internet of Things (IoT) and cloud computing (CC) have been widely studied and applied in many fields, as they can provide a new method for intelligent perception and connection from M2M (including man-to-man, man-to-machine, and machine-to-machine), and on-demand use and efficient sharing of resources, respectively. In order to realize the full sharing, free circulation, on-demand use, and optimal allocation of various manufacturing resources and capabilities, the applications of the technologies of IoT and CC in manufacturing are investigated in this paper first. Then, a CC- and IoT-based cloud manufacturing (CMfg) service system (i.e., CCIoT-CMfg) and its architecture are proposed, and the relationship among CMfg, IoT, and CC is analyzed. The technology system for realizing the CCIoT-CMfg is established. Finally, the advantages, challenges, and future works for the application and implementation of CCIoT-CMfg are discussed.",
"title": ""
},
{
"docid": "612330c4bfbfddd07251ee0a07912526",
"text": "Radiofrequency-induced calf muscle volume reduction is a commonly used method for cosmetic shaping of the lower leg contour. Functional disabilities associated with the use of the radiofrequency (RF) technique, with this procedure targeting the normal gastrocnemius muscle, still have not been reported. However, the authors have experienced several severe ankle equinus cases after RF-induced calf muscle volume reduction. This study retrospectively reviewed 19 calves of 12 patients who showed more than 20° of fixed equinus even though they underwent physical therapy for more than 6 months. All were women with a mean age of 32 years (range, 23–41 years). Of the 12 patients, 7 were bilateral. All the patients received surgical Achilles lengthening for deformity correction. To evaluate the clinical outcome, serial ankle dorsiflexion was measured, and the American Orthopedic Foot and Ankle Society (AOFAS) score was evaluated at the latest follow-up visit. The presence of soleus muscle involvement and an ongoing lesion that might affect the postoperative results of preoperative magnetic resonance imaging (MRI) were investigated. Statistical analysis was conducted to analyze preoperative factors strongly associated with patient clinical outcomes. The mean follow-up period after surgery was 18.6 months (range, 12–28 months). At the latest follow-up visit, the mean ankle dorsiflexion was 9° (range, 0–20°), and the mean AOFAS score was 87.7 (range, 80–98). On preoperative MRI, 13 calves showed soleus muscle involvement. Seven calves had ongoing lesions. Five of the ongoing lesions were muscle edema, and the remaining two lesions were cystic mass lesions resulting from muscle necrosis. Ankle dorsiflexion and AOFAS scores at the latest follow-up evaluation were insufficient in the ongoing lesions group. Although RF-induced calf muscle reduction is believed to be a safer method than conventional procedures, careful handling is needed because of the side effects that may occur in some instances. The slow progression of fibrosis could be observed after RF-induced calf reduction. Therefore, long-term follow-up evaluation is needed after the procedure. Therapeutic case series.",
"title": ""
},
{
"docid": "c46edb8a67c10ba5819a5eeeb0e62905",
"text": "One of the most challenging projects in information systems is extracting information from unstructured texts, including medical document classification. I am developing a classification algorithm that classifies a medical document by analyzing its content and categorizing it under predefined topics from the Medical Subject Headings (MeSH). I collected a corpus of 50 full-text journal articles (N=50) from MEDLINE, which were already indexed by experts based on MeSH. Using natural language processing (NLP), my algorithm classifies the collected articles under MeSH subject headings. I evaluated the algorithm's outcome by measuring its precision and recall of resulting subject headings from the algorithm, comparing results to the actual documents' subject headings. The algorithm classified the articles correctly under 45% to 60% of the actual subject headings and got 40% to 53% of the total subject headings correct. This holds promising solutions for the global health arena to index and classify medical documents expeditiously.",
"title": ""
},
{
"docid": "4bcb62b8ca73fe841908e24c5c454a89",
"text": "Neural network based models have achieved impressive results on various specific tasks. However, in previous works, most models are learned separately based on single-task supervised objectives, which often suffer from insufficient training data. In this paper, we propose two deep architectures which can be trained jointly on multiple related tasks. More specifically, we augment neural model with an external memory, which is shared by several tasks. Experiments on two groups of text classification tasks show that our proposed architectures can improve the performance of a task with the help of other related tasks.",
"title": ""
},
{
"docid": "52e492ff5e057a8268fd67eb515514fe",
"text": "We present a long-range passive (battery-free) radio frequency identification (RFID) and distributed sensing system using a single wire transmission line (SWTL) as the communication channel. A SWTL exploits guided surface wave propagation along a single conductor, which can be formed from existing infrastructure, such as power lines, pipes, or steel cables. Guided propagation along a SWTL has far lower losses than a comparable over-the-air (OTA) communication link; so much longer read distances can be achieved compared with the conventional OTA RFID system. In a laboratory-scale experiment with an ISO18000–6C (EPC Gen 2) passive tag, we demonstrate an RFID system using an 8 mm diameter, 5.2 m long SWTL. This SWTL has 30 dB lower propagation loss than a standard OTA RFID system at the same read range. We further demonstrate that the SWTL can tolerate extreme temperatures far beyond the capabilities of coaxial cable, by heating an operating SWTL conductor with a propane torch having a temperature of nearly 2000 °C. Extrapolation from the measured results suggest that a SWTL-based RFID system is capable of read ranges of over 70 m assuming a reader output power of +32.5 dBm and a tag power-up threshold of −7 dBm.",
"title": ""
},
{
"docid": "501760c68ed75ed288749e9b4068234f",
"text": "This research investigated impulse buying as resulting from the depletion of a common—but limited—resource that governs self-control. In three investigations, participants’ self-regulatory resources were depleted or not; later, impulsive spending responses were measured. Participants whose resources were depleted, relative to participants whose resources were not depleted, felt stronger urges to buy, were willing to spend more, and actually did spend more money in unanticipated buying situations. Participants having depleted resources reported being influenced equally by affective and cognitive factors and purchased products that were high on each factor at equal rates. Hence, self-regulatory resource availability predicts whether people can resist impulse buying temptations.",
"title": ""
},
{
"docid": "2736c48061df67aab12b7cb303090267",
"text": "The popularity of the iris biometric has grown considerably over the past two to three years. Most research has been focused on the development of new iris processing and recognition algorithms for frontal view iris images. However, a few challenging directions in iris research have been identified, including processing of a nonideal iris and iris at a distance. In this paper, we describe two nonideal iris recognition systems and analyze their performance. The word ldquononidealrdquo is used in the sense of compensating for off-angle occluded iris images. The system is designed to process nonideal iris images in two steps: 1) compensation for off-angle gaze direction and 2) processing and encoding of the rotated iris image. Two approaches are presented to account for angular variations in the iris images. In the first approach, we use Daugman's integrodifferential operator as an objective function to estimate the gaze direction. After the angle is estimated, the off-angle iris image undergoes geometric transformations involving the estimated angle and is further processed as if it were a frontal view image. The encoding technique developed for a frontal image is based on the application of the global independent component analysis. The second approach uses an angular deformation calibration model. The angular deformations are modeled, and calibration parameters are calculated. The proposed method consists of a closed-form solution, followed by an iterative optimization procedure. The images are projected on the plane closest to the base calibrated plane. Biorthogonal wavelets are used for encoding to perform iris recognition. We use a special dataset of the off-angle iris images to quantify the performance of the designed systems. A series of receiver operating characteristics demonstrate various effects on the performance of the nonideal-iris-based recognition system.",
"title": ""
},
{
"docid": "40703fa491838fca50361e2733e2988f",
"text": "The HIV-1 core consists of capsid proteins (CA) surrounding viral genomic RNA. After virus-cell fusion, the core enters the cytoplasm and the capsid shell is lost through uncoating. CA loss precedes nuclear import and HIV integration into the host genome, but the timing and location of uncoating remain unclear. By visualizing single HIV-1 infection, we find that CA is required for core docking at the nuclear envelope (NE), whereas early uncoating in the cytoplasm promotes proteasomal degradation of viral complexes. Only docked cores exhibiting accelerated loss of CA at the NE enter the nucleus. Interestingly, a CA mutation (N74D) altering virus engagement of host factors involved in nuclear transport does not alter the uncoating site at the NE but reduces the nuclear penetration depth. Thus, CA protects HIV-1 complexes from degradation, mediates docking at the nuclear pore before uncoating, and determines the depth of nuclear penetration en route to integration.",
"title": ""
},
{
"docid": "4cc71db87682a96ddee09e49a861142f",
"text": "BACKGROUND\nReadiness is an integral and preliminary step in the successful implementation of telehealth services into existing health systems within rural communities.\n\n\nMETHODS AND MATERIALS\nThis paper details and critiques published international peer-reviewed studies that have focused on assessing telehealth readiness for rural and remote health. Background specific to readiness and change theories is provided, followed by a critique of identified telehealth readiness models, including a commentary on their readiness assessment tools.\n\n\nRESULTS\nFour current readiness models resulted from the search process. The four models varied across settings, such as rural outpatient practices, hospice programs, rural communities, as well as government agencies, national associations, and organizations. All models provided frameworks for readiness tools. Two specifically provided a mechanism by which communities could be categorized by their level of telehealth readiness.\n\n\nDISCUSSION\nCommon themes across models included: an appreciation of practice context, strong leadership, and a perceived need to improve practice. Broad dissemination of these telehealth readiness models and tools is necessary to promote awareness and assessment of readiness. This will significantly aid organizations to facilitate the implementation of telehealth.",
"title": ""
},
{
"docid": "472519682e5b086732b31e558ec7934d",
"text": "As networks become ubiquitous in people's lives, users depend on networks a lot for sufficient communication and convenient information access. However, networks suffer from security issues. Network security becomes a challenging topic since numerous new network attacks have appeared increasingly sophisticated and caused vast loss to network resources. Game theoretic approaches have been introduced as a useful tool to handle those tricky network attacks. In this paper, we review the existing game-theory based solutions for network security problems, classifying their application scenarios under two categories, attack-defense analysis and security measurement. Moreover, we present a brief view of the game models in those solutions and summarize them into two categories, cooperative game models and non-cooperative game models with the latter category consisting of subcategories. In addition to the introduction to the state of the art, we discuss the limitations of those game theoretic approaches and propose future research directions.",
"title": ""
},
{
"docid": "ff6a487e49d1fed033ad082ad7cd0524",
"text": "We present a novel technique for shadow removal based on an information theoretic approach to intrinsic image analysis. Our key observation is that any illumination change in the scene tends to increase the entropy of observed texture intensities. Similarly, the presence of texture in the scene increases the entropy of the illumination function. Consequently, we formulate the separation of an image into texture and illumination components as minimization of entropies of each component. We employ a non-parametric kernel-based quadratic entropy formulation, and present an efficient multi-scale iterative optimization algorithm for minimization of the resulting energy functional. Our technique may be employed either fully automatically, using a proposed learning based method for automatic initialization, or alternatively with small amount of user interaction. As we demonstrate, our method is particularly suitable for aerial images, which consist of either distinctive texture patterns, e.g. building facades, or soft shadows with large diffuse regions, e.g. cloud shadows.",
"title": ""
},
{
"docid": "5508603a802abb9ab0203412b396b7bc",
"text": "We present an optimal algorithm for informative path planning (IPP), using a branch and bound method inspired by feature selection algorithms. The algorithm uses the monotonicity of the objective function to give an objective function-dependent speedup versus brute force search. We present results which suggest that when maximizing variance reduction in a Gaussian process model, the speedup is significant.",
"title": ""
},
{
"docid": "6fa191434ae343d4d645587b5a240b1f",
"text": "An integrated framework for density-based cluster analysis, outlier detection, and data visualization is introduced in this article. The main module consists of an algorithm to compute hierarchical estimates of the level sets of a density, following Hartigan’s classic model of density-contour clusters and trees. Such an algorithm generalizes and improves existing density-based clustering techniques with respect to different aspects. It provides as a result a complete clustering hierarchy composed of all possible density-based clusters following the nonparametric model adopted, for an infinite range of density thresholds. The resulting hierarchy can be easily processed so as to provide multiple ways for data visualization and exploration. It can also be further postprocessed so that: (i) a normalized score of “outlierness” can be assigned to each data object, which unifies both the global and local perspectives of outliers into a single definition; and (ii) a “flat” (i.e., nonhierarchical) clustering solution composed of clusters extracted from local cuts through the cluster tree (possibly corresponding to different density thresholds) can be obtained, either in an unsupervised or in a semisupervised way. In the unsupervised scenario, the algorithm corresponding to this postprocessing module provides a global, optimal solution to the formal problem of maximizing the overall stability of the extracted clusters. If partially labeled objects or instance-level constraints are provided by the user, the algorithm can solve the problem by considering both constraints violations/satisfactions and cluster stability criteria. An asymptotic complexity analysis, both in terms of running time and memory space, is described. Experiments are reported that involve a variety of synthetic and real datasets, including comparisons with state-of-the-art, density-based clustering and (global and local) outlier detection methods.",
"title": ""
},
{
"docid": "2319e5f20b03abe165b7715e9b69bac5",
"text": "Cloud networking imposes new requirements in terms of connection resiliency and throughput among virtual machines, hypervisors and users. A promising direction is to exploit multipath communications, yet existing protocols have a so limited scope that performance improvements are often unreachable. Generally, multipathing adds signaling overhead and in certain conditions may in fact decrease throughput due to packet arrival disorder. At the transport layer, the most promising protocol is Multipath TCP (MPTCP), a backward compatible TCP extension allowing to balance the load on several TCP subflows, ideally following different physical paths, to maximize connection throughput. Current implementations create a full mesh between hosts IPs, which can be suboptimal. For situation when at least one end-point network is multihomed, we propose to enhance its subflow creation mechanism so that MPTCP creates an adequate number of subflows considering the underlying path diversity offered by an IP-in-IP mapping protocol, the Location/Identifier Separation Protocol (LISP). We defined and implemented a cross-layer cooperation module between MPTCP and LISP, leading to an improved version of MPTCP we name Augmented MPTCP (A-MPTCP). We evaluated A-MPTCP for a realistic Cloud access use-case scenario involving one multi-homed data-center. Results from a large-scale test bed show us that A-MPTCP can halve the transfer times with the simple addition of one additional LIS-Penabled MPTCP subflow, hence showing promising performance for Cloud communications between multi-homed users and multihomed data-centers.",
"title": ""
},
{
"docid": "6c4c235c779d9e6a78ea36d7fc636df4",
"text": "Digital archiving creates a vast store of knowledge that can be accessed only through digital tools. Users of this information will need fluency in the tools of digital access, exploration, visualization, analysis, and collaboration. This paper proposes that this fluency represents a new form of literacy, which must become fundamental for humanities scholars. Tools influence both the creation and the analysis of information. Whether using pen and paper, Microsoft Office, or Web 2.0, scholars base their process, production, and questions on the capabilities their tools offer them. Digital archiving and the interconnectivity of the Web provide new challenges in terms of quantity and quality of information. They create a new medium for presentation as well as a foundation for collaboration that is independent of physical location. Challenges for digital humanities include: • developing new genres for complex information presentation that can be shared, analyzed, and compared; • creating a literacy in information analysis and visualization that has the same rigor and richness as current scholarship; and • expanding classically text-based pedagogy to include simulation, animation, and spatial and geographic representation.",
"title": ""
},
{
"docid": "463ef40777aaf14406186d5d4d99ba13",
"text": "Social media is already a fixture for reporting for many journalists, especially around breaking news events where non-professionals may already be on the scene to share an eyewitness report, photo, or video of the event. At the same time, the huge amount of content posted in conjunction with such events serves as a challenge to finding interesting and trustworthy sources in the din of the stream. In this paper we develop and investigate new methods for filtering and assessing the verity of sources found through social media by journalists. We take a human centered design approach to developing a system, SRSR (\"Seriously Rapid Source Review\"), informed by journalistic practices and knowledge of information production in events. We then used the system, together with a realistic reporting scenario, to evaluate the filtering and visual cue features that we developed. Our evaluation offers insights into social media information sourcing practices and challenges, and highlights the role technology can play in the solution.",
"title": ""
}
] |
scidocsrr
|
df7ed48ba46f6a34cef2ee8866650a8c
|
How to Make Causal Inferences Using Texts
|
[
{
"docid": "eae92d06d00d620791e6b247f8e63c36",
"text": "Tagging systems have become major infrastructures on the Web. They allow users to create tags that annotate and categorize content and share them with other users, very helpful in particular for searching multimedia content. However, as tagging is not constrained by a controlled vocabulary and annotation guidelines, tags tend to be noisy and sparse. Especially new resources annotated by only a few users have often rather idiosyncratic tags that do not reflect a common perspective useful for search. In this paper we introduce an approach based on Latent Dirichlet Allocation (LDA) for recommending tags of resources in order to improve search. Resources annotated by many users and thus equipped with a fairly stable and complete tag set are used to elicit latent topics to which new resources with only a few tags are mapped. Based on this, other tags belonging to a topic can be recommended for the new resource. Our evaluation shows that the approach achieves significantly better precision and recall than the use of association rules, suggested in previous work, and also recommends more specific tags. Moreover, extending resources with these recommended tags significantly improves search for new resources.",
"title": ""
}
] |
[
{
"docid": "b9538c45fc55caff8b423f6ecc1fe416",
"text": " Summary. The Probabilistic I/O Automaton model of [31] is used as the basis for a formal presentation and proof of the randomized consensus algorithm of Aspnes and Herlihy. The algorithm guarantees termination within expected polynomial time. The Aspnes-Herlihy algorithm is a rather complex algorithm. Processes move through a succession of asynchronous rounds, attempting to agree at each round. At each round, the agreement attempt involves a distributed random walk. The algorithm is hard to analyze because of its use of nontrivial results of probability theory (specifically, random walk theory which is based on infinitely many coin flips rather than on finitely many coin flips), because of its complex setting, including asynchrony and both nondeterministic and probabilistic choice, and because of the interplay among several different sub-protocols. We formalize the Aspnes-Herlihy algorithm using probabilistic I/O automata. In doing so, we decompose it formally into three subprotocols: one to carry out the agreement attempts, one to conduct the random walks, and one to implement a shared counter needed by the random walks. Properties of all three subprotocols are proved separately, and combined using general results about automaton composition. It turns out that most of the work involves proving non-probabilistic properties (invariants, simulation mappings, non-probabilistic progress properties, etc.). The probabilistic reasoning is isolated to a few small sections of the proof. The task of carrying out this proof has led us to develop several general proof techniques for probabilistic I/O automata. These include ways to combine expectations for different complexity measures, to compose expected complexity properties, to convert probabilistic claims to deterministic claims, to use abstraction mappings to prove probabilistic properties, and to apply random walk theory in a distributed computational setting. We apply all of these techniques to analyze the expected complexity of the algorithm.",
"title": ""
},
{
"docid": "388c0681127193d8f7bca99adaa11363",
"text": "Land use maps are very important references for the urban planning and management. However, it is difficult and time-consuming to get high-resolution urban land use maps. In this study, we propose a new method to derive land use information at building block level based on machine learning and geo-tagged street-level imagery – Google Street View images. Several commonly used generic image features (GIST, HoG, and SIFT-Fisher) are used to represent street-level images of different cityscapes in a case study area of New York City. Machine learning is further used to categorize different images based on the calculated image features of different street-level images. Accuracy assessment results show that the method developed in this study is a promising method for land use mapping at building block level in future.",
"title": ""
},
{
"docid": "4faef20f6f8807f500b0a555f0f0ed2b",
"text": "Online search and item recommendation systems are often based on being able to correctly label items with topical keywords. Typically, topical labelers analyze the main text associated with the item, but social media posts are often multimedia in nature and contain contents beyond the main text. Topic labeling for social media posts is therefore an important open problem for supporting effective social media search and recommendation. In this work, we present a novel solution to this problem for Google+ posts, in which we integrated a number of different entity extractors and annotators, each responsible for a part of the post (e.g. text body, embedded picture, video, or web link). To account for the varying quality of different annotator outputs, we first utilized crowdsourcing to measure the accuracy of individual entity annotators, and then used supervised machine learning to combine different entity annotators based on their relative accuracy. Evaluating using a ground truth data set, we found that our approach substantially outperforms topic labels obtained from the main text, as well as naive combinations of the individual annotators. By accurately applying topic labels according to their relevance to social media posts, the results enables better search and item recommendation.",
"title": ""
},
{
"docid": "6a1a9c6cb2da06ee246af79fdeedbed9",
"text": "The world has revolutionized and phased into a new era, an era which upholds the true essence of technology and digitalization. As the market has evolved at a staggering scale, it is must to exploit and inherit the advantages and opportunities, it provides. With the advent of web 2.0, considering the scalability and unbounded reach that it provides, it is detrimental for an organization to not to adopt the new techniques in the competitive stakes that this emerging virtual world has set along with its advantages. The transformed and highly intelligent data mining approaches now allow organizations to collect, categorize, and analyze users’ reviews and comments from micro-blogging sites regarding their services and products. This type of analysis makes those organizations capable to assess, what the consumers want, what they disapprove of, and what measures can be taken to sustain and improve the performance of products and services. This study focuses on critical analysis of the literature from year 2012 to 2017 on sentiment analysis by using SVM (support vector machine). SVM is one of the widely used supervised machine learning techniques for text classification. This systematic review will serve the scholars and researchers to analyze the latest work of sentiment analysis with SVM as well as provide them a baseline for future trends and comparisons. Keywords—Sentiment analysis; polarity detection; machine learning; support vector machine (SVM); support vector machine; SLR; systematic literature review",
"title": ""
},
{
"docid": "0e5fc650834d883e291c2cf4ace91d35",
"text": "The majority of practitioners express software requirements using natural text notations such as user stories. Despite the readability of text, it is hard for people to build an accurate mental image of the most relevant entities and relationships. Even converting requirements to conceptual models is not sufficient: as the number of requirements and concepts grows, obtaining a holistic view of the requirements becomes increasingly difficult and, eventually, practically impossible. In this paper, we introduce and experiment with a novel, automated method for visualizing requirements—by showing the concepts the text references and their relationships—at different levels of granularity. We build on two pillars: (i) clustering techniques for grouping elements into coherent sets so that a simplified overview of the concepts can be created, and (ii) state-of-the-art, corpus-based semantic relatedness algorithms between words to measure the extent to which two concepts are related. We build a proof-of-concept tool and evaluate our approach by applying it to requirements from four real-world data sets.",
"title": ""
},
{
"docid": "5cf3f5f7e4d3300dc53ddd2fcbf52226",
"text": "We demonstrate the fabrication of poly(3,4-ethylenedioxythiophene) poly(styrenesulfonate) (PEDOT:PSS) nanogratings by a dehydration-assisted nanoimprint lithographic technique. Dehydration of PEDOT:PSS increases its cohesion to protect the nanostructures formed by nanoimprinting during demolding, resulting in the formation of high quality nanogratings of 60 nm in height, 70 nm in width and 70 nm in spacing (aspect ratio of 0.86). PEDOT:PSS nanogratings are used as hole transport and an electron blocking layer in blended poly(3-hexylthiophene-2,5-diyl) (P3HT):[6,6]-penyl-C61-butyric-acid-methyl-ester (PCBM) organic photovoltaic devices (OPV), showing enhancement of photocurrent and power efficiency in comparison to OPV devices with non-patterned PEDOT:PSS films.",
"title": ""
},
{
"docid": "df9acaed8dbcfbd38a30e4e1fa77aa8a",
"text": "Recent object detection systems rely on two critical steps: (1) a set of object proposals is predicted as efficiently as possible, and (2) this set of candidate proposals is then passed to an object classifier. Such approaches have been shown they can be fast, while achieving the state of the art in detection performance. In this paper, we propose a new way to generate object proposals, introducing an approach based on a discriminative convolutional network. Our model is trained jointly with two objectives: given an image patch, the first part of the system outputs a class-agnostic segmentation mask, while the second part of the system outputs the likelihood of the patch being centered on a full object. At test time, the model is efficiently applied on the whole test image and generates a set of segmentation masks, each of them being assigned with a corresponding object likelihood score. We show that our model yields significant improvements over state-of-theart object proposal algorithms. In particular, compared to previous approaches, our model obtains substantially higher object recall using fewer proposals. We also show that our model is able to generalize to unseen categories it has not seen during training. Unlike all previous approaches for generating object masks, we do not rely on edges, superpixels, or any other form of low-level segmentation.",
"title": ""
},
{
"docid": "9539b057f14a48cec48468cb97a4a9c1",
"text": "Fuzzy-match repair (FMR), which combines a human-generated translation memory (TM) with the flexibility of machine translation (MT), is one way of using MT to augment resources available to translators. We evaluate rule-based, phrase-based, and neural MT systems as black-box sources of bilingual information for FMR. We show that FMR success varies based on both the quality of the MT system and the type of MT system being used.",
"title": ""
},
{
"docid": "f4490447bf8a43de95d61e1626d365ae",
"text": "The connective tissue of the skin is composed mostly of collagen and elastin. Collagen makes up 70-80% of the dry weight of the skin and gives the dermis its mechanical and structural integrity. Elastin is a minor component of the dermis, but it has an important function in providing the elasticity of the skin. During aging, the synthesis of collagen gradually declines, and the skin thus becomes thinner in protected skin, especially after the seventh decade. Several factors contribute to the aging of the skin. In several hereditary disorders collagen or elastin are deficient, leading to accelerated aging. In cutis laxa, for example, elastin fibers are deficient or completely lacking, leading to sagging of the skin. Solar irradiation causes skin to look prematurely aged. Especially ultraviolet radiation induces an accumulation of abnormal elastotic material. These changes are usually observed after 60 years of age, but excessive exposure to the sun may cause severe photoaging as early as the second decade of life. The different biochemical and mechanical parameters of the dermis can be studied by modern techniques. The applications of these techniques to study the aging of dermal connective tissue are described in detail.",
"title": ""
},
{
"docid": "5d99524fdfc8c0da283a48962ea9ba4c",
"text": "Recommender Systems learn users’ preferences and tastes in different domains to suggest potentially interesting items to users. Group Recommender Systems generate recommendations that intend to satisfy a group of users as a whole, instead of individual users. In this article, we present a social based approach for recommender systems in the tourism domain, which builds a group profile by analyzing not only users’ preferences, but also the social relationships between members of a group. This aspect is a hot research topic in the recommender systems area. In addition, to generate the individual and group recommendations our approach uses a hybrid technique that combines three well-known filtering techniques: collaborative, content-based and demographic filtering. In this way, the disadvantages of one technique are overcome by the others. Our approach was materialized in a recommender system named Hermes, which suggests tourist attractions to both individuals and groups of users. We have obtained promising results when comparing our approach with classic approaches to generate recommendations to individual users and groups. These results suggest that considering the type of users’ relationship to provide recommendations to groups leads to more accurate recommendations in the tourism domain. These findings can be helpful for recommender systems developers and for researchers in this area.",
"title": ""
},
{
"docid": "aeebcc70000e6ceed99d2e033d35c65e",
"text": "This paper presents glowworm swarm optimization (GSO), a novel algorithm for the simultaneous computation of multiple optima of multimodal functions. The algorithm shares a few features with some better known swarm intelligence based optimization algorithms, such as ant colony optimization and particle swarm optimization, but with several significant differences. The agents in GSO are thought of as glowworms that carry a luminescence quantity called luciferin along with them. The glowworms encode the fitness of their current locations, evaluated using the objective function, into a luciferin value that they broadcast to their neighbors. The glowworm identifies its neighbors and computes its movements by exploiting an adaptive neighborhood, which is bounded above by its sensor range. Each glowworm selects, using a probabilistic mechanism, a neighbor that has a luciferin value higher than its own and moves toward it. These movements—based only on local information and selective neighbor interactions—enable the swarm of glowworms to partition into disjoint subgroups that converge on multiple optima of a given multimodal function. We provide some theoretical results related to the luciferin update mechanism in order to prove the bounded nature and convergence of luciferin levels of the glowworms. Experimental results demonstrate the efficacy of the proposed glowworm based algorithm in capturing multiple optima of a series of standard multimodal test functions and more complex ones, such as stair-case and multiple-plateau functions. We also report the results of tests in higher dimensional spaces with a large number of peaks. We address the parameter selection problem by conducting experiments to show that only two parameters need to be selected by the user. Finally, we provide some comparisons of GSO with PSO and an experimental comparison with Niche-PSO, a PSO variant that is designed for the simultaneous computation of multiple optima.",
"title": ""
},
{
"docid": "bbfc74593baf0cc0262d46730590ea8b",
"text": "\"So far, evaluation has not kept pace with efforts in digital libraries (or with digital libraries themselves), has not become part of their integral activity, and has not been even specified as to what it means, and how to do it.\" - [1]Conducting a comprehensive evaluation of a digital library requires a \"triangulation\" approach including multiple models, procedures, and tools. Carrying out valid evaluations of digital libraries in a timely and efficient manner is the focus of this tutorial. Why is evaluation of digital libraries so important? Each year sees the introduction of new digital libraries promoted as valuable resources for education and other needs. Yet systematic evaluation of the implementation and efficacy of these digital library systems is often lacking. This tutorial is specifically designed to establish evaluation as a key strategy throughout the design, development, and implementation of digital libraries. The tutorial focuses on a decision-oriented model for evaluating digital libraries using multiple methods such as: service evaluation, usability evaluation, information retrieval, biometrics evaluation, transaction log analysis survey methods, interviews and focus groups, observations, and experimental methods. Participants in this tutorial will learn how to implement models and procedures for evaluating digital libraries at all levels of education. The tutorial includes presentations with actual case studies that are focused on a variety of digital library evaluation strategies. Participants will also receive a copy of Evaluating Digital Libraries: A User-Friendly Guide.",
"title": ""
},
{
"docid": "07713323e19b00c93a21a3d121c0039b",
"text": "A CMOS nested-chopper instrumentation amplifier is presented with a typical offset of 100 nV. This performance is obtained by nesting an additional low-frequency chopper pair around a conventional chopper amplifier. The inner chopper pair removes the 1/f noise, while the outer chopper pair reduces the residual offset. The test chip is free from 1/f noise and has a thermal noise of 27 nV//spl radic/Hz consuming a total supply current of 200 /spl mu/A.",
"title": ""
},
{
"docid": "c839542db0e80ce253a170a386d91bab",
"text": "Description\nThe American College of Physicians (ACP) developed this guideline to present the evidence and provide clinical recommendations on the management of gout.\n\n\nMethods\nUsing the ACP grading system, the committee based these recommendations on a systematic review of randomized, controlled trials; systematic reviews; and large observational studies published between January 2010 and March 2016. Clinical outcomes evaluated included pain, joint swelling and tenderness, activities of daily living, patient global assessment, recurrence, intermediate outcomes of serum urate levels, and harms.\n\n\nTarget Audience and Patient Population\nThe target audience for this guideline includes all clinicians, and the target patient population includes adults with acute or recurrent gout.\n\n\nRecommendation 1\nACP recommends that clinicians choose corticosteroids, nonsteroidal anti-inflammatory drugs (NSAIDs), or colchicine to treat patients with acute gout. (Grade: strong recommendation, high-quality evidence).\n\n\nRecommendation 2\nACP recommends that clinicians use low-dose colchicine when using colchicine to treat acute gout. (Grade: strong recommendation, moderate-quality evidence).\n\n\nRecommendation 3\nACP recommends against initiating long-term urate-lowering therapy in most patients after a first gout attack or in patients with infrequent attacks. (Grade: strong recommendation, moderate-quality evidence).\n\n\nRecommendation 4\nACP recommends that clinicians discuss benefits, harms, costs, and individual preferences with patients before initiating urate-lowering therapy, including concomitant prophylaxis, in patients with recurrent gout attacks. (Grade: strong recommendation, moderate-quality evidence).",
"title": ""
},
{
"docid": "ff7cce658de6150af85e95b25fd8e508",
"text": "Using a panel of mandatory SEC disclosure filings we test the predictability of investment fraud. We find that past regulatory and legal violations, conflicts of interest, and monitoring, are significantly associated with future fraud. Avoiding the 5% of firms with the highest fraud risk allows investors to avoid 29.7% of investment frauds, and over half of the total dollar losses from fraud. Even after excluding small frauds and fraud by rogue employees, we are able to predict at least 24.1% of frauds at a false positive rate of 5%. There is no evidence that investors are compensated for fraud risk through superior performance or lower fees. We also find that investors react strongly to the discovery of fraud, resulting in significantly higher rates of firm death and investor outflows. Our results provide investors and regulators with tools for predicting investment fraud. JEL Classifications: G2, G20, G28, K2, K22",
"title": ""
},
{
"docid": "c70466f8b1e70fcdd4b7fe3f2cb772b2",
"text": "We present Tor, a circuit-based low-latency anonymous communication service. This second-generation Onion Routing system addresses limitations in the original design. Tor adds perfect forward secrecy, congestion control, directory servers, integrity checking, configurable exit policies, and a practical design for rendezvous points. Tor works on the real-world Internet, requires no special privileges or kernel modifications, requires little synchronization or coordination between nodes, and provides a reasonable tradeoff between anonymity, usability, and efficiency. We briefly describe our experiences with an international network of more than a dozen hosts. We close with a list of open problems in anonymous communication.",
"title": ""
},
{
"docid": "1d3441ce9065ab004d04946528d92935",
"text": "General purpose object-oriented programs typically aren't embarrassingly parallel. For these applications, finding enough concurrency remains a challenge in program design. To address this challenge, in the Panini project we are looking at reconciling concurrent program design goals with modular program design goals. The main idea is that if programmers improve the modularity of their programs they should get concurrency for free. In this work we describe one of our directions to reconcile these two goals by enhancing Gang-of-Four (GOF) object-oriented design patterns. GOF patterns are commonly used to improve the modularity of object-oriented software. These patterns describe strategies to decouple components in design space and specify how these components should interact. Our hypothesis is that if these patterns are enhanced to also decouple components in execution space applying them will concomitantly improve the design and potentially available concurrency in software systems. To evaluate our hypothesis we have studied all 23 GOF patterns. For 18 patterns out of 23, our hypothesis has held true. Another interesting preliminary result reported here is that for 17 out of these 18 studied patterns, concurrency and synchronization concerns were completely encapsulated in our concurrent design pattern framework.",
"title": ""
},
{
"docid": "b9733e699abaaedc380a45a3136f97da",
"text": "Generally speaking, anti-computer forensics is a set of techniques used as countermeasures to digital forensic analysis. When put into information and data perspective, it is a practice of making it hard to understand or find. Typical example being when programming code is often encoded to protect intellectual property and prevent an attacker from reverse engineering a proprietary software program.",
"title": ""
},
{
"docid": "826e01210bb9ce8171ed72043b4a304d",
"text": "Despite their local fluency, long-form text generated from RNNs is often generic, repetitive, and even self-contradictory. We propose a unified learning framework that collectively addresses all the above issues by composing a committee of discriminators that can guide a base RNN generator towards more globally coherent generations. More concretely, discriminators each specialize in a different principle of communication, such as Grice’s maxims, and are collectively combined with the base RNN generator through a composite decoding objective. Human evaluation demonstrates that text generated by our model is preferred over that of baselines by a large margin, significantly enhancing the overall coherence, style, and information of the generations.",
"title": ""
},
{
"docid": "e5a2c2ef9d2cb6376b18c1e7232016b2",
"text": "In this paper we describe the problem of Visual Place Categorization (VPC) for mobile robotics, which involves predicting the semantic category of a place from image measurements acquired from an autonomous platform. For example, a robot in an unfamiliar home environment should be able to recognize the functionality of the rooms it visits, such as kitchen, living room, etc. We describe an approach to VPC based on sequential processing of images acquired with a conventional video camera. We identify two key challenges: Dealing with non-characteristic views and integrating restricted-FOV imagery into a holistic prediction. We present a solution to VPC based upon a recently-developed visual feature known as CENTRIST (CENsus TRansform hISTogram). We describe a new dataset for VPC which we have recently collected and are making publicly available. We believe this is the first significant, realistic dataset for the VPC problem. It contains the interiors of six different homes with ground truth labels. We use this dataset to validate our solution approach, achieving promising results.",
"title": ""
}
] |
scidocsrr
|
ac46f1f7b548e1a7ead5d8a633385d7d
|
Agency and communion from the perspective of self versus others.
|
[
{
"docid": "c36fec7cebe04627ffcd9a689df8c5a2",
"text": "In seems there are two dimensions that underlie most judgments of traits, people, groups, and cultures. Although the definitions vary, the first makes reference to attributes such as competence, agency, and individualism, and the second to warmth, communality, and collectivism. But the relationship between the two dimensions seems unclear. In trait and person judgment, they are often positively related; in group and cultural stereotypes, they are often negatively related. The authors report 4 studies that examine the dynamic relationship between these two dimensions, experimentally manipulating the location of a target of judgment on one and examining the consequences for the other. In general, the authors' data suggest a negative dynamic relationship between the two, moderated by factors the impact of which they explore.",
"title": ""
},
{
"docid": "fd1b32615aa7eb8f153e495d831bdd93",
"text": "The culture movement challenged the universality of the self-enhancement motive by proposing that the motive is pervasive in individualistic cultures (the West) but absent in collectivistic cultures (the East). The present research posited that Westerners and Easterners use different tactics to achieve the same goal: positive self-regard. Study 1 tested participants from differing cultural backgrounds (the United States vs. Japan), and Study 2 tested participants of differing self-construals (independent vs. interdependent). Americans and independents self-enhanced on individualistic attributes, whereas Japanese and interdependents self-enhanced on collectivistic attributes. Independents regarded individualistic attributes, whereas interdependents regarded collectivistic attributes, as personally important. Attribute importance mediated self-enhancement. Regardless of cultural background or self-construal, people self-enhance on personally important dimensions. Self-enhancement is a universal human motive.",
"title": ""
}
] |
[
{
"docid": "65557b1b1e43e4f98f8edea6869d35b3",
"text": "Several new genomics technologies have become available that offer long-read sequencing or long-range mapping with higher throughput and higher resolution analysis than ever before. These long-range technologies are rapidly advancing the field with improved reference genomes, more comprehensive variant identification and more complete views of transcriptomes and epigenomes. However, they also require new bioinformatics approaches to take full advantage of their unique characteristics while overcoming their complex errors and modalities. Here, we discuss several of the most important applications of the new technologies, focusing on both the currently available bioinformatics tools and opportunities for future research. Various genomics-related fields are increasingly taking advantage of long-read sequencing and long-range mapping technologies, but making sense of the data requires new analysis strategies. This Review discusses bioinformatics tools that have been devised to handle the numerous characteristic features of these long-range data types, with applications in genome assembly, genetic variant detection, haplotype phasing, transcriptomics and epigenomics.",
"title": ""
},
{
"docid": "ca1729ffc67b37c39eca7d98115a55ec",
"text": "Causal inference is one of the fundamental problems in science. In recent years, several methods have been proposed for discovering causal structure from observational data. These methods, however, focus specifically on numeric data, and are not applicable on nominal or binary data. In this work, we focus on causal inference for binary data. Simply put, we propose causal inference by compression. To this end we propose an inference framework based on solid information theoretic foundations, i.e. Kolmogorov complexity. However, Kolmogorov complexity is not computable, and hence we propose a practical and computable instantiation based on the Minimum Description Length (MDL) principle. To apply the framework in practice, we propose ORIGO, an efficient method for inferring the causal direction from binary data. ORIGO employs the lossless PACK compressor, works directly on the data and does not require assumptions about neither distributions nor the type of causal relations. Extensive evaluation on synthetic, benchmark, and real-world data shows that ORIGO discovers meaningful causal relations, and outperforms state-of-the-art methods by a wide margin.",
"title": ""
},
{
"docid": "b9b53f3e3196e31a24e32dd1902eea63",
"text": "Currency, defined here as banknotes and coins, plays an important role in the economy as a medium of exchange and a store of value. For Australia’s currency to function efficiently, it is important that the public has confidence in it and is therefore willing to accept banknotes and coins in transactions. Counterfeiting currency is a crime under the Crimes (Currency) Act 1981, and carries penalties of up to 14 years’ jail. People who fall victim to this crime have essentially been robbed. They cannot be reimbursed for their loss as, among other things, doing so would serve as an incentive to counterfeiters to continue their illegal activities. As a result, a high prevalence of counterfeiting can threaten public confidence in currency given that someone who accepts a counterfeit in place of a genuine banknote is left out of pocket and may be reluctant to accept banknotes in the future. Under the Reserve Bank Act 1959, the Reserve Bank issues Australia’s banknotes and has a mandate to contribute to the stability of the Australian currency. To ensure the security of these banknotes, the Reserve Bank works actively to monitor and manage the threat of banknote counterfeiting in Australia. The Reserve Bank works in partnership with key stakeholders to ensure that cash-handling professionals have information on how to detect counterfeits, that machines can authenticate banknotes, and that counterfeiters are apprehended and prosecuted (Evans, Gallagher and Martz 2015). The periodic issuance of new banknote series with upgraded security features, as is currently under way in Australia, is key to ensuring the security of, and thus confidence in, banknotes. Research into potential new security features is ongoing so that the Reserve Bank is well placed to develop and issue new banknote series as required and before counterfeiting levels become problematic. Monitoring of counterfeit activities informs the Bank’s decisions about the timing of such issuance. Recent Trends in Banknote Counterfeiting",
"title": ""
},
{
"docid": "ee884daf681a44b29ba5aa92ec2f78ee",
"text": "Fourteen leadership effect studies that used indirect-effect models were quantitatively analysed to explore the most promising mediating variables. The results indicate that total effect sizes based on indirect-effect studies appear to be low, quite comparable to the results of some meta-analyses of direct-effect studies. As the earlier indirect-effect studies tended to include a broad range of mainly school organisational conditions as intermediary variables, more recent studies focus more sharply on instructional conditions. The results of the conceptual analysis and the quantitative research synthesis would seem to support conceptualising educational leadership as a detached and ‘lean’ kind of meta-control, which would make maximum use of the available substitutes and self-organisation offered by the school staff and school organisational structural provisions. The coupling of conceptual analysis and systematic review of studies driven by indirect-effect models provides a new perspective on leadership effectiveness.",
"title": ""
},
{
"docid": "a4ecdccf4370292a31fc38d6602b3f50",
"text": "Loop gain analysis for performance evaluation of current sensors for switching converters is presented. The MOS transistor scaling technique is reviewed and employed in developing high-speed and high-accuracy current-sensors with offset-current cancellation. Using a standard 0.35/spl mu/m CMOS process, and integrated full-range inductor current sensor for a boost converter is designed. It operated at a supply voltage of 1.5 V with a DC loop gain of 38 dB, and a unity gain frequency of 10 MHz. The sensor worked properly at a converter switching frequency of 500 kHz.",
"title": ""
},
{
"docid": "a77e5f81c925e2f170df005b6576792b",
"text": "Recommendation systems utilize data analysis techniques to the problem of helping users find the items they would like. Example applications include the recommendation systems for movies, books, CDs and many others. As recommendation systems emerge as an independent research area, the rating structure plays a critical role in recent studies. Among many alternatives, the collaborative filtering algorithms are generally accepted to be successful to estimate user ratings of unseen items and then to derive proper recommendations. In this paper, we extend the concept of single criterion ratings to multi-criteria ones, i.e., an item can be evaluated in many different aspects. For example, the goodness of a restaurant can be evaluated in terms of its food, decor, service and cost. Since there are usually conflicts among different criteria, the recommendation problem cannot be formulated as an optimization problem any more. Instead, we propose in this paper to use data query techniques to solve this multi-criteria recommendation problem. Empirical studies show that our approach is of both theoretical and practical values.",
"title": ""
},
{
"docid": "c8482ed26ba2c4ba1bd3eed6ac0e00b4",
"text": "Virtual Reality (VR) has now emerged as a promising tool in many domains of therapy and rehabilitation (Rizzo, Schultheis, Kerns & Mateer, 2004; Weiss & Jessel, 1998; Zimand, Anderson, Gershon, Graap, Hodges, & Rothbaum, 2002; Glantz, Rizzo & Graap, 2003). Continuing advances in VR technology along with concomitant system cost reductions have supported the development of more usable, useful, and accessible VR systems that can uniquely target a wide range of physical, psychological, and cognitive rehabilitation concerns and research questions. What makes VR application development in the therapy and rehabilitation sciences so distinctively important is that it represents more than a simple linear extension of existing computer technology for human use. VR offers the potential to create systematic human testing, training and treatment environments that allow for the precise control of complex dynamic 3D stimulus presentations, within which sophisticated interaction, behavioral tracking and performance recording is possible. Much like an aircraft simulator serves to test and train piloting ability, virtual environments (VEs) can be developed to present simulations that assess and rehabilitate human functional performance under a range of stimulus conditions that are not easily deliverable and controllable in the real world. When combining these assets within the context of functionally relevant, ecologically enhanced VEs, a fundamental advancement could emerge in how human functioning can be addressed in many rehabilitation disciplines.",
"title": ""
},
{
"docid": "e88ad42145c63dd2aeff6c1f64f4b4c7",
"text": "Recommender systems are in the center of network science, and they are becoming increasingly important in individual businesses for providing efficient, personalized services and products to users. Previous research in the field of recommendation systems focused on improving the precision of the system through designing more accurate recommendation lists. Recently, the community has been paying attention to diversity and novelty of recommendation lists as key characteristics of modern recommender systems. In many cases, novelty and precision do not go hand in hand, and the accuracy--novelty dilemma is one of the challenging problems in recommender systems, which needs efforts in making a trade-off between them.\n In this work, we propose an algorithm for providing novel and accurate recommendation to users. We consider the standard definition of accuracy and an effective self-information--based measure to assess novelty of the recommendation list. The proposed algorithm is based on item popularity, which is defined as the number of votes received in a certain time interval. Wavelet transform is used for analyzing popularity time series and forecasting their trend in future timesteps. We introduce two filtering algorithms based on the information extracted from analyzing popularity time series of the items. The popularity-based filtering algorithm gives a higher chance to items that are predicted to be popular in future timesteps. The other algorithm, denoted as a novelty and population-based filtering algorithm, is to move toward items with low popularity in past timesteps that are predicted to become popular in the future. The introduced filters can be applied as adds-on to any recommendation algorithm. In this article, we use the proposed algorithms to improve the performance of classic recommenders, including item-based collaborative filtering and Markov-based recommender systems. The experiments show that the algorithms could significantly improve both the accuracy and effective novelty of the classic recommenders.",
"title": ""
},
{
"docid": "008ad9d12f1a8451f46be59eeef5bf0b",
"text": "0957-4174/$ see front matter 2011 Elsevier Ltd. A doi:10.1016/j.eswa.2011.05.070 ⇑ Corresponding author. Tel.: +34 953 212898; fax: E-mail address: msaleh@ujaen.es (M. Rushdi Saleh 1 http://www.amazon.com. 2 http://www.epinions.com. 3 http://www.imdb.com. Recently, opinion mining is receiving more attention due to the abundance of forums, blogs, e-commerce web sites, news reports and additional web sources where people tend to express their opinions. Opinion mining is the task of identifying whether the opinion expressed in a document is positive or negative about a given topic. In this paper we explore this new research area applying Support Vector Machines (SVM) for testing different domains of data sets and using several weighting schemes. We have accomplished experiments with different features on three corpora. Two of them have already been used in several works. The last one has been built from Amazon.com specifically for this paper in order to prove the feasibility of the SVM for different domains. 2011 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "ea33b26333eaa1d92f3c42688eb8aba5",
"text": "Code to implement network protocols can be either inside the kernel of an operating system or in user-level processes. Kernel-resident code is hard to develop, debug, and maintain, but user-level implementations typically incur significant overhead and perform poorly.\nThe performance of user-level network code depends on the mechanism used to demultiplex received packets. Demultiplexing in a user-level process increases the rate of context switches and system calls, resulting in poor performance. Demultiplexing in the kernel eliminates unnecessary overhead.\nThis paper describes the packet filter, a kernel-resident, protocol-independent packet demultiplexer. Individual user processes have great flexibility in selecting which packets they will receive. Protocol implementations using the packet filter perform quite well, and have been in production use for several years.",
"title": ""
},
{
"docid": "44d5f8816285d81a731761ad00157e6f",
"text": "Gunshot detection traditionally has been a task performed with acoustic signal processing. While this type of detection can give cities, civil services and training institutes a method to identify specific locations of gunshots, the nature of acoustic detection may not provide the fine-grained detection accuracy and sufficient metrics for performance assessment. If however you examine a different signature of a gunshot, the recoil, detection of the same event with accelerometers can provide you with persona and firearm model level detection abilities. The functionality of accelerometer sensors in wrist worn devices have increased significantly in recent time. From fitness trackers to smart watches, accelerometers have been put to use in various activity recognition and detection applications. In this paper, we design an approach that is able to account for the variations in firearm generated recoil, as recorded by a wrist worn accelerometer, and helps categorize the impulse forces. Our experiments show that not only can wrist worn accelerometers detect the differences in handgun rifle and shotgun gunshots, but the individual models of firearms can be distinguished from each other. The application of this framework could be extended in the future to include real time detection embedded in smart devices to assist in firearms training and also help in crime detection and prosecution.",
"title": ""
},
{
"docid": "c77fad43abe34ecb0a451a3b0b5d684e",
"text": "Search engine click logs provide an invaluable source of relevance information, but this information is biased. A key source of bias is presentation order: the probability of click is influenced by a document's position in the results page. This paper focuses on explaining that bias, modelling how probability of click depends on position. We propose four simple hypotheses about how position bias might arise. We carry out a large data-gathering effort, where we perturb the ranking of a major search engine, to see how clicks are affected. We then explore which of the four hypotheses best explains the real-world position effects, and compare these to a simple logistic regression model. The data are not well explained by simple position models, where some users click indiscriminately on rank 1 or there is a simple decay of attention over ranks. A â cascade' model, where users view results from top to bottom and leave as soon as they see a worthwhile document, is our best explanation for position bias in early ranks",
"title": ""
},
{
"docid": "ebe14e601d0b61f10f6674e2d7108d41",
"text": "In this letter, the design procedure and electrical performance of a dual band (2.4/5.8GHz) printed dipole antenna using spiral structure are proposed and investigated. For the first time, a dual band printed dipole antenna with spiral configuration is proposed. In addition, a matching method by adjusting the transmission line width, and a new bandwidth broadening method varying the distance between the top and bottom spirals are reported. The operating frequencies of the proposed antenna are 2.4GHz and 5.8GHz which cover WLAN system. The proposed antenna achieves a good matching using tapered transmission lines for the top and bottom spirals. The desired resonant frequencies are obtained by adjusting the number of turns of the spirals. The bandwidth is optimized by varying the distance between the top and bottom spirals. A relative position of the bottom spiral plays an important role in achieving a bandwidth in terms of 10-dB return loss.",
"title": ""
},
{
"docid": "4f760928083b9b4c574c6d6e1cc4f4b1",
"text": "Finding matching images across large datasets plays a key role in many computer vision applications such as structure-from-motion (SfM), multi-view 3D reconstruction, image retrieval, and image-based localisation. In this paper, we propose finding matching and non-matching pairs of images by representing them with neural network based feature vectors, whose similarity is measured by Euclidean distance. The feature vectors are obtained with convolutional neural networks which are learnt from labeled examples of matching and non-matching image pairs by using a contrastive loss function in a Siamese network architecture. Previously Siamese architecture has been utilised in facial image verification and in matching local image patches, but not yet in generic image retrieval or whole-image matching. Our experimental results show that the proposed features improve matching performance compared to baseline features obtained with networks which are trained for image classification task. The features generalize well and improve matching of images of new landmarks which are not seen at training time. This is despite the fact that the labeling of matching and non-matching pairs is imperfect in our training data. The results are promising considering image retrieval applications, and there is potential for further improvement by utilising more training image pairs with more accurate ground truth labels.",
"title": ""
},
{
"docid": "ff2b53e0cecb849d1cbb503300f1ab9a",
"text": "Receiving rapid, accurate and comprehensive knowledge about the conditions of damaged buildings after earthquake strike and other natural hazards is the basis of many related activities such as rescue, relief and reconstruction. Recently, commercial high-resolution satellite imagery such as IKONOS and QuickBird is becoming more powerful data resource for disaster management. In this paper, a method for automatic detection and classification of damaged buildings using integration of high-resolution satellite imageries and vector map is proposed. In this method, after extracting buildings position from vector map, they are located in the pre-event and post-event satellite images. By measuring and comparing different textural features for extracted buildings in both images, buildings conditions are evaluated through a Fuzzy Inference System. Overall classification accuracy of 74% and kappa coefficient of 0.63 were acquired. Results of the proposed method, indicates the capability of this method for automatic determination of damaged buildings from high-resolution satellite imageries.",
"title": ""
},
{
"docid": "85cc307d55f4d1727e0194890051d34a",
"text": "Exploiting linguistic knowledge to infer properties of neologisms C. Paul Cook Doctor of Philosophy Graduate Department of Computer Science University of Toronto 2010 Neologisms, or newly-coined words, pose problems for natural language processing (NLP) systems. Due to the recency of their coinage, neologisms are typically not listed in computational lexicons—dictionary-like resources that many NLP applications depend on. Therefore when a neologism is encountered in a text being processed, the performance of an NLP system will likely suffer due to the missing word-level information. Identifying and documenting the usage of neologisms is also a challenge in lexicography, the making of dictionaries. The traditional approach to these tasks has been to manually read a lot of text. However, due to the vast quantities of text being produced nowadays, particularly in electronic media such as blogs, it is no longer possible to manually analyze it all in search of neologisms. Methods for automatically identifying and inferring syntactic and semantic properties of neologisms would therefore address problems encountered in both natural language processing and lexicography. Because neologisms are typically infrequent due to their recent addition to the language, approaches to automatically learning word-level information relying on statistical distributional information are in many cases inappropriate. Moreover, neologisms occur in many domains and genres, and therefore approaches relying on domain-specific resources are also inappropriate. The hypothesis of this thesis is that knowledge about etymology—including word formation processes and types of semantic change—can be exploited for the acquisition of aspects of the syntax and semantics of neologisms. Evidence supporting this hypothesis is found",
"title": ""
},
{
"docid": "914d17433df678e9ace1c9edd1c968d3",
"text": "We propose a Deep Learning approach to the visual question answering task, where machines answer to questions about real-world images. By combining latest advances in image representation and natural language processing, we propose Ask Your Neurons, a scalable, jointly trained, end-to-end formulation to this problem. In contrast to previous efforts, we are facing a multi-modal problem where the language output (answer) is conditioned on visual and natural language inputs (image and question). We evaluate our approaches on the DAQUAR as well as the VQA dataset where we also report various baselines, including an analysis how much information is contained in the language part only. To study human consensus, we propose two novel metrics and collect additional answers which extend the original DAQUAR dataset to DAQUAR-Consensus. Finally, we evaluate a rich set of design choices how to encode, combine and decode information in our proposed Deep Learning formulation.",
"title": ""
},
{
"docid": "504f3406db4465c10ffad27f2674f232",
"text": "In this paper we show the possibility of using FAUST (a programming language for function based block oriented programming) to create a fast audio processor in a single chip FPGA environment. The produced VHDL code is embedded in the on-chip processor system and utilizes the FPGA fabric for parallel processing. For the purpose of implementing and testing the code a complete System-On-Chip framework has been created. We use a Digilent board with a XILINX Virtex 2 Pro FPGA. The chip has a PowerPC 405 core and the framework uses the on chip peripheral bus to interface the core. The content of this paper presents a proof-of-concept implementation using a simple two pole IIR filter. The produced code is working, although more work has to be done for implementing complex arithmetic operations support.",
"title": ""
},
{
"docid": "23670ac6fb88e2f5d3a31badc6dc38f9",
"text": "The purpose of this review article is to report on the recent developments and the performance level achieved in the strained-Si/SiGe material system. In the first part, the technology of the growth of a high-quality strained-Si layer on a relaxed, linear or step-graded SiGe buffer layer is reviewed. Characterization results of strained-Si films obtained with secondary ion mass spectroscopy, Rutherford backscattering spectroscopy, atomic force microscopy, spectroscopic ellipsometry and Raman spectroscopy are presented. Techniques for the determination of bandgap parameters from electrical characterization of metal–oxide–semiconductor (MOS) structures on strained-Si film are discussed. In the second part, processing issues of strained-Si films in conventional Si technology with low thermal budget are critically reviewed. Thermal and low-temperature microwave plasma oxidation and nitridation of strained-Si layers are discussed. Some recent results on contact metallization of strained-Si using Ti and Pt are presented. In the last part, device applications of strained Si with special emphasis on heterostructure metal oxide semiconductor field effect transistors and modulation-doped field effect transistors are discussed. Design aspects and simulation results of nand p-MOS devices with a strained-Si channel are presented. Possible future applications of strained-Si/SiGe in high-performance SiGe CMOS technology are indicated.",
"title": ""
},
{
"docid": "ff0d27f1ba24321dedfc01cee017a23a",
"text": "In Mexico, local empirical knowledge about medicinal properties of plants is the basis for their use as home remedies. It is generally accepted by many people in Mexico and elsewhere in the world that beneficial medicinal effects can be obtained by ingesting plant products. In this review, we focus on the potential pharmacologic bases for herbal plant efficacy, but we also raise concerns about the safety of these agents, which have not been fully assessed. Although numerous randomized clinical trials of herbal medicines have been published and systematic reviews and meta-analyses of these studies are available, generalizations about the efficacy and safety of herbal medicines are clearly not possible. Recent publications have also highlighted the unintended consequences of herbal product use, including morbidity and mortality. It has been found that many phytochemicals have pharmacokinetic or pharmacodynamic interactions with drugs. The present review is limited to some herbal medicines that are native or cultivated in Mexico and that have significant use. We discuss the cultural uses, phytochemistry, pharmacological, and toxicological properties of the following plant species: nopal (Opuntia ficus), peppermint (Mentha piperita), chaparral (Larrea divaricata), dandlion (Taraxacum officinale), mullein (Verbascum densiflorum), chamomile (Matricaria recutita), nettle or stinging nettle (Urtica dioica), passionflower (Passiflora incarnata), linden flower (Tilia europea), and aloe (Aloe vera). We conclude that our knowledge of the therapeutic benefits and risks of some herbal medicines used in Mexico is still limited and efforts to elucidate them should be intensified.",
"title": ""
}
] |
scidocsrr
|
b915896fb257b3b9c4b1d38cebd80ddb
|
An improved K-nearest-neighbor algorithm for text categorization
|
[
{
"docid": "286ccc898eb9bdf2aae7ed5208b1ae18",
"text": "It has recently been argued that a Naive Bayesian classifier can be used to filter unsolicited bulk e-mail (“spam”). We conduct a thorough evaluation of this proposal on a corpus that we make publicly available, contributing towards standard benchmarks. At the same time we investigate the effect of attribute-set size, training-corpus size, lemmatization, and stop-lists on the filter’s performance, issues that had not been previously explored. After introducing appropriate cost-sensitive evaluation measures, we reach the conclusion that additional safety nets are needed for the Naive Bayesian anti-spam filter to be viable in practice.",
"title": ""
}
] |
[
{
"docid": "4edc0f70d6b8d599e28d245cbd8af31e",
"text": "To facilitate the use of biological outcome modeling for treatment planning, an exponential function is introduced as a simpler equivalent to the Lyman formula for calculating normal tissue complication probability (NTCP). The single parameter of the exponential function is chosen to reproduce the Lyman calculation to within approximately 0.3%, and thus enable easy conversion of data contained in empirical fits of Lyman parameters for organs at risk (OARs). Organ parameters for the new formula are given in terms of Lyman model m and TD(50), and conversely m and TD(50) are expressed in terms of the parameters of the new equation. The role of the Lyman volume-effect parameter n is unchanged from its role in the Lyman model. For a non-homogeneously irradiated OAR, an equation relates d(ref), n, v(eff) and the Niemierko equivalent uniform dose (EUD), where d(ref) and v(eff) are the reference dose and effective fractional volume of the Kutcher-Burman reduction algorithm (i.e. the LKB model). It follows in the LKB model that uniform EUD irradiation of an OAR results in the same NTCP as the original non-homogeneous distribution. The NTCP equation is therefore represented as a function of EUD. The inverse equation expresses EUD as a function of NTCP and is used to generate a table of EUD versus normal tissue complication probability for the Emami-Burman parameter fits as well as for OAR parameter sets from more recent data.",
"title": ""
},
{
"docid": "d4954bab5fc4988141c509a6d6ab79db",
"text": "Recent advances in neural autoregressive models have improve the performance of speech synthesis (SS). However, as they lack the ability to model global characteristics of speech (such as speaker individualities or speaking styles), particularly when these characteristics have not been labeled, making neural autoregressive SS systems more expressive is still an open issue. In this paper, we propose to combine VoiceLoop, an autoregressive SS model, with Variational Autoencoder (VAE). This approach, unlike traditional autoregressive SS systems, uses VAE to model the global characteristics explicitly, enabling the expressiveness of the synthesized speech to be controlled in an unsupervised manner. Experiments using the VCTK and Blizzard2012 datasets show the VAE helps VoiceLoop to generate higher quality speech and to control the expressions in its synthesized speech by incorporating global characteristics into the speech generating process.",
"title": ""
},
{
"docid": "a1d58b3a9628dc99edf53c1112dc99b8",
"text": "Multiple criteria decision-making (MCDM) research has developed rapidly and has become a main area of research for dealing with complex decision problems. The purpose of the paper is to explore the performance evaluation model. This paper develops an evaluation model based on the fuzzy analytic hierarchy process and the technique for order performance by similarity to ideal solution, fuzzy TOPSIS, to help the industrial practitioners for the performance evaluation in a fuzzy environment where the vagueness and subjectivity are handled with linguistic values parameterized by triangular fuzzy numbers. The proposed method enables decision analysts to better understand the complete evaluation process and provide a more accurate, effective, and systematic decision support tool. 2010 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "02bd814b19eacf70339218f910c9a644",
"text": "BACKGROUND\nAlthough \"traditional\" face-lifting techniques can achieve excellent improvement along the jawline and neck, they often have little impact on the midface area. Thus, many different types of procedures have been developed to provide rejuvenation in this region, usually contemplating various dissection planes, incisions, and suspension vectors.\n\n\nMETHODS\nA 7-year observational study of 350 patients undergoing midface lift was analyzed. The authors suspended the midface flap, anchoring to the deep temporal aponeurosis with a suspender-like suture (superolateral vector), or directly to the lower orbital rim with a belt-like suture (superomedial vector). Subjective and objective methods were used to evaluate the results. The subjective methods included a questionnaire completed by the patients. The objective method involved the evaluation of preoperative and postoperative photographs by a three-member jury instructed to compare the \"critical\" anatomical areas of the midface region: malar eminence, nasojugal groove, nasolabial fold, and jowls in the lower portion of the cheeks. The average follow-up period was 24 months.\n\n\nRESULTS\nHigh satisfaction was noticeable from the perceptions of both the jury and the patients. Objective evaluation evidenced that midface lift with temporal anchoring was more efficient for the treatment of malar eminence, whereas midface lift with transosseous periorbital anchoring was more efficient for the treatment of nasojugal groove.\n\n\nCONCLUSIONS\nThe most satisfying aspect of the adopted techniques is a dramatic facial rejuvenation and preservation of the patient's original youthful identity. Furthermore, choosing the most suitable technique respects the patient's needs and enables correction of the specific defects.\n\n\nCLINICAL QUESTION/LEVEL OF EVIDENCE\nTherapeutic, IV.",
"title": ""
},
{
"docid": "6ba76c4e9cbe20297bbf662250d6dc91",
"text": "Interactive TV research encompasses a rather diverse body of work (e.g. multimedia, HCI, CSCW, UIST, user modeling, media studies) that has accumulated over the past 20 years. In this article, we highlight the state-of-the-art and consider two basic issues: What is interactive TV research? Can it help us reinvent the practices of creating, sharing and watching TV? We survey the literature and identify three concepts that have been inherent in interactive TV research: 1) interactive TV as content creation, 2) interactive TV as a content and experience sharing process, and 3) interactive TV as control of audiovisual content. We propose this simple taxonomy (create-share-control) as an evolutionary step over the traditional hierarchical produce-distribute-consume paradigm. Moreover, we highlight the importance of sociability in all phases of the create-share-control model.",
"title": ""
},
{
"docid": "54b43b5e3545710dfe37f55b93084e34",
"text": "Cloud computing is a model for delivering information technology services, wherein resources are retrieved from the Internet through web-based tools and applications instead of a direct connection to a server. The capability to provision and release cloud computing resources with minimal management effort or service provider interaction led to the rapid increase of the use of cloud computing. Therefore, balancing cloud computing resources to provide better performance and services to end users is important. Load balancing in cloud computing means balancing three important stages through which a request is processed. The three stages are data center selection, virtual machine scheduling, and task scheduling at a selected data center. User task scheduling plays a significant role in improving the performance of cloud services. This paper presents a review of various energy-efficient task scheduling methods in a cloud environment. A brief analysis of various scheduling parameters considered in these methods is also presented. The results show that the best power-saving percentage level can be achieved by using both DVFS and DNS.",
"title": ""
},
{
"docid": "e519d705cd52b4eb24e4e936b849b3ce",
"text": "Computer manufacturers spend a huge amount of time, resources, and money in designing new systems and newer configurations, and their ability to reduce costs, charge competitive prices and gain market share depends on how good these systems perform. In this work, we develop predictive models for estimating the performance of systems by using performance numbers from only a small fraction of the overall design space. Specifically, we first develop three models, two based on artificial neural networks and another based on linear regression. Using these models, we analyze the published Standard Performance Evaluation Corporation (SPEC) benchmark results and show that by using the performance numbers of only 2% and 5% of the machines in the design space, we can estimate the performance of all the systems within 9.1% and 4.6% on average, respectively. Then, we show that the performance of future systems can be estimated with less than 2.2% error rate on average by using the data of systems from a previous year. We believe that these tools can accelerate the design space exploration significantly and aid in reducing the corresponding research/development cost and time-to-market.",
"title": ""
},
{
"docid": "5184c27b7387a0cbedb1c3a393f797fa",
"text": "Emulator-based dynamic analysis has been widely deployed in Android application stores. While it has been proven effective in vetting applications on a large scale, it can be detected and evaded by recent Android malware strains that carry detection heuristics. Using such heuristics, an application can check the presence or contents of certain artifacts and infer the presence of emulators. However, there exists little work that systematically discovers those heuristics that would be eventually helpful to prevent malicious applications from bypassing emulator-based analysis. To cope with this challenge, we propose a framework called Morpheus that automatically generates such heuristics. Morpheus leverages our insight that an effective detection heuristic must exploit discrepancies observable by an application. To this end, Morpheus analyzes the application sandbox and retrieves observable artifacts from both Android emulators and real devices. Afterwards, Morpheus further analyzes the retrieved artifacts to extract and rank detection heuristics. The evaluation of our proof-of-concept implementation of Morpheus reveals more than 10,000 novel detection heuristics that can be utilized to detect existing emulator-based malware analysis tools. We also discuss the discrepancies in Android emulators and potential countermeasures.",
"title": ""
},
{
"docid": "2e65ae613aa80aac27d5f8f6e00f5d71",
"text": "Industrial systems, e.g., wind turbines, generate big amounts of data from reliable sensors with high velocity. As it is unfeasible to store and query such big amounts of data, only simple aggregates are currently stored. However, aggregates remove fluctuations and outliers that can reveal underlying problems and limit the knowledge to be gained from historical data. As a remedy, we present the distributed Time Series Management System (TSMS) ModelarDB that uses models to store sensor data. We thus propose an online, adaptive multi-model compression algorithm that maintains data values within a user-defined error bound (possibly zero). We also propose (i) a database schema to store time series as models, (ii) methods to push-down predicates to a key-value store utilizing this schema, (iii) optimized methods to execute aggregate queries on models, (iv) a method to optimize execution of projections through static code-generation, and (v) dynamic extensibility that allows new models to be used without recompiling the TSMS. Further, we present a general modular distributed TSMS architecture and its implementation, ModelarDB, as a portable library, using Apache Spark for query processing and Apache Cassandra for storage. An experimental evaluation shows that, unlike current systems, ModelarDB hits a sweet spot and offers fast ingestion, good compression, and fast, scalable online aggregate query processing at the same time. This is achieved by dynamically adapting to data sets using multiple models. The system degrades gracefully as more outliers occur and the actual errors are much lower than the bounds. PVLDB Reference Format: Søren Kejser Jensen, Torben Bach Pedersen, Christian Thomsen. ModelarDB: Modular Model-Based Time Series Management with Spark and Cassandra. PVLDB, 11(11): 1688-1701, 2018. DOI: https://doi.org/10.14778/3236187.3236215",
"title": ""
},
{
"docid": "99aaea5ec8f90994a9fa01bfc0131ee2",
"text": "Beyond simply acting as thoroughfares for motor vehicles, urban streets often double as public spaces. Urban streets are places where people walk, shop, meet, and generally engage in the diverse array of social and recreational activities that, for many, are what makes urban living enjoyable. And beyond even these quality-of-life benefits, pedestrian-friendly urban streets have been increasingly linked to a host of highly desirable social outcomes, including economic growth and innovation (Florida, ), improvements in air quality (Frank et al., ), and increased physical fitness and health (Frank et al., ), to name only a few. For these reasons, many groups and individuals encourage the design of “livable” streets, or streets that seek to better integrate the needs of pedestrians and local developmental objectives into a roadway’s design. There has been a great deal of work describing the characteristics of livable streets (see Duany et al., ; Ewing, ; Jacobs, ), and there is general consensus on their characteristics: livable streets, at a minimum, seek to enhance the pedestrian character of the street by providing a continuous sidewalk network and incorporating design features that minimize the negative impacts of motor vehicle use on pedestrians. Of particular importance is the role played by roadside features such as street trees and on-street parking, which serve to buffer the pedestrian realm from potentially hazardous oncoming traffic, and to provide spatial definition to the public right-of-way. Indeed, many livability advocates assert that trees, as much as any other single feature, can play a central role in enhancing a roadway’s livability (Duany et al., ; Jacobs, ). While most would agree that the inclusion of trees and other streetscape features enhances the aesthetic quality of a roadway, there is substantive disagreement about their safety effects (see Figure ). Conventional engineering practice encourages the design of roadsides that will allow a vehicle leaving the travelway to safely recover before encountering a potentially hazardous fixed object. When one considers the aggregate statistics on run-off-roadway crashes, there is indeed ",
"title": ""
},
{
"docid": "3d9187bbc9a6bad0208ff560b3bcb57d",
"text": "Properties of networks are often characterized in terms of features such as node degree distributions, average path lengths, diameters, or clustering coefficients. Here, we study shortest path length distributions. On the one hand, average as well as maximum distances can be determined therefrom; on the other hand, they are closely related to the dynamics of network spreading processes. Because of the combinatorial nature of networks, we apply maximum entropy arguments to derive a general, physically plausible model. In particular, we establish the generalized Gamma distribution as a continuous characterization of shortest path length histograms of networks or arbitrary topology. Experimental evaluations corroborate our theoretical results.",
"title": ""
},
{
"docid": "226fa477fa59b930639435f76ab6a621",
"text": "Mobile Augmented Reality (AR) is most commonly implemented using a camera and a flat screen. Such implementation removes binocular disparity from users' observation. To compensate, people use alternative depth cues (e.g. depth ordering). However, these cues may also get distorted in certain AR implementations, creating depth distortion which is problematic in situations where precise hand interaction within AR workspace is required such as when transcribing augmented instructions to physical objects (e.g. virtual tracing -- creating a physical sketch on a 2D or 3D object given a virtual image on a mobile device). In this paper we explore how depth distortion affects 3D virtual tracing by implementing a first of its kind 3D virtual tracing prototype and run an observational study. Drawing performance exceeded our expectations suggesting that the lack of visual depth cues, whilst holding the object in hand, is not as problematic as initially predicted. However, when placing the object on the stand and drawing with only one hand (the other is used for holding the phone) their performance drastically decreased.",
"title": ""
},
{
"docid": "a47e0a04383cc379994bfae6d929e0f6",
"text": "This paper shows that echo state networks are universal uniform approximants in the context of discrete-time fading memory filters with uniformly bounded inputs defined on negative infinite times. This result guarantees that any fading memory input/output system in discrete time can be realized as a simple finite-dimensional neural network-type state-space model with a static linear readout map. This approximation is valid for infinite time intervals. The proof of this statement is based on fundamental results, also presented in this work, about the topological nature of the fading memory property and about reservoir computing systems generated by continuous reservoir maps.",
"title": ""
},
{
"docid": "aedb6c6bce85ca8c58b3a4ef0850f3ff",
"text": "Data assurance and resilience are crucial security issues in cloud-based IoT applications. With the widespread adoption of drones in IoT scenarios such as warfare, agriculture and delivery, effective solutions to protect data integrity and communications between drones and the control system have been in urgent demand to prevent potential vulnerabilities that may cause heavy losses. To secure drone communication during data collection and transmission, as well as preserve the integrity of collected data, we propose a distributed solution by utilizing blockchain technology along with the traditional cloud server. Instead of registering the drone itself to the blockchain, we anchor the hashed data records collected from drones to the blockchain network and generate a blockchain receipt for each data record stored in the cloud, reducing the burden of moving drones with the limit of battery and process capability while gaining enhanced security guarantee of the data. This paper presents the idea of securing drone data collection and communication in combination with a public blockchain for provisioning data integrity and cloud auditing. The evaluation shows that our system is a reliable and distributed system for drone data assurance and resilience with acceptable overhead and scalability for a large number of drones.",
"title": ""
},
{
"docid": "45c3c54043337e91a44e71945f4d63dd",
"text": "Neutrophils are being increasingly recognized as an important element in tumor progression. They have been shown to exert important effects at nearly every stage of tumor progression with a number of studies demonstrating that their presence is critical to tumor development. Novel aspects of neutrophil biology have recently been elucidated and its contribution to tumorigenesis is only beginning to be appreciated. Neutrophil extracellular traps (NETs) are neutrophil-derived structures composed of DNA decorated with antimicrobial peptides. They have been shown to trap and kill microorganisms, playing a critical role in host defense. However, their contribution to tumor development and metastasis has recently been demonstrated in a number of studies highlighting NETs as a potentially important therapeutic target. Here, studies implicating NETs as facilitators of tumor progression and metastasis are reviewed. In addition, potential mechanisms by which NETs may exert these effects are explored. Finally, the ability to target NETs therapeutically in human neoplastic disease is highlighted.",
"title": ""
},
{
"docid": "bde769df506e361bf374bd494fc5db6f",
"text": "Molded interconnect devices (MID) allow the realization of electronic circuits on injection molded thermoplastics. MID antennas can be manufactured as part of device casings without the need for additional printed circuit boards or attachment of antennas printed on foil. Baluns, matching networks, amplifiers and connectors can be placed on the polymer in the vicinity of the antenna. A MID dipole antenna for 1 GHz is designed, manufactured and measured. A prototype of the antenna is built with laser direct structuring (LDS) on a Xantar LDS 3720 substrate. Measured return loss and calibrated gain patterns are compared to simulation results.",
"title": ""
},
{
"docid": "37c4c0d309c9543f3d9e3744b2362e4d",
"text": "The paper presents to develop a new control strategy of limiting the dc-link voltage fluctuation for a back-to-back pulsewidth modulation converter in a doubly fed induction generator (DFIG) for wind turbine systems. The reasons of dc-link voltage fluctuation are analyzed. An improved control strategy with the instantaneous rotor power feedback is proposed to limit the fluctuation range of the dc-link voltage. An experimental rig is set up to valid the proposed strategy, and the dynamic performances of the DFIG are compared with the traditional control method under a constant grid voltage. Furthermore, the capabilities of keeping the dc-link voltage stable are also compared in the ride-through control of DFIG during a three-phase grid fault, by using a developed 2 MW DFIG wind power system model. Both the experimental and simulation results have shown that the proposed control strategy is more effective, and the fluctuation of the dc-link voltage may be successfully limited in a small range under a constant grid voltage and a non-serious grid voltage dip.",
"title": ""
},
{
"docid": "5ebd92444b69b2dd8e728de2381f3663",
"text": "A mind is a computer.",
"title": ""
},
{
"docid": "024cc15c164656f90ade55bf3c391405",
"text": "Unmanned aerial vehicles (UAVs), also known as drones have many applications and they are a current trend across many industries. They can be used for delivery, sports, surveillance, professional photography, cinematography, military combat, natural disaster assistance, security, and the list grows every day. Programming opens an avenue to automate many processes of daily life and with the drone as aerial programmable eyes, security and surveillance can become more efficient and cost effective. At Barry University, parking is becoming an issue as the number of people visiting the school greatly outnumbers the convenient parking locations. This has caused a multitude of hazards in parking lots due to people illegally parking, as well as unregistered vehicles parking in reserved areas. In this paper, we explain how automated drone surveillance is utilized to detect unauthorized parking at Barry University. The automated process is incorporated into Java application and completed in three steps: collecting visual data, processing data automatically, and sending automated responses and queues to the operator of the system.",
"title": ""
},
{
"docid": "8750fc51d19bbf0cbae2830638f492fd",
"text": "Smartphones are increasingly becoming an ordinary part of our daily lives. With their remarkable capacity, applications used in these devices are extremely varied. In terms of language teaching, the use of these applications has opened new windows of opportunity, innovatively shaping the way instructors teach and students learn. This 4 week-long study aimed to investigate the effectiveness of a mobile application on teaching 40 figurative idioms from the Michigan Corpus of Academic Spoken English (MICASE) corpus compared to traditional activities. Quasi-experimental research design with pretest and posttest was employed to determine the differences between the scores of the control (n=25) and the experimental group (n=25) formed with convenience sampling. Results indicate that participants in the experimental group performed significantly better in the posttest, demonstrating the effectiveness of the mobile application used in this study on learning idioms. The study also provides recommendations towards the use of mobile applications in teaching vocabulary.",
"title": ""
}
] |
scidocsrr
|
716b6f6cedad893bc110b912526f0873
|
GestureWrist and GesturePad: Unobtrusive Wearable Interaction Devices
|
[
{
"docid": "24f141bd7a29bb8922fa010dd63181a6",
"text": "This paper reports on the development of a hand to machine interface device that provides real-time gesture, position and orientation information. The key element is a glove and the device as a whole incorporates a collection of technologies. Analog flex sensors on the glove measure finger bending. Hand position and orientation are measured either by ultrasonics, providing five degrees of freedom, or magnetic flux sensors, which provide six degrees of freedom. Piezoceramic benders provide the wearer of the glove with tactile feedback. These sensors are mounted on the light-weight glove and connected to the driving hardware via a small cable.\nApplications of the glove and its component technologies include its use in conjunction with a host computer which drives a real-time 3-dimensional model of the hand allowing the glove wearer to manipulate computer-generated objects as if they were real, interpretation of finger-spelling, evaluation of hand impairment in addition to providing an interface to a visual programming language.",
"title": ""
},
{
"docid": "c526e32c9c8b62877cb86bc5b097e2cf",
"text": "This paper proposes a new field of user interfaces called multi-computer direct manipulation and presents a penbased direct manipulation technique that can be used for data transfer between different computers as well as within the same computer. The proposed Pick-andDrop allows a user to pick up an object on a display and drop it on another display as if he/she were manipulating a physical object. Even though the pen itself does not have storage capabilities, a combination of Pen-ID and the pen manager on the network provides the illusion that the pen can physically pick up and move a computer object. Based on this concept, we have built several experimental applications using palm-sized, desk-top, and wall-sized pen computers. We also considered the importance of physical artifacts in designing user interfaces in a future computing environment.",
"title": ""
}
] |
[
{
"docid": "e2666b0eed30a4eed2ad0cde07324d73",
"text": "It is logical that the requirement for antioxidant nutrients depends on a person's exposure to endogenous and exogenous reactive oxygen species. Since cigarette smoking results in an increased cumulative exposure to reactive oxygen species from both sources, it would seem cigarette smokers would have an increased requirement for antioxidant nutrients. Logic dictates that a diet high in antioxidant-rich foods such as fruits, vegetables, and spices would be both protective and a prudent preventive strategy for smokers. This review examines available evidence of fruit and vegetable intake, and supplementation of antioxidant compounds by smokers in an attempt to make more appropriate nutritional recommendations to this population.",
"title": ""
},
{
"docid": "d90a6f0b13b42ea44d214b3584fd41d7",
"text": "Much work on the demographics of social media platforms such as Twitter has focused on the properties of individuals, such as gender or age. However, because credible detectors for organization accounts do not exist, these and future largescale studies of human behavior on social media can be contaminated by the presence of accounts belonging to organizations. We analyze organizations on Twitter to assess their distinct behavioral characteristics and determine what types of organizations are active. We first create a dataset of manually classified accounts from a representative sample of Twitter and then introduce a classifier to distinguish between organizational and personal accounts. In addition, we find that although organizations make up less than 10% of the accounts, they are significantly more connected, with an order of magnitude more friends and followers.",
"title": ""
},
{
"docid": "0a916e98a315c44a5be68bb1f9aef9a3",
"text": "Knowledge bases, which consist of concepts, entities, attributes and relations, are increasingly important in a wide range of applications. We argue that knowledge about attributes (of concepts or entities) plays a critical role in inferencing. In this paper, we propose methods to derive attributes for millions of concepts and we quantify the typicality of the attributes with regard to their corresponding concepts. We employ multiple data sources such as web documents, search logs, and existing knowledge bases, and we derive typicality scores for attributes by aggregating different distributions derived from different sources using different methods. To the best of our knowledge, ours is the first approach to integrate concept- and instance-based patterns into probabilistic typicality scores that scale to broad concept space. We have conducted extensive experiments to show the effectiveness of our approach.",
"title": ""
},
{
"docid": "451a52573c5a4d81ea8a58a583afbca7",
"text": "Sharding is a fundamental building block of large-scale applications, but most have their own custom, ad-hoc implementations. Our goal is to make sharding as easily reusable as a filesystem or lock manager. Slicer is Google’s general purpose sharding service. It monitors signals such as load hotspots and server health to dynamically shard work over a set of servers. Its goals are to maintain high availability and reduce load imbalance while minimizing churn from moved work. In this paper, we describe Slicer’s design and implementation. Slicer has the consistency and global optimization of a centralized sharder while approaching the high availability, scalability, and low latency of systems that make local decisions. It achieves this by separating concerns: a reliable data plane forwards requests, and a smart control plane makes load-balancing decisions off the critical path. Slicer’s small but powerful API has proven useful and easy to adopt in dozens of Google applications. It is used to allocate resources for web service front-ends, coalesce writes to increase storage bandwidth, and increase the efficiency of a web cache. It currently handles 2-7M req/s of production traffic. The median production Slicer-managed workload uses 63% fewer resources than it would with static sharding.",
"title": ""
},
{
"docid": "001b5a976b6b6ccb15ab80ead4617422",
"text": "Multivariate time-series modeling and forecasting is an important problem with numerous applications. Traditional approaches such as VAR (vector auto-regressive) models and more recent approaches such as RNNs (recurrent neural networks) are indispensable tools in modeling time-series data. In many multivariate time series modeling problems, there is usually a significant linear dependency component, for which VARs are suitable, and a nonlinear component, for which RNNs are suitable. Modeling such times series with only VAR or only RNNs can lead to poor predictive performance or complex models with large training times. In this work, we propose a hybrid model called R2N2 (Residual RNN), which first models the time series with a simple linear model (like VAR) and then models its residual errors using RNNs. R2N2s can be trained using existing algorithms for VARs and RNNs. Through an extensive empirical evaluation on two real world datasets (aviation and climate domains), we show that R2N2 is competitive, usually better than VAR or RNN, used alone. We also show that R2N2 is faster to train as compared to an RNN, while requiring less number of hidden units.",
"title": ""
},
{
"docid": "cf4509b8d2b458f608a7e72165cdf22b",
"text": "Nowadays, blockchain is becoming a synonym for distributed ledger technology. However, blockchain is only one of the specializations in the field and is currently well-covered in existing literature, but mostly from a cryptographic point of view. Besides blockchain technology, a new paradigm is gaining momentum: directed acyclic graphs. The contribution presented in this paper is twofold. Firstly, the paper analyzes distributed ledger technology with an emphasis on the features relevant to distributed systems. Secondly, the paper analyses the usage of directed acyclic graph paradigm in the context of distributed ledgers, and compares it with the blockchain-based solutions. The two paradigms are compared using representative implementations: Bitcoin, Ethereum and Nano. We examine representative solutions in terms of the applied data structures for maintaining the ledger, consensus mechanisms, transaction confirmation confidence, ledger size, and scalability.",
"title": ""
},
{
"docid": "44bffd6caa0d90798f8ebc21a10fd248",
"text": "INTRODUCTION\nThis study describes quality indicators for the pre-analytical process, grouping errors according to patient risk as critical or major, and assesses their evaluation over a five-year period.\n\n\nMATERIALS AND METHODS\nA descriptive study was made of the temporal evolution of quality indicators, with a study population of 751,441 analytical requests made during the period 2007-2011. The Runs Test for randomness was calculated to assess changes in the trend of the series, and the degree of control over the process was estimated by the Six Sigma scale.\n\n\nRESULTS\nThe overall rate of critical pre-analytical errors was 0.047%, with a Six Sigma value of 4.9. The total rate of sampling errors in the study period was 13.54% (P = 0.003). The highest rates were found for the indicators \"haemolysed sample\" (8.76%), \"urine sample not submitted\" (1.66%) and \"clotted sample\" (1.41%), with Six Sigma values of 3.7, 3.7 and 2.9, respectively.\n\n\nCONCLUSION\nThe magnitude of pre-analytical errors was accurately valued. While processes that triggered critical errors are well controlled, the results obtained for those regarding specimen collection are borderline unacceptable; this is particularly so for the indicator \"haemolysed sample\".",
"title": ""
},
{
"docid": "722f7073b9bf9cf9363eed0d21ae8cb4",
"text": "By virtue of the increasingly large amount of various sensors, information about the same object can be collected from multiple views. These mutually enriched information can help many real-world applications, such as daily activity recognition in which both video cameras and on-body sensors are continuously collecting information. Such multivariate time series (m.t.s.) data from multiple views can lead to a significant improvement of classification tasks. However, the existing methods for time series data classification only focus on single-view data, and the benefits of mutual-support multiple views are not taken into account. In light of this challenge, we propose a novel approach, named Multi-view Discriminative Bilinear Projections (MDBP), for extracting discriminative features from multi-view m.t.s. data. First, MDBP keeps the original temporal structure of m.t.s. data, and projects m.t.s. from different views onto a shared latent subspace. Second, MDBP incorporates discriminative information by minimizing the within-class separability and maximizing the between-class separability of m.t.s. in the shared latent subspace. Moreover, a Laplacian regularization term is designed to preserve the temporal smoothness within m.t.s.. Extensive experiments on two real-world datasets demonstrate the effectiveness of our approach. Compared to the state-of-the-art multi-view learning and m.t.s. classification methods, our approach greatly improves the classification accuracy due to the full exploration of multi-view streaming data. Moreover, by using a feature fusion strategy, our approach further improves the classification accuracy by at least 10%.",
"title": ""
},
{
"docid": "269cff08201fd7815e3ea2c9a786d38b",
"text": "In this paper, we are interested in developing compositional models to explicit representing pose, parts and attributes and tackling the tasks of attribute recognition, pose estimation and part localization jointly. This is different from the recent trend of using CNN-based approaches for training and testing on these tasks separately with a large amount of data. Conventional attribute models typically use a large number of region-based attribute classifiers on parts of pre-trained pose estimator without explicitly detecting the object or its parts, or considering the correlations between attributes. In contrast, our approach jointly represents both the object parts and their semantic attributes within a unified compositional hierarchy. We apply our attributed grammar model to the task of human parsing by simultaneously performing part localization and attribute recognition. We show our modeling helps performance improvements on pose-estimation task and also outperforms on other existing methods on attribute prediction task.",
"title": ""
},
{
"docid": "6b92580dafc9baf21393d8f265efd5fd",
"text": "Refactoring and, in particular, remodularization operations can be performed to repair the design of a software system and remove the erosion caused by software evolution. Various approaches have been proposed to support developers during the remodularization of a software system. Most of these approaches are based on the underlying assumption that developers pursue an optimal balance between cohesion and coupling when modularizing the classes of their systems. Thus, a remodularization recommender proposes a solution that implicitly provides a (near) optimal balance between such quality attributes. However, there is still no empirical evidence that such a balance is the desideratum by developers. This article aims at analyzing both objectively and subjectively the aforementioned phenomenon. Specifically, we present the results of (1) a large study analyzing the modularization quality, in terms of package cohesion and coupling, of 100 open-source systems, and (2) a survey conducted with 29 developers aimed at understanding the driving factors they consider when performing modularization tasks. The results achieved have been used to distill a set of lessons learned that might be considered to design more effective remodularization recommenders.",
"title": ""
},
{
"docid": "5da804fa4c1474e27a1c91fcf5682e20",
"text": "We present an overview of Candide, a system for automatic translat ion of French text to English text. Candide uses methods of information theory and statistics to develop a probabili ty model of the translation process. This model, which is made to accord as closely as possible with a large body of French and English sentence pairs, is then used to generate English translations of previously unseen French sentences. This paper provides a tutorial in these methods, discussions of the training and operation of the system, and a summary of test results. 1. I n t r o d u c t i o n Candide is an experimental computer program, now in its fifth year of development at IBM, for translation of French text to Enghsh text. Our goal is to perform fuRy-automatic, high-quality text totext translation. However, because we are still far from achieving this goal, the program can be used in both fully-automatic and translator 's-assistant modes. Our approach is founded upon the statistical analysis of language. Our chief tools axe the source-channel model of communication, parametric probabili ty models of language and translation, and an assortment of numerical algorithms for training such models from examples. This paper presents elementary expositions of each of these ideas, and explains how they have been assembled to produce Caadide. In Section 2 we introduce the necessary ideas from information theory and statistics. The reader is assumed to know elementary probabili ty theory at the level of [1]. In Sections 3 and 4 we discuss our language and translation models. In Section 5 we describe the operation of Candide as it translates a French document. In Section 6 we present results of our internal evaluations and the AB.PA Machine Translation Project evaluations. Section 7 is a summary and conclusion. 2 . Stat is t ical Trans la t ion Consider the problem of translating French text to English text. Given a French sentence f , we imagine that it was originally rendered as an equivalent Enghsh sentence e. To obtain the French, the Enghsh was t ransmit ted over a noisy communication channel, which has the curious property that English sentences sent into it emerge as their French translations. The central assumption of Candide's design is that the characteristics of this channel can be determined experimentally, and expressed mathematically. *Current address: Renaissance Technologies, Stony Brook, NY ~ English-to-French I f e Channel \" _[ French-to-English -] Decoder 6 Figure 1: The Source-Channel Formalism of Translation. Here f is the French text to be translated, e is the putat ive original English rendering, and 6 is the English translation. This formalism can be exploited to yield French-to-English translations as follows. Let us write P r (e I f ) for the probability that e was the original English rendering of the French f. Given a French sentence f, the problem of automatic translation reduces to finding the English sentence tha t maximizes P.r(e I f) . That is, we seek 6 = argmsx e Pr (e I f) . By virtue of Bayes' Theorem, we have = argmax Pr(e If ) = argmax Pr(f I e)Pr(e) (1) e e The term P r ( f l e ) models the probabili ty that f emerges from the channel when e is its input. We call this function the translation model; its domain is all pairs (f, e) of French and English word-strings. The term Pr (e ) models the a priori probability that e was supp led as the channel input. We call this function the language model. 
Each of these fac tors the translation model and the language model independent ly produces a score for a candidate English translat ion e. The translation model ensures that the words of e express the ideas of f, and the language model ensures that e is a grammatical sentence. Candide sehcts as its translat ion the e that maximizes their product. This discussion begs two impor tant questions. First , where do the models P r ( f [ e) and Pr (e ) come from? Second, even if we can get our hands on them, how can we search the set of all English strings to find 6? These questions are addressed in the next two sections. 2.1. P robab i l i ty Models We begin with a brief detour into probabili ty theory. A probability model is a mathematical formula that purports to express the chance of some observation. A parametric model is a probability model with adjustable parameters, which can be changed to make the model bet ter match some body of data. Let us write c for a body of da ta to be modeled, and 0 for a vector of parameters. The quanti ty Prs (c ) , computed according to some formula involving c and 0, is called the hkelihood 157 [Human Language Technology, Plainsboro, 1994]",
"title": ""
},
{
"docid": "cc8b42f5b5f7de3695e169d99a0a6a22",
"text": "Dota 2 is a multiplayer online game in which two teams of five players control “heroes” and compete to earn gold and destroy enemy structures. Teamwork is essential and heroes are chosen to create a balanced team that will counter the opponents’ selections. We studied how the win rate depends on hero selection by performing logistic regression with models that incorporate interactions between heroes. Our models did not match the naive model without interactions which had a 62% win prediction rate, suggesting cleaner data or better models are needed.",
"title": ""
},
{
"docid": "23c2ea4422ec6057beb8fa0be12e57b3",
"text": "This study applied logistic regression to model urban growth in the Atlanta Metropolitan Area of Georgia in a GIS environment and to discover the relationship between urban growth and the driving forces. Historical land use/cover data of Atlanta were extracted from the 1987 and 1997 Landsat TM images. Multi-resolution calibration of a series of logistic regression models was conducted from 50 m to 300 m at intervals of 25 m. A fractal analysis pointed to 225 m as the optimal resolution of modeling. The following two groups of factors were found to affect urban growth in different degrees as indicated by odd ratios: (1) population density, distances to nearest urban clusters, activity centers and roads, and high/low density urban uses (all with odds ratios < 1); and (2) distance to the CBD, number of urban cells within a 7 · 7 cell window, bare land, crop/grass land, forest, and UTM northing coordinate (all with odds ratios > 1). A map of urban growth probability was calculated and used to predict future urban patterns. Relative operating characteristic (ROC) value of 0.85 indicates that the probability map is valid. It was concluded that despite logistic regression’s lack of temporal dynamics, it was spatially explicit and suitable for multi-scale analysis, and most importantly, allowed much deeper understanding of the forces driving the growth and the formation of the urban spatial pattern. 2006 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "1350f4e274947881f4562ab6596da6fd",
"text": "Calls for widespread Computer Science (CS) education have been issued from the White House down and have been met with increased enrollment in CS undergraduate programs. Yet, these programs often suffer from high attrition rates. One successful approach to addressing the problem of low retention has been a focus on group work and collaboration. This paper details the design of a collaborative ITS (CIT) for foundational CS concepts including basic data structures and algorithms. We investigate the benefit of collaboration to student learning while using the CIT. We compare learning gains of our prior work in a non-collaborative system versus two methods of supporting collaboration in the collaborative-ITS. In our study of 60 students, we found significant learning gains for students using both versions. We also discovered notable differences related to student perception of tutor helpfulness which we will investigate in subsequent work.",
"title": ""
},
{
"docid": "118738ca4b870e164c7be53e882a9ab4",
"text": "IA. Cause and Effect . . . . . . . . . . . . . . 465 1.2. Prerequisites of Selforganization . . . . . . . 467 1.2.3. Evolut ion Must S ta r t f rom R andom Even ts 467 1.2.2. Ins t ruc t ion Requires In format ion . . . . 467 1.2.3. In format ion Originates or Gains Value by S e l e c t i o n . . . . . . . . . . . . . . . 469 1.2.4. Selection Occurs wi th Special Substances under Special Conditions . . . . . . . . 470",
"title": ""
},
{
"docid": "49bc648b7588e3d6d512a65688ce23aa",
"text": "Many Chinese websites (relying parties) use OAuth 2.0 as the basis of a single sign-on service to ease password management for users. Many sites support five or more different OAuth 2.0 identity providers, giving users choice in their trust point. However, although OAuth 2.0 has been widely implemented (particularly in China), little attention has been paid to security in practice. In this paper we report on a detailed study of OAuth 2.0 implementation security for ten major identity providers and 60 relying parties, all based in China. This study reveals two critical vulnerabilities present in many implementations, both allowing an attacker to control a victim user’s accounts at a relying party without knowing the user’s account name or password. We provide simple, practical recommendations for identity providers and relying parties to enable them to mitigate these vulnerabilities. The vulnerabilities have been reported to the parties concerned.",
"title": ""
},
{
"docid": "5dee244ee673909c3ba3d3d174a7bf83",
"text": "Fingerprint has remained a very vital index for human recognition. In the field of security, series of Automatic Fingerprint Identification Systems (AFIS) have been developed. One of the indices for evaluating the contributions of these systems to the enforcement of security is the degree with which they appropriately verify or identify input fingerprints. This degree is generally determined by the quality of the fingerprint images and the efficiency of the algorithm. In this paper, some of the sub-models of an existing mathematical algorithm for the fingerprint image enhancement were modified to obtain new and improved versions. The new versions consist of different mathematical models for fingerprint image segmentation, normalization, ridge orientation estimation, ridge frequency estimation, Gabor filtering, binarization and thinning. The implementation was carried out in an environment characterized by Window Vista Home Basic operating system as platform and Matrix Laboratory (MatLab) as frontend engine. Synthetic images as well as real fingerprints obtained from the FVC2004 fingerprint database DB3 set A were used to test the adequacy of the modified sub-models and the resulting algorithm. The results show that the modified sub-models perform well with significant improvement over the original versions. The results also show the necessity of each level of the enhancement. KeywordAFIS; Pattern recognition; pattern matching; fingerprint; minutiae; image enhancement.",
"title": ""
},
{
"docid": "9e46a59546d270aa74ffbe48a968b07b",
"text": "We tested whether an opposing expert is an effective method of educating jurors about scientific validity by manipulating the methodological quality of defense expert testimony and the type of opposing prosecution expert testimony (none, standard, addresses the other expert's methodology) within the context of a written trial transcript. The presence of opposing expert testimony caused jurors to be skeptical of all expert testimony rather than sensitizing them to flaws in the other expert's testimony. Jurors rendered more guilty verdicts when they heard opposing expert testimony than when opposing expert testimony was absent, regardless of whether the opposing testimony addressed the methodology of the original expert or the validity of the original expert's testimony. Thus, contrary to the assumptions in the Supreme Court's decision in Daubert, opposing expert testimony may not be an effective safeguard against junk science in the courtroom.",
"title": ""
},
{
"docid": "ab71df85da9c1798a88b2bb3572bf24f",
"text": "In order to develop an efficient and reliable pulsed power supply for excimer dielectric barrier discharge (DBD) ultraviolet (UV) sources, a pulse generator using Marx topology is adopted. MOSFETs are used as switches. The 12-stage pulse generator operates with a voltage amplitude in the range of 0-5.5 kV. The repetition rate and pulsewidth can be adjusted from 0.1 to 50 kHz and 2 to 20 μs, respectively. It is used to excite KrCl* excilamp, a typical DBD UV source. In order to evaluate the performance of the pulse generator, a sinusoidal voltage power supply dedicated for DBD lamp is also used to excite the KrCl* excilamp. It shows that the lamp excited by the pulse generator has better performance with regard to radiant power and system efficiency. The influence of voltage amplitude, repetition rate, pulsewidth, and rise and fall times on radiant power and system efficiency is investigated using the pulse generator. An inductor is inserted between the pulse generator and the KrCl* excilamp to reduce electromagnetic interference and enhance system reliability.",
"title": ""
},
{
"docid": "03966c28d31e1c45896eab46a1dcce57",
"text": "For many applications it is useful to sample from a nite set of objects in accordance with some particular distribution. One approach is to run an ergodic (i.e., irreducible aperiodic) Markov chain whose stationary distribution is the desired distribution on this set; after the Markov chain has run for M steps, with M suuciently large, the distribution governing the state of the chain approximates the desired distribution. Unfortunately it can be diicult to determine how large M needs to be. We describe a simple variant of this method that determines on its own when to stop, and that outputs samples in exact accordance with the desired distribution. The method uses couplings, which have also played a role in other sampling schemes; however, rather than running the coupled chains from the present into the future, one runs from a distant point in the past up until the present, where the distance into the past that one needs to go is determined during the running of the algorithm itself. If the state space has a partial order that is preserved under the moves of the Markov chain, then the coupling is often particularly eecient. Using our approach one can sample from the Gibbs distributions associated with various statistical mechanics models (including Ising, random-cluster, ice, and dimer) or choose uniformly at random from the elements of a nite distributive lattice.",
"title": ""
}
] |
scidocsrr
|
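Editorial aside: the Candide passage earlier in this record describes the source-channel decision rule ê = argmax_e Pr(f | e) Pr(e). The minimal Python sketch below illustrates only that selection step; the candidate sentences and probability values are hypothetical toy numbers, not taken from the paper.

    # Toy illustration of the source-channel decision rule from the Candide passage:
    # pick the candidate e that maximizes Pr(f | e) * Pr(e).
    # Candidates and probabilities are made-up values for illustration only.

    def select_translation(candidates):
        """candidates: list of (english_sentence, translation_model_prob, language_model_prob)."""
        return max(candidates, key=lambda c: c[1] * c[2])[0]

    if __name__ == "__main__":
        toy_candidates = [
            ("the cat is on the table", 0.020, 0.010),    # hypothetical Pr(f|e), Pr(e)
            ("the cat is on table",     0.030, 0.001),
            ("cat the table on is",     0.040, 0.00001),
        ]
        print(select_translation(toy_candidates))  # prints "the cat is on the table"
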
0297c798889ab7375a493f2e6e1e761b
|
The career of metaphor.
|
[
{
"docid": "2a61c7755dd99999721c5c8941666770",
"text": "People construct ad hoc categories to achieve goals. For example, constructing the category of \"things to sell at a garage sale\" can be instrumental to achieving the goal of selling unwanted possessions. These categories differ from common categories (e.g., \"fruit,\" \"furniture\") in that ad hoc categories violate the correlational structure of the environment and are not well established in memory. Regarding the latter property, the category concepts, concept-to-instance associations, and instance-to-concept associations structuring ad hoc categories are shown to be much less established in memory than those of common categories. Regardless of these differences, however, ad hoc categories possess graded structures [i.e., typicality gradients) as salient as those structuring common categories. This appears to be the result of a similarity comparison process that imposes graded structure on any category regardless of type.",
"title": ""
}
] |
[
{
"docid": "2f0eb4a361ff9f09bda4689a1f106ff2",
"text": "The growth of Quranic digital publishing increases the need to develop a better framework to authenticate Quranic quotes with the original source automatically. This paper aims to demonstrate the significance of the quote authentication approach. We propose an approach to verify the e-citation of the Quranic quote as compared with original texts from the Quran. In this paper, we will concentrate mainly on discussing the Algorithm to verify the fundamental text for Quranic quotes.",
"title": ""
},
{
"docid": "ab3c0d4fecf7722a4b592473eb0de8dc",
"text": "IOT( Internet of Things) relying on exchange of information through radio frequency identification(RFID), is emerging as one of important technologies that find its use in various applications ranging from healthcare, construction, hospitality to transportation sector and many more. This paper describes about IOT, concentrating its use in improving and securing future shopping. This paper shows how RFID technology makes life easier and secure and thus helpful in the future. KeywordsIOT,RFID, Intelligent shopping, RFID tags, RFID reader, Radio frequency",
"title": ""
},
{
"docid": "66d584c242fb96527cef9b3b084d23a8",
"text": "Online discussions boards represent a rich repository of knowledge organized in a collection of user generated content. These conversational cyberspaces allow users to express opinions, ideas and pose questions and answers without imposing strict limitations about the content. This freedom, in turn, creates an environment in which discussions are not bounded and often stray from the initial topic being discussed. In this paper we focus on approaches to assess the relevance of posts to a thread and detecting when discussions have been steered off-topic. A set of metrics estimating the level of novelty in online discussion posts are presented. These metrics are based on topical estimation and contextual similarity between posts within a given thread. The metrics are aggregated to rank posts based on the degree of relevance they maintain. The aggregation scheme is data-dependent and is normalized relative to the post length.",
"title": ""
},
{
"docid": "20441819838ba1b60279e19523abe551",
"text": "Chinese remainder problem Given: rl, ... , rn E R (remainders) 11, ... , In ideals in R (moduli), such that Ii + Ij = R for all i =f. j Find: r E R, such that r == ri mod Ii for 1 ::: i ::: n The abstract Chinese remainder problem can be treated basically in the same way as the CRP over Euclidean domains. Again there is a Lagrangian and a Newtonian approach and one can show that the problem always has a solution and if r is a solution then the set of all solutions is given by r + II n ... n In. 3.1 Chinese remainder problem 57 That is, the map ¢J: r t-+ (r + h, ... , r + In) is a homomorphism from R onto nj=1 R/lj with kernel II n ... n In. However, in the absence of the Euclidean algorithm it is not possible to compute a solution of the abstract CRP. See Lauer (1983). A preconditioned Chinese remainder algorithm If the CRA is applied in a setting where many conversions w.r.t. a fixed set of moduli have to be computed, it is reasonable to precompute all partial results depending on the moduli alone. This idea leads to a preconditioned CRA, as described in Aho et al. (1974). Theorem 3.1.7. Let rl, ... , rn and ml, ... , mn be the remainders and moduli, respectively, of a CRP in the Euclidean domain D. Let m be the product of all the moduli. Let Ci = m/mi and di = ci l mod mi for 1 :::s i :::s n. Then n r = LCidiri mod m i=1 is a solution to the corresponding CRP. (3.l.1) Proof Since Ci is divisible by mj for j =1= i, we have Cidiri == 0 mod mj for j =1= i. Therefore n LCidiri == cjdjrj == rj mod mj, for all l:::s j :::s n . 0 i=1 A more detailed analysis of (3.l.1) reveals many common factors of the expressions Cidiri. Let us assume that n is a power of 2, n = 2t. Obviously, ml ..... mn/2 is a factor of Cidiri for all i > n/2 and m n/2+1 ..... mn is a factor of Cidiri for all i :::s n/2. So we could write (3.1.1) as",
"title": ""
},
{
"docid": "bf23a6fcf1a015d379dee393a294761c",
"text": "This study addresses the inconsistency of contemporary literature on defining the link between leadership styles and personality traits. The plethora of literature on personality traits has culminated into symbolic big five personality dimensions but there is still a dearth of research on developing representative leadership styles despite the perennial fascination with the subject. Absence of an unequivocal model for developing representative styles in conjunction with the use of several non-mutually exclusive existing leadership styles has created a discrepancy in developing a coherent link between leadership and personality. This study sums up 39 different styles of leadership into five distinct representative styles on the basis of similar theoretical underpinnings and common characteristics to explore how each of these five representative leadership style relates to personality dimensions proposed by big five model.",
"title": ""
},
{
"docid": "1f28f5efa70a6387b00e335a8cf1e1d0",
"text": "The two underlying requirements of face age progression, i.e. aging accuracy and identity permanence, are not well studied in the literature. In this paper, we present a novel generative adversarial network based approach. It separately models the constraints for the intrinsic subject-specific characteristics and the age-specific facial changes with respect to the elapsed time, ensuring that the generated faces present desired aging effects while simultaneously keeping personalized properties stable. Further, to generate more lifelike facial details, high-level age-specific features conveyed by the synthesized face are estimated by a pyramidal adversarial discriminator at multiple scales, which simulates the aging effects in a finer manner. The proposed method is applicable to diverse face samples in the presence of variations in pose, expression, makeup, etc., and remarkably vivid aging effects are achieved. Both visual fidelity and quantitative evaluations show that the approach advances the state-of-the-art.",
"title": ""
},
{
"docid": "97dfc2b23b527a05f7de443f10a89543",
"text": "Over-the-top mobile video streaming is invariably influenced by volatile network conditions which cause playback interruptions (stalling events), thereby impairing users’ quality of experience (QoE). Developing models that can accurately predict users’ QoE could enable the more efficient design of quality-control protocols for video streaming networks that reduce network operational costs while still delivering high-quality video content to the customers. Existing objective models that predict QoE are based on global video features, such as the number of stall events and their lengths, and are trained and validated on a small pool of ad hoc video datasets, most of which are not publicly available. The model we propose in this work goes beyond previous models as it also accounts for the fundamental effect that a viewer’s recent level of satisfaction or dissatisfaction has on their overall viewing experience. In other words, the proposed model accounts for and adapts to the recency, or hysteresis effect caused by a stall event in addition to accounting for the lengths, frequency of occurrence, and the positions of stall events factors that interact in a complex way to affect a user’s QoE. On the recently introduced LIVE-Avvasi Mobile Video Database, which consists of 180 distorted videos of varied content that are afflicted solely with over 25 unique realistic stalling events, we trained and validated our model to accurately predict the QoE, attaining standout QoE prediction performance.",
"title": ""
},
{
"docid": "7fe4f5ca8e770a51deef16d05f40b335",
"text": "Ultrasonic flow meters are gaining wide usage in commercial, industrial and medical applications. Major benefits of utilizing this type of flowmeter are higher accuracy, low maintenance (no moving parts), noninvasive flow measurement, and the ability to regularly diagnose health of the meter. This application note is intended as an introduction to ultrasonic time-of-flight (TOF) flow sensing using the TDC1000 ultrasonic analog-front-end (AFE) and the TDC7200 picosecond accurate stopwatch. Information regarding a typical off-the-shelf ultrasonic flow sensor is provided, along with related equations for calculation of flow velocity and flow rate. Included in the appendix is a summary of standards for water meters and a list of low cost sensors suitable for this application space. Topic ........................................................................................................................... Page",
"title": ""
},
{
"docid": "048ff79b90371eb86b9d62810cfea31f",
"text": "In October, 2006 Netflix released a dataset containing 100 million anonymous movie ratings and challenged the data mining, machine learning and computer science communities to develop systems that could beat the accuracy of its recommendation system, Cinematch. We briefly describe the challenge itself, review related work and efforts, and summarize visible progress to date. Other potential uses of the data are outlined, including its application to the KDD Cup 2007.",
"title": ""
},
{
"docid": "d7e2654767d1178871f3f787f7616a94",
"text": "We propose a nonparametric, probabilistic model for the automatic segmentation of medical images, given a training set of images and corresponding label maps. The resulting inference algorithms rely on pairwise registrations between the test image and individual training images. The training labels are then transferred to the test image and fused to compute the final segmentation of the test subject. Such label fusion methods have been shown to yield accurate segmentation, since the use of multiple registrations captures greater inter-subject anatomical variability and improves robustness against occasional registration failures. To the best of our knowledge, this manuscript presents the first comprehensive probabilistic framework that rigorously motivates label fusion as a segmentation approach. The proposed framework allows us to compare different label fusion algorithms theoretically and practically. In particular, recent label fusion or multiatlas segmentation algorithms are interpreted as special cases of our framework. We conduct two sets of experiments to validate the proposed methods. In the first set of experiments, we use 39 brain MRI scans - with manually segmented white matter, cerebral cortex, ventricles and subcortical structures - to compare different label fusion algorithms and the widely-used FreeSurfer whole-brain segmentation tool. Our results indicate that the proposed framework yields more accurate segmentation than FreeSurfer and previous label fusion algorithms. In a second experiment, we use brain MRI scans of 282 subjects to demonstrate that the proposed segmentation tool is sufficiently sensitive to robustly detect hippocampal volume changes in a study of aging and Alzheimer's Disease.",
"title": ""
},
{
"docid": "42c0f8504f26d46a4cc92d3c19eb900d",
"text": "Research into suicide prevention has been hampered by methodological limitations such as low sample size and recall bias. Recently, Natural Language Processing (NLP) strategies have been used with Electronic Health Records to increase information extraction from free text notes as well as structured fields concerning suicidality and this allows access to much larger cohorts than previously possible. This paper presents two novel NLP approaches – a rule-based approach to classify the presence of suicide ideation and a hybrid machine learning and rule-based approach to identify suicide attempts in a psychiatric clinical database. Good performance of the two classifiers in the evaluation study suggest they can be used to accurately detect mentions of suicide ideation and attempt within free-text documents in this psychiatric database. The novelty of the two approaches lies in the malleability of each classifier if a need to refine performance, or meet alternate classification requirements arises. The algorithms can also be adapted to fit infrastructures of other clinical datasets given sufficient clinical recording practice knowledge, without dependency on medical codes or additional data extraction of known risk factors to predict suicidal behaviour.",
"title": ""
},
{
"docid": "1d61e1eb5275444c6a2a3f8ad5c2865a",
"text": "We describe a new region descriptor and apply it to two problems, object detection and texture classification. The covariance of d-features, e.g., the three-dimensional color vector, the norm of first and second derivatives of intensity with respect to x and y, etc., characterizes a region of interest. We describe a fast method for computation of covariances based on integral images. The idea presented here is more general than the image sums or histograms, which were already published before, and with a series of integral images the covariances are obtained by a few arithmetic operations. Covariance matrices do not lie on Euclidean space, therefore,we use a distance metric involving generalized eigenvalues which also follows from the Lie group structure of positive definite matrices. Feature matching is a simple nearest neighbor search under the distance metric and performed extremely rapidly using the integral images. The performance of the covariance fetures is superior to other methods, as it is shown, and large rotations and illumination changes are also absorbed by the covariance matrix. European Conference on Computer Vision (ECCV) This work may not be copied or reproduced in whole or in part for any commercial purpose. Permission to copy in whole or in part without payment of fee is granted for nonprofit educational and research purposes provided that all such whole or partial copies include the following: a notice that such copying is by permission of Mitsubishi Electric Research Laboratories, Inc.; an acknowledgment of the authors and individual contributions to the work; and all applicable portions of the copyright notice. Copying, reproduction, or republishing for any other purpose shall require a license with payment of fee to Mitsubishi Electric Research Laboratories, Inc. All rights reserved. Copyright c © Mitsubishi Electric Research Laboratories, Inc., 2006 201 Broadway, Cambridge, Massachusetts 02139 Region Covariance: A Fast Descriptor for Detection and Classification Oncel Tuzel, Fatih Porikli, and Peter Meer 1 Computer Science Department, 2 Electrical and Computer Engineering Department, Rutgers University, Piscataway, NJ 08854 {otuzel, meer}@caip.rutgers.edu 3 Mitsubishi Electric Research Laboratories, Cambridge, MA 02139 {fatih}@merl.com Abstract. We describe a new region descriptor and apply it to two problems, object detection and texture classification. The covariance of d-features, e.g., the three-dimensional color vector, the norm of first and second derivatives of intensity with respect to x and y, etc., characterizes a region of interest. We describe a fast method for computation of covariances based on integral images. The idea presented here is more general than the image sums or histograms, which were already published before, and with a series of integral images the covariances are obtained by a few arithmetic operations. Covariance matrices do not lie on Euclidean space, therefore we use a distance metric involving generalized eigenvalues which also follows from the Lie group structure of positive definite matrices. Feature matching is a simple nearest neighbor search under the distance metric and performed extremely rapidly using the integral images. The performance of the covariance features is superior to other methods, as it is shown, and large rotations and illumination changes are also absorbed by the covariance matrix. We describe a new region descriptor and apply it to two problems, object detection and texture classification. 
The covariance of d-features, e.g., the three-dimensional color vector, the norm of first and second derivatives of intensity with respect to x and y, etc., characterizes a region of interest. We describe a fast method for computation of covariances based on integral images. The idea presented here is more general than the image sums or histograms, which were already published before, and with a series of integral images the covariances are obtained by a few arithmetic operations. Covariance matrices do not lie on Euclidean space, therefore we use a distance metric involving generalized eigenvalues which also follows from the Lie group structure of positive definite matrices. Feature matching is a simple nearest neighbor search under the distance metric and performed extremely rapidly using the integral images. The performance of the covariance features is superior to other methods, as it is shown, and large rotations and illumination changes are also absorbed by the covariance matrix.",
"title": ""
},
{
"docid": "2943c046bae638a287ddaf72129bee0e",
"text": "The use of graphene for fixed-beam reflectarray antennas at Terahertz (THz) is proposed. Graphene's unique electronic band structure leads to a complex surface conductivity at THz frequencies, which allows the propagation of very slow plasmonic modes. This leads to a drastic reduction of the electrical size of the array unit cell and thereby good array performance. The proposed reflectarray has been designed at 1.3 THz and comprises more than 25000 elements of size about λ0/16. The array reflective unit cell is analyzed using a full vectorial approach, taking into account the variation of the angle of incidence and assuming local periodicity. Good performance is obtained in terms of bandwidth, cross-polar, and grating lobes suppression, proving the feasibility of graphene-based reflectarrays and other similar spatially fed structures at Terahertz frequencies. This result is also a first important step toward reconfigurable THz reflectarrays using graphene electric field effect.",
"title": ""
},
{
"docid": "f87e8f9d733ed60cedfda1cbfe176cbf",
"text": "Image set classification finds its applications in a number of real-life scenarios such as classification from surveillance videos, multi-view camera networks and personal albums. Compared with single image based classification, it offers more promises and has therefore attracted significant research attention in recent years. Unlike many existing methods which assume images of a set to lie on a certain geometric surface, this paper introduces a deep learning framework which makes no such prior assumptions and can automatically discover the underlying geometric structure. Specifically, a Template Deep Reconstruction Model (TDRM) is defined whose parameters are initialized by performing unsupervised pre-training in a layer-wise fashion using Gaussian Restricted Boltzmann Machines (GRBMs). The initialized TDRM is then separately trained for images of each class and class-specific DRMs are learnt. Based on the minimum reconstruction errors from the learnt class-specific models, three different voting strategies are devised for classification. Extensive experiments are performed to demonstrate the efficacy of the proposed framework for the tasks of face and object recognition from image sets. Experimental results show that the proposed method consistently outperforms the existing state of the art methods.",
"title": ""
},
{
"docid": "7e17c1842a70e416f0a90bdcade31a8e",
"text": "A novel feeding system using substrate integrated waveguide (SIW) technique for antipodal linearly tapered slot array antenna (ALTSA) is presented in this paper. After making studies by simulations for a SIW fed ALTSA cell, a 1/spl times/8 ALTSA array fed by SIW feeding system at X-band is fabricated and measured, and the measured results show that this array antenna has a wide bandwidth and good performances.",
"title": ""
},
{
"docid": "2283e43c2bad5ac682fe185cb2b8a9c1",
"text": "As widely recognized in the literature, information technology (IT) investments have several special characteristics that make assessing their costs and benefits complicated. Here, we address the problem of evaluating a web content management system for both internal and external use. The investment is presently undergoing an evaluation process in a multinational company. We aim at making explicit the desired benefits and expected risks of the system investment. An evaluation hierarchy at general level is constructed. After this, a more detailed hierarchy is constructed to take into account the contextual issues. To catch the contextual issues key company representatives were interviewed. The investment alternatives are compared applying the principles of the Analytic Hierarchy Process (AHP). Due to the subjective and uncertain characteristics of the strategic IT investments a wide range of sensitivity analyses is performed.",
"title": ""
},
{
"docid": "9bbf9422ae450a17e0c46d14acf3a3e3",
"text": "This short paper outlines how polynomial chaos theory (PCT) can be utilized for manipulator dynamic analysis and controller design in a 4-DOF selective compliance assembly robot-arm-type manipulator with variation in both the link masses and payload. It includes a simple linear control algorithm into the formulation to show the capability of the PCT framework.",
"title": ""
},
{
"docid": "619a699d6e848ff692a581dc40a86a10",
"text": "Intelligent Transportation System (ITS) is a significant part of smart city, and short-term traffic flow prediction plays an important role in intelligent transportation management and route guidance. A number of models and algorithms based on time series prediction and machine learning were applied to short-term traffic flow prediction and achieved good results. However, most of the models require the length of the input historical data to be predefined and static, which cannot automatically determine the optimal time lags. To overcome this shortage, a model called Long Short-Term Memory Recurrent Neural Network (LSTM RNN) is proposed in this paper, which takes advantages of the three multiplicative units in the memory block to determine the optimal time lags dynamically. The dataset from Caltrans Performance Measurement System (PeMS) is used for building the model and comparing LSTM RNN with several well-known models, such as random walk(RW), support vector machine(SVM), single layer feed forward neural network(FFNN) and stacked autoencoder(SAE). The results show that the proposed prediction model achieves higher accuracy and generalizes well.",
"title": ""
},
{
"docid": "b68f0c4aa0b5638a2a426bf9bd97a2ab",
"text": "The interrelationship between ionizing radiation and the immune system is complex, multifactorial, and dependent on radiation dose/quality and immune cell type. High-dose radiation usually results in immune suppression. On the contrary, low-dose radiation (LDR) modulates a variety of immune responses that have exhibited the properties of immune hormesis. Although the underlying molecular mechanism is not fully understood yet, LDR has been used clinically for the treatment of autoimmune diseases and malignant tumors. These advancements in preclinical and clinical studies suggest that LDR-mediated immune modulation is a well-orchestrated phenomenon with clinical potential. We summarize recent developments in the understanding of LDR-mediated immune modulation, with an emphasis on its potential clinical applications.",
"title": ""
}
] |
scidocsrr
|
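Editorial aside: the Chinese remainder passage in the record above states the preconditioned CRA of Theorem 3.1.7, r = (sum_i c_i d_i r_i) mod m with c_i = m/m_i and d_i = c_i^{-1} mod m_i. The short Python sketch below implements exactly that recombination; the moduli and residues used are illustrative values only, not taken from the passage.

    # Sketch of the preconditioned Chinese remainder algorithm (Theorem 3.1.7 above):
    # precompute c_i = m / m_i and d_i = c_i^{-1} mod m_i once, then recombine any
    # residue vector (r_1, ..., r_n) as r = sum(c_i * d_i * r_i) mod m.
    from math import prod

    def precondition(moduli):
        m = prod(moduli)                                    # m = m_1 * ... * m_n
        c = [m // mi for mi in moduli]                      # c_i = m / m_i
        d = [pow(ci, -1, mi) for ci, mi in zip(c, moduli)]  # d_i = c_i^{-1} mod m_i
        return m, c, d

    def crt(residues, m, c, d):
        return sum(ci * di * ri for ci, di, ri in zip(c, d, residues)) % m

    if __name__ == "__main__":
        moduli = [3, 5, 7]                    # pairwise coprime, illustrative only
        m, c, d = precondition(moduli)
        r = crt([2, 3, 2], m, c, d)           # classic example: r = 23
        assert all(r % mi == ri for mi, ri in zip(moduli, [2, 3, 2]))
        print(r)
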
6631943e62c02be627313a2d5f410fb8
|
Unsupervised, Efficient and Semantic Expertise Retrieval
|
[
{
"docid": "66b2f59c4f46b917ff6755e2b2fbb39c",
"text": "Overview • Learning flexible word representations is the first step towards learning semantics. •The best current approach to learning word embeddings involves training a neural language model to predict each word in a sentence from its neighbours. – Need to use a lot of data and high-dimensional embeddings to achieve competitive performance. – More scalable methods translate to better results. •We propose a simple and scalable approach to learning word embeddings based on training lightweight models with noise-contrastive estimation. – It is simpler, faster, and produces better results than the current state-of-the art method.",
"title": ""
}
] |
[
{
"docid": "b5f9535fb63cae3d115e1e5bded4795c",
"text": "This study uses a hostage negotiation setting to demonstrate how a team of strategic police officers can utilize specific coping strategies to minimize uncertainty at different stages of their decision-making in order to foster resilient decision-making to effectively manage a high-risk critical incident. The presented model extends the existing research on coping with uncertainty by (1) applying the RAWFS heuristic (Lipshitz and Strauss in Organ Behav Human Decis Process 69:149–163, 1997) of individual decision-making under uncertainty to a team critical incident decision-making domain; (2) testing the use of various coping strategies during “in situ” team decision-making by using a live simulated hostage negotiation exercise; and (3) including an additional coping strategy (“reflection-in-action”; Schön in The reflective practitioner: how professionals think in action. Temple Smith, London, 1983) that aids naturalistic team decision-making. The data for this study were derived from a videoed strategic command meeting held within a simulated live hostage training event; these video data were coded along three themes: (1) decision phase; (2) uncertainty management strategy; and (3) decision implemented or omitted. Results illustrate that, when assessing dynamic and high-risk situations, teams of police officers cope with uncertainty by relying on “reduction” strategies to seek additional information and iteratively update these assessments using “reflection-in-action” (Schön 1983) based on previous experience. They subsequently progress to a plan formulation phase and use “assumption-based reasoning” techniques in order to mentally simulate their intended courses of action (Klein et al. 2007), and identify a preferred formulated strategy through “weighing the pros and cons” of each option. In the unlikely event that uncertainty persists to the plan execution phase, it is managed by “reduction” in the form of relying on plans and standard operating procedures or by “forestalling” and intentionally deferring the decision while contingency planning for worst-case scenarios.",
"title": ""
},
{
"docid": "e51f7fde238b0896df22d196b8c59c1a",
"text": "The aim of color constancy is to remove the effect of the color of the light source. As color constancy is inherently an ill-posed problem, most of the existing color constancy algorithms are based on specific imaging assumptions such as the grey-world and white patch assumptions. In this paper, 3D geometry models are used to determine which color constancy method to use for the different geometrical regions found in images. To this end, images are first classified into stages (rough 3D geometry models). According to the stage models, images are divided into different regions using hard and soft segmentation. After that, the best color constancy algorithm is selected for each geometry segment. As a result, light source estimation is tuned to the global scene geometry. Our algorithm opens the possibility to estimate the remote scene illumination color, by distinguishing nearby light source from distant illuminants. Experiments on large scale image datasets show that the proposed algorithm outperforms state-of-the-art single color constancy algorithms with an improvement of almost 14% of median angular error. When using an ideal classifier (i.e, all of the test images are correctly classified into stages), the performance of the proposed method achieves an improvement of 31% of median angular error compared to the best-performing single color constancy algorithm.",
"title": ""
},
{
"docid": "5f26bbe4c1a32f7806e4e9f2963a4f6f",
"text": "Over the millennia, microorganisms have evolved evasion strategies to overcome a myriad of chemical and environmental challenges, including antimicrobial drugs. Even before the first clinical use of antibiotics more than 60 years ago, resistant organisms had been isolated. Moreover, the potential problem of the widespread distribution of antibiotic resistant bacteria was recognized by scientists and healthcare specialists from the initial use of these drugs. Why is resistance inevitable and where does it come from? Understanding the molecular diversity that underlies resistance will inform our use of these drugs and guide efforts to develop new efficacious antibiotics.",
"title": ""
},
{
"docid": "e98b2cb8bfc56fd2eb75352eec0346a6",
"text": "Decreasing magnetic resonance (MR) image acquisition times can potentially reduce procedural cost and make MR examinations more accessible. Compressed sensing (CS)based image reconstruction methods, for example, decrease MR acquisition time by reconstructing high-quality images from data that were originally sampled at rates inferior to the NyquistShannon sampling theorem. Iterative algorithms with data regularization are the standard approach to solving ill-posed, CS inverse problems. These solutions are usually slow, therefore, preventing near-real time image reconstruction. Recently, deeplearning methods have been used to solve the CS MR reconstruction problem. These proposed methods have the advantage of being able to quickly reconstruct images in a single pass using an appropriately trained network. Some recent studies have demonstrated that the quality of their reconstruction equals and sometimes even surpasses the quality of the conventional iterative approaches. A variety of different network architectures (e.g., U-nets and Residual U-nets) have been proposed to tackle the CS reconstruction problem. A drawback of these architectures is that they typically only work on image domain data. For undersampled data, the images computed by applying the inverse Fast Fourier Transform (iFFT) are aliased. In this work we propose a hybrid architecture that works both in the k-space (or frequency-domain) and the image (or spatial) domains. Our network is composed of a complex-valued residual U-net in the k-space domain, an iFFT operation, and a real-valued Unet in the image domain. Our experiments demonstrated, using MR raw k-space data, that the proposed hybrid approach can potentially improve CS reconstruction compared to deep-learning networks that operate only in the image domain. In this study we compare our method with four previously published deep neural networks and examine their ability to reconstruct images that are subsequently used to generate regional volume estimates. We evaluated undersampling ratios of 75% and 80%. Our technique was ranked second in the quantitative analysis, but qualitative analysis indicated that our reconstruction performed the best in hard to reconstruct regions, such as the cerebellum. All images reconstructed with our method were successfully post-processed, and showed good volumetry agreement compared with the fully sampled reconstruction measures.",
"title": ""
},
{
"docid": "6656e5e9af70b883047d925d985a56ef",
"text": "This paper presents a case study of data mining modeling techniques for direct marketing. It focuses to three stages of the CRISP-DM process for data mining projects: data preparation, modeling, and evaluation. We address some gaps in previous studies, namely: selection of model hyper-parameters and controlling the problem of under-fitting and over-fitting; dealing with randomness and 'lucky' set composition; the role of variable selection and data saturation. In order to avoid overestimation of the model performance, we applied a double-testing procedure, which combines cross-validation, multiple runs over random selection of the folds and hyper-parameters, and multiple runs over random selection of partitions. The paper compares modeling techniques, such as neural nets, logistic regression, naive Bayes, linear and quadratic discriminant analysis, all tested at different levels of data saturation. To illustrate the issues discussed, we built predictive models, which outperform those proposed by other studies.",
"title": ""
},
{
"docid": "44050ba52838a583e2efb723b10f0234",
"text": "This paper presents a novel approach to the reconstruction of geometric models and surfaces from given sets of points using volume splines. It results in the representation of a solid by the inequality The volume spline is based on use of the Green’s function for interpolation of scalar function values of a chosen “carrier” solid. Our algorithm is capable of generating highly concave and branching objects automatically. The particular case where the surface is reconstructed from cross-sections is discussed too. Potential applications of this algorithm are in tomography, image processing, animation and CAD f o r bodies with complex surfaces.",
"title": ""
},
{
"docid": "f3c2663cb0341576d754bb6cd5f2c0f5",
"text": "This article surveys deformable models, a promising and vigorously researched computer-assisted medical image analysis technique. Among model-based techniques, deformable models offer a unique and powerful approach to image analysis that combines geometry, physics and approximation theory. They have proven to be effective in segmenting, matching and tracking anatomic structures by exploiting (bottom-up) constraints derived from the image data together with (top-down) a priori knowledge about the location, size and shape of these structures. Deformable models are capable of accommodating the significant variability of biological structures over time and across different individuals. Furthermore, they support highly intuitive interaction mechanisms that, when necessary, allow medical scientists and practitioners to bring their expertise to bear on the model-based image interpretation task. This article reviews the rapidly expanding body of work on the development and application of deformable models to problems of fundamental importance in medical image analysis, including segmentation, shape representation, matching and motion tracking.",
"title": ""
},
{
"docid": "651d048aaae1ce1608d3d9f0f09d4b9b",
"text": "We investigate here the behavior of the standard k-means clustering algorithm and several alternatives to it: the k-harmonic means algorithm due to Zhang and colleagues, fuzzy k-means, Gaussian expectation-maximization, and two new variants of k-harmonic means. Our aim is to find which aspects of these algorithms contribute to finding good clusterings, as opposed to converging to a low-quality local optimum. We describe each algorithm in a unified framework that introduces separate cluster membership and data weight functions. We then show that the algorithms do behave very differently from each other on simple low-dimensional synthetic datasets and image segmentation tasks, and that the k-harmonic means method is superior. Having a soft membership function is essential for finding high-quality clusterings, but having a non-constant data weight function is useful also.",
"title": ""
},
{
"docid": "78e712f5d052c08a7dcbc2ee6fd92f96",
"text": "Bug report contains a vital role during software development, However bug reports belongs to different categories such as performance, usability, security etc. This paper focuses on security bug and presents a bug mining system for the identification of security and non-security bugs using the term frequency-inverse document frequency (TF-IDF) weights and naïve bayes. We performed experiments on bug report repositories of bug tracking systems such as bugzilla and debugger. In the proposed approach we apply text mining methodology and TF-IDF on the existing historic bug report database based on the bug s description to predict the nature of the bug and to train a statistical model for manually mislabeled bug reports present in the database. The tool helps in deciding the priorities of the incoming bugs depending on the category of the bugs i.e. whether it is a security bug report or a non-security bug report, using naïve bayes. Our evaluation shows that our tool using TF-IDF is giving better results than the naïve bayes method.",
"title": ""
},
{
"docid": "bae04c2c53409e8362cdbe31f3999b3a",
"text": "This paper presents a method for vision based estimation of the pose of human hands in interaction with objects. Despite the fact that most robotics applications of human hand tracking involve grasping and manipulation of objects, the majority of methods in the literature assume a free hand, isolated from the surrounding environment. Our hand tracking method is non-parametric, performing a nearest neighbor search in a large database (100000 entries) of hand poses with and without grasped objects. The system operates in real time, it is robust to self occlusions, object occlusions and segmentation errors, and provides full hand pose reconstruction from markerless video. Temporal consistency in hand pose is taken into account, without explicitly tracking the hand in the high dimensional pose space.",
"title": ""
},
{
"docid": "565f815ef0c1dd5107f053ad39dade20",
"text": "Intensity inhomogeneity often occurs in real-world images, which presents a considerable challenge in image segmentation. The most widely used image segmentation algorithms are region-based and typically rely on the homogeneity of the image intensities in the regions of interest, which often fail to provide accurate segmentation results due to the intensity inhomogeneity. This paper proposes a novel region-based method for image segmentation, which is able to deal with intensity inhomogeneities in the segmentation. First, based on the model of images with intensity inhomogeneities, we derive a local intensity clustering property of the image intensities, and define a local clustering criterion function for the image intensities in a neighborhood of each point. This local clustering criterion function is then integrated with respect to the neighborhood center to give a global criterion of image segmentation. In a level set formulation, this criterion defines an energy in terms of the level set functions that represent a partition of the image domain and a bias field that accounts for the intensity inhomogeneity of the image. Therefore, by minimizing this energy, our method is able to simultaneously segment the image and estimate the bias field, and the estimated bias field can be used for intensity inhomogeneity correction (or bias correction). Our method has been validated on synthetic images and real images of various modalities, with desirable performance in the presence of intensity inhomogeneities. Experiments show that our method is more robust to initialization, faster and more accurate than the well-known piecewise smooth model. As an application, our method has been used for segmentation and bias correction of magnetic resonance (MR) images with promising results.",
"title": ""
},
{
"docid": "693a29ecfff767ad3f8b28ead3b6bdd9",
"text": "This paper proposes a reinforcement learning algorithm that solves the problem of scheduling the charging of a plug-in electric vehicle's (PEV) battery. The algorithm is employed in the demand side management of smart grids. The goal of the algorithm is to minimize the charging cost of the consumer over long term time horizon. The PEV battery charging problem is modeled as a Markov decision process (MDP) with unknown transition probabilities. A Sarsa reinforcement learning method with eligibility traces is proposed for learning the pricing patterns and solving the charging problem. The model uses true day-ahead prices for the current day and predicted prices for the next day. Simulation results using true pricing data demonstrate the cost savings to the consumer.",
"title": ""
},
{
"docid": "b8b7abcef8e23f774bd4e74067a27e6f",
"text": "This note evaluates several hardware platforms and operating systems using a set of benchmarks that test memory bandwidth and various operating system features such as kernel entry/exit and file systems. The overall conclusion is that operating system performance does not seem to be improving at the same rate as the base speed of the underlying hardware. Copyright 1989 Digital Equipment Corporation d i g i t a l Western Research Laboratory 100 Hamilton Avenue Palo Alto, California 94301 USA",
"title": ""
},
{
"docid": "fdb9da0c4b6225c69de16411c79ac9dc",
"text": "Phylogenetic analyses reveal the evolutionary derivation of species. A phylogenetic tree can be inferred from multiple sequence alignments of proteins or genes. The alignment of whole genome sequences of higher eukaryotes is a computational intensive and ambitious task as is the computation of phylogenetic trees based on these alignments. To overcome these limitations, we here used an alignment-free method to compare genomes of the Brassicales clade. For each nucleotide sequence a Chaos Game Representation (CGR) can be computed, which represents each nucleotide of the sequence as a point in a square defined by the four nucleotides as vertices. Each CGR is therefore a unique fingerprint of the underlying sequence. If the CGRs are divided by grid lines each grid square denotes the occurrence of oligonucleotides of a specific length in the sequence (Frequency Chaos Game Representation, FCGR). Here, we used distance measures between FCGRs to infer phylogenetic trees of Brassicales species. Three types of data were analyzed because of their different characteristics: (A) Whole genome assemblies as far as available for species belonging to the Malvidae taxon. (B) EST data of species of the Brassicales clade. (C) Mitochondrial genomes of the Rosids branch, a supergroup of the Malvidae. The trees reconstructed based on the Euclidean distance method are in general agreement with single gene trees. The Fitch-Margoliash and Neighbor joining algorithms resulted in similar to identical trees. Here, for the first time we have applied the bootstrap re-sampling concept to trees based on FCGRs to determine the support of the branchings. FCGRs have the advantage that they are fast to calculate, and can be used as additional information to alignment based data and morphological characteristics to improve the phylogenetic classification of species in ambiguous cases.",
"title": ""
},
{
"docid": "59d7b02c178970c4cc8bfb0242e6740a",
"text": "The purpose of this paper is creating a mobile app on a Smartphone device so that the user can control electronic devices; see the amount of flow that has been used in the amount of dollars, so the problem is the difficulty in saving electricity can be resolved. Development and design was done by collecting data using questionnaires to the respondents. Design method using observations, distributing questionnaires and to study literature, and then after that do the design in hardware (microcontroller) made United Modeling Language (UML), database design, code implementation and creation of user interfaces on IOS and Android. The result of this research is the implementation of a remote home automation application in mobile which can help users in order to control the home and determine the cost of electricity that has been used in every electronic device so that the optimization can be achieved. Keywords— Arduino, Automation, Home remote system, Microcontroller",
"title": ""
},
{
"docid": "a10b0a69ba7d3f902590b35cf0d5ea32",
"text": "This article distills insights from historical, sociological, and psychological perspectives on marriage to develop the suffocation model of marriage in America. According to this model, contemporary Americans are asking their marriage to help them fulfill different sets of goals than in the past. Whereas they ask their marriage to help them fulfill their physiological and safety needs much less than in the past, they ask it to help them fulfill their esteem and self-actualization needs much more than in the past. Asking the marriage to help them fulfill the latter, higher level needs typically requires sufficient investment of time and psychological resources to ensure that the two spouses develop a deep bond and profound insight into each other’s essential qualities. Although some spouses are investing sufficient resources—and reaping the marital and psychological benefits of doing so—most are not. Indeed, they are, on average, investing less than in the past. As a result, mean levels of marital quality and personal well-being are declining over time. According to the suffocation model, spouses who are struggling with an imbalance between what they are asking from their marriage and what they are investing in it have several promising options for corrective action: intervening to optimize their available resources, increasing their investment of resources in the marriage, and asking less of the marriage in terms of facilitating the fulfillment of spouses’ higher needs. Discussion explores the implications of the suffocation model for understanding dating and courtship, sociodemographic variation, and marriage beyond American’s borders.",
"title": ""
},
{
"docid": "f3c7a2eb1f76a5c72ae8de2134f6a61d",
"text": "The amyloid hypothesis has driven drug development strategies for Alzheimer's disease for over 20 years. We review why accumulation of amyloid-beta (Aβ) oligomers is generally considered causal for synaptic loss and neurodegeneration in AD. We elaborate on and update arguments for and against the amyloid hypothesis with new data and interpretations, and consider why the amyloid hypothesis may be failing therapeutically. We note several unresolved issues in the field including the presence of Aβ deposition in cognitively normal individuals, the weak correlation between plaque load and cognition, questions regarding the biochemical nature, presence and role of Aβ oligomeric assemblies in vivo, the bias of pre-clinical AD models toward the amyloid hypothesis and the poorly explained pathological heterogeneity and comorbidities associated with AD. We also illustrate how extensive data cited in support of the amyloid hypothesis, including genetic links to disease, can be interpreted independently of a role for Aβ in AD. We conclude it is essential to expand our view of pathogenesis beyond Aβ and tau pathology and suggest several future directions for AD research, which we argue will be critical to understanding AD pathogenesis.",
"title": ""
},
{
"docid": "11a3ee5afc835a47a6a9529940d237f1",
"text": "BACKGROUND\nAntibiotic therapy is commonly used to treat hidradenitis suppurativa (HS). Although concern for antibiotic resistance exists, data examining the association between antibiotics and antimicrobial resistance in HS lesions are limited.\n\n\nOBJECTIVE\nWe sought to determine the frequency of antimicrobial resistance in HS lesions from patients on antibiotic therapy.\n\n\nMETHODOLOGY\nA cross-sectional analysis was conducted on 239 patients with HS seen at the Johns Hopkins Medical Institutions from 2010 through 2015.\n\n\nRESULTS\nPatients using topical clindamycin were more likely to grow clindamycin-resistant Staphylococcus aureus compared with patients using no antibiotics (63% vs 17%; P = .03). Patients taking ciprofloxacin were more likely to grow ciprofloxacin-resistant methicillin-resistant S aureus compared with patients using no antibiotics (100% vs 10%; P = .045). Patients taking trimethoprim/sulfamethoxazole were more likely to grow trimethoprim/sulfamethoxazole-resistant Proteus species compared with patients using no antibiotics (88% vs 0%; P < .001). No significant antimicrobial resistance was observed with tetracyclines or oral clindamycin.\n\n\nLIMITATIONS\nData on disease characteristics and antimicrobial susceptibilities for certain bacteria were limited.\n\n\nCONCLUSIONS\nAntibiotic therapy for HS treatment may be inducing antibiotic resistance. These findings highlight the importance of stewardship in antibiotic therapy for HS and raise questions regarding the balance of antibiotic use versus potential harms associated with antibiotic resistance.",
"title": ""
},
{
"docid": "14024a813302548d0bd695077185de1c",
"text": "In this paper, we propose an innovative touch-less palm print recognition system. This project is motivated by the public’s demand for non-invasive and hygienic biometric technology. For various reasons, users are concerned about touching the biometric scanners. Therefore, we propose to use a low-resolution web camera to capture the user’s hand at a distance for recognition. The users do not need to touch any device for their palm print to be acquired. A novel hand tracking and palm print region of interest (ROI) extraction technique are used to track and capture the user’s palm in real-time video stream. The discriminative palm print features are extracted based on a new method that applies local binary pattern (LBP) texture descriptor on the palm print directional gradient responses. Experiments show promising result using the proposed method. Performance can be further improved when a modified probabilistic neural network (PNN) is used for feature matching. Verification can be performed in less than one second in the proposed system. 2008 Elsevier B.V. All rights reserved.",
"title": ""
},
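The feature-extraction step described above (LBP applied to the directional gradient responses of the palm ROI) can be sketched roughly as follows; the neighborhood parameters, histogram binning and chi-square matching are illustrative assumptions, not the authors' exact pipeline.

```python
# Rough sketch of an LBP-on-gradient palm-print descriptor in the spirit of the
# passage above; parameter choices (P, R, bins) are illustrative assumptions.
import numpy as np
from skimage.feature import local_binary_pattern

def palm_descriptor(roi: np.ndarray, P: int = 8, R: int = 1) -> np.ndarray:
    """roi: 2-D grayscale palm ROI."""
    gy, gx = np.gradient(roi.astype(float))          # directional gradient responses
    features = []
    for g in (gx, gy):
        codes = local_binary_pattern(g, P, R, method="uniform")
        hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
        features.append(hist)
    return np.concatenate(features)                   # compact texture signature

def match(desc_a: np.ndarray, desc_b: np.ndarray) -> float:
    """Simple chi-square distance between descriptors; lower means more similar."""
    eps = 1e-10
    return 0.5 * np.sum((desc_a - desc_b) ** 2 / (desc_a + desc_b + eps))
```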
{
"docid": "779a73da4551831f50b8705f3339b5e0",
"text": "Android’s permission system offers an all-or-nothing choice when installing an app. To make it more flexible and fine-grained, users may choose a popular app tool, called permission manager, to selectively grant or revoke an app’s permissions at runtime. A fundamental requirement for such permission manager is that the granted or revoked permissions should be enforced faithfully. However, we discover that none of existing permission managers meet this requirement due to permission leaks, in which an unprivileged app can exercise certain permissions which are revoked or not-granted through communicating with a privileged app.To address this problem, we propose a secure, usable, and transparent OS-level middleware for any permission manager to defend against the permission leaks. The middleware is provably secure in a sense that it can effectively block all possible permission leaks.The middleware is designed to have a minimal impact on the usability of running apps. In addition, the middleware is transparent to users and app developers and it requires minor modifications on permission managers and Android OS. Finally, our evaluation shows that the middleware incurs relatively low performance overhead and power consumption.",
"title": ""
}
] |
scidocsrr
|
b5ce22ebf5f20bc3ffbebd3dff8eac56
|
Bregman Divergence-Based Regularization for Transfer Subspace Learning
|
[
{
"docid": "342c39b533e6a94edd72530ca3d57a54",
"text": "Graph-embedding along with its linearization and kernelization provides a general framework that unifies most traditional dimensionality reduction algorithms. From this framework, we propose a new manifold learning technique called discriminant locally linear embedding (DLLE), in which the local geometric properties within each class are preserved according to the locally linear embedding (LLE) criterion, and the separability between different classes is enforced by maximizing margins between point pairs on different classes. To deal with the out-of-sample problem in visual recognition with vector input, the linear version of DLLE, i.e., linearization of DLLE (DLLE/L), is directly proposed through the graph-embedding framework. Moreover, we propose its multilinear version, i.e., tensorization of DLLE, for the out-of-sample problem with high-order tensor input. Based on DLLE, a procedure for gait recognition is described. We conduct comprehensive experiments on both gait and face recognition, and observe that: 1) DLLE along its linearization and tensorization outperforms the related versions of linear discriminant analysis, and DLLE/L demonstrates greater effectiveness than the linearization of LLE; 2) algorithms based on tensor representations are generally superior to linear algorithms when dealing with intrinsically high-order data; and 3) for human gait recognition, DLLE/L generally obtains higher accuracy than state-of-the-art gait recognition algorithms on the standard University of South Florida gait database.",
"title": ""
},
{
"docid": "c41c56eeb56975c4d65e3847aa6b8b01",
"text": "We address the problem of comparing sets of images for object recognition, where the sets may represent variations in an object's appearance due to changing camera pose and lighting conditions. canonical correlations (also known as principal or canonical angles), which can be thought of as the angles between two d-dimensional subspaces, have recently attracted attention for image set matching. Canonical correlations offer many benefits in accuracy, efficiency, and robustness compared to the two main classical methods: parametric distribution-based and nonparametric sample-based matching of sets. Here, this is first demonstrated experimentally for reasonably sized data sets using existing methods exploiting canonical correlations. Motivated by their proven effectiveness, a novel discriminative learning method over sets is proposed for set classification. Specifically, inspired by classical linear discriminant analysis (LDA), we develop a linear discriminant function that maximizes the canonical correlations of within-class sets and minimizes the canonical correlations of between-class sets. Image sets transformed by the discriminant function are then compared by the canonical correlations. Classical orthogonal subspace method (OSM) is also investigated for the similar purpose and compared with the proposed method. The proposed method is evaluated on various object recognition problems using face image sets with arbitrary motion captured under different illuminations and image sets of 500 general objects taken at different views. The method is also applied to object category recognition using ETH-80 database. The proposed method is shown to outperform the state-of-the-art methods in terms of accuracy and efficiency",
"title": ""
},
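A small illustration of the set-matching idea in the passage above: canonical correlations are the cosines of the principal angles between the linear subspaces spanned by two image sets. The PCA subspace dimension and the use of scipy.linalg.subspace_angles are assumptions for this sketch, not the authors' implementation.

```python
# Illustration (not the paper's code) of comparing two image sets by canonical
# correlations, i.e. cosines of the principal angles between their subspaces.
import numpy as np
from scipy.linalg import subspace_angles

def set_subspace(images: np.ndarray, d: int) -> np.ndarray:
    """images: (n_samples, n_pixels). Return an orthonormal basis of the top-d PCA subspace."""
    X = images - images.mean(axis=0)
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return vt[:d].T                                   # basis vectors as columns: (n_pixels, d)

def set_similarity(set_a: np.ndarray, set_b: np.ndarray, d: int = 10) -> float:
    """Mean canonical correlation between the two sets' d-dimensional subspaces."""
    angles = subspace_angles(set_subspace(set_a, d), set_subspace(set_b, d))
    return float(np.cos(angles).mean())               # 1.0 means identical subspaces
```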
{
"docid": "04ba17b4fc6b506ee236ba501d6cb0cf",
"text": "We propose a family of learning algorithms based on a new form f regularization that allows us to exploit the geometry of the marginal distribution. We foc us on a semi-supervised framework that incorporates labeled and unlabeled data in a general-p u pose learner. Some transductive graph learning algorithms and standard methods including Suppor t Vector Machines and Regularized Least Squares can be obtained as special cases. We utilize pr op rties of Reproducing Kernel Hilbert spaces to prove new Representer theorems that provide theor e ical basis for the algorithms. As a result (in contrast to purely graph-based approaches) we ob tain a natural out-of-sample extension to novel examples and so are able to handle both transductive and truly semi-supervised settings. We present experimental evidence suggesting that our semiupervised algorithms are able to use unlabeled data effectively. Finally we have a brief discuss ion of unsupervised and fully supervised learning within our general framework.",
"title": ""
}
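As a concrete (if simplified) instance of the manifold-regularization framework above, the sketch below implements Laplacian Regularized Least Squares in its standard closed form; the RBF kernel, kNN graph construction and hyperparameter values are illustrative assumptions, not the authors' settings.

```python
# Compact Laplacian Regularized Least Squares (LapRLS) sketch: fit a kernel
# expansion using a few labels plus a graph Laplacian built from all points.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.neighbors import kneighbors_graph

def laprls_fit(X, y_labeled, n_labeled, gamma_A=1e-2, gamma_I=1e-2, rbf_gamma=0.5, k=7):
    """X: (n, d) with labeled rows first; y_labeled: (n_labeled,) in {-1, +1}."""
    n = X.shape[0]
    K = rbf_kernel(X, X, gamma=rbf_gamma)                       # Gram matrix on all points
    W = kneighbors_graph(X, k, mode="connectivity", include_self=False)
    W = (0.5 * (W + W.T)).toarray()                             # symmetrized adjacency
    L = np.diag(W.sum(axis=1)) - W                              # graph Laplacian
    J = np.zeros((n, n)); J[:n_labeled, :n_labeled] = np.eye(n_labeled)
    y = np.zeros(n); y[:n_labeled] = y_labeled                  # zeros for unlabeled points
    A = J @ K + gamma_A * n_labeled * np.eye(n) + gamma_I * n_labeled / n**2 * (L @ K)
    alpha = np.linalg.solve(A, y)                               # kernel expansion coefficients
    return alpha, X, rbf_gamma

def laprls_predict(model, X_new):
    alpha, X_train, rbf_gamma = model
    return np.sign(rbf_kernel(X_new, X_train, gamma=rbf_gamma) @ alpha)
```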
] |
[
{
"docid": "1862a9fa9db1fa4b4f2c34873686f190",
"text": "This paper surveys the work of the qualitative spatial reasoning group at the University of Leeds. The group has developed a number of logical calculi for representing and reasoning with qualitative spatial relations over regions. We motivate the use of regions as the primary spatial entity and show how a rich language can be built up from surprisingly few primitives. This language can distinguish between convex and a variety of concave shapes and there is also an extension which handles regions with uncertain boundaries. We also present a variety of reasoning techniques, both for static and dynamic situations. A number of possible application areas are briefly mentioned.",
"title": ""
},
{
"docid": "44258b538f61434d66dbde7f989e9c82",
"text": "Studies in animals showed that stress results in damage to the hippocampus, a brain area involved in learning and memory, with associated memory deficits. The mechanism involves glucocorticoids and possibly serotonin acting through excitatory amino acids to mediate hippocampal atrophy. Patients with posttraumatic stress disorder (PTSD) from Vietnam combat and childhood abuse had deficits on neuropsychological measures that have been validated as probes of hippocampal function. In addition, magnetic resonance imaging (MRI) showed reduction in volume of the hippocampus in both combat veterans and victims of childhood abuse. In combat veterans, hippocampal volume reduction was correlated with deficits in verbal memory on neuropsychological testing. These studies introduce the possibility that experiences in the form of traumatic stressors can have long-term effects on the structure and function of the brain.",
"title": ""
},
{
"docid": "da7b39dce3c7c8a08f11db132925fe37",
"text": "In this paper, a new language identification system is presented based on the total variability approach previously developed in the field of speaker identification. Various techniques are employed to extract the most salient features in the lower dimensional i-vector space and the system developed results in excellent performance on the 2009 LRE evaluation set without the need for any post-processing or backend techniques. Additional performance gains are observed when the system is combined with other acoustic systems.",
"title": ""
},
{
"docid": "6ddb475ef1529ab496ab9f40dc51cb99",
"text": "While inexpensive depth sensors are becoming increasingly ubiquitous, field of view and self-occlusion constraints limit the information a single sensor can provide. For many applications one may instead require a network of depth sensors, registered to a common world frame and synchronized in time. Historically such a setup has required a tedious manual calibration procedure, making it infeasible to deploy these networks in the wild, where spatial and temporal drift are common. In this work, we propose an entirely unsupervised procedure for calibrating the relative pose and time offsets of a pair of depth sensors. So doing, we make no use of an explicit calibration target, or any intentional activity on the part of a user. Rather, we use the unstructured motion of objects in the scene to find potential correspondences between the sensor pair. This yields a rough transform which is then refined with an occlusion-aware energy minimization. We compare our results against the standard checkerboard technique, and provide qualitative examples for scenes in which such a technique would be impossible.",
"title": ""
},
{
"docid": "70df369be2c95afd04467cd291e60175",
"text": "In this paper, we introduce two novel metric learning algorithms, χ-LMNN and GB-LMNN, which are explicitly designed to be non-linear and easy-to-use. The two approaches achieve this goal in fundamentally different ways: χ-LMNN inherits the computational benefits of a linear mapping from linear metric learning, but uses a non-linear χ-distance to explicitly capture similarities within histogram data sets; GB-LMNN applies gradient-boosting to learn non-linear mappings directly in function space and takes advantage of this approach’s robustness, speed, parallelizability and insensitivity towards the single additional hyperparameter. On various benchmark data sets, we demonstrate these methods not only match the current state-of-the-art in terms of kNN classification error, but in the case of χ-LMNN, obtain best results in 19 out of 20 learning settings.",
"title": ""
},
{
"docid": "9ed5fdb991edd5de57ffa7f13121f047",
"text": "We analyze the increasing threats against IoT devices. We show that Telnet-based attacks that target IoT devices have rocketed since 2014. Based on this observation, we propose an IoT honeypot and sandbox, which attracts and analyzes Telnet-based attacks against various IoT devices running on different CPU architectures such as ARM, MIPS, and PPC. By analyzing the observation results of our honeypot and captured malware samples, we show that there are currently at least 5 distinct DDoS malware families targeting Telnet-enabled IoT devices and one of the families has quickly evolved to target more devices with as many as 9 different CPU architectures.",
"title": ""
},
{
"docid": "73080f337ae7ec5ef0639aec374624de",
"text": "We propose a framework for the robust and fully-automatic segmentation of magnetic resonance (MR) brain images called \"Multi-Atlas Label Propagation with Expectation-Maximisation based refinement\" (MALP-EM). The presented approach is based on a robust registration approach (MAPER), highly performant label fusion (joint label fusion) and intensity-based label refinement using EM. We further adapt this framework to be applicable for the segmentation of brain images with gross changes in anatomy. We propose to account for consistent registration errors by relaxing anatomical priors obtained by multi-atlas propagation and a weighting scheme to locally combine anatomical atlas priors and intensity-refined posterior probabilities. The method is evaluated on a benchmark dataset used in a recent MICCAI segmentation challenge. In this context we show that MALP-EM is competitive for the segmentation of MR brain scans of healthy adults when compared to state-of-the-art automatic labelling techniques. To demonstrate the versatility of the proposed approach, we employed MALP-EM to segment 125 MR brain images into 134 regions from subjects who had sustained traumatic brain injury (TBI). We employ a protocol to assess segmentation quality if no manual reference labels are available. Based on this protocol, three independent, blinded raters confirmed on 13 MR brain scans with pathology that MALP-EM is superior to established label fusion techniques. We visually confirm the robustness of our segmentation approach on the full cohort and investigate the potential of derived symmetry-based imaging biomarkers that correlate with and predict clinically relevant variables in TBI such as the Marshall Classification (MC) or Glasgow Outcome Score (GOS). Specifically, we show that we are able to stratify TBI patients with favourable outcomes from non-favourable outcomes with 64.7% accuracy using acute-phase MR images and 66.8% accuracy using follow-up MR images. Furthermore, we are able to differentiate subjects with the presence of a mass lesion or midline shift from those with diffuse brain injury with 76.0% accuracy. The thalamus, putamen, pallidum and hippocampus are particularly affected. Their involvement predicts TBI disease progression.",
"title": ""
},
{
"docid": "ac9d2bdb126549e160cf2bcaa4b8e4e8",
"text": "In the field of Brain Computer Interface, Emotion recognition plays an increasingly crucial role. As psychological understanding of emotions progresses, feature extraction along with classification of electroencephalogram (EEG) representation of these emotions becomes a more important challenge. In this work, Neural Networks as a type of high accuracy robust statistical learning model was employed in order to classify human emotions from the DEAP [7] dataset containing the measured EEG signals for Emotion Classification research. We take advantage of two Neural Network based models the first one of which is the Deep Neural Network and the other one is the Convolutional Neural Network in order to classify Valence, Arousal, Dominance and liking into two categories of Yes or No (High and Low) and to classify Valence and Arousal into three categories of (High, Normal, Low), The achieved accuracy surpasses those achieved in other papers indicating that these models carry the ability to be used as a high achieving classifier for BCI signals.",
"title": ""
},
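A toy sketch of the kind of DNN classifier the passage describes, operating on pre-extracted EEG feature vectors; the feature dimension, layer sizes and optimizer settings are assumptions for illustration, not the configuration used in the paper.

```python
# Minimal PyTorch sketch of a DNN for high/low classification of EEG features.
import torch
import torch.nn as nn

class EmotionDNN(nn.Module):
    def __init__(self, n_features: int = 160, n_classes: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 128), nn.ReLU(), nn.Dropout(0.3),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, n_classes),        # e.g. high/low valence (or arousal, etc.)
        )

    def forward(self, x):
        return self.net(x)

# One training step on a fake batch standing in for EEG feature vectors:
model = EmotionDNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

features = torch.randn(32, 160)
labels = torch.randint(0, 2, (32,))
loss = criterion(model(features), labels)
optimizer.zero_grad(); loss.backward(); optimizer.step()
```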
{
"docid": "395dcc7c09562f358c07af9c999fbdc7",
"text": "Protecting source code against reverse engineering and theft is an important problem. The goal is to carry out computations using confidential algorithms on an untrusted party while ensuring confidentiality of algorithms. This problem has been addressed for Boolean circuits known as ‘circuit privacy’. Circuits corresponding to real-world programs are impractical. Well-known obfuscation techniques are highly practicable, but provide only limited security, e.g., no piracy protection. In this work, we modify source code yielding programs with adjustable performance and security guarantees ranging from indistinguishability obfuscators to (non-secure) ordinary obfuscation. The idea is to artificially generate ‘misleading’ statements. Their results are combined with the outcome of a confidential statement using encrypted selector variables. Thus, an attacker must ‘guess’ the encrypted selector variables to disguise the confidential source code. We evaluated our method using more than ten programmers as well as pattern mining across open source code repositories to gain insights of (micro-)coding patterns that are relevant for generating misleading statements. The evaluation reveals that our approach is effective in that it successfully preserves source code confidentiality.",
"title": ""
},
{
"docid": "7ee4a708d41065c619a5bf9e86f871a3",
"text": "Cyber attack comes in various approach and forms, either internally or externally. Remote access and spyware are forms of cyber attack leaving an organization to be susceptible to vulnerability. This paper investigates illegal activities and potential evidence of cyber attack through studying the registry on the Windows 7 Home Premium (32 bit) Operating System in using the application Virtual Network Computing (VNC) and keylogger application. The aim is to trace the registry artifacts left by the attacker which connected using Virtual Network Computing (VNC) protocol within Windows 7 Operating System (OS). The analysis of the registry focused on detecting unwanted applications or unauthorized access to the machine with regard to the user activity via the VNC connection for the potential evidence of illegal activities by investigating the Registration Entries file and image file using the Forensic Toolkit (FTK) Imager. The outcome of this study is the findings on the artifacts which correlate to the user activity.",
"title": ""
},
{
"docid": "aaf6ed732f2cb5ceff714f1d84dac9ed",
"text": "Video caption refers to generating a descriptive sentence for a specific short video clip automatically, which has achieved remarkable success recently. However, most of the existing methods focus more on visual information while ignoring the synchronized audio cues. We propose three multimodal deep fusion strategies to maximize the benefits of visual-audio resonance information. The first one explores the impact on cross-modalities feature fusion from low to high order. The second establishes the visual-audio short-term dependency by sharing weights of corresponding front-end networks. The third extends the temporal dependency to long-term through sharing multimodal memory across visual and audio modalities. Extensive experiments have validated the effectiveness of our three cross-modalities fusion strategies on two benchmark datasets, including Microsoft Research Video to Text (MSRVTT) and Microsoft Video Description (MSVD). It is worth mentioning that sharing weight can coordinate visualaudio feature fusion effectively and achieve the state-of-art performance on both BELU and METEOR metrics. Furthermore, we first propose a dynamic multimodal feature fusion framework to deal with the part modalities missing case. Experimental results demonstrate that even in the audio absence mode, we can still obtain comparable results with the aid of the additional audio modality inference module.",
"title": ""
},
{
"docid": "cd5bee864efd59b3122752f06f34f3b6",
"text": "Prior background knowledge is essential for human reading and understanding. In this work, we investigate how to leverage external knowledge to improve question answering. We primarily focus on multiple-choice question answering tasks that require external knowledge to answer questions. We investigate the effects of utilizing external in-domain multiple-choice question answering datasets and enriching the reference corpus by external out-domain corpora (i.e., Wikipedia articles). Experimental results demonstrate the effectiveness of external knowledge on two challenging multiple-choice question answering tasks: ARC and OpenBookQA.",
"title": ""
},
{
"docid": "75f43dc0731d442e0d293c6e2f360f85",
"text": "A novel polarization reconfigurable array antenna is proposed and fabricated. Measured results validate the performance improvement in port isolation and cross polarization level. The antenna can operate with vertical and horizontal polarizations at the same time and hence realize polarization diversity. By adding a polarization switch, the antenna can provide either RHCP or LHCP radiations. A total of four polarization modes can be facilitated with this antenna array and may find applications in new and up-coming wireless communication standards that require polarization diversity.",
"title": ""
},
{
"docid": "986a2771edc62a5658c0099e5cc0a920",
"text": "Very-low-energy diets (VLEDs) and ketogenic low-carbohydrate diets (KLCDs) are two dietary strategies that have been associated with a suppression of appetite. However, the results of clinical trials investigating the effect of ketogenic diets on appetite are inconsistent. To evaluate quantitatively the effect of ketogenic diets on subjective appetite ratings, we conducted a systematic literature search and meta-analysis of studies that assessed appetite with visual analogue scales before (in energy balance) and during (while in ketosis) adherence to VLED or KLCD. Individuals were less hungry and exhibited greater fullness/satiety while adhering to VLED, and individuals adhering to KLCD were less hungry and had a reduced desire to eat. Although these absolute changes in appetite were small, they occurred within the context of energy restriction, which is known to increase appetite in obese people. Thus, the clinical benefit of a ketogenic diet is in preventing an increase in appetite, despite weight loss, although individuals may indeed feel slightly less hungry (or more full or satisfied). Ketosis appears to provide a plausible explanation for this suppression of appetite. Future studies should investigate the minimum level of ketosis required to achieve appetite suppression during ketogenic weight loss diets, as this could enable inclusion of a greater variety of healthy carbohydrate-containing foods into the diet.",
"title": ""
},
{
"docid": "7e26a6ccd587ae420b9d2b83f6b54350",
"text": "Because of the SARS epidemic in Asia, people chose to the Internet shopping instead of going shopping on streets. In other words, SARS actually gave the Internet an opportunity to revive from its earlier bubbles. The purpose of this research is to provide managers of shopping Websites regarding consumer purchasing decisions based on the CSI (Consumer Styles Inventory) which was proposed by Sproles (1985) and Sproles & Kendall (1986). According to the CSI, one can capture the decision-making styles of online shoppers. Furthermore, this research also discusses the gender differences among online shoppers. Exploratory factor analysis (EFA) was used to understand the decision-making styles and discriminant analysis was used to distinguish the differences between female and male shoppers. Managers of Internet shopping Websites can design a proper marketing mix with the findings that there are differences in purchasing decisions between genders.",
"title": ""
},
{
"docid": "96a0e29eb5a55f71bce6d51ce0fedc7d",
"text": "This article describes a new method for assessing the effect of a given film on viewers’ brain activity. Brain activity was measured using functional magnetic resonance imaging (fMRI) during free viewing of films, and inter-subject correlation analysis (ISC) was used to assess similarities in the spatiotemporal responses across viewers’ brains during movie watching. Our results demonstrate that some films can exert considerable control over brain activity and eye movements. However, this was not the case for all types of motion picture sequences, and the level of control over viewers’ brain activity differed as a function of movie content, editing, and directing style. We propose that ISC may be useful to film studies by providing a quantitative neuroscientific assessment of the impact of different styles of filmmaking on viewers’ brains, and a valuable method for the film industry to better assess its products. Finally, we suggest that this method brings together two separate and largely unrelated disciplines, cognitive neuroscience and film studies, and may open the way for a new interdisciplinary field of “neurocinematic” studies.",
"title": ""
},
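The inter-subject correlation (ISC) measure described above can be sketched as a leave-one-out correlation of each viewer's regional time courses against the average of the remaining viewers; the array shapes are assumptions for this illustration, not the authors' analysis code.

```python
# Minimal illustration of inter-subject correlation (ISC): for each brain region,
# correlate each subject's time course with the mean time course of the others.
import numpy as np

def isc(data: np.ndarray) -> np.ndarray:
    """data: (n_subjects, n_timepoints, n_regions). Returns (n_regions,) mean ISC."""
    n_subj = data.shape[0]
    scores = []
    for s in range(n_subj):
        others = np.delete(data, s, axis=0).mean(axis=0)     # leave-one-out average
        a = data[s] - data[s].mean(axis=0)
        b = others - others.mean(axis=0)
        r = (a * b).sum(axis=0) / (np.linalg.norm(a, axis=0) * np.linalg.norm(b, axis=0))
        scores.append(r)                                      # Pearson r per region
    return np.mean(scores, axis=0)                            # average across subjects
```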
{
"docid": "11a140232485cb8bcc4914b8538ab5ea",
"text": "We explain why we feel that the comparison betwen Common Lisp and Fortran in a recent article by Fateman et al. in this journal is not entirely fair.",
"title": ""
},
{
"docid": "d5b57f5e4d2d4bd39db847dbe942ee78",
"text": "We describe an application of the Invariant Extended Kalman Filter (IEKF) design methodology to the scan matching SLAM problem. We review the theoretical foundations of the IEKF and its practical interest of guaranteeing robustness to poor state estimates, then implement the filter on a wheeled robot hardware platform. The proposed design is successfully validated in experimental testing.",
"title": ""
},
{
"docid": "1501bcf4b56814ab4961c229262a1232",
"text": "This paper employs case-based reasoning (CBR) to capture the personal styles of individual artists and generate the human facial portraits from photos accordingly. For each human artist to be mimicked, a series of cases are firstly built-up from her/his exemplars of source facial photo and hand-drawn sketch, and then its stylization for facial photo is transformed as a style-transferring process of iterative refinement by looking-for and applying best-fit cases in a sense of style optimization. Two models, fitness evaluation model and parameter estimation model, are learned for case retrieval and adaptation respectively from these cases. The fitness evaluation model is to decide which case is best-fitted to the sketching of current interest, and the parameter estimation model is to automate case adaptation. The resultant sketch is synthesized progressively with an iterative loop of retrieval and adaptation of candidate cases until the desired aesthetic style is achieved. To explore the effectiveness and advantages of the novel approach, we experimentally compare the sketch portraits generated by the proposed method with that of a state-of-the-art example-based facial sketch generation algorithm as well as a couple commercial software packages. The comparisons reveal that our CBR based synthesis method for facial portraits is superior both in capturing and reproducing artists’ personal illustration styles to the peer methods.",
"title": ""
}
] |
scidocsrr
|
3f1a93fba39255cd00016021526dfd7a
|
Design and control of a micro ball-balancing robot (MBBR) with orthogonal midlatitude omniwheel placement
|
[
{
"docid": "287572e1c394ec6959853f62b7707233",
"text": "This paper presents a method for state estimation on a ballbot; i.e., a robot balancing on a single sphere. Within the framework of an extended Kalman filter and by utilizing a complete kinematic model of the robot, sensory information from different sources is combined and fused to obtain accurate estimates of the robot's attitude, velocity, and position. This information is to be used for state feedback control of the dynamically unstable system. Three incremental encoders (attached to the omniwheels that drive the ball of the robot) as well as three rate gyroscopes and accelerometers (attached to the robot's main body) are used as sensors. For the presented method, observability is proven analytically for all essential states in the system, and the algorithm is experimentally evaluated on the Ballbot Rezero.",
"title": ""
}
] |
[
{
"docid": "fa04e8e2e263d18ee821c7aa6ebed08e",
"text": "In this study we examined the effect of physical activity based labels on the calorie content of meals selected from a sample fast food menu. Using a web-based survey, participants were randomly assigned to one of four menus which differed only in their labeling schemes (n=802): (1) a menu with no nutritional information, (2) a menu with calorie information, (3) a menu with calorie information and minutes to walk to burn those calories, or (4) a menu with calorie information and miles to walk to burn those calories. There was a significant difference in the mean number of calories ordered based on menu type (p=0.02), with an average of 1020 calories ordered from a menu with no nutritional information, 927 calories ordered from a menu with only calorie information, 916 calories ordered from a menu with both calorie information and minutes to walk to burn those calories, and 826 calories ordered from the menu with calorie information and the number of miles to walk to burn those calories. The menu with calories and the number of miles to walk to burn those calories appeared the most effective in influencing the selection of lower calorie meals (p=0.0007) when compared to the menu with no nutritional information provided. The majority of participants (82%) reported a preference for physical activity based menu labels over labels with calorie information alone and no nutritional information. Whether these labels are effective in real-life scenarios remains to be tested.",
"title": ""
},
{
"docid": "236d65f840d0f26ceedd1db31125bbfe",
"text": "According to Gershenfeld and Vasseur (2014) the impressive growth of the Internet in the past two decades is about to be overshadowed as the \"things\" that surround us start going online. The \"Internet of Things\" (IOT), a term coined by Kevin Ashton of Procter & Gamble in 1998, has become a new paradigm that views all objects around us connected to the network, providing anyone with “anytime, anywhere” access to information (ITU, 2005; Gomez et al., 2013). The IOT describes the interconnection of objects or “things” for various purposes including identification, communication, sensing, and data collection (Oriwoh et al., 2013). “Things” range from mobile devices to general household objects embedded with capabilities for sensing or communication through the use of technologies such as radio frequency identification (RFID) (Oriwoh et al., 2013; Gomez et al., 2013). The IOT represents the future of computing and communications, and its developThis article investigates challenges pertaining to business model design in the emerging context of the Internet of Things (IOT). The evolution of business perspectives to the IOT is driven by two underlying trends: i) the change of focus from viewing the IOT primarily as a technology platform to viewing it as a business ecosystem; and ii) the shift from focusing on the business model of a firm to designing ecosystem business models. An ecosystem business model is a business model composed of value pillars anchored in ecosystems and focuses on both the firm's method of creating and capturing value as well as any part of the ecosystem's method of creating and capturing value. The article highlights three major challenges of designing ecosystem business models for the IOT, including the diversity of objects, the immaturity of innovation, and the unstructured ecosystems. Diversity refers to the difficulty of designing business models for the IOT due to a multitude of different types of connected objects combined with only modest standardization of interfaces. Immaturity suggests that quintessential IOT technologies and innovations are not yet products and services but a \"mess that runs deep\". The unstructured ecosystems mean that it is too early to tell who the participants will be and which roles they will have in the evolving ecosystems. The study argues that managers can overcome these challenges by using a business model design tool that takes into account the ecosystemic nature of the IOT. The study concludes by proposing the grounds for a new design tool for ecosystem business models and suggesting that \"value design\" might be a more appropriate term when talking about business models in ecosystems. New web-based business models being hatched for the Internet of Things are bringing together market players who previously had no business dealings with each other. Through partnerships and acquisitions, [...] they have to sort out how they will coordinate their business development efforts with customers and interfaces with other stakeholders.",
"title": ""
},
{
"docid": "f021adafc543f3bfc064894b8575ccde",
"text": "A dialogue agent is one that can interact and communicate with other agents in a coherent manner not just with one shot messages but with a sequence of related messages all on the same topic or in service of an overall goal Following the basic insights of speech act theory these communications are seen not just as transmitting information but as actions which change the state of the world Most of these changes will be to the mental states of the agents involved in the conversation as well as the state or context of the dialogue As such speech act theory allows an agent theorist or designer to place agent communication within the same general framework as agent action In general though communicative action requires a more expressive logic of action than is required for something like the single agent blocks world familiar in classical AI planning For one thing there are multiple agents and there is also a possibility of simultaneous and fallible action In studying speech acts the focus is on pragmatics rather than semantics that is how language is used by agents not what the messages themselves mean in terms of truth conditions in a model see for a good introduction to issues in natural language pragmatics As with other aspects of pragmatics such as implicature and presupposition an important concern is what can be inferred perhaps only provisionally or with a certain likelihood as a result of the performance of a speech act While much of speech act work has been analyzing interaction in natural language speech acts are also a convenient level of analysis for arti cial communication languages While the rules for interpreting whether a particular act has been performed will be di erent for an arti cial language presumably simpler with less concern about vague and",
"title": ""
},
{
"docid": "4e28055d48d6c00aebb7ddb6a287636d",
"text": "BACKGROUND\nIt is commonly assumed that motion sickness caused by moving visual scenes arises from the illusion of self-motion (i.e., vection).\n\n\nHYPOTHESES\nBoth studies reported here investigated whether sickness and vection were correlated. The first study compared sickness and vection created by real and virtual visual displays. The second study investigated whether visual fixation to suppress eye movements affected motion sickness or vection.\n\n\nMETHOD\nIn the first experiment subjects viewed an optokinetic drum and a virtual simulation of the optokinetic drum. The second experiment investigated two conditions on a virtual display: a) moving black and white stripes; and b) moving black and white stripes with a stationary cross on which subjects fixated to reduce eye movements.\n\n\nRESULTS\nIn the first study, ratings of motion sickness were correlated between the conditions (real and the virtual drum), as were ratings of vection. With both conditions, subjects with poor visual acuity experienced greater sickness. There was no correlation between ratings of vection and ratings of sickness in either condition. In the second study, fixation reduced motion sickness but had no affect on vection. Motion sickness was correlated with visual acuity without fixation, but not with fixation. Again, there was no correlation between vection and motion sickness.\n\n\nCONCLUSIONS\nVection is not the primary cause of sickness with optokinetic stimuli. Vection appears to be influenced by peripheral vision whereas motion sickness is influenced by central vision. When the eyes are free to track moving stimuli, there is an association between visual acuity and motion sickness. Virtual displays can create vection and may be used to investigate visually induced motion sickness.",
"title": ""
},
{
"docid": "e3461568f90b10dcbe05f1228b4a8614",
"text": "A 2.4 GHz band high-efficiency RF rectifier and high sensitive dc voltage sensing circuit is implemented. A passive RF to DC rectifier of multiplier voltage type has no current consumption. This rectifier is using native threshold voltage diode-connected NMOS transistors to avoid the power loss due to the threshold voltage. It consumes only 900nA with 1.5V supply voltage adopting ultra low power DC sensing circuit using subthreshold current reference. These block incorporates a digital demodulation logic blocks. It can recognize OOK digital information and existence of RF input signal above sensitivity level or not. A low power RF rectifier and DC sensing circuit was fabricated in 0.18um CMOS technology with native threshold voltage NMOS; This RF wake up receiver has -28dBm sensitivity at 2.4 GHz band.",
"title": ""
},
{
"docid": "e6a97c3365e16d77642a84f0a80863e2",
"text": "The current statuses and future promises of the Internet of Things (IoT), Internet of Everything (IoE) and Internet of Nano-Things (IoNT) are extensively reviewed and a summarized survey is presented. The analysis clearly distinguishes between IoT and IoE, which are wrongly considered to be the same by many commentators. After evaluating the current trends of advancement in the fields of IoT, IoE and IoNT, this paper identifies the 21 most significant current and future challenges as well as scenarios for the possible future expansion of their applications. Despite possible negative aspects of these developments, there are grounds for general optimism about the coming technologies. Certainly, many tedious tasks can be taken over by IoT devices. However, the dangers of criminal and other nefarious activities, plus those of hardware and software errors, pose major challenges that are a priority for further research. Major specific priority issues for research are identified.",
"title": ""
},
{
"docid": "9bcd4c9372146f5a93c92addee0b4cc0",
"text": "Automatic detection of pulmonary nodules in thoracic computed tomography (CT) scans has been an active area of research for the last two decades. However, there have only been few studies that provide a comparative performance evaluation of different systems on a common database. We have therefore set up the LUNA16 challenge, an objective evaluation framework for automatic nodule detection algorithms using the largest publicly available reference database of chest CT scans, the LIDC-IDRI data set. In LUNA16, participants develop their algorithm and upload their predictions on 888 CT scans in one of the two tracks: 1) the complete nodule detection track where a complete CAD system should be developed, or 2) the false positive reduction track where a provided set of nodule candidates should be classified. This paper describes the setup of LUNA16 and presents the results of the challenge so far. Moreover, the impact of combining individual systems on the detection performance was also investigated. It was observed that the leading solutions employed convolutional networks and used the provided set of nodule candidates. The combination of these solutions achieved an excellent sensitivity of over 95% at fewer than 1.0 false positives per scan. This highlights the potential of combining algorithms to improve the detection performance. Our observer study with four expert readers has shown that the best system detects nodules that were missed by expert readers who originally annotated the LIDC-IDRI data. We released this set of additional nodules for further development of CAD systems.",
"title": ""
},
{
"docid": "c5dc7a1ff0a3db20232fdff9cfb65381",
"text": "We replace the output layer of deep neural nets, typically the softmax function, by a novel interpolating function. And we propose end-to-end training and testing algorithms for this new architecture. Compared to classical neural nets with softmax function as output activation, the surrogate with interpolating function as output activation combines advantages of both deep and manifold learning. The new framework demonstrates the following major advantages: First, it is better applicable to the case with insufficient training data. Second, it significantly improves the generalization accuracy on a wide variety of networks. The algorithm is implemented in PyTorch, and the code is available at https://github.com/ BaoWangMath/DNN-DataDependentActivation.",
"title": ""
},
{
"docid": "08fedcf80c0905de2598ccd45da706a5",
"text": "Translation of named entities (NEs), such as person names, organization names and location names is crucial for cross lingual information retrieval, machine translation, and many other natural language processing applications. Newly named entities are introduced on daily basis in newswire and this greatly complicates the translation task. Also, while some names can be translated, others must be transliterated, and, still, others are mixed. In this paper we introduce an integrated approach for named entity translation deploying phrase-based translation, word-based translation, and transliteration modules into a single framework. While Arabic based, the approach introduced here is a unified approach that can be applied to NE translation for any language pair.",
"title": ""
},
{
"docid": "c38ace7b9e86549455440011153e686c",
"text": "In the current fast-paced world, people tend to possess limited knowledge about things from the past. For example, some young users may not know that Walkman played similar function as iPod does nowadays. In this paper, we approach the temporal correspondence problem in which, given an input term (e.g., iPod) and the target time (e.g. 1980s), the task is to find the counterpart of the query that existed in the target time. We propose an approach that transforms word contexts across time based on their neural network representations. We then experimentally demonstrate the effectiveness of our method on the New York Times Annotated Corpus.",
"title": ""
},
{
"docid": "dae567414224b24dbb7bc06b9b9ea57f",
"text": "With the increasing computational power of computers, software design systems are progressing from being tools enabling architects and designers to express their ideas, to tools capable of creating designs under human guidance. One of the main limitations for these computer-automated design systems is the representation with which they encode designs. If the representation cannot encode a certain design, then the design system cannot produce it. To be able to produce new types of designs, and not just optimize pre-defined parameterizations, evolutionary design systems must use generative representations. Generative representations are assembly procedures, or algorithms, for constructing a design, thereby allowing for truly novel design solutions to be encoded. In addition, by enabling modularity, regularity and hierarchy, the level of sophistication that can be evolved is increased. We demonstrate the advantages of generative representations on two different design domains: the evolution of spacecraft antennas and the evolution of 3D solid objects.",
"title": ""
},
{
"docid": "20d00f63848b70f3a5688b68181088f2",
"text": "This paper presents a method for modeling player decision making through the use of agents as AI-driven personas. The paper argues that artificial agents, as generative player models, have properties that allow them to be used as psychometrically valid, abstract simulations of a human player’s internal decision making processes. Such agents can then be used to interpret human decision making, as personas and playtesting tools in the game design process, as baselines for adapting agents to mimic classes of human players, or as believable, human-like opponents. This argument is explored in a crowdsourced decision making experiment, in which the decisions of human players are recorded in a small-scale dungeon themed puzzle game. Human decisions are compared to the decisions of a number of a priori defined“archetypical” agent-personas, and the humans are characterized by their likeness to or divergence from these. Essentially, at each step the action of the human is compared to what actions a number of reinforcement-learned agents would have taken in the same situation, where each agent is trained using a different reward scheme. Finally, extensions are outlined for adapting the agents to represent sub-classes found in the human decision making traces.",
"title": ""
},
{
"docid": "9df05fbd6e24b73039019bac5c1c4387",
"text": "This paper discusses the modelling of rainfall-flow (rainfall-run-off) and flow-routeing processes in river systems within the context of real-time flood forecasting. It is argued that deterministic, reductionist (or 'bottom-up') models are inappropriate for real-time forecasting because of the inherent uncertainty that characterizes river-catchment dynamics and the problems of model over-parametrization. The advantages of alternative, efficiently parametrized data-based mechanistic models, identified and estimated using statistical methods, are discussed. It is shown that such models are in an ideal form for incorporation in a real-time, adaptive forecasting system based on recursive state-space estimation (an adaptive version of the stochastic Kalman filter algorithm). An illustrative example, based on the analysis of a limited set of hourly rainfall-flow data from the River Hodder in northwest England, demonstrates the utility of this methodology in difficult circumstances and illustrates the advantages of incorporating real-time state and parameter adaption.",
"title": ""
},
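The recursive state-space estimation mentioned above can be illustrated with a generic one-state Kalman predict/correct cycle; the model coefficients and noise variances below are placeholders, not the paper's identified rainfall-flow transfer functions.

```python
# Generic one-state Kalman filter cycle of the kind used for real-time,
# adaptive flow forecasting; all numeric parameters here are placeholders.
def kalman_step(x, P, u, z, a=0.95, b=0.05, q=0.01, r=0.1):
    """One predict/correct cycle: x_k = a*x_{k-1} + b*u_k + noise, z_k = x_k + noise."""
    # Predict from the previous state estimate and the rainfall input u
    x_pred = a * x + b * u
    P_pred = a * P * a + q
    # Correct with the newly observed flow z
    K = P_pred / (P_pred + r)            # Kalman gain
    x_new = x_pred + K * (z - x_pred)
    P_new = (1 - K) * P_pred
    return x_new, P_new

# Running the filter over a short synthetic rainfall series u and observed flow z:
x, P = 0.0, 1.0
for u_k, z_k in zip([0.0, 2.0, 5.0, 1.0], [0.1, 0.3, 0.9, 0.8]):
    x, P = kalman_step(x, P, u_k, z_k)
```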
{
"docid": "8d71cea3459c83a265b81cc37aa14b70",
"text": "BACKGROUND\nThe aim of this study was to determine the relevance of apelin and insulin resistance (IR) with polycystic ovary syndrome (PCOS) and to assess the possible therapeutic effect of the combined therapy of drospirenone-ethinylestradiol (DRSP-EE) combined with metformin.\n\n\nMATERIAL AND METHODS\nSixty-three PCOS patients and 40 non-PCOS infertile patients were recruited. The fasting serum levels of follicle stimulating hormone (FSH), luteinizing hormone (LH), testosterone (T), prolactin (PRL), estradiol (E2), glucose (FBG), insulin (FINS), and apelin at the early follicular phase were measured. To further investigate the relation between apelin and IR, we treated the PCOS patients with DRSP-EE (1 tablet daily, 21 d/month) plus metformin (500 mg tid) for 3 months. All of the above indices were measured again after treatment.\n\n\nRESULTS\n1) Levels of apelin, LH, LH/FSH, T, and FINS, as well as homeostatic model assessment of IR (HOMA-IR) in PCOS patients, were significantly higher than in the control group before treatment. 2) These indices significantly decreased after treatment with DRSP-EE plus metformin. 3) Correlation analysis showed that apelin level was positively correlated with body mass index (BMI), FINS level, and HOMA-IR.\n\n\nCONCLUSIONS\nApelin level significantly increased in PCOS patients. The combined therapy of DRSP-EE plus metformin not only decreases IR, but also improves apelin level. This combination is a superior approach for PCOS treatment.",
"title": ""
},
{
"docid": "9e4bb8ced136a3e09f93d319a87b1db7",
"text": "Requirements are the basis upon which software architecture lies. As a consequence they should be expressed as precisely as possible in order to propose the best compromise between stakeholder needs and engineering constraints.\n While some measurements such as frame rate or latency are a widely known mean of expressing requirements in the 3D community, they often are loosely defined. This leads to software engineering decisions which exclude some of the most promising options.\n This paper proposes to adapt a non-functional requirements expression template used in general software architecture to the specific case of 3D based systems engineering. It shows that in the process some interesting proposals appear as a straightforward consequence of the better definition of the system to be built.",
"title": ""
},
{
"docid": "f7d06c6f2313417fd2795ce4c4402f0e",
"text": "Decades of research suggest that similarity in demographics, values, activities, and attitudes predicts higher marital satisfaction. The present study examined the relationship between similarity in Big Five personality factors and initial levels and 12-year trajectories of marital satisfaction in long-term couples, who were in their 40s and 60s at the beginning of the study. Across the entire sample, greater overall personality similarity predicted more negative slopes in marital satisfaction trajectories. In addition, spousal similarity on Conscientiousness and Extraversion more strongly predicted negative marital satisfaction outcomes among the midlife sample than among the older sample. Results are discussed in terms of the different life tasks faced by young, midlife, and older adults, and the implications of these tasks for the \"ingredients\" of marital satisfaction.",
"title": ""
},
{
"docid": "1cb39c8a2dd05a8b2241c9c795ca265f",
"text": "An ever growing interest and wide adoption of Internet of Things (IoT) and Web technologies are unleashing a true potential of designing a broad range of high-quality consumer applications. Smart cities, smart buildings, and e-health are among various application domains which are currently benefiting and will continue to benefit from IoT and Web technologies in a foreseeable future. Similarly, semantic technologies have proven their effectiveness in various domains and a few among multiple challenges which semantic Web technologies are addressing are to (i) mitigate heterogeneity by providing semantic inter-operability, (ii) facilitate easy integration of data application, (iii) deduce and extract new knowledge to build applications providing smart solutions, and (iv) facilitate inter-operability among various data processes including representation, management and storage of data. In this tutorial, our focus will be on the combination of Web technologies, Semantic Web, and IoT technologies and we will present to our audience that how a merger of these technologies is leading towards an evolution from IoT to Web of Things (WoT) to Semantic Web of Things. This tutorial will introduce the basics of Internet of Things, Web of Things and Semantic Web and will demonstrate tools and techniques designed to enable the rapid development of semantics-based Web of Things applications. One key aspect of this tutorial is to familiarize its audience with the open source tools designed by different semantic Web, IoT and WoT based projects and provide the audience a rich hands-on experience to use these tools and build smart applications with minimal efforts. Thus, reducing the learning curve to its maximum. We will showcase real-world use case scenarios which are designed using semantically-enabled WoT frameworks (e.g. CityPulse, FIESTA-IoT and M3).",
"title": ""
},
{
"docid": "522efee981fb9eb26ba31d02230604fa",
"text": "The lack of an integrated medical information service model has been considered as a main issue in ensuring the continuity of healthcare from doctors, healthcare professionals to patients; the resultant unavailable, inaccurate, or unconformable healthcare information services have been recognized as main causes to the annual millions of medication errors. This paper proposes an Internet computing model aimed at providing an affordable, interoperable, ease of integration, and systematic approach to the development of a medical information service network to enable the delivery of continuity of healthcare. Web services, wireless, and advanced automatic identification technologies are fully integrated in the proposed service model. Some preliminary research results are presented.",
"title": ""
},
{
"docid": "6d141d99945bfa55fe8cc187f8c1b864",
"text": "Many software development and maintenance tools involve matching between natural language words in different software artifacts (e.g., traceability) or between queries submitted by a user and software artifacts (e.g., code search). Because different people likely created the queries and various artifacts, the effectiveness of these tools is often improved by expanding queries and adding related words to textual artifact representations. Synonyms are particularly useful to overcome the mismatch in vocabularies, as well as other word relations that indicate semantic similarity. However, experience shows that many words are semantically similar in computer science situations, but not in typical natural language documents. In this paper, we present an automatic technique to mine semantically similar words, particularly in the software context. We leverage the role of leading comments for methods and programmer conventions in writing them. Our evaluation of our mined related comment-code word mappings that do not already occur in WordNet are indeed viewed as computer science, semantically-similar word pairs in high proportions.",
"title": ""
},
{
"docid": "66a0c31ee0722ad9fc67bad142de1fb0",
"text": "One of the key challenges facing wireless sensor networks (WSNs) is extending network lifetime due to sensor nodes having limited power supplies. Extending WSN lifetime is complicated because nodes often experience differential power consumption. For example, nodes closer to the sink in a given routing topology transmit more data and thus consume power more rapidly than nodes farther from the sink. Inspired by the huddling behavior of emperor penguins where the penguins take turns on the cold extremities of a penguin “huddle”, we propose mobile node rotation, a new method for using low-cost mobile sensor nodes to address differential power consumption and extend WSN lifetime. Specifically, we propose to rotate the nodes through the high power consumption locations. We propose efficient algorithms for single and multiple rounds of rotations. Our extensive simulations show that mobile node rotation can extend WSN topology lifetime by more than eight times on average which is significantly better than existing alternatives.",
"title": ""
}
] |
scidocsrr
|
040dbe51b012768f8a43cb51f6377a01
|
A Generative Approach for Dynamically Varying Photorealistic Facial Expressions in Human-Agent Interactions
|
[
{
"docid": "102bec350390b46415ae07128cb4e77f",
"text": "We capitalize on large amounts of unlabeled video in order to learn a model of scene dynamics for both video recognition tasks (e.g. action classification) and video generation tasks (e.g. future prediction). We propose a generative adversarial network for video with a spatio-temporal convolutional architecture that untangles the scene’s foreground from the background. Experiments suggest this model can generate tiny videos up to a second at full frame rate better than simple baselines, and we show its utility at predicting plausible futures of static images. Moreover, experiments and visualizations show the model internally learns useful features for recognizing actions with minimal supervision, suggesting scene dynamics are a promising signal for representation learning. We believe generative video models can impact many applications in video understanding and simulation.",
"title": ""
}
] |
[
{
"docid": "d9947d2a6b6e184cf27515ad72cc7f98",
"text": "This study examined the role of a social network site (SNS) in the lives of 11 high school teenagers from low-income families in the U.S. We conducted interviews, talk-alouds and content analysis of MySpace profiles. Qualitative analysis of these data revealed three themes. First, SNSs facilitated emotional support, helped maintain relationships, and provided a platform for self-presentation. Second, students used their online social network to fulfill essential social learning functions. Third, within their SNS, students engaged in a complex array of communicative and creative endeavors. In several instances, students’ use of social network sites demonstrated the new literacy practices currently being discussed within education reform efforts. Based on our findings, we suggest additional directions for related research and educational practices.",
"title": ""
},
{
"docid": "6d31096c16817f13641b23ae808b0dce",
"text": "In the competitive environment of the internet, retaining and growing one's user base is of major concern to most web services. Furthermore, the economic model of many web services is allowing free access to most content, and generating revenue through advertising. This unique model requires securing user time on a site rather than the purchase of good which makes it crucially important to create new kinds of metrics and solutions for growth and retention efforts for web services. In this work, we address this problem by proposing a new retention metric for web services by concentrating on the rate of user return. We further apply predictive analysis to the proposed retention metric on a service, as a means for characterizing lost customers. Finally, we set up a simple yet effective framework to evaluate a multitude of factors that contribute to user return. Specifically, we define the problem of return time prediction for free web services. Our solution is based on the Cox's proportional hazard model from survival analysis. The hazard based approach offers several benefits including the ability to work with censored data, to model the dynamics in user return rates, and to easily incorporate different types of covariates in the model. We compare the performance of our hazard based model in predicting the user return time and in categorizing users into buckets based on their predicted return time, against several baseline regression and classification methods and find the hazard based approach to be superior.",
"title": ""
},
{
"docid": "d414dd7d2fd699e58cae194a828ae042",
"text": "Network design problems consist of identifying an optimal subgraph of a graph, subject to side constraints. In generalized network design problems, the vertex set is partitioned into clusters and the feasibility conditions are expressed in terms of the clusters. Several applications of generalized network design problems arise in the fields of telecommunications, transportation and biology. The aim of this review article is to formally define generalized network design problems, to study their properties and to provide some applications.",
"title": ""
},
{
"docid": "5f3dfd97498034d0a104bf41149651f2",
"text": "BACKGROUND\nResearch questionnaires are not always translated appropriately before they are used in new temporal, cultural or linguistic settings. The results based on such instruments may therefore not accurately reflect what they are supposed to measure. This paper aims to illustrate the process and required steps involved in the cross-cultural adaptation of a research instrument using the adaptation process of an attitudinal instrument as an example.\n\n\nMETHODS\nA questionnaire was needed for the implementation of a study in Norway 2007. There was no appropriate instruments available in Norwegian, thus an Australian-English instrument was cross-culturally adapted.\n\n\nRESULTS\nThe adaptation process included investigation of conceptual and item equivalence. Two forward and two back-translations were synthesized and compared by an expert committee. Thereafter the instrument was pretested and adjusted accordingly. The final questionnaire was administered to opioid maintenance treatment staff (n=140) and harm reduction staff (n=180). The overall response rate was 84%. The original instrument failed confirmatory analysis. Instead a new two-factor scale was identified and found valid in the new setting.\n\n\nCONCLUSIONS\nThe failure of the original scale highlights the importance of adapting instruments to current research settings. It also emphasizes the importance of ensuring that concepts within an instrument are equal between the original and target language, time and context. If the described stages in the cross-cultural adaptation process had been omitted, the findings would have been misleading, even if presented with apparent precision. Thus, it is important to consider possible barriers when making a direct comparison between different nations, cultures and times.",
"title": ""
},
{
"docid": "4a201e61cbb168df4df48fe331817260",
"text": "The use of qualitative research methodology is well established for data generation within healthcare research generally and clinical pharmacy research specifically. In the past, qualitative research methodology has been criticized for lacking rigour, transparency, justification of data collection and analysis methods being used, and hence the integrity of findings. Demonstrating rigour in qualitative studies is essential so that the research findings have the “integrity” to make an impact on practice, policy or both. Unlike other healthcare disciplines, the issue of “quality” of qualitative research has not been discussed much in the clinical pharmacy discipline. The aim of this paper is to highlight the importance of rigour in qualitative research, present different philosophical standpoints on the issue of quality in qualitative research and to discuss briefly strategies to ensure rigour in qualitative research. Finally, a mini review of recent research is presented to illustrate the strategies reported by clinical pharmacy researchers to ensure rigour in their qualitative research studies.",
"title": ""
},
{
"docid": "c80b01048778e5863882868774e3e98d",
"text": "A new liaison role between Information Systems (IS) and users, the relationship manager (RM), has recently emerged. Accolding to the prescriptive literature, RMs add value by deep understanding of the businesses they serve and technologyleadership. Uttle is known, however, about their actual work practices. Is the RM an intermediary, filtering information and sometimes misinformation, from clients to IS, or do they play more pivotal roles as entrepreneurs and change agents? This article addresses these questions by studying four RMs in four different industries. The RMs were studied using the structured observation methodology employed by Mintzberg (CEOs), Ives and Olson (MIS managers), and Stephens et at. (CIOs), l'he findings suggest that while RMs spend less time communicating with users than one would expect, they are leaders, often mavericks, in the entrepreneurial work practices necessary to build partnerships with clients and to make the IS infrastructure more responsive to client needs.",
"title": ""
},
{
"docid": "cbc22adbd8f7a82d1972e6b53bc5e000",
"text": "This thesis examines several aspects of narrative in video games, in order to construct a detailed image of the characteristics that separate video game narrative from other, noninteractive narrative forms. These findings are subsequently used to identify and define three basic models of video game narrative. Since it has also been argued that video games should not have narrative in the first place, the validity of this question is also examined. Overall, it is found that while the interactive nature of the video game does indeed cause some problems for the implementation of narrative, this relationship is not as problematic as has been claimed, and there seems to be no reason to argue that video games and narrative should be kept separate from each other. It is also found that the interactivity of the video game encourages the use of certain narrative tools while discouraging or disabling the author’s access to other options. Thus, video games in general allow for a much greater degree of subjectivity than is typical in non-interactive narrative forms. At the same time, the narrator’s ability to manipulate time within the story is restricted precisely because of this increased subjectivity. Another interesting trait of video game narrative is that it opens up the possibility of the game player sharing some of the author’s abilities as the narrator. Three models of video game narrative are suggested. These included the linear ‘string of pearls’ model, where the player is given a certain degree of freedom at certain times during the game, but ultimately still follows a linear storyline; the ‘branching narrative’ model, where the player helps define the course and ending of the story by selecting from narrative branches; and the ‘amusement park’ model, where the player is invited to put together a story out of a group of optional subplots. The existence of a fourth model, the ‘building blocks’ model, is also noted, but this model is not discussed in detail as it does not utilise any traditional narrative structure, instead allowing the players to define every aspect of the story.",
"title": ""
},
{
"docid": "b829049a8abf47f8f13595ca54eaa009",
"text": "This paper describes a face recognition-based people tracking and re-identification system for RGB-D camera networks. The system tracks people and learns their faces online to keep track of their identities even if they move out from the camera's field of view once. For robust people re-identification, the system exploits the combination of a deep neural network- based face representation and a Bayesian inference-based face classification method. The system also provides a predefined people identification capability: it associates the online learned faces with predefined people face images and names to know the people's whereabouts, thus, allowing a rich human-system interaction. Through experiments, we validate the re-identification and the predefined people identification capabilities of the system and show an example of the integration of the system with a mobile robot. The overall system is built as a Robot Operating System (ROS) module. As a result, it simplifies the integration with the many existing robotic systems and algorithms which use such middleware. The code of this work has been released as open-source in order to provide a baseline for the future publications in this field.",
"title": ""
},
{
"docid": "d1f130e8b742023e5224a2f99c3639b5",
"text": "An increasing number of firms are responding to new opportunities and risks originating from digital technologies by introducing company-wide digital transformation strategies as a means to systematically address their digital transformation. Yet, what processes and strategizing activities affect the formation of digital transformation strategies in organizations are not well understood. We adopt a phenomenon-based approach and investigate the formation of digital transformation strategies in organizations from a process perspective. Drawing on an activity-based process model that links Mintzberg’s strategy typology with the concept of IS strategizing, we conduct a multiple-case study at three European car manufacturers. Our results indicate that digital transformation strategies are predominantly shaped by a diversity of emergent strategizing activities of separate organizational subcommunities through a bottom-up process and prior to the initiation of a holistic digital transformation strategy by top management. As a result, top management’s deliberate strategies seek to accomplish the subsequent alignment of preexisting emergent strategy contents with their intentions and to simultaneously increase the share of deliberate contents. Besides providing practical implications for the formulation and implementation of a digital transformation strategy, we contribute to the literature on digital transformation and IS strategizing.",
"title": ""
},
{
"docid": "16708c9e697dbd867aa81420bc669953",
"text": "We propose a dynamic trust management protocol for Internet of Things (IoT) systems to deal with misbehaving nodes whose status or behavior may change dynamically. We consider an IoT system being deployed in a smart community where each node autonomously performs trust evaluation. We provide a formal treatment of the convergence, accuracy, and resilience properties of our dynamic trust management protocol and validate these desirable properties through simulation. We demonstrate the effectiveness of our dynamic trust management protocol with a trust-based service composition application in IoT environments. Our results indicate that trust-based service composition significantly outperforms non-trust-based service composition and approaches the maximum achievable performance based on ground truth status. Furthermore, our dynamic trust management protocol is capable of adaptively adjusting the best trust parameter setting in response to dynamically changing environments to maximize application performance.",
"title": ""
},
{
"docid": "ce1f67735cfa0e68246e92c53072155f",
"text": "Event and relation extraction are central tasks in biomedical text mining. Where relation extraction concerns the detection of semantic connections between pairs of entities, event extraction expands this concept with the addition of trigger words, multiple arguments and nested events, in order to more accurately model the diversity of natural language. In this work we develop a convolutional neural network that can be used for both event and relation extraction. We use a linear representation of the input text, where information is encoded with various vector space embeddings. Most notably, we encode the parse graph into this linear space using dependency path embeddings. We integrate our neural network into the open source Turku Event Extraction System (TEES) framework. Using this system, our machine learning model can be easily applied to a large set of corpora from e.g. the BioNLP, DDI Extraction and BioCreative shared tasks. We evaluate our system on 12 different event, relation and NER corpora, showing good generalizability to many tasks and achieving improved performance on several corpora.",
"title": ""
},
{
"docid": "c1220bd89725bf06b811f3ae14fc1a3f",
"text": "In the simultaneous localization and mapping (SLAM) problem, a mobile robot must build a map of its environment while simultaneously determining its location within that map. We propose a new algorithm, for visual SLAM (VSLAM), in which the robot's only sensory information is video imagery. Our approach combines stereo vision with a popular sequential Monte Carlo (SMC) algorithm, the Rao-Blackwellised particle filter, to simultaneously explore multiple hypotheses about the robot's six degree-of-freedom trajectory through space and maintain a distinct stochastic map for each of those candidate trajectories. We demonstrate the algorithm's effectiveness in mapping a large outdoor virtual reality environment in the presence of odometry error",
"title": ""
},
{
"docid": "41d97d98a524e5f1e45ae724017819d9",
"text": "Dynamically changing (reconfiguring) the membership of a replicated distributed system while preserving data consistency and system availability is a challenging problem. In this paper, we show that reconfiguration can be simplified by taking advantage of certain properties commonly provided by Primary/Backup systems. We describe a new reconfiguration protocol, recently implemented in Apache Zookeeper. It fully automates configuration changes and minimizes any interruption in service to clients while maintaining data consistency. By leveraging the properties already provided by Zookeeper our protocol is considerably simpler than state of the art.",
"title": ""
},
{
"docid": "a07fe75974cc12b12c274a72b1a1fdf5",
"text": "We study a model of incentivizing correct computations in a variety of cryptographic tasks. For each of these tasks we propose a formal model and design protocols satisfying our model's constraints in a hybrid model where parties have access to special ideal functionalities that enable monetary transactions. We summarize our results: Verifiable computation. We consider a setting where a delegator outsources computation to a worker who expects to get paid in return for delivering correct outputs. We design protocols that compile both public and private verification schemes to support incentivizations described above. Secure computation with restricted leakage. Building on the recent work of Huang et al. (Security and Privacy 2012), we show an efficient secure computation protocol that monetarily penalizes an adversary that attempts to learn one bit of information but gets detected in the process. Fair secure computation. Inspired by recent work, we consider a model of secure computation where a party that aborts after learning the output is monetarily penalized. We then propose an ideal transaction functionality FML and show a constant-round realization on the Bitcoin network. Then, in the FML-hybrid world we design a constant round protocol for secure computation in this model. Noninteractive bounties. We provide formal definitions and candidate realizations of noninteractive bounty mechanisms on the Bitcoin network which (1) allow a bounty maker to place a bounty for the solution of a hard problem by sending a single message, and (2) allow a bounty collector (unknown at the time of bounty creation) with the solution to claim the bounty, while (3) ensuring that the bounty maker can learn the solution whenever its bounty is collected, and (4) preventing malicious eavesdropping parties from both claiming the bounty as well as learning the solution.\n All our protocol realizations (except those realizing fair secure computation) rely on a special ideal functionality that is not currently supported in Bitcoin due to limitations imposed on Bitcoin scripts. Motivated by this, we propose validation complexity of a protocol, a formal complexity measure that captures the amount of computational effort required to validate Bitcoin transactions required to implement it in Bitcoin. Our protocols are also designed to take advantage of optimistic scenarios where participating parties behave honestly.",
"title": ""
},
{
"docid": "6a2d9597887a39d3f3a22427b32260aa",
"text": "A complete over-current and short-circuit protection system for Low-Drop Out (LDO) regulator applications is presented. The system consists of a current-sense circuit, a current comparator, a D Flip-Flop, an OR logic gate and the short-circuit sense topology. The protection circuit is able to shut down the LDO rapidly by producing a control signal when an over-current event occurs while during the normal operation of the LDO, the protection circuit is idle. The restart of the LDO has to be made manually and a master Reset signal is, also, available. The proposed protection system was designed by using a standard 0.18u CMOS technology using high-voltage transistors.",
"title": ""
},
{
"docid": "7a033c2bedf107dfbd92887eaa4ae8c0",
"text": "Building high-performance virtual machines is a complex and expensive undertaking; many popular languages still have low-performance implementations. We describe a new approach to virtual machine (VM) construction that amortizes much of the effort in initial construction by allowing new languages to be implemented with modest additional effort. The approach relies on abstract syntax tree (AST) interpretation where a node can rewrite itself to a more specialized or more general node, together with an optimizing compiler that exploits the structure of the interpreter. The compiler uses speculative assumptions and deoptimization in order to produce efficient machine code. Our initial experience suggests that high performance is attainable while preserving a modular and layered architecture, and that new high-performance language implementations can be obtained by writing little more than a stylized interpreter.",
"title": ""
},
{
"docid": "4021a6d34ca5a6c3d2d021d0ba2cbcf7",
"text": "Visual compatibility is critical for fashion analysis, yet is missing in existing fashion image synthesis systems. In this paper, we propose to explicitly model visual compatibility through fashion image inpainting. To this end, we present Fashion Inpainting Networks (FiNet), a two-stage image-to-image generation framework that is able to perform compatible and diverse inpainting. Disentangling the generation of shape and appearance to ensure photorealistic results, our framework consists of a shape generation network and an appearance generation network. More importantly, for each generation network, we introduce two encoders interacting with one another to learn latent code in a shared compatibility space. The latent representations are jointly optimized with the corresponding generation network to condition the synthesis process, encouraging a diverse set of generated results that are visually compatible with existing fashion garments. In addition, our framework is readily extended to clothing reconstruction and fashion transfer, with impressive results. Extensive experiments with comparisons with state-of-the-art approaches on fashion synthesis task quantitatively and qualitatively demonstrate the effectiveness of our method.",
"title": ""
},
{
"docid": "6807545797869605f90721ee5777b5a0",
"text": "This paper examines location-based services (LBS) from a broad perspective involving deWnitions, characteristics, and application prospects. We present an overview of LBS modeling regarding users, locations, contexts and data. The LBS modeling endeavors are cross-examined with a research agenda of geographic information science. Some core research themes are brieXy speculated. © 2006 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "81f71bf0f923ff07a770ae30321382f6",
"text": "The growth rate of scientific publication has been studied from 1907 to 2007 using available data from a number of literature databases, including Science Citation Index (SCI) and Social Sciences Citation Index (SSCI). Traditional scientific publishing, that is publication in peer-reviewed journals, is still increasing although there are big differences between fields. There are no indications that the growth rate has decreased in the last 50 years. At the same time publication using new channels, for example conference proceedings, open archives and home pages, is growing fast. The growth rate for SCI up to 2007 is smaller than for comparable databases. This means that SCI was covering a decreasing part of the traditional scientific literature. There are also clear indications that the coverage by SCI is especially low in some of the scientific areas with the highest growth rate, including computer science and engineering sciences. The role of conference proceedings, open access archives and publications published on the net is increasing, especially in scientific fields with high growth rates, but this has only partially been reflected in the databases. The new publication channels challenge the use of the big databases in measurements of scientific productivity or output and of the growth rate of science. Because of the declining coverage and this challenge it is problematic that SCI has been used and is used as the dominant source for science indicators based on publication and citation numbers. The limited data available for social sciences show that the growth rate in SSCI was remarkably low and indicate that the coverage by SSCI was declining over time. National Science Indicators from Thomson Reuters is based solely on SCI, SSCI and Arts and Humanities Citation Index (AHCI). Therefore the declining coverage of the citation databases problematizes the use of this source.",
"title": ""
}
] |
scidocsrr
|
4cb359c4c463322b3b9cdfc0a3fa60ea
|
Unwrapping and Visualizing Cuneiform Tablets
|
[
{
"docid": "4323d4280fd38420b77b53e5a68c4b92",
"text": "In this paper we present a new form of texture mapping that produces increased photorealism. Coefficients of a biquadratic polynomial are stored per texel, and used to reconstruct the surface color under varying lighting conditions. Like bump mapping, this allows the perception of surface deformations. However, our method is image based, and photographs of a surface under varying lighting conditions can be used to construct these maps. Unlike bump maps, these Polynomial Texture Maps (PTMs) also capture variations due to surface self-shadowing and interreflections, which enhance realism. Surface colors can be efficiently reconstructed from polynomial coefficients and light directions with minimal fixed-point hardware. We have also found PTMs useful for producing a number of other effects such as anisotropic and Fresnel shading models and variable depth of focus. Lastly, we present several reflectance function transformations that act as contrast enhancement operators. We have found these particularly useful in the study of ancient archeological clay and stone writings.",
"title": ""
},
{
"docid": "b9bb07dd039c0542a7309f2291732f82",
"text": "Recent progress in acquiring shape from range data permits the acquisition of seamless million-polygon meshes from physical models. In this paper, we present an algorithm and system for converting dense irregular polygon meshes of arbitrary topology into tensor product B-spline surface patches with accompanying displacement maps. This choice of representation yields a coarse but efficient model suitable for animation and a fine but more expensive model suitable for rendering. The first step in our process consists of interactively painting patch boundaries over a rendering of the mesh. In many applications, interactive placement of patch boundaries is considered part of the creative process and is not amenable to automation. The next step is gridded resampling of each boundedsection of the mesh. Our resampling algorithm lays a grid of springs across the polygon mesh, then iterates between relaxing this grid and subdividing it. This grid provides a parameterization for the mesh section, which is initially unparameterized. Finally, we fit a tensor product B-spline surface to the grid. We also output a displacement map for each mesh section, which represents the error between our fitted surface and the spring grid. These displacement maps are images; hence this representation facilitates the use of image processing operators for manipulating the geometric detail of an object. They are also compatible with modern photo-realistic rendering systems. Our resampling and fitting steps are fast enough to surface a million polygon mesh in under 10 minutes important for an interactive system. CR Categories: I.3.5 [Computer Graphics]: Computational Geometry and Object Modeling —curve, surface and object representations; I.3.7[Computer Graphics]:Three-Dimensional Graphics and Realism—texture; J.6[Computer-Aided Engineering]:ComputerAided Design (CAD); G.1.2[Approximation]:Spline Approximation Additional",
"title": ""
}
] |
[
{
"docid": "9b6191f96f096035429583e8799a2eb2",
"text": "Recognition of food images is challenging due to their diversity and practical for health care on foods for people. In this paper, we propose an automatic food image recognition system for 85 food categories by fusing various kinds of image features including bag-of-features~(BoF), color histogram, Gabor features and gradient histogram with Multiple Kernel Learning~(MKL). In addition, we implemented a prototype system to recognize food images taken by cellular-phone cameras. In the experiment, we have achieved the 62.52% classification rate for 85 food categories.",
"title": ""
},
{
"docid": "073e3296fc2976f0db2f18a06b0cb816",
"text": "Nowadays spoofing detection is one of the priority research areas in the field of automatic speaker verification. The success of Automatic Speaker Verification Spoofing and Countermeasures (ASVspoof) Challenge 2015 confirmed the impressive perspective in detection of unforeseen spoofing trials based on speech synthesis and voice conversion techniques. However, there is a small number of researches addressed to replay spoofing attacks which are more likely to be used by non-professional impersonators. This paper describes the Speech Technology Center (STC) anti-spoofing system submitted for ASVspoof 2017 which is focused on replay attacks detection. Here we investigate the efficiency of a deep learning approach for solution of the mentioned-above task. Experimental results obtained on the Challenge corpora demonstrate that the selected approach outperforms current state-of-the-art baseline systems in terms of spoofing detection quality. Our primary system produced an EER of 6.73% on the evaluation part of the corpora which is 72% relative improvement over the ASVspoof 2017 baseline system.",
"title": ""
},
{
"docid": "1d2485f8a4e2a5a9f983bfee3e036b92",
"text": "Partial differential equations (PDEs) are commonly derived based on empirical observations. However, recent advances of technology enable us to collect and store massive amount of data, which offers new opportunities for data-driven discovery of PDEs. In this paper, we propose a new deep neural network, called PDE-Net 2.0, to discover (time-dependent) PDEs from observed dynamic data with minor prior knowledge on the underlying mechanism that drives the dynamics. The design of PDE-Net 2.0 is based on our earlier work [1] where the original version of PDE-Net was proposed. PDE-Net 2.0 is a combination of numerical approximation of differential operators by convolutions and a symbolic multi-layer neural network for model recovery. Comparing with existing approaches, PDE-Net 2.0 has the most flexibility and expressive power by learning both differential operators and the nonlinear response function of the underlying PDE model. Numerical experiments show that the PDE-Net 2.0 has the potential to uncover the hidden PDE of the observed dynamics, and predict the dynamical behavior for a relatively long time, even in a noisy environment.",
"title": ""
},
{
"docid": "f91daa578d75c6add8c7e4ce54fbd106",
"text": "Aviation spare parts provisioning is a highly complex problem. Traditionally, provisioning has been carried out using a conventional Poisson-based approach where inventory quantities are calculated separately for each part number and demands from different operations bases are consolidated into one single location. In an environment with multiple operations bases, however, such simplifications can lead to situations in which spares -- although available at another airport -- first have to be shipped to the location where the demand actually arose, leading to flight delays and cancellations. In this paper we demonstrate how simulation-based optimisation can help with the multi-location inventory problem by quantifying synergy potential between locations and how total service lifecycle cost can be further reduced without increasing risk right away from the Initial Provisioning (IP) stage onwards by taking into account advanced logistics policies such as pro-active re-balancing of spares between stocking locations.",
"title": ""
},
{
"docid": "6befac01d5a3f21100a54de43ee62845",
"text": "Robots used for tasks in space have strict requirements. Modular reconfigurable robots have a variety of attributes that are advantageous for these conditions including the ability to serve as many tools at once saving weight, packing into compressed forms saving space and having large redundancy to increase robustness. Self-reconfigurable systems can also self-repair as well as automatically adapt to changing conditions or ones that were not anticipated. PolyBot may serve well in the space manipulation and surface mobility class of space applications.",
"title": ""
},
{
"docid": "0c570100f94e46c5be1fb171be02d1d8",
"text": "One of the most important research areas in the field of Human-Computer-Interaction (HCI) is gesture recognition as it provides a natural and intuitive way to communicate between people and machines. Gesture-based HCI applications range from computer games to virtual/augmented reality and is recently being explored in other fields. The idea behind this work is to develop and implement a gesture-based HCI system using the recently developed Microsoft Kinect depth sensor to control the Windows Mouse Cursor as well as PowerPoint presentations. The paper can be divided into two major modules namely, hand detection and gesture recognition. For hand detection, the application uses the Kinect for Windows Software Development Kit (SDK) and its skeletal tracking features to detect a user's hand which enables the user to control the Windows mouse cursor. Gesture recognition involves capturing user gestures and interpreting motions or signs that the user performs to simulate different mouse events.",
"title": ""
},
{
"docid": "eb7ccd69c0bbb4e421b8db3b265f5ba6",
"text": "The discovery of Novoselov et al. (2004) of a simple method to transfer a single atomic layer of carbon from the c-face of graphite to a substrate suitable for the measurement of its electrical and optical properties has led to a renewed interest in what was considered to be before that time a prototypical, yet theoretical, two-dimensional system. Indeed, recent theoretical studies of graphene reveal that the linear electronic band dispersion near the Brillouin zone corners gives rise to electrons and holes that propagate as if they were massless fermions and anomalous quantum transport was experimentally observed. Recent calculations and experimental determination of the optical phonons of graphene reveal Kohn anomalies at high-symmetry points in the Brillouin zone. They also show that the Born– Oppenheimer principle breaks down for doped graphene. Since a carbon nanotube can be viewed as a rolled-up sheet of graphene, these recent theoretical and experimental results on graphene should be important to researchers working on carbon nanotubes. The goal of this contribution is to review the exciting news about the electronic and phonon states of graphene and to suggest how these discoveries help understand the properties of carbon nanotubes.",
"title": ""
},
{
"docid": "ef2cc160033a30ed1341b45468d93464",
"text": "A number of issues can affect sample size in qualitative research; however, the guiding principle should be the concept of saturation. This has been explored in detail by a number of authors but is still hotly debated, and some say little understood. A sample of PhD studies using qualitative approaches, and qualitative interviews as the method of data collection was taken from theses.com and contents analysed for their sample sizes. Five hundred and sixty studies were identified that fitted the inclusion criteria. Results showed that the mean sample size was 31; however, the distribution was non-random, with a statistically significant proportion of studies, presenting sample sizes that were multiples of ten. These results are discussed in relation to saturation. They suggest a pre-meditated approach that is not wholly congruent with the principles of qualitative research.",
"title": ""
},
{
"docid": "3da64db5e0d9474eb2194e73f71e0d6c",
"text": "Standard cutaneous innervation maps show strict midline demarcation. Although authors of these maps accept variability of peripheral nerve distribution or occasionally even the midline overlap of cutaneous nerves, this concept seems to be neglected by many other anatomists. To support the statement that such transmedian overlap exists, we performed an extensive literature search and found ample evidence for all regions (head/neck, thorax/abdomen, back, perineum, and genitalia) that peripheral nerves cross the midline or communicate across the midline. This concept has substantial clinical implications, most notably in anesthesia and perineural tumor spread. This article serves as a springboard for future anatomical, clinical, and experimental research.",
"title": ""
},
{
"docid": "caf88f7fd5ec7f3a3499f46f541b985b",
"text": "Photo-based question answering is a useful way of finding information about physical objects. Current question answering (QA) systems are text-based and can be difficult to use when a question involves an object with distinct visual features. A photo-based QA system allows direct use of a photo to refer to the object. We develop a three-layer system architecture for photo-based QA that brings together recent technical achievements in question answering and image matching. The first, template-based QA layer matches a query photo to online images and extracts structured data from multimedia databases to answer questions about the photo. To simplify image matching, it exploits the question text to filter images based on categories and keywords. The second, information retrieval QA layer searches an internal repository of resolved photo-based questions to retrieve relevant answers. The third, human-computation QA layer leverages community experts to handle the most difficult cases. A series of experiments performed on a pilot dataset of 30,000 images of books, movie DVD covers, grocery items, and landmarks demonstrate the technical feasibility of this architecture. We present three prototypes to show how photo-based QA can be built into an online album, a text-based QA, and a mobile application.",
"title": ""
},
{
"docid": "47866c8eb518f962213e3a2d8c3ab8d3",
"text": "With the increasing fears of the impacts of the high penetration rates of Photovoltaic (PV) systems, a technical study about their effects on the power quality metrics of the utility grid is required. Since such study requires a complete modeling of the PV system in an electromagnetic transient software environment, PSCAD was chosen. This paper investigates a grid-tied PV system that is prepared in PSCAD. The model consists of PV array, DC link capacitor, DC-DC buck converter, three phase six-pulse inverter, AC inductive filter, transformer and a utility grid equivalent model. The paper starts with investigating the tasks of the different blocks of the grid-tied PV system model. It also investigates the effect of variable atmospheric conditions (irradiation and temperature) on the performance of the different components in the model. DC-DC converter and inverter in this model use PWM and SPWM switching techniques, respectively. Finally, total harmonic distortion (THD) analysis on the inverter output current at PCC will be applied and the obtained THD values will be compared with the limits specified by the regulating standards such as IEEE Std 519-1992.",
"title": ""
},
{
"docid": "f7c2ebd19c41b697d52850a225bfe8a0",
"text": "There is currently a misconception among designers and users of free space laser communication (lasercom) equipment that 1550 nm light suffers from less atmospheric attenuation than 785 or 850 nm light in all weather conditions. This misconception is based upon a published equation for atmospheric attenuation as a function of wavelength, which is used frequently in the free-space lasercom literature. In hazy weather (visibility > 2 km), the prediction of less atmospheric attenuation at 1550 nm is most likely true. However, in foggy weather (visibility < 500 m), it appears that the attenuation of laser light is independent of wavelength, ie. 785 nm, 850 nm, and 1550 nm are all attenuated equally by fog. This same wavelength independence is also observed in snow and rain. This observation is based on an extensive literature search, and from full Mie scattering calculations. A modification to the published equation describing the atmospheric attenuation of laser power, which more accurately describes the effects of fog, is offered. This observation of wavelength-independent attenuation in fog is important, because fog, heavy snow, and extreme rain are the only types of weather that are likely to disrupt short (<500 m) lasercom links. Short lasercom links will be necessary to meet the high availability requirements of the telecommunications industry.",
"title": ""
},
{
"docid": "7d97847e9d62fad542e77779789a1edf",
"text": "We report on the development of new low-cost, compact ultra-wideband microstrip pulse generators capable of varying the pulse duration electronically. These electronically tunable pulse generators generate an initial step function using a step recovery diode, which is then converted into pulses of various durations by alternately switching on one of the switches, realized by PIN diode or MESFET, spatially located along a short-circuited transmission line. Representative pulse-duration variations from 300 to 800 ps have been demonstrated experimentally and theoretically. Good symmetry and low distortion have been achieved for the pulses. Measured results also confirm the simulations.",
"title": ""
},
{
"docid": "a9b96c162e9a7f39a90c294167178c05",
"text": "The performance of automotive radar systems is expected to significantly increase in the near future. With enhanced resolution capabilities more accurate and denser point clouds of traffic participants and roadside infrastructure can be acquired and so the amount of gathered information is growing drastically. One main driver for this development is the global trend towards self-driving cars, which all rely on precise and fine-grained sensor information. New radar signal processing concepts have to be developed in order to provide this additional information. This paper presents a prototype high resolution radar sensor which helps to facilitate algorithm development and verification. The system is operational under real-time conditions and achieves excellent performance in terms of range, velocity and angular resolution. Complex traffic scenarios can be acquired out of a moving test vehicle, which is very close to the target application. First measurement runs on public roads are extremely promising and show an outstanding single-snapshot performance. Complex objects can be precisely located and recognized by their contour shape. In order to increase the possible recording time, the raw data rate is reduced by several orders of magnitude in real-time by means of constant false alarm rate (CFAR) processing. The number of target cells can still exceed more than 10 000 points in a single measurement cycle for typical road scenarios.",
"title": ""
},
{
"docid": "2450127a12eeb12f19d2b6430058bf70",
"text": "Web Services can be used as a communication structure for embedded devices. Out of numerous Web Service specifications there is a subset (Profile) to implement Web Services on resource constrained devices, called Devices Profile for Web Services (DPWS). The resulting service oriented architecture enables the user to discover new devices, e.g., in the local smart home network. The open standards for Web Services further define a metadata format for service description. Besides the simple invocation of service operations, an eventing mechanism is specified. A client which subscribes to an event will receive notifications. Those features and the inherent Plug&Play capability provided by Web Services are suitable to connect smart home devices in a user-friendly manner. However, DPWS specifies no standard procedure to combine multiple devices with each other to build more complex applications. Therefore, new concepts for embedded Web Service orchestration are needed. Some commercial solutions use a central hub, which represents a Single Point of Failure (SPoF). Hence, a failure would lead to a breakdown of all smart Internet of Things (IoT) applications and decrease the user acceptance. We propose a concept where no central broker is needed by defining a Configuration Service that runs on multiple devices. Based on that service a smartphone app is used to establish trigger-action rules between devices. A prototype is implemented as a proof-of-concept for different smart home devices. Furthermore, a mobile Android application to find and orchestrate the devices is presented.",
"title": ""
},
{
"docid": "a33cf416cf48f67cd0a91bf3a385d303",
"text": "Generative neural samplers are probabilistic models that implement sampling using feedforward neural networks: they take a random input vector and produce a sample from a probability distribution defined by the network weights. These models are expressive and allow efficient computation of samples and derivatives, but cannot be used for computing likelihoods or for marginalization. The generativeadversarial training method allows to train such models through the use of an auxiliary discriminative neural network. We show that the generative-adversarial approach is a special case of an existing more general variational divergence estimation approach. We show that any f -divergence can be used for training generative neural samplers. We discuss the benefits of various choices of divergence functions on training complexity and the quality of the obtained generative models.",
"title": ""
},
{
"docid": "8a478da1c2091525762db35f1ac7af58",
"text": "In this paper, we present the design and performance of a portable, arbitrary waveform, multichannel constant current electrotactile stimulator that costs less than $30 in components. The stimulator consists of a stimulation controller and power supply that are less than half the size of a credit card and can produce ±15 mA at ±150 V. The design is easily extensible to multiple independent channels that can receive an arbitrary waveform input from a digital-to-analog converter, drawing only 0.9 W/channel (lasting 4–5 hours upon continuous stimulation using a 9 V battery). Finally, we compare the performance of our stimulator to similar stimulators both commercially available and developed in research.",
"title": ""
},
{
"docid": "607d632419c47c3568bfa00ec48eb71d",
"text": "Recent work has shown that depth estimation from a stereo pair of images can be formulated as a supervised learning task to be resolved with convolutional neural networks (CNNs). However, current architectures rely on patch-based Siamese networks, lacking the means to exploit context information for finding correspondence in ill-posed regions. To tackle this problem, we propose PSMNet, a pyramid stereo matching network consisting of two main modules: spatial pyramid pooling and 3D CNN. The spatial pyramid pooling module takes advantage of the capacity of global context information by aggregating context in different scales and locations to form a cost volume. The 3D CNN learns to regularize cost volume using stacked multiple hourglass networks in conjunction with intermediate supervision. The proposed approach was evaluated on several benchmark datasets. Our method ranked first in the KITTI 2012 and 2015 leaderboards before March 18, 2018. The codes of PSMNet are available at: https://github.com/JiaRenChang/PSMNet.",
"title": ""
},
{
"docid": "fb975eb9b65916dcc96415036ee02566",
"text": "We are developing a sensor system for use in clinical gait analysis. This research involves the development of an on-shoe device that can be used for continuous and real -time monitoring of gait. This paper presents the design of an instrumente d insole and a removable instrumented shoe attachment. Transmission of the data is in real -time and wireless, providing information about the three-dimensional motion, position, and pressure distribution of the foot. Using pattern recognition and numerical analysis of the calibrated sensor outputs, algorithms will be developed to analyze the data in real -time. Results will be validated by comparison to results from a commerical optical gait analysis system at the Massachusetts General Hospital (MGH) Biomoti on Lab.",
"title": ""
},
{
"docid": "21502c42ef7a8e342334b93b1b5069d6",
"text": "Motivations to engage in retail online shopping can include both utilitarian and hedonic shopping dimensions. To cater to these consumers, online retailers can create a cognitively and esthetically rich shopping environment, through sophisticated levels of interactive web utilities and features, offering not only utilitarian benefits and attributes but also providing hedonic benefits of enjoyment. Since the effect of interactive websites has proven to stimulate online consumer’s perceptions, this study presumes that websites with multimedia rich interactive utilities and features can influence online consumers’ shopping motivations and entice them to modify or even transform their original shopping predispositions by providing them with attractive and enhanced interactive features and controls, thus generating a positive attitude towards products and services offered by the retailer. This study seeks to explore the effects of Web interactivity on online consumer behavior through an attitudinal model of technology acceptance.",
"title": ""
}
] |
scidocsrr
|
fae3095469e50fac6324869cb0f85ae0
|
Comparative Studies of Passive Imaging in Terahertz and Mid-Wavelength Infrared Ranges for Object Detection
|
[
{
"docid": "f1b137d4ac36e141415963d6fab14918",
"text": "Passive equipments operating in the 30-300 GHz (millimeter wave) band are compared to those in the 300 GHz-3 THz (submillimeter band). Equipments operating in the submillimeter band can measure distance and also spectral information and have been used to address new opportunities in security. Solid state spectral information is available in the submillimeter region making it possible to identify materials, whereas in millimeter region bulk optical properties determine the image contrast. The optical properties in the region from 30 GHz to 3 THz are discussed for some typical inorganic and organic solids. In the millimeter-wave region of the spectrum, obscurants such as poor weather, dust, and smoke can be penetrated and useful imagery generated for surveillance. In the 30 GHz-3 THz region dielectrics such as plastic and cloth are also transparent and the detection of contraband hidden under clothing is possible. A passive millimeter-wave imaging concept based on a folded Schmidt camera has been developed and applied to poor weather navigation and security. The optical design uses a rotating mirror and is folded using polarization techniques. The design is very well corrected over a wide field of view making it ideal for surveillance and security. This produces a relatively compact imager which minimizes the receiver count.",
"title": ""
},
{
"docid": "22285844f638715765d21bff139d1bb1",
"text": "The field of Terahertz (THz) radiation, electromagnetic energy, between 0.3 to 3 THz, has seen intense interest recently, because it combines some of the best properties of IR along with those of RF. For example, THz radiation can penetrate fabrics with less attenuation than IR, while its short wavelength maintains comparable imaging capabilities. We discuss major challenges in the field: designing systems and applications which fully exploit the unique properties of THz radiation. To illustrate, we present our reflective, radar-inspired THz imaging system and results, centered on biomedical burn imaging and skin hydration, and discuss challenges and ongoing research.",
"title": ""
}
] |
[
{
"docid": "44b71e1429f731cc2d91f919182f95a4",
"text": "Power management of multi-core processors is extremely important because it allows power/energy savings when all cores are not used. OS directed power management according to ACPI (Advanced Power and Configurations Interface) specifications is the common approach that industry has adopted for this purpose. While operating systems are capable of such power management, heuristics for effectively managing the power are still evolving. The granularity at which the cores are slowed down/turned off should be designed considering the phase behavior of the workloads. Using 3-D, video creation, office and e-learning applications from the SYSmark benchmark suite, we study the challenges in power management of a multi-core processor such as the AMD Quad-Core Opteron\" and Phenom\". We unveil effects of the idle core frequency on the performance and power of the active cores. We adjust the idle core frequency to have the least detrimental effect on the active core performance. We present optimized hardware and operating system configurations that reduce average active power by 30% while reducing performance by an average of less than 3%. We also present complete system measurements and power breakdown between the various systems components using the SYSmark and SPEC CPU workloads. It is observed that the processor core and the disk consume the most power, with core having the highest variability.",
"title": ""
},
{
"docid": "56525ce9536c3c8ea03ab6852b854e95",
"text": "The Distributed Denial of Service (DDoS) attacks are a serious threat in today's Internet where packets from large number of compromised hosts block the path to the victim nodes and overload the victim servers. In the newly proposed future Internet Architecture, Named Data Networking (NDN), the architecture itself has prevention measures to reduce the overload to the servers. This on the other hand increases the work and security threats to the intermediate routers. Our project aims at identifying the DDoS attack in NDN which is known as Interest flooding attack, mitigate the consequence of it and provide service to the legitimate users. We have developed a game model for the DDoS attacks and provide possible countermeasures to stop the flooding of interests. Through this game theory model, we either forward or redirect or drop the incoming interest packets thereby reducing the PIT table consumption. This helps in identifying the nodes that send malicious interest packets and eradicate their actions of sending malicious interests further. The main highlight of this work is that we have implemented the Game Theory model in the NDN architecture. It was primarily imposed for the IP internet architecture.",
"title": ""
},
{
"docid": "75961ecd0eadf854ad9f7d0d76f7e9c8",
"text": "This paper presents the design of a microstrip-CPW transition where the CPW line propagates close to slotline mode. This design allows the solution to be determined entirely though analytical techniques. In addition, a planar via-less microwave crossover using this technique is proposed. The experimental results at 5 GHz show that the crossover has a minimum isolation of 32 dB. It also has low in-band insertion loss and return loss of 1.2 dB and 18 dB respectively over more than 44 % of bandwidth.",
"title": ""
},
{
"docid": "c936e76e8db97b640a4123e66169d1b8",
"text": "Varying philosophical and theoretical orientations to qualitative inquiry remind us that issues of quality and credibility intersect with audience and intended research purposes. This overview examines ways of enhancing the quality and credibility of qualitative analysis by dealing with three distinct but related inquiry concerns: rigorous techniques and methods for gathering and analyzing qualitative data, including attention to validity, reliability, and triangulation; the credibility, competence, and perceived trustworthiness of the qualitative researcher; and the philosophical beliefs of evaluation users about such paradigm-based preferences as objectivity versus subjectivity, truth versus perspective, and generalizations versus extrapolations. Although this overview examines some general approaches to issues of credibility and data quality in qualitative analysis, it is important to acknowledge that particular philosophical underpinnings, specific paradigms, and special purposes for qualitative inquiry will typically include additional or substitute criteria for assuring and judging quality, validity, and credibility. Moreover, the context for these considerations has evolved. In early literature on evaluation methods the debate between qualitative and quantitative methodologists was often strident. In recent years the debate has softened. A consensus has gradually emerged that the important challenge is to match appropriately the methods to empirical questions and issues, and not to universally advocate any single methodological approach for all problems.",
"title": ""
},
{
"docid": "3f80322512497ceb4129d1f10a6dbf99",
"text": "Alzheimer's dis ease (AD) is a leading cause of mortality in the developed world with 70% risk attributable to genetics. The remaining 30% of AD risk is hypothesized to include environmental factors and human lifestyle patterns. Environmental factors possibly include inorganic and organic hazards, exposure to toxic metals (aluminium, copper), pesticides (organochlorine and organophosphate insecticides), industrial chemicals (flame retardants) and air pollutants (particulate matter). Long term exposures to these environmental contaminants together with bioaccumulation over an individual's life-time are speculated to induce neuroinflammation and neuropathology paving the way for developing AD. Epidemiologic associations between environmental contaminant exposures and AD are still limited. However, many in vitro and animal studies have identified toxic effects of environmental contaminants at the cellular level, revealing alterations of pathways and metabolisms associated with AD that warrant further investigations. This review provides an overview of in vitro, animal and epidemiological studies on the etiology of AD, highlighting available data supportive of the long hypothesized link between toxic environmental exposures and development of AD pathology.",
"title": ""
},
{
"docid": "575d8fed62c2afa1429d16444b6b173c",
"text": "Research into learning and teaching in higher education over the last 25 years has provided a variety of concepts, methods, and findings that are of both theoretical interest and practical relevance. It has revealed the relationships between students’ approaches to studying, their conceptions of learning, and their perceptions of their academic context. It has revealed the relationships between teachers’ approaches to teaching, their conceptions of teaching, and their perceptions of the teaching environment. And it has provided a range of tools that can be exploited for developing our understanding of learning and teaching in particular contexts and for assessing and enhancing the student experience on specific courses and programs.",
"title": ""
},
{
"docid": "49c7b5cab51301d8b921fa87d6c0b1ff",
"text": "We introduce the input output automa ton a simple but powerful model of computation in asynchronous distributed networks With this model we are able to construct modular hierarchical correct ness proofs for distributed algorithms We de ne this model and give an interesting example of how it can be used to construct such proofs",
"title": ""
},
{
"docid": "0d51dc0edc9c4e1c050b536c7c46d49d",
"text": "MOTIVATION\nThe identification of risk-associated genetic variants in common diseases remains a challenge to the biomedical research community. It has been suggested that common statistical approaches that exclusively measure main effects are often unable to detect interactions between some of these variants. Detecting and interpreting interactions is a challenging open problem from the statistical and computational perspectives. Methods in computing science may improve our understanding on the mechanisms of genetic disease by detecting interactions even in the presence of very low heritabilities.\n\n\nRESULTS\nWe have implemented a method using Genetic Programming that is able to induce a Decision Tree to detect interactions in genetic variants. This method has a cross-validation strategy for estimating classification and prediction errors and tests for consistencies in the results. To have better estimates, a new consistency measure that takes into account interactions and can be used in a genetic programming environment is proposed. This method detected five different interaction models with heritabilities as low as 0.008 and with prediction errors similar to the generated errors.\n\n\nAVAILABILITY\nInformation on the generated data sets and executable code is available upon request.",
"title": ""
},
{
"docid": "0e19123e438f39c4404d4bd486348247",
"text": "Boundary and edge cues are highly beneficial in improving a wide variety of vision tasks such as semantic segmentation, object recognition, stereo, and object proposal generation. Recently, the problem of edge detection has been revisited and significant progress has been made with deep learning. While classical edge detection is a challenging binary problem in itself, the category-aware semantic edge detection by nature is an even more challenging multi-label problem. We model the problem such that each edge pixel can be associated with more than one class as they appear in contours or junctions belonging to two or more semantic classes. To this end, we propose a novel end-to-end deep semantic edge learning architecture based on ResNet and a new skip-layer architecture where category-wise edge activations at the top convolution layer share and are fused with the same set of bottom layer features. We then propose a multi-label loss function to supervise the fused activations. We show that our proposed architecture benefits this problem with better performance, and we outperform the current state-of-the-art semantic edge detection methods by a large margin on standard data sets such as SBD and Cityscapes.",
"title": ""
},
{
"docid": "6379e89db7d9063569a342ef2056307a",
"text": "Grounded Theory is a research method that generates theory from data and is useful for understanding how people resolve problems that are of concern to them. Although the method looks deceptively simple in concept, implementing Grounded Theory research can often be confusing in practice. Furthermore, despite many papers in the social science disciplines and nursing describing the use of Grounded Theory, there are very few examples and relevant guides for the software engineering researcher. This paper describes our experience using classical (i.e., Glaserian) Grounded Theory in a software engineering context and attempts to interpret the canons of classical Grounded Theory in a manner that is relevant to software engineers. We provide model to help the software engineering researchers interpret the often fuzzy definitions found in Grounded Theory texts and share our experience and lessons learned during our research. We summarize these lessons learned in a set of fifteen guidelines.",
"title": ""
},
{
"docid": "68a0e00fccbf8658186f31915479708e",
"text": "Semantic amodal segmentation is a recently proposed extension to instance-aware segmentation that includes the prediction of the invisible region of each object instance. We present the first all-in-one end-to-end trainable model for semantic amodal segmentation that predicts the amodal instance masks as well as their visible and invisible part in a single forward pass. In a detailed analysis, we provide experiments to show which architecture choices are beneficial for an all-in-one amodal segmentation model. On the COCO amodal dataset, our model outperforms the current baseline for amodal segmentation by a large margin. To further evaluate our model, we provide two new datasets with ground truth for semantic amodal segmentation, D2S amodal and COCOA cls. For both datasets, our model provides a strong baseline performance. Using special data augmentation techniques, we show that amodal segmentation on D2S amodal is possible with reasonable performance, even without providing amodal training data.",
"title": ""
},
{
"docid": "a0fcd09ea8f29a0827385ae9f48ddd44",
"text": "Networks play a central role in modern data analysis, enabling us to reason about systems by studying the relationships between their parts. Most often in network analysis, the edges are given. However, in many systems it is difficult or impossible to measure the network directly. Examples of latent networks include economic interactions linking financial instruments and patterns of reciprocity in gang violence. In these cases, we are limited to noisy observations of events associated with each node. To enable analysis of these implicit networks, we develop a probabilistic model that combines mutuallyexciting point processes with random graph models. We show how the Poisson superposition principle enables an elegant auxiliary variable formulation and a fully-Bayesian, parallel inference algorithm. We evaluate this new model empirically on several datasets.",
"title": ""
},
{
"docid": "8d3c1e649e40bf72f847a9f8ac6edf38",
"text": "Many organizations are forming “virtual teams” of geographically distributed knowledge workers to collaborate on a variety of workplace tasks. But how effective are these virtual teams compared to traditional face-to-face groups? Do they create similar teamwork and is information exchanged as effectively? An exploratory study of a World Wide Web-based asynchronous computer conference system known as MeetingWebTM is presented and discussed. It was found that teams using this computer-mediated communication system (CMCS) could not outperform traditional (face-to-face) teams under otherwise comparable circumstances. Further, relational links among team members were found to be a significant contributor to the effectiveness of information exchange. Though virtual and face-to-face teams exhibit similar levels of communication effectiveness, face-to-face team members report higher levels of satisfaction. Therefore, the paper presents steps that can be taken to improve the interaction experience of virtual teams. Finally, guidelines for creating and managing virtual teams are suggested, based on the findings of this research and other authoritative sources. Subject Areas: Collaboration, Computer Conference, Computer-mediated Communication Systems (CMCS), Internet, Virtual Teams, and World Wide Web. *The authors wish to thank the Special Focus Editor and the reviewers for their thoughtful critique of the earlier versions of this paper. We also wish to acknowledge the contributions of the Northeastern University College of Business Administration and its staff, which provided the web server and the MeetingWebTM software used in these experiments.",
"title": ""
},
{
"docid": "8e7b273daa9d91e010a9ea02b4b7658c",
"text": "This collection of invited papers covers a lot of ground in its nearly 800 pages, so any review of reasonable length will necessarily be selective. However, there are a number of features that make the book as a whole a comparatively easy and thoroughly rewarding read. Multiauthor compendia of this kind are often disjointed, with very little uniformity from chapter to chapter in terms of breadth, depth, and format. Such is not the case here. Breadth and depth of treatment are surprisingly consistent, with coherent formats that often include both a little history of the field and some thoughts about the future. The volume has a very logical structure in which the chapters flow and follow on from each other in an orderly fashion. There are also many cross-references between chapters, which allow the authors to build upon the foundation of one another's work and eliminate redundancies. Specifically, the contents consist of 38 survey papers grouped into three parts: Fundamentals; Processes, Methods, and Resources; and Applications. Taken together, they provide both a comprehensive introduction to the field and a useful reference volume. In addition to the usual author and subject matter indices, there is a substantial glossary that students will find invaluable. Each chapter ends with a bibliography, together with tips for further reading and mention of other resources, such as conferences , workshops, and URLs. Part I covers the full spectrum of linguistic levels of analysis from a largely theoretical point of view, including phonology, morphology, lexicography, syntax, semantics, discourse, and dialogue. The result is a layered approach to the subject matter that allows each new level to take the previous level for granted. However, the authors do not typically restrict themselves to linguistic theory. For example, Hanks's chapter on lexicography characterizes the deficiencies of both hand-built and corpus-based dictionaries , as well as discussing other practical problems, such as how to link meaning and use. The phonology and morphology chapters provide fine introductions to these topics, which tend to receive short shrift in many NLP and AI texts. Part I ends with two chapters, one on formal grammars and one on complexity, which round out the computational aspect. This is an excellent pairing, with Martín-Vide's thorough treatment of regular and context-free languages leading into Carpen-ter's masterly survey of problem complexity and practical efficiency. Part II is more task based, with a focus on such activities as text segmentation, …",
"title": ""
},
{
"docid": "4457c0b480ec9f3d503aa89c6bbf03b9",
"text": "An output-capacitorless low-dropout regulator (LDO) with a direct voltage-spike detection circuit is presented in this paper. The proposed voltage-spike detection is based on capacitive coupling. The detection circuit makes use of the rapid transient voltage at the LDO output to increase the bias current momentarily. Hence, the transient response of the LDO is significantly enhanced due to the improvement of the slew rate at the gate of the power transistor. The proposed voltage-spike detection circuit is applied to an output-capacitorless LDO implemented in a standard 0.35-¿m CMOS technology (where VTHN ¿ 0.5 V and VTHP ¿ -0.65 V). Experimental results show that the LDO consumes 19 ¿A only. It regulates the output at 0.8 V from a 1-V supply, with dropout voltage of 200 mV at the maximum output current of 66.7 mA. The voltage spike and the recovery time of the LDO with the proposed voltage-spike detection circuit are reduced to about 70 mV and 3 ¿s, respectively, whereas they are more than 420 mV and 30 ¿s for the LDO without the proposed detection circuit.",
"title": ""
},
{
"docid": "f5f3e946634af981f9a7e00ad9a0296c",
"text": "We investigate the use of machine learning algorithms to classify the topic of messages published in Online Social Networks using as input solely user interaction data, instead of the actual message content. During a period of six months, we monitored and gathered data from users interacting with news messages on Twitter, creating thousands of information diffusion processes. The data set presented regular patterns on how messages were spread over the network by users, depending on its content, so we could build classifiers to predict the topic of a message using as input only the information of which users shared such message. Thus, we demonstrate the explanatory power of user behavior data on identifying content present in Social Networks, proposing techniques for topic classification that can be used to assist traditional content identification strategies (such as natural language or image processing) in challenging contexts, or be applied in scenarios with limited information access.",
"title": ""
},
{
"docid": "418e29af01be9655c06df63918f41092",
"text": "A major goal of unsupervised learning is to discover data representations that are useful for subsequent tasks, without access to supervised labels during training. Typically, this goal is approached by minimizing a surrogate objective, such as the negative log likelihood of a generative model, with the hope that representations useful for subsequent tasks will arise as a side effect. In this work, we propose instead to directly target a later desired task by meta-learning an unsupervised learning rule, which leads to representations useful for that task. Here, our desired task (meta-objective) is the performance of the representation on semi-supervised classification, and we meta-learn an algorithm – an unsupervised weight update rule – that produces representations that perform well under this meta-objective. Additionally, we constrain our unsupervised update rule to a be a biologically-motivated, neuron-local function, which enables it to generalize to novel neural network architectures. We show that the meta-learned update rule produces useful features and sometimes outperforms existing unsupervised learning techniques. We show that the metalearned unsupervised update rule generalizes to train networks with different widths, depths, and nonlinearities. It also generalizes to train on data with randomly permuted input dimensions and even generalizes from image datasets to a text task.",
"title": ""
},
{
"docid": "1256f0799ed585092e60b50fb41055be",
"text": "So far, plant identification has challenges for sev eral researchers. Various methods and features have been proposed. However, there are still many approaches could be investigated to develop robust plant identification systems. This paper reports several xperiments in using Zernike moments to build folia ge plant identification systems. In this case, Zernike moments were combined with other features: geometr ic features, color moments and gray-level co-occurrenc e matrix (GLCM). To implement the identifications systems, two approaches has been investigated. Firs t approach used a distance measure and the second u sed Probabilistic Neural Networks (PNN). The results sh ow that Zernike Moments have a prospect as features in leaf identification systems when they are combin ed with other features.",
"title": ""
},
{
"docid": "c581f1797921247e9674c06b49c1b055",
"text": "Service organizations are increasingly utilizing advanced information and communication technologies, such as the Internet, in hopes of improving the efficiency, cost-effectiveness, and/or quality of their customer-facing operations. More of the contact a customer has with the firm is likely to be with the back-office and, therefore, mediated by technology. While previous operations management research has been important for its contributions to our understanding of customer contact in face-to-facesettings, considerably less work has been done to improve our understanding of customer contact in what we refer to as technology-mediated settings (e.g., via telephone, instant messaging (IM), or email). This paper builds upon the service operations management (SOM) literature on customer contact by theoretically defining and empirically developing new multi-item measurement scales specifically designed for assessing tech ology-mediated customer contact. Seminal works on customer contact theory and its empirical measurement are employed to provide a foundation for extending these concepts to technology-mediated contexts. We also draw upon other important frameworks, including the Service Profit Chain, the Theory of Planned Behavior, and the concept of media/information richness, in order to identify and define our constructs. We follow a rigorous empirical scale development process to create parsimonious sets of survey items that exhibit satisfactory levels of reliability and validity to be useful in advancing SOM empirical research in the emerging Internet-enabled back-office. © 2004 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "35c904cdbaddec5e7cd634978c0b415d",
"text": "Life-long visual localization is one of the most challenging topics in robotics over the last few years. The difficulty of this task is in the strong appearance changes that a place suffers due to dynamic elements, illumination, weather or seasons. In this paper, we propose a novel method (ABLE-M) to cope with the main problems of carrying out a robust visual topological localization along time. The novelty of our approach resides in the description of sequences of monocular images as binary codes, which are extracted from a global LDB descriptor and efficiently matched using FLANN for fast nearest neighbor search. Besides, an illumination invariant technique is applied. The usage of the proposed binary description and matching method provides a reduction of memory and computational costs, which is necessary for long-term performance. Our proposal is evaluated in different life-long navigation scenarios, where ABLE-M outperforms some of the main state-of-the-art algorithms, such as WI-SURF, BRIEF-Gist, FAB-MAP or SeqSLAM. Tests are presented for four public datasets where a same route is traversed at different times of day or night, along the months or across all four seasons.",
"title": ""
}
] |
scidocsrr
|
871f6d5657eda9c4a4b2f26e436bbe77
|
Fast-Slow Recurrent Neural Networks
|
[
{
"docid": "c10dd691e79d211ab02f2239198af45c",
"text": "Neural networks are powerful and flexible models that work well for many difficult learning tasks in image, speech and natural language understanding. Despite their success, neural networks are still hard to design. In this paper, we use a recurrent network to generate the model descriptions of neural networks and train this RNN with reinforcement learning to maximize the expected accuracy of the generated architectures on a validation set. On the CIFAR-10 dataset, our method, starting from scratch, can design a novel network architecture that rivals the best human-invented architecture in terms of test set accuracy. Our CIFAR-10 model achieves a test error rate of 3.84, which is only 0.1 percent worse and 1.2x faster than the current state-of-the-art model. On the Penn Treebank dataset, our model can compose a novel recurrent cell that outperforms the widely-used LSTM cell, and other state-of-the-art baselines. Our cell achieves a test set perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than the previous state-ofthe-art.",
"title": ""
},
{
"docid": "7c55398f45928f26e0af812a7c311c6f",
"text": "We have already shown that extracting long-term dependencies from sequential data is difficult, both for determimstic dynamical systems such as recurrent networks, and probabilistic models such as hidden Markov models (HMMs) or input/output hidden Markov models (IOHMMs). In practice, to avoid this problem, researchers have used domain specific a-priori knowledge to give meaning to the hidden or state variables representing past context. In this paper, we propose to use a more general type of a-priori knowledge, namely that the temporal dependencIes are structured hierarchically. This implies that long-term dependencies are represented by variables with a long time scale. This principle is applied to a recurrent network which includes delays and multiple time scales. Experiments confirm the advantages of such structures. A similar approach is proposed for HMMs and IOHMMs.",
"title": ""
},
{
"docid": "0178f7e0f0db3dac510a8b8a94767f34",
"text": "We propose a novel method of regularization for recurrent neural networks called suprisal-driven zoneout. In this method, states zoneout (maintain their previous value rather than updating), when the suprisal (discrepancy between the last state’s prediction and target) is small. Thus regularization is adaptive and input-driven on a per-neuron basis. We demonstrate the effectiveness of this idea by achieving state-of-the-art bits per character of 1.31 on the Hutter Prize Wikipedia dataset, significantly reducing the gap to the best known highly-engineered compression methods.",
"title": ""
}
] |
[
{
"docid": "0f505d193991bd0e3186409290e56217",
"text": "This stamp, honoring a Mexican artist who has transcended “la frontera” and has become and icon to Hispanics, feminists, and art lovers, will be a further reminder of the continuous cultural contributions of Latinos to the United States. (Cecilia Alvear, President of National Association of Hispanic Journalists (NAHJ) on the occasion of the introduction of the Frida Kahlo U.S. postage stamp; 2001; emphasis added)",
"title": ""
},
{
"docid": "2cfd5dcf7aa6710e71ac837f4192afc3",
"text": "In this paper, we present the depth-adaptive deep neural network using a depth map for semantic segmentation. Typical deep neural networks receive inputs at the predetermined locations regardless of the distance from the camera. This fixed receptive field presents a challenge to generalize the features of objects at various distances in neural networks. Specifically, the predetermined receptive fields are too small at a short distance, and vice versa. To overcome this challenge, we develop a neural network that is able to adapt the receptive field not only for each layer but also for each neuron at the spatial location. To adjust the receptive field, we propose the depth-adaptive multiscale (DaM) convolution layer consisting of the adaptive perception neuron and the in-layer multiscale neuron. The adaptive perception neuron is to adjust the receptive field at each spatial location using the corresponding depth information. The in-layer multiscale neuron is to apply the different size of the receptive field at each feature space to learn features at multiple scales. The proposed DaM convolution is applied to two fully convolutional neural networks. We demonstrate the effectiveness of the proposed neural networks on the publicly available RGB-D dataset for semantic segmentation and the novel hand segmentation dataset for hand-object interaction. The experimental results show that the proposed method outperforms the state-of-the-art methods without any additional layers or preprocessing/postprocessing.",
"title": ""
},
{
"docid": "93f7a6057bf0f446152daf3233d000aa",
"text": "Given a stream of depth images with a known cuboid reference object present in the scene, we propose a novel approach for accurate camera tracking and volumetric surface reconstruction in real-time. Our contribution in this paper is threefold: (a) utilizing a priori knowledge of the precisely manufactured cuboid reference object, we keep drift-free camera tracking without explicit global optimization; (b) we improve the fineness of the volumetric surface representation by proposing a prediction-corrected data fusion strategy rather than a simple moving average, which enables accurate reconstruction of high-frequency details such as the sharp edges of objects and geometries of high curvature; (c) we introduce a benchmark dataset CU3D that contains both synthetic and real-world scanning sequences with ground-truth camera trajectories and surface models for the quantitative evaluation of 3D reconstruction algorithms. We test our algorithm on our dataset and demonstrate its accuracy compared with other state-of-the-art algorithms. We release both our dataset and code as open-source (https://github.com/zhangxaochen/CuFusion) for other researchers to reproduce and verify our results.",
"title": ""
},
{
"docid": "461d42e45c0ebcfeeb074904957b943c",
"text": "Quadratic discriminant analysis is a common tool for classification, but estimation of the Gaussian parameters can be ill-posed. This paper contains theoretical and algorithmic contributions to Bayesian estimation for quadratic discriminant analysis. A distribution-based Bayesian classifier is derived using information geometry. Using a calculus of variations approach to define a functional Bregman divergence for distributions, it is shown that the Bayesian distribution-based classifier that minimizes the expected Bregman divergence of each class conditional distribution also minimizes the expected misclassification cost. A series approximation is used to relate regularized discriminant analysis to Bayesian discriminant analysis. A new Bayesian quadratic discriminant analysis classifier is proposed where the prior is defined using a coarse estimate of the covariance based on the training data; this classifier is termed BDA7. Results on benchmark data sets and simulations show that BDA7 performance is competitive with, and in some cases significantly better than, regularized quadratic discriminant analysis and the cross-validated Bayesian quadratic discriminant analysis classifier Quadratic Bayes.",
"title": ""
},
{
"docid": "7706e41b7c79a8c290d0f4f580fea534",
"text": "For various reasons, the cloud computing paradigm is unable to meet certain requirements (e.g. low latency and jitter, context awareness, mobility support) that are crucial for several applications (e.g. vehicular networks, augmented reality). To fulfil these requirements, various paradigms, such as fog computing, mobile edge computing, and mobile cloud computing, have emerged in recent years. While these edge paradigms share several features, most of the existing research is compartmentalised; no synergies have been explored. This is especially true in the field of security, where most analyses focus only on one edge paradigm, while ignoring the others. The main goal of this study is to holistically analyse the security threats, challenges, and mechanisms inherent in all edge paradigms, while highlighting potential synergies and venues of collaboration. In our results, we will show that all edge paradigms should consider the advances in other paradigms.",
"title": ""
},
{
"docid": "2651e41af0ed03a1078197bcde20a7d3",
"text": "The use of automated blood pressure (BP) monitoring is growing as it does not require much expertise and can be performed by patients several times a day at home. Oscillometry is one of the most common measurement methods used in automated BP monitors. A review of the literature shows that a large variety of oscillometric algorithms have been developed for accurate estimation of BP but these algorithms are scattered in many different publications or patents. Moreover, considering that oscillometric devices dominate the home BP monitoring market, little effort has been made to survey the underlying algorithms that are used to estimate BP. In this review, a comprehensive survey of the existing oscillometric BP estimation algorithms is presented. The survey covers a broad spectrum of algorithms including the conventional maximum amplitude and derivative oscillometry as well as the recently proposed learning algorithms, model-based algorithms, and algorithms that are based on analysis of pulse morphology and pulse transit time. The aim is to classify the diverse underlying algorithms, describe each algorithm briefly, and discuss their advantages and disadvantages. This paper will also review the artifact removal techniques in oscillometry and the current standards for the automated BP monitors.",
"title": ""
},
{
"docid": "1924730db532936166d07c6bab058800",
"text": "The rising popularity of digital table surfaces has spawned considerable interest in new interaction techniques. Most interactions fall into one of two modalities: 1) direct touch and multi-touch (by hand and by tangibles) directly on the surface, and 2) hand gestures above the surface. The limitation is that these two modalities ignore the rich interaction space between them. To move beyond this limitation, we first contribute a unification of these discrete interaction modalities called the continuous interaction space. The idea is that many interaction techniques can be developed that go beyond these two modalities, where they can leverage the space between them. That is, we believe that the underlying system should treat the space on and above the surface as a continuum, where a person can use touch, gestures, and tangibles anywhere in the space and naturally move between them. Our second contribution illustrates this, where we introduce a variety of interaction categories that exploit the space between these modalities. For example, with our Extended Continuous Gestures category, a person can start an interaction with a direct touch and drag, then naturally lift off the surface and continue their drag with a hand gesture over the surface. For each interaction category, we implement an example (or use prior work) that illustrates how that technique can be applied. In summary, our primary contribution is to broaden the design space of interaction techniques for digital surfaces, where we populate the continuous interaction space both with concepts and examples that emerge from considering this space as a continuum.",
"title": ""
},
{
"docid": "1acea5d872937a8929a174916f53303d",
"text": "The pattern of muscle glycogen synthesis following glycogen-depleting exercise occurs in two phases. Initially, there is a period of rapid synthesis of muscle glycogen that does not require the presence of insulin and lasts about 30-60 minutes. This rapid phase of muscle glycogen synthesis is characterised by an exercise-induced translocation of glucose transporter carrier protein-4 to the cell surface, leading to an increased permeability of the muscle membrane to glucose. Following this rapid phase of glycogen synthesis, muscle glycogen synthesis occurs at a much slower rate and this phase can last for several hours. Both muscle contraction and insulin have been shown to increase the activity of glycogen synthase, the rate-limiting enzyme in glycogen synthesis. Furthermore, it has been shown that muscle glycogen concentration is a potent regulator of glycogen synthase. Low muscle glycogen concentrations following exercise are associated with an increased rate of glucose transport and an increased capacity to convert glucose into glycogen. The highest muscle glycogen synthesis rates have been reported when large amounts of carbohydrate (1.0-1.85 g/kg/h) are consumed immediately post-exercise and at 15-60 minute intervals thereafter, for up to 5 hours post-exercise. When carbohydrate ingestion is delayed by several hours, this may lead to ~50% lower rates of muscle glycogen synthesis. The addition of certain amino acids and/or proteins to a carbohydrate supplement can increase muscle glycogen synthesis rates, most probably because of an enhanced insulin response. However, when carbohydrate intake is high (> or =1.2 g/kg/h) and provided at regular intervals, a further increase in insulin concentrations by additional supplementation of protein and/or amino acids does not further increase the rate of muscle glycogen synthesis. Thus, when carbohydrate intake is insufficient (<1.2 g/kg/h), the addition of certain amino acids and/or proteins may be beneficial for muscle glycogen synthesis. Furthermore, ingestion of insulinotropic protein and/or amino acid mixtures might stimulate post-exercise net muscle protein anabolism. Suggestions have been made that carbohydrate availability is the main limiting factor for glycogen synthesis. A large part of the ingested glucose that enters the bloodstream appears to be extracted by tissues other than the exercise muscle (i.e. liver, other muscle groups or fat tissue) and may therefore limit the amount of glucose available to maximise muscle glycogen synthesis rates. Furthermore, intestinal glucose absorption may also be a rate-limiting factor for muscle glycogen synthesis when large quantities (>1 g/min) of glucose are ingested following exercise.",
"title": ""
},
{
"docid": "0347347608738b966ca4a62dfb37fdd7",
"text": "Much of the work done in the field of tangible interaction has focused on creating tools for learning; however, in many cases, little evidence has been provided that tangible interfaces offer educational benefits compared to more conventional interaction techniques. In this paper, we present a study comparing the use of a tangible and a graphical interface as part of an interactive computer programming and robotics exhibit that we designed for the Boston Museum of Science. In this study, we have collected observations of 260 museum visitors and conducted interviews with 13 family groups. Our results show that visitors found the tangible and the graphical systems equally easy to understand. However, with the tangible interface, visitors were significantly more likely to try the exhibit and significantly more likely to actively participate in groups. In turn, we show that regardless of the condition, involving multiple active participants leads to significantly longer interaction times. Finally, we examine the role of children and adults in each condition and present evidence that children are more actively involved in the tangible condition, an effect that seems to be especially strong for girls.",
"title": ""
},
{
"docid": "ef2898e76ab581478b87674356185c2d",
"text": "This paper presents the theory, design procedure, and implementation of a dual-band planar quadrature hybrid with enhanced bandwidth. The topology of the circuit is a three-branch-line (3-BL) quadrature hybrid, which provides much larger flexibility to allocate the desired operating frequencies and necessary bandwidths than other previously published configurations. A performance comparison with other dual-band planar topologies is presented. Finally, a 3-BL quadrature hybrid for dual band (2.4 and 5 GHz) wireless local area network systems was fabricated, aimed to cover the bands corresponding to the standards IEEE802.11a/b. The measurements show a 16% and 18% bandwidth for the lower and upper frequency, respectively, satisfying and exceeding the bandwidth requirements for the above standards",
"title": ""
},
{
"docid": "b8b3c300b0786c7cf5449945edb2e15c",
"text": "This paper outlines problems that cellular network operators will face as energy-efficient housing becomes more popular. We report measurement results from houses made of modern construction materials that are required to achieve sufficient level of energy-efficiency, but that impact heavily also on radio signal propagation. Energy-efficiency is especially important in northern countries, where houses need to be properly isolated as heating generates a big share of the total energy consumption of households. However, the energy-efficiency trend will also reach rest of the Europe and other warmer countries as the tightening energy-efficiency requirements concern also cooling the houses. The measurement results indicate severe problems originating from radio signal attenuation as it increases up to 35 dB for individual construction materials for cellular frequencies around 2 GHz. From the perspective of actual building penetration losses in modern, energy-efficient houses, average attenuation values even up to 30 dB have been measured. Additional attenuation is very sensitive to buildings materials, but could jeopardize cellular coverage in the future.",
"title": ""
},
{
"docid": "7317ba76ddba2933cdf01d8284fd687e",
"text": "In most of the cases, scientists depend on previous literature which is relevant to their research fields for developing new ideas. However, it is not wise, nor possible, to track all existed publications because the volume of literature collection grows extremely fast. Therefore, researchers generally follow, or cite merely a small proportion of publications which they are interested in. For such a large collection, it is rather interesting to forecast which kind of literature is more likely to attract scientists' response. In this paper, we use the citations as a measurement for the popularity among researchers and study the interesting problem of Citation Count Prediction (CCP) to examine the characteristics for popularity. Estimation of possible popularity is of great significance and is quite challenging. We have utilized several features of fundamental characteristics for those papers that are highly cited and have predicted the popularity degree of each literature in the future. We have implemented a system which takes a series of features of a particular publication as input and produces as output the estimated citation counts of that article after a given time period. We consider several regression models to formulate the learning process and evaluate their performance based on the coefficient of determination (R-square). Experimental results on a real-large data set show that the best predictive model achieves a mean average predictive performance of 0.740 measured in R-square, which significantly outperforms several alternative algorithms.",
"title": ""
},
{
"docid": "259794d0416876b6c490fba53f2eaf69",
"text": "All Rights Reserved © 2012 IJARCET Abstract – Now a days, the classification and grading is performed based on observations and through experience. The system utilizes image-processing techniques to classify and grade fruits. The developed system starts the process by capturing the fruit’s image using a regular digital camera. Then, the image is transmitted to the processing level where feature extraction, classification and grading is done using MATLAB. The fruits are classified based on color and graded based on size. Both classification and grading are realized by Fuzzy Logic approach. The results obtained are very promising.",
"title": ""
},
{
"docid": "619f38266a35e76a77fb4141879e1e68",
"text": "In article various approaches to measurement of efficiency of innovations and the problems arising at their measurement are considered, the system of an indistinct conclusion for the solution of a problem of obtaining recommendations about measurement of efficiency of innovations is offered.",
"title": ""
},
{
"docid": "2e35483beb568ab514601ba21d70c2d3",
"text": "Determining the intended sense of words in text – word sense disambiguation (WSD) – is a long-standing problem in natural language processing. In this paper, we present WSD algorithms which use neural network language models to achieve state-of-the-art precision. Each of these methods learns to disambiguate word senses using only a set of word senses, a few example sentences for each sense taken from a licensed lexicon, and a large unlabeled text corpus. We classify based on cosine similarity of vectors derived from the contexts in unlabeled query and labeled example sentences. We demonstrate state-of-the-art results when using the WordNet sense inventory, and significantly better than baseline performance using the New Oxford American Dictionary inventory. The best performance was achieved by combining an LSTM language model with graph label propagation.",
"title": ""
},
{
"docid": "24409b7013c37c577ba67bc20c61addb",
"text": "Traditional stereo matching approaches generally have problems in handling textureless regions, strong occlusions and reflective regions that do not satisfy a Lambertian surface assumption. In this paper, we propose to combine the predicted surface normal by deep learning to overcome these inherent difficulties in stereo matching. With the selected reliable disparities from stereo matching method and effective edge fusion strategy, we can faithfully convert the predicted surface normal map to a disparity map by solving a least squares system which maintains discontinuity on object boundaries and continuity on other regions. Then we refine the disparity map iteratively by bilateral filtering-based completion and edge feature refinement. Experimental results on the Middlebury dataset and our own captured stereo sequences demonstrate the effectiveness of the proposed approach.",
"title": ""
},
{
"docid": "179b76942747f8a90f9036ea8d2377e7",
"text": "CNN (Convolution Neural Network) is widely used in visual analysis and achieves exceptionally high performances in image classification, face detection, object recognition, image recoloring, and other learning jobs. Using deep learning frameworks, such as Torch and Tensorflow, CNN can be efficiently computed by leveraging the power of GPU. However, one drawback of GPU is its limited memory which prohibits us from handling large images. Passing a 4K resolution image to the VGG network will result in an exception of out-of-memory for Titan-X GPU. In this paper, we propose a new approach that adopts the BSP (bulk synchronization parallel) model to compute CNNs for images of any size. Before fed to a specific CNN layer, the image is split into smaller pieces which go through the neural network separately. Then, a specific padding and normalization technique is adopted to merge sub-images back into one image. Our approach can be easily extended to support distributed multi-GPUs. In this paper, we use neural style network as our example to illustrate the effectiveness of our approach. We show that using one Titan-X GPU, we can transfer the style of an image with 10,000×10,000 pixels within 1 minute.",
"title": ""
},
{
"docid": "22445127362a9a2b16521a4a48f24686",
"text": "This work introduces the engineering design of a device capable to detect serum turbidity. We hypothesized that an electronic, portable, and low cost device that can provide objective, quantitative measurements of serum turbidity might have the potential to improve the early detection of neonatal sepsis. The design features, testing methodologies, and the obtained results are described. The final electronic device was evaluated in two experiments. The first one consisted in recording the turbidity value measured by the device for different solutions with known concentrations and different degrees of turbidity. The second analysis demonstrates a positive correlation between visual turbidity estimation and electronic turbidity measurement. Furthermore, our device demonstrated high turbidity in serum from two neonates with sepsis (one with a confirmed positive blood culture; the other one with a clinical diagnosis). We conclude that our electronic device may effectively measure serum turbidity at the bedside. Future studies will widen the possibility of additional clinical implications.",
"title": ""
},
{
"docid": "91f89990f9d41d3a92cbff38efc56b57",
"text": "ID3 algorithm was a classic classification of data mining. It always selected the attribute with many values. The attribute with many values wasn't the correct one, and it always created wrong classification. In the application of intrusion detection system, it would created fault alarm and omission alarm. To this fault, an improved decision tree algorithm was proposed. Though improvement of information gain formula, the correct attribute would be got. The decision tree was created after the data collected classified correctly. The tree would be not high and has a few of branches. The rule set would be got based on the decision tree. Experimental results showed the effectiveness of the algorithm, false alarm rate and omission rate decreased, increasing the detection rate and reducing the space consumption.",
"title": ""
},
{
"docid": "11d418decc0d06a3af74be77d4c71e5e",
"text": "Automatic generation control (AGC) regulates mechanical power generation in response to load changes through local measurements. Its main objective is to maintain system frequency and keep energy balanced within each control area in order to maintain the scheduled net interchanges between control areas. The scheduled interchanges as well as some other factors of AGC are determined at a slower time scale by considering a centralized economic dispatch (ED) problem among different generators. However, how to make AGC more economically efficient is less studied. In this paper, we study the connections between AGC and ED by reverse engineering AGC from an optimization view, and then we propose a distributed approach to slightly modify the conventional AGC to improve its economic efficiency by incorporating ED into the AGC automatically and dynamically.",
"title": ""
}
] |
scidocsrr
|
820f8e69923176d4ecb5c1e6d2420932
|
IoT-based smart cities: A survey
|
[
{
"docid": "86cb3c072e67bed8803892b72297812c",
"text": "Internet of Things (IoT) will comprise billions of devices that can sense, communicate, compute and potentially actuate. Data streams coming from these devices will challenge the traditional approaches to data management and contribute to the emerging paradigm of big data. This paper discusses emerging Internet of Things (IoT) architecture, large scale sensor network applications, federating sensor networks, sensor data and related context capturing techniques, challenges in cloud-based management, storing, archiving and processing of",
"title": ""
},
{
"docid": "f1e646a0627a5c61a0f73a41d35ccac7",
"text": "Smart cities play an increasingly important role for the sustainable economic development of a determined area. Smart cities are considered a key element for generating wealth, knowledge and diversity, both economically and socially. A Smart City is the engine to reach the sustainability of its infrastructure and facilitate the sustainable development of its industry, buildings and citizens. The first goal to reach that sustainability is reduce the energy consumption and the levels of greenhouse gases (GHG). For that purpose, it is required scalability, extensibility and integration of new resources in order to reach a higher awareness about the energy consumption, distribution and generation, which allows a suitable modeling which can enable new countermeasure and action plans to mitigate the current excessive power consumption effects. Smart Cities should offer efficient support for global communications and access to the services and information. It is required to enable a homogenous and seamless machine to machine (M2M) communication in the different solutions and use cases. This work presents how to reach an interoperable Smart Lighting solution over the emerging M2M protocols such as CoAP built over REST architecture. This follows up the guidelines defined by the IP for Smart Objects Alliance (IPSO Alliance) in order to implement and interoperable semantic level for the street lighting, and describes the integration of the communications and logic over the existing street lighting infrastructure.",
"title": ""
},
{
"docid": "cd891d5ecb9fa6bd8ae23e2a06151882",
"text": "Smart City represents one of the most promising and prominent Internet of Things (IoT) applications. In the last few years, smart city concept has played an important role in academic and industry fields, with the development and deployment of various middleware platforms. However, this expansion has followed distinct approaches creating a fragmented scenario, in which different IoT ecosystems are not able to communicate between them. To fill this gap, there is a need to revisit the smart city IoT semantic and offer a global common approach. To this purpose, this paper browses the semantic annotation of the sensors in the cloud, and innovative services can be implemented and considered by bridging Clouds and IoT. Things-like semantic will be considered to perform the aggregation of heterogeneous resources by defining the Clouds of Things (CoT) paradigm. We survey the smart city vision, providing information on the main requirements and highlighting the benefits of integrating different IoT ecosystems within the cloud under this new CoT vision and discuss relevant challenges in this research area.",
"title": ""
}
] |
[
{
"docid": "59bb9a006844dcf7c5f1769a4b208744",
"text": "3rd Generation Partnership Project (3GPP) has recently completed the specification of the Long Term Evolution (LTE) standard. Majority of the world’s operators and vendors are already committed to LTE deployments and developments, making LTE the market leader in the upcoming evolution to 4G wireless communication systems. Multiple input multiple output (MIMO) technologies introduced in LTE such as spatial multiplexing, transmit diversity, and beamforming are key components for providing higher peak rate at a better system efficiency, which are essential for supporting future broadband data service over wireless links. Further extension of LTE MIMO technologies is being studied under the 3GPP study item “LTE-Advanced” to meet the requirement of IMT-Advanced set by International Telecommunication Union Radiocommunication Sector (ITU-R). In this paper, we introduce various MIMO technologies employed in LTE and provide a brief overview on the MIMO technologies currently discussed in the LTE-Advanced forum.",
"title": ""
},
{
"docid": "02c50512c053fb8df4537e125afea321",
"text": "Online Social Networks (OSNs) have spread at stunning speed over the past decade. They are now a part of the lives of dozens of millions of people. The onset of OSNs has stretched the traditional notion of community to include groups of people who have never met in person but communicate with each other through OSNs to share knowledge, opinions, interests and activities. Here we explore in depth language independent gender classification. Our approach predicts gender using five colorbased features extracted from Twitter profiles such as the background color in a user’s profile page. This is in contrast with most existing methods for gender prediction that are language dependent. Those methods use high-dimensional spaces consisting of unique words extracted from such text fields as postings, user names, and profile descriptions. Our approach is independent of the user’s language, efficient, scalable, and computationally tractable, while attaining a good level of accuracy.",
"title": ""
},
{
"docid": "14fac379b3d4fdfc0024883eba8431b3",
"text": "PURPOSE\nTo summarize the literature addressing subthreshold or nondamaging retinal laser therapy (NRT) for central serous chorioretinopathy (CSCR) and to discuss results and trends that provoke further investigation.\n\n\nMETHODS\nAnalysis of current literature evaluating NRT with micropulse or continuous wave lasers for CSCR.\n\n\nRESULTS\nSixteen studies including 398 patients consisted of retrospective case series, prospective nonrandomized interventional case series, and prospective randomized clinical trials. All studies but one evaluated chronic CSCR, and laser parameters varied greatly between studies. Mean central macular thickness decreased, on average, by ∼80 μm by 3 months. Mean best-corrected visual acuity increased, on average, by about 9 letters by 3 months, and no study reported a decrease in acuity below presentation. No retinal complications were observed with the various forms of NRT used, but six patients in two studies with micropulse laser experienced pigmentary changes in the retinal pigment epithelium attributed to excessive laser settings.\n\n\nCONCLUSION\nBased on the current evidence, NRT demonstrates efficacy and safety in 12-month follow-up in patients with chronic and possibly acute CSCR. The NRT would benefit from better standardization of the laser settings and understanding of mechanisms of action, as well as further prospective randomized clinical trials.",
"title": ""
},
{
"docid": "3a882bf8553b8a0be05f1a6edbe01090",
"text": "We present a new deep learning approach for matching deformable shapes by introducing Shape Deformation Networks which jointly encode 3D shapes and correspondences. This is achieved by factoring the surface representation into (i) a template, that parameterizes the surface, and (ii) a learnt global feature vector that parameterizes the transformation of the template into the input surface. By predicting this feature for a new shape, we implicitly predict correspondences between this shape and the template. We show that these correspondences can be improved by an additional step which improves the shape feature by minimizing the Chamfer distance between the input and transformed template. We demonstrate that our simple approach improves on stateof-the-art results on the difficult FAUST-inter challenge, with an average correspondence error of 2.88cm. We show, on the TOSCA dataset, that our method is robust to many types of perturbations, and generalizes to non-human shapes. This robustness allows it to perform well on real unclean, meshes from the the SCAPE dataset.",
"title": ""
},
{
"docid": "ac56eb533e3ae40b8300d4269fd2c08f",
"text": "We present a recurrent encoder-decoder deep neural network architecture that directly translates speech in one language into text in another. The model does not explicitly transcribe the speech into text in the source language, nor does it require supervision from the ground truth source language transcription during training. We apply a slightly modified sequence-to-sequence with attention architecture that has previously been used for speech recognition and show that it can be repurposed for this more complex task, illustrating the power of attention-based models. A single model trained end-to-end obtains state-of-the-art performance on the Fisher Callhome Spanish-English speech translation task, outperforming a cascade of independently trained sequence-to-sequence speech recognition and machine translation models by 1.8 BLEU points on the Fisher test set. In addition, we find that making use of the training data in both languages by multi-task training sequence-to-sequence speech translation and recognition models with a shared encoder network can improve performance by a further 1.4 BLEU points.",
"title": ""
},
{
"docid": "28f1b7635b777cf278cc8d53a5afafb9",
"text": "Visual Question Answering (VQA) is the task of taking as input an image and a free-form natural language question about the image, and producing an accurate answer. In this work we view VQA as a “feature extraction” module to extract image and caption representations. We employ these representations for the task of image-caption ranking. Each feature dimension captures (imagines) whether a fact (question-answer pair) could plausibly be true for the image and caption. This allows the model to interpret images and captions from a wide variety of perspectives. We propose score-level and representation-level fusion models to incorporate VQA knowledge in an existing state-of-the-art VQA-agnostic image-caption ranking model. We find that incorporating and reasoning about consistency between images and captions significantly improves performance. Concretely, our model improves state-of-the-art on caption retrieval by 7.1% and on image retrieval by 4.4% on the MSCOCO dataset.",
"title": ""
},
{
"docid": "200225a36d89de88a23bccedb54485ef",
"text": "This paper presents new software speed records for encryption and decryption using the block cipher AES-128 for different architectures. Target platforms are 8-bit AVR microcontrollers, NVIDIA graphics processing units (GPUs) and the Cell broadband engine. The new AVR implementation requires 124.6 and 181.3 cycles per byte for encryption and decryption with a code size of less than two kilobyte. Compared to the previous AVR records for encryption our code is 38 percent smaller and 1.24 times faster. The byte-sliced implementation for the synergistic processing elements of the Cell architecture achieves speed of 11.7 and 14.4 cycles per byte for encryption and decryption. Similarly, our fastest GPU implementation, running on the GTX 295 and handling many input streams in parallel, delivers throughputs of 0.17 and 0.19 cycles per byte for encryption and decryption respectively. Furthermore, this is the first AES implementation for the GPU which implements both encryption and decryption.",
"title": ""
},
{
"docid": "90d236f6ae1ad2a1404d6e1b497d8b3a",
"text": "In this paper, we propose a distributed and adaptive hybrid medium access control (DAH-MAC) scheme for a single-hop Internet of Things (IoT)-enabled mobile ad hoc network supporting voice and data services. A hybrid superframe structure is designed to accommodate packet transmissions from a varying number of mobile nodes generating either delay-sensitive voice traffic or best-effort data traffic. Within each superframe, voice nodes with packets to transmit access the channel in a contention-free period (CFP) using distributed time division multiple access, while data nodes contend for channel access in a contention period (CP) using truncated carrier sense multiple access with collision avoidance. In the CFP, by adaptively allocating time slots according to instantaneous voice traffic load, the MAC exploits voice traffic multiplexing to increase the voice capacity. In the CP, a throughput optimization framework is proposed for the DAH-MAC, which maximizes the aggregate data throughput by adjusting the optimal contention window size according to voice and data traffic load variations. Numerical results show that the proposed MAC scheme outperforms existing quality-of-service-aware MAC schemes for voice and data traffic in the presence of heterogeneous traffic load dynamics.",
"title": ""
},
{
"docid": "cc93fe4b851e3d7f3dcdcd2a54af6660",
"text": "Positioning is a key task in most field robotics applications but can be very challenging in GPS-denied or high-slip environments. A common tactic in such cases is to position visually, and we present a visual odometry implementation with the unusual reliance on optical mouse sensors to report vehicle velocity. Using multiple kilometers of data from a lunar rover prototype, we demonstrate that, in conjunction with a moderate-grade inertial measurement unit, such a sensor can provide an integrated pose stream that is at times more accurate than that achievable by wheel odometry and visibly more desirable for perception purposes than that provided by a high-end GPS-INS system. A discussion of the sensor’s limitations and several drift mitigating strategies attempted are presented.",
"title": ""
},
{
"docid": "9e0ded0d1f913dce7d0ea6aab115678c",
"text": "DevOps is changing the way organizations develop and deploy applications and service customers. Many organizations want to apply DevOps, but they are concerned by the security aspects of the produced software. This has triggered the creation of the terms SecDevOps and DevSecOps. These terms refer to incorporating security practices in a DevOps environment by promoting the collaboration between the development teams, the operations teams, and the security teams. This paper surveys the literature from academia and industry to identify the main aspects of this trend. The main aspects that we found are: definition, security best practices, compliance, process automation, tools for SecDevOps, software configuration, team collaboration, availability of activity data and information secrecy. Although the number of relevant publications is low, we believe that the terms are not buzzwords, they imply important challenges that the security and software communities shall address to help organizations develop secure software while applying DevOps processes.",
"title": ""
},
{
"docid": "2ebf4b32598ba3cd74513f1bab8fe447",
"text": "Anti-N-methyl-D-aspartate receptor (NMDAR) encephalitis is an autoimmune disorder of the central nervous system (CNS). Its immunopathogenesis has been proposed to include early cerebrospinal fluid (CSF) lymphocytosis, subsequent CNS disease restriction and B cell mechanism predominance. There are limited data regarding T cell involvement in the disease. To contribute to the current knowledge, we investigated the complex system of chemokines and cytokines related to B and T cell functions in CSF and sera samples from anti-NMDAR encephalitis patients at different time-points of the disease. One patient in our study group had a long-persisting coma and underwent extraordinary immunosuppressive therapy. Twenty-seven paired CSF/serum samples were collected from nine patients during the follow-up period (median 12 months, range 1–26 months). The patient samples were stratified into three periods after the onset of the first disease symptom and compared with the controls. Modified Rankin score (mRS) defined the clinical status. The concentrations of the chemokines (C-X-C motif ligand (CXCL)10, CXCL8 and C-C motif ligand 2 (CCL2)) and the cytokines (interferon (IFN)γ, interleukin (IL)4, IL7, IL15, IL17A and tumour necrosis factor (TNF)α) were measured with Luminex multiple bead technology. The B cell-activating factor (BAFF) and CXCL13 concentrations were determined via enzyme-linked immunosorbent assay. We correlated the disease period with the mRS, pleocytosis and the levels of all of the investigated chemokines and cytokines. Non-parametric tests were used, a P value <0.05 was considered to be significant. The increased CXCL10 and CXCL13 CSF levels accompanied early-stage disease progression and pleocytosis. The CSF CXCL10 and CXCL13 levels were the highest in the most complicated patient. The CSF BAFF levels remained unchanged through the periods. In contrast, the CSF levels of T cell-related cytokines (INFγ, TNFα and IL17A) and IL15 were slightly increased at all of the periods examined. No dynamic changes in chemokine and cytokine levels were observed in the peripheral blood. Our data support the hypothesis that anti-NMDAR encephalitis is restricted to the CNS and that chemoattraction of immune cells dominates at its early stage. Furthermore, our findings raise the question of whether T cells are involved in this disease.",
"title": ""
},
{
"docid": "8245472f3dad1dce2f81e21b53af5793",
"text": "Butanol is an aliphatic saturated alcohol having the molecular formula of C(4)H(9)OH. Butanol can be used as an intermediate in chemical synthesis and as a solvent for a wide variety of chemical and textile industry applications. Moreover, butanol has been considered as a potential fuel or fuel additive. Biological production of butanol (with acetone and ethanol) was one of the largest industrial fermentation processes early in the 20th century. However, fermentative production of butanol had lost its competitiveness by 1960s due to increasing substrate costs and the advent of more efficient petrochemical processes. Recently, increasing demand for the use of renewable resources as feedstock for the production of chemicals combined with advances in biotechnology through omics, systems biology, metabolic engineering and innovative process developments is generating a renewed interest in fermentative butanol production. This article reviews biotechnological production of butanol by clostridia and some relevant fermentation and downstream processes. The strategies for strain improvement by metabolic engineering and further requirements to make fermentative butanol production a successful industrial process are also discussed.",
"title": ""
},
{
"docid": "d46434bbbf73460bf422ebe4bd65b590",
"text": "We present an efficient block-diagonal approximation to the Gauss-Newton matrix for feedforward neural networks. Our resulting algorithm is competitive against state-of-the-art first-order optimisation methods, with sometimes significant improvement in optimisation performance. Unlike first-order methods, for which hyperparameter tuning of the optimisation parameters is often a laborious process, our approach can provide good performance even when used with default settings. A side result of our work is that for piecewise linear transfer functions, the network objective function can have no differentiable local maxima, which may partially explain why such transfer functions facilitate effective optimisation.",
"title": ""
},
{
"docid": "bf4c0356b53f13fc2327dcf7c3377a8f",
"text": "This paper presents a new corpus and a robust deep learning architecture for a task in reading comprehension, passage completion, on multiparty dialog. Given a dialog in text and a passage containing factual descriptions about the dialog where mentions of the characters are replaced by blanks, the task is to fill the blanks with the most appropriate character names that reflect the contexts in the dialog. Since there is no dataset that challenges the task of passage completion in this genre, we create a corpus by selecting transcripts from a TV show that comprise 1,681 dialogs, generating passages for each dialog through crowdsourcing, and annotating mentions of characters in both the dialog and the passages. Given this dataset, we build a deep neural model that integrates rich feature extraction from convolutional neural networks into sequence modeling in recurrent neural networks, optimized by utterance and dialog level attentions. Our model outperforms the previous state-of-the-art model on this task in a different genre using bidirectional LSTM, showing a 13.0+% improvement for longer dialogs. Our analysis shows the effectiveness of the attention mechanisms and suggests a direction to machine comprehension on multiparty dialog.",
"title": ""
},
{
"docid": "adc84153f83ad1587a4218d817befe8d",
"text": "Improving the sluggish kinetics for the electrochemical reduction of water to molecular hydrogen in alkaline environments is one key to reducing the high overpotentials and associated energy losses in water-alkali and chlor-alkali electrolyzers. We found that a controlled arrangement of nanometer-scale Ni(OH)(2) clusters on platinum electrode surfaces manifests a factor of 8 activity increase in catalyzing the hydrogen evolution reaction relative to state-of-the-art metal and metal-oxide catalysts. In a bifunctional effect, the edges of the Ni(OH)(2) clusters promoted the dissociation of water and the production of hydrogen intermediates that then adsorbed on the nearby Pt surfaces and recombined into molecular hydrogen. The generation of these hydrogen intermediates could be further enhanced via Li(+)-induced destabilization of the HO-H bond, resulting in a factor of 10 total increase in activity.",
"title": ""
},
{
"docid": "64e26b00bba3bba8d2ab77b44f049c58",
"text": "The transmission properties of a folded corrugated substrate integrated waveguide (FCSIW) and a proposed half-mode FCSIW is investigated. For the same cut-off frequency, these structures have similar performance to CSIW and HMCSIW respectively, but with significantly reduced width. The top wall is isolated from the bottom wall at DC thereby permitting active devices to be connected directly to, and biased through them. Arrays of quarter-wave stubs above the top wall allow TE1,0 mode conduction currents to flow between the top and side walls. Measurements and simulations of waveguides designed to have a nominal cut-off frequency of 3 GHz demonstrate the feasibility of these compact waveguides.",
"title": ""
},
{
"docid": "87ce9a23040f809d1af2f7d032be2b41",
"text": "BACKGROUND\nThe majority of middle-aged to older patients with chronic conditions report forgetting to take medications as prescribed. The promotion of patients' smartphone medication reminder app (SMRA) use shows promise as a feasible and cost-effective way to support their medication adherence. Providing training on SMRA use, guided by the technology acceptance model (TAM), could be a promising intervention to promote patients' app use.\n\n\nOBJECTIVE\nThe aim of this pilot study was to (1) assess the feasibility of an SMRA training session designed to increase patients' intention to use the app through targeting perceived usefulness of app, perceived ease of app use, and positive subjective norm regarding app use and (2) understand the ways to improve the design and implementation of the training session in a hospital setting.\n\n\nMETHODS\nA two-group design was employed. A total of 11 patients older than 40 years (median=58, SD=9.55) and taking 3 or more prescribed medications took part in the study on one of two different dates as participants in either the training group (n=5) or nontraining group (n=6). The training group received an approximately 2-hour intervention training session designed to target TAM variables regarding one popular SMRA, the Medisafe app. The nontraining group received an approximately 2-hour control training session where the participants individually explored Medisafe app features. Each training session was concluded with a one-time survey and a one-time focus group.\n\n\nRESULTS\nMann-Whitney U tests revealed that the level of perceived ease of use (P=.13) and the level of intention to use an SMRA (P=.33) were higher in the training group (median=7.00, median=6.67, respectively) than in the nontraining group (median=6.25, median=5.83). However, the level of perceived usefulness (U=4.50, Z=-1.99, P=.05) and the level of positive subjective norm (P=.25) were lower in the training group (median=6.50, median=4.29) than in the nontraining group (median=6.92, median=4.50). Focus groups revealed the following participants' perceptions of SMRA use in the real-world setting that the intervention training session would need to emphasize in targeting perceived usefulness and positive subjective norm: (1) the participants would find an SMRA to be useful if they thought the app could help address specific struggles in medication adherence in their lives and (2) the participants think that their family members (or health care providers) might view positively the participants' SMRA use in primary care settings (or during routine medical checkups).\n\n\nCONCLUSIONS\nIntervention training session, guided by TAM, appeared feasible in targeting patients' perceived ease of use and, thereby, increasing intention to use an SMRA. Emphasizing the real-world utility of SMRA, the training session could better target patients' perceived usefulness and positive subjective norm that are also important in increasing their intention to use the app.",
"title": ""
},
{
"docid": "ab05c141b9d334f488cfb08ad9ed2137",
"text": "Cellular communications are undergoing significant evolutions in order to accommodate the load generated by increasingly pervasive smart mobile devices. Dynamic access network adaptation to customers' demands is one of the most promising paths taken by network operators. To that end, one must be able to process large amount of mobile traffic data and outline the network utilization in an automated manner. In this paper, we propose a framework to analyze broad sets of Call Detail Records (CDRs) so as to define categories of mobile call profiles and classify network usages accordingly. We evaluate our framework on a CDR dataset including more than 300 million calls recorded in an urban area over 5 months. We show how our approach allows to classify similar network usage profiles and to tell apart normal and outlying call behaviors.",
"title": ""
},
{
"docid": "7ebd355d65c8de8607da0363e8c86151",
"text": "In this letter, we compare the scanning beams of two leaky-wave antennas (LWAs), respectively, loaded with capacitive and inductive radiation elements, which have not been fully discussed in previous publications. It is pointed out that an LWA with only one type of radiation element suffers from a significant gain fluctuation over its beam-scanning band. To remedy this problem, we propose an LWA alternately loaded with inductive and capacitive elements along the host transmission line. The proposed LWA is able to steer its beam continuously from backward to forward with constant gain. A microstrip-based LWA is designed on the basis of the proposed method, and the measurement of its fabricated prototype demonstrates and confirms the desired results. This design method can widely be used to obtain LWAs with constant gain based on a variety of TLs.",
"title": ""
},
{
"docid": "a74871212b708baea289ee42665c8adf",
"text": "Current data mining techniques used to create failure predictors for online services require massive amounts of data to build, train, and test the predictors. These operations are tedious, time consuming, and are not done in real-time. Also, the accuracy of the resulting predictor is highly compromised by changes that affect the environment and working conditions of the predictor. We propose a new approach to creating a dynamic failure predictor for online services in real-time and keeping its accuracy high during the services run-time changes. We use synthetic transactions during the run-time lifecycle to generate current data about the service. This data is used in its ephemeral state to build, train, test, and maintain an up-to-date failure predictor. We implemented the proposed approach in a large-scale online ad service that processes billions of requests each month in six data centers distributed in three continents. We show that the proposed predictor is able to maintain failure prediction accuracy as high as 86% during online service changes, whereas the accuracy of the state-of-the-art predictors may drop to less than 10%.",
"title": ""
}
] |
scidocsrr
|
2cb2411c77f6f61e00d870bd439d887c
|
Personalising learning with dynamic prediction and adaptation to learning styles in a conversational intelligent tutoring system
|
[
{
"docid": "fea4f7992ec61eaad35872e3a800559c",
"text": "The ways in which an individual characteristically acquires, retains, and retrieves information are collectively termed the individual’s learning style. Mismatches often occur between the learning styles of students in a language class and the teaching style of the instructor, with unfortunate effects on the quality of the students’ learning and on their attitudes toward the class and the subject. This paper defines several dimensions of learning style thought to be particularly relevant to foreign and second language education, outlines ways in which certain learning styles are favored by the teaching styles of most language instructors, and suggests steps to address the educational needs of all students in foreign language classes. Students learn in many ways—by seeing and hearing; reflecting and acting; reasoning logically and intuitively; memorizing and visualizing. Teaching methods also vary. Some instructors lecture, others demonstrate or discuss; some focus on rules and others on examples; some emphasize memory and others understanding. How much a given student learns in a class is governed in part by that student’s native ability and prior preparation but also by the compatibility of his or her characteristic approach to learning and the instructor’s characteristic approach to teaching. The ways in which an individual characteristically acquires, retains, and retrieves information are collectively termed the individual’s learning style. Learning styles have been extensively discussed in the educational psychology literature (Claxton & Murrell 1987; Schmeck 1988) and specifically in the context Richard M. Felder (Ph.D., Princeton University) is the Hoechst Celanese Professor of Chemical Engineering at North Carolina State University,",
"title": ""
}
] |
[
{
"docid": "0a96e0b1c82ba4fecacda16746b29446",
"text": "PURPOSE\nExternal transcranial electric and magnetic stimulation techniques allow for the fast induction of sustained and measurable changes in cortical excitability. Here we aim to develop a paradigm using transcranial alternating current (tACS) in a frequency range higher than 1 kHz, which potentially interferes with membrane excitation, to shape neuroplastic processes in the human primary motor cortex (M1).\n\n\nMETHODS\nTranscranial alternating current stimulation was applied at 1, 2 and 5 kHz over the left primary motor cortex with a reference electrode over the contralateral orbit in 11 healthy volunteers for a duration of 10 min at an intensity of 1 mA. Monophasic single- pulse transcranial magnetic stimulation (TMS) was used to measure changes in corticospinal excitability, both during and after tACS in the low kHz range, in the right hand muscle. As a control inactive sham stimulation was performed.\n\n\nRESULTS\nAll frequencies of tACS increased the amplitudes of motor- evoked potentials (MEPs) up to 30-60 min post stimulation, compared to the baseline. Two and 5 kHz stimulations were more efficacious in inducing sustained changes in cortical excitability than 1 kHz stimulation, compared to sham stimulation.\n\n\nCONCLUSIONS\nSince tACS in the low kHz range appears too fast to interfere with network oscillations, this technique opens a new possibility to directly interfere with cortical excitability, probably via neuronal membrane activation. It may also potentially replace more conventional repetitive transcranial magnetic stimulation (rTMS) techniques for some applications in a clinical setting.",
"title": ""
},
{
"docid": "b648cbaef5ae2e273ddd8549bc360af5",
"text": "We present extensions to a continuousstate dependency parsing method that makes it applicable to morphologically rich languages. Starting with a highperformance transition-based parser that uses long short-term memory (LSTM) recurrent neural networks to learn representations of the parser state, we replace lookup-based word representations with representations constructed from the orthographic representations of the words, also using LSTMs. This allows statistical sharing across word forms that are similar on the surface. Experiments for morphologically rich languages show that the parsing model benefits from incorporating the character-based encodings of words.",
"title": ""
},
{
"docid": "573dde1b9187a925ddad7e2f1e5102c4",
"text": "Nowadays, the usage of cloud storages to store data is a popular alternative to traditional local storage systems. However, besides the benefits such services can offer, there are also some downsides like vendor lock-in or unavailability. Furthermore, the large number of available providers and their different pricing models can turn the search for the best fitting provider into a tedious and cumbersome task. Furthermore, the optimal selection of a provider may change over time.In this paper, we formalize a system model that uses several cloud storages to offer a redundant storage for data. The according optimization problem considers historic data access patterns and predefined Quality of Service requirements for the selection of the best-fitting storages. Through extensive evaluations we show the benefits of our work and compare the novel approach against a baseline which follows a state-of-the-art approach.",
"title": ""
},
{
"docid": "90f3c2ea17433ee296702cca53511b9e",
"text": "This paper presents the design process, detailed analysis, and prototyping of a novel-structured line-start solid-rotor-based axial-flux permanent-magnet (AFPM) motor capable of autostarting with solid-rotor rings. The preliminary design is a slotless double-sided AFPM motor with four poles for high torque density and stable operation. Two concentric unilevel-spaced raised rings are added to the inner and outer radii of the rotor discs for smooth line-start of the motor. The design allows the motor to operate at both starting and synchronous speeds. The basic equations for the solid rings of the rotor of the proposed AFPM motor are discussed. Nonsymmetry of the designed motor led to its 3-D time-stepping finite-element analysis (FEA) via Vector Field Opera 14.0, which evaluates the design parameters and predicts the transient performance. To verify the design, a prototype 1-hp four-pole three-phase line-start AFPM synchronous motor is built and is used to test the performance in real time. There is a good agreement between experimental and FEA-based computed results. It is found that the prototype motor maintains high starting torque and good synchronization.",
"title": ""
},
{
"docid": "be1b9731df45408571e75d1add5dfe9c",
"text": "We investigate a new commonsense inference task: given an event described in a short free-form text (“X drinks coffee in the morning”), a system reasons about the likely intents (“X wants to stay awake”) and reactions (“X feels alert”) of the event’s participants. To support this study, we construct a new crowdsourced corpus of 25,000 event phrases covering a diverse range of everyday events and situations. We report baseline performance on this task, demonstrating that neural encoder-decoder models can successfully compose embedding representations of previously unseen events and reason about the likely intents and reactions of the event participants. In addition, we demonstrate how commonsense inference on people’s intents and reactions can help unveil the implicit gender inequality prevalent in modern movie scripts.",
"title": ""
},
{
"docid": "4d2b0b01fae0ff2402fc2feaa5657574",
"text": "In this paper, we give an algorithm for the analysis and correction of the distorted QR barcode (QR-code) image. The introduced algorithm is based on the code area finding by four corners detection for 2D barcode. We combine Canny edge detection with contours finding algorithms to erase noises and reduce computation and utilize two tangents to approximate the right-bottom point. Then, we give a detail description on how to use inverse perspective transformation in rebuilding a QR-code image from a distorted one. We test our algorithm on images taken by mobile phones. The experiment shows that our algorithm is effective.",
"title": ""
},
{
"docid": "634b30b81da7139082927109b4c22d5e",
"text": "Compressive image recovery is a challenging problem that requires fast and accurate algorithms. Recently, neural networks have been applied to this problem with promising results. By exploiting massively parallel GPU processing architectures and oodles of training data, they can run orders of magnitude faster than existing techniques. However, these methods are largely unprincipled black boxes that are difficult to train and often-times specific to a single measurement matrix. It was recently demonstrated that iterative sparse-signal-recovery algorithms can be “unrolled” to form interpretable deep networks. Taking inspiration from this work, we develop a novel neural network architecture that mimics the behavior of the denoising-based approximate message passing (D-AMP) algorithm. We call this new network Learned D-AMP (LDAMP). The LDAMP network is easy to train, can be applied to a variety of different measurement matrices, and comes with a state-evolution heuristic that accurately predicts its performance. Most importantly, it outperforms the state-of-the-art BM3D-AMP and NLR-CS algorithms in terms of both accuracy and run time. At high resolutions, and when used with sensing matrices that have fast implementations, LDAMP runs over 50× faster than BM3D-AMP and hundreds of times faster than NLR-CS.",
"title": ""
},
{
"docid": "62376954e4974ea2d52e96b373c67d8a",
"text": "Imagine the following situation. You’re in your car, listening to the radio and suddenly you hear a song that catches your attention. It’s the best new song you have heard for a long time, but you missed the announcement and don’t recognize the artist. Still, you would like to know more about this music. What should you do? You could call the radio station, but that’s too cumbersome. Wouldn’t it be nice if you could push a few buttons on your mobile phone and a few seconds later the phone would respond with the name of the artist and the title of the music you’re listening to? Perhaps even sending an email to your default email address with some supplemental information. In this paper we present an audio fingerprinting system, which makes the above scenario possible. By using the fingerprint of an unknown audio clip as a query on a fingerprint database, which contains the fingerprints of a large library of songs, the audio clip can be identified. At the core of the presented system are a highly robust fingerprint extraction method and a very efficient fingerprint search strategy, which enables searching a large fingerprint database with only limited computing resources.",
"title": ""
},
{
"docid": "03d5eadaefc71b1da1b26f4e2923a082",
"text": "Sleep is characterized by a structured combination of neuronal oscillations. In the hippocampus, slow-wave sleep (SWS) is marked by high-frequency network oscillations (approximately 200 Hz \"ripples\"), whereas neocortical SWS activity is organized into low-frequency delta (1-4 Hz) and spindle (7-14 Hz) oscillations. While these types of hippocampal and cortical oscillations have been studied extensively in isolation, the relationships between them remain unknown. Here, we demonstrate the existence of temporal correlations between hippocampal ripples and cortical spindles that are also reflected in the correlated activity of single neurons within these brain structures. Spindle-ripple episodes may thus constitute an important mechanism of cortico-hippocampal communication during sleep. This coactivation of hippocampal and neocortical pathways may be important for the process of memory consolidation, during which memories are gradually translated from short-term hippocampal to longer-term neocortical stores.",
"title": ""
},
{
"docid": "0a63a875b57b963372640f8fb527bd5c",
"text": "KEMI-TORNIO UNIVERSITY OF APPLIED SCIENCES Degree programme: Business Information Technology Writer: Guo, Shuhang Thesis title: Analysis and evaluation of similarity metrics in collaborative filtering recommender system Pages (of which appendix): 62 (1) Date: May 15, 2014 Thesis instructor: Ryabov, Vladimir This research is focused on the field of recommender systems. The general aims of this thesis are to summary the state-of-the-art in recommendation systems, evaluate the efficiency of the traditional similarity metrics with varies of data sets, and propose an ideology to model new similarity metrics. The literatures on recommender systems were studied for summarizing the current development in this filed. The implementation of the recommendation and evaluation was achieved by Apache Mahout which provides an open source platform of recommender engine. By importing data information into the project, a customized recommender engine was built. Since the recommending results of collaborative filtering recommender significantly rely on the choice of similarity metrics and the types of the data, several traditional similarity metrics provided in Apache Mahout were examined by the evaluator offered in the project with five data sets collected by some academy groups. From the evaluation, I found out that the best performance of each similarity metric was achieved by optimizing the adjustable parameters. The features of each similarity metric were obtained and analyzed with practical data sets. In addition, an ideology by combining two traditional metrics was proposed in the thesis and it was proven applicable and efficient by the metrics combination of Pearson correlation and Euclidean distance. The observation and evaluation of traditional similarity metrics with practical data is helpful to understand their features and suitability, from which new models can be created. Besides, the ideology proposed for modeling new similarity metrics can be found useful both theoretically and practically.",
"title": ""
},
{
"docid": "0028061d8bd57be4aaf6a01995b8c3bb",
"text": "Steganography is the art of concealing the existence of information within seemingly harmless carriers. It is a method similar to covert channels, spread spectrum communication and invisible inks which adds another step in security. A message in cipher text may arouse suspicion while an invisible message will not. A digital image is a flexible medium used to carry a secret message because the slight modification of a cover image is hard to distinguish by human eyes. In this paper, we propose a revised version of information hiding scheme using Sudoku puzzle. The original work was proposed by Chang et al. in 2008, and their work was inspired by Zhang and Wang's method and Sudoku solutions. Chang et al. successfully used Sudoku solutions to guide cover pixels to modify pixel values so that secret messages can be embedded. Our proposed method is a modification of Chang et al’s method. Here a 27 X 27 Reference matrix is used instead of 256 X 256 reference matrix as proposed in the previous method. The earlier version is for a grayscale image but the proposed method is for a colored image.",
"title": ""
},
{
"docid": "40bdadc044f5342534ba5387c47c6456",
"text": "A numerical study of atmospheric turbulence effects on wind-turbine wakes is presented. Large-eddy simulations of neutrally-stratified atmospheric boundary layer flows through stand-alone wind turbines were performed over homogeneous flat surfaces with four different aerodynamic roughness lengths. Emphasis is placed on the structure and characteristics of turbine wakes in the cases where the incident flows to the turbine have the same mean velocity at the hub height but different mean wind shears and turbulence intensity levels. The simulation results show that the different turbulence intensity levels of the incoming flow lead to considerable influence on the spatial distribution of the mean velocity deficit, turbulence intensity, and turbulent shear stress in the wake region. In particular, when the turbulence intensity level of the incoming flow is higher, the turbine-induced wake (velocity deficit) recovers faster, and the locations of the maximum turbulence intensity and turbulent stress are closer to the turbine. A detailed analysis of the turbulence kinetic energy budget in the wakes reveals also an important effect of the incoming flow turbulence level on the magnitude and spatial distribution of the shear production and transport terms.",
"title": ""
},
{
"docid": "718cf9a405a81b9a43279a1d02f5e516",
"text": "In cross-cultural psychology, one of the major sources of the development and display of human behavior is the contact between cultural populations. Such intercultural contact results in both cultural and psychological changes. At the cultural level, collective activities and social institutions become altered, and at the psychological level, there are changes in an individual's daily behavioral repertoire and sometimes in experienced stress. The two most common research findings at the individual level are that there are large variations in how people acculturate and in how well they adapt to this process. Variations in ways of acculturating have become known by the terms integration, assimilation, separation, and marginalization. Two variations in adaptation have been identified, involving psychological well-being and sociocultural competence. One important finding is that there are relationships between how individuals acculturate and how well they adapt: Often those who integrate (defined as being engaged in both their heritage culture and in the larger society) are better adapted than those who acculturate by orienting themselves to one or the other culture (by way of assimilation or separation) or to neither culture (marginalization). Implications of these findings for policy and program development and for future research are presented.",
"title": ""
},
{
"docid": "b1ecd3c12161f64640ffb1ac2b02b68a",
"text": "Our goal is to construct a domain-targeted, high precision knowledge base (KB), containing general (subject,predicate,object) statements about the world, in support of a downstream question-answering (QA) application. Despite recent advances in information extraction (IE) techniques, no suitable resource for our task already exists; existing resources are either too noisy, too named-entity centric, or too incomplete, and typically have not been constructed with a clear scope or purpose. To address these, we have created a domain-targeted, high precision knowledge extraction pipeline, leveraging Open IE, crowdsourcing, and a novel canonical schema learning algorithm (called CASI), that produces high precision knowledge targeted to a particular domain - in our case, elementary science. To measure the KB’s coverage of the target domain’s knowledge (its “comprehensiveness” with respect to science) we measure recall with respect to an independent corpus of domain text, and show that our pipeline produces output with over 80% precision and 23% recall with respect to that target, a substantially higher coverage of tuple-expressible science knowledge than other comparable resources. We have made the KB publicly available.",
"title": ""
},
{
"docid": "85c3dc3dae676f0509a99c6d27db8423",
"text": "Swarming, or aggregations of organisms in groups, can be found in nature in many organisms ranging from simple bacteria to mammals. Such behavior can result from several different mechanisms. For example, individuals may respond directly to local physical cues such as concentration of nutrients or distribution of some chemicals as seen in some bacteria and social insects, or they may respond directly to other individuals as seen in fish, birds, and herds of mammals. In this dissertation, we consider models for aggregating and social foraging swarms and perform rigorous stability analysis of emerging collective behavior. Moreover, we consider formation control of a general class of multi-agent systems in the framework of nonlinear output regulation problem with application on formation control of mobile robots. First, an individual-based continuous time model for swarm aggregation in an n-dimensional space is identified and its stability properties are analyzed. The motion of each individual is determined by two factors: (i) attraction to the other individuals on long distances and (ii) repulsion from the other individuals on short distances. It is shown that the individuals (autonomous agents or biological creatures) will form a cohesive swarm in a finite time. Moreover, explicit bounds on the swarm size and time of convergence are derived. Then, the results are generalized to a more general class of attraction/repulsion functions and extended to handle formation stabilization and uniform swarm density. After that, we consider social foraging swarms. We ii assume that the swarm is moving in an environment with an ”attractant/repellent” profile (i.e., a profile of nutrients or toxic substances) which also affects the motion of each individual by an attraction to the more favorable or nutrient rich regions (or repulsion from the unfavorable or toxic regions) of the profile. The stability properties of the collective behavior of the swarm for different profiles are studied and conditions for collective convergence to more favorable regions are provided. Then, we use the ideas for modeling and analyzing the behavior of honey bee clusters and in-transit swarms, a phenomena seen during the reproduction of the bees. After that, we consider one-dimensional asynchronous swarms with time delays. We prove that, despite the asynchronism and time delays in the motion of the individuals, the swarm will converge to a comfortable position with comfortable intermember spacing. Finally, we consider formation control of a multi-agent system with general nonlinear dynamics. It is assumed that the formation is required to follow a virtual leader whose dynamics are generated by an autonomous neutrally stable system. We develop a decentralized control strategy based on the nonlinear output regulation (servomechanism) theory. We illustrate the procedure with application to formation control of mobile robots.",
"title": ""
},
{
"docid": "7dde662184f9dc0363df5cfeffc4724e",
"text": "WordNet is a lexical reference system, developed by the university of Princeton. This paper gives a detailed documentation of the Prolog database of WordNet and predicates to interface it. 1",
"title": ""
},
{
"docid": "3a6197322da0e5fe2c2d98a8fcba7a42",
"text": "The amygdala and hippocampal complex, two medial temporal lobe structures, are linked to two independent memory systems, each with unique characteristic functions. In emotional situations, these two systems interact in subtle but important ways. Specifically, the amygdala can modulate both the encoding and the storage of hippocampal-dependent memories. The hippocampal complex, by forming episodic representations of the emotional significance and interpretation of events, can influence the amygdala response when emotional stimuli are encountered. Although these are independent memory systems, they act in concert when emotion meets memory.",
"title": ""
},
{
"docid": "497d72ce075f9bbcb2464c9ab20e28de",
"text": "Eukaryotic organisms radiated in Proterozoic oceans with oxygenated surface waters, but, commonly, anoxia at depth. Exceptionally preserved fossils of red algae favor crown group emergence more than 1200 million years ago, but older (up to 1600-1800 million years) microfossils could record stem group eukaryotes. Major eukaryotic diversification ~800 million years ago is documented by the increase in the taxonomic richness of complex, organic-walled microfossils, including simple coenocytic and multicellular forms, as well as widespread tests comparable to those of extant testate amoebae and simple foraminiferans and diverse scales comparable to organic and siliceous scales formed today by protists in several clades. Mid-Neoproterozoic establishment or expansion of eukaryophagy provides a possible mechanism for accelerating eukaryotic diversification long after the origin of the domain. Protists continued to diversify along with animals in the more pervasively oxygenated oceans of the Phanerozoic Eon.",
"title": ""
},
{
"docid": "218813a16c9dd6db5f4ce5a55250c1f6",
"text": "The hippocampus frequently replays memories of past experiences during sharp-wave ripple (SWR) events. These events can represent spatial trajectories extending from the animal's current location to distant locations, suggesting a role in the evaluation of upcoming choices. While SWRs have been linked to learning and memory, the specific role of awake replay remains unclear. Here we show that there is greater coordinated neural activity during SWRs preceding correct, as compared to incorrect, trials in a spatial alternation task. As a result, the proportion of cell pairs coactive during SWRs was predictive of subsequent correct or incorrect responses on a trial-by-trial basis. This effect was seen specifically during early learning, when the hippocampus is essential for task performance. SWR activity preceding correct trials represented multiple trajectories that included both correct and incorrect options. These results suggest that reactivation during awake SWRs contributes to the evaluation of possible choices during memory-guided decision making.",
"title": ""
}
] |
scidocsrr
|
743d49af004064e8d34f6a777e93a642
|
Compress and Control
|
[
{
"docid": "b55d2448633f70da4830565268a2b590",
"text": "This paper proposes an online tree-based Bayesian approach for reinforcement learning. For inference, we employ a generalised context tree model. This defines a distribution on multivariate Gaussian piecewise-linear models, which can be updated in closed form. The tree structure itself is constructed using the cover tree method, which remains efficient in high dimensional spaces. We combine the model with Thompson sampling and approximate dynamic programming to obtain effective exploration policies in unknown environments. The flexibility and computational simplicity of the model render it suitable for many reinforcement learning problems in continuous state spaces. We demonstrate this in an experimental comparison with a Gaussian process model, a linear model and simple least squares policy iteration.",
"title": ""
}
] |
[
{
"docid": "4ddd5c732b59a432a51dcd5d262aaf68",
"text": "Remote Application Programming Interfaces (APIs) are technology enablers for major distributed system trends such as mobile and cloud computing and the Internet of Things. In such settings, message-based APIs dominate over procedural and object-oriented ones. It is hard to design such APIs so that they are easy and efficient to use for client developers. Maintaining their runtime qualities while preserving backward compatibility is equally challenging for API providers. For instance, finding a well suited granularity for services and their operations is a particularly important design concern in APIs that realize service-oriented software architectures. Due to the fallacies of distributed computing, the forces for message-based APIs and service interfaces differ from those for local APIs -- for instance, network latency and security concerns deserve special attention. Existing pattern languages have dealt with local APIs in object-oriented programming, with remote objects, with queue-based messaging and with service-oriented computing platforms. However, patterns or equivalent guidance for the structural design of request and response messages in message-based remote APIs is still missing. In this paper, we outline such a pattern language and introduce five basic interface representation patterns to promote platform-independent design advice for common remote API technologies such as RESTful HTTP and Web services (WSDL/SOAP). Known uses and examples of the patterns are drawn from public Web APIs, as well as application development and software integration projects the authors have been involved in.",
"title": ""
},
{
"docid": "10b6b29254236c600040d27498f40feb",
"text": "Large-scale clustering has been widely used in many applications, and has received much attention. Most existing clustering methods suffer from both expensive computation and memory costs when applied to large-scale datasets. In this paper, we propose a novel clustering method, dubbed compressed k-means (CKM), for fast large-scale clustering. Specifically, high-dimensional data are compressed into short binary codes, which are well suited for fast clustering. CKM enjoys two key benefits: 1) storage can be significantly reduced by representing data points as binary codes; 2) distance computation is very efficient using Hamming metric between binary codes. We propose to jointly learn binary codes and clusters within one framework. Extensive experimental results on four large-scale datasets, including two million-scale datasets demonstrate that CKM outperforms the state-of-theart large-scale clustering methods in terms of both computation and memory cost, while achieving comparable clustering accuracy.",
"title": ""
},
{
"docid": "89d4143e7845d191433882f3fa5aaa26",
"text": "There is a large variety of objects and appliances in human environments, such as stoves, coffee dispensers, juice extractors, and so on. It is challenging for a roboticist to program a robot for each of these object types and for each of their instantiations. In this work, we present a novel approach to manipulation planning based on the idea that many household objects share similarly-operated object parts. We formulate the manipulation planning as a structured prediction problem and design a deep learning model that can handle large noise in the manipulation demonstrations and learns features from three different modalities: point-clouds, language and trajectory. In order to collect a large number of manipulation demonstrations for different objects, we developed a new crowd-sourcing platform called Robobarista. We test our model on our dataset consisting of 116 objects with 249 parts along with 250 language instructions, for which there are 1225 crowd-sourced manipulation demonstrations. We further show that our robot can even manipulate objects it has never seen before. Keywords— Robotics and Learning, Crowd-sourcing, Manipulation",
"title": ""
},
{
"docid": "e03b3995da2030983bcabe6c8f765c14",
"text": "Developing thermoelectric materials with superior performance means tailoring interrelated thermoelectric physical parameters – electrical conductivities, Seebeck coefficients, and thermal conductivities – for a crystalline system. High electrical conductivity, low thermal conductivity, and a high Seebeck coefficient are desirable for thermoelectric materials. Therefore, knowledge of the relation between electrical conductivity and thermal conductivity is essential to improve thermoelectric properties. In general, research in recent years has focused on developing thermoelectric structures and materials of high efficiency. The importance of this parameter is universally recognized; it is an established, ubiquitous, routinely used tool for material, device, equipment and process characterization both in the thermoelectric industry and in research. In this paper, basic knowledge of thermoelectric materials and an overview of parameters that affect the figure of merit ZT are provided. The prospects for the optimization of thermoelectric materials and their applications are also discussed. & 2013 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "bf9d706685f76877a56d323423b32a5c",
"text": "BACKGROUND\nFine particulate air pollution has been linked to cardiovascular disease, but previous studies have assessed only mortality and differences in exposure between cities. We examined the association of long-term exposure to particulate matter of less than 2.5 microm in aerodynamic diameter (PM2.5) with cardiovascular events.\n\n\nMETHODS\nWe studied 65,893 postmenopausal women without previous cardiovascular disease in 36 U.S. metropolitan areas from 1994 to 1998, with a median follow-up of 6 years. We assessed the women's exposure to air pollutants using the monitor located nearest to each woman's residence. Hazard ratios were estimated for the first cardiovascular event, adjusting for age, race or ethnic group, smoking status, educational level, household income, body-mass index, and presence or absence of diabetes, hypertension, or hypercholesterolemia.\n\n\nRESULTS\nA total of 1816 women had one or more fatal or nonfatal cardiovascular events, as confirmed by a review of medical records, including death from coronary heart disease or cerebrovascular disease, coronary revascularization, myocardial infarction, and stroke. In 2000, levels of PM2.5 exposure varied from 3.4 to 28.3 microg per cubic meter (mean, 13.5). Each increase of 10 microg per cubic meter was associated with a 24% increase in the risk of a cardiovascular event (hazard ratio, 1.24; 95% confidence interval [CI], 1.09 to 1.41) and a 76% increase in the risk of death from cardiovascular disease (hazard ratio, 1.76; 95% CI, 1.25 to 2.47). For cardiovascular events, the between-city effect appeared to be smaller than the within-city effect. The risk of cerebrovascular events was also associated with increased levels of PM2.5 (hazard ratio, 1.35; 95% CI, 1.08 to 1.68).\n\n\nCONCLUSIONS\nLong-term exposure to fine particulate air pollution is associated with the incidence of cardiovascular disease and death among postmenopausal women. Exposure differences within cities are associated with the risk of cardiovascular disease.",
"title": ""
},
{
"docid": "d16ec1f4c32267a07b1453d45bc8a6f2",
"text": "Knowledge representation learning (KRL), exploited by various applications such as question answering and information retrieval, aims to embed the entities and relations contained by the knowledge graph into points of a vector space such that the semantic and structure information of the graph is well preserved in the representing space. However, the previous works mainly learned the embedding representations by treating each entity and relation equally which tends to ignore the inherent imbalance and heterogeneous properties existing in knowledge graph. By visualizing the representation results obtained from classic algorithm TransE in detail, we reveal the disadvantages caused by this homogeneous learning strategy and gain insight of designing policy for the homogeneous representation learning. In this paper, we propose a novel margin-based pairwise representation learning framework to be incorporated into many KRL approaches, with the method of introducing adaptivity according to the degree of knowledge heterogeneity. More specially, an adaptive margin appropriate to separate the real samples from fake samples in the embedding space is first proposed based on the sample’s distribution density, and then an adaptive weight is suggested to explicitly address the trade-off between the different contributions coming from the real and fake samples respectively. The experiments show that our Adaptive Weighted Margin Learning (AWML) framework can help the previous work achieve a better performance on real-world Knowledge Graphs Freebase and WordNet in the tasks of both link prediction and triplet classification.",
"title": ""
},
{
"docid": "0f9cc52899c7e25a17bb372977d46834",
"text": "In modeling and rendering of complex procedural terrains the extraction of isosurfaces is an important part. In this paper we introduce an approach to generate high-quality isosurfaces from regular grids at interactive frame rates. The surface extraction is a variation of Dual Marching Cubes and designed as a set of well-balanced parallel computation kernels. In contrast to a straightforward parallelization we generate a quadrilateral mesh with full connectivity information and 1-ring vertex neighborhood. We use this information to smooth the extracted mesh and to approximate the smooth subdivision surface for detail tessellation. Both improve the visual fidelity when modeling procedural terrains interactively. Moreover, our extraction approach is generally applicable, for example in the field of volume visualization.",
"title": ""
},
{
"docid": "0160ef86512929e91fc3e5bb3902514e",
"text": "In this paper we propose a clustering method based on combination of the particle swarm optimization (PSO) and the k-mean algorithm. PSO algorithm was showed to successfully converge during the initial stages of a global search, but around global optimum, the search process will become very slow. On the contrary, k-means algorithm can achieve faster convergence to optimum solution. At the same time, the convergent accuracy for k-means can be higher than PSO. So in this paper, a hybrid algorithm combining particle swarm optimization (PSO) algorithm with k-means algorithm is proposed we refer to it as PSO-KM algorithm. The algorithm aims to group a given set of data into a user specified number of clusters. We evaluate the performance of the proposed algorithm using five datasets. The algorithm performance is compared to K-means and PSO clustering.",
"title": ""
},
{
"docid": "94c6ab34e39dd642b94cc2f538451af8",
"text": "Like every other social practice, journalism cannot now fully be understood apart from globalization. As part of a larger platform of communication media, journalism contributes to this experience of the world-as-a-single-place and thus represents a key component in these social transformations, both as cause and outcome. These issues at the intersection of journalism and globalization define an important and growing field of research, particularly concerning the public sphere and spaces for political discourse. In this essay, I review this intersection of journalism and globalization by considering the communication field’s approach to ‘media globalization’ within a broader interdisciplinary perspective that mixes the sociology of globalization with aspects of geography and social anthropology. By placing the emphasis on social practices, elites, and specific geographical spaces, I introduce a less media-centric approach to media globalization and how journalism fits into the process. Beyond ‘global village journalism,’ this perspective captures the changes globalization has brought to journalism. Like every other social practice, journalism cannot now fully be understood apart from globalization. This process refers to the intensification of social interconnections, which allows apprehending the world as a single place, creating a greater awareness of our own place and its relative location within the range of world experience. As part of a larger platform of communication media, journalism contributes to this experience and thus represents a key component in these social transformations, both as cause and outcome. These issues at the intersection of journalism and globalization define an important and growing field of research, particularly concerning the public sphere and spaces for political discourse. The study of globalization has become a fashionable growth industry, attracting an interdisciplinary assortment of scholars. Journalism, meanwhile, itself has become an important subject in its own right within media studies, with a growing number of projects taking an international perspective (reviewed in Reese 2009). Combining the two areas yields a complex subject that requires some careful sorting out to get beyond the jargon and the easy country–by-country case studies. From the globalization studies side, the media role often seems like an afterthought, a residual category of social change, or a self-evident symbol of the global era–CNN, for example. Indeed, globalization research has been slower to consider the changing role of journalism, compared to the attention devoted to financial and entertainment flows. That may be expected, given that economic and cultural globalization is further along than that of politics, and journalism has always been closely tied to democratic structures, many of which are inherently rooted in local communities. The media-centrism of communication research, on the other hand, may give the media—and the journalism associated with them—too much credit in the globalization process, treating certain media as the primary driver of global connections and the proper object of study. Global connections support new forms of journalism, which create politically significant new spaces within social systems, lead to social change, and privilege certain forms Sociology Compass 4/6 (2010): 344–353, 10.1111/j.1751-9020.2010.00282.x a 2010 The Author Journal Compilation a 2010 Blackwell Publishing Ltd of power. 
Therefore, we want to know how journalism has contributed to these new spaces, bringing together new combinations of transnational élites, media professionals, and citizens. To what extent are these interactions shaped by a globally consistent shared logic, and what are the consequences for social change and democratic values? Here, however, the discussion often gets reduced to whether a cultural homogenization is taking place, supporting a ‘McWorld’ thesis of a unitary media and journalistic form. But we do not have to subscribe to a one-world media monolith prediction to expect certain transnational logics to emerge to take their place along side existing ones. Journalism at its best contributes to social transparency, which is at the heart of the globalization optimists’ hopes for democracy (e.g. Giddens 2000). The insertion of these new logics into national communities, especially those closed or tightly controlled societies, can bring an important impulse for social change (seen in a number of case studies from China, as in Reese and Dai 2009). In this essay, I will review a few of the issues at the intersection of journalism and globalization and consider a more nuanced view of media within a broader network of actors, particularly in the case of journalism as it helps create emerging spaces for public affairs discourse. Understanding the complex interplay of the global and local requires an interdisciplinary perspective, mixing the sociology of globalization with aspects of geography and social anthropology. This helps avoid equating certain emerging global news forms with a new and distinct public sphere. The globalization of journalism occurs through a multitude of levels, relationships, social actors, and places, as they combine to create new public spaces. Communication research may bring journalism properly to the fore, but it must be considered within the insights into places and relationships provided by these other disciplines. Before addressing these questions, it is helpful to consider how journalism has figured into some larger debates. Media Globalization: Issues of Scale and Homogeneity One major fault line lies within the broader context of ‘media,’ where journalism has been seen as providing flows of information and transnational connections. That makes it a key factor in the phenomenon of ‘media globalization.’ McLuhan gave us the enduring image of the ‘global village,’ a quasi-utopian idea that has seeped into such theorizing about the contribution of media. The metaphor brings expectations of an extensive, unitary community, with a corresponding set of universal, global values, undistorted by parochial interests and propaganda. The interaction of world media systems, however, has not as of yet yielded the kind of transnational media and programs that would support such ‘village’-worthy content (Ferguson 1992; Sparks 2007). In fact, many of the communication barriers show no signs of coming down, with many specialized enclaves becoming stronger. In this respect, changes in media reflect the larger crux of globalization that it simultaneously facilitates certain ‘monoculture’ global standards along with the proliferation of a host of micro-communities that were not possible before. In a somewhat analogous example, the global wine trade has led to convergent trends in internationally desirable tastes but also allowed a number of specialized local wineries to survive and flourish through the ability to reach global markets. 
The very concept of ‘media globalization’ suggests that we are not quite sure if media lead to globalization or are themselves the result of it. In any case, giving the media a privileged place in shaping a globalized future has led to high expectations for international journalism, satellite television, and other media to provide a workable global public sphere, making them an easy target if they come up short. In his book, Media globalization Journalism and Globalization 345 a 2010 The Author Sociology Compass 4/6 (2010): 344–353, 10.1111/j.1751-9020.2010.00282.x Journal Compilation a 2010 Blackwell Publishing Ltd myth, Kai Hafez (2007) provides that kind of attack. Certainly, much of the discussion has suffered from overly optimistic and under-conceptualized research, with global media technology being a ‘necessary but not sufficient condition for global communication.’ (p. 2) Few truly transnational media forms have emerged that have a more supranational than national allegiance (among newspapers, the International Herald Tribune, Wall St. Journal Europe, Financial Times), and among transnational media even CNN does not present a single version to the world, split as it is into various linguistic viewer zones. Defining cross-border communication as the ‘core phenomenon’ of globalization leads to comparing intrato inter-national communication as the key indicator of globalization. For example, Hafez rejects the internet as a global system of communication, because global connectivity does not exceed local and regional connections. With that as a standard, we may indeed conclude that media globalization has failed to produce true transnational media platforms or dialogs across boundaries. Rather a combination of linguistic and digital divides, along with enduring regional preferences, actually reinforces some boundaries. (The wishful thinking for a global media may be tracked to highly mobile Western scholars, who in Hafez’s ‘hotel thesis’ overestimate the role of such transnational media, because they are available to them in their narrow and privileged travel circles.) Certainly, the foreign news most people receive, even about big international events, is domesticated through the national journalistic lens. Indeed, international reporting, as a key component of the would-be global public sphere, flunks Hafez’s ‘global test,’ incurring the same criticisms others have leveled for years at national journalism: elite-focused, conflictual, and sensational, with a narrow, parochial emphasis. If ‘global’ means giving ‘dialogic’ voices a chance to speak to each other without reproducing national ethnocentrism, then the world’s media still fail to measure up. Conceptualizing the ‘Global’ For many, ‘global’ means big. That goes too for the global village perspective, which emphasizes the scaling dimension and equates the global with ‘bigness,’ part of a nested hierarchy of levels of analysis based on size: beyond local, regional, and nationa",
"title": ""
},
{
"docid": "17aecef988a609953923e3d19ee15b53",
"text": "Deploying and managing multi-component IoT applications in Fog computing scenarios is challenging due to the heterogeneity, scale and dynamicity of Fog infrastructures, as well as to the complexity of modern software systems. When deciding on where/how to (re-)allocate application components over the continuum from the IoT to the Cloud, application administrators need to find the best deployment, satisfying all application (hardware, software, QoS, IoT) requirements over the contextually available resources, also trading-off non-functional desiderata (e.g., financial costs, security). This PhD thesis proposal aims at devising models, algorithms and methodologies to support the adaptive deployment and management of Fog applications.",
"title": ""
},
{
"docid": "815c79118a7e7ae0facbc932fb41a1ac",
"text": "In this paper we present a volumetric lighting model, which simulates scattering as well as shadowing in order to generate high quality volume renderings. By approximating light transport in inhomogeneous participating media, we are able to come up with an efficient GPU implementation, in order to achieve the desired effects at interactive frame rates. Moreover, in many cases the frame rates are even higher as those achieved with conventional gradient-based shading. To evaluate the impact of the proposed illumination model on the spatial comprehension of volumetric objects, we have conducted a user study, in which the participants had to perform depth perception tasks. The results of this study show, that depth perception is significantly improved when comparing our illumination model to conventional gradient-based volume shading. Additionally, since our volumetric illumination model is not based on gradient calculation, it is also less sensitive to noise and therefore also applicable to imaging modalities incorporating a higher degree of noise, as for instance magnet resonance tomography or 3D ultrasound.",
"title": ""
},
{
"docid": "3130e666076d119983ac77c5d77d0aed",
"text": "of Ph.D. dissertation, University of Haifa, Israel.",
"title": ""
},
{
"docid": "cba9f80ab39de507e84b68dc598d0bb9",
"text": "In this paper we construct a noncommutative space of “pointed Drinfeld modules” that generalizes to the case of function fields the noncommutative spaces of commensurability classes of Q-lattices. It extends the usual moduli spaces of Drinfeld modules to possibly degenerate level structures. In the second part of the paper we develop some notions of quantum statistical mechanics in positive characteristic and we show that, in the case of Drinfeld modules of rank one, there is a natural time evolution on the associated noncommutative space, which is closely related to the positive characteristic L-functions introduced by Goss. The points of the usual moduli space of Drinfeld modules define KMS functionals for this time evolution. We also show that the scaling action on the dual system is induced by a Frobenius action, up to a Wick rotation to imaginary time. © 2006 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "10514cb40ed8adc9fb59e12cb0cf3fe9",
"text": "Crossover recombination is a crucial process in plant breeding because it allows plant breeders to create novel allele combnations on chromosomes that can be used for breeding superior F1 hybrids. Gaining control over this process, in terms of increasing crossover incidence, altering crossover positions on chromosomes or silencing crossover formation, is essential for plant breeders to effectively engineer the allelic composition of chromosomes. We review the various means of crossover control that have been described or proposed. By doing so, we sketch a field of science that uses both knowledge from classic literature and the newest discoveries to manage the occurrence of crossovers for a variety of breeding purposes.",
"title": ""
},
{
"docid": "db1c4e97a367add3a6d708f0b4c6dc84",
"text": "BACKGROUND\nMethods to assess impaired consciousness in acute stroke typically include the Glasgow Coma Scale (GCS), but the verbal component has limitations in aphasic or intubated patients. The FOUR (Full Outline of UnResponsiveness) score, a new coma scale, evaluates 4 components: eye and motor responses, brainstem reflexes and respiration. We aimed to study the interobserver variability of the FOUR score in acute stroke patients.\n\n\nMETHODS\nWe prospectively enrolled consecutive patients with acute stroke admitted from February to July 2008 to the stroke unit of our Neurology Department. Patients were evaluated by neurology residents and nurses using the FOUR score and the GCS. For both scales, we obtained paired and total weighted kappa values (Kw) and intraclass correlation coefficients (ICC). NIH stroke scale was also recorded on admission.\n\n\nRESULTS\nWe obtained a total of 75 paired evaluations in 60 patients (41 cerebral infarctions, 15 cerebral hemorrhages and 4 transient ischemic attacks). Thirty-three (55%) patients were alert, 17 (28.3%) drowsy and 10 (16.7%) stuporous or comatose. The overall rater agreement was excellent in the FOUR score (Kw 0.93; 95% CI 0.89-0.97) with an ICC of 0.94 (95% CI 0.91-0.96) and in the GCS (Kw 0.96; 95% CI 0.94-0.98) with an ICC of 0.96 (95% CI 0.93-0.97). A good correlation was found between the FOUR score and the GCS (rho 0.83; p < 0.01) and between the FOUR score and the NIH stroke scale (rho -0.78; p < 0.001).\n\n\nCONCLUSIONS\nThe FOUR score is a reliable scale for evaluating the level of consciousness in acute stroke patients, showing a good correlation with the GCS and the NIH stroke scale.",
"title": ""
},
{
"docid": "e6640dc272e4142a2ddad8291cfaead7",
"text": "We give a summary of R. Borcherds’ solution (with some modifications) to the following part of the Conway-Norton conjectures: Given the Monster M and Frenkel-Lepowsky-Meurman’s moonshine module V ♮, prove the equality between the graded characters of the elements of M acting on V ♮ (i.e., the McKay-Thompson series for V ♮) and the modular functions provided by Conway and Norton. The equality is established using the homology of a certain subalgebra of the monster Lie algebra, and the Euler-Poincaré identity.",
"title": ""
},
{
"docid": "8cfdd59ba7271d48ea0d41acc2ef795a",
"text": "The Cole single-dispersion impedance model is based upon a constant phase element (CPE), a conductance parameter as a dependent parameter and a characteristic time constant as an independent parameter. Usually however, the time constant of tissue or cell suspensions is conductance dependent, and so the Cole model is incompatible with general relaxation theory and not a model of first choice. An alternative model with conductance as a free parameter influencing the characteristic time constant of the biomaterial has been analyzed. With this free-conductance model it is possible to separately follow CPE and conductive processes, and the nominal time constant no longer corresponds to the apex of the circular arc in the complex plane.",
"title": ""
},
{
"docid": "76e62af2971de3d11d684f1dd7100475",
"text": "Recent advances in memory research suggest methods that can be applied to enhance educational practices. We outline four principles of memory improvement that have emerged from research: 1) process material actively, 2) practice retrieval, 3) use distributed practice, and 4) use metamemory. Our discussion of each principle describes current experimental research underlying the principle and explains how people can take advantage of the principle to improve their learning. The techniques that we suggest are designed to increase efficiency—that is, to allow a person to learn more, in the same unit of study time, than someone using less efficient memory strategies. A common thread uniting all four principles is that people learn best when they are active participants in their own learning.",
"title": ""
},
{
"docid": "cbf278a630fbc3e4b5c363d7cb976aa4",
"text": "Iterative computations are pervasive among data analysis applications in the cloud, including Web search, online social network analysis, recommendation systems, and so on. These cloud applications typically involve data sets of massive scale. Fast convergence of the iterative computation on the massive data set is essential for these applications. In this paper, we explore the opportunity for accelerating iterative computations and propose a distributed computing framework, PrIter, which enables fast iterative computation by providing the support of prioritized iteration. Instead of performing computations on all data records without discrimination, PrIter prioritizes the computations that help convergence the most, so that the convergence speed of iterative process is significantly improved. We evaluate PrIter on a local cluster of machines as well as on Amazon EC2 Cloud. The results show that PrIter achieves up to 50x speedup over Hadoop for a series of iterative algorithms.",
"title": ""
},
{
"docid": "8d0baafd435c44d8e2c1dcfccb755cd8",
"text": "Bayesian optimization is an efficient way to optimize expensive black-box functions such as designing a new product with highest quality or tuning hyperparameter of a machine learning algorithm. However, it has a serious limitation when the parameter space is high-dimensional as Bayesian optimization crucially depends on solving a global optimization of a surrogate utility function in the same sized dimensions. The surrogate utility function, known commonly as acquisition function is a continuous function but can be extremely sharp at high dimension having only a few peaks marooned in a large terrain of almost flat surface. Global optimization algorithms such as DIRECT are infeasible at higher dimensions and gradient-dependent methods cannot move if initialized in the flat terrain. We propose an algorithm that enables local gradient-dependent algorithms to move through the flat terrain by using a sequence of gross-tofiner Gaussian process priors on the objective function as we leverage two underlying facts a) there exists a large enough length-scales for which the acquisition function can be made to have a significant gradient at any location in the parameter space, and b) the extrema of the consecutive acquisition functions are close although they are different only due to a small difference in the length-scales. Theoretical guarantees are provided and experiments clearly demonstrate the utility of the proposed method on both benchmark test functions and real-world case studies.",
"title": ""
}
] |
scidocsrr
|
67058d7b2ed9b51bccf89b7b22373059
|
Humorist Bot: Bringing Computational Humour in a Chat-Bot System
|
[
{
"docid": "2f566d97cf0949ae54276525b805239e",
"text": "The paper analyzes some forms of linguistic ambiguity in English in a specific register, i.e. newspaper headlines. In particular, the focus of the research is on examples of lexical and syntactic ambiguity that result in sources of voluntary or involuntary humor. The study is based on a corpus of 135 verbally ambiguous headlines found on web sites presenting humorous bits of information. The linguistic phenomena that contribute to create this kind of semantic confusion in headlines will be analyzed and divided into the three main categories of lexical, syntactic, and phonological ambiguity, and examples from the corpus will be discussed for each category. The main results of the study were that, firstly, contrary to the findings of previous research on jokes, syntactically ambiguous headlines were found in good percentage in the corpus and that this might point to di¤erences in genre. Secondly, two new configurations for the processing of the disjunctor/connector order were found. In the first of these configurations the disjunctor appears before the connector, instead of being placed after or coinciding with the ambiguous element, while in the second one two ambiguous elements are present, each of which functions both as a connector and",
"title": ""
}
] |
[
{
"docid": "e3af956e04a55c8bed24efdebdd01931",
"text": "Since the effective and efficient system of water quality monitoring (WQM) are critical implementation for the issue of polluted water globally, with increasing in the development of Wireless Sensor Network (WSN) technology in the Internet of Things (IoT) environment, real time water quality monitoring is remotely monitored by means of real-time data acquisition, transmission and processing. This paper presents a reconfigurable smart sensor interface device for water quality monitoring system in an IoT environment. The smart WQM system consists of Field Programmable Gate Array (FPGA) design board, sensors, Zigbee based wireless communication module and personal computer (PC). The FPGA board is the core component of the proposed system and it is programmed in very high speed integrated circuit hardware description language (VHDL) and C programming language using Quartus II software and Qsys tool. The proposed WQM system collects the five parameters of water data such as water pH, water level, turbidity, carbon dioxide (CO2) on the surface of water and water temperature in parallel and in real time basis with high speed from multiple different sensor nodes.",
"title": ""
},
{
"docid": "010c2b908c0f4b33272eec553bb842ca",
"text": "In the last decade, optimized treatment for non-small cell lung cancer had lead to improved prognosis, but the overall survival is still very short. To further understand the molecular basis of the disease we have to identify biomarkers related to survival. Here we present the development of an online tool suitable for the real-time meta-analysis of published lung cancer microarray datasets to identify biomarkers related to survival. We searched the caBIG, GEO and TCGA repositories to identify samples with published gene expression data and survival information. Univariate and multivariate Cox regression analysis, Kaplan-Meier survival plot with hazard ratio and logrank P value are calculated and plotted in R. The complete analysis tool can be accessed online at: www.kmplot.com/lung. All together 1,715 samples of ten independent datasets were integrated into the system. As a demonstration, we used the tool to validate 21 previously published survival associated biomarkers. Of these, survival was best predicted by CDK1 (p<1E-16), CD24 (p<1E-16) and CADM1 (p = 7E-12) in adenocarcinomas and by CCNE1 (p = 2.3E-09) and VEGF (p = 3.3E-10) in all NSCLC patients. Additional genes significantly correlated to survival include RAD51, CDKN2A, OPN, EZH2, ANXA3, ADAM28 and ERCC1. In summary, we established an integrated database and an online tool capable of uni- and multivariate analysis for in silico validation of new biomarker candidates in non-small cell lung cancer.",
"title": ""
},
{
"docid": "398c791338adf824a81a2bfb8f35c6bb",
"text": "Hybrid Reality Environments represent a new kind of visualization spaces that blur the line between virtual environments and high resolution tiled display walls. This paper outlines the design and implementation of the CAVE2 TM Hybrid Reality Environment. CAVE2 is the world’s first near-seamless flat-panel-based, surround-screen immersive system. Unique to CAVE2 is that it will enable users to simultaneously view both 2D and 3D information, providing more flexibility for mixed media applications. CAVE2 is a cylindrical system of 24 feet in diameter and 8 feet tall, and consists of 72 near-seamless, off-axisoptimized passive stereo LCD panels, creating an approximately 320 degree panoramic environment for displaying information at 37 Megapixels (in stereoscopic 3D) or 74 Megapixels in 2D and at a horizontal visual acuity of 20/20. Custom LCD panels with shifted polarizers were built so the images in the top and bottom rows of LCDs are optimized for vertical off-center viewingallowing viewers to come closer to the displays while minimizing ghosting. CAVE2 is designed to support multiple operating modes. In the Fully Immersive mode, the entire room can be dedicated to one virtual simulation. In 2D model, the room can operate like a traditional tiled display wall enabling users to work with large numbers of documents at the same time. In the Hybrid mode, a mixture of both 2D and 3D applications can be simultaneously supported. The ability to treat immersive work spaces in this Hybrid way has never been achieved before, and leverages the special abilities of CAVE2 to enable researchers to seamlessly interact with large collections of 2D and 3D data. To realize this hybrid ability, we merged the Scalable Adaptive Graphics Environment (SAGE) a system for supporting 2D tiled displays, with Omegalib a virtual reality middleware supporting OpenGL, OpenSceneGraph and Vtk applications.",
"title": ""
},
{
"docid": "0d5ba680571a9051e70ababf0c685546",
"text": "• Current deep RL techniques require large amounts of data to find a good policy • Once found, the policy remains a black box to practitioners • Practitioners cannot verify that the policy is making decisions based on reasonable information • MOREL (Motion-Oriented REinforcement Learning) automatically detects moving objects and uses the relevant information for action selection • We gather a dataset using a uniform random policy • Train a network without supervision to capture a structured representation of motion between frames • Network predicts object masks, object motion, and camera motion to warp one frame into the next Introduction Learning to Segment Moving Objects Experiments Visualization",
"title": ""
},
{
"docid": "67d7bd4580cffc59021350078a135500",
"text": "With the popularity of handheld devices, the development of wireless communication technology and the proliferation of multimedia resources, mobile video has become the main business in LTE networks with explosive traffic demands. How to improve the quality of experience (QoE) of mobile video in the dynamic and complex network environment has become a research focus. Dynamic adaptive streaming over HTTP technology introduces adaptive bitrate (ABR) requests at the client side to improve video QoE and various rate adaptation algorithms are also constantly proposed. In view of the limitations of the existing heuristic or learning-based ABR methods, we propose redirecting enhanced Deep Q-learning toward DASH video QoE (RDQ), a QoE-oriented rate adaptation framework based on enhanced deep Q-learning. First, we establish a chunkwise subjective QoE model and utilize it as the reward function in reinforcement learning so that the strategy can converge toward the direction of maximizing the subjective QoE score. Then, we apply several effective improvements of deep Q-learning to the RDQ agent’s neural network architecture and learning mechanism to achieve faster convergence and higher average reward than other learning-based methods. The proposed RDQ agent has been thoroughly evaluated using trace-based simulation on the real-time LTE network data. For disparate network scenarios and different video contents, the RDQ agent can outperform the existing methods in terms of the QoE score. The breakdown analysis shows that RDQ can suppress the number and the duration of the stalling events to the minimum while maintaining high video bitrate, thus achieving better QoE performance than other methods.",
"title": ""
},
{
"docid": "dc61d7035503b7e382824aea6ec06a8b",
"text": "Deep learning is formulated as a discrete-time optimal control problem. This allows one to characterize necessary conditions for optimality and develop training algorithms that do not rely on gradients with respect to the trainable parameters. In particular, we introduce the discrete-time method of successive approximations (MSA), which is based on the Pontryagin’s maximum principle, for training neural networks. A rigorous error estimate for the discrete MSA is obtained, which sheds light on its dynamics and the means to stabilize the algorithm. The developed methods are applied to train, in a rather principled way, neural networks with weights that are constrained to take values in a discrete set. We obtain competitive performance and interestingly, very sparse weights in the case of ternary networks, which may be useful in model deployment in low-memory devices.",
"title": ""
},
{
"docid": "0807bfb91fdb15b19652e98f0af20f29",
"text": "Finding the factorization of a polynomial over a finite field is of interest not only independently but also for many applications in computer algebra, algebraic coding theory, cryptography, and computational number theory. Polynomial factorization over finite fields is used as a subproblem in algorithms for factoring polynomials over the integers (Zassenhaus, 1969; Collins, 1979; Lenstra et al., 1982; Knuth, 1998), for constructing cyclic redundancy codes and BCH codes (Berlekamp, 1968; MacWilliams and Sloane, 1977; van Lint, 1982), for designing public key cryptosystems (Chor and Rivest, 1985; Odlyzko, 1985; Lenstra, 1991), and for computing the number of points on elliptic curves (Buchmann, 1990). Major improvements have been made in the polynomial factorization problem during this decade both in theory and in practice. From a theoretical point of view, asymptotically faster algorithms have been proposed. However, these advances are yet more striking in practice where variants of the asymptotically fastest algorithms allow us to factor polynomials over finite fields in reasonable amounts of time that were unassailable a few years ago. Our purpose in this survey is to stress the basic ideas behind these methods, to overview experimental results, as well as to give a comprehensive up-to-date bibliography of the problem. Kaltofen (1982, 1990, 1992) has given excellent surveys of",
"title": ""
},
{
"docid": "ca3ea61314d43abeac81546e66ff75e4",
"text": "OBJECTIVE\nTo describe and discuss the process used to write a narrative review of the literature for publication in a peer-reviewed journal. Publication of narrative overviews of the literature should be standardized to increase their objectivity.\n\n\nBACKGROUND\nIn the past decade numerous changes in research methodology pertaining to reviews of the literature have occurred. These changes necessitate authors of review articles to be familiar with current standards in the publication process.\n\n\nMETHODS\nNarrative overview of the literature synthesizing the findings of literature retrieved from searches of computerized databases, hand searches, and authoritative texts.\n\n\nDISCUSSION\nAn overview of the use of three types of reviews of the literature is presented. Step by step instructions for how to conduct and write a narrative overview utilizing a 'best-evidence synthesis' approach are discussed, starting with appropriate preparatory work and ending with how to create proper illustrations. Several resources for creating reviews of the literature are presented and a narrative overview critical appraisal worksheet is included. A bibliography of other useful reading is presented in an appendix.\n\n\nCONCLUSION\nNarrative overviews can be a valuable contribution to the literature if prepared properly. New and experienced authors wishing to write a narrative overview should find this article useful in constructing such a paper and carrying out the research process. It is hoped that this article will stimulate scholarly dialog amongst colleagues about this research design and other complex literature review methods.",
"title": ""
},
{
"docid": "90ce5197708ee86f42ac8c5e985e481f",
"text": "This paper proposes a method to predict fluctuations in the prices of cryptocurrencies, which are increasingly used for online transactions worldwide. Little research has been conducted on predicting fluctuations in the price and number of transactions of a variety of cryptocurrencies. Moreover, the few methods proposed to predict fluctuation in currency prices are inefficient because they fail to take into account the differences in attributes between real currencies and cryptocurrencies. This paper analyzes user comments in online cryptocurrency communities to predict fluctuations in the prices of cryptocurrencies and the number of transactions. By focusing on three cryptocurrencies, each with a large market size and user base, this paper attempts to predict such fluctuations by using a simple and efficient method.",
"title": ""
},
{
"docid": "f72607d15fe2015858bf74b09e574692",
"text": "This document describes several aspects relating to the design of dc-dc converters operating at frequencies in the VHF range (30-300 MHz). Design considerations are treated in the context of a dc-dc converter operating at a switching frequency of 100 MHz. Gate drive, rectifier and control designs are explored in detail, and experimental measurements of the complete converter are presented that verify the design approach. The gate drive, a self-oscillating multi-resonant circuit, dramatically reduces the gating power while ensuring fast onoff transitions of the semiconductor switch. The rectifier is a resonant topology that absorbs diode parasitic capacitance and is designed to appear resistive at the switching frequency. The small sizes of the energy storage elements (inductors and capacitors) in this circuit permit rapid start-up and shut-down and a correspondingly high control bandwidth. These characteristics are exploited in a high bandwidth hysteretic control scheme that modulates the converter on and off at frequencies as high as 200 kHz.",
"title": ""
},
{
"docid": "e2c2cdb5245b73b7511c434c4901fff8",
"text": "Adversarial machine learning in the context of image processing and related applications has received a large amount of attention. However, adversarial machine learning, especially adversarial deep learning, in the context of malware detection has received much less attention despite its apparent importance. In this paper, we present a framework for enhancing the robustness of Deep Neural Networks (DNNs) against adversarial malware samples, dubbed Hashing Transformation Deep Neural Networks (HashTran-DNN). The core idea is to use hash functions with a certain locality-preserving property to transform samples to enhance the robustness of DNNs in malware classification. The framework further uses a Denoising Auto-Encoder (DAE) regularizer to reconstruct the hash representations of samples, making the resulting DNN classifiers capable of attaining the locality information in the latent space. We experiment with two concrete instantiations of the HashTranDNN framework to classify Android malware. Experimental results show that four known attacks can render standard DNNs useless in classifying Android malware, that known defenses can at most defend three of the four attacks, and that HashTran-DNN can effectively defend against all of the four attacks.",
"title": ""
},
{
"docid": "0738367dec2b7f1c5687ce1a15c8ac28",
"text": "There is a high demand for qualified information and communication technology (ICT) practitioners in the European labour market, but the problem at many universities is a high dropout rate among ICT students, especially during the first study year. The solution might be to focus more on improving students’ computational thinking (CT) before starting university studies. Therefore, research is needed to find the best methods for learning CT already at comprehensive school level to raise the interest in and awareness of studying computer science. Doing so requires a clear understanding of CT and a model to improve it at comprehensive schools. Through the analysis of the articles found in EBSCO Discovery Search tool, this study gives an overview of the definition of CT and presents three models of CT. The models are analysed to find out their similarities and differences in order to gather together the core elements of CT and form a revised model of learning CT in comprehensive school ICT lessons or integrating CT in other subjects.",
"title": ""
},
{
"docid": "b205dd971c6fb240b5fc85e9c3ee80a9",
"text": "Network embedding leverages the node proximity manifested to learn a low-dimensional node vector representation for each node in the network. The learned embeddings could advance various learning tasks such as node classification, network clustering, and link prediction. Most, if not all, of the existing works, are overwhelmingly performed in the context of plain and static networks. Nonetheless, in reality, network structure often evolves over time with addition/deletion of links and nodes. Also, a vast majority of real-world networks are associated with a rich set of node attributes, and their attribute values are also naturally changing, with the emerging of new content patterns and the fading of old content patterns. These changing characteristics motivate us to seek an effective embedding representation to capture network and attribute evolving patterns, which is of fundamental importance for learning in a dynamic environment. To our best knowledge, we are the first to tackle this problem with the following two challenges: (1) the inherently correlated network and node attributes could be noisy and incomplete, it necessitates a robust consensus representation to capture their individual properties and correlations; (2) the embedding learning needs to be performed in an online fashion to adapt to the changes accordingly. In this paper, we tackle this problem by proposing a novel dynamic attributed network embedding framework - DANE. In particular, DANE first provides an offline method for a consensus embedding and then leverages matrix perturbation theory to maintain the freshness of the end embedding results in an online manner. We perform extensive experiments on both synthetic and real attributed networks to corroborate the effectiveness and efficiency of the proposed framework.",
"title": ""
},
{
"docid": "d8cdef48386a73c72436f6ed570f0630",
"text": "Webbed penis as an isolated anomaly is rare, having been reported in 10 cases. A report is made of a 1-year-old child successfully repaired by a rectangular scrotal flap to close the penoscrotal junction and multiple W-plasty incisions for closure of the skin of the shaft of the penis.",
"title": ""
},
{
"docid": "3a314a72ea2911844a5a3462d052f4e7",
"text": "While increasing income inequality in China has been commented on and studied extensively, relatively little analysis is available on inequality in other dimensions of human development. Using data from different sources, this paper presents some basic facts on the evolution of spatial inequalities in education and healthcare in China over the long run. In the era of economic reforms, as the foundations of education and healthcare provision have changed, so has the distribution of illiteracy and infant mortality. Across provinces and within provinces, between rural and urban areas and within rural and urban areas, social inequalities have increased substantially since the reforms began.",
"title": ""
},
{
"docid": "640e1bf49e1205077898eddcdcbc5906",
"text": "Machine comprehension(MC) style question answering is a representative problem in natural language processing. Previous methods rarely spend time on the improvement of encoding layer, especially the embedding of syntactic information and name entity of the words, which are very crucial to the quality of encoding. Moreover, existing attention methods represent each query word as a vector or use a single vector to represent the whole query sentence, neither of them can handle the proper weight of the key words in query sentence. In this paper, we introduce a novel neural network architecture called Multi-layer Embedding with Memory Network(MEMEN) for machine reading task. In the encoding layer, we employ classic skip-gram model to the syntactic and semantic information of the words to train a new kind of embedding layer. We also propose a memory network of full-orientation matching of the query and passage to catch more pivotal information. Experiments show that our model has competitive results both from the perspectives of precision and efficiency in Stanford Question Answering Dataset(SQuAD) among all published results and achieves the state-of-the-art results on TriviaQA dataset.",
"title": ""
},
{
"docid": "b4cf0dd9f7aa5451636926627923ca08",
"text": "Named Data Networking (NDN) is an entirely new internet architecture inspired by years of empirical research into network usage. NDN is related to Content Centric Networking. Unique feature of NDN is its adaptive forwarding plane. In NDN, the packets carry the data name instead of the source and destination address. In NDN, communication takes place by the exchange of Interest and Data packets. Data consumers send Interest packets in the form of names. Routers forward the Interest packet based on the data name and also maintain the state information of pending Interests that enable NDN routers to detect loops, measure performance of different path, quickly detect failures and retry alternative path. The producer replies with data packet that takes the reverse path of Interests. In this paper the motivation and the vision of NDN architecture, and basic components and operations of NDN are described. Also the strength and weakness of NDN are reviewed. The final discussion aims to identify challenges and some future directions for NDN deployment.",
"title": ""
},
{
"docid": "f56c5a623b29b88f42bf5d6913b2823e",
"text": "We describe a novel interface for composition of polygonal meshes based around two artist-oriented tools: Geometry Drag-and-Drop and Mesh Clone Brush. Our drag-and-drop interface allows a complex surface part to be selected and interactively dragged to a new location. We automatically fill the hole left behind and smoothly deform the part to conform to the target surface. The artist may increase the boundary rigidity of this deformation, in which case a fair transition surface is automatically computed. Our clone brush allows for transfer of surface details with precise spatial control. These tools support an interaction style that has not previously been demonstrated for 3D surfaces, allowing detailed 3D models to be quickly assembled from arbitrary input meshes. We evaluated this interface by distributing a basic tool to computer graphics hobbyists and professionals, and based on their feedback, describe potential workflows which could utilize our techniques.",
"title": ""
},
{
"docid": "9f04ac4067179aadf5e429492c7625e9",
"text": "We provide a model that links an asset’s market liquidity — i.e., the ease with which it is traded — and traders’ funding liquidity — i.e., the ease with which they can obtain funding. Traders provide market liquidity, and their ability to do so depends on their availability of funding. Conversely, traders’ funding, i.e., their capital and the margins they are charged, depend on the assets’ market liquidity. We show that, under certain conditions, margins are destabilizing and market liquidity and funding liquidity are mutually reinforcing, leading to liquidity spirals. The model explains the empirically documented features that market liquidity (i) can suddenly dry up, (ii) has commonality across securities, (iii) is related to volatility, (iv) is subject to “flight to quality”, and (v) comoves with the market, and it provides new testable predictions.",
"title": ""
},
{
"docid": "43d9566553ecf29c72cdac7466aab9dc",
"text": "This paper presents an integrated approach for the automatic extraction of rectangularand circularshape buildings from high-resolution optical spaceborne images using the integration of support vector machine (SVM) classification, Hough transformation and perceptual grouping. The building patches are detected from the image using the binary SVM classification. The generated normalized digital surface model (nDSM) and the normalized difference vegetation index (NDVI) are incorporated in the classification process as additional bands. After detecting the building patches, the building boundaries are extracted through sequential processing of edge detection, Hough transformation and perceptual grouping. Those areas that are classified as building are masked and further processing operations are performed on the masked areas only. The edges of the buildings are detected through an edge detection algorithm that generates a binary edge image of the building patches. These edges are then converted into vector form through Hough transform and the buildings are constructed by means of perceptual grouping. To validate the developed method, experiments were conducted on pan-sharpened and panchromatic Ikonos imagery, covering the selected test areas in Batikent district of Ankara, Turkey. For the test areas that contain industrial buildings, the average building detection percentage (BDP) and quality percentage (QP) values were computed to be 93.45% and 79.51%, respectively. For the test areas that contain residential rectangular-shape buildings, the average BDP and QP values were computed to be 95.34% and 79.05%, respectively. For the test areas that contain residential circular-shape buildings, the average BDP and QP values were found to be 78.74% and 66.81%, respectively. © 2014 Elsevier B.V. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
88dfe199847320a540146e0a510a0db7
|
Automated anomaly detection and performance modeling of enterprise applications
|
[
{
"docid": "9b628f47102a0eee67e469e223ece837",
"text": "We present a method for automatically extracting from a running system an indexable signature that distills the essential characteristic from a system state and that can be subjected to automated clustering and similarity-based retrieval to identify when an observed system state is similar to a previously-observed state. This allows operators to identify and quantify the frequency of recurrent problems, to leverage previous diagnostic efforts, and to establish whether problems seen at different installations of the same site are similar or distinct. We show that the naive approach to constructing these signatures based on simply recording the actual ``raw'' values of collected measurements is ineffective, leading us to a more sophisticated approach based on statistical modeling and inference. Our method requires only that the system's metric of merit (such as average transaction response time) as well as a collection of lower-level operational metrics be collected, as is done by existing commercial monitoring tools. Even if the traces have no annotations of prior diagnoses of observed incidents (as is typical), our technique successfully clusters system states corresponding to similar problems, allowing diagnosticians to identify recurring problems and to characterize the ``syndrome'' of a group of problems. We validate our approach on both synthetic traces and several weeks of production traces from a customer-facing geoplexed 24 x 7 system; in the latter case, our approach identified a recurring problem that had required extensive manual diagnosis, and also aided the operators in correcting a previous misdiagnosis of a different problem.",
"title": ""
},
{
"docid": "7e0c7042c7bc4d1084234f48dd2e0333",
"text": "Many interesting large-scale systems are distributed systems of multiple communicating components. Such systems can be very hard to debug, especially when they exhibit poor performance. The problem becomes much harder when systems are composed of \"black-box\" components: software from many different (perhaps competing) vendors, usually without source code available. Typical solutions-provider employees are not always skilled or experienced enough to debug these systems efficiently. Our goal is to design tools that enable modestly-skilled programmers (and experts, too) to isolate performance bottlenecks in distributed systems composed of black-box nodes.We approach this problem by obtaining message-level traces of system activity, as passively as possible and without any knowledge of node internals or message semantics. We have developed two very different algorithms for inferring the dominant causal paths through a distributed system from these traces. One uses timing information from RPC messages to infer inter-call causality; the other uses signal-processing techniques. Our algorithms can ascribe delay to specific nodes on specific causal paths. Unlike previous approaches to similar problems, our approach requires no modifications to applications, middleware, or messages.",
"title": ""
}
] |
[
{
"docid": "7b4400c6ef5801e60a6f821810538381",
"text": "A CMOS self-biased fully differential amplifier is presented. Due to the self-biasing structure of the amplifier and its associated negative feedback, the amplifier is compensated to achieve low sensitivity to process, supply voltage and temperature (PVT) variations. The output common-mode voltage of the amplifier is adjusted through the same biasing voltages provided by the common-mode feedback (CMFB) circuit. The amplifier core is based on a simple structure that uses two CMOS inverters to amplify the input differential signal. Despite its simple structure, the proposed amplifier is attractive to a wide range of applications, specially those requiring low power and small silicon area. As two examples, a sample-and-hold circuit and a second order multi-bit sigma-delta modulator either employing the proposed amplifier are presented. Besides these application examples, a set of amplifier performance parameters is given.",
"title": ""
},
{
"docid": "3a066516f52dec6150fcf4a8e081605f",
"text": "Writer: Julie Risbourg Title: Breaking the ‘glass ceiling’ Subtitle: Language: A Critical Discourse Analysis of how powerful businesswomen are portrayed in The Economist online English Pages: 52 Women still represent a minority in the executive world. Much research has been aimed at finding possible explanations concerning the underrepresentation of women in the male dominated executive sphere. The findings commonly suggest that a patriarchal society and the maintenance of gender stereotypes lead to inequalities and become obstacles for women to break the so-called ‘glass ceiling’. This thesis, however, aims to explore how businesswomen are represented once they have broken the glass ceiling and entered the executive world. Within the Forbes’ list of the 100 most powerful women of 2017, the two first businesswomen on the list were chosen, and their portrayals were analysed through articles published by The Economist online. The theoretical framework of this thesis includes Goffman’s framing theory and takes a cultural feminist perspective on exploring how the media outlet frames businesswomen Sheryl Sandberg and Mary Barra. The thesis also examines how these frames relate to the concepts of stereotyping, commonly used in the coverage of women in the media. More specifically, the study investigates whether negative stereotypes concerning their gender are present in the texts or if positive stereotypes such as idealisation are used to portray them. Those concepts are coupled with the theoretical aspect of the method, which is Critical Discourse Analysis. This method is chosen in order to explore the underlying meanings and messages The Economist chose to refer to these two businesswomen. This is done through the use of linguistic and visual tools, such as lexical choices, word connotations, nomination/functionalisation and gaze. The findings show that they were portrayed positively within a professional environment, and the publication celebrated their success and hard work. Moreover, the results also show that gender related traits were mentioned, showing a subjective representation, which is countered by their idealisation, via their presence in not only the executive world, but also having such high-working titles in male dominated industries.",
"title": ""
},
{
"docid": "5090070d6d928b83bd22d380f162b0a6",
"text": "The Federal Aviation Administration (FAA) has been increasing the National Airspace System (NAS) capacity to accommodate the predicted rapid growth of air traffic. One method to increase the capacity is reducing air traffic controller workload so that they can handle more air traffic. It is crucial to measure the impact of the increasing future air traffic on controller workload. Our experimental data show a linear relationship between the number of aircraft in the en route center sector and controllers’ perceived workload. Based on the extensive range of aircraft count from 14 to 38 in the experiment, we can predict en route center controllers working as a team of Radar and Data controllers with the automation tools available in the our experiment could handle up to about 28 aircraft. This is 33% more than the 21 aircraft that en route center controllers typically handle in a busy sector.",
"title": ""
},
{
"docid": "2b1a9f7131b464d9587137baf828cd3a",
"text": "The description of the spatial characteristics of twoand three-dimensional objects, in the framework of MPEG-7, is considered. The shape of an object is one of its fundamental properties, and this paper describes an e$cient way to represent the coarse shape, scale and composition properties of an object. This representation is invariant to resolution, translation and rotation, and may be used for both two-dimensional (2-D) and three-dimensional (3-D) objects. This coarse shape descriptor will be included in the eXperimentation Model (XM) of MPEG-7. Applications of such a description to search object databases, in particular the CAESAR anthropometric database are discussed. ( 2000 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "b27ab468a885a3d52ec2081be06db2ef",
"text": "The beautification of human photos usually requires professional editing softwares, which are difficult for most users. In this technical demonstration, we propose a deep face beautification framework, which is able to automatically modify the geometrical structure of a face so as to boost the attractiveness. A learning based approach is adopted to capture the underlying relations between the facial shape and the attractiveness via training the Deep Beauty Predictor (DBP). Relying on the pre-trained DBP, we construct the BeAuty SHaper (BASH) to infer the \"flows\" of landmarks towards the maximal aesthetic level. BASH modifies the facial landmarks with the direct guidance of the beauty score estimated by DBP.",
"title": ""
},
{
"docid": "00f0ba62d43b775ffd1c0809acef9175",
"text": "1. T. Shiratori, A. Nakazawa, K. Ikeuchi, “Dancing-to-Music Character Animation”, In Computer Graphics Forum, Vol. 25, No. 3 (also in Eurographics 2006), Sep. 2006 (to appear) 2. T. Shiratori, A. Nakazawa, K. Ikeuchi, “Synthesizing Dance Performance Using Musical and Motion Features”, In Proc. of IEEE International Conference on Robotics and Automation (ICRA 2006), May 2006 A Dancing-to-Music ability for CG characters & humanoids",
"title": ""
},
{
"docid": "dcdb6242febbef358efe5a1461957291",
"text": "Neuromorphic Engineering has emerged as an exciting research area, primarily owing to the paradigm shift from conventional computing architectures to data-driven, cognitive computing. There is a diversity of work in the literature pertaining to neuromorphic systems, devices and circuits. This review looks at recent trends in neuromorphic engineering and its sub-domains, with an attempt to identify key research directions that would assume significance in the future. We hope that this review would serve as a handy reference to both beginners and experts, and provide a glimpse into the broad spectrum of applications of neuromorphic hardware and algorithms. Our survey indicates that neuromorphic engineering holds a promising future, particularly with growing data volumes, and the imminent need for intelligent, versatile computing.",
"title": ""
},
{
"docid": "c4e80fd8e2c5b1795c016c9542f8f33e",
"text": "Duckweeds, plants of the Lemnaceae family, have the distinction of being the smallest angiosperms in the world with the fastest doubling time. Together with its naturally ability to thrive on abundant anthropogenic wastewater, these plants hold tremendous potential to helping solve critical water, climate and fuel issues facing our planet this century. With the conviction that rapid deployment and optimization of the duckweed platform for biomass production will depend on close integration between basic and applied research of these aquatic plants, the first International Conference on Duckweed Research and Applications (ICDRA) was organized and took place in Chengdu, China, from October 7th to 10th of 2011. Co-organized with Rutgers University of New Jersey (USA), this Conference attracted participants from Germany, Denmark, Japan, Australia, in addition to those from the US and China. The following are concise summaries of the various oral presentations and final discussions over the 2.5 day conference that serve to highlight current research interests and applied research that are paving the way for the imminent deployment of this novel aquatic crop. We believe the sharing of this information with the broad Plant Biology community is an important step toward the renaissance of this excellent plant model that will have important impact on our quest for sustainable development of the world.",
"title": ""
},
{
"docid": "2804384964bc8996e6574bdf67ed9cb5",
"text": "In the past 2 decades, correlational and experimental studies have found a positive association between violent video game play and aggression. There is less evidence, however, to support a long-term relation between these behaviors. This study examined sustained violent video game play and adolescent aggressive behavior across the high school years and directly assessed the socialization (violent video game play predicts aggression over time) versus selection hypotheses (aggression predicts violent video game play over time). Adolescents (N = 1,492, 50.8% female) were surveyed annually from Grade 9 to Grade 12 about their video game play and aggressive behaviors. Nonviolent video game play, frequency of overall video game play, and a comprehensive set of potential 3rd variables were included as covariates in each analysis. Sustained violent video game play was significantly related to steeper increases in adolescents' trajectory of aggressive behavior over time. Moreover, greater violent video game play predicted higher levels of aggression over time, after controlling for previous levels of aggression, supporting the socialization hypothesis. In contrast, no support was found for the selection hypothesis. Nonviolent video game play also did not predict higher levels of aggressive behavior over time. Our findings, and the fact that many adolescents play video games for several hours every day, underscore the need for a greater understanding of the long-term relation between violent video games and aggression, as well as the specific game characteristics (e.g., violent content, competition, pace of action) that may be responsible for this association.",
"title": ""
},
{
"docid": "e8c9067f13c9a57be46823425deb783b",
"text": "In order to utilize the tremendous computing power of graphics hardware and to automatically adapt to the fast and frequent changes in its architecture and performance characteristics, this paper implements an automatic tuning system to generate high-performance matrix-multiplication implementation on graphics hardware. The automatic tuning system uses a parameterized code generator to generate multiple versions of matrix multiplication, whose performances are empirically evaluated by actual execution on the target platform. An ad-hoc search engine is employed to search over the implementation space for the version that yields the best performance. In contrast to similar systems on CPUs, which utilize cache blocking, register tiling, instruction scheduling tuning strategies, this paper identifies and exploits several tuning strategies that are unique for graphics hardware. These tuning strategies include optimizing for multiple-render-targets, SIMD instructions with data packing, overcoming limitations on instruction count and dynamic branch instruction. The generated implementations have comparable performance with expert manually tuned version in spite of the significant overhead incurred due to the use of the high-level BrookGPU language.",
"title": ""
},
{
"docid": "e66e7677aa769135a6a9b9ea5c807212",
"text": "At ICSE'2013, there was the first session ever dedicated to automatic program repair. In this session, Kim et al. presented PAR, a novel template-based approach for fixing Java bugs. We strongly disagree with key points of this paper. Our critical review has two goals. First, we aim at explaining why we disagree with Kim and colleagues and why the reasons behind this disagreement are important for research on automatic software repair in general. Second, we aim at contributing to the field with a clarification of the essential ideas behind automatic software repair. In particular we discuss the main evaluation criteria of automatic software repair: understandability, correctness and completeness. We show that depending on how one sets up the repair scenario, the evaluation goals may be contradictory. Eventually, we discuss the nature of fix acceptability and its relation to the notion of software correctness.",
"title": ""
},
{
"docid": "29e360b1e1999a284d4e464ce4c9ed51",
"text": "To study the role of brain oscillations in working memory, we recorded the scalp electroencephalogram (EEG) during the retention interval of a modified Sternberg task. A power spectral analysis of the EEG during the retention interval revealed a clear peak at 9-12 Hz, a frequency in the alpha band (8-13 Hz). In apparent conflict with previous ideas according to which alpha band oscillations represent brain \"idling\", we found that the alpha peak systematically increased with the number of items held in working memory. The enhancement was prominent over the posterior and bilateral central regions. The enhancement over posterior regions is most likely explained by the well known alpha rhythm produced close to the parietal-occipital fissure, whereas the lateral enhancement could be explained by sources in somato-motor cortex. A time-frequency analysis revealed that the enhancement was present throughout the last 2.5 s of the 2.8 s retention interval and that alpha power rapidly diminished following the probe. The load dependence and the tight temporal regulation of alpha provide strong evidence that the alpha generating system is directly or indirectly linked to the circuits responsible for working memory. Although a clear peak in the theta band (5-8 Hz) was only detectable in one subject, other lines of evidence indicate that theta occurs and also has a role in working memory. Hypotheses concerning the role of alpha band activity in working memory are discussed.",
"title": ""
},
{
"docid": "4691ef360395aefb51a8fb086ae50991",
"text": "Estimating 3D pose of a known object from a given 2D image is an important problem with numerous studies for robotics and augmented reality applications. While the state-of-the-art Perspective-n-Point algorithms perform well in pose estimation, the success hinges on whether feature points can be extracted and matched correctly on targets with rich texture. In this work, we propose a robust direct method for 3D pose estimation with high accuracy that performs well on both textured and textureless planar targets. First, the pose of a planar target with respect to a calibrated camera is approximately estimated by posing it as a template matching problem. Next, the object pose is further refined and disambiguated with a gradient descent search scheme. Extensive experiments on both synthetic and real datasets demonstrate the proposed direct pose estimation algorithm performs favorably against state-of-the-art feature-based approaches in terms of robustness and accuracy under several varying conditions.",
"title": ""
},
{
"docid": "262f1e965b311bf866ef5b924b6085a7",
"text": "By considering the amount of uncertainty perceived and the willingness to bear uncertainty concomitantly, we provide a more complete conceptual model of entrepreneurial action that allows for examination of entrepreneurial action at the individual level of analysis while remaining consistent with a rich legacy of system-level theories of the entrepreneur. Our model not only exposes limitations of existing theories of entrepreneurial action but also contributes to a deeper understanding of important conceptual issues, such as the nature of opportunity and the potential for philosophical reconciliation among entrepreneurship scholars.",
"title": ""
},
{
"docid": "38a7f57900474553f6979131e7f39e5d",
"text": "A cascade switched-capacitor ΔΣ analog-to-digital converter, suitable for WLANs, is presented. It uses a double-sampling scheme with single set of DAC capacitors, and an improved low-distortion architecture with an embedded-adder integrator. The proposed architecture eliminates one active stage, and reduces the output swings in the loop-filter and hence the non-linearity. It was fabricated with a 0.18um CMOS process. The prototype chip achieves 75.5 dB DR, 74 dB SNR, 73.8 dB SNDR, −88.1 dB THD, and 90.2 dB SFDR over a 10 MHz signal band with an FoM of 0.27 pJ/conv-step.",
"title": ""
},
{
"docid": "22a2779e79ec8fcc2f3e20ffef52e219",
"text": "Despite the great progress achieved in unconstrained face recognition, pose variations still remain a challenging and unsolved practical issue. We propose a novel framework for multi-view face recognition based on extracting and matching pose-robust face signatures from 2D images. Specifically, we propose an efficient method for monocular 3D face reconstruction, which is used to lift the 2D facial appearance to a canonical texture space and estimate the self-occlusion. On the lifted facial texture we then extract various local features, which are further enhanced by the occlusion encodings computed on the self-occlusion mask, resulting in a pose-robust face signature, a novel feature representation of the original 2D facial image. Extensive experiments on two public datasets demonstrate that our method not only simplifies the matching of multi-view 2D facial images by circumventing the requirement for pose-adaptive classifiers, but also achieves superior performance.",
"title": ""
},
{
"docid": "3f5e8ac89e893d3166f5e3c50f91b8cc",
"text": "Biosequences typically have a small alphabet, a long length, and patterns containing gaps (i.e., \"don't care\") of arbitrary size. Mining frequent patterns in such sequences faces a different type of explosion than in transaction sequences primarily motivated in market-basket analysis. In this paper, we study how this explosion affects the classic sequential pattern mining, and present a scalable two-phase algorithm to deal with this new explosion. The <i>Segment Phase</i> first searches for short patterns containing no gaps, called <i>segments</i>. This phase is efficient. The <i>Pattern Phase</i> searches for long patterns containing multiple segments separated by variable length gaps. This phase is time consuming. The purpose of two phases is to exploit the information obtained from the first phase to speed up the pattern growth and matching and to prune the search space in the second phase. We evaluate this approach on synthetic and real life data sets.",
"title": ""
},
{
"docid": "9c59eb4f1843db91a2511db2ad5fd35c",
"text": "Segmentation is an important task of any Optical Character Recognition (OCR) system. It separates the image text documents into lines, words and characters. The accuracy of OCR system mainly depends on the segmentation algorithm being used. Segmentation of handwritten text of some Indian languages like Kannada, Telugu, Assamese is difficult when compared with Latin based languages because of its structural complexity and increased character set. It contains vowels, consonants and compound characters. Some of the characters may overlap together. Despite several successful works in OCR all over the world, development of OCR tools in Indian languages is still an ongoing process. Character segmentation plays an important role in character recognition because incorrectly segmented characters are unlikely to be recognized correctly. In this paper, a segmentation scheme for segmenting handwritten Kannada scripts into lines, words and characters using morphological operations and projection profiles is proposed. The method was tested on totally unconstrained handwritten Kannada scripts, which pays more challenge and difficulty due to the complexity involved in the script. Usage of the morphology made extracting text lines efficient by an average extraction rate of 94.5% .Because of the varying inter and intra word gaps an average segmentation rate of 82.35% and 73.08% for words and characters respectively is obtained.",
"title": ""
},
{
"docid": "aa16ca139a7648f7d9bb3ff81aaf0bbc",
"text": "Atherosclerosis has an important inflammatory component and acute cardiovascular events can be initiated by inflammatory processes occurring in advanced plaques. Fatty acids influence inflammation through a variety of mechanisms; many of these are mediated by, or associated with, the fatty acid composition of cell membranes. Human inflammatory cells are typically rich in the n-6 fatty acid arachidonic acid, but the contents of arachidonic acid and of the marine n-3 fatty acids eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA) can be altered through oral administration of EPA and DHA. Eicosanoids produced from arachidonic acid have roles in inflammation. EPA also gives rise to eicosanoids and these are usually biologically weak. EPA and DHA give rise to resolvins which are anti-inflammatory and inflammation resolving. EPA and DHA also affect production of peptide mediators of inflammation (adhesion molecules, cytokines, etc.). Thus, the fatty acid composition of human inflammatory cells influences their function; the contents of arachidonic acid, EPA and DHA appear to be especially important. The anti-inflammatory effects of marine n-3 polyunsaturated fatty acids (PUFAs) may contribute to their protective actions towards atherosclerosis and plaque rupture.",
"title": ""
},
{
"docid": "e52a2c807612cb383076f2fae508c6cc",
"text": "We present a new corpus for computational stylometry, more specifically authorship attribution and the prediction of author personality from text. Because of the large number of authors (145), the corpus will allow previously impossible studies of variation in features considered predictive for writing style. The innovative meta-information (personality profiles of the authors) associated with these texts allows the study of personality prediction, a not yet very well researched aspect of style. In this paper, we describe the contents of the corpus and show its use in both authorship attribution and personality prediction. We focus on features that have been proven useful in the field of author recognition. Syntactic features like part-of-speech n-grams are generally accepted as not being under the author’s conscious control and therefore providing good clues for predicting gender or authorship. We want to test whether these features are helpful for personality prediction and authorship attribution on a large set of authors. Both tasks are approached as text categorization tasks. First a document representation is constructed based on feature selection from the linguistically analyzed corpus (using the Memory-Based Shallow Parser (MBSP)). These are associated with each of the 145 authors or each of the four components of the Myers-Briggs Type Indicator (Introverted-Extraverted, Sensing-iNtuitive, Thinking-Feeling, JudgingPerceiving). Authorship attribution on 145 authors achieves results around 50% accuracy. Preliminary results indicate that the first two personality dimensions can be predicted fairly accurately.",
"title": ""
}
] |
scidocsrr
|
dc5bd7573ed88fb04e789500a36c6898
|
TOWARDS NATURAL SPOKEN INTERACTION WITH ARTIFICIAL INTELLIGENT SYSTEMS
|
[
{
"docid": "0bcef553c7a9593f6356658125d3082b",
"text": "Belief tracking is a core component of modern spoken dialogue system pipelines. However, most current approaches would have difficulty scaling to larger, more complex dialogue domains. This is due to their dependency on either: a) Spoken Language Understanding models that require large amounts of annotated training data; or b) hand-crafted semantic lexicons that capture the lexical variation in users’ language. We propose a novel Neural Belief Tracking (NBT) framework which aims to overcome these problems by building on recent advances in semantic representation learning. The NBT models reason over continuous distributed representations of words, utterances and dialogue context. Our evaluation on two datasets shows that this approach overcomes both limitations, matching the performance of state-of-the-art models that have greater resource requirements.",
"title": ""
},
{
"docid": "43e9fbaedf062a67be3c51b99889a6fb",
"text": "A partially observable Markov decision process has been proposed as a dialogue model that enables robustness to speech recognition errors and automatic policy optimisation using reinforcement learning (RL). However, conventional RL algorithms require a very large number of dialogues, necessitating a user simulator. Recently, Gaussian processes have been shown to substantially speed up the optimisation, making it possible to learn directly from interaction with human users. However, early studies have been limited to very low dimensional spaces and the learning has exhibited convergence problems. Here we investigate learning from human interaction using the Bayesian Update of Dialogue State system. This dynamic Bayesian network based system has an optimisation space covering more than one hundred features, allowing a wide range of behaviours to be learned. Using an improved policy model and a more robust reward function, we show that stable learning can be achieved that significantly outperforms a simulator trained policy.",
"title": ""
}
] |
[
{
"docid": "1389323613225897330d250e9349867b",
"text": "Description: The field of data mining lies at the confluence of predictive analytics, statistical analysis, and business intelligence. Due to the ever–increasing complexity and size of data sets and the wide range of applications in computer science, business, and health care, the process of discovering knowledge in data is more relevant than ever before. This book provides the tools needed to thrive in today s big data world. The author demonstrates how to leverage a company s existing databases to increase profits and market share, and carefully explains the most current data science methods and techniques. The reader will learn data mining by doing data mining . By adding chapters on data modelling preparation, imputation of missing data, and multivariate statistical analysis, Discovering Knowledge in Data, Second Edition remains the eminent reference on data mining .",
"title": ""
},
{
"docid": "c1ccbb8e8a9fa8a3291e9b8a2f8ee8aa",
"text": "Chronic stress is one of the predominant environmental risk factors for a number of psychiatric disorders, particularly for major depression. Different hypotheses have been formulated to address the interaction between early and adult chronic stress in psychiatric disease vulnerability. The match/mismatch hypothesis of psychiatric disease states that the early life environment shapes coping strategies in a manner that enables individuals to optimally face similar environments later in life. We tested this hypothesis in female Balb/c mice that underwent either stress or enrichment early in life and were in adulthood further subdivided in single or group housed, in order to provide aversive or positive adult environments, respectively. We studied the effects of the environmental manipulation on anxiety-like, depressive-like and sociability behaviors and gene expression profiles. We show that continuous exposure to adverse environments (matched condition) is not necessarily resulting in an opposite phenotype compared to a continuous supportive environment (matched condition). Rather, animals with mismatched environmental conditions behaved differently from animals with matched environments on anxious, social and depressive like phenotypes. These results further support the match/mismatch hypothesis and illustrate how mild or moderate aversive conditions during development can shape an individual to be optimally adapted to similar conditions later in life.",
"title": ""
},
{
"docid": "c4816cafb042e6d96caee6af90583422",
"text": "Software Defined Networking (SDN) is an emerging network control paradigm focused on logical centralization and programmability. At the same time, distributed routing protocols, most notably OSPF and IS-IS, are still prevalent in IP networks, as they provide shortest path routing, fast topological convergence after network failures, and, perhaps most importantly, the confidence based on decades of reliable operation. Therefore, a hybrid SDN/OSPF operation remains a desirable proposition. In this paper, we propose a new method of hybrid SDN/OSPF operation. Our method is different from other hybrid approaches, as it uses SDN nodes to partition an OSPF domain into sub-domains thereby achieving the traffic engineering capabilities comparable to full SDN operation. We place SDN-enabled routers as subdomain border nodes, while the operation of the OSPF protocol continues unaffected. In this way, the SDN controller can tune routing protocol updates for traffic engineering purposes before they are flooded into sub-domains. While local routing inside sub-domains remains stable at all times, inter-sub-domain routes can be optimized by determining the routes in each traversed sub-domain. As the majority of traffic in non-trivial topologies has to traverse multiple subdomains, our simulation results confirm that a few SDN nodes allow traffic engineering up to a degree that renders full SDN deployment unnecessary.",
"title": ""
},
{
"docid": "c4282486dad6f0fef06964bd3fa45272",
"text": "In recent years, deep neural models have been widely adopted for text matching tasks, such as question answering and information retrieval, showing improved performance as compared with previous methods. In this paper, we introduce the MatchZoo toolkit that aims to facilitate the designing, comparing and sharing of deep text matching models. Specically, the toolkit provides a unied data preparation module for dierent text matching problems, a exible layer-based model construction process, and a variety of training objectives and evaluation metrics. In addition, the toolkit has implemented two schools of representative deep text matching models, namely representation-focused models and interactionfocused models. Finally, users can easily modify existing models, create and share their own models for text matching in MatchZoo.",
"title": ""
},
{
"docid": "3ee3cf039b1bc03d6b6e504ae87fc62f",
"text": "Objective: This paper tackles the problem of transfer learning in the context of electroencephalogram (EEG)-based brain–computer interface (BCI) classification. In particular, the problems of cross-session and cross-subject classification are considered. These problems concern the ability to use data from previous sessions or from a database of past users to calibrate and initialize the classifier, allowing a calibration-less BCI mode of operation. Methods: Data are represented using spatial covariance matrices of the EEG signals, exploiting the recent successful techniques based on the Riemannian geometry of the manifold of symmetric positive definite (SPD) matrices. Cross-session and cross-subject classification can be difficult, due to the many changes intervening between sessions and between subjects, including physiological, environmental, as well as instrumental changes. Here, we propose to affine transform the covariance matrices of every session/subject in order to center them with respect to a reference covariance matrix, making data from different sessions/subjects comparable. Then, classification is performed both using a standard minimum distance to mean classifier, and through a probabilistic classifier recently developed in the literature, based on a density function (mixture of Riemannian Gaussian distributions) defined on the SPD manifold. Results: The improvements in terms of classification performances achieved by introducing the affine transformation are documented with the analysis of two BCI datasets. Conclusion and significance: Hence, we make, through the affine transformation proposed, data from different sessions and subject comparable, providing a significant improvement in the BCI transfer learning problem.",
"title": ""
},
{
"docid": "263c7309eb803c91ab15af5708cf039c",
"text": "In wave optics, the Wigner distribution and its Fourier dual, the ambiguity function, are important tools in optical system simulation and analysis. The light field fulfills a similar role in the computer graphics community. In this paper, we establish that the light field as it is used in computer graphics is equivalent to a smoothed Wigner distribution and that these are equivalent to the raw Wigner distribution under a geometric optics approximation. Using this insight, we then explore two recent contributions: Fourier slice photography in computer graphics and wavefront coding in optics, and we examine the similarity between explanations of them using Wigner distributions and explanations of them using light fields. Understanding this long-suspected equivalence may lead to additional insights and the productive exchange of ideas between the two fields.",
"title": ""
},
{
"docid": "3c07ced46da30f8128543be21e43891e",
"text": "We propose a new subspace clustering method that integrates feature selection into subspace clustering. Rather than using all features to construct a low-rank representation of the data, we find such a representation using only relevant features, which helps in revealing more accurate data relationships. Two variants are proposed by using both convex and nonconvex rank approximations. Extensive experimental results confirm the effectiveness of the proposed method and models.",
"title": ""
},
{
"docid": "e95d41b322dccf7f791ed88a9f2ccced",
"text": "Most of the recent literature on Sentiment Analysis over Twitter is tied to the idea that the sentiment is a function of an incoming tweet. However, tweets are filtered through streams of posts, so that a wider context, e.g. a topic, is always available. In this work, the contribution of this contextual information is investigated. We modeled the polarity detection problem as a sequential classification task over streams of tweets. A Markovian formulation of the Support Vector Machine discriminative model as embodied by the SVMhmm algorithm has been here employed to assign the sentiment polarity to entire sequences. The experimental evaluation proves that sequential tagging effectively embodies evidence about the contexts and is able to reach a relative increment in detection accuracy of around 20% in F1 measure. These results are particularly interesting as the approach is flexible and does not require manually coded resources.",
"title": ""
},
{
"docid": "173f5497089e86c29075df964891ca13",
"text": "Artificial neural networks have been successfully applied to a variety of business application problems involving classification and regression. Although backpropagation neural networks generally predict better than decision trees do for pattern classification problems, they are often regarded as black boxes, i.e., their predictions are not as interpretable as those of decision trees. In many applications, it is desirable to extract knowledge from trained neural networks so that the users can gain a better understanding of the solution. This paper presents an efficient algorithm to extract rules from artificial neural networks. We use two-phase training algorithm for backpropagation learning. In the first phase, the number of hidden nodes of the network is determined automatically in a constructive fashion by adding nodes one after another based on the performance of the network on training data. In the second phase, the number of relevant input units of the network is determined using pruning algorithm. The pruning process attempts to eliminate as many connections as possible from the network. Relevant and irrelevant attributes of the data are distinguished during the training process. Those that are relevant will be kept and others will be automatically discarded. From the simplified networks having small number of connections and nodes we may easily able to extract symbolic rules using the proposed algorithm. Extensive experimental results on several benchmarks problems in neural networks demonstrate the effectiveness of the proposed approach with good generalization ability.",
"title": ""
},
{
"docid": "dd40063dd10027f827a65976261c8683",
"text": "Many software process methods and tools presuppose the existence of a formal model of a process. Unfortunately, developing a formal model for an on-going, complex process can be difficult, costly, and error prone. This presents a practical barrier to the adoption of process technologies, which would be lowered by automated assistance in creating formal models. To this end, we have developed a data analysis technique that we term process discovery. Under this technique, data describing process events are first captured from an on-going process and then used to generate a formal model of the behavior of that process. In this article we describe a Markov method that we developed specifically for process discovery, as well as describe two additional methods that we adopted from other domains and augmented for our purposes. The three methods range from the purely algorithmic to the purely statistical. We compare the methods and discuss their application in an industrial case study.",
"title": ""
},
{
"docid": "de05e649c6e77278b69665df3583d3d8",
"text": "This context-aware emotion-based model can help design intelligent agents for group decision making processes. Experiments show that agents with emotional awareness reach agreement more quickly than those without it.",
"title": ""
},
{
"docid": "d469d31d26d8bc07b9d8dfa8ce277e47",
"text": "BACKGROUND/PURPOSE\nMorbidity in children treated with appendicitis results either from late diagnosis or negative appendectomy. A Prospective analysis of efficacy of Pediatric Appendicitis Score for early diagnosis of appendicitis in children was conducted.\n\n\nMETHODS\nIn the last 5 years, 1,170 children aged 4 to 15 years with abdominal pain suggestive of acute appendicitis were evaluated prospectively. Group 1 (734) were patients with appendicitis and group 2 (436) nonappendicitis. Multiple linear logistic regression analysis of all clinical and investigative parameters was performed for a model comprising 8 variables to form a diagnostic score.\n\n\nRESULTS\nLogistic regression analysis yielded a model comprising 8 variables, all statistically significant, P <.001. These variables in order of their diagnostic index were (1) cough/percussion/hopping tenderness in the right lower quadrant of the abdomen (0.96), (2) anorexia (0.88), (3) pyrexia (0.87), (4) nausea/emesis (0.86), (5) tenderness over the right iliac fossa (0.84), (6) leukocytosis (0.81), (7) polymorphonuclear neutrophilia (0.80) and (8) migration of pain (0.80). Each of these variables was assigned a score of 1, except for physical signs (1 and 5), which were scored 2 to obtain a total of 10. The Pediatric Appendicitis Score had a sensitivity of 1, specificity of 0.92, positive predictive value of 0.96, and negative predictive value of 0.99.\n\n\nCONCLUSION\nPediatric appendicitis score is a simple, relatively accurate diagnostic tool for accessing an acute abdomen and diagnosing appendicitis in children.",
"title": ""
},
{
"docid": "148b3fa74867f67fa1a7196b3a10038a",
"text": "Sentiment analysis of customer reviews has a crucial impact on a business's development strategy. Despite the fact that a repository of reviews evolves over time, sentiment analysis often relies on offline solutions where training data is collected before the model is built. If we want to avoid retraining the entire model from time to time, incremental learning becomes the best alternative solution for this task. In this work, we present a variant of online random forests to perform sentiment analysis on customers' reviews. Our model is able to achieve accuracy similar to offline methods and comparable to other online models.",
"title": ""
},
{
"docid": "1303770cf8d0f1b0f312feb49281aa10",
"text": "A terahertz metamaterial absorber (MA) with properties of broadband width, polarization-insensitive, wide angle incidence is presented. Different from the previous methods to broaden the absorption width, this letter proposes a novel combinatorial way which units a nested structure with multiple metal-dielectric layers. We numerically investigate the proposed MA, and the simulation results show that the absorber achieves a broadband absorption over a frequency range of 0.896 THz with the absorptivity greater than 90%. Moreover, the full-width at half maximum of the absorber is up to 1.224 THz which is 61.2% with respect to the central frequency. The mechanism for the broadband absorption originates from the overlapping of longitudinal coupling between layers and coupling of the nested structure. Importantly, the nested structure makes a great contribution to broaden the absorption width. Thus, constructing a nested structure in a multi-layer absorber may be considered as an effective way to design broadband MAs.",
"title": ""
},
{
"docid": "fc551fee912bd82562f258ac15e8ec95",
"text": "In orthopedic surgery, it is important for physicians to completely understand the three-dimensional bone structure for several procedures. To achieve this goal, it is required to image the patient several times using C-arm scanner from different positions during the surgery. This procedure is time consuming and increase the x-ray dose given to both patient and physician. In this paper, we propose an augmented reality imaging system for minimally invasive orthopedic surgery. The system is based on mapping the x-ray image to the real object such that the number of x-ray shots during the surgery can be significantly reduced. We consider two imaging scenarios that can fit with different cases. Results obtained through clinical data indicate that the proposed approach has a potential usefulness in real applications.",
"title": ""
},
{
"docid": "ec1f47a6ca0edd2334fc416d29ce02ea",
"text": "We present Synereo, a next-gen decentralized and distributed social network designed for an attention economy. Our presentation is given in two chapters. Chapter 1 presents our design philosophy. Our goal is to make our users more effective agents by presenting social content that is relevant and actionable based on the user’s own estimation of value. We discuss the relationship between attention, value, and social agency in order to motivate the central mechanisms for content flow on the network. Chapter 2 defines a network model showing the mechanics of the network interactions, as well as the compensation model enabling users to promote content on the network and receive compensation for attention given to the network. We discuss the high-level technical implementation of these concepts based on the π-calculus the most well known of a family of computational formalisms known as the mobile process calculi. 0.1 Prologue: This is not a manifesto The Internet is overflowing with social network manifestos. Ello has a manifesto. Tsu has a manifesto. SocialSwarm has a manifesto. Even Disaspora had a manifesto. Each one of them is written in earnest with clear intent (see figure 1). Figure 1: Ello manifesto The proliferation of these manifestos and the social networks they advertise represents an important market shift, one that needs to be understood in context. The shift from mainstream media to social media was all about “user generated content”. In other words, people took control of the content by making it for and distributing it to each other. In some real sense it was a remarkable expansion of the shift from glamrock to punk and DIY; and like that movement, it was the sense of people having a say in what impressions they received that has been the underpinning of the success of Facebook and Twitter and YouTube and the other social media giants. In the wake of that shift, though, we’ve seen that even when the people are producing the content, if the service is in somebody else’s hands then things still go wonky: the service providers run psychology experiments via the social feeds [1]; they sell people’s personally identifiable and other critical info [2]; and they give data to spooks [3]. Most importantly, they do this without any real consent of their users. With this new wave of services people are expressing a desire to take more control of the service, itself. When the service is distributed, as is the case with Splicious and Diaspora, it is truly cooperative. And, just as with the music industry, where the technology has reached the point that just about anybody can have a professional studio in their home, the same is true with media services. People are recognizing that we don’t need big data centers with massive environmental impact, we need engagement at the level of the service, itself. If this really is the underlying requirement the market is articulating, then there is something missing from a social network that primarily serves up a manifesto with their service. While each of the networks mentioned above constitutes an important step in the right direction, they lack any clear indication",
"title": ""
},
{
"docid": "3155879d5264ad723de6051075d47ee2",
"text": "We have shown that there is a difference between individuals in their tendency to deposit DNA on an item when it is touched. While a good DNA shedder may leave behind a full DNA profile immediately after hand washing, poor DNA shedders may only do so when their hands have not been washed for a period of 6h. We have also demonstrated that transfer of DNA from one individual (A) to another (B) and subsequently to an object is possible under specific laboratory conditions using the AMPFISTR SGM Plus multiplex at both 28 and 34 PCR cycles. This is a form of secondary transfer. If a 30 min or 1h delay was introduced before contact of individual B with the object then at 34 cycles a mixture of profiles from both individuals was recovered. We have also determined that the quantity and quality of DNA profiles recovered is dependent upon the particular individuals involved in the transfer process. The findings reported here are preliminary and further investigations are underway in order to further add to understanding of the issues of DNA transfer and persistence.",
"title": ""
},
{
"docid": "27d65b98233322f099fccc61838ce4ae",
"text": "This article defines universal design for learning (UDL) and presents examples of how universally designed technology hardware and software applications benefit students with disabilities who are majoring in science, technology, engineering, or mathematics (STEM) majors. When digital technologies are developed without incorporating accessible design features, persons with disabilities cannot access required information to interact with the information society. However, when accessible technology and instruction are provided using UDL principles, research indicates that many students benefit with increased achievement. Learning through universally designed and accessible technology is essential for students with disabilities who, without access, would not gain the skills needed to complete their degrees and access employment and a life of self-sufficiency. UDL strategies enhance learning for all students, including students with disabilities who are majoring in STEM, which are among the most rigorous academic disciplines, but also among the most financially rewarding careers.",
"title": ""
},
{
"docid": "2410a4b40b833d1729fac37020ec13be",
"text": "Understanding how ecological conditions influence physiological responses is fundamental to forensic entomology. When determining the minimum postmortem interval with blow fly evidence in forensic investigations, using a reliable and accurate model of development is integral. Many published studies vary in results, source populations, and experimental designs. Accordingly, disentangling genetic causes of developmental variation from environmental causes is difficult. This study determined the minimum time of development and pupal sizes of three populations of Lucilia sericata Meigen (Diptera: Calliphoridae; from California, Michigan, and West Virginia) at two temperatures (20 degrees C and 33.5 degrees C). Development times differed significantly between strain and temperature. In addition, California pupae were the largest and fastest developing at 20 degrees C, but at 33.5 degrees C, though they still maintained their rank in size among the three populations, they were the slowest to develop. These results indicate a need to account for genetic differences in development, and genetic variation in environmental responses, when estimating a postmortem interval with entomological data.",
"title": ""
},
{
"docid": "40229eb3a95ec25c1c3247edbcc22540",
"text": "The aim of this paper is the identification of a superordinate research framework for describing emerging IT-infrastructures within manufacturing, logistics and Supply Chain Management. This is in line with the thoughts and concepts of the Internet of Things (IoT), as well as with accompanying developments, namely the Internet of Services (IoS), Mobile Computing (MC), Big Data Analytics (BD) and Digital Social Networks (DSN). Furthermore, Cyber-Physical Systems (CPS) and their enabling technologies as a fundamental component of all these research streams receive particular attention. Besides of the development of an eponymous research framework, relevant applications against the background of the technological trends as well as potential areas of interest for future research, both raised from the economic practice's perspective, are identified.",
"title": ""
}
] |
scidocsrr
|
0ed6ee20795f5c41be00a693571b70c4
|
Metadata Extraction from PDF Papers for Digital Library Ingest
|
[
{
"docid": "3afa057464635a4d78d46461562390ea",
"text": "Digital librarians strive to add value to the collections they create and maintain. One way is through selectivity: a carefully chosen set of authoritative documents in a particular topic area is far more useful to those working in the area than a huge, unfocused collection (like the Web). Another is by augmenting the collection with highquality metadata, which supports activities of searching and browsing in a uniform and useful way. A third way, and our topic here, is to enrich the documents by examining their content, extracting information, and using it to enhance the ways they can be located and presented. Text mining is a burgeoning new field that attempts to glean meaningful information from natural-language text. It may be loosely characterized as the process of analyzing text to extract information that is useful for particular purposes. It most commonly targets text whose function is the communication of factual information or opinions, and the motivation for trying to extract information from such text automatically is compelling – even if success is only partial. “Text mining” (sometimes called “text data mining”; [4]) defies tight definition but encompasses a wide range of activities: text summarization; document retrieval; document clustering; text categorization; language identification; authorship ascription; identifying phrases, phrase structures, and key phrases; extracting “entities” such as names, dates, and abbreviations; locating acronyms and their definitions; filling predefined templates with extracted information; and even learning rules from such templates [8]. Techniques of text mining have much to offer digital libraries and their users. Here we describe the marriage of a widely used digital library system (Greenstone) with a development environment for text mining (GATE) to enrich the library reader’s experience. The work is in progress: one level of integration has been demonstrated and another is planned. The project has been greatly facilitated by the fact that both systems are publicly available under the GNU public license – and, in addition, this means that the benefits gained by leveraging text mining techniques will accrue to all Greenstone users.",
"title": ""
}
] |
[
{
"docid": "d8ebc5a68f8e3e7db1abc6a0e7b37da2",
"text": "Previous research shows that interleaving rather than blocking practice of different skills (e.g. abcbcacab instead of aaabbbccc) usually improves subsequent test performance. Yet interleaving, but not blocking, ensures that practice of any particular skill is distributed, or spaced, because any two opportunities to practice the same task are not consecutive. Hence, because spaced practice typically improves test performance, the previously observed test benefits of interleaving may be due to spacing rather than interleaving per se. In the experiment reported herein, children practiced four kinds of mathematics problems in an order that was interleaved or blocked, and the degree of spacing was fixed. The interleaving of practice impaired practice session performance yet doubled scores on a test given one day later. An analysis of the errors suggested that interleaving boosted test scores by improving participants’ ability to pair each problem with the appropriate procedure. Copyright # 2009 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "4eb9808144e04bf0c01121f2ec7261d2",
"text": "The rise of multicore computing has greatly increased system complexity and created an additional burden for software developers. This burden is especially troublesome when it comes to optimizing software on modern computing systems. Autonomic or adaptive computing has been proposed as one method to help application programmers handle this complexity. In an autonomic computing environment, system services monitor applications and automatically adapt their behavior to increase the performance of the applications they support. Unfortunately, applications often run as performance black-boxes and adaptive services must infer application performance from low-level information or rely on system-specific ad hoc methods. This paper proposes a standard framework, Application Heartbeats, which applications can use to communicate both their current and target performance and which autonomic services can use to query these values.\n The Application Heartbeats framework is designed around the well-known idea of a heartbeat. At important points in the program, the application registers a heartbeat. In addition, the interface allows applications to express their performance in terms of a desired heart rate and/or a desired latency between specially tagged heartbeats. Thus, the interface provides a standard method for an application to directly communicate its performance and goals while allowing autonomic services access to this information. Thus, Heartbeat-enabled applications are no longer performance black-boxes. This paper presents the Applications Heartbeats interface, characterizes two reference implementations (one suitable for clusters and one for multicore), and illustrates the use of Heartbeats with several examples of systems adapting behavior based on feedback from heartbeats.",
"title": ""
},
{
"docid": "31a2e6948a816a053d62e3748134cdc2",
"text": "In model-based reinforcement learning, generative and temporal models of environments can be leveraged to boost agent performance, either by tuning the agent’s representations during training or via use as part of an explicit planning mechanism. However, their application in practice has been limited to simplistic environments, due to the difficulty of training such models in larger, potentially partially-observed and 3D environments. In this work we introduce a novel action-conditioned generative model of such challenging environments. The model features a non-parametric spatial memory system in which we store learned, disentangled representations of the environment. Low-dimensional spatial updates are computed using a state-space model that makes use of knowledge on the prior dynamics of the moving agent, and high-dimensional visual observations are modelled with a Variational Auto-Encoder. The result is a scalable architecture capable of performing coherent predictions over hundreds of time steps across a range of partially observed 2D and 3D environments.",
"title": ""
},
{
"docid": "79cdd24d14816f45b539f31606a3d5ee",
"text": "The huge increase in type 2 diabetes is a burden worldwide. Many marketed compounds do not address relevant aspects of the disease; they may already compensate for defects in insulin secretion and insulin action, but loss of secreting cells (β-cell destruction), hyperglucagonemia, gastric emptying, enzyme activation/inhibition in insulin-sensitive cells, substitution or antagonizing of physiological hormones and pathways, finally leading to secondary complications of diabetes, are not sufficiently addressed. In addition, side effects for established therapies such as hypoglycemias and weight gain have to be diminished. At present, nearly 1000 compounds have been described, and approximately 180 of these are going to be developed (already in clinical studies), some of them directly influencing enzyme activity, influencing pathophysiological pathways, and some using G-protein-coupled receptors. In addition, immunological approaches and antisense strategies are going to be developed. Many compounds are derived from physiological compounds (hormones) aiming at improving their kinetics and selectivity, and others are chemical compounds that were obtained by screening for a newly identified target in the physiological or pathophysiological machinery. In some areas, great progress is observed (e.g., incretin area); in others, no great progress is obvious (e.g., glucokinase activators), and other areas are not recommended for further research. For all scientific areas, conclusions with respect to their impact on diabetes are given. Potential targets for which no chemical compound has yet been identified as a ligand (agonist or antagonist) are also described.",
"title": ""
},
{
"docid": "62fea7d8dcdb999ec87c607c47e2d015",
"text": "The role of workplace supervisors in the clinical education of medical students is currently under debate. However, few studies have addressed how supervisors conceptualize workplace learning and how conceptions relate to current sociocultural workplace learning theory. We explored physician conceptions of: (a) medical student learning in the clinical workplace and (b) how they contribute to student learning. The methodology included a combination of a qualitative, inductive (conventional) and deductive (directed) content analysis approach. The study triangulated two types of interview data from 4 focus group interviews and 34 individual interviews. A total of 55 physicians participated. Three overarching themes emerged from the data: learning as membership, learning as partnership and learning as ownership. The themes described how physician conceptions of learning and supervision were guided by the notions of learning-as-participation and learning-as-acquisition. The clinical workplace was either conceptualized as a context in which student learning is based on a learning curriculum, continuity of participation and partnerships with supervisors, or as a temporary source of knowledge within a teaching curriculum. The process of learning was shaped through the reciprocity between different factors in the workplace context and the agency of students and supervising physicians. A systems-thinking approach merged with the \"co-participation\" conceptual framework advocated by Billet proved to be useful for analyzing variations in conceptions. The findings suggest that mapping workplace supervisor conceptions of learning can be a valuable starting point for medical schools and educational developers working with changes in clinical educational and faculty development practices.",
"title": ""
},
{
"docid": "6498624b945f4a0a218c2c047641296e",
"text": "The electronic health record (EHR) contains a large amount of multi-dimensional and unstructured clinical data of significant operational and research value. Distinguished from previous studies, our approach embraces a double-annotated dataset and strays away from obscure “black-box” models to comprehensive deep learning models. In this paper, we present a novel neural attention mechanism that not only classifies clinically important findings. Specifically, convolutional neural networks (CNN) with attention analysis are used to classify radiology head computed tomography reports based on five categories that radiologists would account for in assessing acute and communicable findings in daily practice. The experiments show that our CNN attention models outperform non-neural models, especially when trained on a larger dataset. Our attention analysis demonstrates the intuition behind the classifier's decision by generating a heatmap that highlights attended terms used by the CNN model; this is valuable when potential downstream medical decisions are to be performed by human experts or the classifier information is to be used in cohort construction such as for epidemiological studies.",
"title": ""
},
{
"docid": "4a80d4ecb00fd27b29f342794213fc41",
"text": "Rapid and accurate analysis of platelet count plays an important role in evaluating hemorrhagic status. Therefore, we evaluated platelet counting performance of a hematology analyzer, Celltac F (MEK-8222, Nihon Kohden Corporation, Tokyo, Japan), that features easy use with low reagent consumption and high throughput while occupying minimal space in the clinical laboratory. All blood samples were anticoagulated with dipotassium ethylenediaminetetraacetic acid (EDTA-2K). The samples were stored at room temperature (18(;)C-22(;)C) and tested within 4 hours of phlebotomy. We evaluated the counting ability of the Celltac F hematology analyzer by comparing it with the platelet counts obtained by the flow cytometry method that ISLH and ICSH recommended, and also the manual visual method by Unopette (Becton Dickinson Vacutainer Systems). The ICSH/ISLH reference method is based on the fact that platelets can be stained with monoclonal antibodies to CD41 and/or CD61. The dilution ratio was optimized after the precision, coincidence events, and debris counts were confirmed by the reference method. Good correlation of platelet count between the Celltac F and the ICSH/ISLH reference method (r = 0.99, and the manual visual method (r= 0.93) were obtained. The regressions were y = 0.90 x+9.0 and y=1.11x+8.4, respectively. We conclude that the Celltac F hematology analyzer for platelet counting was well suited to the ICSH/ISLH reference method for rapidness and reliability.",
"title": ""
},
{
"docid": "8397bdb99c650ea07feeb3301698dd79",
"text": "This section gives a short survey of the principles and the terminology of phased array radar. Beamforming, radar detection and parameter estimation are described. The concept of subarrays and monopulse estimation with arbitrary subarrays is developed. As a preparation to adaptive beam forming, which is treated in several other sections, the topic of pattern shaping by deterministic weighting is presented in more detail. 1.0 INTRODUCTION Arrays are today used for many applications and the view and terminology is quite different. We give here an introduction to the specific features of radar phased array antennas and the associated signal processing. First the radar principle and the terminology is explained. Beamforming with a large number of array elements is the typical radar feature and the problems with such antennas are in other applications not known. We discuss therefore the special problems of fully filled arrays, large apertures and bandwidth. To reduce cost and space the antenna outputs are usually summed up into subarrays. Digital processing is done only with the subarray outputs. The problems of such partial analogue and digital beamforming, in particular the grating problems are discussed. This topic will be reconsidered for adaptive beamforming, space-time adaptive processing (STAP), and SAR. Radar detection, range and direction estimation is derived from statistical hypotheses testing and parameter estimation theory. The main application of this theory is the derivation of adaptive beamforming to be considered in the following lectures. In this lecture we present as an application the derivation of the monopulse estimator which is in the following lectures extended to monopulse estimators for adaptive arrays or STAP. As beamforming plays a central role in phased arrays and as a preparation to all kinds of adaptive beamforming, a detailed presentation of deterministic antenna pattern shaping and the associated channel accuracy requirements is given. 2.0 FUNDAMENTALS OF RADAR AND ARRAYS 2.1 Nomenclature The radar principle is sketched in Figure 1. A pulse of length τ is transmitted, is reflected at the target and is received again at time t0 at the radar. From this signal travelling time the range is calculated R0= ct0 /2. The process is repeated at the pulse repetition interval (PRI) T. The maximum unambiguous range is Nickel, U. (2006) Fundamentals of Signal Processing for Phased Array Radar. In Advanced Radar Signal and Data Processing (pp. 1-1 – 1-22). Educational Notes RTO-EN-SET-086, Paper 1. Neuilly-sur-Seine, France: RTO. Available from: http://www.rto.nato.int/abstracts.asp. RTO-EN-SET-086 1 1 Report Documentation Page Form Approved OMB No. 0704-0188 Public reporting burden for the collection of information is estimated to average 1 hour per response, including the time for reviewing instructions, searching existing data sources, gathering and maintaining the data needed, and completing and reviewing the collection of information. Send comments regarding this burden estimate or any other aspect of this collection of information, including suggestions for reducing this burden, to Washington Headquarters Services, Directorate for Information Operations and Reports, 1215 Jefferson Davis Highway, Suite 1204, Arlington VA 22202-4302. Respondents should be aware that notwithstanding any other provision of law, no person shall be subject to a penalty for failing to comply with a collection of information if it does not display a currently valid OMB control number. 
1. REPORT DATE 01 SEP 2006 2. REPORT TYPE N/A 3. DATES COVERED 4. TITLE AND SUBTITLE Fundamentals of Signal Processing for Phased Array Radar 5a. CONTRACT NUMBER",
"title": ""
},
{
"docid": "faec1a6b42cfdd303309c69c4185c9fe",
"text": "The currency which is imitated with illegal sanction of state and government is counterfeit currency. Every country incorporates a number of security features for its currency security. Currency counterfeiting is always been a challenging term for financial system of any country. The problem of counterfeiting majorly affects the economical as well as financial growth of a country. In view of the problem various studies about counterfeit detection has been conducted using various techniques and variety of tools. This paper focuses on the researches and studies that have been conducted by various researchers. The paper highlighted the methodologies used and the particular characteristics features considered for counterfeit money detection.",
"title": ""
},
{
"docid": "a5e01cfeb798d091dd3f2af1a738885b",
"text": "It is shown by an extensive benchmark on molecular energy data that the mathematical form of the damping function in DFT-D methods has only a minor impact on the quality of the results. For 12 different functionals, a standard \"zero-damping\" formula and rational damping to finite values for small interatomic distances according to Becke and Johnson (BJ-damping) has been tested. The same (DFT-D3) scheme for the computation of the dispersion coefficients is used. The BJ-damping requires one fit parameter more for each functional (three instead of two) but has the advantage of avoiding repulsive interatomic forces at shorter distances. With BJ-damping better results for nonbonded distances and more clear effects of intramolecular dispersion in four representative molecular structures are found. For the noncovalently-bonded structures in the S22 set, both schemes lead to very similar intermolecular distances. For noncovalent interaction energies BJ-damping performs slightly better but both variants can be recommended in general. The exception to this is Hartree-Fock that can be recommended only in the BJ-variant and which is then close to the accuracy of corrected GGAs for non-covalent interactions. According to the thermodynamic benchmarks BJ-damping is more accurate especially for medium-range electron correlation problems and only small and practically insignificant double-counting effects are observed. It seems to provide a physically correct short-range behavior of correlation/dispersion even with unmodified standard functionals. In any case, the differences between the two methods are much smaller than the overall dispersion effect and often also smaller than the influence of the underlying density functional.",
"title": ""
},
{
"docid": "48aa68862748ab502f3942300b4d8e1e",
"text": "While data volumes continue to rise, the capacity of human attention remains limited. As a result, users need analytics engines that can assist in prioritizing attention in this fast data that is too large for manual inspection. We present a set of design principles for the design of fast data analytics engines that leverage the relative scarcity of human attention and overabundance of data: return fewer results, prioritize iterative analysis, and filter fast to compute less. We report on our early experiences employing these principles in the design and deployment of MacroBase, an open source analysis engine for prioritizing attention in fast data. By combining streaming operators for feature transformation, classification, and data summarization, MacroBase provides users with interpretable explanations of key behaviors, acting as a search engine for fast data.",
"title": ""
},
{
"docid": "df05a09412e48520d58da7e50001fed9",
"text": "Ultrabroadband and wide-angle antireflection coatings (ARCs) are essential to realizing efficiency gains for state-of-the-art multijunction photovoltaic devices. In this study, we examine a novel design that integrates a nanostructured antireflection layer with a multilayer ARC. Using optical models, we find that this hybrid approach can reduce reflected AM1.5D power by 10-50 W/m2 over a wide angular range compared to conventional thin-film ARCs. A detailed balance model correlates this to an improvement in absolute cell efficiency of 1-2%. Three different ARC designs are fabricated on indium gallium phosphide, and reflectance is measured to show the benefit of this hybrid approach.",
"title": ""
},
{
"docid": "02988a99b7cb966ec5c63c51d1aa57d8",
"text": "Data embedding is used in many machine learning applications to create low-dimensional feature representations, which preserves the structure of data points in their original space. In this paper, we examine the scenario of a heterogeneous network with nodes and content of various types. Such networks are notoriously difficult to mine because of the bewildering combination of heterogeneous contents and structures. The creation of a multidimensional embedding of such data opens the door to the use of a wide variety of off-the-shelf mining techniques for multidimensional data. Despite the importance of this problem, limited efforts have been made on embedding a network of scalable, dynamic and heterogeneous data. In such cases, both the content and linkage structure provide important cues for creating a unified feature representation of the underlying network. In this paper, we design a deep embedding algorithm for networked data. A highly nonlinear multi-layered embedding function is used to capture the complex interactions between the heterogeneous data in a network. Our goal is to create a multi-resolution deep embedding function, that reflects both the local and global network structures, and makes the resulting embedding useful for a variety of data mining tasks. In particular, we demonstrate that the rich content and linkage information in a heterogeneous network can be captured by such an approach, so that similarities among cross-modal data can be measured directly in a common embedding space. Once this goal has been achieved, a wide variety of data mining problems can be solved by applying off-the-shelf algorithms designed for handling vector representations. Our experiments on real-world network datasets show the effectiveness and scalability of the proposed algorithm as compared to the state-of-the-art embedding methods.",
"title": ""
},
{
"docid": "5e75a46c36e663791db0f8b45f685cb6",
"text": "This study provides one of very few experimental investigations into the impact of a musical soundtrack on the video gaming experience. Participants were randomly assigned to one of three experimental conditions: game-with-music, game-without-music, or music-only. After playing each of three segments of The Lord of the Rings: The Two Towers (Electronic Arts, 2002)--or, in the music-only condition, listening to the musical score that accompanies the scene--subjects responded on 21 verbal scales. Results revealed that some, but not all, of the verbal scales exhibited a statistically significant difference due to the presence of a musical score. In addition, both gender and age level were shown to be significant factors for some, but not all, of the verbal scales. Details of the specific ways in which music affects the gaming experience are provided in the body of the paper.",
"title": ""
},
{
"docid": "34690f455f9e539b06006f30dd3e512b",
"text": "Disaster relief operations rely on the rapid deployment of wireless network architectures to provide emergency communications. Future emergency networks will consist typically of terrestrial, portable base stations and base stations on-board low altitude platforms (LAPs). The effectiveness of network deployment will depend on strategically chosen station positions. In this paper a method is presented for calculating the optimal proportion of the two station types and their optimal placement. Random scenarios and a real example from Hurricane Katrina are used for evaluation. The results confirm the strength of LAPs in terms of high bandwidth utilisation, achieved by their ability to cover wide areas, their portability and adaptability to height. When LAPs are utilized, the total required number of base stations to cover a desired area is generally lower. For large scale disasters in particular, this leads to shorter response times and the requirement of fewer resources. This goal can be achieved more easily if algorithms such as the one presented in this paper are used.",
"title": ""
},
{
"docid": "64d9fc4cd85aff665155b22ead0ad1a5",
"text": "This paper introduces capabilities developed for a battery-sensing intrusion protection system (B-SIPS) for mobile computers, which alerts when abnormal current changes are detected. The intrusion detection system's (IDS's) IEEE 802.15.1 (Bluetooth) and 802.11 (Wi-Fi) capabilities are enhanced with iterative safe process checking, wireless connection determination, and an automated intrusion protection disconnect ability. The correlation intrusion detection engine (CIDE) provides power profiling for mobile devices and a correlated view of B-SIPS and snort alerts. An examination of smart battery drain times was conducted to ascertain the optimal transmission rate for the B-SIPS client. A 10 second reporting rate was used to assess 9 device types, which were then compared with their corresponding baseline battery lifetime. Lastly, an extensive usability study was conducted to improve the B-SIPS client and CIDE features. The 31 expert participants provided feedback and data useful for validating the system's viability as a complementary IDS for mobile devices.",
"title": ""
},
{
"docid": "bfcd962b099e6e751125ac43646d76cc",
"text": "Dear Editor: We read carefully and with great interest the anatomic study performed by Lilyquist et al. They performed an interesting study of the tibiofibular syndesmosis using a 3-dimensional method that can be of help when performing anatomic studies. As the authors report in the study, a controversy exists regarding the anatomic structures of the syndesmosis, and a huge confusion can be observed when reading the related literature. However, anatomic confusion between the inferior transverse ligament and the intermalleolar ligament is present in the manuscript: the intermalleolar ligament is erroneously identified as the “inferior” transverse ligament. The transverse ligament is the name that receives the deep component of the posterior tibiofibular ligament. The posterior tibiofibular ligament is a ligament located in the posterior aspect of the ankle that joins the distal epiphysis of tibia and fibula; it is formed by 2 fascicles, one superficial and one deep. The deep fascicle or transverse ligament is difficult to see from a posterior ankle view, but easily from a plantar view of the tibiofibular syndesmosis (Figure 1). Instead, the intermalleolar ligament is a thickening of the posterior ankle joint capsule, located between the posterior talofibular ligament and the transverse ligament. It originates from the medial facet of the lateral malleolus and directs medially to tibia and talus (Figure 2). The intermalleolar ligament was observed in 100% of the specimens by Golanó et al in contrast with 70% in Lilyquist’s study. On the other hand, structures of the ankle syndesmosis have not been named according to the International Anatomical Terminology (IAT). In 1955, the VI Federative International Congress of Anatomy accorded to eliminate eponyms from the IAT. Because of this measure, the Chaput, Wagstaff, or Volkman tubercles used in the manuscript should be eliminated in order to avoid increasing confusion. Lilyquist et al also defined the tibiofibular syndesmosis as being formed by the anterior inferior tibiofibular ligament, the posterior inferior tibiofibular ligament, the interosseous ligament, and the inferior transverse ligament. The anterior inferior tibiofibular ligament and posterior inferior tibiofibular ligament of the tibiofibular syndesmosis (or inferior tibiofibular joint) should be referred to as the anterior tibiofibular ligament and posterior tibiofibular ligament. The reason why it is not necessary to use “inferior” in its description is that the ligaments of the superior tibiofibular joint are the anterior ligament of the fibular head and the posterior ligament of the fibular head, not the “anterior superior tibiofibular ligament” and “posterior superior tibiofibular ligament.” The ankle syndesmosis is one of the areas of the human body where chronic anatomic errors exist: the transverse ligament (deep component of the posterior tibiofibular ligament), the anterior tibiofibular ligament (“anterior 689614 FAIXXX10.1177/1071100716689614Foot & Ankle InternationalLetter to the Editor letter2017",
"title": ""
},
{
"docid": "e5f2101e7937c61a4d6b11d4525a7ed8",
"text": "This article reviews an emerging field that aims for autonomous reinforcement learning (RL) directly on sensor-observations. Straightforward end-to-end RL has recently shown remarkable success, but relies on large amounts of samples. As this is not feasible in robotics, we review two approaches to learn intermediate state representations from previous experiences: deep auto-encoders and slow-feature analysis. We analyze theoretical properties of the representations and point to potential improvements.",
"title": ""
},
{
"docid": "544f47b32803266905190226a1f76288",
"text": "1 Abstract Characterizing board test coverage as a percentage of devices or nodes having tests does not accurately portray coverage, especially in a limited access testing environment that today includes a variety of diverse testing approaches from visual and penetrative inspection to classical In-Circuit test. A better depiction of test coverage is achieved by developing a list of potential defects referred to as the defect universe, where the capabilities of the chosen test strategy are not considered in development of this defect list. Coverage is measured by grading the capabilities of each test process against the defect universe. The defect universe is defined to be meaningful to the bulk of the electronics industry and to provide a consistent framework for coverage metrics and comparisons.",
"title": ""
},
{
"docid": "8074ecf8bd73c4add9e01f0b84ed6e70",
"text": "This paper provides a survey on implementing wireless sensor network (WSN) technology on industrial process monitoring and control. First, the existing industrial applications are explored, following with a review of the advantages of adopting WSN technology for industrial control. Then, challenging factors influencing the design and acceptance of WSNs in the process control world are outlined, and the state-of-the-art research efforts and industrial solutions are provided corresponding to each factor. Further research issues for the realization and improvement of wireless sensor network technology on process industry are also mentioned.",
"title": ""
}
] |
scidocsrr
|
a487594df1c6dd64ec5c669033a2908f
|
Stock Prediction and Automated Trading System
|
[
{
"docid": "e37c560150a94947117d7c796af73469",
"text": "For many players in financial markets, the price impact of their trading activity represents a large proportion of their transaction costs. This paper proposes a novel machine learning method for predicting the price impact of order book events. Specifically, we introduce a prediction system based on performance weighted ensembles of random forests. The system's performance is benchmarked using ensembles of other popular regression algorithms including: liner regression, neural networks and support vector regression using depth-of-book data from the BATS Chi-X exchange. The results show that recency-weighted ensembles of random forests produce over 15% greater prediction accuracy on out-of-sample data, for 5 out of 6 timeframes studied, compared with all benchmarks.",
"title": ""
},
{
"docid": "a39b83010f5c4094bc7636fd550a71bd",
"text": "Trend following (TF) is trading philosophy by which buying/selling decisions are made solely according to the observed market trend. For many years, many manifestations of TF such as a software program called Turtle Trader, for example, emerged in the industry. Surprisingly little has been studied in academic research about its algorithms and applications. Unlike financial forecasting, TF does not predict any market movement; instead it identifies a trend at early time of the day, and trades automatically afterwards by a pre-defined strategy regardless of the moving market directions during run time. Trend following trading has been popular among speculators. However it remains as a trading method where human judgment is applied in setting the rules (aka the strategy) manually. Subsequently the TF strategy is executed in pure objective operational manner. Finding the correct strategy at the beginning is crucial in TF. This usually involves human intervention in first identifying a trend, and configuring when to place an order and close it out, when certain conditions are met. In this paper, we evaluated and compared a collection of TF algorithms that can be programmed in a computer system for automated trading. In particular, a new version of TF called trend recalling model is presented. It works by partially matching the current market trend with one of the proven successful patterns from the past. Our experiments based on real stock market data show that this method has an edge over the other trend following methods in profitability. The results show that TF however is still limited by market fluctuation (volatility), and the ability to identify trend signal. 2012 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "405dce1cbea2315c9d602f0fdaaf32af",
"text": "A single chip NFC transceiver supporting not only NFC active and passive mode but also 13.56 MHz RFID reader and tag mode was designed and fabricated. The proposed NFC transceiver can operate as a RFID tag even without external power supply thanks to a dual antenna structure for initiator and target. The area increment due to additional target antenna is negligible because the target antenna is constructed by using a shielding layer of initiator antenna.",
"title": ""
},
{
"docid": "af7c62cba99c426e6108d164939b44de",
"text": "The hippocampal formation can encode relative spatial location, without reference to external cues, by the integration of linear and angular self-motion (path integration). Theoretical studies, in conjunction with recent empirical discoveries, suggest that the medial entorhinal cortex (MEC) might perform some of the essential underlying computations by means of a unique, periodic synaptic matrix that could be self-organized in early development through a simple, symmetry-breaking operation. The scale at which space is represented increases systematically along the dorsoventral axis in both the hippocampus and the MEC, apparently because of systematic variation in the gain of a movement-speed signal. Convergence of spatially periodic input at multiple scales, from so-called grid cells in the entorhinal cortex, might result in non-periodic spatial firing patterns (place fields) in the hippocampus.",
"title": ""
},
{
"docid": "2a56702663e6e52a40052a5f9b79a243",
"text": "Many successful models for scene or object recognition transform low-level descriptors (such as Gabor filter responses, or SIFT descriptors) into richer representations of intermediate complexity. This process can often be broken down into two steps: (1) a coding step, which performs a pointwise transformation of the descriptors into a representation better adapted to the task, and (2) a pooling step, which summarizes the coded features over larger neighborhoods. Several combinations of coding and pooling schemes have been proposed in the literature. The goal of this paper is threefold. We seek to establish the relative importance of each step of mid-level feature extraction through a comprehensive cross evaluation of several types of coding modules (hard and soft vector quantization, sparse coding) and pooling schemes (by taking the average, or the maximum), which obtains state-of-the-art performance or better on several recognition benchmarks. We show how to improve the best performing coding scheme by learning a supervised discriminative dictionary for sparse coding. We provide theoretical and empirical insight into the remarkable performance of max pooling. By teasing apart components shared by modern mid-level feature extractors, our approach aims to facilitate the design of better recognition architectures.",
"title": ""
},
{
"docid": "0084038376a3aa8ae2fb1ce5e1569379",
"text": "Key to automatically generate natural scene images is to properly arrange amongst various spatial elements, especially in the depth cue. To this end, we introduce a novel depth structure preserving scene image generation network (DSP-GAN), which favors a hierarchical architecture, for the purpose of depth structure preserving scene image generation. The main trunk of the proposed infrastructure is built upon a Hawkes point process that models high-order spatial dependency between different depth layers. Within each layer generative adversarial sub-networks are trained collaboratively to generate realistic scene components, conditioned on the layer information produced by the point process. We experiment our model on annotated natural scene images collected from SUN dataset and demonstrate that our models are capable of generating depth-realistic natural scene image.",
"title": ""
},
{
"docid": "c2fffaf7705ec5d87ca6cfffb24b1371",
"text": "Francisella tularensis is a highly infectious bacterium whose virulence relies on its ability to rapidly reach the macrophage cytosol and extensively replicate in this compartment. We previously identified a novel Francisella virulence factor, DipA (FTT0369c), which is required for intramacrophage proliferation and survival, and virulence in mice. DipA is a 353 amino acid protein with a Sec-dependent signal peptide, four Sel1-like repeats (SLR), and a C-terminal coiled-coil (CC) domain. Here, we determined through biochemical and localization studies that DipA is a membrane-associated protein exposed on the surface of the prototypical F. tularensis subsp. tularensis strain SchuS4 during macrophage infection. Deletion and substitution mutagenesis showed that the CC domain, but not the SLR motifs, of DipA is required for surface exposure on SchuS4. Complementation of the dipA mutant with either DipA CC or SLR domain mutants did not restore intracellular growth of Francisella, indicating that proper localization and the SLR domains are required for DipA function. Co-immunoprecipitation studies revealed interactions with the Francisella outer membrane protein FopA, suggesting that DipA is part of a membrane-associated complex. Altogether, our findings indicate that DipA is positioned at the host-pathogen interface to influence the intracellular fate of this pathogen.",
"title": ""
},
{
"docid": "10a0f370ad3e9c3d652e397860114f90",
"text": "Statistical data associated with geographic regions is nowadays globally available in large amounts and hence automated methods to visually display these data are in high demand. There are several well-established thematic map types for quantitative data on the ratio-scale associated with regions: choropleth maps, cartograms, and proportional symbol maps. However, all these maps suffer from limitations, especially if large data values are associated with small regions. To overcome these limitations, we propose a novel type of quantitative thematic map, the necklace map. In a necklace map, the regions of the underlying two-dimensional map are projected onto intervals on a one-dimensional curve (the necklace) that surrounds the map regions. Symbols are scaled such that their area corresponds to the data of their region and placed without overlap inside the corresponding interval on the necklace. Necklace maps appear clear and uncluttered and allow for comparatively large symbol sizes. They visualize data sets well which are not proportional to region sizes. The linear ordering of the symbols along the necklace facilitates an easy comparison of symbol sizes. One map can contain several nested or disjoint necklaces to visualize clustered data. The advantages of necklace maps come at a price: the association between a symbol and its region is weaker than with other types of maps. Interactivity can help to strengthen this association if necessary. We present an automated approach to generate necklace maps which allows the user to interactively control the final symbol placement. We validate our approach with experiments using various data sets and maps.",
"title": ""
},
{
"docid": "8537541028cbf72b6e05c731d52df59f",
"text": "MONITORING FREQUENT ITEMS OVER DISTRIBUTED DATA STREAMS Robert H. Fuller April 3, 2007 Many important applications require the discovery of items which have occurred frequently. Knowledge of these items is commonly used in anomaly detection and network monitoring tasks. Effective solutions for this problem focus mainly on reducing memory requirements in a centralized environment. These solutions, however, ignore the inherently distributed nature of many systems. Naively forwarding data to a centralized location is not practical when dealing with high speed data streams and will result in significant communication overhead. This thesis proposes a new approach designed for continuously tracking frequent items over distributed data streams, providing either exact or approximate answers. The method introduced is a direct modification to an existing communication efficient algorithm called Top-K Monitoring. Experimental results demonstrated that the proposed modifications significantly reduced communication cost and improved scalability. Also examined in this thesis is the applicability of frequent item monitoring at detecting distributed denial of service attacks. Simulation of the proposed tracking",
"title": ""
},
{
"docid": "bf156a97587b55e8afe255fe1b1a8ac0",
"text": "In recent years researches are focused towards mining infrequent patterns rather than frequent patterns. Mining infrequent pattern plays vital role in detecting any abnormal event. In this paper, an algorithm named Infrequent Pattern Miner for Data Streams (IPM-DS) is proposed for mining nonzero infrequent patterns from data streams. The proposed algorithm adopts the FP-growth based approach for generating all infrequent patterns. The proposed algorithm (IPM-DS) is evaluated using health data set collected from wearable physiological sensors that measure vital parameters such as Heart Rate (HR), Breathing Rate (BR), Oxygen Saturation (SPO2) and Blood pressure (BP) and also with two publically available data sets such as e-coli and Wine from UCI repository. The experimental results show that the proposed algorithm generates all possible infrequent patterns in less time.",
"title": ""
},
{
"docid": "6b21fc5c80677016fbd3d52d721ecce5",
"text": "This paper focuses on the problem of generating human face pictures from specific attributes. The existing CNN-based face generation models, however, either ignore the identity of the generated face or fail to preserve the identity of the reference face image. Here we address this problem from the view of optimization, and suggest an optimization model to generate human face with the given attributes while keeping the identity of the reference image. The attributes can be obtained from the attribute-guided image or by tuning the attribute features of the reference image. With the deep convolutional network \"VGG-Face\", the loss is defined on the convolutional feature maps. We then apply the gradient decent algorithm to solve this optimization problem. The results validate the effectiveness of our method for attribute driven and identity-preserving face generation.",
"title": ""
},
{
"docid": "9caaf7c3c2e01e8625fc566db4913df1",
"text": "It is established that driver distraction is the result of sharing cognitive resources between the primary task (driving) and any other secondary task. In the case of holding conversations, a human passenger who is aware of the driving conditions can choose to interrupt his speech in situations potentially requiring more attention from the driver, but in-car information systems typically do not exhibit such sensitivity. We have designed and tested such a system in a driving simulation environment. Unlike other systems, our system delivers information via speech (calendar entries with scheduled meetings) but is able to react to signals from the environment to interrupt when the driver needs to be fully attentive to the driving task and subsequently resume its delivery. Distraction is measured by a secondary short-term memory task. In both tasks, drivers perform significantly worse when the system does not adapt its speech, while they perform equally well to control conditions (no concurrent task) when the system intelligently interrupts and resumes.",
"title": ""
},
{
"docid": "3300e4e29d160fb28861ac58740834b5",
"text": "To facilitate proactive fault management in large-scale systems such as IBM Blue Gene/P, online failure prediction is of paramount importance. While many techniques have been presented for online failure prediction, questions arise regarding two commonly used approaches: period-based and event-driven. Which one has better accuracy? What is the best observation window (i.e., the time interval used to collect evidence before making a prediction)? How does the lead time (i.e., the time interval from the prediction to the failure occurrence) impact prediction arruracy? To answer these questions, we analyze and compare period-based and event-driven prediction approaches via a Bayesian prediction model. We evaluate these prediction approaches, under a variety of testing parameters, by means of RAS logs collected from a production supercomputer at Argonne National Laboratory. Experimental results show that the period-based Bayesian model and the event-driven Bayesian model can achieve up to 65.0% and 83.8% prediction accuracy, respectively. Furthermore, our sensitivity study indicates that the event-driven approach seems more suitable for proactive fault management in large-scale systems like Blue Gene/P.",
"title": ""
},
{
"docid": "dc198f396142376e36d7143a5bfe7d19",
"text": "Successful direct pulp capping of cariously exposed permanent teeth with reversible pulpitis and incomplete apex formation can prevent the need for root canal treatment. A case report is presented which demonstrates the use of mineral trioxide aggregate (MTA) as a direct pulp capping material for the purpose of continued maturogenesis of the root. Clinical and radiographic follow-up demonstrated a vital pulp and physiologic root development in comparison with the contralateral tooth. MTA can be considered as an effective material for vital pulp therapy, with the goal of maturogenesis.",
"title": ""
},
{
"docid": "479f00e59bdc5744c818e29cdf446df3",
"text": "A new algorithm for Support Vector regression is described. For a priori chosen , it automatically adjusts a flexible tube of minimal radius to the data such that at most a fraction of the data points lie outside. Moreover, it is shown how to use parametric tube shapes with non-constant radius. The algorithm is analysed theoretically and experimentally.",
"title": ""
},
{
"docid": "db1537ee5c95f97a7e1146bc4fd68bf0",
"text": "BACKGROUND\nElotuzumab, an immunostimulatory monoclonal antibody targeting signaling lymphocytic activation molecule F7 (SLAMF7), showed activity in combination with lenalidomide and dexamethasone in a phase 1b-2 study in patients with relapsed or refractory multiple myeloma.\n\n\nMETHODS\nIn this phase 3 study, we randomly assigned patients to receive either elotuzumab plus lenalidomide and dexamethasone (elotuzumab group) or lenalidomide and dexamethasone alone (control group). Coprimary end points were progression-free survival and the overall response rate. Final results for the coprimary end points are reported on the basis of a planned interim analysis of progression-free survival.\n\n\nRESULTS\nOverall, 321 patients were assigned to the elotuzumab group and 325 to the control group. After a median follow-up of 24.5 months, the rate of progression-free survival at 1 year in the elotuzumab group was 68%, as compared with 57% in the control group; at 2 years, the rates were 41% and 27%, respectively. Median progression-free survival in the elotuzumab group was 19.4 months, versus 14.9 months in the control group (hazard ratio for progression or death in the elotuzumab group, 0.70; 95% confidence interval, 0.57 to 0.85; P<0.001). The overall response rate in the elotuzumab group was 79%, versus 66% in the control group (P<0.001). Common grade 3 or 4 adverse events in the two groups were lymphocytopenia, neutropenia, fatigue, and pneumonia. Infusion reactions occurred in 33 patients (10%) in the elotuzumab group and were grade 1 or 2 in 29 patients.\n\n\nCONCLUSIONS\nPatients with relapsed or refractory multiple myeloma who received a combination of elotuzumab, lenalidomide, and dexamethasone had a significant relative reduction of 30% in the risk of disease progression or death. (Funded by Bristol-Myers Squibb and AbbVie Biotherapeutics; ELOQUENT-2 ClinicalTrials.gov number, NCT01239797.).",
"title": ""
},
{
"docid": "79574c304675e0ec1a2282027c9fc7c6",
"text": "The metaphoric mapping theory suggests that abstract concepts, like time, are represented in terms of concrete dimensions such as space. This theory receives support from several lines of research ranging from psychophysics to linguistics and cultural studies; especially strong support comes from recent response time studies. These studies have reported congruency effects between the dimensions of time and space indicating that time evokes spatial representations that may facilitate or impede responses to words with a temporal connotation. The present paper reports the results of three linguistic experiments that examined this congruency effect when participants processed past- and future-related sentences. Response time was shorter when past-related sentences required a left-hand response and future-related sentences a right-hand response than when this mapping of time onto response hand was reversed (Experiment 1). This result suggests that participants can form time-space associations during the processing of sentences and thus this result is consistent with the view that time is mentally represented from left to right. The activation of these time-space associations, however, appears to be non-automatic as shown by the results of Experiments 2 and 3 when participants were asked to perform a non-temporal meaning discrimination task.",
"title": ""
},
{
"docid": "20190b5523357be0e7565f84b96fefef",
"text": "To accurately mimic the native tissue environment, tissue engineered scaffolds often need to have a highly controlled and varied display of three-dimensional (3D) architecture and geometrical cues. Additive manufacturing in tissue engineering has made possible the development of complex scaffolds that mimic the native tissue architectures. As such, architectural details that were previously unattainable or irreproducible can now be incorporated in an ordered and organized approach, further advancing the structural and chemical cues delivered to cells interacting with the scaffold. This control over the environment has given engineers the ability to unlock cellular machinery that is highly dependent upon the intricate heterogeneous environment of native tissue. Recent research into the incorporation of physical and chemical gradients within scaffolds indicates that integrating these features improves the function of a tissue engineered construct. This review covers recent advances on techniques to incorporate gradients into polymer scaffolds through additive manufacturing and evaluate the success of these techniques. As covered here, to best replicate different tissue types, one must be cognizant of the vastly different types of manufacturing techniques available to create these gradient scaffolds. We review the various types of additive manufacturing techniques that can be leveraged to fabricate scaffolds with heterogeneous properties and discuss methods to successfully characterize them.\n\n\nSTATEMENT OF SIGNIFICANCE\nAdditive manufacturing techniques have given tissue engineers the ability to precisely recapitulate the native architecture present within tissue. In addition, these techniques can be leveraged to create scaffolds with both physical and chemical gradients. This work offers insight into several techniques that can be used to generate graded scaffolds, depending on the desired gradient. Furthermore, it outlines methods to determine if the designed gradient was achieved. This review will help to condense the abundance of information that has been published on the creation and characterization of gradient scaffolds and to provide a single review discussing both methods for manufacturing gradient scaffolds and evaluating the establishment of a gradient.",
"title": ""
},
{
"docid": "40c2110eaefe79a096099aa5db7426fe",
"text": "One-hop broadcasting is the predominate form of network traffic in VANETs. Exchanging status information by broadcasting among the vehicles enhances vehicular active safety. Since there is no MAC layer broadcasting recovery for 802.11 based VANETs, efforts should be made towards more robust and effective transmission of such safety-related information. In this paper, a channel adaptive broadcasting method is proposed. It relies solely on channel condition information available at each vehicle by employing standard supported sequence number mechanisms. The proposed method is fully compatible with 802.11 and introduces no communication overhead. Simulation studies show that it outperforms standard broadcasting in term of reception rate and channel utilization.",
"title": ""
},
{
"docid": "75a53d2e1f13de6241742b71cf5fdbc4",
"text": "Encoders were video recorded giving either truthful or deceptive descriptions of video footage designed to generate either emotional or unemotional responses. Decoders were asked to indicate the truthfulness of each item, what cues they used in making their judgements, and then to complete both the Micro Expression Training Tool (METT) and Subtle Expression Training Tool (SETT). Although overall performance on the deception detection task was no better than chance, performance for emotional lie detection was significantly above chance, while that for unemotional lie detection was significantly below chance. Emotional lie detection accuracy was also significantly positively correlated with reported use of facial expressions and with performance on the SETT, but not on the METT. The study highlights the importance of taking the type of lie into account when assessing skill in deception detection.",
"title": ""
},
{
"docid": "c6f9c8ee92acfd02e49253b1e065ca46",
"text": "The majority of penile carcinoma is squamous cell carcinoma. Although uncommon in the United States, it represents a larger proportion of cancers in the underdeveloped world. Invasive squamous cell carcinoma may arise from precursor lesions or de novo , and has been associated with lack of circumcision and HPV infection. Early diagnosis is imperative as lymphatic spread is associated with a poor prognosis. Radical surgical treatment is no longer the mainstay, and penile sparing treatments now are often used, including Mohs micrographic surgery. Therapeutic decisions should be made with regard to the size and location of the tumor, as well as the functional desires of the patient. It is critical for the dermatologist to be familiar with the evaluation, grading/staging, and treatment advances of penile squamous cell carcinoma. Herein, we present a review of the literature regarding penile squamous cell carcinoma, as well as a case report of invasive squamous cell carcinoma treated with Mohs micrographic surgery.",
"title": ""
},
{
"docid": "72ddcb7a55918a328576a811a89d245b",
"text": "Among all new emerging RNA species, microRNAs (miRNAs) have attracted the interest of the scientific community due to their implications as biomarkers of prognostic value, disease progression, or diagnosis, because of defining features as robust association with the disease, or stable presence in easily accessible human biofluids. This field of research has been established twenty years ago, and the development has been considerable. The regulatory nature of miRNAs makes them great candidates for the treatment of infectious diseases, and a successful example in the field is currently being translated to clinical practice. This review will present a general outline of miRNAmolecules, as well as successful stories of translational significance which are getting us closer from the basic bench studies into clinical practice.",
"title": ""
}
] |
scidocsrr
|
6576d351efa9dbcd3a5f6b38a24f65c8
|
Sensor Cloud: A Cloud of Virtual Sensors
|
[
{
"docid": "fa3c52e9b3c4a361fd869977ba61c7bf",
"text": "The combination of the Internet and emerging technologies such as nearfield communications, real-time localization, and embedded sensors lets us transform everyday objects into smart objects that can understand and react to their environment. Such objects are building blocks for the Internet of Things and enable novel computing applications. As a step toward design and architectural principles for smart objects, the authors introduce a hierarchy of architectures with increasing levels of real-world awareness and interactivity. In particular, they describe activity-, policy-, and process-aware smart objects and demonstrate how the respective architectural abstractions support increasingly complex application.",
"title": ""
}
] |
[
{
"docid": "e737c117cd6e7083cd50069b70d236cb",
"text": "In this article we discuss a data structure, which combines advantages of two different ways for representing graphs: adjacency matrix and collection of adjacency lists. This data structure can fast add and search edges (advantages of adjacency matrix), use linear amount of memory, let to obtain adjacency list for certain vertex (advantages of collection of adjacency lists). Basic knowledge of linked lists and hash tables is required to understand this article. The article contains examples of implementation on Java.",
"title": ""
},
{
"docid": "68d6d818596518114dc829bb9ecc570f",
"text": "Learning analytics is a significant area of technology-enhanced learning that has emerged during the last decade. This review of the field begins with an examination of the technological, educational and political factors that have driven the development of analytics in educational settings. It goes on to chart the emergence of learning analytics, including their origins in the 20th century, the development of data-driven analytics, the rise of learningfocused perspectives and the influence of national economic concerns. It next focuses on the relationships between learning analytics, educational data mining and academic analytics. Finally, it examines developing areas of learning analytics research, and identifies a series of future challenges.",
"title": ""
},
{
"docid": "7ea1ad3f27cb76dc6fd0e4e0dd48b09e",
"text": "This paper presents results on the modeling and control for an Unmanned Aerial Vehicle (UAV) kind quadrotor transporting a cable-suspended payload. The mathematical model is based on Euler-Lagrange formulation, where the integrated dynamics of the quadrotor, cable and payload are considered. An Interconnection and Damping Assignment Passivity - Based Control (IDA-PBC) for a quadrotor UAV transporting a cable-suspended payload is designed. The control objective is to transport the payload from point to point transfer with swing suppression along trajectory. The cable is considered rigid. Numerical simulations are carried out to validate the overall control approach.",
"title": ""
},
{
"docid": "d9c514f3e1089f258732eef4a949fe55",
"text": "Shading is a tedious process for artists involved in 2D cartoon and manga production given the volume of contents that the artists have to prepare regularly over tight schedule. While we can automate shading production with the presence of geometry, it is impractical for artists to model the geometry for every single drawing. In this work, we aim to automate shading generation by analyzing the local shapes, connections, and spatial arrangement of wrinkle strokes in a clean line drawing. By this, artists can focus more on the design rather than the tedious manual editing work, and experiment with different shading effects under different conditions. To achieve this, we have made three key technical contributions. First, we model five perceptual cues by exploring relevant psychological principles to estimate the local depth profile around strokes. Second, we formulate stroke interpretation as a global optimization model that simultaneously balances different interpretations suggested by the perceptual cues and minimizes the interpretation discrepancy. Lastly, we develop a wrinkle-aware inflation method to generate a height field for the surface to support the shading region computation. In particular, we enable the generation of two commonly-used shading styles: 3D-like soft shading and manga-style flat shading.",
"title": ""
},
{
"docid": "56ec3abe17259cae868e17dc2163fc0e",
"text": "This paper reports a case study about lessons learned and usability issues encountered in a usability inspection of a digital library system called the Networked Computer Science Technical Reference Library (NCSTRL). Using a co-discovery technique with a team of three expert usability inspectors (the authors), we performed a usability inspection driven by a broad set of anticipated user tasks. We found many good design features in NCSTRL, but the primary result of a usability inspection is a list of usability problems as candidates for fixing. The resulting problems are organized by usability problem type and by system functionality, with emphasis on the details of problems specific to digital library functions. The resulting usability problem list was used to illustrate a cost/importance analysis technique that trades off importance to fix against cost to fix. The problems are sorted by the ratio of importance to cost, producing a priority ranking for resolution.",
"title": ""
},
{
"docid": "c7f0a749e38b3b7eba871fca80df9464",
"text": "This paper presents QurAna: a large corpus created from the original Quranic text, where personal pronouns are tagged with their antecedence. These antecedents are maintained as an ontological list of concepts, which has proved helpful for information retrieval tasks. QurAna is characterized by: (a) comparatively large number of pronouns tagged with antecedent information (over 24,500 pronouns), and (b) maintenance of an ontological concept list out of these antecedents. We have shown useful applications of this corpus. This corpus is the first of its kind covering Classical Arabic text, and could be used for interesting applications for Modern Standard Arabic as well. This corpus will enable researchers to obtain empirical patterns and rules to build new anaphora resolution approaches. Also, this corpus can be used to train, optimize and evaluate existing approaches.",
"title": ""
},
{
"docid": "2f566d97cf0949ae54276525b805239e",
"text": "The paper analyzes some forms of linguistic ambiguity in English in a specific register, i.e. newspaper headlines. In particular, the focus of the research is on examples of lexical and syntactic ambiguity that result in sources of voluntary or involuntary humor. The study is based on a corpus of 135 verbally ambiguous headlines found on web sites presenting humorous bits of information. The linguistic phenomena that contribute to create this kind of semantic confusion in headlines will be analyzed and divided into the three main categories of lexical, syntactic, and phonological ambiguity, and examples from the corpus will be discussed for each category. The main results of the study were that, firstly, contrary to the findings of previous research on jokes, syntactically ambiguous headlines were found in good percentage in the corpus and that this might point to di¤erences in genre. Secondly, two new configurations for the processing of the disjunctor/connector order were found. In the first of these configurations the disjunctor appears before the connector, instead of being placed after or coinciding with the ambiguous element, while in the second one two ambiguous elements are present, each of which functions both as a connector and",
"title": ""
},
{
"docid": "bfcd6adc2df1cb6260696f9aeb4d4ea6",
"text": "The microtubule-dependent GEF-H1 pathway controls synaptic re-networking and overall gene expression via regulating cytoskeleton dynamics. Understanding this pathway after ischemia is essential to developing new therapies for neuronal function recovery. However, how the GEF-H1 pathway is regulated following transient cerebral ischemia remains unknown. This study employed a rat model of transient forebrain ischemia to investigate alterations of the GEF-H1 pathway using Western blotting, confocal and electron microscopy, dephosphorylation analysis, and pull-down assay. The GEF-H1 activity was significantly upregulated by: (i) dephosphorylation and (ii) translocation to synaptic membrane and nuclear structures during the early phase of reperfusion. GEF-H1 protein was then downregulated in the brain regions where neurons were destined to undergo delayed neuronal death, but markedly upregulated in neurons that were resistant to the same episode of cerebral ischemia. Consistently, GTP-RhoA, a GEF-H1 substrate, was significantly upregulated after brain ischemia. Electron microscopy further showed that neuronal microtubules were persistently depolymerized in the brain region where GEF-H1 protein was downregulated after brain ischemia. The results demonstrate that the GEF-H1 activity is significantly upregulated in both vulnerable and resistant brain regions in the early phase of reperfusion. However, GEF-H1 protein is downregulated in the vulnerable neurons but upregulated in the ischemic resistant neurons during the recovery phase after ischemia. The initial upregulation of GEF-H1 activity may contribute to excitotoxicity, whereas the late upregulation of GEF-H1 protein may promote neuroplasticity after brain ischemia.",
"title": ""
},
{
"docid": "39edb0849fcecb5261c51a071f19acfa",
"text": "In 1899, Galton first captured ink-on-paper fingerprints of a single child from birth until the age of 4.5 years, manually compared the prints, and concluded that “the print of a child at the age of 2.5 years would serve to identify him ever after.” Since then, ink-on-paper fingerprinting and manual comparison methods have been superseded by digital capture and automatic fingerprint comparison techniques, but only a few feasibility studies on child fingerprint recognition have been conducted. Here, we present the first systematic and rigorous longitudinal study that addresses the following questions: 1) Do fingerprints of young children possess the salient features required to uniquely recognize a child? 2) If so, at what age can a child’s fingerprints be captured with sufficient fidelity for recognition? 3) Can a child’s fingerprints be used to reliably recognize the child as he ages? For this paper, we collected fingerprints of 309 children (0–5 years old) four different times over a one year period. We show, for the first time, that fingerprints acquired from a child as young as 6-h old exhibit distinguishing features necessary for recognition, and that state-of-the-art fingerprint technology achieves high recognition accuracy (98.9% true accept rate at 0.1% false accept rate) for children older than six months. In addition, we use mixed-effects statistical models to study the persistence of child fingerprint recognition accuracy and show that the recognition accuracy is not significantly affected over the one year time lapse in our data. Given rapidly growing requirements to recognize children for vaccination tracking, delivery of supplementary food, and national identification documents, this paper demonstrates that fingerprint recognition of young children (six months and older) is a viable solution based on available capture and recognition technology.",
"title": ""
},
{
"docid": "0907539385c59f9bd476b2d1fb723a38",
"text": "We present a real-time method for synthesizing highly complex human motions using a novel training regime we call the auto-conditioned Recurrent Neural Network (acRNN). Recently, researchers have attempted to synthesize new motion by using autoregressive techniques, but existing methods tend to freeze or diverge after a couple of seconds due to an accumulation of errors that are fed back into the network. Furthermore, such methods have only been shown to be reliable for relatively simple human motions, such as walking or running. In contrast, our approach can synthesize arbitrary motions with highly complex styles, including dances or martial arts in addition to locomotion. The acRNN is able to accomplish this by explicitly accommodating for autoregressive noise accumulation during training. Our work is the first to our knowledge that demonstrates the ability to generate over 18,000 continuous frames (300 seconds) of new complex human motion w.r.t. different styles.",
"title": ""
},
{
"docid": "d5c57af0f7ab41921ddb92a5de31c33a",
"text": "This paper investigates how to blindly evaluate the visual quality of an image by learning rules from linguistic descriptions. Extensive psychological evidence shows that humans prefer to conduct evaluations qualitatively rather than numerically. The qualitative evaluations are then converted into the numerical scores to fairly benchmark objective image quality assessment (IQA) metrics. Recently, lots of learning-based IQA models are proposed by analyzing the mapping from the images to numerical ratings. However, the learnt mapping can hardly be accurate enough because some information has been lost in such an irreversible conversion from the linguistic descriptions to numerical scores. In this paper, we propose a blind IQA model, which learns qualitative evaluations directly and outputs numerical scores for general utilization and fair comparison. Images are represented by natural scene statistics features. A discriminative deep model is trained to classify the features into five grades, corresponding to five explicit mental concepts, i.e., excellent, good, fair, poor, and bad. A newly designed quality pooling is then applied to convert the qualitative labels into scores. The classification framework is not only much more natural than the regression-based models, but also robust to the small sample size problem. Thorough experiments are conducted on popular databases to verify the model's effectiveness, efficiency, and robustness.",
"title": ""
},
{
"docid": "c89b903e497ebe8e8d89e8d1d931fae1",
"text": "Artificial neural networks (ANNs) are flexible computing frameworks and universal approximators that can be applied to a wide range of time series forecasting problems with a high degree of accuracy. However, despite all advantages cited for artificial neural networks, their performance for some real time series is not satisfactory. Improving forecasting especially time series forecasting accuracy is an important yet often difficult task facing forecasters. Both theoretical and empirical findings have indicated that integration of different models can be an effective way of improving upon their predictive performance, especially when the models in the ensemble are quite different. In this paper, a novel hybrid model of artificial neural networks is proposed using auto-regressive integrated moving average (ARIMA) models in order to yield a more accurate forecasting model than artificial neural networks. The empirical results with three well-known real data sets indicate that the proposed model can be an effective way to improve forecasting accuracy achieved by artificial neural networks. Therefore, it can be used as an appropriate alternative model for forecasting task, especially when higher forecasting accuracy is needed. 2009 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "4f537c9e63bbd967e52f22124afa4480",
"text": "Computer role playing games engage players through interleaved story and open-ended game play. We present an approach to procedurally generating, rendering, and making playable novel games based on a priori unknown story structures. These stories may be authored by humans or by computational story generation systems. Our approach couples player, designer, and algorithm to generate a novel game using preferences for game play style, general design aesthetics, and a novel story structure. Our approach is implemented in Game Forge, a system that uses search-based optimization to find and render a novel game world configuration that supports a sequence of plot points plus play style preferences. Additionally, Game Forge supports execution of the game through reactive control of game world logic and non-player character behavior.",
"title": ""
},
{
"docid": "ce15521ba1e67b111f685f1c0b23a638",
"text": "In this paper, we try to leverage a large-scale and multilingual knowledge base, Wikipedia, to help effectively analyze and organize Web information written in different languages. Based on the observation that one Wikipedia concept may be described by articles in different languages, we adapt existing topic modeling algorithm for mining multilingual topics from this knowledge base. The extracted 'universal' topics have multiple types of representations, with each type corresponding to one language. Accordingly, new documents of different languages can be represented in a space using a group of universal topics, which makes various multilingual Web applications feasible.",
"title": ""
},
{
"docid": "dc84e401709509638a1a9e24d7db53e1",
"text": "AIM AND OBJECTIVES\nExocrine pancreatic insufficiency caused by inflammation or pancreatic tumors results in nutrient malfunction by a lack of digestive enzymes and neutralization compounds. Despite satisfactory clinical results with current enzyme therapies, a normalization of fat absorption in patients is rare. An individualized therapy is required that includes high dosage of enzymatic units, usage of enteric coating, and addition of gastric proton pump inhibitors. The key goal to improve this therapy is to identify digestive enzymes with high activity and stability in the gastrointestinal tract.\n\n\nMETHODS\nWe cloned and analyzed three novel ciliate lipases derived from Tetrahymena thermophila. Using highly precise pH-STAT-titration and colorimetric methods, we determined stability and lipolytic activity under physiological conditions in comparison with commercially available porcine and fungal digestive enzyme preparations. We measured from pH 2.0 to 9.0, with different bile salts concentrations, and substrates such as olive oil and fat derived from pig diet.\n\n\nRESULTS\nCiliate lipases CL-120, CL-130, and CL-230 showed activities up to 220-fold higher than Creon, pancreatin standard, and rizolipase Nortase within a pH range from pH 2.0 to 9.0. They are highly active in the presence of bile salts and complex pig diet substrate, and more stable after incubation in human gastric juice compared with porcine pancreatic lipase and rizolipase.\n\n\nCONCLUSIONS\nThe newly cloned and characterized lipases fulfilled all requirements for high activity under physiological conditions. These novel enzymes are therefore promising candidates for an improved enzyme replacement therapy for exocrine pancreatic insufficiency.",
"title": ""
},
{
"docid": "dba804ec55201a683e8f4d82dbd15b6a",
"text": "We present a practical and inexpensive method to reconstruct 3D scenes that include transparent and mirror objects. Our work is motivated by the need for automatically generating 3D models of interior scenes, which commonly include glass. These large structures are often invisible to cameras or even to our human visual system. Existing 3D reconstruction methods for transparent objects are usually not applicable in such a room-sized reconstruction setting. Our simple hardware setup augments a regular depth camera (e.g., the Microsoft Kinect camera) with a single ultrasonic sensor, which is able to measure the distance to any object, including transparent surfaces. The key technical challenge is the sparse sampling rate from the acoustic sensor, which only takes one point measurement per frame. To address this challenge, we take advantage of the fact that the large scale glass structures in indoor environments are usually either piece-wise planar or a simple parametric surface. Based on these assumptions, we have developed a novel sensor fusion algorithm that first segments the (hybrid) depth map into different categories such as opaque/transparent/infinity (e.g., too far to measure) and then updates the depth map based on the segmentation outcome. We validated our algorithms with a number of challenging cases, including multiple panes of glass, mirrors, and even a curved glass cabinet.",
"title": ""
},
{
"docid": "de73980005a62a24820ed199fab082a3",
"text": "Natural language interfaces offer end-users a familiar and convenient option for querying ontology-based knowledge bases. Several studies have shown that they can achieve high retrieval performance as well as domain independence. This paper focuses on usability and investigates if NLIs are useful from an end-user’s point of view. To that end, we introduce four interfaces each allowing a different query language and present a usability study benchmarking these interfaces. The results of the study reveal a clear preference for full sentences as query language and confirm that NLIs are useful for querying Semantic Web data.",
"title": ""
},
{
"docid": "43269c32b765b0f5d5d0772e0b1c5906",
"text": "Silver nanoparticles (AgNPs) have been synthesized by Lantana camara leaf extract through simple green route and evaluated their antibacterial and catalytic activities. The leaf extract (LE) itself acts as both reducing and stabilizing agent at once for desired nanoparticle synthesis. The colorless reaction mixture turns to yellowish brown attesting the AgNPs formation and displayed UV-Vis absorption spectra. Structural analysis confirms the crystalline nature and formation of fcc structured metallic silver with majority (111) facets. Morphological studies elicit the formation of almost spherical shaped nanoparticles and as AgNO3 concentration is increased, there is an increment in the particle size. The FTIR analysis evidences the presence of various functional groups of biomolecules of LE is responsible for stabilization of AgNPs. Zeta potential measurement attests the higher stability of synthesized AgNPs. The synthesized AgNPs exhibited good antibacterial activity when tested against Escherichia coli, Pseudomonas spp., Bacillus spp. and Staphylococcus spp. using standard Kirby-Bauer disc diffusion assay. Furthermore, they showed good catalytic activity on the reduction of methylene blue by L. camara extract which is monitored and confirmed by the UV-Vis spectrophotometer.",
"title": ""
},
{
"docid": "76976c3c640f33b546999b6136150636",
"text": "Investigations that require the exploitation of large volumes of face imagery are increasingly common in current forensic scenarios (e.g., Boston Marathon bombing), but effective solutions for triaging such imagery (i.e., low importance, moderate importance, and of critical interest) are not available in the literature. General issues for investigators in these scenarios are a lack of systems that can scale to volumes of images of the order of a few million, and a lack of established methods for clustering the face images into the unknown number of persons of interest contained in the collection. As such, we explore best practices for clustering large sets of face images (up to 1 million here) into large numbers of clusters (approximately 200 thousand) as a method of reducing the volume of data to be investigated by forensic analysts. Our analysis involves a performance comparison of several clustering algorithms in terms of the accuracy of grouping face images by identity, run-time, and efficiency in representing large datasets of face images in terms of compact and isolated clusters. For two different face datasets, a mugshot database (PCSO) and the well known unconstrained dataset, LFW, we find the rank-order clustering method to be effective in clustering accuracy, and relatively efficient in terms of run-time.",
"title": ""
}
] |
scidocsrr
|
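One of the passages above describes a hybrid forecaster that first fits a linear ARIMA model and then trains a neural network on its residuals. The sketch below is only an illustration of that general idea and is not code from the cited paper; the library choices (statsmodels, scikit-learn), the ARIMA order, the lag window, and all names are assumptions.

```python
# Minimal sketch of an ARIMA + neural-network hybrid forecaster.
# Assumptions: statsmodels and scikit-learn are available; the ARIMA order
# and lag window are illustrative values, not settings from any paper.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.neural_network import MLPRegressor

def hybrid_forecast(series, order=(1, 1, 1), lags=4, steps=1):
    # 1) Linear component: fit ARIMA and collect its residuals.
    arima = ARIMA(series, order=order).fit()
    resid = np.asarray(arima.resid)

    # 2) Nonlinear component: train an MLP to predict the next residual
    #    from the previous `lags` residuals.
    X = np.array([resid[i - lags:i] for i in range(lags, len(resid))])
    y = resid[lags:]
    mlp = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
    mlp.fit(X, y)

    # 3) Combined forecast: ARIMA forecast plus predicted residual correction.
    linear_part = np.asarray(arima.forecast(steps=steps))
    window = list(resid[-lags:])
    corrections = []
    for _ in range(steps):
        nxt = float(mlp.predict(np.array(window[-lags:]).reshape(1, -1))[0])
        corrections.append(nxt)
        window.append(nxt)
    return linear_part + np.array(corrections)

# Example usage with synthetic data:
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.arange(200)
    data = 0.05 * t + np.sin(t / 6.0) + rng.normal(scale=0.2, size=t.size)
    print(hybrid_forecast(data, steps=3))
```

The split into a linear ARIMA part and a nonlinear residual model mirrors the decomposition described in the passage; any other regressor could stand in for the MLP.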
d32ce1437cfd22c502e5c22950942c19
|
Knowledge Tracing Machines: Factorization Machines for Knowledge Tracing
|
[
{
"docid": "6708846369ea2f352ac8784c75e4652d",
"text": "This work presents simple and fast structured Bayesian learning for matrix and tensor factorization models. An unblocked Gibbs sampler is proposed for factorization machines (FM) which are a general class of latent variable models subsuming matrix, tensor and many other factorization models. We empirically show on the large Netflix challenge dataset that Bayesian FM are fast, scalable and more accurate than state-of-the-art factorization models.",
"title": ""
},
{
"docid": "7209596ad58da21211bfe0ceaaccc72b",
"text": "Knowledge tracing (KT)[1] has been used in various forms for adaptive computerized instruction for more than 40 years. However, despite its long history of application, it is difficult to use in domain model search procedures, has not been used to capture learning where multiple skills are needed to perform a single action, and has not been used to compute latencies of actions. On the other hand, existing models used for educational data mining (e.g. Learning Factors Analysis (LFA)[2]) and model search do not tend to allow the creation of a “model overlay” that traces predictions for individual students with individual skills so as to allow the adaptive instruction to automatically remediate performance. Because these limitations make the transition from model search to model application in adaptive instruction more difficult, this paper describes our work to modify an existing data mining model so that it can also be used to select practice adaptively. We compare this new adaptive data mining model (PFA, Performance Factors Analysis) with two versions of LFA and then compare PFA with standard KT.",
"title": ""
},
{
"docid": "b75336a7470fe2b002e742dbb6bfa8d5",
"text": "In Intelligent Tutoring System (ITS), tracing the student's knowledge state during learning has been studied for several decades in order to provide more supportive learning instructions. In this paper, we propose a novel model for knowledge tracing that i) captures students' learning ability and dynamically assigns students into distinct groups with similar ability at regular time intervals, and ii) combines this information with a Recurrent Neural Network architecture known as Deep Knowledge Tracing. Experimental results confirm that the proposed model is significantly better at predicting student performance than well known state-of-the-art techniques for student modelling.",
"title": ""
},
{
"docid": "2cf13325c8901f25418f6c6266106075",
"text": "Knowledge tracing—where a machine models the knowledge of a student as they interact with coursework—is a well established problem in computer supported education. Though effectively modeling student knowledge would have high educational impact, the task has many inherent challenges. In this paper we explore the utility of using Recurrent Neural Networks (RNNs) to model student learning. The RNN family of models have important advantages over previous methods in that they do not require the explicit encoding of human domain knowledge, and can capture more complex representations of student knowledge. Using neural networks results in substantial improvements in prediction performance on a range of knowledge tracing datasets. Moreover the learned model can be used for intelligent curriculum design and allows straightforward interpretation and discovery of structure in student tasks. These results suggest a promising new line of research for knowledge tracing and an exemplary application task for RNNs.",
"title": ""
}
] |
[
{
"docid": "ef89fd9b1e748280e988210c663b406f",
"text": "Better life of human is a central goal of information technology. To make a useful technology, in sensor network area, activity recognition (AR) is becoming a key feature. Using the AR technology it is now possible to know peoples behaviors like what they do, how they do and when they do etc. In recent years, there have been frequent accidental reports of aged dementia patients, and social cost has been increasing to take care of them. AR can be utilized to take care of these patients. In this paper, we present an efficient method that converts sensor’s raw data to readable patterns in order to classify their current activities and then compare these patterns with previously stored patterns to detect several abnormal patterns like wandering which is one of the early symptoms of dementia and so on. In this way, we digitalize human activities and can detect wandering and so can infer dementia through activity pattern matching. Here, we present a novel algorithm about activity digitalization using acceleration sensors as well as a wandering estimation algorithm in order to overcome limitations of existing models to detect/infer dementia.",
"title": ""
},
{
"docid": "ba93902813caa2fc8cddfbaa5f8b4917",
"text": "This paper proposes a technique to utilize the power of chatterbots to serve as interactive Support systems to enterprise applications which aim to address a huge audience. The need for support systems arises due to inability of computer illiterate audience to utilize the services offered by an enterprise application. Setting up customer support centers works well for small-medium sized businesses but for mass applications (here E-Governance Systems) the audience counts almost all a country has as its population, Setting up support center that can afford such load is irrelevant. This paper proposes a solution by using AIML based chatterbots to implement Artificial Support Entity (ASE) to such Applications.",
"title": ""
},
{
"docid": "5f63f65789e46b2eb9b9e853aba9bd72",
"text": "The cost of rare earth (RE) permanent magnet along with the associated supply volatility have intensified the interests for machine topologies which eliminate or reduce the RE magnets usage. This paper presents one such design solution, the separately excited synchronous motor (SESM) which eliminates RE magnets, however, but does not sacrifice the peak torque and power of the motor. The major drawback of such motors is the necessity of brushes to supply the field current. This is especially a challenge for hybrid or electric vehicle applications where the machine is actively cooled with oil inside the transmission. Sealing the brushes from the oil is challenging and would limit the application of such motor inside a transmission. To overcome this problem, a contactless rotary transformer is designed and implemented for the rotor field excitation. The designed motor is built and tested. The test data show that the designed motor outperforms an equivalent interior permanent magnet (IPM) motor, which is optimized for a hybrid application, for both peak torque and power. Better drive system efficiency is measured at high speed compared to the IPM machine, while the later outperforms (for efficiency) the SESM at low and medium speed range.",
"title": ""
},
{
"docid": "c0ee14083f779e3f4115f8b5fd822f67",
"text": "The booming popularity of smartphones is partly a result of application markets where users can easily download wide range of third-party applications. However, due to the open nature of markets, especially on Android, there have been several privacy and security concerns with these applications. On Google Play, as with most other markets, users have direct access to natural-language descriptions of those applications, which give an intuitive idea of the functionality including the security-related information of those applications. Google Play also provides the permissions requested by applications to access security and privacy-sensitive APIs on the devices. Users may use such a list to evaluate the risks of using these applications. To best assist the end users, the descriptions should reflect the need for permissions, which we term description-to-permission fidelity. In this paper, we present a system AutoCog to automatically assess description-to-permission fidelity of applications. AutoCog employs state-of-the-art techniques in natural language processing and our own learning-based algorithm to relate description with permissions. In our evaluation, AutoCog outperforms other related work on both performance of detection and ability of generalization over various permissions by a large extent. On an evaluation of eleven permissions, we achieve an average precision of 92.6% and an average recall of 92.0%. Our large-scale measurements over 45,811 applications demonstrate the severity of the problem of low description-to-permission fidelity. AutoCog helps bridge the long-lasting usability gap between security techniques and average users.",
"title": ""
},
{
"docid": "6323d02c10ab093262eafc0bab3d70a7",
"text": "and Applied Analysis 3 Definition 12. Let (X, d) be a metric space. We say that G : X → CL(X) is a subintegral type (α∗, ψ)-contractive if there exist two functions ψ ∈ Ψ and φ ∈ Φ s such that for each x ∈ X and y ∈ Gx, there exists z ∈ Gy satisfying",
"title": ""
},
{
"docid": "d1b509ce63a9ca777d6a0d4d8af19ae3",
"text": "The study explores the reliability, validity, and measurement invariance of the Video game Addiction Test (VAT). Game-addiction problems are often linked to Internet enabled online games; the VAT has the unique benefit that it is theoretically and empirically linked to Internet addiction. The study used data (n=2,894) from a large-sample paper-and-pencil questionnaire study, conducted in 2009 on secondary schools in Netherlands. Thus, the main source of data was a large sample of schoolchildren (aged 13-16 years). Measurements included the proposed VAT, the Compulsive Internet Use Scale, weekly hours spent on various game types, and several psychosocial variables. The VAT demonstrated excellent reliability, excellent construct validity, a one-factor model fit, and a high degree of measurement invariance across gender, ethnicity, and learning year, indicating that the scale outcomes can be compared across different subgroups with little bias. In summary, the VAT can be helpful in the further study of video game addiction, and it contributes to the debate on possible inclusion of behavioral addictions in the upcoming DSM-V.",
"title": ""
},
{
"docid": "7dcb86bfac5e4f6195fc2700ef98af36",
"text": "This paper introduces a multilinear principal component analysis (MPCA) framework for tensor object feature extraction. Objects of interest in many computer vision and pattern recognition applications, such as 2D/3D images and video sequences are naturally described as tensors or multilinear arrays. The proposed framework performs feature extraction by determining a multilinear projection that captures most of the original tensorial input variation. The solution is iterative in nature and it proceeds by decomposing the original problem to a series of multiple projection subproblems. As part of this work, methods for subspace dimensionality determination are proposed and analyzed. It is shown that the MPCA framework discussed in this work supplants existing heterogeneous solutions such as the classical principal component analysis (PCA) and its 2D variant (2D PCA). Finally, a tensor object recognition system is proposed with the introduction of a discriminative tensor feature selection mechanism and a novel classification strategy, and applied to the problem of gait recognition. Results presented here indicate MPCA's utility as a feature extraction tool. It is shown that even without a fully optimized design, an MPCA-based gait recognition module achieves highly competitive performance and compares favorably to the state-of-the-art gait recognizers.",
"title": ""
},
{
"docid": "62c49155e92350a0420fb215f0a92f78",
"text": "Coordination, the process by which an agent reasons about its local actions and the (anticipated) actions of others to try and ensure the community acts in a coherent manner, is perhaps the key problem of the discipline of Distributed Artificial Intelligence (DAI). In order to make advances it is important that the theories and principles which guide this central activity are uncovered and analysed in a systematic and rigourous manner. To this end, this paper models agent communities using a distributed goal search formalism, and argues that commitments (pledges to undertake a specific course of action) and conventions (means of monitoring commitments in changing circumstances) are the foundation of coordination in all DAI systems. 1. The Coordination Problem Participation in any social situation should be both simultaneously constraining, in that agents must make a contribution to it, and yet enriching, in that participation provides resources and opportunities which would otherwise be unavailable (Gerson, 1976). Coordination, the process by which an agent reasons about its local actions and the (anticipated) actions of others to try and ensure the community acts in a coherent manner, is the key to achieving this objective. Without coordination the benefits of decentralised problem solving vanish and the community may quickly degenerate into a collection of chaotic, incohesive individuals. In more detail, the objectives of the coordination process are to ensure: that all necessary portions of the overall problem are included in the activities of at least one agent, that agents interact in a manner which permits their activities to be developed and integrated into an overall solution, that team members act in a purposeful and consistent manner, and that all of these objectives are achievable within the available computational and resource limitations (Lesser and Corkill, 1987). Specific examples of coordination activities include supplying timely information to needy agents, ensuring the actions of multiple actors are synchronised and avoiding redundant problem solving. There are three main reasons why the actions of multiple agents need to be coordinated: • because there are dependencies between agents’ actions Interdependence occurs when goals undertaken by individual agents are related either because local decisions made by one agent have an impact on the decisions of other community members (eg when building a house, decisions about the size and location of rooms impacts upon the wiring and plumbing) or because of the possibility of harmful interactions amongst agents (eg two mobile robots may attempt to pass through a narrow exit simultaneously, resulting in a collision, damage to the robots and blockage of the exit). Contribution to Foundations of DAI 2 • because there is a need to meet global constraints Global constraints exist when the solution being developed by a group of agents must satisfy certain conditions if it is to be deemed successful. For instance, a house building team may have a budget of £250,000, a distributed monitoring system may have to react to critical events within 30 seconds and a distributed air traffic control system may have to control the planes with a fixed communication bandwidth. If individual agents acted in isolation and merely tried to optimise their local performance, then such overarching constraints are unlikely to be satisfied. Only through coordinated action will acceptable solutions be developed. 
• because no one individual has sufficient competence, resources or information to solve the entire problem Many problems cannot be solved by individuals working in isolation because they do not possess the necessary expertise, resources or information. Relevant examples include the tasks of lifting a heavy object, driving in a convoy and playing a symphony. It may be impractical or undesirable to permanently synthesize the necessary components into a single entity because of historical, political, physical or social constraints, therefore temporary alliances through cooperative problem solving may be the only way to proceed. Differing expertise may need to be combined to produce a result outside of the scope of any of the individual constituents (eg in medical diagnosis, knowledge about heart disease, blood disorders and respiratory problems may need to be combined to diagnose a patient’s illness). Different agents may have different resources (eg processing power, memory and communications) which all need to be harnessed to solve a complex problem. Finally, different agents may have different information or viewpoints of a problem (eg in concurrent engineering systems, the same product may be viewed from a design, manufacturing and marketing perspective). Even when individuals can work independently, meaning coordination is not essential, information discovered by one agent can be of sufficient use to another that the two agents can solve the problem more than twice as fast. For example, when searching for a lost object in a large area it is often better, though not essential, to do so as a team. Analysis of this “combinatorial implosion” phenomena (Kornfield and Hewitt, 1981) has resulted in the postulation that cooperative search, when sufficiently large, can display universal characteristics which are independent of the nature of either the individual processes or the particular domain being tackled (Clearwater et al., 1991). If all the agents in the system could have complete knowledge of the goals, actions and interactions of their fellow community members and could also have infinite processing power, it would be possible to know exactly what each agent was doing at present and what it is intending to do in the future. In such instances, it would be possible to avoid conflicting and redundant efforts and systems could be perfectly coordinated (Malone, 1987). However such complete knowledge is infeasible, in any community of reasonable complexity, because bandwidth limitations make it impossible for agents to be constantly informed of all developments. Even in modestly sized communities, a complete analysis to determine the detailed activities of each agent is impractical the computation and communication costs of determining the optimal set and allocation of activities far outweighs the improvement in problem solving performance (Corkill and Lesser, 1986). Contribution to Foundations of DAI 3 As all community members cannot have a complete and accurate perspective of the overall system, the next easiest way of ensuring coherent behaviour is to have one agent with a wider picture. This global controller could then direct the activities of the others, assign agents to tasks and focus problem solving to ensure coherent behaviour. However such an approach is often impractical in realistic applications because even keeping one agent informed of all the actions in the community would swamp the available bandwidth. 
Also the controller would become a severe communication bottleneck and would render the remaining components unusable if it failed. To produce systems without bottlenecks and which exhibit graceful degradation of performance, most DAI research has concentrated on developing communities in which both control and data are distributed. Distributed control means that individuals have a degree of autonomy in generating new actions and in deciding which tasks to do next. When designing such systems it is important to ensure that agents spend the bulk of their time engaged on solving the domain level problems for which they were built, rather than in communication and coordination activities. To this end, the community should be decomposed into the most modular units possible. However the designer should ensure that these units are of sufficient granularity to warrant the overhead inherent in goal distribution distributing small tasks can prove more expensive than performing them in one place (Durfee et al., 1987). The disadvantage of distributing control and data is that knowledge of the system’s overall state is dispersed throughout the community and each individual has only a partial and imprecise perspective. Thus there is an increased degree of uncertainty about each agent’s actions, meaning that it more difficult to attain coherent global behaviour for example, agents may spread misleading and distracting information, multiple agents may compete for unshareable resources simultaneously, agents may unwittingly undo the results of each others activities and the same actions may be carried out redundantly. Also the dynamics of such systems can become extremely complex, giving rise to nonlinear oscillations and chaos (Huberman and Hogg, 1988). In such cases the coordination process becomes correspondingly more difficult as well as more important1. To develop better and more integrated models of coordination, and hence improve the efficiency and utility of DAI systems, it is necessary to obtain a deeper understanding of the fundamental concepts which underpin agent interactions. The first step in this analysis is to determine the perspective from which coordination should be described. When viewing agents from a purely behaviouristic (external) perspective, it is, in general, impossible to determine whether they have coordinated their actions. Firstly, actions may be incoherent even if the agents tried to coordinate their behaviour. This may occur, for instance, because their models of each other or of the environment are incorrect. For example, robot1 may see robot2 heading for exit2 and, based on this observation and the subsequent deduction that it will use this exit, decide to use exit1. However if robot2 is heading towards exit2 to pick up a particular item and actually intends to use exit1 then there may be incoherent behaviour (both agents attempting to use the same exit) although there was coordination. Secondly, even if there is coherent action, it may not",
"title": ""
},
{
"docid": "2a2db7ff8bb353143ca2bb9ad8ec2d7d",
"text": "A revision of the genus Leptoplana Ehrenberg, 1831 in the Mediterranean basin is undertaken. This revision deals with the distribution and validity of the species of Leptoplana known for the area. The Mediterranean sub-species polyclad, Leptoplana tremellaris forma mediterranea Bock, 1913 is elevated to the specific level. Leptoplana mediterranea comb. nov. is redescribed from the Lake of Tunis, Tunisia. This flatworm is distinguished from Leptoplana tremellaris mainly by having a prostatic vesicle provided with a long diverticulum attached ventrally to the seminal vesicle, a genital pit closer to the male pore than to the female one and a twelve-eyed hatching juvenile instead of the four-eyed juvenile of L. tremellaris. The direct development in L. mediterranea is described at 15 °C.",
"title": ""
},
{
"docid": "4eeb792ffb70d9ae015e806c85000cd7",
"text": "Optimal instruction scheduling and register allocation are NP-complete problems that require heuristic solutions. By restricting the problem of register allocation and instruction scheduling for delayed-load architectures to expression trees we are able to nd optimal schedules quickly. This thesis presents a fast, optimal code scheduling algorithm for processors with a delayed load of 1 instruction cycle. The algorithm minimizes both execution time and register use and runs in time proportional to the size of the expression tree. In addition, the algorithm is simple; it ts on one page. The dominant paradigm in modern global register allocation is graph coloring. Unlike graph-coloring, our technique, Probabilistic Register Allocation, is unique in its ability to quantify the likelihood that a particular value might actually be allocated a register before allocation actually completes. By computing the likelihood that a value will be assigned a register by a register allocator, register candidates that are competing heavily for scarce registers can be isolated from those that have less competition. Probabilities allow the register allocator to concentrate its e orts where bene t is high and the likelihood of a successful allocation is also high. Probabilistic register allocation also avoids backtracking and complicated live-range splitting heuristics that plague graph-coloring algorithms. ii Optimal algorithms for instruction selection in tree-structured intermediate representations rely on dynamic programming techniques. Bottom-Up Rewrite System (BURS) technology produces extremely fast code generators by doing all possible dynamic programming before code generation. Thus, the dynamic programming process can be very slow. To make BURS technology more attractive, much e ort has gone into reducing the time to produce BURS code generators. Current techniques often require a signi cant amount of time to process a complex machine description (over 10 minutes on a fast workstation). This thesis presents an improved, faster BURS table generation algorithm that makes BURS technology more attractive for instruction selection. The optimized techniques have increased the speed to generate BURS code generators by a factor of 10 to 30. In addition, the algorithms simplify previous techniques, and were implemented in fewer than 2000 lines of C. iii Acknowledgements I have bene ted from the help and support of many people while attending the University of Wisconsin. They deserve my thanks. My mother encouraged me to pursue a PhD, and supported me, in too many ways to list, throughout the process. Professor Charles Fischer, my advisor, generously shared his time, guidance, and ideas with me. Professors Susan Horwitz and James Larus patiently read (and re-read) my thesis. Chris Fraser's zealous quest for small, simple and fast programs was a welcome change from the prevailing trend towards bloated, complex and slow software. Robert Henry explained his early BURS research and made his Codegen system available to me. Lorenz Huelsbergen distracted me with enough creative research ideas to keep graduate school fun. National Science Foundation grant CCR{8908355 provided my nancial support. Some computer resources were obtained through Digital Equipment Corporation External Research Grant 48428. iv",
"title": ""
},
{
"docid": "d1a12864a69f9919485a461f7a1ed2a8",
"text": "PURPOSE OF REVIEW\nThe focus of this review is outcome from mild traumatic brain injury. Recent literature relating to pathophysiology, neuropsychological outcome, and the persistent postconcussion syndrome will be integrated into the existing literature.\n\n\nRECENT FINDINGS\nThe MTBI literature is enormous, complex, methodologically flawed, and controversial. There have been dozens of studies relating to pathophysiology, neuropsychological outcome, and the postconcussion syndrome during the past year. Two major reviews have been published. Some of the most interesting prospective research has been done with athletes.\n\n\nSUMMARY\nThe cognitive and neurobehavioral sequelae are self-limiting and reasonably predictable. Mild traumatic brain injuries are characterized by immediate physiological changes conceptualized as a multilayered neurometabolic cascade in which affected cells typically recover, although under certain circumstances a small number might degenerate and die. The primary pathophysiologies include ionic shifts, abnormal energy metabolism, diminished cerebral blood flow, and impaired neurotransmission. During the first week after injury the brain undergoes a dynamic restorative process. Athletes typically return to pre-injury functioning (assessed using symptom ratings or brief neuropsychological measures) within 2-14 days. Trauma patients usually take longer to return to their pre-injury functioning. In these patients recovery can be incomplete and can be complicated by preexisting psychiatric or substance abuse problems, poor general health, concurrent orthopedic injuries, or comorbid problems (e.g. chronic pain, depression, substance abuse, life stress, unemployment, and protracted litigation).",
"title": ""
},
{
"docid": "a8fe62e387610682f90018ca1a56ba04",
"text": "Aarskog-Scott syndrome (AAS), also known as faciogenital dysplasia (FGD, OMIM # 305400), is an X-linked disorder of recessive inheritance, characterized by short stature and facial, skeletal, and urogenital abnormalities. AAS is caused by mutations in the FGD1 gene (Xp11.22), with over 56 different mutations identified to date. We present the clinical and molecular analysis of four unrelated families of Mexican origin with an AAS phenotype, in whom FGD1 sequencing was performed. This analysis identified two stop mutations not previously reported in the literature: p.Gln664* and p.Glu380*. Phenotypically, every male patient met the clinical criteria of the syndrome, whereas discrepancies were found between phenotypes in female patients. Our results identify two novel mutations in FGD1, broadening the spectrum of reported mutations; and provide further delineation of the phenotypic variability previously described in AAS.",
"title": ""
},
{
"docid": "b84c233a32dfe8fd004ad33a6565df9c",
"text": "Graph databases with a custom non-relational backend promote themselves to outperform relational databases in answering queries on large graphs. Recent empirical studies show that this claim is not always true. However, these studies focus only on pattern matching queries and neglect analytical queries used in practice such as shortest path, diameter, degree centrality or closeness centrality. In addition, there is no distinction between different types of pattern matching queries. In this paper, we introduce a set of analytical and pattern matching queries, and evaluate them in Neo4j and a market-leading commercial relational database system. We show that the relational database system outperforms Neo4j for our analytical queries and that Neo4j is faster for queries that do not filter on specific edge types.",
"title": ""
},
{
"docid": "685e6338727b4ab899cffe2bbc1a20fc",
"text": "Existing code similarity comparison methods, whether source or binary code based, are mostly not resilient to obfuscations. In the case of software plagiarism, emerging obfuscation techniques have made automated detection increasingly difficult. In this paper, we propose a binary-oriented, obfuscation-resilient method based on a new concept, longest common subsequence of semantically equivalent basic blocks, which combines rigorous program semantics with longest common subsequence based fuzzy matching. We model the semantics of a basic block by a set of symbolic formulas representing the input-output relations of the block. This way, the semantics equivalence (and similarity) of two blocks can be checked by a theorem prover. We then model the semantics similarity of two paths using the longest common subsequence with basic blocks as elements. This novel combination has resulted in strong resiliency to code obfuscation. We have developed a prototype and our experimental results show that our method is effective and practical when applied to real-world software.",
"title": ""
},
{
"docid": "a85c6e8a666d079c60b9bc31d6d9ae62",
"text": "When pedestrians encounter vehicles, they typically stop and wait for a signal from the driver to either cross or wait. What happens when the car is autonomous and there isn’t a human driver to signal them? This paper seeks to address this issue with an intent communication system (ICS) that acts in place of a human driver. This intent system has been developed to take into account the psychology behind what pedestrians are familiar with and what they expect from machines. The system integrates those expectations into the design of physical systems and mathematical algorithms. The goal of the system is to ensure that communication is simple, yet effective without leaving pedestrians with a sense of distrust in autonomous vehicles. To validate the ICS, two types of experiments have been run: field tests with an autonomous vehicle to determine how humans actually interact with the ICS and simulations to account for multiple potential behaviors.The results from both experiments show that humans react positively and more predictably when the intent of the vehicle is communicated compared to when the intent of the vehicle is unknown. In particular, the results from the simulation specifically showed a 142 percent difference between the pedestrian’s trust in the vehicle’s actions when the ICS is enabled and the pedestrian has prior knowledge of the vehicle than when the ICS is not enabled and the pedestrian having no prior knowledge of the vehicle.",
"title": ""
},
{
"docid": "5aee0b977228445f958cdd3016ad3171",
"text": "Cytokines are important in the regulation of haematopoiesis and immune responses, and can influence lymphocyte development. Here we have identified a class I cytokine receptor that is selectively expressed in lymphoid tissues and is capable of signal transduction. The full-length receptor was expressed in BaF3 cells, which created a functional assay for ligand detection and cloning. Conditioned media from activated human CD3+ T cells supported proliferation of the assay cell line. We constructed a complementary DNA expression library from activated human CD3+ T cells, and identified a cytokine with a four-helix-bundle structure using functional cloning. This cytokine is most closely related to IL2 and IL15, and has been designated IL21 with the receptor designated IL21R. In vitro assays suggest that IL21 has a role in the proliferation and maturation of natural killer (NK) cell populations from bone marrow, in the proliferation of mature B-cell populations co-stimulated with anti-CD40, and in the proliferation of T cells co-stimulated with anti-CD3.",
"title": ""
},
{
"docid": "7fa4f2ae0b90bb46df816dd7ae1b963b",
"text": "Traditionally, analytical methods have been used to solve imaging problems such as image restoration, inpainting, and superresolution (SR). In recent years, the fields of machine and deep learning have gained a lot of momentum in solving such imaging problems, often surpassing the performance provided by analytical approaches. Unlike analytical methods for which the problem is explicitly defined and domain-knowledge carefully engineered into the solution, deep neural networks (DNNs) do not benefit from such prior knowledge and instead make use of large data sets to learn the unknown solution to the inverse problem. In this article, we review deep-learning techniques for solving such inverse problems in imaging. More specifically, we review the popular neural network architectures used for imaging tasks, offering some insight as to how these deep-learning tools can solve the inverse problem. Furthermore, we address some fundamental questions, such as how deeplearning and analytical methods can be combined to provide better solutions to the inverse problem in addition to providing a discussion on the current limitations and future directions of the use of deep learning for solving inverse problem in imaging.",
"title": ""
},
{
"docid": "1efb7c32aee081bb652e9c6458a06303",
"text": "One of the major barriers to the deployment of Linked Data is the difficulty that data publishers have in determining which vocabularies to use to describe the semantics of data. This system report describes the Linked Open Vocabularies (LOV), a high quality catalogue of reusable vocabularies for the description of data on the Web. The LOV initiative gathers and makes visible indicators that have not been previously been harvested such as interconnection between vocabularies, version history, maintenance policy, along with past and current referent (individual or organization). The LOV goes beyond existing Semantic Web search engines and takes into consideration the value’s property type, matched with a query, to improve terms scoring. By providing an extensive range of data access methods (SPARQL endpoint, API, data dump or UI), we try to facilitate the reuse of well-documented vocabularies in the linked data ecosystem. We conclude that the adoption in many applications and methods of the LOV shows the benefits of such a set of vocabularies and related features to aid the design and publication of data on the Web.",
"title": ""
},
{
"docid": "e029a189f85f9cb47a5ad0a766efad1d",
"text": "\"Next generation\" data acquisition technologies are allowing scientists to collect exponentially more data at a lower cost. These trends are broadly impacting many scientific fields, including genomics, astronomy, and neuroscience. We can attack the problem caused by exponential data growth by applying horizontally scalable techniques from current analytics systems to accelerate scientific processing pipelines.\n In this paper, we describe ADAM, an example genomics pipeline that leverages the open-source Apache Spark and Parquet systems to achieve a 28x speedup over current genomics pipelines, while reducing cost by 63%. From building this system, we were able to distill a set of techniques for implementing scientific analyses efficiently using commodity \"big data\" systems. To demonstrate the generality of our architecture, we then implement a scalable astronomy image processing system which achieves a 2.8--8.9x improvement over the state-of-the-art MPI-based system.",
"title": ""
},
{
"docid": "9eb29fb373feaf664579e5b27db050a7",
"text": "A synthesis matrix is a table that summarizes various aspects of multiple documents. In our work, we specifically examine a problem of automatically generating a synthesis matrix for scientific literature review. As described in this paper, we first formulate the task as multidocument summarization and question-answering tasks given a set of aspects of the review based on an investigation of system summary tables of NLP tasks. Next, we present a method to address the former type of task. Our system consists of two steps: sentence ranking and sentence selection. In the sentence ranking step, the system ranks sentences in the input papers by regarding aspects as queries. We use LexRank and also incorporate query expansion and word embedding to compensate for tersely expressed queries. In the sentence selection step, the system selects sentences that remain in the final output. Specifically emphasizing the summarization type aspects, we regard this step as an integer linear programming problem with a special type of constraint imposed to make summaries comparable. We evaluated our system using a dataset we created from the ACL Anthology. The results of manual evaluation demonstrated that our selection method using comparability improved",
"title": ""
}
] |
scidocsrr
|
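The knowledge-tracing passages above predict whether a student will answer an item correctly, and the query frames this as a factorization-machine problem over sparse student/skill features. The snippet below is a minimal illustrative sketch of the second-order factorization-machine prediction under that framing; the toy encoding, the random placeholder weights, and every name are assumptions rather than code from the cited work.

```python
# Minimal sketch of a second-order factorization machine (FM) scoring a
# student-skill interaction for knowledge tracing. Weights are random
# placeholders; in practice they would be learned from response logs.
import numpy as np

def fm_predict_proba(x, w0, w, V):
    """FM prediction: w0 + <w, x> + sum_{i<j} <V_i, V_j> x_i x_j,
    computed with the O(k*n) identity 0.5 * sum_f ((Vx)_f^2 - (V^2 x^2)_f),
    then squashed through a sigmoid to give P(correct)."""
    linear = w0 + x @ w
    interaction = 0.5 * np.sum((x @ V) ** 2 - (x ** 2) @ (V ** 2))
    return 1.0 / (1.0 + np.exp(-(linear + interaction)))

# Toy encoding: one-hot student id (3 students) + one-hot skill id (4 skills)
# + a scaled count of prior attempts on that skill.
n_features, k = 3 + 4 + 1, 5
rng = np.random.default_rng(0)
w0 = 0.0
w = rng.normal(scale=0.1, size=n_features)
V = rng.normal(scale=0.1, size=(n_features, k))

x = np.zeros(n_features)
x[1] = 1.0          # student 1
x[3 + 2] = 1.0      # skill 2
x[-1] = 0.4         # normalized prior-attempt count
print(f"P(correct) = {fm_predict_proba(x, w0, w, V):.3f}")
```

In practice w0, w, and V would be fitted by gradient descent or a dedicated FM library on logged student responses; the point here is only the form of the prediction.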
ed78a3c5e53296840b1447cdde40fd47
|
Smart City Services over a Future Internet Platform Based on Internet of Things and Cloud : The Smart Parking Case
|
[
{
"docid": "0521f79f13cdbe05867b5db733feac16",
"text": "This conceptual paper discusses how we can consider a particular city as a smart one, drawing on recent practices to make cities smart. A set of the common multidimensional components underlying the smart city concept and the core factors for a successful smart city initiative is identified by exploring current working definitions of smart city and a diversity of various conceptual relatives similar to smart city. The paper offers strategic principles aligning to the three main dimensions (technology, people, and institutions) of smart city: integration of infrastructures and technology-mediated services, social learning for strengthening human infrastructure, and governance for institutional improvement and citizen engagement.",
"title": ""
}
] |
[
{
"docid": "fc90741cd456d23335407e095a14e88a",
"text": "Mobility of a hexarotor UAV in its standard configuration is limited, since all the propeller force vectors are parallel and they achieve only 4-DoF actuation, similar, e.g., to quadrotors. As a consequence, the hexarotor pose cannot track an arbitrary trajectory while the center of mass is tracking a position trajectory. In this paper, we consider a different hexarotor architecture where propellers are tilted, without the need of any additional hardware. In this way, the hexarotor gains a 6-DoF actuation which allows to independently reach positions and orientations in free space and to be able to exert forces on the environment to resist any wrench for aerial manipulation tasks. After deriving the dynamical model of the proposed hexarotor, we discuss the controllability and the tilt angle optimization to reduce the control effort for the specific task. An exact feedback linearization and decoupling control law is proposed based on the input-output mapping, considering the Jacobian and task acceleration, for non-linear trajectory tracking. The capabilities of our approach are shown by simulation results.",
"title": ""
},
{
"docid": "c19d408eeed287d2e6f83fd98460966c",
"text": "The statistical modelling of language, together with advances in wide-coverage grammar development, have led to high levels of robustness and efficiency in NLP systems and made linguistically motivated large-scale language processing a possibility (Matsuzaki et al., 2007; Kaplan et al., 2004). This paper describes an NLP system which is based on syntactic and semantic formalisms from theoretical linguistics, and which we have used to analyse the entire Gigaword corpus (1 billion words) in less than 5 days using only 18 processors. This combination of detail and speed of analysis represents a breakthrough in NLP technology. The system is built around a wide-coverage Combinatory Categorial Grammar (CCG) parser (Clark and Curran, 2004b). The parser not only recovers the local dependencies output by treebank parsers such as Collins (2003), but also the long-range depdendencies inherent in constructions such as extraction and coordination. CCG is a lexicalized grammar formalism, so that each word in a sentence is assigned an elementary syntactic structure, in CCG’s case a lexical category expressing subcategorisation information. Statistical tagging techniques can assign lexical categories with high accuracy and low ambiguity (Curran et al., 2006). The combination of finite-state supertagging and highly engineered C++ leads to a parser which can analyse up to 30 sentences per second on standard hardware (Clark and Curran, 2004a). The C&C tools also contain a number of Maximum Entropy taggers, including the CCG supertagger, a POS tagger (Curran and Clark, 2003a), chunker, and named entity recogniser (Curran and Clark, 2003b). The taggers are highly efficient, with processing speeds of over 100,000 words per second. Finally, the various components, including the morphological analyser morpha (Minnen et al., 2001), are combined into a single program. The output from this program — a CCG derivation, POS tags, lemmas, and named entity tags — is used by the module Boxer (Bos, 2005) to produce interpretable structure in the form of Discourse Representation Structures (DRSs).",
"title": ""
},
{
"docid": "7fa9bacbb6b08065ecfe0530f082a391",
"text": "This paper considers the task of articulated human pose estimation of multiple people in real world images. We propose an approach that jointly solves the tasks of detection and pose estimation: it infers the number of persons in a scene, identifies occluded body parts, and disambiguates body parts between people in close proximity of each other. This joint formulation is in contrast to previous strategies, that address the problem by first detecting people and subsequently estimating their body pose. We propose a partitioning and labeling formulation of a set of body-part hypotheses generated with CNN-based part detectors. Our formulation, an instance of an integer linear program, implicitly performs non-maximum suppression on the set of part candidates and groups them to form configurations of body parts respecting geometric and appearance constraints. Experiments on four different datasets demonstrate state-of-the-art results for both single person and multi person pose estimation.",
"title": ""
},
{
"docid": "a9cc523da8b5348dede4765a6eb9e290",
"text": "Recommender systems are efficient tools that overcome the information overload problem by providing users with the most relevant contents. This is generally done through user’s preferences/ratings acquired from log files of his former sessions. Besides these preferences, taking into account the interaction context of the user will improve the relevancy of recommendation process. In this paper, we propose a context-aware recommender system based on both user profile and context. The approach we present is based on a previous work on data personalization which leads to the definition of a Personalized Access Model that provides a set of personalization services. We show how these services can be deployed in order to provide advanced context-aware recommender systems.",
"title": ""
},
{
"docid": "9f1d881193369f1b7417d71a9a62bc19",
"text": "Neurofeedback (NFB) is a potential alternative treatment for children with ADHD that aims to optimize brain activity. Whereas most studies into NFB have investigated behavioral effects, less attention has been paid to the effects on neurocognitive functioning. The present randomized controlled trial (RCT) compared neurocognitive effects of NFB to (1) optimally titrated methylphenidate (MPH) and (2) a semi-active control intervention, physical activity (PA), to control for non-specific effects. Using a multicentre three-way parallel group RCT design, children with ADHD, aged 7–13, were randomly allocated to NFB (n = 39), MPH (n = 36) or PA (n = 37) over a period of 10–12 weeks. NFB comprised theta/beta training at CZ. The PA intervention was matched in frequency and duration to NFB. MPH was titrated using a double-blind placebo controlled procedure to determine the optimal dose. Neurocognitive functioning was assessed using parameters derived from the auditory oddball-, stop-signal- and visual spatial working memory task. Data collection took place between September 2010 and March 2014. Intention-to-treat analyses showed improved attention for MPH compared to NFB and PA, as reflected by decreased response speed during the oddball task [η p 2 = 0.21, p < 0.001], as well as improved inhibition, impulsivity and attention, as reflected by faster stop signal reaction times, lower commission and omission error rates during the stop-signal task (range η p 2 = 0.09–0.18, p values <0.008). Working memory improved over time, irrespective of received treatment (η p 2 = 0.17, p < 0.001). Overall, stimulant medication showed superior effects over NFB to improve neurocognitive functioning. Hence, the findings do not support theta/beta training applied as a stand-alone treatment in children with ADHD.",
"title": ""
},
{
"docid": "bbd9f9608409f7fa58d8cdbd8aa93989",
"text": "Competence-based theories of island effects play a central role in generative grammar, yet the graded nature of many syntactic islands has never been properly accounted for. Categorical syntactic accounts of island effects have persisted in spite of a wealth of data suggesting that island effects are not categorical in nature and that non-structural manipulations that leave island structures intact can radically alter judgments of island violations. We argue here, building on work by Deane, Kluender, and others, that processing factors have the potential to account for this otherwise unexplained variation in acceptability judgments.We report the results of self-paced reading experiments and controlled acceptability studies which explore the relationship between processing costs and judgments of acceptability. In each of the three self-paced reading studies, the data indicate that the processing cost of different types of island violations can be significantly reduced to a degree comparable to that of non-island filler-gap constructions by manipulating a single non-structural factor. Moreover, this reduction in processing cost is accompanied by significant improvements in acceptability. This evidence favors the hypothesis that island-violating constructions involve numerous processing pressures that aggregate to drive processing difficulty above a threshold so that a perception of unacceptability ensues. We examine the implications of these findings for the grammar of filler-gap dependencies.",
"title": ""
},
{
"docid": "08804b3859d70c6212bba05c7e792f9a",
"text": "Both linear mixed models (LMMs) and sparse regression models are widely used in genetics applications, including, recently, polygenic modeling in genome-wide association studies. These two approaches make very different assumptions, so are expected to perform well in different situations. However, in practice, for a given dataset one typically does not know which assumptions will be more accurate. Motivated by this, we consider a hybrid of the two, which we refer to as a \"Bayesian sparse linear mixed model\" (BSLMM) that includes both these models as special cases. We address several key computational and statistical issues that arise when applying BSLMM, including appropriate prior specification for the hyper-parameters and a novel Markov chain Monte Carlo algorithm for posterior inference. We apply BSLMM and compare it with other methods for two polygenic modeling applications: estimating the proportion of variance in phenotypes explained (PVE) by available genotypes, and phenotype (or breeding value) prediction. For PVE estimation, we demonstrate that BSLMM combines the advantages of both standard LMMs and sparse regression modeling. For phenotype prediction it considerably outperforms either of the other two methods, as well as several other large-scale regression methods previously suggested for this problem. Software implementing our method is freely available from http://stephenslab.uchicago.edu/software.html.",
"title": ""
},
{
"docid": "94fd7030e7b638e02ca89f04d8ae2fff",
"text": "State-of-the-art deep learning algorithms generally require large amounts of data for model training. Lack thereof can severely deteriorate the performance, particularly in scenarios with fine-grained boundaries between categories. To this end, we propose a multimodal approach that facilitates bridging the information gap by means of meaningful joint embeddings. Specifically, we present a benchmark that is multimodal during training (i.e. images and texts) and single-modal in testing time (i.e. images), with the associated task to utilize multimodal data in base classes (with many samples), to learn explicit visual classifiers for novel classes (with few samples). Next, we propose a framework built upon the idea of cross-modal data hallucination. In this regard, we introduce a discriminative text-conditional GAN for sample generation with a simple self-paced strategy for sample selection. We show the results of our proposed discriminative hallucinated method for 1-, 2-, and 5shot learning on the CUB dataset, where the accuracy is improved by employing multimodal data.",
"title": ""
},
{
"docid": "11eba1f4575e548ffb0e557e9aee1bbe",
"text": "Compressive sensing (CS) is an effective approach for fast Magnetic Resonance Imaging (MRI). It aims at reconstructing MR images from a small number of under-sampled data in k-space, and accelerating the data acquisition in MRI. To improve the current MRI system in reconstruction accuracy and speed, in this paper, we propose two novel deep architectures, dubbed ADMM-Nets in basic and generalized versions. ADMM-Nets are defined over data flow graphs, which are derived from the iterative procedures in Alternating Direction Method of Multipliers (ADMM) algorithm for optimizing a general CS-based MRI model. They take the sampled k-space data as inputs and output reconstructed MR images. Moreover, we extend our network to cope with complex-valued MR images. In the training phase, all parameters of the nets, e.g., transforms, shrinkage functions, etc., are discriminatively trained end-to-end. In the testing phase, they have computational overhead similar to ADMM algorithm but use optimized parameters learned from the data for CS-based reconstruction task. We investigate different configurations in network structures and conduct extensive experiments on MR image reconstruction under different sampling rates. Due to the combination of the advantages in model-based approach and deep learning approach, the ADMM-Nets achieve state-of-the-art reconstruction accuracies with fast computational speed.",
"title": ""
},
{
"docid": "b640ed2bd02ba74ee0eb925ef6504372",
"text": "In the discussion about Future Internet, Software-Defined Networking (SDN), enabled by OpenFlow, is currently seen as one of the most promising paradigm. While the availability and scalability concerns rises as a single controller could be alleviated by using replicate or distributed controllers, there lacks a flexible mechanism to allow controller load balancing. This paper proposes BalanceFlow, a controller load balancing architecture for OpenFlow networks. By utilizing CONTROLLER X action extension for OpenFlow switches and cross-controller communication, one of the controllers, called “super controller”, can flexibly tune the flow-requests handled by each controller, without introducing unacceptable propagation latencies. Experiments based on real topology show that BalanceFlow can adjust the load of each controller dynamically.",
"title": ""
},
{
"docid": "c716e7dc1c0e770001bcb57eab871968",
"text": "We present a new method to visualize from an ensemble of flow fields the statistical properties of streamlines passing through a selected location. We use principal component analysis to transform the set of streamlines into a low-dimensional Euclidean space. In this space the streamlines are clustered into major trends, and each cluster is in turn approximated by a multivariate Gaussian distribution. This yields a probabilistic mixture model for the streamline distribution, from which confidence regions can be derived in which the streamlines are most likely to reside. This is achieved by transforming the Gaussian random distributions from the low-dimensional Euclidean space into a streamline distribution that follows the statistical model, and by visualizing confidence regions in this distribution via iso-contours. We further make use of the principal component representation to introduce a new concept of streamline-median, based on existing median concepts in multidimensional Euclidean spaces. We demonstrate the potential of our method in a number of real-world examples, and we compare our results to alternative clustering approaches for particle trajectories as well as curve boxplots.",
"title": ""
},
{
"docid": "b69aae02d366b75914862f5bc726c514",
"text": "Nitrification in commercial aquaculture systems has been accomplished using many different technologies (e.g. trickling filters, fluidized beds and rotating biological contactors) but commercial aquaculture systems have been slow to adopt denitrification. Denitrification (conversion of nitrate, NO3 − to nitrogen gas, N2) is essential to the development of commercial, closed, recirculating aquaculture systems (B1 water turnover 100 day). The problems associated with manually operated denitrification systems have been incomplete denitrification (oxidation–reduction potential, ORP\\−200 mV) with the production of nitrite (NO2 ), nitric oxide (NO) and nitrous oxide (N2O) or over-reduction (ORPB−400 mV), resulting in the production of hydrogen sulfide (H2S). The need for an anoxic or anaerobic environment for the denitrifying bacteria can also result in lowered dissolved oxygen (DO) concentrations in the rearing tanks. These problems have now been overcome by the development of a computer automated denitrifying bioreactor specifically designed for aquaculture. The prototype bioreactor (process control version) has been in operation for 4 years and commercial versions of the bioreactor are now in continuous use; these bioreactors can be operated in either batch or continuous on-line modes, maintaining NO3 − concentrations below 5 ppm. The bioreactor monitors DO, ORP, pH and water flow rate and controls water pump rate and carbon feed rate. A fuzzy logic-based expert system replaced the classical process control system for operation of the bioreactor, continuing to optimize denitrification rates and eliminate discharge of toxic by-products (i.e. NO2 , NO, N2O or www.elsevier.nl/locate/aqua-online * Corresponding author. Tel.: +1-409-7722133; fax: +1-409-7726993. E-mail address: pglee@utmb.edu (P.G. Lee) 0144-8609/00/$ see front matter © 2000 Elsevier Science B.V. All rights reserved. PII: S0144 -8609 (00 )00046 -7 38 P.G. Lee et al. / Aquacultural Engineering 23 (2000) 37–59 H2S). The fuzzy logic rule base was composed of \\40 fuzzy rules; it took into account the slow response time of the system. The fuzzy logic-based expert system maintained nitrate-nitrogen concentration B5 ppm while avoiding any increase in NO2 or H2S concentrations. © 2000 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "210e26d5d11582be68337a0cc387ab8e",
"text": "This paper presents the results of experiments carried out with the goal of applying the machine learning techniques of reinforcement learning and neural networks with reinforcement learning to the game of Tetris. Tetris is a well-known computer game that can be played either by a single player or competitively with slight variations, toward the end of accumulating a high score or defeating the opponent. The fundamental hypothesis of this paper is that if the points earned in Tetris are used as the reward function for a machine learning agent, then that agent should be able to learn to play Tetris without other supervision. Toward this end, a state-space that summarizes the essential feature of the Tetris board is designed, high-level actions are developed to interact with the game, and agents are trained using Q-Learning and neural networks. As a result of these efforts, agents learn to play Tetris and to compete with other players. While the learning agents fail to accumulate as many points as the most advanced AI agents, they do learn to play more efficiently.",
"title": ""
},
{
"docid": "3bf37b20679ca6abd022571e3356e95d",
"text": "OBJECTIVE\nOur goal is to create an ontology that will allow data integration and reasoning with subject data to classify subjects, and based on this classification, to infer new knowledge on Autism Spectrum Disorder (ASD) and related neurodevelopmental disorders (NDD). We take a first step toward this goal by extending an existing autism ontology to allow automatic inference of ASD phenotypes and Diagnostic & Statistical Manual of Mental Disorders (DSM) criteria based on subjects' Autism Diagnostic Interview-Revised (ADI-R) assessment data.\n\n\nMATERIALS AND METHODS\nKnowledge regarding diagnostic instruments, ASD phenotypes and risk factors was added to augment an existing autism ontology via Ontology Web Language class definitions and semantic web rules. We developed a custom Protégé plugin for enumerating combinatorial OWL axioms to support the many-to-many relations of ADI-R items to diagnostic categories in the DSM. We utilized a reasoner to infer whether 2642 subjects, whose data was obtained from the Simons Foundation Autism Research Initiative, meet DSM-IV-TR (DSM-IV) and DSM-5 diagnostic criteria based on their ADI-R data.\n\n\nRESULTS\nWe extended the ontology by adding 443 classes and 632 rules that represent phenotypes, along with their synonyms, environmental risk factors, and frequency of comorbidities. Applying the rules on the data set showed that the method produced accurate results: the true positive and true negative rates for inferring autistic disorder diagnosis according to DSM-IV criteria were 1 and 0.065, respectively; the true positive rate for inferring ASD based on DSM-5 criteria was 0.94.\n\n\nDISCUSSION\nThe ontology allows automatic inference of subjects' disease phenotypes and diagnosis with high accuracy.\n\n\nCONCLUSION\nThe ontology may benefit future studies by serving as a knowledge base for ASD. In addition, by adding knowledge of related NDDs, commonalities and differences in manifestations and risk factors could be automatically inferred, contributing to the understanding of ASD pathophysiology.",
"title": ""
},
{
"docid": "c16470d7aa166ccb2d2724835a0c3370",
"text": "Currently, astronomical data have increased in terms of volume and complexity. To bring out the information in order to analyze and predict, the artificial intelligence techniques are required. This paper aims to apply artificial intelligence techniques to predict M-class solar flare. Artificial neural network, support vector machine and naïve bayes techniques are compared to define the best prediction performance accuracy technique. The dataset have been collected from daily data for 16 years, from 1998 to 2013. The attributes consist of solar flares data and sunspot number. The sunspots are a cooler spot on the surface of the sun, which have relation with solar flares. The Java-based machine learning WEKA is used for analysis and predicts solar flares. The best forecasted performance accuracy is achieved based on the artificial neural network method.",
"title": ""
},
{
"docid": "0c9112aeebf0b43b577c2cfd5f121d39",
"text": "The fundamental objective behind the present study is to demonstrate the visible effect of ComputerAssisted Instruction upon Iranian EFL learners' reading performance, and to see if it has any impact upon this skill in the Iranian EFLeducational settings. To this end, a sample of 50 male and female EFL learners was drawn from an English language institute in Iran. After participating in a proficiency pretest, these participants were assigned into two experimental and control groups, 25 and 25, respectively. An independent sample t-test was administered to find out if there were salient differences between the findings of the two selected groups in their reading test. The key research question was to see providing learners with computer-assisted instruction during the processes of learning and instruction for learners would have an affirmative influence upon the improvement and development of their reading skill. The results pinpointed computer-assisted instruction users' performance was meaningfully higher than that of nonusers (DF 1⁄4 48, P < 05). The consequences revealed that computer-assisted language learning and computer technology application have resulted in a greater promotion of students' reading improvement. In other words, computer-assisted instruction users outperformed the nonusers. The research, therefore, highlights the conclusion that EFL learners' use of computer-assisted instruction has the potential to promote more effective reading ability. © 2016 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "98cc82852083eae53d06621f37cde9e5",
"text": "Automatically recognizing a large number of action categories from videos is of significant importance for video understanding. Most existing works focused on the design of more discriminative feature representation, and have achieved promising results when the positive samples are enough. However, very limited efforts were spent on recognizing a novel action without any positive exemplars, which is often the case in the real settings due to the large amount of action classes and the users’ queries dramatic variations. To address this issue, we propose to perform action recognition when no positive exemplars of that class are provided, which is often known as the zero-shot learning. Different from other zero-shot learning approaches, which exploit attributes as the intermediate layer for the knowledge transfer, our main contribution is SIR, which directly leverages the semantic inter-class relationships between the known and unknown actions followed by label transfer learning. The inter-class semantic relationships are automatically measured by continuous word vectors, which learned by the skip-gram model using the large-scale text corpus. Extensive experiments on the UCF101 dataset validate the superiority of our method over fully-supervised approaches using few positive exemplars.",
"title": ""
},
{
"docid": "19f4de5f01f212bf146087d4695ce15e",
"text": "Reliable feature correspondence between frames is a critical step in visual odometry (VO) and visual simultaneous localization and mapping (V-SLAM) algorithms. In comparison with existing VO and V-SLAM algorithms, semi-direct visual odometry (SVO) has two main advantages that lead to stateof-the-art frame rate camera motion estimation: direct pixel correspondence and efficient implementation of probabilistic mapping method. This paper improves the SVO mapping by initializing the mean and the variance of the depth at a feature location according to the depth prediction from a singleimage depth prediction network. By significantly reducing the depth uncertainty of the initialized map point (i.e., small variance centred about the depth prediction), the benefits are twofold: reliable feature correspondence between views and fast convergence to the true depth in order to create new map points. We evaluate our method with two outdoor datasets: KITTI dataset and Oxford Robotcar dataset. The experimental results indicate that the improved SVO mapping results in increased robustness and camera tracking accuracy.",
"title": ""
}
] |
scidocsrr
|
937bec8416217a0f5577d1223c514146
|
Active Learning of Inverse Models with Intrinsically Motivated Goal Exploration in Robots
|
[
{
"docid": "749e11a625e94ab4e1f03a74aa6b3ab2",
"text": "We present Confidence-Based Autonomy (CBA), an interactive algorithm for policy learning from demonstration. The CBA algorithm consists of two components which take advantage of the complimentary abilities of humans and computer agents. The first component, Confident Execution, enables the agent to identify states in which demonstration is required, to request a demonstration from the human teacher and to learn a policy based on the acquired data. The algorithm selects demonstrations based on a measure of action selection confidence, and our results show that using Confident Execution the agent requires fewer demonstrations to learn the policy than when demonstrations are selected by a human teacher. The second algorithmic component, Corrective Demonstration, enables the teacher to correct any mistakes made by the agent through additional demonstrations in order to improve the policy and future task performance. CBA and its individual components are compared and evaluated in a complex simulated driving domain. The complete CBA algorithm results in the best overall learning performance, successfully reproducing the behavior of the teacher while balancing the tradeoff between number of demonstrations and number of incorrect actions during learning.",
"title": ""
}
] |
[
{
"docid": "fe3570c283fbf8b1f504e7bf4c2703a8",
"text": "We propose ThalNet, a deep learning model inspired by neocortical communication via the thalamus. Our model consists of recurrent neural modules that send features through a routing center, endowing the modules with the flexibility to share features over multiple time steps. We show that our model learns to route information hierarchically, processing input data by a chain of modules. We observe common architectures, such as feed forward neural networks and skip connections, emerging as special cases of our architecture, while novel connectivity patterns are learned for the text8 compression task. Our model outperforms standard recurrent neural networks on several sequential benchmarks.",
"title": ""
},
{
"docid": "9da1449675af42a2fc75ba8259d22525",
"text": "The purpose of the research reported here was to test empirically a conceptualization of brand associations that consists of three dimensions: brand image, brand attitude and perceived quality. A better understanding of brand associations is needed to facilitate further theoretical development and practical measurement of the construct. Three studies were conducted to: test a protocol for developing product category specific measures of brand image; investigate the dimensionality of the brand associations construct; and explore whether the degree of dimensionality of brand associations varies depending upon a brand's familiarity. Findings confirm the efficacy of the brand image protocol and indicate that brand associations differ across brands and product categories. The latter finding supports the conclusion that brand associations for different products should be measured using different items. As predicted, dimensionality of brand associations was found to be influenced by brand familiarity. Research interest in branding continues to be strong in the marketing literature (e.g. Alden et al., 1999; Kirmani et al., 1999; Erdem, 1998). Likewise, marketing managers continue to realize the power of brands, manifest in the recent efforts of many companies to build strong Internet `̀ brands'' such as amazon.com and msn.com (Narisetti, 1998). The way consumers perceive brands is a key determinant of long-term businessconsumer relationships (Fournier, 1998). Hence, building strong brand perceptions is a top priority for many firms today (Morris, 1996). Despite the importance of brands and consumer perceptions of them, marketing researchers have not used a consistent definition or measurement technique to assess consumer perceptions of brands. To address this, two scholars have recently developed extensive conceptual treatments of branding and related issues. Keller (1993; 1998) refers to consumer perceptions of brands as brand knowledge, consisting of brand awareness (recognition and recall) and brand image. Keller defines brand image as `̀ perceptions about a brand as reflected by the brand associations held in consumer memory''. These associations include perceptions of brand quality and attitudes toward the brand. Similarly, Aaker (1991, 1996a) proposes that brand associations are anything linked in memory to a brand. Keller and Aaker both appear to hypothesize that consumer perceptions of brands are The current issue and full text archive of this journal is available at http://www.emerald-library.com The authors thank Paul Herr, Donnie Lichtenstein, Rex Moody, Dave Cravens and Julie Baker for helpful comments on earlier versions of this manuscript. Funding was provided by the Graduate School of the University of Colorado and the Charles Tandy American Enterprise Center at Texas Christian University. Top priority for many firms today 350 JOURNAL OF PRODUCT & BRAND MANAGEMENT, VOL. 9 NO. 6 2000, pp. 350-368, # MCB UNIVERSITY PRESS, 1061-0421 An executive summary for managers and executive readers can be found at the end of this article multi-dimensional, yet many of the dimensions they identify appear to be very similar. Furthermore, Aaker's and Keller's conceptualizations of consumers' psychological representation of brands have not been subjected to empirical validation. 
Consequently, it is difficult to determine if the various constructs they discuss, such as brand attitudes and perceived quality, are separate dimensions of brand associations, (multi-dimensional) as they propose, or if they are simply indicators of brand associations (unidimensional). A number of studies have appeared recently which measure some aspect of consumer brand associations, but these studies do not use consistent measurement techniques and hence, their results are not comparable. They also do not discuss the issue of how to conceptualize brand associations, but focus on empirically identifying factors which enhance or diminish one component of consumer perceptions of brands (e.g. Berthon et al., 1997; Keller and Aaker, 1997; Keller et al., 1998; RoedderJohn et al., 1998; Simonin and Ruth, 1998). Hence, the proposed multidimensional conceptualizations of brand perceptions have not been tested empirically, and the empirical work operationalizes these perceptions as uni-dimensional. Our goal is to provide managers of brands a practical measurement protocol based on a parsimonious conceptual model of brand associations. The specific objectives of the research reported here are to: . test a protocol for developing category-specific measures of brand image; . examine the conceptualization of brand associations as a multidimensional construct by testing brand image, brand attitude, and perceived quality in the same model; and . explore whether the degree of dimensionality of brand associations varies depending on a brand's familiarity. In subsequent sections of this paper we explain the theoretical background of our research, describe three studies we conducted to test our conceptual model, and discuss the theoretical and managerial implications of the results. Conceptual background Brand associations According to Aaker (1991), brand associations are the category of a brand's assets and liabilities that include anything `̀ linked'' in memory to a brand (Aaker, 1991). Keller (1998) defines brand associations as informational nodes linked to the brand node in memory that contain the meaning of the brand for consumers. Brand associations are important to marketers and to consumers. Marketers use brand associations to differentiate, position, and extend brands, to create positive attitudes and feelings toward brands, and to suggest attributes or benefits of purchasing or using a specific brand. Consumers use brand associations to help process, organize, and retrieve information in memory and to aid them in making purchase decisions (Aaker, 1991, pp. 109-13). While several research efforts have explored specific elements of brand associations (Gardner and Levy, 1955; Aaker, 1991; 1996a; 1996b; Aaker and Jacobson, 1994; Aaker, 1997; Keller, 1993), no research has been reported that combined these elements in the same study in order to measure how they are interrelated. Practical measurement protocol Importance to marketers and consumers JOURNAL OF PRODUCT & BRAND MANAGEMENT, VOL. 9 NO. 6 2000 351 Scales to measure partially brand associations have been developed. For example, Park and Srinivasan (1994) developed items to measure one dimension of toothpaste brand associations that included the brand's perceived ability to fight plaque, freshen breath and prevent cavities. This scale is clearly product category specific. Aaker (1997) developed a brand personality scale with five dimensions and 42 items. This scale is not practical to use in some applied studies because of its length. 
Also, the generalizability of the brand personality scale is limited because many brands are not personality brands, and no protocol is given to adapt the scale. As Aaker (1996b, p. 113) notes, `̀ using personality as a general indicator of brand strength will be a distortion for some brands, particularly those that are positioned with respect to functional advantages and value''. Hence, many previously developed scales are too specialized to allow for general use, or are too long to be used in some applied settings. Another important issue that has not been empirically examined in the literature is whether brand associations represent a one-dimensional or multi-dimensional construct. Although this may appear to be an obvious question, we propose later in this section the conditions under which this dimensionality may be more (or less) measurable. As previously noted, Aaker (1991) defines brand associations as anything linked in memory to a brand. Three related constructs that are, by definition, linked in memory to a brand, and which have been researched conceptually and measured empirically, are brand image, brand attitude, and perceived quality. We selected these three constructs as possible dimensions or indicators of brand associations in our conceptual model. Of the many possible components of brand associations we could have chosen, we selected these three constructs because they: (1) are the three most commonly cited consumer brand perceptions in the empirical marketing literature; (2) have established, reliable, published measures in the literature; and (3) are three dimensions discussed frequently in prior conceptual research (Aaker, 1991; 1996; Keller, 1993; 1998). We conceptualize brand image (functional and symbolic perceptions), brand attitude (overall evaluation of a brand), and perceived quality (judgments of overall superiority) as possible dimensions of brand associations (see Figure 1). Brand image, brand attitude, and perceived quality Brand image is defined as the reasoned or emotional perceptions consumers attach to specific brands (Dobni and Zinkhan,1990) and is the first consumer brand perception that was identified in the marketing literature (Gardner and Levy, 1955). Brand image consists of functional and symbolic brand beliefs. A measurement technique using semantic differential items generated for the relevant product category has been suggested for measuring brand image (Dolich, 1969; Fry and Claxton, 1971). Brand image associations are largely product category specific and measures should be customized for the unique characteristics of specific brand categories (Park and Srinivasan, 1994; Bearden and Etzel, 1982). Brand attitude is defined as consumers' overall evaluation of a brand ± whether good or bad (Mitchell and Olson, 1981). Semantic differential scales measuring brand attitude have frequently appeared in the marketing Linked in memory to a brand Reasoned or emotional perceptions 352 JOURNAL OF PRODUCT & BRAND MANAGEMENT, VOL. 9 NO. 6 2000 literature. Bruner and Hensel (1996) reported 66 published studies which measured brand attitud",
"title": ""
},
{
"docid": "7fafda966819bb780b8b2b6ada4cc468",
"text": "Acne inversa (AI) is a chronic and recurrent inflammatory skin disease. It occurs in intertriginous areas of the skin and causes pain, drainage, malodor and scar formation. While supposedly caused by an autoimmune reaction, bacterial superinfection is a secondary event in the disease process. A unique case of a 43-year-old male patient suffering from a recurring AI lesion in the left axilla was retrospectively analysed. A swab revealed Actinomyces neuii as the only agent growing in the lesion. The patient was then treated with Amoxicillin/Clavulanic Acid 3 × 1 g until he was cleared for surgical excision. The intraoperative swab was negative for A. neuii. Antibiotics were prescribed for another 4 weeks and the patient has remained relapse free for more than 12 months now. Primary cutaneous Actinomycosis is a rare entity and the combination of AI and Actinomycosis has never been reported before. Failure to detect superinfections of AI lesions with slow-growing pathogens like Actinomyces spp. might contribute to high recurrence rates after immunosuppressive therapy of AI. The present case underlines the potentially multifactorial pathogenesis of the disease and the importance of considering and treating potential infections before initiating immunosuppressive regimens for AI patients.",
"title": ""
},
{
"docid": "627587e2503a2555846efb5f0bca833b",
"text": "Image generation has been successfully cast as an autoregressive sequence generation or transformation problem. Recent work has shown that self-attention is an effective way of modeling textual sequences. In this work, we generalize a recently proposed model architecture based on self-attention, the Transformer, to a sequence modeling formulation of image generation with a tractable likelihood. By restricting the selfattention mechanism to attend to local neighborhoods we significantly increase the size of images the model can process in practice, despite maintaining significantly larger receptive fields per layer than typical convolutional neural networks. While conceptually simple, our generative models significantly outperform the current state of the art in image generation on ImageNet, improving the best published negative log-likelihood on ImageNet from 3.83 to 3.77. We also present results on image super-resolution with a large magnification ratio, applying an encoder-decoder configuration of our architecture. In a human evaluation study, we find that images generated by our super-resolution model fool human observers three times more often than the previous state of the art.",
"title": ""
},
{
"docid": "53307a72e0a50b65da45f83e5a8ff9f0",
"text": "Although few studies dispute that there are gender differences in depression, the etiology is still unknown. In this review, we cover a number of proposed factors and the evidences for and against these factors that may account for gender differences in depression. These include the possible role of estrogens at puberty, differences in exposure to childhood trauma, differences in stress perception between men and women and the biological differences in stress response. None of these factors seem to explain gender differences in depression. Finally, we do know that when depressed, women show greater hypothalamic–pituitary–adrenal (HPA) axis activation than men and that menopause with loss of estrogens show the greatest HPA axis dysregulation. It may be the constantly changing steroid milieu that contributes to these phenomena and vulnerability to depression.",
"title": ""
},
{
"docid": "2cac667e743d0a020ef136215339e1ed",
"text": "We present the design and experimental validation of a scalable dc microgrid for rural electrification in emerging regions. A salient property of the dc microgrid architecture is the distributed control of the grid voltage, which enables both instantaneous power sharing and a metric for determining the available grid power. A droop-voltage power-sharing scheme is implemented wherein the bus voltage droops in response to low supply/high demand. In addition, the architecture of the dc microgrid aims to minimize the losses associated with stored energy by distributing storage to individual households. In this way, the number of conversion steps and line losses are reduced. We calculate that the levelized cost of electricity of the proposed dc microgrid over a 15-year time horizon is $0.35/kWh. We also present the experimental results from a scaled-down experimental prototype that demonstrates the steady-state behavior, the perturbation response, and the overall efficiency of the system. Moreover, we present fault mitigation strategies for various faults that can be expected to occur in a microgrid distribution system. The experimental results demonstrate the suitability of the presented dc microgrid architecture as a technically advantageous and cost-effective method for electrifying emerging regions.",
"title": ""
},
{
"docid": "e9438241965b4cb6601624456b60f990",
"text": "This paper proposes a model for designing games around Artificial Intelligence (AI). AI-based games put AI in the foreground of the player experience rather than in a supporting role as is often the case in many commercial games. We analyze the use of AI in a number of existing games and identify design patterns for AI in games. We propose a generative ideation technique to combine a design pattern with an AI technique or capacity to make new AI-based games. Finally, we demonstrate this technique through two examples of AI-based game prototypes created using these patterns.",
"title": ""
},
{
"docid": "e567034595d9bb6a236d15b8623efce7",
"text": "In this paper, we use artificial neural networks (ANNs) for voice conversion and exploit the mapping abilities of an ANN model to perform mapping of spectral features of a source speaker to that of a target speaker. A comparative study of voice conversion using an ANN model and the state-of-the-art Gaussian mixture model (GMM) is conducted. The results of voice conversion, evaluated using subjective and objective measures, confirm that an ANN-based VC system performs as good as that of a GMM-based VC system, and the quality of the transformed speech is intelligible and possesses the characteristics of a target speaker. In this paper, we also address the issue of dependency of voice conversion techniques on parallel data between the source and the target speakers. While there have been efforts to use nonparallel data and speaker adaptation techniques, it is important to investigate techniques which capture speaker-specific characteristics of a target speaker, and avoid any need for source speaker's data either for training or for adaptation. In this paper, we propose a voice conversion approach using an ANN model to capture speaker-specific characteristics of a target speaker and demonstrate that such a voice conversion approach can perform monolingual as well as cross-lingual voice conversion of an arbitrary source speaker.",
"title": ""
},
{
"docid": "c27eecae33fe87779d3452002c1bdf8a",
"text": "When intelligent agents learn visuomotor behaviors from human demonstrations, they may benefit from knowing where the human is allocating visual attention, which can be inferred from their gaze. A wealth of information regarding intelligent decision making is conveyed by human gaze allocation; hence, exploiting such information has the potential to improve the agents’ performance. With this motivation, we propose the AGIL (Attention Guided Imitation Learning) framework. We collect high-quality human action and gaze data while playing Atari games in a carefully controlled experimental setting. Using these data, we first train a deep neural network that can predict human gaze positions and visual attention with high accuracy (the gaze network) and then train another network to predict human actions (the policy network). Incorporating the learned attention model from the gaze network into the policy network significantly improves the action prediction accuracy and task performance.",
"title": ""
},
{
"docid": "2b540b2e48d5c381e233cb71c0cf36fe",
"text": "In this paper we review the most peculiar and interesting information-theoretic and communications features of fading channels. We first describe the statistical models of fading channels which are frequently used in the analysis and design of communication systems. Next, we focus on the information theory of fading channels, by emphasizing capacity as the most important performance measure. Both single-user and multiuser transmission are examined. Further, we describe how the structure of fading channels impacts code design, and finally overview equalization of fading multipath channels.",
"title": ""
},
{
"docid": "d2c13b3daa3712b32172126404b14c20",
"text": "To adequately perform perioral rejuvenation procedures, it is necessary to understand the morphologic changes caused by facial aging. Anthropometric analyses of standardized frontal view and profile photographs could help to investigate such changes. Photographs of 346 male individuals were evaluated using 12 anthropometric indices. Data from two groups of health subjects, the first exhibiting a mean age of nearly 20 and the second of nearly 60 years, were compared. To evaluate the influence of combined nicotine and alcohol abuse, the data of the second group were compared to a third group exhibiting a similar mean age who were known alcohol and nicotine abusers. Comparison of the first to the second group showed significant decrease of the vertical height of upper and lower vermilion and relative enlargement of the cutaneous part of upper and lower lips. This effect was stronger in the upper vermilion and medial upper lips. The sagging of the upper lips led to the appearance of an increased mouth width. In the third group the effect of sagging of the upper lips, and especially its medial portion was significantly higher compared to the second group. The photo-assisted anthropometric measurements investigated gave reproducible results related to perioral aging.",
"title": ""
},
{
"docid": "00e56a93a3b8ee3a3d2cdab2fd27375e",
"text": "Omnidirectional image and video have gained popularity thanks to availability of capture and display devices for this type of content. Recent studies have assessed performance of objective metrics in predicting visual quality of omnidirectional content. These metrics, however, have not been rigorously validated by comparing their prediction results with ground-truth subjective scores. In this paper, we present a set of 360-degree images along with their subjective quality ratings. The set is composed of four contents represented in two geometric projections and compressed with three different codecs at four different bitrates. A range of objective quality metrics for each stimulus is then computed and compared to subjective scores. Statistical analysis is performed in order to assess performance of each objective quality metric in predicting subjective visual quality as perceived by human observers. Results show the estimated performance of the state-of-the-art objective metrics for omnidirectional visual content. Objective metrics specifically designed for 360-degree content do not outperform conventional methods designed for 2D images.",
"title": ""
},
{
"docid": "f395e3d72341bd20e1a16b97259bad7d",
"text": "Malicious software in form of Internet worms, computer viru ses, and Trojan horses poses a major threat to the security of network ed systems. The diversity and amount of its variants severely undermine the effectiveness of classical signature-based detection. Yet variants of malware f milies share typical behavioral patternsreflecting its origin and purpose. We aim to exploit these shared patterns for classification of malware and propose a m thod for learning and discrimination of malware behavior. Our method proceed s in three stages: (a) behavior of collected malware is monitored in a sandbox envi ro ment, (b) based on a corpus of malware labeled by an anti-virus scanner a malware behavior classifieris trained using learning techniques and (c) discriminativ e features of the behavior models are ranked for explanation of classifica tion decisions. Experiments with di fferent heterogeneous test data collected over several month s using honeypots demonstrate the e ffectiveness of our method, especially in detecting novel instances of malware families previously not recognized by commercial anti-virus software.",
"title": ""
},
{
"docid": "1e100608fd78b1e20020f892784199ed",
"text": "In this paper we introduce a system for unsupervised object discovery and segmentation of RGBD-images. The system models the sensor noise directly from data, allowing accurate segmentation without sensor specific hand tuning of measurement noise models making use of the recently introduced Statistical Inlier Estimation (SIE) method [1]. Through a fully probabilistic formulation, the system is able to apply probabilistic inference, enabling reliable segmentation in previously challenging scenarios. In addition, we introduce new methods for filtering out false positives, significantly improving the signal to noise ratio. We show that the system significantly outperform state-of-the-art in on a challenging real-world dataset.",
"title": ""
},
{
"docid": "335220bbad7798a19403d393bcbbf7fb",
"text": "In today’s computerized and information-based society, text data is rich but messy. People are soaked with vast amounts of natural-language text data, ranging from news articles, social media post, advertisements, to a wide range of textual information from various domains (medical records, corporate reports). To turn such massive unstructured text data into actionable knowledge, one of the grand challenges is to gain an understanding of the factual information (e.g., entities, attributes, relations, events) in the text. In this tutorial, we introduce data-driven methods to construct structured information networks (where nodes are different types of entities attached with attributes, and edges are different relations between entities) for text corpora of different kinds (especially for massive, domain-specific text corpora) to represent their factual information. We focus on methods that are minimally-supervised, domain-independent, and languageindependent for fast network construction across various application domains (news, web, biomedical, reviews). We demonstrate on real datasets including news articles, scientific publications, tweets and reviews how these constructed networks aid in text analytics and knowledge discovery at a large scale.",
"title": ""
},
{
"docid": "139d9d5866a1e455af954b2299bdbcf6",
"text": "1 . I n t r o d u c t i o n Reasoning about knowledge and belief has long been an issue of concern in philosophy and artificial intelligence (cf. [Hil],[MH],[Mo]). Recently we have argued that reasoning about knowledge is also crucial in understanding and reasoning about protocols in distributed systems, since messages can be viewed as changing the state of knowledge of a system [HM]; knowledge also seems to be of v i tal importance in cryptography theory [Me] and database theory. In order to formally reason about knowledge, we need a good semantic model. Part of the difficulty in providing such a model is that there is no agreement on exactly what the properties of knowledge are or should * This author's work was supported in part by DARPA contract N00039-82-C-0250. be. For example, is it the case that you know what facts you know? Do you know what you don't know? Do you know only true things, or can something you \"know\" actually be false? Possible-worlds semantics provide a good formal tool for \"customizing\" a logic so that, by making minor changes in the semantics, we can capture different sets of axioms. The idea, first formalized by Hintikka [Hi l ] , is that in each state of the world, an agent (or knower or player: we use all these words interchangeably) has other states or worlds that he considers possible. An agent knows p exactly if p is true in all the worlds that he considers possible. As Kripke pointed out [Kr], by imposing various conditions on this possibil i ty relation, we can capture a number of interesting axioms. For example, if we require that the real world always be one of the possible worlds (which amounts to saying that the possibility relation is reflexive), then it follows that you can't know anything false. Similarly, we can show that if the relation is transitive, then you know what you know. If the relation is transitive and symmetric, then you also know what you don't know. (The one-knower models where the possibility relation is reflexive corresponds to the classical modal logic T, while the reflexive and transitive case corresponds to S4, and the reflexive, symmetric and transitive case corresponds to S5.) Once we have a general framework for modelling knowledge, a reasonable question to ask is how hard it is to reason about knowledge. In particular, how hard is it to decide if a given formula is valid or satisfiable? The answer to this question depends crucially on the choice of axioms. For example, in the oneknower case, Ladner [La] has shown that for T and S4 the problem of deciding satisfiability is complete in polynomial space, while for S5 it is NP-complete, J. Halpern and Y. Moses 481 and thus no harder than the satisf iabi l i ty problem for propos i t iona l logic. Our a im in th is paper is to reexamine the possiblewor lds f ramework for knowledge and belief w i t h four par t icu lar po ints of emphasis: (1) we show how general techniques for f inding decision procedures and complete ax iomat izat ions apply to models for knowledge and belief, (2) we show how sensitive the di f f icul ty of the decision procedure is to such issues as the choice of moda l operators and the ax iom system, (3) we discuss how not ions of common knowledge and impl ic i t knowl edge among a group of agents fit in to the possibleworlds f ramework, and, f inal ly, (4) we consider to what extent the possible-worlds approach is a viable one for model l ing knowledge and belief. 
We begin in Section 2 by reviewing possible-world semantics in deta i l , and prov ing tha t the many-knower versions of T, S4, and S5 do indeed capture some of the more common axiomatizat ions of knowledge. In Section 3 we t u r n to complexity-theoret ic issues. We review some standard not ions f rom complexi ty theory, and then reprove and extend Ladner's results to show tha t the decision procedures for the many-knower versions of T, S4, and S5 are a l l complete in po lynomia l space.* Th is suggests tha t for S5, reasoning about many agents' knowledge is qual i ta t ive ly harder than jus t reasoning about one agent's knowledge of the real wor ld and of his own knowledge. In Section 4 we t u rn our at tent ion to mod i fy ing the model so tha t i t can deal w i t h belief rather than knowledge, where one can believe something tha t is false. Th is turns out to be somewhat more compl i cated t han dropp ing the assumption of ref lexivi ty, but i t can s t i l l be done in the possible-worlds f ramework. Results about decision procedures and complete axiomat i i a t i ons for belief paral le l those for knowledge. In Section 5 we consider what happens when operators for common knowledge and implicit knowledge are added to the language. A group has common knowledge of a fact p exact ly when everyone knows tha t everyone knows tha t everyone knows ... tha t p is t rue. (Common knowledge is essentially wha t McCar thy 's \" f oo l \" knows; cf. [MSHI] . ) A group has i m p l ic i t knowledge of p i f, roughly speaking, when the agents poo l the i r knowledge together they can deduce p. (Note our usage of the not ion of \" imp l i c i t knowl edge\" here differs s l ight ly f rom the way it is used in [Lev2] and [FH].) As shown in [ H M l ] , common knowl edge is an essential state for reaching agreements and * A problem is said to be complete w i th respect to a complexity class if, roughly speaking, it is the hardest problem in that class (see Section 3 for more details). coordinating action. For very similar reasons, common knowledge also seems to play an important role in human understanding of speech acts (cf. [CM]). The notion of implicit knowledge arises when reasoning about what states of knowledge a group can attain through communication, and thus is also crucial when reasoning about the efficacy of speech acts and about communication protocols in distributed systems. It turns out that adding an implicit knowledge operator to the language does not substantially change the complexity of deciding the satisfiability of formulas in the language, but this is not the case for common knowledge. Using standard techniques from PDL (Propositional Dynamic Logic; cf. [FL],[Pr]), we can show that when we add common knowledge to the language, the satisfiability problem for the resulting logic (whether it is based on T, S4, or S5) is complete in deterministic exponential time, as long as there at least two knowers. Thus, adding a common knowledge operator renders the decision procedure qualitatively more complex. (Common knowledge does not seem to be of much interest in the in the case of one knower. In fact, in the case of S4 and S5, if there is only one knower, knowledge and common knowledge are identical.) We conclude in Section 6 with some discussion of the appropriateness of the possible-worlds approach for capturing knowledge and belief, particularly in light of our results on computational complexity. 
Detailed proofs of the theorems stated here, as well as further discussion of these results, can be found in the ful l paper ([HM2]). 482 J. Halpern and Y. Moses 2.2 Possib le-wor lds semant ics: Following Hintikka [H i l ] , Sato [Sa], Moore [Mo], and others, we use a posaible-worlds semantics to model knowledge. This provides us wi th a general framework for our semantical investigations of knowledge and belief. (Everything we say about \"knowledge* in this subsection applies equally well to belief.) The essential idea behind possible-worlds semantics is that an agent's state of knowledge corresponds to the extent to which he can determine what world he is in. In a given world, we can associate wi th each agent the set of worlds that, according to the agent's knowledge, could possibly be the real world. An agent is then said to know a fact p exactly if p is true in all the worlds in this set; he does not know p if there is at least one world that he considers possible where p does not hold. * We discuss the ramifications of this point in Section 6. ** The name K (m) is inspired by the fact that for one knower, the system reduces to the well-known modal logic K. J. Halpern and Y. Moses 483 484 J. Halpern and Y. Moses that can be said is that we are modelling a rather idealised reaaoner, who knows all tautologies and all the logical consequences of his knowledge. If we take the classical interpretation of knowledge as true, justified belief, then an axiom such as A3 seems to be necessary. On the other hand, philosophers have shown that axiom A5 does not hold wi th respect to this interpretation ([Len]). However, the S5 axioms do capture an interesting interpretation of knowledge appropriate for reasoning about distributed systems (see [HM1] and Section 6). We continue here wi th our investigation of all these logics, deferring further comments on their appropriateness to Section 6. Theorem 3 implies that the provable formulas of K (m) correspond precisely to the formulas that are valid for Kripke worlds. As Kripke showed [Kr], there are simple conditions that we can impose on the possibility relations Pi so that the valid formulas of the resulting worlds are exactly the provable formulas of T ( m ) , S4 (m) , and S5(m) respectively. We wi l l try to motivate these conditions, but first we need a few definitions. * Since Lemma 4(b) says that a relation that is both reflexive and Euclidean must also be transitive, the reader may auspect that axiom A4 ia redundant in S5. Thia indeed ia the caae. J. Halpern and Y. Moses 485 486 J. Halpern and Y. Moses",
"title": ""
},
{
"docid": "5ca36a618eb3eee79e40228fa71dc029",
"text": "To achieve the long-term goal of machines being able to engage humans in conversation, our models should be engaging. We focus on communication grounded in images, whereby a dialogue is conducted based on a given photo, a setup that is naturally engaging to humans (Hu et al., 2014). We collect a large dataset of grounded human-human conversations, where humans are asked to play the role of a given personality, as the use of personality in conversation has also been shown to be engaging (Shuster et al., 2018). Our dataset, ImageChat, consists of 202k dialogues and 401k utterances over 202k images using 215 possible personality traits. We then design a set of natural architectures using state-of-the-art image and text representations, considering various ways to fuse the components. Automatic metrics and human evaluations show the efficacy of approach, in particular where our best performing model is preferred over human conversationalists 47.7% of the time.",
"title": ""
},
{
"docid": "20c3addef683da760967df0c1e83f8e3",
"text": "An RF duplexer has been fabricated on a CMOS IC for use in 3G/4G cellular transceivers. The passive circuit sustains large voltage swings in the transmit path, and isolates the receive path from the transmitter by more than 45 dB across a bandwidth of 200 MHz in 3G/4G bands I, II, III, IV, and IX. A low noise amplifier embedded into the duplexer demonstrates a cascade noise figure of 5 dB with more than 27 dB of gain. The duplexer inserts 2.5 dB of loss between power amplifier and antenna.",
"title": ""
},
{
"docid": "cc5126ea8a6f9ebca587970377966067",
"text": "In this paper reliability model of the converter valves in VSC-HVDC system is analyzed. The internal structure and functions of converter valve are presented. Taking the StakPak IGBT from ABB Semiconductors for example, the mathematical reliability model for converter valve and its sub-module is established. By means of calculation and analysis, the reliability indices of converter valve under various voltage classes and redundancy designs are obtained, and then optimal redundant scheme is chosen. KeywordsReliability Analysis; VSC-HVDC; Converter Valve",
"title": ""
},
{
"docid": "1e4f13016c846039f7bbed47810b8b3d",
"text": "This paper characterizes general properties of useful, or Effective, explanations of recommendations. It describes a methodology based on focus groups, in which we elicit what helps moviegoers decide whether or not they would like a movie. Our results highlight the importance of personalizing explanations to the individual user, as well as considering the source of recommendations, user mood, the effects of group viewing, and the effect of explanations on user expectations.",
"title": ""
}
] |
scidocsrr
|
945a35c975c8aa608373ee01f80fa0d9
|
Assessment of play and leisure: delineation of the problem.
|
[
{
"docid": "1a7e2ca13d00b6476820ad82c2a68780",
"text": "To understand the dynamics of mental health, it is essential to develop measures for the frequency and the patterning of mental processes in every-day-life situations. The Experience-Sampling Method (ESM) is an attempt to provide a valid instrument to describe variations in self-reports of mental processes. It can be used to obtain empirical data on the following types of variables: a) frequency and patterning of daily activity, social interaction, and changes in location; b) frequency, intensity, and patterning of psychological states, i.e., emotional, cognitive, and conative dimensions of experience; c) frequency and patterning of thoughts, including quality and intensity of thought disturbance. The article reviews practical and methodological issues of the ESM and presents evidence for its short- and long-term reliability when used as an instrument for assessing the variables outlined above. It also presents evidence for validity by showing correlation between ESM measures on the one hand and physiological measures, one-time psychological tests, and behavioral indices on the other. A number of studies with normal and clinical populations that have used the ESM are reviewed to demonstrate the range of issues to which the technique can be usefully applied.",
"title": ""
}
] |
[
{
"docid": "b5e0faba5be394523d10a130289514c2",
"text": "Child neglect results from either acts of omission or of commission. Fatalities from neglect account for 30% to 40% of deaths caused by child maltreatment. Deaths may occur from failure to provide the basic needs of infancy such as food or medical care. Medical care may also be withheld because of parental religious beliefs. Inadequate supervision may contribute to a child's injury or death through adverse events involving drowning, fires, and firearms. Recognizing the factors contributing to a child's death is facilitated by the action of multidisciplinary child death review teams. As with other forms of child maltreatment, prevention and early intervention strategies are needed to minimize the risk of injury and death to children.",
"title": ""
},
{
"docid": "945f129f81e9b7a69a6ba9dc982ed7c6",
"text": "Geographic location of a person is important contextual information that can be used in a variety of scenarios like disaster relief, directional assistance, context-based advertisements, etc. GPS provides accurate localization outdoors but is not useful inside buildings. We propose an coarse indoor localization approach that exploits the ubiquity of smart phones with embedded sensors. GPS is used to find the building in which the user is present. The Accelerometers are used to recognize the user’s dynamic activities (going up or down stairs or an elevator) to determine his/her location within the building. We demonstrate the ability to estimate the floor-level of a user. We compare two techniques for activity classification, one is naive Bayes classifier and the other is based on dynamic time warping. The design and implementation of a localization application on the HTC G1 platform running Google Android is also presented.",
"title": ""
},
{
"docid": "21122ab1659629627c46114cc5c3b838",
"text": "The introduction of more onboard autonomy in future single and multi-satellite missions is both a question of limited onboard resources and of how far can we actually thrust the autonomous functionalities deployed on board. In-flight experience with nasa's Deep Space 1 and Earth Observing 1 has shown how difficult it is to design, build and test reliable software for autonomy. The degree to which system-level onboard autonomy will be deployed in the single and multi satellite systems of tomorrow will depend, among other things, on the progress made in two key software technologies: autonomous onboard planning and robust execution. Parallel to the developments in these two areas, the actual integration of planning and execution engines is still nowadays a crucial issue in practical application. This paper presents an onboard autonomous model-based executive for execution of time-flexible plans. It describes its interface with an apsi-based timeline-based planner, its control approaches, architecture and its modelling language as an extension of apsl's ddl. In addition, it introduces a modified version of the classical blocks world toy planning problem which has been extended in scope and with a runtime environment for evaluation of integrated planning and executive engines.",
"title": ""
},
{
"docid": "8fb5a9d2f68601d9e07d4a96ea45e585",
"text": "The solid-state transformer (SST) is a promising power electronics solution that provides voltage regulation, reactive power compensation, dc-sourced renewable integration, and communication capabilities, in addition to the traditional step-up/step-down functionality of a transformer. It is gaining widespread attention for medium-voltage (MV) grid interfacing to enable increases in renewable energy penetration, and, commercially, the SST is of interest for traction applications due to its light weight as a result of medium-frequency isolation. The recent advancements in silicon carbide (SiC) power semiconductor device technology are creating a new paradigm with the development of discrete power semiconductor devices in the range of 10-15 kV and even beyond-up to 22 kV, as recently reported. In contrast to silicon (Si) IGBTs, which are limited to 6.5-kV blocking, these high-voltage (HV) SiC devices are enabling much simpler converter topologies and increased efficiency and reliability, with dramatic reductions of the size and weight of the MV power-conversion systems. This article presents the first-ever demonstration results of a three-phase MV grid-connected 100-kVA SST enabled by 15-kV SiC n-IGBTs, with an emphasis on the system design and control considerations. The 15-kV SiC n-IGBTs were developed by Cree and packaged by Powerex. The low-voltage (LV) side of the SST is built with 1,200-V, 100-A SiC MOSFET modules. The galvanic isolation is provided by three single-phase 22-kV/800-V, 10-kHz, 35-kVA-rated high-frequency (HF) transformers. The three-phase all-SiC SST that interfaces with 13.8-kV and 480-V distribution grids is referred to as a transformerless intelligent power substation (TIPS). The characterization of the 15-kV SiC n-IGBTs, the development of the MV isolated gate driver, and the design, control, and system demonstration of the TIPS were undertaken by North Carolina State University's (NCSU's) Future Renewable Electrical Energy Delivery and Management (FREEDM) Systems Center, sponsored by an Advanced Research Projects Agency-Energy (ARPA-E) project.",
"title": ""
},
{
"docid": "05874da7b27475377dcd8f7afdd1bc5a",
"text": "The main aim of this paper is to provide automatic irrigation to the plants which helps in saving money and water. The entire system is controlled using 8051 micro controller which is programmed as giving the interrupt signal to the sprinkler.Temperature sensor and humidity sensor are connected to internal ports of micro controller via comparator,When ever there is a change in temperature and humidity of the surroundings these sensors senses the change in temperature and humidity and gives an interrupt signal to the micro-controller and thus the sprinkler is activated.",
"title": ""
},
{
"docid": "9b34b171858ad3ebda73848b7bb5372d",
"text": "INTRODUCTION\nVulvar and vaginal atrophy (VVA) affects up to two thirds of postmenopausal women, but most symptomatic women do not receive prescription therapy.\n\n\nAIM\nTo evaluate postmenopausal women's perceptions of VVA and treatment options for symptoms in the Women's EMPOWER survey.\n\n\nMETHODS\nThe Rose Research firm conducted an internet survey of female consumers provided by Lightspeed Global Market Insite. Women at least 45 years of age who reported symptoms of VVA and residing in the United States were recruited.\n\n\nMAIN OUTCOME MEASURES\nSurvey results were compiled and analyzed by all women and by treatment subgroups.\n\n\nRESULTS\nRespondents (N = 1,858) had a median age of 58 years (range = 45-90). Only 7% currently used prescribed VVA therapies (local estrogen therapies or oral selective estrogen receptor modulators), whereas 18% were former users of prescribed VVA therapies, 25% used over-the-counter treatments, and 50% had never used any treatment. Many women (81%) were not aware of VVA or that it is a medical condition. Most never users (72%) had never discussed their symptoms with a health care professional (HCP). The main reason for women not to discuss their symptoms with an HCP was that they believed that VVA was just a natural part of aging and something to live with. When women spoke to an HCP about their symptoms, most (85%) initiated the discussion. Preferred sources of information were written material from the HCP's office (46%) or questionnaires to fill out before seeing the HCP (41%).The most negative attributes of hormonal products were perceived risk of systemic absorption, messiness of local creams, and the need to reuse an applicator. Overall, HCPs only recommended vaginal estrogen therapy to 23% and oral hormone therapies to 18% of women. When using vaginal estrogen therapy, less than half of women adhered to and complied with posology; only 33% to 51% of women were very to extremely satisfied with their efficacy.\n\n\nCONCLUSION\nThe Women's EMPOWER survey showed that VVA continues to be an under-recognized and under-treated condition, despite recent educational initiatives. A disconnect in education, communication, and information between HCPs and their menopausal patients remains prevalent. Kingsberg S, Krychman M, Graham S, et al. The Women's EMPOWER Survey: Identifying Women's Perceptions on Vulvar and Vaginal Atrophy and Its Treatment. J Sex Med 2017;14:413-424.",
"title": ""
},
{
"docid": "993d9d3edb3267328632a0ecf0a707f6",
"text": "Thin-film transistors (TFT) in hydrogenated amorphoussilicon, amorphousmetal oxide, andsmallmolecule and polymer organic semiconductors would all hold promise as potential device candidates to large area flexible electronics applications. A universal compact dc model was developed with a proper balance between the physical and mathematical approaches for these thin-film transistors (TFTs). It can capture the common key parameters used for device performance benchmarking of the different TFTs while being applicable to a wide range of TFT technologies in different materials and device structures. Based on this model, a user-friendly tool was developed to provide an interactive way for convenient parameter extraction. The model is continuous from the off-state and subthreshold regimes to the above-threshold regime, avoiding the convergence problems when being used in SPICE circuit simulations. Finally, for verification, it was implemented into a SPICE circuit simulator using Verilog-A to simulate a TFT circuit examplewith the simulated results agreeing verywell with the experimental measurements.",
"title": ""
},
{
"docid": "6dd151a412531cfaf043e5cef616769b",
"text": "In this paper a pattern classification and object recognition approach based on bio-inspired techniques is presented. It exploits the Hierarchical Temporal Memory (HTM) topology, which imitates human neocortex for recognition and categorization tasks. The HTM comprises a hierarchical tree structure that exploits enhanced spatiotemporal modules to memorize objects appearing in various orientations. In accordance with HTM's biological inspiration, human vision mechanisms can be used to preprocess the input images. Therefore, the input images undergo a saliency computation step, revealing the plausible information of the scene, where a human might fixate. The adoption of the saliency detection module releases the HTM network from memorizing redundant information and augments the classification accuracy. The efficiency of the proposed framework has been experimentally evaluated in the ETH-80 dataset, and the classification accuracy has been found to be greater than other HTM systems.",
"title": ""
},
{
"docid": "54041038352cf93f57d56153085a6f7c",
"text": "This study seeks to evaluate how student information retention and comprehension can be influenced by their preferred note taking medium. One-hundred and nine college students watched lectures and took notes with an assigned medium: longhand or computer. Prior to watching the lectures, participants self-reported their preferred note taking medium. These lectures were pre-recorded and featured PowerPoint presentations containing information relating to the lecture. After the lectures, students were able to review their notes briefly before they engaged in activities unrelated to the lecture. They then took two tests based on the lecture material and completed a questionnaire further inquiring about their note taking tendencies. Tests contained two types of questions: conceptual and specific. A main effect of question type was found, with both computer and longhand note takers performing better on specific questions. Further, computer-preferred note takers who were forced to take notes by hand performed worst overall on the tests. Regardless of preference and question type, computer and longhand users performed equally well overall, and the interaction of medium and question type on test performance was not significant. For transcription tendencies, computer note takers generated more words and more 3-word verbatim sequences than longhand note takers. For note taking tendencies, the use of computer notes somewhat positively correlated with the use of no notes. The results of this study help to further understand how students’ preferred note taking medium can influence performance on subsequent tests.",
"title": ""
},
{
"docid": "84063c2c456944f59413eb5c114cef8e",
"text": "In this paper we introduce ALMA - A Layered Model of Affect. It integrates three major affective characteristics: emotions, moods and personality that cover short, medium, and long term affect. The use of this model consists of two phases: In the preparation phase appraisal rules and personality profiles for characters must be specified with the help of AffectML - our XML based affect modeling language. In the runtime phase, the specified appraisal rules are used to compute real-time emotions and moods as results of a subjective appraisal of relevant input. The computed affective characteristics are represented in AffectML and can be processed by sub-sequent modules that control the cognitive processes and physical behavior of embodied conversational characters. ALMA is part of the VirtualHuman project which develops interactive virtual characters that serve as dialog partners with human-like conversational skills. ALMA provides our virtual humans with a personality profile and with real-time emotions and moods. These are used by the multimodal behavior generation module to enrich the lifelike and believable qualities.",
"title": ""
},
{
"docid": "20f1a40e7f352085c04709e27c1a2aa2",
"text": "Automatic speech recognition (ASR) outputs often contain various disfluencies. It is necessary to remove these disfluencies before processing downstream tasks. In this paper, an efficient disfluency detection approach based on right-to-left transitionbased parsing is proposed, which can efficiently identify disfluencies and keep ASR outputs grammatical. Our method exploits a global view to capture long-range dependencies for disfluency detection by integrating a rich set of syntactic and disfluency features with linear complexity. The experimental results show that our method outperforms state-of-the-art work and achieves a 85.1% f-score on the commonly used English Switchboard test set. We also apply our method to in-house annotated Chinese data and achieve a significantly higher f-score compared to the baseline of CRF-based approach.",
"title": ""
},
{
"docid": "954d0ef5a1a648221ce8eb3f217f4071",
"text": "Deep learning has revolutionized many machine learning tasks in recent years, ranging from image classification and video processing to speech recognition and natural language understanding. The data in these tasks are typically represented in the Euclidean space. However, there is an increasing number of applications where data are generated from non-Euclidean domains and are represented as graphs with complex relationships and interdependency between objects. The complexity of graph data has imposed significant challenges on existing machine learning algorithms. Recently, many studies on extending deep learning approaches for graph data have emerged. In this survey, we provide a comprehensive overview of graph neural networks (GNNs) in data mining and machine learning fields. We propose a new taxonomy to divide the state-of-the-art graph neural networks into different categories. With a focus on graph convolutional networks, we review alternative architectures that have recently been developed; these learning paradigms include graph attention networks, graph autoencoders, graph generative networks, and graph spatial-temporal networks. We further discuss the applications of graph neural networks across various domains and summarize the open source codes and benchmarks of the existing algorithms on different learning tasks. Finally, we propose potential research directions in this",
"title": ""
},
{
"docid": "4d7cd44f2bbe9896049a7868165bd415",
"text": "Testing previously studied information enhances long-term memory, particularly when the information is successfully retrieved from memory. The authors examined the effect of unsuccessful retrieval attempts on learning. Participants in 5 experiments read an essay about vision. In the test condition, they were asked about embedded concepts before reading the passage; in the extended study condition, they were given a longer time to read the passage. To distinguish the effects of testing from attention direction, the authors emphasized the tested concepts in both conditions, using italics or bolded keywords or, in Experiment 5, by presenting the questions but not asking participants to answer them before reading the passage. Posttest performance was better in the test condition than in the extended study condition in all experiments--a pretesting effect--even though only items that were not successfully retrieved on the pretest were analyzed. The testing effect appears to be attributable, in part, to the role unsuccessful tests play in enhancing future learning.",
"title": ""
},
{
"docid": "51955b4af0b9d7633c3b25bcc7010335",
"text": "OBJECTIVE\nTo use the trimmed cartilage as a support material for both internal and external valves.\n\n\nMETHODS\nThe lateral crural turn-in flap (LCTF) technique is simply to make cephalic trimming of the lateral crura and turn it into a pocket created under the remaining lateral crus. Twenty-four patients with lateral crura wider than 12 mm and in whom this technique was applied took part in this study. The trimmed cartilage was used to reshape and/or support the lateral crus and the internal valve by keeping the scroll intact. The support and suspension of the lateral crura \"sandwich\" helped not only to prevent stenosis of the internal valve angle but also to widen it in some cases.\n\n\nRESULTS\nThe LCTF has been used in 24 patients to reshape and/or add structure to the lateral crus with great success. The internal valve was also kept open by keeping the scroll area intact, especially in 1 patient with concave lateral crura in whom this technique helped to widen the internal valve angle.\n\n\nCONCLUSIONS\nThis study shows that the LCTF can be used to reshape and add structure to the lateral crus and to suspend the internal valve. Although it is a powerful technique by itself in functional rhinoplasty, it should be combined with other methods, such as spreader flaps/grafts or alar battens, to obtain the maximum functional result.",
"title": ""
},
{
"docid": "4f49a5cc49f1eeb864b4a6f347263710",
"text": "Future wireless applications will take advantage of rapidly deployable, self-configuring multihop ad hoc networks. Because of the difficulty of obtaining IEEE 802.11 feedback about link connectivity in real networks, many multihop ad hoc networks utilize hello messages to determine local connectivity. This paper uses an implementation of the Ad hoc On-demand Distance Vector (AODV) routing protocol to examine the effectiveness of hello messages for monitoring link status. In this study, it is determined that many factors influence the utility of hello messages, including allowed hello message loss settings, discrepancy between data and hello message size and 802.11b packet handling. This paper examines these factors and experimentally evaluates a variety of approaches for improving the accuracy of hello messages as an indicator of local connectivity.",
"title": ""
},
{
"docid": "31aa273b3922c33b544e48513c0507e4",
"text": "This article describes a model of teacher change originally presented nearly two decades ago (Guskey, 1986) that began my long and warm friendship with Michael Huberman. The model portrays the temporal sequence of events from professional development experiences to enduring change in teachers’ attitudes and perceptions. Research evidence supporting the model is summarized and the conditions under which change might be facilitated are described. The development and presentation of this model initiated a series of professional collaborations between Michael and myself, and led to the development of our co-edited book, Professional Development in Education: new paradigms and practices (Guskey & Huberman, 1995), which was named `Book of the Year’ by the National Staff Development Council in 1996.",
"title": ""
},
{
"docid": "66fd3e27e89554e4c6ea5eef294a345b",
"text": "Large-scale distributed training of deep neural networks suffer from the generalization gap caused by the increase in the effective mini-batch size. Previous approaches try to solve this problem by varying the learning rate and batch size over epochs and layers, or some ad hoc modification of the batch normalization. We propose an alternative approach using a second-order optimization method that shows similar generalization capability to first-order methods, but converges faster and can handle larger minibatches. To test our method on a benchmark where highly optimized first-order methods are available as references, we train ResNet-50 on ImageNet. We converged to 75% Top-1 validation accuracy in 35 epochs for mini-batch sizes under 16,384, and achieved 75% even with a mini-batch size of 131,072, which took 100 epochs.",
"title": ""
},
{
"docid": "b651dab78e39d59e3043cb091b7e4f1b",
"text": "Learning an acoustic model directly from the raw waveform has been an active area of research. However, waveformbased models have not yet matched the performance of logmel trained neural networks. We will show that raw waveform features match the performance of log-mel filterbank energies when used with a state-of-the-art CLDNN acoustic model trained on over 2,000 hours of speech. Specifically, we will show the benefit of the CLDNN, namely the time convolution layer in reducing temporal variations, the frequency convolution layer for preserving locality and reducing frequency variations, as well as the LSTM layers for temporal modeling. In addition, by stacking raw waveform features with log-mel features, we achieve a 3% relative reduction in word error rate.",
"title": ""
},
{
"docid": "df6e410fddeb22c7856f5362b7abc1de",
"text": "With the increasing prevalence of Web 2.0 and cloud computing, password-based logins play an increasingly important role on user-end systems. We use passwords to authenticate ourselves to countless applications and services. However, login credentials can be easily stolen by attackers. In this paper, we present a framework, TrustLogin, to secure password-based logins on commodity operating systems. TrustLogin leverages System Management Mode to protect the login credentials from malware even when OS is compromised. TrustLogin does not modify any system software in either client or server and is transparent to users, applications, and servers. We conduct two study cases of the framework on legacy and secure applications, and the experimental results demonstrate that TrustLogin is able to protect login credentials from real-world keyloggers on Windows and Linux platforms. TrustLogin is robust against spoofing attacks. Moreover, the experimental results also show TrustLogin introduces a low overhead with the tested applications.",
"title": ""
},
{
"docid": "b6818031020a04a5b9385603f38da147",
"text": "LINGUISTIC HEDGING IN FINANCIAL DOCUMENTS by CAITLIN CASSIDY (Under the Direction of Frederick W. Maier) ABSTRACT Each year, publicly incorporated companies are required to file a Form 10-K with the United States Securities and Exchange Commission. These documents contain an enormous amount of natural language data and may offer insight into financial performance prediction. This thesis attempts to analyze two dimensions of language held within this data: sentiment and linguistic hedging. An experiment was conducted with 325 human annotators to manually score a subset of the sentiment words contained in a corpus of 106 10-K filings, and an inference engine identified instances of hedges having governance over these words in a dependency tree. Finally, this work proposes an algorithm for the automatic classification of sentences in the financial domain as speculative or non-speculative using the previously defined hedge cues.",
"title": ""
}
] |
scidocsrr
|
0382bfc67f33996adab0fef43fd7af34
|
Frequency assignment: Theory and applications
|
[
{
"docid": "cc0c1c11d437060e9492a3a1218e1271",
"text": "Graph coloring problems, in which one would like to color the vertices of a given graph with a small number of colors so that no two adjacent vertices receive the same color, arise in many applications, including various scheduling and partitioning problems. In this paper the complexity and performance of algorithms which construct such colorings are investigated. For a graph <italic>G</italic>, let &khgr;(<italic>G</italic>) denote the minimum possible number of colors required to color <italic>G</italic> and, for any graph coloring algorithm <italic>A</italic>, let <italic>A</italic>(<italic>G</italic>) denote the number of colors used by <italic>A</italic> when applied to <italic>G</italic>. Since the graph coloring problem is known to be “NP-complete,” it is considered unlikely that any efficient algorithm can guarantee <italic>A</italic>(<italic>G</italic>) = &khgr;(<italic>G</italic>) for all input graphs. In this paper it is proved that even coming close to khgr;(<italic>G</italic>) with a fast algorithm is hard. Specifically, it is shown that if for some constant <italic>r</italic> < 2 and constant <italic>d</italic> there exists a polynomial-time algorithm <italic>A</italic> which guarantees <italic>A</italic>(<italic>G</italic>) ≤ <italic>r</italic>·&khgr;(<italic>G</italic>) + <italic>d</italic>, then there also exists a polynomial-time algorithm <italic>A</italic> which guarantees <italic>A</italic>(<italic>G</italic>) = &khgr;(<italic>G</italic>).",
"title": ""
}
] |
[
{
"docid": "84a258c59b5f4e576763c0c90426c475",
"text": "Analysis of gene and protein name synonyms in Entrez Gene and UniProtKB resources",
"title": ""
},
{
"docid": "4313c87376e6ea9fac7dc32f359c2ae9",
"text": "Game engines are specialized middleware which facilitate rapid game development. Until now they have been highly optimized to extract maximum performance from single processor hardware. In the last couple of years improvements in single processor hardware have approached physical limits and performance gains have slowed to become incremental. As a consequence, improvements in game engine performance have also become incremental. Currently, hardware manufacturers are shifting to dual and multi-core processor architectures, and the latest game consoles also feature multiple processors. This presents a challenge to game engine developers because of the unfamiliarity and complexity of concurrent programming. The next generation of game engines must address the issues of concurrency if they are to take advantage of the new hardware. This paper discusses the issues, approaches, and tradeoffs that need to be considered in the design of a multi-threaded game engine.",
"title": ""
},
{
"docid": "6ef985d656f605d40705a582483d562e",
"text": "A rising issue in the scientific community entails the identification of patterns in the evolution of the scientific enterprise and the emergence of trends that influence scholarly impact. In this direction, this paper investigates the mechanism with which citation accumulation occurs over time and how this affects the overall impact of scientific output. Utilizing data regarding the SOFSEM Conference (International Conference on Current Trends in Theory and Practice of Computer Science), we study a corpus of 1006 publications with their associated authors and affiliations to uncover the effects of collaboration on the conference output. We proceed to group publications into clusters based on the trajectories they follow in their citation acquisition. Representative patterns are identified to characterize dominant trends of the conference, while exploring phenomena of early and late recognition by the scientific community and their correlation with impact.",
"title": ""
},
{
"docid": "6224f4f3541e9cd340498e92a380ad3f",
"text": "A personal story: From philosophy to software.",
"title": ""
},
{
"docid": "8647fa9c501cd3dbdf3b9842ec5378ca",
"text": "A new framework based on the theory of copulas is proposed to address semisupervised domain adaptation problems. The presented method factorizes any multivariate density into a product of marginal distributions and bivariate copula functions. Therefore, changes in each of these factors can be detected and corrected to adapt a density model accross different learning domains. Importantly, we introduce a novel vine copula model, which allows for this factorization in a non-parametric manner. Experimental results on regression problems with real-world data illustrate the efficacy of the proposed approach when compared to state-of-the-art techniques.",
"title": ""
},
{
"docid": "6dd39d60e6cf733692c87126bdb31e24",
"text": "Computerized microscopy image analysis plays an important role in computer aided diagnosis and prognosis. Machine learning techniques have powered many aspects of medical investigation and clinical practice. Recently, deep learning is emerging as a leading machine learning tool in computer vision and has attracted considerable attention in biomedical image analysis. In this paper, we provide a snapshot of this fast-growing field, specifically for microscopy image analysis. We briefly introduce the popular deep neural networks and summarize current deep learning achievements in various tasks, such as detection, segmentation, and classification in microscopy image analysis. In particular, we explain the architectures and the principles of convolutional neural networks, fully convolutional networks, recurrent neural networks, stacked autoencoders, and deep belief networks, and interpret their formulations or modelings for specific tasks on various microscopy images. In addition, we discuss the open challenges and the potential trends of future research in microscopy image analysis using deep learning.",
"title": ""
},
{
"docid": "34c343413fc748c1fc5e07fb40e3e97d",
"text": "We study online social networks in which relationships can be either positive (indicating relations such as friendship) or negative (indicating relations such as opposition or antagonism). Such a mix of positive and negative links arise in a variety of online settings; we study datasets from Epinions, Slashdot and Wikipedia. We find that the signs of links in the underlying social networks can be predicted with high accuracy, using models that generalize across this diverse range of sites. These models provide insight into some of the fundamental principles that drive the formation of signed links in networks, shedding light on theories of balance and status from social psychology; they also suggest social computing applications by which the attitude of one user toward another can be estimated from evidence provided by their relationships with other members of the surrounding social network.",
"title": ""
},
{
"docid": "18cf88b01ff2b20d17590d7b703a41cb",
"text": "Human age provides key demographic information. It is also considered as an important soft biometric trait for human identification or search. Compared to other pattern recognition problems (e.g., object classification, scene categorization), age estimation is much more challenging since the difference between facial images with age variations can be more subtle and the process of aging varies greatly among different individuals. In this work, we investigate deep learning techniques for age estimation based on the convolutional neural network (CNN). A new framework for age feature extraction based on the deep learning model is built. Compared to previous models based on CNN, we use feature maps obtained in different layers for our estimation work instead of using the feature obtained at the top layer. Additionally, a manifold learning algorithm is incorporated in the proposed scheme and this improves the performance significantly. Furthermore, we also evaluate different classification and regression schemes in estimating age using the deep learned aging pattern (DLA). To the best of our knowledge, this is the first time that deep learning technique is introduced and applied to solve the age estimation problem. Experimental results on two datasets show that the proposed approach is significantly better than the state-of-the-art.",
"title": ""
},
{
"docid": "0bf3619586612c11ad7e4eb8bdb8993d",
"text": "In this paper, we propose a simple model to represent the slugging flow regime appearing in vertical risers. We consider a one dimensional two-phase flow composed of a liquid phase and a gaseous compressible phase. The presented model can be applied to a wide class of systems, ranging from pure vertical risers to more complex geometries such as those found on actual sub sea petroleum facilities. Following ideas from the literature, we introduce a virtual valve located at the bottom of the riser. This allows us to reproduce observed periodic regimes. It also brings insight into the physics of the slugging phenomenon. Most importantly, this model reveals relatively easy to tune and seems suitable for control design. A tuning methodology is proposed along with a proof of the existence of a limit cycle under simplifying assumptions.",
"title": ""
},
{
"docid": "f2a1e5d8e99977c53de9f2a82576db69",
"text": "During the last years, several masking schemes for AES have been proposed to secure hardware implementations against DPA attacks. In order to investigate the effectiveness of these countermeasures in practice, we have designed and manufactured an ASIC. The chip features an unmasked and two masked AES-128 encryption engines that can be attacked independently. In addition to conventional DPA attacks on the output of registers, we have also mounted attacks on the output of logic gates. Based on simulations and physical measurements we show that the unmasked and masked implementations leak side-channel information due to glitches at the output of logic gates. It turns out that masking the AES S-Boxes does not prevent DPA attacks, if glitches occur in the circuit.",
"title": ""
},
{
"docid": "87e050b5ae29487cb9cbdbbe672010ea",
"text": "The goal of data mining is to extract or “mine” knowledge from large amounts of data. However, data is often collected by several different sites. Privacy, legal and commercial concerns restrict centralized access to this data, thus derailing data mining projects. Recently, there has been growing focus on finding solutions to this problem. Several algorithms have been proposed that do distributed knowledge discovery, while providing guarantees on the non-disclosure of data. Vertical partitioning of data is an important data distribution model often found in real life. Vertical partitioning or heterogeneous distribution implies that different features of the same set of data are collected by different sites. In this chapter we survey some of the methods developed in the literature to mine vertically partitioned data without violating privacy and discuss challenges and complexities specific to vertical partitioning.",
"title": ""
},
{
"docid": "87ed7ebdf8528df1491936000649761b",
"text": "Internet of Vehicles (IoV) is an important constituent of next generation smart cities that enables city wide connectivity of vehicles for traffic management applications. A secure and reliable communications is an important ingredient of safety applications in IoV. While the use of a more robust security algorithm makes communications for safety applications secure, it could reduce application QoS due to increased packet overhead and security processing delays. Particularly, in high density scenarios where vehicles receive large number of safety packets from neighborhood, timely signature verification of these packets could not be guaranteed. As a result, critical safety packets remain unverified resulting in cryptographic loss. In this paper, we propose two security mechanisms that aim to reduce cryptographic loss rate. The first mechanism is random transmitter security level section whereas the second one is adaptive scheme that iteratively selects the best possible security level at the transmitter depending on the current cryptographic loss rate. Simulation results show the effectiveness of the proposed mechanisms in comparison with the static security technique recommended by the ETSI standard.",
"title": ""
},
{
"docid": "3a18976245cfc4b50e97aadf304ef913",
"text": "Key-Value Stores (KVS) are becoming increasingly popular because they scale up and down elastically, sustain high throughputs for get/put workloads and have low latencies. KVS owe these advantages to their simplicity. This simplicity, however, comes at a cost: It is expensive to process complex, analytical queries on top of a KVS because today’s generation of KVS does not support an efficient way to scan the data. The problem is that there are conflicting goals when designing a KVS for analytical queries and for simple get/put workloads: Analytical queries require high locality and a compact representation of data whereas elastic get/put workloads require sparse indexes. This paper shows that it is possible to have it all, with reasonable compromises. We studied the KVS design space and built TellStore, a distributed KVS, that performs almost as well as state-of-the-art KVS for get/put workloads and orders of magnitude better for analytical and mixed workloads. This paper presents the results of comprehensive experiments with an extended version of the YCSB benchmark and a workload from the telecommunication industry.",
"title": ""
},
{
"docid": "e17a1429f4ca9de808caaa842ee5a441",
"text": "Large scale visual understanding is challenging, as it requires a model to handle the widely-spread and imbalanced distribution of 〈subject, relation, object〉 triples. In real-world scenarios with large numbers of objects and relations, some are seen very commonly while others are barely seen. We develop a new relationship detection model that embeds objects and relations into two vector spaces where both discriminative capability and semantic affinity are preserved. We learn a visual and a semantic module that map features from the two modalities into a shared space, where matched pairs of features have to discriminate against those unmatched, but also maintain close distances to semantically similar ones. Benefiting from that, our model can achieve superior performance even when the visual entity categories scale up to more than 80, 000, with extremely skewed class distribution. We demonstrate the efficacy of our model on a large and imbalanced benchmark based of Visual Genome that comprises 53, 000+ objects and 29, 000+ relations, a scale at which no previous work has been evaluated at. We show superiority of our model over competitive baselines on the original Visual Genome dataset with 80, 000+ categories. We also show state-of-the-art performance on the VRD dataset and the scene graph dataset which is a subset of Visual Genome with 200 categories.",
"title": ""
},
{
"docid": "a1f930147ad3c3ef48b6352e83d645d0",
"text": "Database applications such as online transaction processing (OLTP) and decision support systems (DSS) constitute the largest and fastest-growing segment of the market for multiprocessor servers. However, most current system designs have been optimized to perform well on scientific and engineering workloads. Given the radically different behavior of database workloads (especially OLTP), it is important to re-evaluate key system design decisions in the context of this important class of applications.This paper examines the behavior of database workloads on shared-memory multiprocessors with aggressive out-of-order processors, and considers simple optimizations that can provide further performance improvements. Our study is based on detailed simulations of the Oracle commercial database engine. The results show that the combination of out-of-order execution and multiple instruction issue is indeed effective in improving performance of database workloads, providing gains of 1.5 and 2.6 times over an in-order single-issue processor for OLTP and DSS, respectively. In addition, speculative techniques enable optimized implementations of memory consistency models that significantly improve the performance of stricter consistency models, bringing the performance to within 10--15% of the performance of more relaxed models.The second part of our study focuses on the more challenging OLTP workload. We show that an instruction stream buffer is effective in reducing the remaining instruction stalls in OLTP, providing a 17% reduction in execution time (approaching a perfect instruction cache to within 15%). Furthermore, our characterization shows that a large fraction of the data communication misses in OLTP exhibit migratory behavior; our preliminary results show that software prefetch and writeback/flush hints can be used for this data to further reduce execution time by 12%.",
"title": ""
},
{
"docid": "03a7bcafb322ee8f7812d66abbd36ce6",
"text": "This paper presents a Deep Bidirectional Long Short Term Memory (LSTM) based Recurrent Neural Network architecture for text recognition. This architecture uses Connectionist Temporal Classification (CTC) for training to learn the labels of an unsegmented sequence with unknown alignment. This work is motivated by the results of Deep Neural Networks for isolated numeral recognition and improved speech recognition using Deep BLSTM based approaches. Deep BLSTM architecture is chosen due to its ability to access long range context, learn sequence alignment and work without the need of segmented data. Due to the use of CTC and forward backward algorithms for alignment of output labels, there are no unicode re-ordering issues, thus no need of lexicon or postprocessing schemes. This is a script independent and segmentation free approach. This system has been implemented for the recognition of unsegmented words of printed Oriya text. This system achieves 4.18% character level error and 12.11% word error rate on printed Oriya text.",
"title": ""
},
{
"docid": "697ed30a5d663c1dda8be0183fa4a314",
"text": "Due to the Web expansion, the prediction of online news popularity is becoming a trendy research topic. In this paper, we propose a novel and proactive Intelligent Decision Support System (IDSS) that analyzes articles prior to their publication. Using a broad set of extracted features (e.g., keywords, digital media content, earlier popularity of news referenced in the article) the IDSS first predicts if an article will become popular. Then, it optimizes a subset of the articles features that can more easily be changed by authors, searching for an enhancement of the predicted popularity probability. Using a large and recently collected dataset, with 39,000 articles from the Mashable website, we performed a robust rolling windows evaluation of five state of the art models. The best result was provided by a Random Forest with a discrimination power of 73%. Moreover, several stochastic hill climbing local searches were explored. When optimizing 1000 articles, the best optimization method obtained a mean gain improvement of 15 percentage points in terms of the estimated popularity probability. These results attest the proposed IDSS as a valuable tool for online news authors.",
"title": ""
},
{
"docid": "4b2e4a1bd3c6f6af713e507f1d63ba07",
"text": "Model validation constitutes a very important step in system dynamics methodology. Yet, both published and informal evidence indicates that there has been little effort in system dynamics community explicitly devoted to model validity and validation. Validation is a prolonged and complicated process, involving both formal/quantitative tools and informal/ qualitative ones. This paper focuses on the formal aspects of validation and presents a taxonomy of various aspects and steps of formal model validation. First, there is a very brief discussion of the philosophical issues involved in model validation, followed by a flowchart that describes the logical sequence in which various validation activities must be carried out. The crucial nature of structure validity in system dynamics (causal-descriptive) models is emphasized. Then examples are given of specific validity tests used in each of the three major stages of model validation: Structural tests. Introduction",
"title": ""
},
{
"docid": "bf3924a3d6262f538c06607c40c59f89",
"text": "Reading times on words in a sentence depend on the amount of information the words convey, which can be estimated by probabilistic language models. We investigate whether event-related potentials (ERPs), too, are predicted by information measures. Three types of language models estimated four different information measures on each word of a sample of English sentences. Six different ERP deflections were extracted from the EEG signal of participants reading the same sentences. A comparison between the information measures and ERPs revealed a reliable correlation between N400 amplitude and word surprisal. Language models that make no use of syntactic structure fitted the data better than did a phrase-structure grammar, which did not account for unique variance in N400 amplitude. These findings suggest that different information measures quantify cognitively different processes and that readers do not make use of a sentence's hierarchical structure for generating expectations about the upcoming word.",
"title": ""
},
{
"docid": "034f27103f8af63d47d92781a684b7c8",
"text": "Progress in mobile devices, wireless networks and context-aware technologies are bringing pervasive healthcareinto reality. With the help of wireless PDAs and portable computers, people may enjoy high quality care from a well-orchestrated team of healthcare professionals in the comfort of their own homes. The main technical challenges include mobility, adaptability, privacy, access authorization, and resource awareness. This paper presents a rule-based approach to context-aware access control in pervasive healthcare. The system is designed to work on resource-limited mobile devices over a peer-to-peer wireless network. Dynamic access authorization is achieved in real time by actively collecting context information, integrating the appropriate access control rules, and performing logical inference on the mobile device. Performance evaluations of the prototype implementation show the efficiency of the proposed mechanism.",
"title": ""
}
] |
scidocsrr
|
7d4d1560fd706b595b9a32da96c69a05
|
Wireless Sensor and Networking Technologies for Swarms of Aquatic Surface Drones
|
[
{
"docid": "3cb6ba4a950868c1d912b44b77b264be",
"text": "With the popularity of winter tourism, the winter recreation activities have been increased day by day in alpine environments. However, large numbers of people and rescuers are injured and lost in this environment due to the avalanche accidents every year. Drone-based rescue systems are envisioned as a viable solution for saving lives in this hostile environment. To this aim, a European project named “Smart collaboration between Humans and ground-aErial Robots for imProving rescuing activities in Alpine environments (SHERPA)” has been launched with the objective to develop a mixed ground and aerial drone platform to support search and rescue activities in a real-world hostile scenarios. In this paper, we study the challenges of existing wireless technologies for enabling drone wireless communications in alpine environment. We extensively discuss about the positive and negative aspects of the standards according to the SHERPA network requirements. Then based on that, we choose Worldwide interoperability for Microwave Access network (WiMAX) as a suitable technology for drone communications in this environment. Finally, we present a brief discussion about the use of operating band for WiMAX and the implementation issues of SHERPA network. The outcomes of this research assist to achieve the goal of the SHERPA project.",
"title": ""
}
] |
[
{
"docid": "9973dab94e708f3b87d52c24b8e18672",
"text": "We show that two popular discounted reward natural actor-critics, NAC-LSTD and eNAC, follow biased estimates of the natural policy gradient. We derive the first unbiased discounted reward natural actor-critics using batch and iterative approaches to gradient estimation and prove their convergence to globally optimal policies for discrete problems and locally optimal policies for continuous problems. Finally, we argue that the bias makes the existing algorithms more appropriate for the average reward setting.",
"title": ""
},
{
"docid": "83d330486c50fe2ae1d6960a4933f546",
"text": "In this paper, an upgraded version of vehicle tracking system is developed for inland vessels. In addition to the features available in traditional VTS (Vehicle Tracking System) for automobiles, it has the capability of remote monitoring of the vessel's motion and orientation. Furthermore, this device can detect capsize events and other accidents by motion tracking and instantly notify the authority and/or the owner with current coordinates of the vessel, which is obtained using the Global Positioning System (GPS). This can certainly boost up the rescue process and minimize losses. We have used GSM network for the communication between the device installed in the ship and the ground control. So, this can be implemented only in the inland vessels. But using iridium satellite communication instead of GSM will enable the device to be used in any sea-going ships. At last, a model of an integrated inland waterway control system (IIWCS) based on this device is discussed.",
"title": ""
},
{
"docid": "a9de29e1d8062b4950e5ab3af6bea8df",
"text": "Asserts have long been a strongly recommended (if non-functional) adjunct to programs. They certainly don't add any user-evident feature value; and it can take quite some skill and effort to devise and add useful asserts. However, they are believed to add considerable value to the developer. Certainly, they can help with automated verification; but even in the absence of that, claimed advantages include improved understandability, maintainability, easier fault localization and diagnosis, all eventually leading to better software quality. We focus on this latter claim, and use a large dataset of asserts in C and C++ programs to explore the connection between asserts and defect occurrence. Our data suggests a connection: functions with asserts do have significantly fewer defects. This indicates that asserts do play an important role in software quality; we therefore explored further the factors that play a role in assertion placement: specifically, process factors (such as developer experience and ownership) and product factors, particularly interprocedural factors, exploring how the placement of assertions in functions are influenced by local and global network properties of the callgraph. Finally, we also conduct a differential analysis of assertion use across different application domains.",
"title": ""
},
{
"docid": "bf23473b7fe711e9dce9487c7df5b624",
"text": "A focus on population health management is a necessary ingredient for success under value-based payment models. As part of that effort, nine ways to embrace technology can help healthcare organizations improve population health, enhance the patient experience, and reduce costs: Use predictive analytics for risk stratification. Combine predictive modeling with algorithms for financial risk management. Use population registries to identify care gaps. Use automated messaging for patient outreach. Engage patients with automated alerts and educational campaigns. Automate care management tasks. Build programs and organize clinicians into care teams. Apply new technologies effectively. Use analytics to measure performance of organizations and providers.",
"title": ""
},
{
"docid": "b1d61ca503702f950ef1275b904850e7",
"text": "Prior research has demonstrated a clear relationship between experiences of racial microaggressions and various indicators of psychological unwellness. One concern with these findings is that the role of negative affectivity, considered a marker of neuroticism, has not been considered. Negative affectivity has previously been correlated to experiences of racial discrimination and psychological unwellness and has been suggested as a cause of the observed relationship between microaggressions and psychopathology. We examined the relationships between self-reported frequency of experiences of microaggressions and several mental health outcomes (i.e., anxiety [Beck Anxiety Inventory], stress [General Ethnic and Discrimination Scale], and trauma symptoms [Trauma Symptoms of Discrimination Scale]) in 177 African American and European American college students, controlling for negative affectivity (the Positive and Negative Affect Schedule) and gender. Results indicated that African Americans experience more racial discrimination than European Americans. Negative affectivity in African Americans appears to be significantly related to some but not all perceptions of the experience of discrimination. A strong relationship between racial mistreatment and symptoms of psychopathology was evident, even after controlling for negative affectivity. In summary, African Americans experience clinically measurable anxiety, stress, and trauma symptoms as a result of racial mistreatment, which cannot be wholly explained by individual differences in negative affectivity. Future work should examine additional factors in these relationships, and targeted interventions should be developed to help those suffering as a result of racial mistreatment and to reduce microaggressions.",
"title": ""
},
{
"docid": "65b843c30f69d33fa0c9aedd742e3434",
"text": "The computational study of complex systems increasingly requires model integration. The drivers include a growing interest in leveraging accepted legacy models, an intensifying pressure to reduce development costs by reusing models, and expanding user requirements that are best met by combining different modeling methods. There have been many published successes including supporting theory, conceptual frameworks, software tools, and case studies. Nonetheless, on an empirical basis, the published work suggests that correctly specifying model integration strategies remains challenging. This naturally raises a question that has not yet been answered in the literature, namely 'what is the computational difficulty of model integration?' This paper's contribution is to address this question with a time and space complexity analysis that concludes that deep model integration with proven correctness is both NP-complete and PSPACE-complete and that reducing this complexity requires sacrificing correctness proofs in favor of guidance from both subject matter experts and modeling specialists.",
"title": ""
},
{
"docid": "08e02afe2ef02fc9c8fff91cf7a70553",
"text": "Matrix factorization is a fundamental technique in machine learning that is applicable to collaborative filtering, information retrieval and many other areas. In collaborative filtering and many other tasks, the objective is to fill in missing elements of a sparse data matrix. One of the biggest challenges in this case is filling in a column or row of the matrix with very few observations. In this paper we introduce a Bayesian matrix factorization model that performs regression against side information known about the data in addition to the observations. The side information helps by adding observed entries to the factored matrices. We also introduce a nonparametric mixture model for the prior of the rows and columns of the factored matrices that gives a different regularization for each latent class. Besides providing a richer prior, the posterior distribution of mixture assignments reveals the latent classes. Using Gibbs sampling for inference, we apply our model to the Netflix Prize problem of predicting movie ratings given an incomplete user-movie ratings matrix. Incorporating rating information with gathered metadata information, our Bayesian approach outperforms other matrix factorization techniques even when using fewer dimensions.",
"title": ""
},
{
"docid": "c0315ef3bcc21723131d9b2687a5d5d1",
"text": "Network covert timing channels embed secret messages in legitimate packets by modulating interpacket delays. Unfortunately, such channels are normally implemented in higher network layers (layer 3 or above) and easily detected or prevented. However, access to the physical layer of a network stack allows for timing channels that are virtually invisible: Sub-microsecond modulations that are undetectable by software endhosts. Therefore, covert timing channels implemented in the physical layer can be a serious threat to the security of a system or a network. In fact, we empirically demonstrate an effective covert timing channel over nine routing hops and thousands of miles over the Internet (the National Lambda Rail). Our covert timing channel works with cross traffic, less than 10% bit error rate, which can be masked by forward error correction, and a covert rate of 81 kilobits per second. Key to our approach is access and control over every bit in the physical layer of a 10 Gigabit network stack (a bit is 100 picoseconds wide at 10 gigabit per seconds), which allows us to modulate and interpret interpacket spacings at sub-microsecond scale. We discuss when and how a timing channel in the physical layer works, how hard it is to detect such a channel, and what is required to do so.",
"title": ""
},
{
"docid": "757cb3e9b279f71cb0a9ff5b80c5f4ba",
"text": "When it comes to workplace preferences, Generation Y workers closely resemble Baby Boomers. Because these two huge cohorts now coexist in the workforce, their shared values will hold sway in the companies that hire them. The authors, from the Center for Work-Life Policy, conducted two large-scale surveys that reveal those values. Gen Ys and Boomers are eager to contribute to positive social change, and they seek out workplaces where they can do that. They expect flexibility and the option to work remotely, but they also want to connect deeply with colleagues. They believe in employer loyalty but desire to embark on learning odysseys. Innovative firms are responding by crafting reward packages that benefit both generations of workers--and their employers.",
"title": ""
},
{
"docid": "21a2347f9bb5b5638d63239b37c9d0e6",
"text": "This paper presents new circuits for realizing both current-mode and voltage-mode proportional-integralderivative (PID), proportional-derivative (PD) and proportional-integral (PI) controllers employing secondgeneration current conveyors (CCIIs) as active elements. All of the proposed PID, PI and PD controllers have grounded passive elements and adjustable parameters. The controllers employ reduced number of active and passive components with respect to the traditional op-amp-based PID, PI and PD controllers. A closed loop control system using the proposed PID controller is designed and simulated with SPICE.",
"title": ""
},
{
"docid": "cbe947b169331c8bb41c7fae2a8d0647",
"text": "In spite of high levels of poverty in low and middle income countries (LMIC), and the high burden posed by common mental disorders (CMD), it is only in the last two decades that research has emerged that empirically addresses the relationship between poverty and CMD in these countries. We conducted a systematic review of the epidemiological literature in LMIC, with the aim of examining this relationship. Of 115 studies that were reviewed, most reported positive associations between a range of poverty indicators and CMD. In community-based studies, 73% and 79% of studies reported positive associations between a variety of poverty measures and CMD, 19% and 15% reported null associations and 8% and 6% reported negative associations, using bivariate and multivariate analyses respectively. However, closer examination of specific poverty dimensions revealed a complex picture, in which there was substantial variation between these dimensions. While variables such as education, food insecurity, housing, social class, socio-economic status and financial stress exhibit a relatively consistent and strong association with CMD, others such as income, employment and particularly consumption are more equivocal. There are several measurement and population factors that may explain variation in the strength of the relationship between poverty and CMD. By presenting a systematic review of the literature, this paper attempts to shift the debate from questions about whether poverty is associated with CMD in LMIC, to questions about which particular dimensions of poverty carry the strongest (or weakest) association. The relatively consistent association between CMD and a variety of poverty dimensions in LMIC serves to strengthen the case for the inclusion of mental health on the agenda of development agencies and in international targets such as the millenium development goals.",
"title": ""
},
{
"docid": "c98e8abd72ba30e0d2cb2b7d146a3d13",
"text": "Process mining techniques help organizations discover and analyze business processes based on raw event data. The recently released \"Process Mining Manifesto\" presents guiding principles and challenges for process mining. Here, the authors summarize the manifesto's main points and argue that analysts should take into account the context in which events occur when analyzing processes.",
"title": ""
},
{
"docid": "1ef2bb601d91d77287d3517c73b453fe",
"text": "Proteins from silver-stained gels can be digested enzymatically and the resulting peptide analyzed and sequenced by mass spectrometry. Standard proteins yield the same peptide maps when extracted from Coomassie- and silver-stained gels, as judged by electrospray and MALDI mass spectrometry. The low nanogram range can be reached by the protocols described here, and the method is robust. A silver-stained one-dimensional gel of a fraction from yeast proteins was analyzed by nano-electrospray tandem mass spectrometry. In the sequencing, more than 1000 amino acids were covered, resulting in no evidence of chemical modifications due to the silver staining procedure. Silver staining allows a substantial shortening of sample preparation time and may, therefore, be preferable over Coomassie staining. This work removes a major obstacle to the low-level sequence analysis of proteins separated on polyacrylamide gels.",
"title": ""
},
{
"docid": "3f679dbd9047040d63da70fc9e977a99",
"text": "In this paper we consider videos (e.g. Hollywood movies) and their accompanying natural language descriptions in the form of narrative sentences (e.g. movie scripts without timestamps). We propose a method for temporally aligning the video frames with the sentences using both visual and textual information, which provides automatic timestamps for each narrative sentence. We compute the similarity between both types of information using vectorial descriptors and propose to cast this alignment task as a matching problem that we solve via dynamic programming. Our approach is simple to implement, highly efficient and does not require the presence of frequent dialogues, subtitles, and character face recognition. Experiments on various movies demonstrate that our method can successfully align the movie script sentences with the video frames of movies.",
"title": ""
},
{
"docid": "2d59fe09633ee41c60e9e951986e56a6",
"text": "Face alignment and 3D face reconstruction are traditionally accomplished as separated tasks. By exploring the strong correlation between 2D landmarks and 3D shapes, in contrast, we propose a joint face alignment and 3D face reconstruction method to simultaneously solve these two problems for 2D face images of arbitrary poses and expressions. This method, based on a summation model of 3D face shapes and cascaded regression in 2D and 3D face shape spaces, iteratively and alternately applies two cascaded regressors, one for updating 2D landmarks and the other for 3D face shape. The 3D face shape and the landmarks are correlated via a 3D-to-2D mapping matrix. Unlike existing methods, the proposed method can fully automatically generate both pose-and-expression-normalized (PEN) and expressive 3D face shapes and localize both visible and invisible 2D landmarks. Based on the PEN 3D face shapes, we devise a method to enhance face recognition accuracy across poses and expressions. Both linear and nonlinear implementations of the proposed method are presented and evaluated in this paper. Extensive experiments show that the proposed method can achieve the state-of-the-art accuracy in both face alignment and 3D face reconstruction, and benefit face recognition owing to its reconstructed PEN 3D face shapes.",
"title": ""
},
{
"docid": "3c514740d7f8ce78f9afbaca92dc3b1c",
"text": "In the Brazil nut problem (BNP), hard spheres with larger diameters rise to the top. There are various explanations (percolation, reorganization, convection), but a broad understanding or control of this effect is by no means achieved. A theory is presented for the crossover from BNP to the reverse Brazil nut problem based on a competition between the percolation effect and the condensation of hard spheres. The crossover condition is determined, and theoretical predictions are compared to molecular dynamics simulations in two and three dimensions.",
"title": ""
},
{
"docid": "16d949f6915cbb958cb68a26c6093b6b",
"text": "Overweight and obesity are a global epidemic, with over one billion overweight adults worldwide (300+ million of whom are obese). Obesity is linked to several serious health problems and medical conditions. Medical experts agree that physical activity is critical to maintaining fitness, reducing weight, and improving health, yet many people have difficulty increasing and maintaining physical activity in everyday life. Clinical studies have shown that health benefits can occur from simply increasing the number of steps one takes each day and that social support can motivate people to stay active. In this paper, we describe Houston, a prototype mobile phone application for encouraging activity by sharing step count with friends. We also present four design requirements for technologies that encourage physical activity that we derived from a three-week long in situ pilot study that was conducted with women who wanted to increase their physical activity.",
"title": ""
},
{
"docid": "179d8daa30a7986c8f345a47eabfb2c8",
"text": "A key advantage of taking a statistical approach to spoken dialogue systems is the ability to formalise dialogue policy design as a stochastic optimization problem. However, since dialogue policies are learnt by interactively exploring alternative dialogue paths, conventional static dialogue corpora cannot be used directly for training and instead, a user simulator is commonly used. This paper describes a novel statistical user model based on a compact stack-like state representation called a user agenda which allows state transitions to be modeled as sequences of push- and pop-operations and elegantly encodes the dialogue history from a user's point of view. An expectation-maximisation based algorithm is presented which models the observable user output in terms of a sequence of hidden states and thereby allows the model to be trained on a corpus of minimally annotated data. Experimental results with a real-world dialogue system demonstrate that the trained user model can be successfully used to optimise a dialogue policy which outperforms a hand-crafted baseline in terms of task completion rates and user satisfaction scores.",
"title": ""
},
{
"docid": "d9fe0834ccf80bddadc5927a8199cd2c",
"text": "Deep Residual Networks (ResNets) have recently achieved state-of-the-art results on many challenging computer vision tasks. In this work we analyze the role of Batch Normalization (BatchNorm) layers on ResNets in the hope of improving the current architecture and better incorporating other normalization techniques, such as Normalization Propagation (NormProp), into ResNets. Firstly, we verify that BatchNorm helps distribute representation learning to residual blocks at all layers, as opposed to a plain ResNet without BatchNorm where learning happens mostly in the latter part of the network. We also observe that BatchNorm well regularizes Concatenated ReLU (CReLU) activation scheme on ResNets, whose magnitude of activation grows by preserving both positive and negative responses when going deeper into the network. Secondly, we investigate the use of NormProp as a replacement for BatchNorm in ResNets. Though NormProp theoretically attains the same effect as BatchNorm on generic convolutional neural networks, the identity mapping of ResNets invalidates its theoretical promise and NormProp exhibits a significant performance drop when naively applied. To bridge the gap between BatchNorm and NormProp in ResNets, we propose a simple modification to NormProp and employ the CReLU activation scheme. We experiment on visual object recognition benchmark datasets such as CIFAR10/100 and ImageNet and demonstrate that 1) the modified NormProp performs better than the original NormProp but is still not comparable to BatchNorm and 2) CReLU improves the performance of ResNets with or without normalizations.",
"title": ""
},
{
"docid": "be9b40cc2e2340249584f7324e26c4d3",
"text": "This paper provides a unified account of two schools of thinking in information retrieval modelling: the generative retrieval focusing on predicting relevant documents given a query, and the discriminative retrieval focusing on predicting relevancy given a query-document pair. We propose a game theoretical minimax game to iteratively optimise both models. On one hand, the discriminative model, aiming to mine signals from labelled and unlabelled data, provides guidance to train the generative model towards fitting the underlying relevance distribution over documents given the query. On the other hand, the generative model, acting as an attacker to the current discriminative model, generates difficult examples for the discriminative model in an adversarial way by minimising its discrimination objective. With the competition between these two models, we show that the unified framework takes advantage of both schools of thinking: (i) the generative model learns to fit the relevance distribution over documents via the signals from the discriminative model, and (ii) the discriminative model is able to exploit the unlabelled data selected by the generative model to achieve a better estimation for document ranking. Our experimental results have demonstrated significant performance gains as much as 23.96% on Precision@5 and 15.50% on MAP over strong baselines in a variety of applications including web search, item recommendation, and question answering.",
"title": ""
}
] |
scidocsrr
|
c0a9f15e4d3fa4681d56246e230c03b4
|
A Heart Disease Prediction Model Using Decision Tree
|
[
{
"docid": "0778eff54b2f48c9ed4554c617b2dcab",
"text": "The diagnosis of heart disease is a significant and tedious task in medicine. The healthcare industry gathers enormous amounts of heart disease data that regrettably, are not “mined” to determine concealed information for effective decision making by healthcare practitioners. The term Heart disease encompasses the diverse diseases that affect the heart. Cardiomyopathy and Cardiovascular disease are some categories of heart diseases. The reduction of blood and oxygen supply to the heart leads to heart disease. In this paper the data classification is based on supervised machine learning algorithms which result in accuracy, time taken to build the algorithm. Tanagra tool is used to classify the data and the data is evaluated using 10-fold cross validation and the results are compared.",
"title": ""
},
{
"docid": "ae2e62bd0e51299661822a85bd690cd1",
"text": "Today medical services have come a long way to treat patients with various diseases. Among the most lethal one is the heart disease problem which cannot be seen with a naked eye and comes instantly when its limitations are reached. Today diagnosing patients correctly and administering effective treatments have become quite a challenge. Poor clinical decisions may end to patient death and which cannot be afforded by the hospital as it loses its reputation. The cost to treat a patient with a heart problem is quite high and not affordable by every patient. To achieve a correct and cost effective treatment computer-based information and/or decision support Systems can be developed to do the task. Most hospitals today use some sort of hospital information systems to manage their healthcare or patient data. These systems typically generate huge amounts of data which take the form of numbers, text, charts and images. Unfortunately, these data are rarely used to support clinical decision making. There is a wealth of hidden information in these data that is largely untapped. This raises an important question: \" How can we turn data into useful information that can enable healthcare practitioners to make intelligent clinical decisions? \" So there is need of developing a master's project which will help practitioners predict the heart disease before it occurs.The diagnosis of diseases is a vital and intricate job in medicine. The recognition of heart disease from diverse features or signs is a multi-layered problem that is not free from false assumptions and is frequently accompanied by impulsive effects. Thus the attempt to exploit knowledge and experience of several specialists and clinical screening data of patients composed in databases to assist the diagnosis procedure is regarded as a valuable option.",
"title": ""
}
] |
[
{
"docid": "8e2530837982c698fadacec3c01f25e0",
"text": "Fuzz testing is widely used as an automatic solution for discovering vulnerabilities in binary programs that process files. Restricted by their high blindness and low code path coverage, fuzzing tests typically provide quite low efficiencies. In this paper, a novel API in-memory fuzz testing technique for eliminating the blindness of existing techniques is discussed. This technique employs dynamic taint analyses to locate the routines and instructions that belong to the target binary executables, and it consists of parsing and processing the input data. Within the testing phase, binary instrumentation is used to construct loops around such routines, in which the contained taint memory values are mutated in each loop. According to experiments using the prototype tool, this technique could effectively detect defects such as stack overflows. Comparedwith traditional fuzzing tools, this API in-memory fuzzing eliminated the bottleneck of interrupting execution paths and gained a greater than 95% enhancement in execution speed. Communicated by V. Loia. B Baojiang Cui cuibj@bupt.edu.cn Fuwei Wang fuwei.wfw@alibaba-inc.com Xiaofeng Chen xfchen@xidian.edu.cn 1 Beijing University of Posts and Telecommunications and National Engineering Laboratory for Mobile Network Security, Beijing, China 2 Security Department, Alibaba Group, Hangzhou, China 3 China Information Technology Security Evaluation Center, Beijing, China 4 School of Telecommunications Engineering, Xidian University, Xi’an, China",
"title": ""
},
{
"docid": "16f58cda028e7c542074832be620ec53",
"text": "A general circuit configuration for cross-coupled wideband bandstop filters is proposed. The distinct filtering characteristics of this new type of transmission line filter are investigated theoretically and experimentally. It is shown that a ripple stopband can be created, leading to a quasi-elliptic function response that enhances the rejection bandwidth. A demonstrator with approximately 80% fractional bandwidth at a mid-stopband frequency of 4 GHz is developed and presented. The proposed filter is successfully realized in theory and verified by full-wave electromagnetic simulation and the experiment. Theoretical, simulated, and measured results are in excellent agreement.",
"title": ""
},
{
"docid": "a218d5aac0f5d52d3828cdff05a9009b",
"text": "This paper proposes a single-stage high-power-factor (HPF) LED driver with coupled inductors for street-lighting applications. The presented LED driver integrates a dual buck-boost power-factor-correction (PFC) ac-dc converter with coupled inductors and a half-bridge-type LLC dc-dc resonant converter into a single-stage-conversion circuit topology. The coupled inductors inside the dual buck-boost converter subcircuit are designed to be operated in the discontinuous-conduction mode for obtaining high power-factor (PF). The half-bridge-type LLC resonant converter is designed for achieving soft-switching on two power switches and output rectifier diodes, in order to reduce their switching losses. This paper develops and implements a cost-effective driver for powering a 144-W-rated LED street-lighting module with input utility-line voltage ranging from 100 to 120 V. The tested prototype yields satisfying experimental results, including high circuit efficiency (>89.5%), low input-current total-harmonic distortion (<; 5.5%), high PF (> 0.99), low output-voltage ripple (<; 7.5%), and low output-current ripple (<; 5%), thus demonstrating the feasibility of the proposed LED driver.",
"title": ""
},
{
"docid": "b584a0e8f8d15ad2b4db6ace48d589ef",
"text": "In recent years, IT project failures have received a great deal of attention in the press as well as the boardroom. In an attempt to avoid disasters going forward, many organizations are now learning from the past by conducting retrospectives—that is, project postmortems or post-implementation reviews. While each individual retrospective tells a unique story and contributes to organizational learning, even more insight can be gained by examining multiple retrospectives across a variety of organizations over time. This research aggregates the knowledge gained from 99 retrospectives conducted in 74 organizations over the past seven years. It uses the findings to reveal the most common mistakes and suggest best practices for more effective project management.2",
"title": ""
},
{
"docid": "0f78628c309cc863680d60dd641cb7f0",
"text": "A systematic review was conducted to evaluate whether chocolate or its constituents were capable of influencing cognitive function and/or mood. Studies investigating potentially psychoactive fractions of chocolate were also included. Eight studies (in six articles) met the inclusion criteria for assessment of chocolate or its components on mood, of which five showed either an improvement in mood state or an attenuation of negative mood. Regarding cognitive function, eight studies (in six articles) met the criteria for inclusion, of which three revealed clear evidence of cognitive enhancement (following cocoa flavanols and methylxanthine). Two studies failed to demonstrate behavioral benefits but did identify significant alterations in brain activation patterns. It is unclear whether the effects of chocolate on mood are due to the orosensory characteristics of chocolate or to the pharmacological actions of chocolate constituents. Two studies have reported acute cognitive effects of supplementation with cocoa polyphenols. Further exploration of the effect of chocolate on cognitive facilitation is recommended, along with substantiation of functional brain changes associated with the components of cocoa.",
"title": ""
},
{
"docid": "03ebf532ff9df2cdd0f2a28cb2f55450",
"text": "We develop two variants of an energy-efficient cooperative diversity protocol that combats fading induced by multipath propagation in wireless networks. The underlying techniques build upon the classical relay channel and related work and exploit space diversity available at distributed antennas through coordinated transmission and processing by cooperating radios. While applicable to any wireless setting, these protocols are particularly attractive in ad-hoc or peer-to-peer wireless networks, in which radios are typically constrained to employ a single antenna. Substantial energy-savings resulting from these protocols can lead to reduced battery drain, longer network lifetime, and improved network performance in terms of, e.g., capacity.",
"title": ""
},
{
"docid": "ebf92a0faf6538f1d2b85fb2aa497e80",
"text": "The generally accepted assumption by most multimedia researchers is that learning is inhibited when on-screen text and narration containing the same information is presented simultaneously, rather than on-screen text or narration alone. This is known as the verbal redundancy effect. Are there situations where the reverse is true? This research was designed to investigate the reverse redundancy effect for non-native English speakers learning English reading comprehension, where two instructional modes were used the redundant mode and the modality mode. In the redundant mode, static pictures and audio narration were presented with synchronized redundant on-screen text. In the modality mode, only static pictures and audio were presented. In both modes, learners were allowed to control the pacing of the lessons. Participants were 209 Yemeni learners in their first year of tertiary education. Examination of text comprehension scores indicated that those learners who were exposed to the redundancy mode performed significantly better than learners in the modality mode. They were also significantly more motivated than their counterparts in the modality mode. This finding has added an important modification to the redundancy effect. That is the reverse redundancy effect is true for multimedia learning of English as a foreign language for students where textual information was foreign to them. In such situations, the redundant synchronized on-screen text did not impede learning; rather it reduced the cognitive load and thereby enhanced learning.",
"title": ""
},
{
"docid": "649a63144ef4b9db98781eba4f3a6179",
"text": "This paper focuses on a collection of methods that can be used to analyze the water-energy-food (WEF) nexus. We classify these methods as qualitative or quantitative for interdisciplinary and transdisciplinary research approaches. The methods for interdisciplinary research approaches can be used to unify a collection of related variables, visualize the research problem, evaluate the issue, and simulate the system of interest. Qualitative methods are generally used to describe the nexus in the region of interest, and include primary research methods such as Questionnaire Surveys, as well as secondary research methods such as Ontology Engineering and Integrated Maps. Quantitative methods for examining the nexus include Physical Models, Benefit-Cost Analysis (BCA), OPEN ACCESS",
"title": ""
},
{
"docid": "5618f1415cace8bb8c4773a7e44a4e3f",
"text": "Methods of evaluating and comparing the performance of diagnostic tests are of increasing importance as new tests are developed and marketed. When a test is based on an observed variable that lies on a continuous or graded scale, an assessment of the overall value of the test can be made through the use of a receiver operating characteristic (ROC) curve. The curve is constructed by varying the cutpoint used to determine which values of the observed variable will be considered abnormal and then plotting the resulting sensitivities against the corresponding false positive rates. When two or more empirical curves are constructed based on tests performed on the same individuals, statistical analysis on differences between curves must take into account the correlated nature of the data. This paper presents a nonparametric approach to the analysis of areas under correlated ROC curves, by using the theory on generalized U-statistics to generate an estimated covariance matrix.",
"title": ""
},
{
"docid": "83797d5698f6962141744f591d946fa5",
"text": "In this paper, an S-band internally harmonic matched GaN FET is presented, which is designed so that up to third harmonic impedance is tuned to high efficiency condition. Harmonic load pull measurements were done for a small transistor cell at first. It was found that power added efficiency (PAE) of 78% together with 6W output power can be obtained by tuning impedances up to 3rd harmonic for both input and output sides. Then matching circuit was designed for large gate periphery multi-cell transistor. To make the circuit size small, harmonic matching was done in a hermetically sealed package. With total gate width of 64mm, 330W output power and 62% PAE was successfully obtained.",
"title": ""
},
{
"docid": "b2d1a0befef19d466cd29868d5cf963b",
"text": "Accurate prediction of the functional effect of genetic variation is critical for clinical genome interpretation. We systematically characterized the transcriptome effects of protein-truncating variants, a class of variants expected to have profound effects on gene function, using data from the Genotype-Tissue Expression (GTEx) and Geuvadis projects. We quantitated tissue-specific and positional effects on nonsense-mediated transcript decay and present an improved predictive model for this decay. We directly measured the effect of variants both proximal and distal to splice junctions. Furthermore, we found that robustness to heterozygous gene inactivation is not due to dosage compensation. Our results illustrate the value of transcriptome data in the functional interpretation of genetic variants.",
"title": ""
},
{
"docid": "9326b7c1bd16e7db931131f77aaad687",
"text": "We argue in this article that many common adverbial phrases generally taken to signal a discourse relation between syntactically connected units within discourse structure instead work anaphorically to contribute relational meaning, with only indirect dependence on discourse structure. This allows a simpler discourse structure to provide scaffolding for compositional semantics and reveals multiple ways in which the relational meaning conveyed by adverbial connectives can interact with that associated with discourse structure. We conclude by sketching out a lexicalized grammar for discourse that facilitates discourse interpretation as a product of compositional rules, anaphor resolution, and inference.",
"title": ""
},
{
"docid": "f2fb948a8e133be27dd2d27a3601606f",
"text": "If a document is about travel, we may expect that short snippets of the document should also be about travel. We introduce a general framework for incorporating these types of invariances into a discriminative classifier. The framework imagines data as being drawn from a slice of a Lévy process. If we slice the Lévy process at an earlier point in time, we obtain additional pseudo-examples, which can be used to train the classifier. We show that this scheme has two desirable properties: it preserves the Bayes decision boundary, and it is equivalent to fitting a generative model in the limit where we rewind time back to 0. Our construction captures popular schemes such as Gaussian feature noising and dropout training, as well as admitting new generalizations.",
"title": ""
},
{
"docid": "ebf5efac65fe9912b573843941ffa8cd",
"text": "Objectives Despite the popularity of closed circuit television (CCTV), evidence of its crime prevention capabilities is inconclusive. Research has largely reported CCTV effect as ‘‘mixed’’ without explaining this variance. The current study contributes to the literature by testing the influence of several micro-level factors on changes in crime levels within CCTV areas of Newark, NJ. Methods Viewsheds, denoting the line-of-sight of CCTV cameras, were units of analysis (N = 117). Location quotients, controlling for viewshed size and control-area crime incidence, measured changes in the levels of six crime categories, from the pre-installation period to the post-installation period. Ordinary least squares regression models tested the influence of specific micro-level factors—environmental features, camera line-of-sight, enforcement activity, and camera design—on each crime category. Results First, the influence of environmental features differed across crime categories, with specific environs being related to the reduction of certain crimes and the increase of others. Second, CCTV-generated enforcement was related to the reduction of overall crime, violent crime and theft-from-auto. Third, obstructions to CCTV line-of-sight caused by immovable objects were related to increased levels of auto theft and decreased levels of violent crime, theft from auto and robbery. Conclusions The findings suggest that CCTV operations should be designed in a manner that heightens their deterrent effect. Specifically, police should account for the presence of crime generators/attractors and ground-level obstructions when selecting camera sites, and design the operational strategy in a manner that generates maximum levels of enforcement.",
"title": ""
},
{
"docid": "afe0c431852191bc2316d1c5091f239b",
"text": "Dynamic models of pneumatic artificial muscles (PAMs) are important for simulation of the movement dynamics of the PAM-based actuators and also for their control. The simple models of PAMs are geometric models, which can be relatively easy used under certain simplification for obtaining of the static and dynamic characteristics of the pneumatic artificial muscle. An advanced geometric muscle model is used in paper for describing the dynamic behavior of PAM based antagonistic actuator.",
"title": ""
},
{
"docid": "f0659349cab12decbc4d07eb74361b79",
"text": "This article suggests that the context and process of resource selection have an important influence on firm heterogeneity and sustainable competitive advantage. It is argued that a firm’s sustainable advantage depends on its ability to manage the institutional context of its resource decisions. A firm’s institutional context includes its internal culture as well as broader influences from the state, society, and interfirm relations that define socially acceptable economic behavior. A process model of firm heterogeneity is proposed that combines the insights of a resourcebased view with the institutional perspective from organization theory. Normative rationality, institutional isolating mechanisms, and institutional sources of firm homogeneity are proposed as determinants of rent potential that complement and extend resource-based explanations of firm variation and sustainable competitive advantage. The article suggests that both resource capital and institutional capital are indispensable to sustainable competitive advantage. 1997 by John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "a8ff7afc96f0bf65ce80131617d5e156",
"text": "This paper presents a new algorithm for force directed graph layout on the GPU. The algorithm, whose goal is to compute layouts accurately and quickly, has two contributions. The first contribution is proposing a general multi-level scheme, which is based on spectral partitioning. The second contribution is computing the layout on the GPU. Since the GPU requires a data parallel programming model, the challenge is devising a mapping of a naturally unstructured graph into a well-partitioned structured one. This is done by computing a balanced partitioning of a general graph. This algorithm provides a general multi-level scheme, which has the potential to be used not only for computation on the GPU, but also on emerging multi-core architectures. The algorithm manages to compute high quality layouts of large graphs in a fraction of the time required by existing algorithms of similar quality. An application for visualization of the topologies of ISP (Internet service provider) networks is presented.",
"title": ""
},
{
"docid": "9f9719336bf6497d7c71590ac61a433b",
"text": "College and universities are increasingly using part-time, adjunct instructors on their faculties to facilitate greater fiscal flexibility. However, critics argue that the use of adjuncts is causing the quality of higher education to deteriorate. This paper addresses questions about the impact of adjuncts on student outcomes. Using a unique dataset of public four-year colleges in Ohio, we quantify how having adjunct instructors affects student persistence after the first year. Because students taking courses from adjuncts differ systematically from other students, we use an instrumental variable strategy to address concerns about biases. The findings suggest that, in general, students taking an \"adjunct-heavy\" course schedule in their first semester are adversely affected. They are less likely to persist into their second year. We reconcile these findings with previous research that shows that adjuncts may encourage greater student interest in terms of major choice and subsequent enrollments in some disciplines, most notably fields tied closely to specific professions. The authors are grateful for helpful suggestions from Ronald Ehrenberg and seminar participants at the NBER Labor Studies Meetings. The authors also thank the Ohio Board of Regents for their support during this research project. Rod Chu, Darrell Glenn, Robert Sheehan, and Andy Lechler provided invaluable access and help with the data. Amanda Starc, James Carlson, Erin Riley, and Suzan Akin provided excellent research assistance. All opinions and mistakes are our own. The authors worked equally on the project and are listed alphabetically.",
"title": ""
},
{
"docid": "5b29fce7205ec2c49cd62bd8ecca65de",
"text": "This article presents a novel technique for automatic archaeological sherd classification. Sherds that are found in the field usually have little to no visible textual information such as symbols, graphs, or marks on them. This makes manual classification an extremely difficult and time-consuming task for conservators and archaeologists. For a bunch of sherds found in the field, an expert identifies different classes and indicates at least one representative sherd for each class (training sample). The proposed technique uses the representative sherds in order to correctly classify the remaining sherds. For each sherd, local features based on color and texture information are extracted and are then transformed into a global vector that describes the whole sherd image, using a new bag of words technique. Finally, a feature selection algorithm is applied that locates features with high discriminative power. Extensive experiments were performed in order to verify the effectiveness of the proposed technique and show very promising results.",
"title": ""
},
{
"docid": "2777fdcc4442c3d63b51b92710f3914d",
"text": "Non-invasive pressure simulators that regenerate oscillometric waveforms promise an alternative to expensive clinical trials for validating oscillometric noninvasive blood pressure devices. However, existing simulators only provide oscillometric pressure in cuff and thus have a limited accuracy. It is promising to build a physical simulator that contains a synthetic arm with a built-in brachial artery and an affiliated hydraulic model of cardiovascular system. To guide the construction of this kind of simulator, this paper presents a computer model of cardiovascular system with a relatively simple structure, where the distribution of pressures and flows in aorta root and brachial artery can be simulated, and the produced waves are accordant with the physical data. This model can be used to provide the parameters and structure that will be needed to build the new simulator.",
"title": ""
}
] |
scidocsrr
|
84b44e59d1161104f0d82040ae0e4c51
|
A probabilistic framework for semi-supervised clustering
|
[
{
"docid": "3ac2f2916614a4e8f6afa1c31d9f704d",
"text": "This paper shows that the accuracy of learned text classifiers can be improved by augmenting a small number of labeled training documents with a large pool of unlabeled documents. This is important because in many text classification problems obtaining training labels is expensive, while large quantities of unlabeled documents are readily available. We introduce an algorithm for learning from labeled and unlabeled documents based on the combination of Expectation-Maximization (EM) and a naive Bayes classifier. The algorithm first trains a classifier using the available labeled documents, and probabilistically labels the unlabeled documents. It then trains a new classifier using the labels for all the documents, and iterates to convergence. This basic EM procedure works well when the data conform to the generative assumptions of the model. However these assumptions are often violated in practice, and poor performance can result. We present two extensions to the algorithm that improve classification accuracy under these conditions: (1) a weighting factor to modulate the contribution of the unlabeled data, and (2) the use of multiple mixture components per class. Experimental results, obtained using text from three different real-world tasks, show that the use of unlabeled data reduces classification error by up to 30%.",
"title": ""
}
] |
[
{
"docid": "a895b0b1b51c5eb20679dcea0dbac228",
"text": "Network analysis is an important task in a wide variety of application domains including analysis of social, financial, or transportation networks, to name a few. The appropriate visualization of graphs may reveal useful insight into relationships between network entities and subnetworks. However, often further algorithmic analysis of network structures is needed. In this paper, we propose a system for effective visual analysis of graphs which supports multiple analytic tasks. Our system enhances any graph layout algorithm by an analysis stage which detects predefined or arbitrarily specified subgraph structures (motifs). These motifs in turn are used to filter or aggregate the given network, which is particularly useful for search and analysis of interesting structures in large graphs. Our approach is fully interactive and can be iteratively refined, supporting analysis of graph structures at multiple levels of abstraction. Furthermore, our system supports the analysis of dataor user-driven graph dynamics by showing the implications of graph changes on the identified subgraph structures. The interactive facilities may be flexibly combined for gaining deep insight into the network structures for a wide range of analysis tasks. While we focus on directed, weighted graphs, the proposed tools can be easily extended to undirected and unweighted graphs. The usefulness of our approach is demonstrated by application on a phone call data set [18].",
"title": ""
},
{
"docid": "0871a9e6c97a0f26811bd0f6ae534b03",
"text": "OBJECTIVE\nTo measure the intracranial translucency (IT) and the cisterna magna (CM), to produce reference ranges and to examine the interobserver and intraobserver variability of those measurements. To examine the possible association of IT with chromosomal abnormalities.\n\n\nMETHODS\nProspective study on pregnancies assessed at 11 to 14 weeks. IT was measured retrospectively in 17 cases with aneuploidy.\n\n\nRESULTS\nTo produce reference ranges, 465 fetuses were used. IT and CM correlated linearly with crown-rump-length (CRL) and were independent of maternal demographic characteristics and biochemical indices. IT had a weak positive correlation with nuchal translucency. For IT the intraclass correlation coefficient was 0.88 for intraobserver variability and 0.83 for interobserver variability. For CM the intraclass correlation coefficient was 0.95 for intraobserver variability and 0.84 for interobserver variability. The IT multiple of the median was significantly increased in the chromosomally abnormal fetuses (1.02 for the normal and 1.28 for the chromosomally abnormal fetuses, Mann Whitney p < 0.001). IT multiple of the median was a significant predictor of chromosomal abnormality (Receiver Operator Characteristic curve analysis: Area under the curve = 0.86, CI=0.76-0.96, p<0.001).\n\n\nCONCLUSION\nIntracranial translucency and CM can be measured reliably at the 11 to 14 weeks examination and the measurements are highly reproducible. IT appears to be increased in fetuses with chromosomal abnormalities.",
"title": ""
},
{
"docid": "3690d1655578d94f620f510c8a5e1e40",
"text": "This paper presents a discrete-time option pricing model that is rooted in Reinforcement Learning (RL), and more specifically in the famous Q-Learning method of RL. We construct a riskadjusted Markov Decision Process for a discrete-time version of the classical Black-ScholesMerton (BSM) model, where the option price is an optimal Q-function, while the optimal hedge is a second argument of this optimal Q-function, so that both the price and hedge are parts of the same formula. Pricing is done by learning to dynamically optimize risk-adjusted returns for an option replicating portfolio, as in the Markowitz portfolio theory. Using Q-Learning and related methods, once created in a parametric setting, the model is able to go model-free and learn to price and hedge an option directly from data generated from a dynamic replicating portfolio which is rebalanced at discrete times. If the world is according to BSM, our risk-averse Q-Learner converges, given enough training data, to the true BSM price and hedge ratio of the option in the continuous time limit ∆t → 0, even if hedges applied at the stage of data generation are completely random (i.e. it can learn the BSM model itself, too!), because QLearning is an off-policy algorithm. If the world is different from a BSM world, the Q-Learner will find it out as well, because Q-Learning is a model-free algorithm. For finite time steps ∆t, the Q-Learner is able to efficiently calculate both the optimal hedge and optimal price for the option directly from trading data, and without an explicit model of the world. This suggests that RL may provide efficient data-driven and model-free methods for optimal pricing and hedging of options, once we depart from the academic continuous-time limit ∆t → 0, and vice versa, option pricing methods developed in Mathematical Finance may be viewed as special cases of model-based Reinforcement Learning. Further, due to simplicity and tractability of our model which only needs basic linear algebra (plus Monte Carlo simulation, if we work with synthetic data), and its close relation to the original BSM model, we suggest that our model could be used for benchmarking of different RL algorithms for financial trading applications. I would like to thank my students for their interest in this work and stimulating discussions that challenged me to look for simple explanations of complex topics. I thank Tom N.L. for an initial implementation of a timediscretized BSM model. This work is dedicated to my wife Lola on the occasion of her birthday and receiving a doctoral degree.",
"title": ""
},
{
"docid": "e52c40a4fcb6cdb3d9b177e371127185",
"text": "Over the last years, there has been substantial progress in robust manipulation in unstructured environments. The long-term goal of our work is to get away from precise, but very expensive robotic systems and to develop affordable, potentially imprecise, self-adaptive manipulator systems that can interactively perform tasks such as playing with children. In this paper, we demonstrate how a low-cost off-the-shelf robotic system can learn closed-loop policies for a stacking task in only a handful of trials—from scratch. Our manipulator is inaccurate and provides no pose feedback. For learning a controller in the work space of a Kinect-style depth camera, we use a model-based reinforcement learning technique. Our learning method is data efficient, reduces model bias, and deals with several noise sources in a principled way during long-term planning. We present a way of incorporating state-space constraints into the learning process and analyze the learning gain by exploiting the sequential structure of the stacking task.",
"title": ""
},
{
"docid": "6afe0360f074304e9da9c100e28e9528",
"text": "Unikernels are a promising alternative for application deployment in cloud platforms. They comprise a very small footprint, providing better deployment agility and portability among virtualization platforms. Similar to Linux containers, they are a lightweight alternative for deploying distributed applications based on microservices. However, the comparison of unikernels with other virtualization options regarding the concurrent provisioning of instances, as in the case of microservices-based applications, is still lacking. This paper provides an evaluation of KVM (Virtual Machines), Docker (Containers), and OSv (Unikernel), when provisioning multiple instances concurrently in an OpenStack cloud platform. We confirmed that OSv outperforms the other options and also identified opportunities for optimization.",
"title": ""
},
{
"docid": "c7a73ab57087752d50d79d38a84c0775",
"text": "In this paper, we address the problem of model-free online object tracking based on color representations. According to the findings of recent benchmark evaluations, such trackers often tend to drift towards regions which exhibit a similar appearance compared to the object of interest. To overcome this limitation, we propose an efficient discriminative object model which allows us to identify potentially distracting regions in advance. Furthermore, we exploit this knowledge to adapt the object representation beforehand so that distractors are suppressed and the risk of drifting is significantly reduced. We evaluate our approach on recent online tracking benchmark datasets demonstrating state-of-the-art results. In particular, our approach performs favorably both in terms of accuracy and robustness compared to recent tracking algorithms. Moreover, the proposed approach allows for an efficient implementation to enable online object tracking in real-time.",
"title": ""
},
{
"docid": "4805f0548cb458b7fad623c07ab7176d",
"text": "This paper presents a unified control framework for controlling a quadrotor tail-sitter UAV. The most salient feature of this framework is its capability of uniformly treating the hovering and forward flight, and enabling continuous transition between these two modes, depending on the commanded velocity. The key part of this framework is a nonlinear solver that solves for the proper attitude and thrust that produces the required acceleration set by the position controller in an online fashion. The planned attitude and thrust are then achieved by an inner attitude controller that is global asymptotically stable. To characterize the aircraft aerodynamics, a full envelope wind tunnel test is performed on the full-scale quadrotor tail-sitter UAV. In addition to planning the attitude and thrust required by the position controller, this framework can also be used to analyze the UAV's equilibrium state (trimmed condition), especially when wind gust is present. Finally, simulation results are presented to verify the controller's capacity, and experiments are conducted to show the attitude controller's performance.",
"title": ""
},
{
"docid": "423d15bbe1c47bc6225030307fc8e379",
"text": "In a secret sharing scheme, a datumd is broken into shadows which are shared by a set of trustees. The family {P′⊆P:P′ can reconstructd} is called the access structure of the scheme. A (k, n)-threshold scheme is a secret sharing scheme having the access structure {P′⊆P: |P′|≥k}. In this paper, by observing a simple set-theoretic property of an access structure, we propose its mathematical definition. Then we verify the definition by proving that every family satisfying the definition is realized by assigning two more shadows of a threshold scheme to trustees.",
"title": ""
},
{
"docid": "d8cd01ef5f39035b26124544c2e5c5aa",
"text": "Framing protocols employ cyclic redundancy check (CRC) to detect errors incurred during transmission. Generally whole frame is protected using CRC and upon detection of error, retransmission is requested. But certain protocols demand for single bit error correction capabilities for the header part of the frame, which often plays an important role in receiver synchronization. At a speed of 10 Gbps, header error correction implementation in hardware can be a bottleneck. This work presents a hardware efficient way of implementing CRC-16 over 16 bits of data, multiple bit error detection and single bit error correction on FPGA device.",
"title": ""
},
{
"docid": "af7698aa556687a7ec9c12047eed5433",
"text": "Debate continues over the precise causal contribution made by mesolimbic dopamine systems to reward. There are three competing explanatory categories: ‘liking’, learning, and ‘wanting’. Does dopamine mostly mediate the hedonic impact of reward (‘liking’)? Does it instead mediate learned predictions of future reward, prediction error teaching signals and stamp in associative links (learning)? Or does dopamine motivate the pursuit of rewards by attributing incentive salience to reward-related stimuli (‘wanting’)? Each hypothesis is evaluated here, and it is suggested that the incentive salience or ‘wanting’ hypothesis of dopamine function may be consistent with more evidence than either learning or ‘liking’. In brief, recent evidence indicates that dopamine is neither necessary nor sufficient to mediate changes in hedonic ‘liking’ for sensory pleasures. Other recent evidence indicates that dopamine is not needed for new learning, and not sufficient to directly mediate learning by causing teaching or prediction signals. By contrast, growing evidence indicates that dopamine does contribute causally to incentive salience. Dopamine appears necessary for normal ‘wanting’, and dopamine activation can be sufficient to enhance cue-triggered incentive salience. Drugs of abuse that promote dopamine signals short circuit and sensitize dynamic mesolimbic mechanisms that evolved to attribute incentive salience to rewards. Such drugs interact with incentive salience integrations of Pavlovian associative information with physiological state signals. That interaction sets the stage to cause compulsive ‘wanting’ in addiction, but also provides opportunities for experiments to disentangle ‘wanting’, ‘liking’, and learning hypotheses. Results from studies that exploited those opportunities are described here. In short, dopamine’s contribution appears to be chiefly to cause ‘wanting’ for hedonic rewards, more than ‘liking’ or learning for those rewards.",
"title": ""
},
{
"docid": "163c0be28804445bd99ad3e4a4e2c6dd",
"text": "We are witnessing a confluence between applied cryptography and secure hardware systems in enabling secure cloud computing. On one hand, work in applied cryptography has enabled efficient, oblivious data-structures and memory primitives. On the other, secure hardware and the emergence of Intel SGX has enabled a low-overhead and mass market mechanism for isolated execution. By themselves these technologies have their disadvantages. Oblivious memory primitives carry high performance overheads, especially when run non-interactively. Intel SGX, while more efficient, suffers from numerous softwarebased side-channel attacks, high context switching costs, and bounded memory size. In this work we build a new library of oblivious memory primitives, which we call ZeroTrace. ZeroTrace is designed to carefully combine state-of-the-art oblivious RAM techniques and SGX, while mitigating individual disadvantages of these technologies. To the best of our knowledge, ZeroTrace represents the first oblivious memory primitives running on a real secure hardware platform. ZeroTrace simultaneously enables a dramatic speed-up over pure cryptography and protection from softwarebased side-channel attacks. The core of our design is an efficient and flexible block-level memory controller that provides oblivious execution against any active software adversary, and across asynchronous SGX enclave terminations. Performance-wise, the memory controller can service requests for 4 B blocks in 1.2 ms and 1 KB blocks in 3.4 ms (given a 10 GB dataset). On top of our memory controller, we evaluate Set/Dictionary/List interfaces which can all perform basic operations (e.g., get/put/insert).",
"title": ""
},
{
"docid": "f8c2f5823ec951d64a48cdf645b83e04",
"text": "We explore the use of crowdsourcing to generate natural language in spoken dialogue systems. We introduce a methodology to elicit novel templates from the crowd based on a dialogue seed corpus, and investigate the effect that the amount of surrounding dialogue context has on the generation task. Evaluation is performed both with a crowd and with a system developer to assess the naturalness and suitability of the elicited phrases. Results indicate that the crowd is able to provide reasonable and diverse templates within this methodology. More work is necessary before elicited templates can be automatically plugged into the system.",
"title": ""
},
{
"docid": "2fcd7e151c658e29cacda5c4f5542142",
"text": "The connection between gut microbiota and energy homeostasis and inflammation and its role in the pathogenesis of obesity-related disorders are increasingly recognized. Animals models of obesity connect an altered microbiota composition to the development of obesity, insulin resistance, and diabetes in the host through several mechanisms: increased energy harvest from the diet, altered fatty acid metabolism and composition in adipose tissue and liver, modulation of gut peptide YY and glucagon-like peptide (GLP)-1 secretion, activation of the lipopolysaccharide toll-like receptor-4 axis, and modulation of intestinal barrier integrity by GLP-2. Instrumental for gut microbiota manipulation is the understanding of mechanisms regulating gut microbiota composition. Several factors shape the gut microflora during infancy: mode of delivery, type of infant feeding, hospitalization, and prematurity. Furthermore, the key importance of antibiotic use and dietary nutrient composition are increasingly recognized. The role of the Western diet in promoting an obesogenic gut microbiota is being confirmation in subjects. Following encouraging results in animals, several short-term randomized controlled trials showed the benefit of prebiotics and probiotics on insulin sensitivity, inflammatory markers, postprandial incretins, and glucose tolerance. Future research is needed to unravel the hormonal, immunomodulatory, and metabolic mechanisms underlying microbe-microbe and microbiota-host interactions and the specific genes that determine the health benefit derived from probiotics. While awaiting further randomized trials assessing long-term safety and benefits on clinical end points, a healthy lifestyle--including breast lactation, appropriate antibiotic use, and the avoidance of excessive dietary fat intake--may ensure a friendly gut microbiota and positively affect prevention and treatment of metabolic disorders.",
"title": ""
},
{
"docid": "d35bc5ef2ea3ce24bbba87f65ae93a25",
"text": "Fog computing, complementary to cloud computing, has recently emerged as a new paradigm that extends the computing infrastructure from the center to the edge of the network. This article explores the design of a fog computing orchestration framework to support IoT applications. In particular, we focus on how the widely adopted cloud computing orchestration framework can be customized to fog computing systems. We first identify the major challenges in this procedure that arise due to the distinct features of fog computing. Then we discuss the necessary adaptations of the orchestration framework to accommodate these challenges.",
"title": ""
},
{
"docid": "dc2d5f9bfe41246ae9883aa6c0537c40",
"text": "Phosphatidylinositol 3-kinases (PI3Ks) are crucial coordinators of intracellular signalling in response to extracellular stimuli. Hyperactivation of PI3K signalling cascades is one of the most common events in human cancers. In this Review, we discuss recent advances in our knowledge of the roles of specific PI3K isoforms in normal and oncogenic signalling, the different ways in which PI3K can be upregulated, and the current state and future potential of targeting this pathway in the clinic.",
"title": ""
},
{
"docid": "6ff034e2ff0d54f7e73d23207789898d",
"text": "This letter presents two high-gain, multidirector Yagi-Uda antennas for use within the 24.5-GHz ISM band, realized through a multilayer, purely additive inkjet printing fabrication process on a flexible substrate. Multilayer material deposition is used to realize these 3-D antenna structures, including a fully printed 120- μm-thick dielectric substrate for microstrip-to-slotline feeding conversion. The antennas are fabricated, measured, and compared to simulated results showing good agreement and highlighting the reliable predictability of the printing process. An endfire realized gain of 8 dBi is achieved within the 24.5-GHz ISM band, presenting the highest-gain inkjet-printed antenna at this end of the millimeter-wave regime. The results of this work further demonstrate the feasibility of utilizing inkjet printing for low-cost, vertically integrated antenna structures for on-chip and on-package integration throughout the emerging field of high-frequency wireless electronics.",
"title": ""
},
{
"docid": "32b8087a31a588b03d5b6f4a100e6308",
"text": "This paper conceptually examines how and why projects and project teams may be conceived as highly generative episodic individual and team learning places that can serve as vehicles or agents to promote organizational learning. It draws on and dissects a broad and relevant literature concerning situated learning, organizational learning, learning spaces and project management. The arguments presented signal a movement towards a project workplace becoming more organizationally acknowledged and supported as a learning intense entity wherein, learning is a more conspicuous, deliberate and systematic social activity by project participants. This paper challenges conventional and limited organizational perceptions about project teams and their practices and discloses their extended value contributions to organizational learning development. © 2011 Elsevier Ltd. and IPMA. All rights reserved.",
"title": ""
},
{
"docid": "cb7caf457e1926ecd9af18981491441a",
"text": "MOH Office is the preventive side Primary Health Care unit in Sri Lanka. It conduct varies clinics and health programs to reduce mortality and morbidity. Health promotion and preventive programs along with early and rapid access to treatment are all key factors to improve health sector. In MOH clinics the waiting time can be divided in to 3 types. Which are waiting time for registration, before consultation and after consultation. These are the indicators for one part of the service quality. Long waiting time for registration is more common feature in MOH clinics in SL. Waiting time after consultation is less in MOH clinics. This study mainly focuses on reducing the waiting time for registration.",
"title": ""
},
{
"docid": "319285416d58c9b2da618bb6f0c8021c",
"text": "Facial expression analysis is one of the popular fields of research in human computer interaction (HCI). It has several applications in next generation user interfaces, human emotion analysis, behavior and cognitive modeling. In this paper, a facial expression classification algorithm is proposed which uses Haar classifier for face detection purpose, Local Binary Patterns(LBP) histogram of different block sizes of a face image as feature vectors and classifies various facial expressions using Principal Component Analysis (PCA). The algorithm is implemented in real time for expression classification since the computational complexity of the algorithm is small. A customizable approach is proposed for facial expression analysis, since the various expressions and intensity of expressions vary from person to person. The system uses grayscale frontal face images of a person to classify six basic emotions namely happiness, sadness, disgust, fear, surprise and anger.",
"title": ""
},
{
"docid": "33084a3b41e8932b4dfaba5825d469e4",
"text": "OBJECTIVE\nBecause adverse drug events (ADEs) are a serious health problem and a leading cause of death, it is of vital importance to identify them correctly and in a timely manner. With the development of Web 2.0, social media has become a large data source for information on ADEs. The objective of this study is to develop a relation extraction system that uses natural language processing techniques to effectively distinguish between ADEs and non-ADEs in informal text on social media.\n\n\nMETHODS AND MATERIALS\nWe develop a feature-based approach that utilizes various lexical, syntactic, and semantic features. Information-gain-based feature selection is performed to address high-dimensional features. Then, we evaluate the effectiveness of four well-known kernel-based approaches (i.e., subset tree kernel, tree kernel, shortest dependency path kernel, and all-paths graph kernel) and several ensembles that are generated by adopting different combination methods (i.e., majority voting, weighted averaging, and stacked generalization). All of the approaches are tested using three data sets: two health-related discussion forums and one general social networking site (i.e., Twitter).\n\n\nRESULTS\nWhen investigating the contribution of each feature subset, the feature-based approach attains the best area under the receiver operating characteristics curve (AUC) values, which are 78.6%, 72.2%, and 79.2% on the three data sets. When individual methods are used, we attain the best AUC values of 82.1%, 73.2%, and 77.0% using the subset tree kernel, shortest dependency path kernel, and feature-based approach on the three data sets, respectively. When using classifier ensembles, we achieve the best AUC values of 84.5%, 77.3%, and 84.5% on the three data sets, outperforming the baselines.\n\n\nCONCLUSIONS\nOur experimental results indicate that ADE extraction from social media can benefit from feature selection. With respect to the effectiveness of different feature subsets, lexical features and semantic features can enhance the ADE extraction capability. Kernel-based approaches, which can stay away from the feature sparsity issue, are qualified to address the ADE extraction problem. Combining different individual classifiers using suitable combination methods can further enhance the ADE extraction effectiveness.",
"title": ""
}
] |
scidocsrr
|
131c97d8f1f4ec4c3f809c6087ac07aa
|
Compressibility and Generalization in Large-Scale Deep Learning
|
[
{
"docid": "065ca3deb8cb266f741feb67e404acb5",
"text": "Recent research on deep convolutional neural networks (CNNs) has focused primarily on improving accuracy. For a given accuracy level, it is typically possible to identify multiple CNN architectures that achieve that accuracy level. With equivalent accuracy, smaller CNN architectures offer at least three advantages: (1) Smaller CNNs require less communication across servers during distributed training. (2) Smaller CNNs require less bandwidth to export a new model from the cloud to an autonomous car. (3) Smaller CNNs are more feasible to deploy on FPGAs and other hardware with limited memory. To provide all of these advantages, we propose a small CNN architecture called SqueezeNet. SqueezeNet achieves AlexNet-level accuracy on ImageNet with 50x fewer parameters. Additionally, with model compression techniques, we are able to compress SqueezeNet to less than 0.5MB (510× smaller than AlexNet). The SqueezeNet architecture is available for download here: https://github.com/DeepScale/SqueezeNet",
"title": ""
},
{
"docid": "da16264601254568abbe650b2381be6d",
"text": "This tutorial gives a concise overview of existing PAC-Bayesian theory focusing on three generalization bounds. The first is an Occam bound which handles rules with finite precision parameters and which states that generalization loss is near training loss when the number of bits needed to write the rule is small compared to the sample size. The second is a PAC-Bayesian bound providing a generalization guarantee for posterior distributions rather than for individual rules. The PAC-Bayesian bound naturally handles infinite precision rule parameters, L2 regularization, provides a bound for dropout training, and defines a natural notion of a single distinguished PAC-Bayesian posterior distribution. The third bound is a training-variance bound — a kind of bias-variance analysis but with bias replaced by expected training loss. The training-variance bound dominates the other bounds but is more difficult to interpret. It seems to suggest variance reduction methods such as bagging and may ultimately provide a more meaningful analysis of dropouts.",
"title": ""
},
{
"docid": "d07ba52b14c098ca5e2178ce64fc4403",
"text": "Consider the multivariate nonparametric regression model. It is shown that estimators based on sparsely connected deep neural networks with ReLU activation function and properly chosen network architecture achieve the minimax rates of convergence (up to log n-factors) under a general composition assumption on the regression function. The framework includes many well-studied structural constraints such as (generalized) additive models. While there is a lot of flexibility in the network architecture, the tuning parameter is the sparsity of the network. Specifically, we consider large networks with number of potential network parameters exceeding the sample size. The analysis gives some insights why multilayer feedforward neural networks perform well in practice. Interestingly, the depth (number of layers) of the neural network architectures plays an important role and our theory suggests that for nonparametric regression scaling the network depth with the logarithm of the sample size is natural. It is also shown that under the composition assumption wavelet estimators can only achieve suboptimal rates.",
"title": ""
}
] |
[
{
"docid": "d9d68377bb73d7abca39455b49abe8b7",
"text": "A boosting-based method of learning a feed-forward artificial neural network (ANN) with a single layer of hidden neurons and a single output neuron is presented. Initially, an algorithm called Boostron is described that learns a single-layer perceptron using AdaBoost and decision stumps. It is then extended to learn weights of a neural network with a single hidden layer of linear neurons. Finally, a novel method is introduced to incorporate non-linear activation functions in artificial neural network learning. The proposed method uses series representation to approximate non-linearity of activation functions, learns the coefficients of nonlinear terms by AdaBoost. It adapts the network parameters by a layer-wise iterative traversal of neurons and an appropriate reduction of the problem. A detailed performances comparison of various neural network models learned the proposed methods and those learned using the Least Mean Squared learning (LMS) and the resilient back-propagation (RPROP) is provided in this paper. Several favorable results are reported for 17 synthetic and real-world datasets with different degrees of difficulties for both binary and multi-class problems. Email addresses: mubasher.baig@nu.edu.pk, awais@lums.edu.pk (Mirza M. Baig, Mian. M. Awais), alfy@kfupm.edu.sa (El-Sayed M. El-Alfy) Preprint submitted to Neurocomputing March 9, 2017",
"title": ""
},
{
"docid": "c70d8ae9aeb8a36d1f68ba0067c74696",
"text": "Representing entities and relations in an embedding space is a well-studied approach for machine learning on relational data. Existing approaches, however, primarily focus on simple link structure between a finite set of entities, ignoring the variety of data types that are often used in knowledge bases, such as text, images, and numerical values. In this paper, we propose multimodal knowledge base embeddings (MKBE) that use different neural encoders for this variety of observed data, and combine them with existing relational models to learn embeddings of the entities and multimodal data. Further, using these learned embedings and different neural decoders, we introduce a novel multimodal imputation model to generate missing multimodal values, like text and images, from information in the knowledge base. We enrich existing relational datasets to create two novel benchmarks that contain additional information such as textual descriptions and images of the original entities. We demonstrate that our models utilize this additional information effectively to provide more accurate link prediction, achieving state-of-the-art results with a considerable gap of 5-7% over existing methods. Further, we evaluate the quality of our generated multimodal values via a user study. We have release the datasets and the opensource implementation of our models at https: //github.com/pouyapez/mkbe.",
"title": ""
},
{
"docid": "7e6eab1db77c8404720563d0eed1b325",
"text": "With the success of Open Data a huge amount of tabular data sources became available that could potentially be mapped and linked into the Web of (Linked) Data. Most existing approaches to “semantically label” such tabular data rely on mappings of textual information to classes, properties, or instances in RDF knowledge bases in order to link – and eventually transform – tabular data into RDF. However, as we will illustrate, Open Data tables typically contain a large portion of numerical columns and/or non-textual headers; therefore solutions that solely focus on textual “cues” are only partially applicable for mapping such data sources. We propose an approach to find and rank candidates of semantic labels and context descriptions for a given bag of numerical values. To this end, we apply a hierarchical clustering over information taken from DBpedia to build a background knowledge graph of possible “semantic contexts” for bags of numerical values, over which we perform a nearest neighbour search to rank the most likely candidates. Our evaluation shows that our approach can assign fine-grained semantic labels, when there is enough supporting evidence in the background knowledge graph. In other cases, our approach can nevertheless assign high level contexts to the data, which could potentially be used in combination with other approaches to narrow down the search space of possible labels.",
"title": ""
},
{
"docid": "16741aac03ea1a864ddab65c8c73eb7c",
"text": "This report describes a preliminary evaluation of performance of a cell-FPGA-like architecture for future hybrid \"CMOL\" circuits. Such circuits will combine a semiconduc-tor-transistor (CMOS) stack and a two-level nanowire crossbar with molecular-scale two-terminal nanodevices (program-mable diodes) formed at each crosspoint. Our cell-based architecture is based on a uniform CMOL fabric of \"tiles\". Each tile consists of 12 four-transistor basic cells and one (four times larger) latch cell. Due to high density of nanodevices, which may be used for both logic and routing functions, CMOL FPGA may be reconfigured around defective nanodevices to provide high defect tolerance. Using a semi-custom set of design automation tools we have evaluated CMOL FPGA performance for the Toronto 20 benchmark set, so far without optimization of several parameters including the power supply voltage and nanowire pitch. The results show that even without such optimization, CMOL FPGA circuits may provide a density advantage of more than two orders of magnitude over the traditional CMOS FPGA with the same CMOS design rules, at comparable time delay, acceptable power consumption and potentially high defect tolerance.",
"title": ""
},
{
"docid": "7a52fecf868040da5db3bd6fcbdcc0b2",
"text": "Mobile edge computing (MEC) is a promising paradigm to provide cloud-computing capabilities in close proximity to mobile devices in fifth-generation (5G) networks. In this paper, we study energy-efficient computation offloading (EECO) mechanisms for MEC in 5G heterogeneous networks. We formulate an optimization problem to minimize the energy consumption of the offloading system, where the energy cost of both task computing and file transmission are taken into consideration. Incorporating the multi-access characteristics of the 5G heterogeneous network, we then design an EECO scheme, which jointly optimizes offloading and radio resource allocation to obtain the minimal energy consumption under the latency constraints. Numerical results demonstrate energy efficiency improvement of our proposed EECO scheme.",
"title": ""
},
{
"docid": "f09c6cf181c19e7ddd64121f2e9d368c",
"text": "Authentication of biometric system is vulnerable to impostor attacks. Recent research considers face anti-spoofing as a binary classification problem. To differentiate between genuine access and fake attacks, many systems are trained and the number of counter measures is gradually increasing. In this paper, we propose a novel technique for face anti-spoofing. This method is based on Spatio-temporal information to distinguish between legitimate access and impostor videos or video sequences of picture attacks. The idea is to utilize convolutional neural network (CNN) with handcrafted technique such as LBP-TOP for feature extraction and training of the classifier. Proposed approach requires no preprocessing steps such as face detection and refining face regions or enlarging the original images with particular re-scaling ratios. CNN itself cannot learn temporal features but for face anti-spoofing spatio-temporal features are important. We cascade LBP-TOP with CNN to extract spatio-temporal features from video sequences and capture the most discriminative clues between genuine access and impostor attacks. Extensive experiments are conducted on two very challenging datasets: CASIA and REPLAY-ATTACK which are publically available and achieved high competitive score compared with state-of-art techniques results.",
"title": ""
},
{
"docid": "fa7416bd48a3f4b5edbbcefadc74f72d",
"text": "This paper introduces a meaning representation for spoken language understanding. The Alexa meaning representation language (AMRL), unlike previous approaches, which factor spoken utterances into domains, provides a common representation for how people communicate in spoken language. AMRL is a rooted graph, links to a large-scale ontology, supports cross-domain queries, finegrained types, complex utterances and composition. A spoken language dataset has been collected for Alexa, which contains ∼ 20k examples across eight domains. A version of this meaning representation was released to developers at a trade show in 2016.",
"title": ""
},
{
"docid": "2ac9b0d68c4147a6a4def86184e292c8",
"text": "In this paper we explore the application of travel-speed prediction to query processing in Moving Objects Databases. We propose to revise the motion plans of moving objects using the predicted travel-speeds. This revision occurs before answering queries. We develop three methods of doing this. These methods differ in the time when the motion plans are revised, and which of them are revised. We analyze the three methods theoretically and experimentally.",
"title": ""
},
{
"docid": "216698730aa68b3044f03c64b77e0e62",
"text": "Portable biomedical instrumentation has become an important part of diagnostic and treatment instrumentation. Low-voltage and low-power tendencies prevail. A two-electrode biopotential amplifier, designed for low-supply voltage (2.7–5.5 V), is presented. This biomedical amplifier design has high differential and sufficiently low common mode input impedances achieved by means of positive feedback, implemented with an original interface stage. The presented circuit makes use of passive components of popular values and tolerances. The amplifier is intended for use in various two-electrode applications, such as Holter monitors, external defibrillators, ECG monitors and other heart beat sensing biomedical devices.",
"title": ""
},
{
"docid": "e93eaa695003cb409957e5c7ed19bf2a",
"text": "Prominent research argues that consumers often use personal budgets to manage self-control problems. This paper analyzes the link between budgeting and selfcontrol problems in consumption-saving decisions. It shows that the use of goodspecific budgets depends on the combination of a demand for commitment and the demand for flexibility resulting from uncertainty about intratemporal trade-offs between goods. It explains the subtle mechanism which renders budgets useful commitments, their interaction with minimum-savings rules (another widely-studied form of commitment), and how budgeting depends on the intensity of self-control problems. This theory matches several empirical findings on personal budgeting. JEL CLASSIFICATION: D23, D82, D86, D91, E62, G31",
"title": ""
},
{
"docid": "adb9c43bb23ca4737aebbb9ee4b6c14e",
"text": "Deep Learning has enabled remarkable progress over the last years on a variety of tasks, such as image recognition, speech recognition, and machine translation. One crucial aspect for this progress are novel neural architectures. Currently employed architectures have mostly been developed manually by human experts, which is a time-consuming and errorprone process. Because of this, there is growing interest in automated neural architecture search methods. We provide an overview of existing work in this field of research and categorize them according to three dimensions: search space, search strategy, and performance estimation strategy.",
"title": ""
},
{
"docid": "ee4d5fae117d6af503ceb65707814c1b",
"text": "We investigate the use of syntactically related pairs of words for the task of text classification. The set of all pairs of syntactically related words should intuitively provide a better description of what a document is about, than the set of proximity-based N-grams or selective syntactic phrases. We generate syntactically related word pairs using a dependency parser. We experimented with Support Vector Machines and Decision Tree learners on the 10 most frequent classes from the Reuters-21578 corpus. Results show that syntactically related pairs of words produce better results in terms of accuracy and precision when used alone or combined with unigrams, compared to unigrams alone.",
"title": ""
},
{
"docid": "2d7caeda83bb77297bb53237dc1f198d",
"text": "BACKGROUND\nWe present a classification system that progresses in severity, indicates the pathomechanics that cause the fracture and therefore guides the surgeon to what fixation will be necessary by which approach.\n\n\nMETHODS\nThe primary posterior malleolar fracture fragments were characterized into 3 groups. A type 1 fracture was described as a small extra-articular posterior malleolar primary fragment. Type 2 fractures consisted of a primary fragment of the posterolateral triangle of the tibia (Volkmann area). A type 3 primary fragment was characterized by a coronal plane fracture line involving the whole posterior plafond.\n\n\nRESULTS\nIn type 1 fractures, the syndesmosis was disrupted in 100% of cases, although a proportion only involved the posterior syndesmosis. In type 2 posterior malleolar fractures, there was a variable medial injury with mixed avulsion/impaction etiology. In type 3 posterior malleolar fractures, most fibular fractures were either a high fracture or a long oblique fracture in the same fracture alignment as the posterior shear tibia fragment. Most medial injuries were Y-type or posterior oblique fractures. This fracture pattern had a low incidence of syndesmotic injury.\n\n\nCONCLUSION\nThe value of this approach was that by following the pathomechanism through the ankle, it demonstrated which other structures were likely to be damaged by the path of the kinetic energy. With an understanding of the pattern of associated injuries for each category, a surgeon may be able to avoid some pitfalls in treatment of these injuries.\n\n\nLEVEL OF EVIDENCE\nLevel III, retrospective comparative series.",
"title": ""
},
{
"docid": "febf797870da28d6492885095b92ef1f",
"text": "Most methods for learning object categories require large amounts of labeled training data. However, obtaining such data can be a difficult and time-consuming endeavor. We have developed a novel, entropy-based ldquoactive learningrdquo approach which makes significant progress towards this problem. The main idea is to sequentially acquire labeled data by presenting an oracle (the user) with unlabeled images that will be particularly informative when labeled. Active learning adaptively prioritizes the order in which the training examples are acquired, which, as shown by our experiments, can significantly reduce the overall number of training examples required to reach near-optimal performance. At first glance this may seem counter-intuitive: how can the algorithm know whether a group of unlabeled images will be informative, when, by definition, there is no label directly associated with any of the images? Our approach is based on choosing an image to label that maximizes the expected amount of information we gain about the set of unlabeled images. The technique is demonstrated in several contexts, including improving the efficiency of Web image-search queries and open-world visual learning by an autonomous agent. Experiments on a large set of 140 visual object categories taken directly from text-based Web image searches show that our technique can provide large improvements (up to 10 x reduction in the number of training examples needed) over baseline techniques.",
"title": ""
},
{
"docid": "c95111a04a132021212f389e92645a61",
"text": "This paper develops a model of a learning market-maker by extending the GlostenMilgrom model of dealer markets. The market-maker tracks the changing true value of a stock in settings with informed traders (with noisy signals) and liquidity traders, and sets bid and ask prices based on its estimate of the true value. We empirically evaluate the performance of the market-maker in markets with different parameter values to demonstrate the effectiveness of the algorithm, and then use the algorithm to derive properties of price processes in simulated markets. When the true value is governed by a jump process, there is a two regime behavior marked by significant heterogeneity of information and large spreads immediately following a price jump, which is quickly resolved by the market-maker, leading to a rapid return to homogeneity of information and small spreads. We also discuss the similarities and differences between our model and real stock market data in terms of distributional and time series properties of returns. Submitted to: Quantitative Finance",
"title": ""
},
{
"docid": "831c76f52665bcdc1ee370d1847c11df",
"text": "The evolutionarily conserved Hippo signaling pathway is known to regulate cell proliferation and maintain tissue homeostasis during development. We found that activation of Yorkie (Yki), the effector of the Hippo signaling pathway, causes separable effects on growth and differentiation of the Drosophila eye. We present evidence supporting a role for Yki in suppressing eye fate by downregulation of the core retinal determination genes. Other upstream regulators of the Hippo pathway mediate this effect of Yki on retinal differentiation. Here, we show that, in the developing eye, Yki can prevent retinal differentiation by blocking morphogenetic furrow (MF) progression and R8 specification. The inhibition of MF progression is due to ectopic induction of Wingless (Wg) signaling and Homothorax (Hth), the negative regulators of eye development. Modulating Wg signaling can modify Yki-mediated suppression of eye fate. Furthermore, ectopic Hth induction due to Yki activation in the eye is dependent on Wg. Last, using Cut (Ct), a marker for the antennal fate, we show that suppression of eye fate by hyperactivation of yki does not change the cell fate (from eye to antenna-specific fate). In summary, we provide the genetic mechanism by which yki plays a role in cell fate specification and differentiation - a novel aspect of Yki function that is emerging from multiple model organisms.",
"title": ""
},
{
"docid": "45e2b42d26ac6071cad542026ff06183",
"text": "This article discusses MPCA (Multi-way Principal Component Analysis) and MPLS (Multi-way Partial Least Squares) have been used to compress the information into low-dimensional spaces and pinpoint the root causes of batch-to-batch difference. From engineering perspective, this paper focuses on applying MPCA and MPLS to data analysis of batch process combining with operation experiences to find “golden batch benchmark” that describes the best operation of historical batches. This work includes data pre-treatment, batch process modelling and chemical reaction initiation status decision. Finally, optimizing control strategies and batch process improvement are also been discussed. Process and control engineers are be able to obtain the valuable data analyzing and control optimization methods for batch process from this study.",
"title": ""
},
{
"docid": "7db1b7cb31b1b2a3594f38a5b0a9ce0a",
"text": "The new biological effect of picosecond pulsed electric fields (psPEFs) has elicited the interest of researchers. A pulse generator based on an avalanche transistorized Marx circuit has been proposed. However, the problem of reflection in the transmission of the generated picosecond pulse based on this circuit has not received much attention and remains unresolved. In this paper, a compact picosecond pulse generator based on microstrip transmission theory was developed. A partial matching model based on microstrip transmission line theory was also proposed to eliminate reflection, and a series inductor was utilized to optimize pulse waveform. Through simulation studies and preliminary experimental tests, a pulse optimized with 1015 V amplitude, 620-ps width, and 10-kHz high stability repetition rate was generated. This pulse generator can be used with microelectrodes in cell experiments to explore the biological effect mechanism of psPEF.",
"title": ""
},
{
"docid": "8380a623e744a44f2ab7a077c620db37",
"text": "We present a novel video representation for human action recognition by considering temporal sequences of visual words. Based on state-of-the-art dense trajectories, we introduce temporal bundles of dominant, that is most frequent, visual words. These are employed to construct a complementary action representation of ordered dominant visual word sequences, that additionally incorporates fine grained temporal information. We exploit the introduced temporal information by applying local sub-sequence alignment that quantifies the similarity between sequences. This facilitates the fusion of our representation with the bag-of-visual-words (BoVW) representation. Our approach incorporates sequential temporal structure and results in a low-dimensional representation compared to the BoVW, while still yielding a descent result when combined with it. Experiments on the KTH, Hollywood2 and the challenging HMDB51 datasets show that the proposed framework is complementary to the BoVW representation, which discards temporal order.",
"title": ""
},
{
"docid": "3295d82d7477e7c1ca86f946394f8ac8",
"text": "This paper introduces a new structure-aware shape deformation technique. The key idea is to detect continuous and discrete regular patterns and ensure that these patterns are preserved during free-form deformation. We propose a variational deformation model that preserves these structures, and a discrete algorithm that adaptively inserts or removes repeated elements in regular patterns to minimize distortion. As a tool for such structural adaptation, we introduce sliding dockers, which represent repeatable elements that fit together seamlessly for arbitrary repetition counts. We demonstrate the presented approach on a number of complex 3D models from commercial shape libraries.",
"title": ""
}
] |
scidocsrr
|
7ec7fc919cb7fb96f0b0982315794937
|
An Improved Scheme for Full Fingerprint Reconstruction
|
[
{
"docid": "ca4100a8c305c064ea8716702859f11b",
"text": "It is widely believed, in the areas of optics, image analysis, and visual perception, that the Hilbert transform does not extend naturally and isotropically beyond one dimension. In some areas of image analysis, this belief has restricted the application of the analytic signal concept to multiple dimensions. We show that, contrary to this view, there is a natural, isotropic, and elegant extension. We develop a novel two-dimensional transform in terms of two multiplicative operators: a spiral phase spectral (Fourier) operator and an orientational phase spatial operator. Combining the two operators results in a meaningful two-dimensional quadrature (or Hilbert) transform. The new transform is applied to the problem of closed fringe pattern demodulation in two dimensions, resulting in a direct solution. The new transform has connections with the Riesz transform of classical harmonic analysis. We consider these connections, as well as others such as the propagation of optical phase singularities and the reconstruction of geomagnetic fields.",
"title": ""
},
{
"docid": "166b16222ecc15048972e535dbf4cb38",
"text": "Fingerprint matching systems generally use four types of representation schemes: grayscale image, phase image, skeleton image, and minutiae, among which minutiae-based representation is the most widely adopted one. The compactness of minutiae representation has created an impression that the minutiae template does not contain sufficient information to allow the reconstruction of the original grayscale fingerprint image. This belief has now been shown to be false; several algorithms have been proposed that can reconstruct fingerprint images from minutiae templates. These techniques try to either reconstruct the skeleton image, which is then converted into the grayscale image, or reconstruct the grayscale image directly from the minutiae template. However, they have a common drawback: Many spurious minutiae not included in the original minutiae template are generated in the reconstructed image. Moreover, some of these reconstruction techniques can only generate a partial fingerprint. In this paper, a novel fingerprint reconstruction algorithm is proposed to reconstruct the phase image, which is then converted into the grayscale image. The proposed reconstruction algorithm not only gives the whole fingerprint, but the reconstructed fingerprint contains very few spurious minutiae. Specifically, a fingerprint image is represented as a phase image which consists of the continuous phase and the spiral phase (which corresponds to minutiae). An algorithm is proposed to reconstruct the continuous phase from minutiae. The proposed reconstruction algorithm has been evaluated with respect to the success rates of type-I attack (match the reconstructed fingerprint against the original fingerprint) and type-II attack (match the reconstructed fingerprint against different impressions of the original fingerprint) using a commercial fingerprint recognition system. Given the reconstructed image from our algorithm, we show that both types of attacks can be successfully launched against a fingerprint recognition system.",
"title": ""
}
] |
[
{
"docid": "3ede320df9b96c7b9a5806813e4a42c4",
"text": "Sensors deployed to monitor the surrounding environment report such information as event type, location, and time when a real event of interest is detected. An adversary may identify the real event source through eavesdropping and traffic analysis. Previous work has studied the source location privacy problem under a local adversary model. In this work, we aim to provide a stronger notion: event source unobservability, which promises that a global adversary cannot know whether a real event has ever occurred even if he is capable of collecting and analyzing all the messages in the network at all the time. Clearly, event source unobservability is a desirable and critical security property for event monitoring applications, but unfortunately it is also very difficult and expensive to achieve for resource-constrained sensor network.\n Our main idea is to introduce carefully chosen dummy traffic to hide the real event sources in combination with mechanisms to drop dummy messages to prevent explosion of network traffic. To achieve the latter, we select some sensors as proxies that proactively filter dummy messages on their way to the base station. Since the problem of optimal proxy placement is NP-hard, we employ local search heuristics. We propose two schemes (i) Proxy-based Filtering Scheme (PFS) and (ii) Tree-based Filtering Scheme (TFS) to accurately locate proxies. Simulation results show that our schemes not only quickly find nearly optimal proxy placement, but also significantly reduce message overhead and improve message delivery ratio. A prototype of our scheme was implemented for TinyOS-based Mica2 motes.",
"title": ""
},
{
"docid": "cd9d162462c6aafde953cedffbd29b5f",
"text": "ion is a perplexing problem. Perhaps we cannot design such a machine. However, if we cannot, it will be difficult for existing machines to cope with people who are increasingly more complex. This is a catch-22 situation, with machines expected to be real experts while, at the same time, people or the problems become more and more complex than they were before such devices came into being in the first place. Real Time: Can Machines Think? Much of human behavior has nothing to do with time, and much of it deals solely with time. Time separates \"now\" from \"then,\" and \"now\" from \"when.\" It is generally accepted that there are day people, who work well during the daylight hours; and night people, who experience their best at night. Time is the structure that separates events into those occurring simultaneously and those taking place over an infinite spectrum of time. Much of brain activity takes place constantly, although activities also exist that have specific time requirements. This is the paradox of time. For example, sleep generally takes place at night when one is tired; therefore, night might be a factor in triggering sleep. Yet, in another sense, the brain is active all the time in order to keep us alive during sleep. Thus, the brain is always operating in real time and is constantly at work. Machines, however, are generally at rest, except when called upon by humans to work. There are many theories about this active sense of the brain at work, but little is known about how much work is actually being performed by the brain, with the possible exception of research into the understanding of dreams. It is this concept of dreams and their associated representational approach to the brain that is intriguing as a way to understand a larger view of behavior and thinking. Various types of activities, from physical motor activities to speech and language, are all available spontaneously to humans. Humans can stand, sit, yell, or perform an infinite variety of functions without thinking about them. Essentially, the software or program behind these activities has been well written and debugged. What is not provided genetically, we program in ourselves. In many areas, the program code is waiting to be written. Learning how to ski, write a sonnet, or fly an airplane is something that we program ourselves to do. These types of activities represent a step beyond the level of merely replacing an activity that is known with another that is unknown and communicated to us. Figure 2.2 shows some of the steps in a knowledge engineering system. Under consideration is the issue of real-time thinking. One might ask, What is the comparison? There is a new genre of research that suggests that nonreal-time activity opens a new dimension in offering humans contemplative time. In fact, real-time communication is quite interruptive and even disruptive. This represents 64 © TECHtionary.com J^gwledg^&re Information Engineering Workbench\" Future Direction This map illustrates Ihe intended funclional overview of KnowledoeWare's Information Engineering Workbench® The use of color indicates existing product modules. Uncolored areas represent lulu re functionality (hat may be implemented either as discrete product modules or as capabilities within modules. This map is intended as an aid to discussion of IEW operational concepts, not as a depiction of actual product architecture.",
"title": ""
},
{
"docid": "0d080005c4ac1d1f8ec9875adf2bde15",
"text": "In Infrastructure-as-a-Service (IaaS) cloud computing, computational resources are provided to remote users in the form of leases. For a cloud user, he/she can request multiple cloud services simultaneously. In this case, parallel processing in the cloud system can improve the performance. When applying parallel processing in cloud computing, it is necessary to implement a mechanism to allocate resource and schedule the execution order of tasks. Furthermore, a resource optimization mechanism with preemptable task execution can increase the utilization of clouds. In this paper, we propose two online dynamic resource allocation algorithms for the IaaS cloud systemwith preemptable tasks. Our algorithms adjust the resource allocation dynamically based on the updated information of the actual task executions. And the experimental results show that our algorithms can significantly improve the performance in the situation where resource contention is fierce. © 2012 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "b893e0321a51a2b06e1d8f2a59a296b6",
"text": "Green tea (GT) and green tea extracts (GTE) have been postulated to decrease cancer incidence. In vitro results indicate a possible effect; however, epidemiological data do not support cancer chemoprevention. We have performed a PubMED literature search for green tea consumption and the correlation to the common tumor types lung, colorectal, breast, prostate, esophageal and gastric cancer, with cohorts from both Western and Asian countries. We additionally included selected mechanistical studies for a possible mode of action. The comparability between studies was limited due to major differences in study outlines; a meta analysis was thus not possible and studies were evaluated individually. Only for breast cancer could a possible small protective effect be seen in Asian and Western cohorts, whereas for esophagus and stomach cancer, green tea increased the cancer incidence, possibly due to heat stress. No effect was found for colonic/colorectal and prostatic cancer in any country, for lung cancer Chinese studies found a protective effect, but not studies from outside China. Epidemiological studies thus do not support a cancer protective effect. GT as an indicator of as yet undefined parameters in lifestyle, environment and/or ethnicity may explain some of the observed differences between China and other countries.",
"title": ""
},
{
"docid": "8d6da0919363f3c528e9105ee41b0315",
"text": "There is a long-standing vision of embedding backscatter nodes like RFIDs into everyday objects to build ultra-low power ubiquitous networks. A major problem that has challenged this vision is that backscatter communication is neither reliable nor efficient. Backscatter nodes cannot sense each other, and hence tend to suffer from colliding transmissions. Further, they are ineffective at adapting the bit rate to channel conditions, and thus miss opportunities to increase throughput, or transmit above capacity causing errors.\n This paper introduces a new approach to backscatter communication. The key idea is to treat all nodes as if they were a single virtual sender. One can then view collisions as a code across the bits transmitted by the nodes. By ensuring only a few nodes collide at any time, we make collisions act as a sparse code and decode them using a new customized compressive sensing algorithm. Further, we can make these collisions act as a rateless code to automatically adapt the bit rate to channel quality --i.e., nodes can keep colliding until the base station has collected enough collisions to decode. Results from a network of backscatter nodes communicating with a USRP backscatter base station demonstrate that the new design produces a 3.5× throughput gain, and due to its rateless code, reduces message loss rate in challenging scenarios from 50% to zero.",
"title": ""
},
{
"docid": "6f6042046ef1c1642bb95bc47f38cdbb",
"text": "Jean-Jacques Rousseau's concepts of self-love (amour propre) and love of self (amour de soi même) are applied to the psychology of terrorism. Self-love is concern with one's image in the eyes of respected others, members of one's group. It denotes one's feeling of personal significance, the sense that one's life has meaning in accordance with the values of one's society. Love of self, in contrast, is individualistic concern with self-preservation, comfort, safety, and the survival of self and loved ones. We suggest that self-love defines a motivational force that when awakened arouses the goal of a significance quest. When a group perceives itself in conflict with dangerous detractors, its ideology may prescribe violence and terrorism against the enemy as a means of significance gain that gratifies self-love concerns. This may involve sacrificing one's self-preservation goals, encapsulated in Rousseau's concept of love of self. The foregoing notions afford the integration of diverse quantitative and qualitative findings on individuals' road to terrorism and back. Understanding the significance quest and the conditions of its constructive fulfillment may be crucial to reversing the current tide of global terrorism.",
"title": ""
},
{
"docid": "d64a0520a0cb49b1906d1d343ca935ec",
"text": "A 3D LTCC (low temperature co-fired ceramic) millimeter wave balun using asymmetric structure was investigated in this paper. The proposed balun consists of embedded multilayer microstrip and CPS (coplanar strip) lines. It was designed at 40GHz. The measured insertion loss of the back-to-back balanced transition is -1.14dB, thus the estimated insertion loss of each device is -0.57dB including the CPS line loss. The 10dB return loss bandwidth of the unbalanced back-to-back transition covers the frequency range of 17.3/spl sim/46.6GHz (91.7%). The area occupied by this balun is 0.42 /spl times/ 0.066/spl lambda//sub 0/ (2.1 /spl times/ 0.33mm/sup 2/). The high performances have been achieved using the low loss and relatively high dielectric constant of LTCC (/spl epsiv//sub r/=5.4, tan/spl delta/=0.0015 at 35GHz) and a 3D stacked configuration. This balun can be used as a transition of microstrip-to-CPS and vice-versa and insures also an impedance transformation from 50 to 110 Ohm for an easy integration with a high input impedance antenna. This is the first reported 40 GHz wideband 3D LTCC balun using asymmetric structure to balance the output amplitude and phase difference.",
"title": ""
},
{
"docid": "56ec8f3e88731992a028a9322dbc4890",
"text": "The term knowledge visualization has been used in many different fields with many different definitions. In this paper, we propose a new definition of knowledge visualization specifically in the context of visual analysis and reasoning. Our definition begins with the differentiation of knowledge as either explicit and tacit knowledge. We then present a model for the relationship between the two through the use visualization. Instead of directly representing data in a visualization, we first determine the value of the explicit knowledge associated with the data based on a cost/benefit analysis and display the knowledge in accordance to its importance. We propose that the displayed explicit knowledge leads us to create our own tacit knowledge through visual analytical reasoning and discovery.",
"title": ""
},
{
"docid": "d880535f198a1f0a26b18572f674b829",
"text": "Human Activity Recognition (HAR) aims to identify the actions performed by humans using signals collected from various sensors embedded in mobile devices. In recent years, deep learning techniques have further improved HAR performance on several benchmark datasets. In this paper, we propose one-dimensional Convolutional Neural Network (1D CNN) for HAR that employs a divide and conquer-based classifier learning coupled with test data sharpening. Our approach leverages a two-stage learning of multiple 1D CNN models; we first build a binary classifier for recognizing abstract activities, and then build two multi-class 1D CNN models for recognizing individual activities. We then introduce test data sharpening during prediction phase to further improve the activity recognition accuracy. While there have been numerous researches exploring the benefits of activity signal denoising for HAR, few researches have examined the effect of test data sharpening for HAR. We evaluate the effectiveness of our approach on two popular HAR benchmark datasets, and show that our approach outperforms both the two-stage 1D CNN-only method and other state of the art approaches.",
"title": ""
},
{
"docid": "a261f45ef58363638b69616089386e1f",
"text": "This paper presents a new balancing control approach for regulating the center of mass position and trunk orientation of a bipedal robot in a compliant way. The controller computes a desired wrench (force and torque) required to recover the posture when an unknown external perturbation has changed the posture of the robot. This wrench is later distributed as forces at predefined contact points via a constrained optimization, which aims at achieving the desired wrench while minimizing the Euclidean norm of the contact forces. The formulation of the force distribution as an optimization problem is adopted from the grasping literature and allows to consider restrictions coming from the friction between the contact points and the ground.",
"title": ""
},
{
"docid": "492ff30c0ff913abb2af8ff90f2c8f5c",
"text": "Pattern matching is a highly computationally intensive operation used in a plethora of applications. Unfortunately, due to the ever increasing storage capacity and link speeds, the amount of data that needs to be matched against a given set of patterns is growing rapidly. In this paper, we explore how the highly parallel computational capabilities of commodity graphics processing units (GPUs) can be exploited for high-speed pattern matching. We present the design, implementation, and evaluation of a pattern matching library running on the GPU, which can be used transparently by a wide range of applications to increase their overall performance. The library supports both string searching and regular expression matching on the NVIDIA CUDA architecture. We have also explored the performance impact of different types of memory hierarchies, and present solutions to alleviate memory congestion problems. The results of our performance evaluation using off-the-self graphics processors demonstrate that GPU-based pattern matching can reach tens of gigabits per second on different workloads.",
"title": ""
},
{
"docid": "cb79a110e271ea6f37ef686145be7d1c",
"text": "By exploring the relationships between different AGI architectures, one can work toward a holistic cognitive model of human-level intelligence. In this vein, here an integrative architecture diagram for human-like general intelligence is proposed, via merging of lightly modified version of prior diagrams including Aaron Sloman’s high-level cognitive model, Stan Franklin and the LIDA group’s model of working memory and the cognitive cycle, Joscha Bach and Dietrich Dorner’s Psi model of motivated action and cognition, James Albus’s three-hierarchy intelligent robotics model, and the author’s prior work on cognitive synergy in deliberative thought and metacognition, along with ideas from deep learning and computational linguistics. The purpose is not to propose an actual merger of the various AGI systems considered, but rather to highlight the points of compatibility between the different approaches, as well as the differences of both focus and substance. The result is perhaps the most comprehensive architecture diagram of human-cognition yet produced, tying together all key aspects of human intelligence in a coherent way that is not tightly bound to any particular cognitive or AGI theory. Finally, the question of the dynamics associated with the architecture is considered, including the potential that human-level intelligence requires cognitive synergy between these various components is considered; and the possibility of a “trickiness” property causing the intelligence of the overall system to be badly suboptimal if any of the components are missing or insufficiently cooperative. One idea emerging from these dynamic consideration is that implementing the whole integrative architecture diagram may be necessary for achieving anywhere near human-level, human-like general intelligence.",
"title": ""
},
{
"docid": "585c589cdab52eaa63186a70ac81742d",
"text": "BACKGROUND\nThere has been a rapid increase in the use of technology-based activity trackers to promote behavior change. However, little is known about how individuals use these trackers on a day-to-day basis or how tracker use relates to increasing physical activity.\n\n\nOBJECTIVE\nThe aims were to use minute level data collected from a Fitbit tracker throughout a physical activity intervention to examine patterns of Fitbit use and activity and their relationships with success in the intervention based on ActiGraph-measured moderate to vigorous physical activity (MVPA).\n\n\nMETHODS\nParticipants included 42 female breast cancer survivors randomized to the physical activity intervention arm of a 12-week randomized controlled trial. The Fitbit One was worn daily throughout the 12-week intervention. ActiGraph GT3X+ accelerometer was worn for 7 days at baseline (prerandomization) and end of intervention (week 12). Self-reported frequency of looking at activity data on the Fitbit tracker and app or website was collected at week 12.\n\n\nRESULTS\nAdherence to wearing the Fitbit was high and stable, with a mean of 88.13% of valid days over 12 weeks (SD 14.49%). Greater adherence to wearing the Fitbit was associated with greater increases in ActiGraph-measured MVPA (binteraction=0.35, P<.001). Participants averaged 182.6 minutes/week (SD 143.9) of MVPA on the Fitbit, with significant variation in MVPA over the 12 weeks (F=1.91, P=.04). The majority (68%, 27/40) of participants reported looking at their tracker or looking at the Fitbit app or website once a day or more. Changes in Actigraph-measured MVPA were associated with frequency of looking at one's data on the tracker (b=-1.36, P=.07) but not significantly associated with frequency of looking at one's data on the app or website (P=.36).\n\n\nCONCLUSIONS\nThis is one of the first studies to explore the relationship between use of a commercially available activity tracker and success in a physical activity intervention. A deeper understanding of how individuals engage with technology-based trackers may enable us to more effectively use these types of trackers to promote behavior change.\n\n\nTRIAL REGISTRATION\nClinicalTrials.gov NCT02332876; https://clinicaltrials.gov/ct2/show/NCT02332876?term=NCT02332876 &rank=1 (Archived by WebCite at http://www.webcitation.org/6wplEeg8i).",
"title": ""
},
{
"docid": "87c6a5a8d00a284f313d923c27531f75",
"text": "Cancer is a somatic evolutionary process characterized by the accumulation of mutations, which contribute to tumor growth, clinical progression, immune escape, and drug resistance development. Evolutionary theory can be used to analyze the dynamics of tumor cell populations and to make inference about the evolutionary history of a tumor from molecular data. We review recent approaches to modeling the evolution of cancer, including population dynamics models of tumor initiation and progression, phylogenetic methods to model the evolutionary relationship between tumor subclones, and probabilistic graphical models to describe dependencies among mutations. Evolutionary modeling helps to understand how tumors arise and will also play an increasingly important prognostic role in predicting disease progression and the outcome of medical interventions, such as targeted therapy.",
"title": ""
},
{
"docid": "124dffade8cbc98b95292a21b71b31e0",
"text": "High performance photodetectors play important roles in the development of innovative technologies in many fields, including medicine, display and imaging, military, optical communication, environment monitoring, security check, scientific research and industrial processing control. Graphene, the most fascinating two-dimensional material, has demonstrated promising applications in various types of photodetectors from terahertz to ultraviolet, due to its ultrahigh carrier mobility and light absorption in broad wavelength range. Graphene field effect transistors are recognized as a type of excellent transducers for photodetection thanks to the inherent amplification function of the transistors, the feasibility of miniaturization and the unique properties of graphene. In this review, we will introduce the applications of graphene transistors as photodetectors in different wavelength ranges including terahertz, infrared, visible, and ultraviolet, focusing on the device design, physics and photosensitive performance. Since the device properties are closely related to the quality of graphene, the devices based on graphene prepared with different methods will be addressed separately with a view to demonstrating more clearly their advantages and shortcomings in practical applications. It is expected that highly sensitive photodetectors based on graphene transistors will find important applications in many emerging areas especially flexible, wearable, printable or transparent electronics and high frequency communications.",
"title": ""
},
{
"docid": "8108f8c3d53f44ca3824f4601aacdce1",
"text": "This paper presents a robust multi-class multi-object tracking (MCMOT) formulated by a Bayesian filtering framework. Multiobject tracking for unlimited object classes is conducted by combining detection responses and changing point detection (CPD) algorithm. The CPD model is used to observe abrupt or abnormal changes due to a drift and an occlusion based spatiotemporal characteristics of track states. The ensemble of convolutional neural network (CNN) based object detector and Lucas-Kanede Tracker (KLT) based motion detector is employed to compute the likelihoods of foreground regions as the detection responses of different object classes. Extensive experiments are performed using lately introduced challenging benchmark videos; ImageNet VID and MOT benchmark dataset. The comparison to state-of-the-art video tracking techniques shows very encouraging results.",
"title": ""
},
{
"docid": "ae3d6467c0952a770956e8c0eed04c8d",
"text": "Many modern cities strive to integrate information technology into every aspect of city life to create so-called smart cities. Smart cities rely on a large number of application areas and technologies to realize complex interactions between citizens, third parties, and city departments. This overwhelming complexity is one reason why holistic privacy protection only rarely enters the picture. A lack of privacy can result in discrimination and social sorting, creating a fundamentally unequal society. To prevent this, we believe that a better understanding of smart cities and their privacy implications is needed. We therefore systematize the application areas, enabling technologies, privacy types, attackers, and data sources for the attacks, giving structure to the fuzzy term “smart city.” Based on our taxonomies, we describe existing privacy-enhancing technologies, review the state of the art in real cities around the world, and discuss promising future research directions. Our survey can serve as a reference guide, contributing to the development of privacy-friendly smart cities.",
"title": ""
},
{
"docid": "ef53fb4fa95575c6472173db51d77a65",
"text": "I review existing knowledge, unanswered questions, and new directions in research on stress, coping resource, coping strategies, and social support processes. New directions in research on stressors include examining the differing impacts of stress across a range of physical and mental health outcomes, the \"carry-overs\" of stress from one role domain or stage of life into another, the benefits derived from negative experiences, and the determinants of the meaning of stressors. Although a sense of personal control and perceived social support influence health and mental health both directly and as stress buffers, the theoretical mechanisms through which they do so still require elaboration and testing. New work suggests that coping flexibility and structural constraints on individuals' coping efforts may be important to pursue. Promising new directions in social support research include studies of the negative effects of social relationships and of support giving, mutual coping and support-giving dynamics, optimal \"matches\" between individuals' needs and support received, and properties of groups which can provide a sense of social support. Qualitative comparative analysis, optimal matching analysis, and event-structure analysis are new techniques which may help advance research in these broad topic areas. To enhance the effectiveness of coping and social support interventions, intervening mechanisms need to be better understood. Nevertheless, the policy implications of stress research are clear and are important given current interest in health care reform in the United States.",
"title": ""
},
{
"docid": "4ec9cc4a2e65415cbb1e23c54ff5fd2c",
"text": "A number of recent approaches to policy learning in 2D game domains have been successful going directly from raw input images to actions. However when employed in complex 3D environments, they typically suffer from challenges related to partial observability, combinatorial exploration spaces, path planning, and a scarcity of rewarding scenarios. Inspired from prior work in human cognition that indicates how humans employ a variety of semantic concepts and abstractions (object categories, localisation, etc.) to reason about the world, we build an agent-model that incorporates such abstractions into its policy-learning framework. We augment the raw image input to a Deep QLearning Network (DQN), by adding details of objects and structural elements encountered, along with the agent’s localisation. The different components are automatically extracted and composed into a topological representation using on-the-fly object detection and 3D-scene reconstruction. We evaluate the efficacy of our approach in “Doom”, a 3D first-person combat game that exhibits a number of challenges discussed, and show that our augmented framework consistently learns better, more effective policies.",
"title": ""
}
] |
scidocsrr
|
143fadf14d576a15525aca9274f68509
|
Growing random forest on deep convolutional neural networks for scene categorization
|
[
{
"docid": "2a56702663e6e52a40052a5f9b79a243",
"text": "Many successful models for scene or object recognition transform low-level descriptors (such as Gabor filter responses, or SIFT descriptors) into richer representations of intermediate complexity. This process can often be broken down into two steps: (1) a coding step, which performs a pointwise transformation of the descriptors into a representation better adapted to the task, and (2) a pooling step, which summarizes the coded features over larger neighborhoods. Several combinations of coding and pooling schemes have been proposed in the literature. The goal of this paper is threefold. We seek to establish the relative importance of each step of mid-level feature extraction through a comprehensive cross evaluation of several types of coding modules (hard and soft vector quantization, sparse coding) and pooling schemes (by taking the average, or the maximum), which obtains state-of-the-art performance or better on several recognition benchmarks. We show how to improve the best performing coding scheme by learning a supervised discriminative dictionary for sparse coding. We provide theoretical and empirical insight into the remarkable performance of max pooling. By teasing apart components shared by modern mid-level feature extractors, our approach aims to facilitate the design of better recognition architectures.",
"title": ""
},
{
"docid": "ff71aa2caed491f9bf7b67a5377b4d66",
"text": "In this paper, we propose a hybrid architecture that combines the image modeling strengths of the bag of words framework with the representational power and adaptability of learning deep architectures. Local gradient-based descriptors, such as SIFT, are encoded via a hierarchical coding scheme composed of spatial aggregating restricted Boltzmann machines (RBM). For each coding layer, we regularize the RBM by encouraging representations to fit both sparse and selective distributions. Supervised fine-tuning is used to enhance the quality of the visual representation for the categorization task. We performed a thorough experimental evaluation using three image categorization data sets. The hierarchical coding scheme achieved competitive categorization accuracies of 79.7% and 86.4% on the Caltech-101 and 15-Scenes data sets, respectively. The visual representations learned are compact and the model's inference is fast, as compared with sparse coding methods. The low-level representations of descriptors that were learned using this method result in generic features that we empirically found to be transferrable between different image data sets. Further analysis reveal the significance of supervised fine-tuning when the architecture has two layers of representations as opposed to a single layer.",
"title": ""
}
] |
[
{
"docid": "cd1274c785a410f0e38b8e033555ee9b",
"text": "This paper presents a graph signal denoising method with the trilateral filter defined in the graph spectral domain. The original trilateral filter (TF) is a data-dependent filter that is widely used as an edge-preserving smoothing method for image processing. However, because of the data-dependency, one cannot provide its frequency domain representation. To overcome this problem, we establish the graph spectral domain representation of the data-dependent filter, i.e., a spectral graph TF (SGTF). This representation enables us to design an effective graph signal denoising filter with a Tikhonov regularization. Moreover, for the proposed graph denoising filter, we provide a parameter optimization technique to search for a regularization parameter that approximately minimizes the mean squared error w.r.t. the unknown graph signal of interest. Comprehensive experimental results validate our graph signal processing-based approach for images and graph signals.",
"title": ""
},
{
"docid": "7e92b2c7f39b7200dd8b9330676294b9",
"text": "Realizing the democratic promise of nanopore sequencing requires the development of new bioinformatics approaches to deal with its specific error characteristics. Here we present GraphMap, a mapping algorithm designed to analyse nanopore sequencing reads, which progressively refines candidate alignments to robustly handle potentially high-error rates and a fast graph traversal to align long reads with speed and high precision (>95%). Evaluation on MinION sequencing data sets against short- and long-read mappers indicates that GraphMap increases mapping sensitivity by 10-80% and maps >95% of bases. GraphMap alignments enabled single-nucleotide variant calling on the human genome with increased sensitivity (15%) over the next best mapper, precise detection of structural variants from length 100 bp to 4 kbp, and species and strain-specific identification of pathogens using MinION reads. GraphMap is available open source under the MIT license at https://github.com/isovic/graphmap.",
"title": ""
},
{
"docid": "e575d7f1065ab5da9d229396bef8c437",
"text": "This paper advances the claim that tacit knowledge has been greatly misunderstood in management studies. Nonaka and Takeuchi’s widely adopted interpretation of tacit knowledge as knowledge awaiting “translation” or “conversion” into explicit knowledge is erroneous: contrary to Polanyi’s argument, it ignores the essential ineffability of tacit knowledge. In the paper I show why the idea of focussing on a set of tacitly known particulars and “converting” them into explicit knowledge is unsustainable. However, the ineffability of tacit knowledge does not mean that we cannot discuss the skilled performances in which we are involved. We can discuss them provided we stop insisting on “converting” tacit knowledge and, instead, start recursively drawing our attention to how we draw each other’s attention to things. Instructive forms of talk help us re-orientate ourselves to how we relate to others and the world around us, thus enabling us to talk and act differently. Following Wittgenstein and Shotter, I argue that we can command a clearer view of our skilled performances if we “re-mind” ourselves of how we do things, so that distinctions, which we had previously not noticed, and features, which had previously escaped our attention, may be brought forward. We cannot operationalise tacit knowledge but we can find new ways of talking, fresh forms of interacting and novel ways of distinguishing and connecting. Tacit knowledge cannot be “captured”, “translated”, or “converted” but only displayed and manifested, in what we do. New knowledge comes about not when the tacit becomes explicit, but when our skilled performance is punctuated in new ways through social interaction. Presented to Knowledge Economy and Society Seminar, LSE Department of Information Systems, 14 June 2002",
"title": ""
},
{
"docid": "8107b3dc36d240921571edfc778107ff",
"text": "FinFET devices have been proposed as a promising substitute for conventional bulk CMOS-based devices at the nanoscale due to their extraordinary properties such as improved channel controllability, a high on/off current ratio, reduced short-channel effects, and relative immunity to gate line-edge roughness. This brief builds standard cell libraries for the advanced 7-nm FinFET technology, supporting multiple threshold voltages and supply voltages. The circuit synthesis results of various combinational and sequential circuits based on the presented 7-nm FinFET standard cell libraries forecast 10× and 1000× energy reductions on average in a superthreshold regime and 16× and 3000× energy reductions on average in a near-threshold regime as compared with the results of the 14-nm and 45-nm bulk CMOS technology nodes, respectively.",
"title": ""
},
{
"docid": "83f14923970c83a55152464179e6bae9",
"text": "Urine drug screening can detect cases of drug abuse, promote workplace safety, and monitor drugtherapy compliance. Compliance testing is necessary for patients taking controlled drugs. To order and interpret these tests, it is required to know of testing modalities, kinetic of drugs, and different causes of false-positive and false-negative results. Standard immunoassay testing is fast, cheap, and the preferred primarily test for urine drug screening. This method reliably detects commonly drugs of abuse such as opiates, opioids, amphetamine/methamphetamine, cocaine, cannabinoids, phencyclidine, barbiturates, and benzodiazepines. Although immunoassays are sensitive and specific to the presence of drugs/drug metabolites, false negative and positive results may be created in some cases. Unexpected positive test results should be checked with a confirmatory method such as gas chromatography/mass spectrometry. Careful attention to urine collection methods and performing the specimen integrity tests can identify some attempts by patients to produce false-negative test results.",
"title": ""
},
{
"docid": "9cf4d68ab09e98cd5b897308c8791d26",
"text": "Gesture Recognition Technology has evolved greatly over the years. The past has seen the contemporary Human – Computer Interface techniques and their drawbacks, which limit the speed and naturalness of the human brain and body. As a result gesture recognition technology has developed since the early 1900s with a view to achieving ease and lessening the dependence on devices like keyboards, mice and touchscreens. Attempts have been made to combine natural gestures to operate with the technology around us to enable us to make optimum use of our body gestures making our work faster and more human friendly. The present has seen huge development in this field ranging from devices like virtual keyboards, video game controllers to advanced security systems which work on face, hand and body recognition techniques. The goal is to make full use of the movements of the body and every angle made by the parts of the body in order to supplement technology to become human friendly and understand natural human behavior and gestures. The future of this technology is very bright with prototypes of amazing devices in research and development to make the world equipped with digital information at hand whenever and wherever required.",
"title": ""
},
{
"docid": "52db010b2fa3ddcbfb73309705006d42",
"text": "Recent work in cognitive psychology and social cognition bears heavily on concerns of sociologists of culture. Cognitive research confirms views of culture as fragmented; clarifies the roles of institutions and agency; and illuminates supraindividual aspects of culture. Individuals experience culture as disparate bits of information and as schematic structures that organize that information. Culture carried by institutions, networks, and social movements diffuses, activates, and selects among available schemata. Implications for the study of identity, collective memory, social classification, and logics of action are developed.",
"title": ""
},
{
"docid": "682921e4e2f000384fdcb9dc6fbaa61a",
"text": "The use of Cloud Computing for computation offloading in the robotics area has become a field of interest today. The aim of this work is to demonstrate the viability of cloud offloading in a low level and intensive computing task: a vision-based navigation assistance of a service mobile robot. In order to do so, a prototype, running over a ROS-based mobile robot (Erratic by Videre Design LLC) is presented. The information extracted from on-board stereo cameras will be used by a private cloud platform consisting of five bare-metal nodes with AMD Phenom 965 × 4 CPU, with the cloud middleware Openstack Havana. The actual task is the shared control of the robot teleoperation, that is, the smooth filtering of the teleoperated commands with the detected obstacles to prevent collisions. All the possible offloading models for this case are presented and analyzed. Several performance results using different communication technologies and offloading models are explained as well. In addition to this, a real navigation case in a domestic circuit was done. The tests demonstrate that offloading computation to the Cloud improves the performance and navigation results with respect to the case where all processing is done by the robot.",
"title": ""
},
{
"docid": "88615ac1788bba148f547ca52bffc473",
"text": "This paper describes a probabilistic framework for faithful reproduction of dynamic facial expressions on a synthetic face model with MPEG-4 facial animation parameters (FAPs) while achieving very low bitrate in data transmission. The framework consists of a coupled Bayesian network (BN) to unify the facial expression analysis and synthesis into one coherent structure. At the analysis end, we cast the FAPs and facial action coding system (FACS) into a dynamic Bayesian network (DBN) to account for uncertainties in FAP extraction and to model the dynamic evolution of facial expressions. At the synthesizer, a static BN reconstructs the FAPs and their intensity. The two BNs are connected statically through a data stream link. Using the coupled BN to analyze and synthesize the dynamic facial expressions is the major novelty of this work. The novelty brings about several benefits. First, very low bitrate (9 bytes per frame) in data transmission can be achieved. Second, a facial expression is inferred through both spatial and temporal inference so that the perceptual quality of animation is less affected by the misdetected FAPs. Third, more realistic looking facial expressions can be reproduced by modelling the dynamics of human expressions.",
"title": ""
},
{
"docid": "746b9e9e1fdacc76d3acb4f78d824901",
"text": "This paper proposes a new method for the detection of glaucoma using fundus image which mainly affects the optic disc by increasing the cup size is proposed. The ratio of the optic cup to disc (CDR) in retinal fundus images is one of the primary physiological parameter for the diagnosis of glaucoma. The Kmeans clustering technique is recursively applied to extract the optic disc and optic cup region and an elliptical fitting technique is applied to find the CDR values. The blood vessels in the optic disc region are detected by using local entropy thresholding approach. The ratio of area of blood vessels in the inferiorsuperior side to area of blood vessels in the nasal-temporal side (ISNT) is combined with the CDR for the classification of fundus image as normal or glaucoma by using K-Nearest neighbor , Support Vector Machine and Bayes classifier. A batch of 36 retinal images obtained from the Aravind Eye Hospital, Madurai, Tamilnadu, India is used to assess the performance of the proposed system and a classification rate of 95% is achieved.",
"title": ""
},
{
"docid": "02fcb473984048f2265dd810b374e999",
"text": "Each year, the American Cancer Society estimates the numbers of new cancer cases and deaths expected in the United States in the current year and compiles the most recent data on cancer incidence, mortality, and survival based on incidence data from the National Cancer Institute, the Centers for Disease Control and Prevention, and the North American Association of Central Cancer Registries and mortality data from the National Center for Health Statistics. A total of 1,638,910 new cancer cases and 577,190 deaths from cancer are projected to occur in the United States in 2012. During the most recent 5 years for which there are data (2004-2008), overall cancer incidence rates declined slightly in men (by 0.6% per year) and were stable in women, while cancer death rates decreased by 1.8% per year in men and by 1.6% per year in women. Over the past 10 years of available data (1999-2008), cancer death rates have declined by more than 1% per year in men and women of every racial/ethnic group with the exception of American Indians/Alaska Natives, among whom rates have remained stable. The most rapid declines in death rates occurred among African American and Hispanic men (2.4% and 2.3% per year, respectively). Death rates continue to decline for all 4 major cancer sites (lung, colorectum, breast, and prostate), with lung cancer accounting for almost 40% of the total decline in men and breast cancer accounting for 34% of the total decline in women. The reduction in overall cancer death rates since 1990 in men and 1991 in women translates to the avoidance of about 1,024,400 deaths from cancer. Further progress can be accelerated by applying existing cancer control knowledge across all segments of the population, with an emphasis on those groups in the lowest socioeconomic bracket.",
"title": ""
},
{
"docid": "ff5d8069062073285e1770bfae096d7e",
"text": "As Face Recognition(FR) technology becomes more mature and commercially available in the market, many different anti-spoofing techniques have been recently developed to enhance the security, reliability, and effectiveness of FR systems. As a part of anti-spoofing techniques, face liveness detection plays an important role to make FR systems be more secured from various attacks. In this paper, we propose a novel method for face liveness detection by using focus, which is one of camera functions. In order to identify fake faces (e.g. 2D pictures), our approach utilizes the variation of pixel values by focusing between two images sequentially taken in different focuses. The experimental result shows that our focus-based approach is a new method that can significantly increase the level of difficulty of spoof attacks, which is a way to improve the security of FR systems. The performance is evaluated and the proposed method achieves 100% fake detection in a given DoF(Depth of Field).",
"title": ""
},
{
"docid": "9b519ba8a3b32d7b5b8a117b2d4d06ca",
"text": "This article reviews the most current practice guidelines in the diagnosis and management of patients born with cleft lip and/or palate. Such patients frequently have multiple medical and social issues that benefit greatly from a team approach. Common challenges include feeding difficulty, nutritional deficiency, speech disorders, hearing problems, ear disease, dental anomalies, and both social and developmental delays, among others. Interdisciplinary evaluation and collaboration throughout a patient's development are essential.",
"title": ""
},
{
"docid": "fdd01ae46b9c57eada917a6e74796141",
"text": "This paper presents a high-level discussion of dexterity in robotic systems, focusing particularly on manipulation and hands. While it is generally accepted in the robotics community that dexterity is desirable and that end effectors with in-hand manipulation capabilities should be developed, there has been little, if any, formal description of why this is needed, particularly given the increased design and control complexity required. This discussion will overview various definitions of dexterity used in the literature and highlight issues related to specific metrics and quantitative analysis. It will also present arguments regarding why hand dexterity is desirable or necessary, particularly in contrast to the capabilities of a kinematically redundant arm with a simple grasper. Finally, we overview and illustrate the various classes of in-hand manipulation, and review a number of dexterous manipulators that have been previously developed. We believe this work will help to revitalize the dialogue on dexterity in the manipulation community and lead to further formalization of the concepts discussed here.",
"title": ""
},
{
"docid": "bf62cf6deb1b11816fa271bfecde1077",
"text": "EASL–EORTC Clinical Practice Guidelines (CPG) on the management of hepatocellular carcinoma (HCC) define the use of surveillance, diagnosis, and therapeutic strategies recommended for patients with this type of cancer. This is the first European joint effort by the European Association for the Study of the Liver (EASL) and the European Organization for Research and Treatment of Cancer (EORTC) to provide common guidelines for the management of hepatocellular carcinoma. These guidelines update the recommendations reported by the EASL panel of experts in HCC published in 2001 [1]. Several clinical and scientific advances have occurred during the past decade and, thus, a modern version of the document is urgently needed. The purpose of this document is to assist physicians, patients, health-care providers, and health-policy makers from Europe and worldwide in the decision-making process according to evidencebased data. Users of these guidelines should be aware that the recommendations are intended to guide clinical practice in circumstances where all possible resources and therapies are available. Thus, they should adapt the recommendations to their local regulations and/or team capacities, infrastructure, and cost– benefit strategies. Finally, this document sets out some recommendations that should be instrumental in advancing the research and knowledge of this disease and ultimately contribute to improve patient care. The EASL–EORTC CPG on the management of hepatocellular carcinoma provide recommendations based on the level of evi-",
"title": ""
},
{
"docid": "0aca6e378ed309dd9b72228e3ce8228d",
"text": "BACKGROUND\nThe objective was to determine the test-retest reliability and criterion validity of the Physical Activity Scale for Individuals with Physical Disabilities (PASIPD).\n\n\nMETHODS\nForty-five non-wheelchair dependent subjects were recruited from three Dutch rehabilitation centers. Subjects' diagnoses were: stroke, spinal cord injury, whiplash, and neurological-, orthopedic- or back disorders. The PASIPD is a 7-d recall physical activity questionnaire that was completed twice, 1 wk apart. During this week, physical activity was also measured with an Actigraph accelerometer.\n\n\nRESULTS\nThe test-retest reliability Spearman correlation of the PASIPD was 0.77. The criterion validity Spearman correlation was 0.30 when compared to the accelerometer.\n\n\nCONCLUSIONS\nThe PASIPD had test-retest reliability and criterion validity that is comparable to well established self-report physical activity questionnaires from the general population.",
"title": ""
},
{
"docid": "a4a14545419680d45898d530db7a031c",
"text": "The Internet along with the rapidly growing power of computing has emerged as a compelling channel for sale of garments. A number of initiatives have arisen recently across the world [1][2][3], revolving around the concepts of Made-to-Measure manufacturing and shopping via the Internet. These initiatives are fueled by the current Web technologies available, providing an exciting and aesthetically pleasing interface to the general public.",
"title": ""
},
{
"docid": "ffbe9764c410651e17ed0f63fc68c743",
"text": "Antibiotics are among the most successful group of pharmaceuticals used for human and veterinary therapy. However, large amounts of antibiotics are released into municipal wastewater due to incomplete metabolism in humans or due to disposal of unused antibiotics, which finally find their ways into different natural environmental compartments. The emergence and rapid spread of antibiotic resistant bacteria (ARB) has led to an increasing concern about the potential environmental and public health risks. ARB and antibiotic resistant genes (ARGs) have been detected extensively in wastewater samples. Available data show significantly higher proportion of antibiotic resistant bacteria contained in raw and treated wastewater relative to surface water. According to these studies, the conditions in wastewater treatment plants (WWTPs) are favourable for the proliferation of ARB. Moreover, another concern with regards to the presence of ARB and ARGs is their effective removal from sewage. This review gives an overview of the available data on the occurrence of ARB and ARGs and their fate in WWTPs, on the biological methods dealing with the detection of bacterial populations and their resistance genes, and highlights areas in need for further research studies.",
"title": ""
},
{
"docid": "db3c5c93daf97619ad927532266b3347",
"text": "Car9, a dodecapeptide identified by cell surface display for its ability to bind to the edge of carbonaceous materials, also binds to silica with high affinity. The interaction can be disrupted with l-lysine or l-arginine, enabling a broad range of technological applications. Previously, we reported that C-terminal Car9 extensions support efficient protein purification on underivatized silica. Here, we show that the Car9 tag is functional and TEV protease-excisable when fused to the N-termini of target proteins, and that it supports affinity purification under denaturing conditions, albeit with reduced yields. We further demonstrate that capture of Car9-tagged proteins is enhanced on small particle size silica gels with large pores, that the concomitant problem of nonspecific protein adsorption can be solved by lysing cells in the presence of 0.3% Tween 20, and that efficient elution is achieved at reduced l-lysine concentrations under alkaline conditions. An optimized small-scale purification kit incorporating the above features allows Car9-tagged proteins to be inexpensively recovered in minutes with better than 90% purity. The Car9 affinity purification technology should prove valuable for laboratory-scale applications requiring rapid access to milligram-quantities of proteins, and for preparative scale purification schemes where cost and productivity are important factors.",
"title": ""
},
{
"docid": "3d550cfc16fc2e05606099034dd4383a",
"text": "Android malware has emerged in the last decade as a consequence of the increasing popularity of smartphones and tablets. While most previous work focuses on inherent characteristics of Android apps to detect malware, this study analyses indirect features to identify patterns often observed in malware applications. We show that modern Machine Learning techniques applied to collected metadata from Google Play can provide a first approach towards the detection of malware applications, and we further identify which features have the highest predictive power among the total.",
"title": ""
}
] |
scidocsrr
|
08e47c7470974c8abbc87fc2e85753a8
|
CloudSimDisk: Energy-Aware Storage Simulation in CloudSim
|
[
{
"docid": "7d53fcce145badeeaeff55b5299010b9",
"text": "Cloud computing is today’s most emphasized Information and Communications Technology (ICT) paradigm that is directly or indirectly used by almost every online user. However, such great significance comes with the support of a great infrastructure that includes large data centers comprising thousands of server units and other supporting equipment. Their share in power consumption generates between 1.1% and 1.5% of the total electricity use worldwide and is projected to rise even more. Such alarming numbers demand rethinking the energy efficiency of such infrastructures. However, before making any changes to infrastructure, an analysis of the current status is required. In this article, we perform a comprehensive analysis of an infrastructure supporting the cloud computing paradigm with regards to energy efficiency. First, we define a systematic approach for analyzing the energy efficiency of most important data center domains, including server and network equipment, as well as cloud management systems and appliances consisting of a software utilized by end users. Second, we utilize this approach for analyzing available scientific and industrial literature on state-of-the-art practices in data centers and their equipment. Finally, we extract existing challenges and highlight future research directions.",
"title": ""
}
] |
[
{
"docid": "83a13b090260a464064a3c884a75ad91",
"text": "While the celebrated Word2Vec technique yields semantically rich representations for individual words, there has been relatively less success in extending to generate unsupervised sentences or documents embeddings. Recent work has demonstrated that a distance measure between documents called Word Mover’s Distance (WMD) that aligns semantically similar words, yields unprecedented KNN classification accuracy. However, WMD is expensive to compute, and it is hard to extend its use beyond a KNN classifier. In this paper, we propose the Word Mover’s Embedding (WME), a novel approach to building an unsupervised document (sentence) embedding from pre-trained word embeddings. In our experiments on 9 benchmark text classification datasets and 22 textual similarity tasks, the proposed technique consistently matches or outperforms state-of-the-art techniques, with significantly higher accuracy on problems of short length.",
"title": ""
},
{
"docid": "266f636d13f406ecbacf8ed8443b2b5c",
"text": "This review examines the most frequently cited sociological theories of crime and delinquency. The major theoretical perspectives are presented, beginning with anomie theory and the theories associated with the Chicago School of Sociology. They are followed by theories of strain, social control, opportunity, conflict, and developmental life course. The review concludes with a conceptual map featuring the inter-relationships and contexts of the major theoretical perspectives.",
"title": ""
},
{
"docid": "a95761b5a67a07d02547c542ddc7e677",
"text": "This paper examines the connection between the legal environment and financial development, and then traces this link through to long-run economic growth. Countries with legal and regulatory systems that (1) give a high priority to creditors receiving the full present value of their claims on corporations, (2) enforce contracts effectively, and (3) promote comprehensive and accurate financial reporting by corporations have better-developed financial intermediaries. The data also indicate that the exogenous component of financial intermediary development – the component of financial intermediary development defined by the legal and regulatory environment – is positively associated with economic growth. * Department of Economics, 114 Rouss Hall, University of Virginia, Charlottesville, VA 22903-3288; RL9J@virginia.edu. I thank Thorsten Beck, Maria Carkovic, Bill Easterly, Lant Pritchett, Andrei Shleifer, and seminar participants at the Board of Governors of the Federal Reserve System, the University of Virginia, and the World Bank for helpful comments.",
"title": ""
},
{
"docid": "9a973833c640e8a9fe77cd7afdae60f2",
"text": "Metastasis is a characteristic trait of most tumour types and the cause for the majority of cancer deaths. Many tumour types, including melanoma and breast and prostate cancers, first metastasize via lymphatic vessels to their regional lymph nodes. Although the connection between lymph node metastases and shorter survival times of patients was made decades ago, the active involvement of the lymphatic system in cancer, metastasis has been unravelled only recently, after molecular markers of lymphatic vessels were identified. A growing body of evidence indicates that tumour-induced lymphangiogenesis is a predictive indicator of metastasis to lymph nodes and might also be a target for prevention of metastasis. This article reviews the current understanding of lymphangiogenesis in cancer anti-lymphangiogenic strategies for prevention and therapy of metastatic disease, quantification of lymphangiogenesis for the prognosis and diagnosis of metastasis and in vivo imaging technologies for the assessment of lymphatic vessels, drainage and lymph nodes.",
"title": ""
},
{
"docid": "636f5002b3ced8a541df3e0568604f71",
"text": "We report density functional theory (M06L) calculations including Poisson-Boltzmann solvation to determine the reaction pathways and barriers for the hydrogen evolution reaction (HER) on MoS2, using both a periodic two-dimensional slab and a Mo10S21 cluster model. We find that the HER mechanism involves protonation of the electron rich molybdenum hydride site (Volmer-Heyrovsky mechanism), leading to a calculated free energy barrier of 17.9 kcal/mol, in good agreement with the barrier of 19.9 kcal/mol estimated from the experimental turnover frequency. Hydronium protonation of the hydride on the Mo site is 21.3 kcal/mol more favorable than protonation of the hydrogen on the S site because the electrons localized on the Mo-H bond are readily transferred to form dihydrogen with hydronium. We predict the Volmer-Tafel mechanism in which hydrogen atoms bound to molybdenum and sulfur sites recombine to form H2 has a barrier of 22.6 kcal/mol. Starting with hydrogen atoms on adjacent sulfur atoms, the Volmer-Tafel mechanism goes instead through the M-H + S-H pathway. In discussions of metal chalcogenide HER catalysis, the S-H bond energy has been proposed as the critical parameter. However, we find that the sulfur-hydrogen species is not an important intermediate since the free energy of this species does not play a direct role in determining the effective activation barrier. Rather we suggest that the kinetic barrier should be used as a descriptor for reactivity, rather than the equilibrium thermodynamics. This is supported by the agreement between the calculated barrier and the experimental turnover frequency. These results suggest that to design a more reactive catalyst from edge exposed MoS2, one should focus on lowering the reaction barrier between the metal hydride and a proton from the hydronium in solution.",
"title": ""
},
{
"docid": "966c6c47b9b55fbbab7196622af7027b",
"text": "Wotif Group used DevOps principles to recover from the downward spiral of manual release activity that many IT departments face. Its approach involved the idea of \"making it easy to do the right thing.\" By defining the right thing (deployment standards) for development and operations teams and making it easy to adopt, Wotif drastically improved the average release cycle time. This article is part of a theme issue on DevOps.",
"title": ""
},
{
"docid": "46849f5c975551b401bccae27edd9d81",
"text": "Many ideas of High Performance Computing are applicable to Big Data problems. The more so now, that hybrid, GPU computing gains traction in mainstream computing applications. This work discusses the differences between the High Performance Computing software stack and the Big Data software stack and then focuses on two popular computing workloads, the Alternating Least Squares algorithm and the Singular Value Decomposition, and shows how their performance can be maximized using hybrid computing techniques.",
"title": ""
},
{
"docid": "8700c7f150c00013990c837a4bf7b655",
"text": "The rule of thumb that logistic and Cox models should be used with a minimum of 10 outcome events per predictor variable (EPV), based on two simulation studies, may be too conservative. The authors conducted a large simulation study of other influences on confidence interval coverage, type I error, relative bias, and other model performance measures. They found a range of circumstances in which coverage and bias were within acceptable levels despite less than 10 EPV, as well as other factors that were as influential as or more influential than EPV. They conclude that this rule can be relaxed, in particular for sensitivity analyses undertaken to demonstrate adequate control of confounding.",
"title": ""
},
{
"docid": "52d2004c762d4701ab275d9757c047fc",
"text": "Somatic mosaicism — the presence of genetically distinct populations of somatic cells in a given organism — is frequently masked, but it can also result in major phenotypic changes and reveal the expression of otherwise lethal genetic mutations. Mosaicism can be caused by DNA mutations, epigenetic alterations of DNA, chromosomal abnormalities and the spontaneous reversion of inherited mutations. In this review, we discuss the human disorders that result from somatic mosaicism, as well as the molecular genetic mechanisms by which they arise. Specifically, we emphasize the role of selection in the phenotypic manifestations of mosaicism.",
"title": ""
},
{
"docid": "5589dfc1ff9246b85e326e8f394cd514",
"text": "justice. Women, by contrast, were believed to be at a lower stage because they were found to have a sense of agency still tied primarily to their social relationships and to make political and moral decisions based on context-specific principles based on these relationships rather than on the grounds of their own autonomous judgments. Students of gender studies know well just how busy social scientists have been kept by their efforts to come up with ever more sociological \"alibis\" for the question of why women did not act like men. Gilligan's response was to refuse the terms of the debate altogether. She thus did not develop yet another explanation for why women are \"deviant.\" Instead, she turned the question on its head by asking what was wrong with the theory a theory whose central premises defines 50% of social beings as \"abnormal.\" Gilligan translated this question into research by subjecting the abstraction of universal and discrete agency to comparative research into female behavior evaluated on its own terms The new research revealed women to be more \"concrete\" in their thinking and more attuned to \"fairness\" while men acted on \"abstract reasoning\" and \"rules of justice.\" These research findings transformed female otherness into variation and difference but difference now freed from the normative de-",
"title": ""
},
{
"docid": "52da42b320e23e069519c228f1bdd8b5",
"text": "Over the last few years, C-RAN is proposed as a transformative architecture for 5G cellular networks that brings the flexibility and agility of cloud computing to wireless communications. At the same time, content caching in wireless networks has become an essential solution to lower the content- access latency and backhaul traffic loading, leading to user QoE improvement and network cost reduction. In this article, a novel cooperative hierarchical caching (CHC) framework in C-RAN is introduced where contents are jointly cached at the BBU and at the RRHs. Unlike in traditional approaches, the cache at the BBU, cloud cache, presents a new layer in the cache hierarchy, bridging the latency/capacity gap between the traditional edge-based and core-based caching schemes. Trace-driven simulations reveal that CHC yields up to 51 percent improvement in cache hit ratio, 11 percent decrease in average content access latency, and 18 percent reduction in backhaul traffic load compared to the edge-only caching scheme with the same total cache capacity. Before closing the article, we discuss the key challenges and promising opportunities for deploying content caching in C-RAN in order to make it an enabler technology in 5G ultra-dense systems.",
"title": ""
},
{
"docid": "0e3135a7846cee7f892b99dc4881b461",
"text": "OBJECTIVE: This study examined the relation among children's physical activity, sedentary behaviours, and body mass index (BMI), while controlling for sex, family structure, and socioeconomic status.DESIGN: Epidemiological study examining the relations among physical activity participation, sedentary behaviour (video game use and television (TV)/video watching), and BMI on a nationally representative sample of Canadian children.SUBJECTS: A representative sample of Canadian children aged 7–11 (N=7216) from the 1994 National Longitudinal Survey of Children and Youth was used in the analysis.MEASUREMENTS: Physical activity and sport participation, sedentary behaviour (video game use and TV/video watching), and BMI measured by parental report.RESULTS: Both organized and unorganized sport and physical activity are negatively associated with being overweight (10–24% reduced risk) or obese (23–43% reduced risk), while TV watching and video game use are risk factors for being overweight (17–44% increased risk) or obese (10–61% increased risk). Physical activity and sedentary behaviour partially account for the association of high socioeconomic status and two-parent family structure with the likelihood of being overweight or obese.CONCLUSION: This study provides evidence supporting the link between physical inactivity and obesity of Canadian children.",
"title": ""
},
{
"docid": "146402a4b52f16b583e224cbf9a84119",
"text": "Many different methods to train deep generative models have been introduced in the past. In this paper, we propose to extend the variational auto-encoder (VAE) framework with a new type of prior which we call \"Variational Mixture of Posteriors\" prior, or VampPrior for short. The VampPrior consists of a mixture distribution (e.g., a mixture of Gaussians) with components given by variational posteriors conditioned on learnable pseudo-inputs. We further extend this prior to a two layer hierarchical model and show that this architecture with a coupled prior and posterior, learns significantly better models. The model also avoids the usual local optima issues related to useless latent dimensions that plague VAEs. We provide empirical studies on six datasets, namely, static and binary MNIST, OMNIGLOT, Caltech 101 Silhouettes, Frey Faces and Histopathology patches, and show that applying the hierarchical VampPrior delivers state-of-the-art results on all datasets in the unsupervised permutation invariant setting and the best results or comparable to SOTA methods for the approach with convolutional networks.",
"title": ""
},
{
"docid": "981e88bd1f4187972f8a3d04960dd2dd",
"text": "The purpose of this study is to examine the appropriateness and effectiveness of the assistive use of robot projector based augmented reality (AR) to children’s dramatic activity. A system that employ a mobile robot mounted with a projector-camera is used to help manage children’s dramatic activity by projecting backdrops and creating a synthetic video imagery, where e.g. children’s faces is replaced with graphic characters. In this Delphi based study, a panel consist of 33 professionals include 11children education experts (college professors majoring in early childhood education), children field educators (kindergarten teachers and principals), and 11 AR and robot technology experts. The experts view the excerpts from the video taken from the actual usage situation. In the first stage of survey, we collect the panel's perspectives on applying the latest new technologies for instructing dramatic activity to children using an open ended questionnaire. Based on the results of the preliminary survey, the subsequent questionnaires (with 5 point Likert scales) are developed for the second and third in-depth surveys. In the second survey, 36 questions is categorized into 5 areas: (1) developmental and educational values, (2) impact on the teacher's role, (3) applicability and special considerations in the kindergarten, (4) external environment and required support, and (5) criteria for the selection of the story in the drama activity. The third survey mainly investigate how AR or robots can be of use in children’s dramatic activity in other ways (than as originally given) and to other educational domains. The surveys show that experts most appreciated the use of AR and robot for positive educational and developmental effects due to the children’s keen interests and in turn enhanced immersion into the dramatic activity. Consequently, the experts recommended that proper stories, scenes and technological realizations need to be selected carefully, in the light of children’s development, while lever aging on strengths of the technologies used.",
"title": ""
},
{
"docid": "adce2e04608819ad5cf30452bd864226",
"text": "Throughout the history of mathematics, concepts of number and space have been tightly intertwined. We tested the hypothesis that cortical circuits for spatial attention contribute to mental arithmetic in humans. We trained a multivariate classifier algorithm to infer the direction of an eye movement, left or right, from the brain activation measured in the posterior parietal cortex. Without further training, the classifier then generalized to an arithmetic task. Its left versus right classification could be used to sort out subtraction versus addition trials, whether performed with symbols or with sets of dots. These findings are consistent with the suggestion that mental arithmetic co-opts parietal circuitry associated with spatial coding.",
"title": ""
},
{
"docid": "3c28ee0844687013d5ac5a88ee529d60",
"text": "Kohonen's Self-Organizing Map (SOM) is one of the most popular arti cial neural network algorithms. Word category maps are SOMs that have been organized according to word similarities, measured by the similarity of the short contexts of the words. Conceptually interrelated words tend to fall into the same or neighboring map nodes. Nodes may thus be viewed as word categories. Although no a priori information about classes is given, during the self-organizing process a model of the word classes emerges. The central topic of the thesis is the use of the SOM in natural language processing. The approach based on the word category maps is compared with the methods that are widely used in arti cial intelligence research. Modeling gradience, conceptual change, and subjectivity of natural language interpretation are considered. The main application area is information retrieval and textual data mining for which a speci c SOM-based method called the WEBSOM has been developed. The WEBSOM method organizes a document collection on a map display that provides an overview of the collection and facilitates interactive browsing. 1",
"title": ""
},
{
"docid": "0ff483e916f4f7eda4671ba31b60d160",
"text": "Nowadays, the rapid proliferation of data makes it possible to build complex models for many real applications. Such models, however, usually require large amount of labeled data, and the labeling process can be both expensive and tedious for domain experts. To address this problem, researchers have resorted to crowdsourcing to collect labels from non-experts with much less cost. The key challenge here is how to infer the true labels from the large number of noisy labels provided by non-experts. Different from most existing work on crowdsourcing, which ignore the structure information in the labeling data provided by non-experts, in this paper, we propose a novel structured approach based on tensor augmentation and completion. It uses tensor representation for the labeled data, augments it with a ground truth layer, and explores two methods to estimate the ground truth layer via low rank tensor completion. Experimental results on 6 real data sets demonstrate the superior performance of the proposed approach over state-of-the-art techniques.",
"title": ""
},
{
"docid": "6b0b0483cf5eeba1bcee560835651a0e",
"text": "Four experiments were carried out to investigate an early- versus late-selection explanation for the attentional blink (AB). In both Experiments 1 and 2, 3 groups of participants were required to identify a noun (Experiment 1) or a name (Experiment 2) target (experimental conditions) and then to identify the presence or absence of a 2nd target (probe), which was their own name, another name, or a specified noun from among a noun distractor stream (Experiment 1) or a name distractor stream (Experiment 2). The conclusions drawn are that individuals do not experience an AB for their own names but do for either other names or nouns. In Experiments 3 and 4, either the participant's own name or another name was presented, as the target and as the item that immediately followed the target, respectively. An AB effect was revealed in both experimental conditions. The results of these experiments are interpreted as support for a late-selection interference account of the AB.",
"title": ""
},
{
"docid": "6e4bb5d16c72c8dc706f934fa3558adb",
"text": "This paper examine the Euler-Lagrange equations for the solution of the large deformation diffeomorphic metric mapping problem studied in Dupuis et al. (1998) and Trouvé (1995) in which two images I 0, I 1 are given and connected via the diffeomorphic change of coordinates I 0○ϕ−1=I 1 where ϕ=Φ1 is the end point at t= 1 of curve Φ t , t∈[0, 1] satisfying .Φ t =v t (Φ t ), t∈ [0,1] with Φ0=id. The variational problem takes the form $$\\mathop {\\arg {\\text{m}}in}\\limits_{\\upsilon :\\dot \\phi _t = \\upsilon _t \\left( {\\dot \\phi } \\right)} \\left( {\\int_0^1 {\\left\\| {\\upsilon _t } \\right\\|} ^2 {\\text{d}}t + \\left\\| {I_0 \\circ \\phi _1^{ - 1} - I_1 } \\right\\|_{L^2 }^2 } \\right),$$ where ‖v t‖ V is an appropriate Sobolev norm on the velocity field v t(·), and the second term enforces matching of the images with ‖·‖L 2 representing the squared-error norm. In this paper we derive the Euler-Lagrange equations characterizing the minimizing vector fields v t, t∈[0, 1] assuming sufficient smoothness of the norm to guarantee existence of solutions in the space of diffeomorphisms. We describe the implementation of the Euler equations using semi-lagrangian method of computing particle flows and show the solutions for various examples. As well, we compute the metric distance on several anatomical configurations as measured by ∫0 1‖v t‖ V dt on the geodesic shortest paths.",
"title": ""
},
{
"docid": "2e2cffc777e534ad1ab7a5c638e0574e",
"text": "BACKGROUND\nPoly(ADP-ribose)polymerase-1 (PARP-1) is a highly promising novel target in breast cancer. However, the expression of PARP-1 protein in breast cancer and its associations with outcome are yet poorly characterized.\n\n\nPATIENTS AND METHODS\nQuantitative expression of PARP-1 protein was assayed by a specific immunohistochemical signal intensity scanning assay in a range of normal to malignant breast lesions, including a series of patients (N = 330) with operable breast cancer to correlate with clinicopathological factors and long-term outcome.\n\n\nRESULTS\nPARP-1 was overexpressed in about a third of ductal carcinoma in situ and infiltrating breast carcinomas. PARP-1 protein overexpression was associated to higher tumor grade (P = 0.01), estrogen-negative tumors (P < 0.001) and triple-negative phenotype (P < 0.001). The hazard ratio (HR) for death in patients with PARP-1 overexpressing tumors was 7.24 (95% CI; 3.56-14.75). In a multivariate analysis, PARP-1 overexpression was an independent prognostic factor for both disease-free (HR 10.05; 95% CI 5.42-10.66) and overall survival (HR 1.82; 95% CI 1.32-2.52).\n\n\nCONCLUSIONS\nNuclear PARP-1 is overexpressed during the malignant transformation of the breast, particularly in triple-negative tumors, and independently predicts poor prognosis in operable invasive breast cancer.",
"title": ""
}
] |
scidocsrr
|
bb89d9cabfaec131b4489f1079d03e70
|
CE-Storm: Confidential Elastic Processing of Data Streams
|
[
{
"docid": "0a263c6abbfc97faa169b95d415c9896",
"text": "We introduce ChronoStream, a distributed system specifically designed for elastic stateful stream computation in the cloud. ChronoStream treats internal state as a first-class citizen and aims at providing flexible elastic support in both vertical and horizontal dimensions to cope with workload fluctuation and dynamic resource reclamation. With a clear separation between application-level computation parallelism and OS-level execution concurrency, ChronoStream enables transparent dynamic scaling and failure recovery by eliminating any network I/O and state-synchronization overhead. Our evaluation on dozens of computing nodes shows that ChronoStream can scale linearly and achieve transparent elasticity and high availability without sacrificing system performance or affecting collocated tenants.",
"title": ""
},
{
"docid": "8fd269218b8bafbe2912c46726dd8533",
"text": "\"!# #$ % $ & % ' (*) % +-,. $ &/ 0 1 2 3% 41 0 + 5 % 1 &/ !# #%#67$/ 18!# #% % #% ' \"% 9,: $ &/ %<;=,> '? \"( % $@ \"!\" 1A B% \" 1 0 %C + ,: AD8& ,. \"%#6< E+F$ 1 +/& !# \"%31 & $ &/ % ) % + 1 -G &E H.,> JI/(*1 0 (K / \" L ,:!# M *G 1N O% $@ #!#,>PE!# ,:1 %#QORS' \" ,: ( $ & T #%4 \"U/!# # +V%CI/%C # 2! $E !\",: JI86WH. # !\"IV;=,:H:HX+ \" ,.1 Q Y E+/ \" = ' #% !#1 E+/,: ,:1 %#6E ' %CI %C \" Z;=,:H:H[% ' + H:1N +\\6E ' & %=+/ \"( +/,. ] ' O %C;O \" 6 ,: 41 + \" ^ 1],: M$ 15LN W ' _1 ) % \" LN + H. # !\"I 1 0 ' \"% & H> %#Q ` ' ,:% $E $@ < \"U M,: #% M #! ' ,.D8& 0 1 +/I/ E M,:! H:H>I ,: % \" ,: E+< # M15L ,: = 1 $ 1 $@ \" 1 %[,: 1X ' aD8& I<$ H. 4 %^ D8& ,> + )8Ib ' 4!#& \" H:1 +\\QMR? 9 \"U M,: 4 K;a1 KI/$@ #% 1 0 1 $ %#c< ' P %C d+/ 1 $ %X 0 ! ,.1 1 0 ' d & $ H: #%a,. E+/1 0 % ' ,:1 e6 E+ ' % #!\"1 E+f+/ 1 $ %g & $ H: #%9) % +A1 A ' h,: M$@1 !# 31 0 ' #,> !\"1 8 # 8 Q[RV O + + #% %W ' X$ 1 ) H: # M%71 0 + \" \" M,: ,. ];=' # 9H:1N + % ' + +/,: ,:%i # +/ +\\6 ;=' \" =,: 9 ' =D8& I4$ H. 9 1 ,: % \" _ 1 $ %#6 E+b' 1 ;j g& ! ' 1 0 ' TH:1N +?% ' 1 & H.+-)@ 4% ' +' $@1 ,: <,. ' k$ H. \\Q-R? k$ #% # 8 g A H. 1 ,> ' 0 1 M !\"!#1 M$ H.,:% ' ,. ' ,:% E+9 \"U/$@ \" ,: M # 8 H #L ,.+ # !# X 'E 5 a,> i! M !\" XD8& ,:! l H:Ig +9! )/ ,: g ' <%CI/%C # m)E ! lk,: 8 1g ' <& % 0 & He1 $@ \" ,: 9 Q 1. INTRODUCTION n \";o $ $ H:,.!# ,:1 %4 'E 5 T g& %C T+/ H_;=,> 'VL %C T /& g)@ \" % 1 0 ,: ( $ & i%C M%i d)@ #!#1 M,: M1 X!\"1 M M1 \\Q ` ' #% ],: !#H:&E+ d $ ( $ H:,.!# ,:1 %T ' T$ 18!# \"% %4+ 0 1 % k H:H_ # g)@ #+ + #+b% \" % 1 %#6 $ $ H.,:! 5 ,:1 %[ ' ^ g& %C e!#1 #H. aP E !#,. H + 0 # #+ %#6 E+ $ ( $ H:,.!# ,:1 %^ 'E 5 [ g& %C \\ k E _,: $ &/ 0 1] p XLN \" I Hq 5 i /& g)@ \" 1 0 #1 (J$@1 % ,> ,.1 ,: 4+ \"L/,:!# \"%#QW F \";r!#H. % %i1 0 + 5 < k \" M # 8 %CI/%C # s,:%X # M \" ,: 4,: k #% $@1 % 1T ' #% < $ $ H.,:! 5 ,:1 %#Q ` ' #% %CI/%C # M%]$ 15L ,.+ ' T% M l ,: E+ 1 0 ,: 0 %C & ! & 13%C 9( ) % +M $ $ H.,:! 5 ,:1 %i 'E a+ 5 ) % = k \" M # 8 i%CI/%C # M%W' 5L $/ 15L/,.+/ + 0 1 h+ b$ 18!# \"% % ,. V $ $ H:,:! 5 ,.1 %#Qr m%C t+ k E \" -& % \"%b $ $ H:,.!# ,:1 /(JH: #L #Hg% # k 8 ,.!\"% 1u k l ?,: 8 #H:H.,>( # 8 + \"!#,:% ,.1 % )@1 &/ < #% 1 &/ !# 9 H:H.18!# ,:1 \\Q ` I/$ ,:! H:H>I86v ' \"( % 1 & !# \"%3,: wD8& #%C ,:1 F,: !#H:&E+/ %C 1 6]$ 18!# \"% % 1 3!\"I/!#H: #%#6] E+ ) +/;=,.+/ '\\Q x &/ X+/ #% ,: %X' LN ])@ \" # 3,: /yE& \" !# +M' L/,:H>IM)8IM% #L \" H@% $@ #!\",:P ! $ $ H:,:! ,:1 %#QTzK b$E 5 ,:!#& H. 6v;a g'E LN T%C &E+/,: +b $ $ H:,:! ,:1 'E 5 $@ \" 0 1 M%< # M1 4 ,q M15LN \" )E 5 H. PE #H.+{ ,:LN # { \" 1 K;a # 8 KI3) ,:1 (*% # % 1 %d # g)@ #+ + #+3,: ! ' % 1 Hq+/,: \" | % & , 0 1 QiRV 'E LN a H:% 1];a1 lN #+ ;=,> '4 4 $ $ H:,.!# ,:1 g ' ^!#1 H:H: #!\" %7 #!#1 ,:%C( % !# d+ 5 0 1 s M +/L !# #+g ,> $ Hq d )@1 & W ' a$@1 % ,> ,:1 %i1 0 # # 4I & ,> % E+ ' <,:%<!\"1 !\" \" + ;=,> 'b ' T,. 8 #H:H:,: # 8 +/,:%C( % # M,: E 5 ,:1 k1 0 ' ,:%O,: 0 1 k 5 ,.1 M 1g % \" ,: #%X1 0 1 & +M%C ,:1 % ! ' ;=,> ' +/,:}v # 8 d #D & ,> # M # 8 %#QWRV T H.% 1k)@ # ,: ,: k \"U/$@ \" ,: M # 8 HX }v1 M 1? k E P % 'f #% $ ,> 1 Ir+ ? %h ,: E+/,.!# 1 ]1 0 ' <$/ #% # !\" 1 0 1 U/,. %X,: 4 #% \" L 1 ,> Q ]H:H71 0 ' \"% 9 $ $ H:,:! ,:1 % 4! ' !\" \" ,:~# +-) I hH. 9 & 9( )@ \" W1 0 $ & % ' (*)E % + + ]% 1 & !\" #%7,: 4;=' ,:! '4 ' O+ 5 < ,:L H8 ! 9)@ X' ,: '9 E+4& $/ + ,:!\" ) H: Q[i ! 'M1 0 ' #% = $ $ H:,:! 5 ,.1 %_,:% #% $@1 % ,:) H: 0 1 d M1 ,> 1 ,. 4 ' ,.%O+ T 19+ #!\" X!\" ,> ,:! He% ,> &E 5( ,:1 %#Qi & ,: ' #% #LN # 8 %#68 ' <+ #%X!# ,: !\" % 6E E+ ,> <,:%< 4& ! 
' M1 ,: M$@1 d 'E 5 #H: #L 8 + 5 k \" +/ #H:,.L \" + ,: B ,: M #H>I 0 % ' ,:1 eQ zK { ' M ]o%CI/%C # 67 V \"U/$ \"% % ,.1 B1 0 ' h #H. ,:LN ,: M$@1 !# 31 0 1 & $ & 9 \"LN # 8 %9,:%k! $/ & +f %k G 18 g% $@ \"!#,>PE! 5 ,.1 eQ ` ' d%CI/%C # j 4& %C i H>;X #I/%W Ig 1 k U/,: M,:~# ' k 1 H=+ #H:,:LN \" #+rG 1N vQ7 & ,: ,: M #%g1 0 %C #% %#6a ' h,: $ & \"%M! A \"U/!# # #+A ' 3%CI/%C # ! $ !#,> KI8QAzJ B ' #% 3! % #%#6i ' 1 H>IM;X #IM 141 $@ \" ];=,: ' ,: h ' <G 1N k)@1 & E+/%a,:%O 1T% ' +k% 1 M 1 0 ' 4H:1N +\\Q] I/ E M,.!# H:H>I ! ' 181 % ,. h;=' \" 1h)@ \"%C <% ' #+ H.1 + + ' 1 ; 4& ! ' H:1N +b 1k% ' #+ ,.% M! ' H:H: # ,. 3$/ 1 ) H. \" Q ` ' ,:% $E $@ \" 9 U $ H:1 #%4% ! H. ) H: H:1N +A% ' #+ + ,: #! ' ,.D8& #% 0 1 9H. $ 18!\" #% % ,: M K;O1 l %#Q RV g)@ #H:,. \"LN g 'E 5 4G 18 ,:%T% $@ \"!#,>PE +-% #$E 5 #H>I 0 1 T ! '? $ $ H:,>( ! 5 ,.1 eQkz* T+ #% !\" ,:)@ #% ' 4 #H. ,:1 % ' ,:$V)@ \" J;O # # {L 5 ,:1 & % ! 'E 5 ( ! \" ,:%C ,.!\"%g1 0 B %C;O 9 E+{ ' 9& % 0 & H: \"% %3 , Q Q:67& ,:H:,> KI <1 0 'E 5 i %C;a \" Q ` '/& %#6N;O = M1 +/ #HEG 1N g %a <% \" _1 0v0 & !\" ,:1 %W 'E #H. < 4$E 5 M \" X1 0 ' 1 & $ & a 14,> %O& ,:H:,> KI8Q_ 1 X \"U M$ H. 6 ,: F k 8Ir $ $ H:,:! ,:1 %#6 %C;a \" %3 b1 H:Ir& % 0 & H], 0 ' \"Ir ,: M #H>I8Q ` ' \" 0 1 6 ' X&/ ,.H:,> KIg1 0 k %C;a \" O! M)@ d 0 & !\" ,:1 1 0 ' =Hq 5 # ! Ig,: LN1 H:LN #+g,: 9,> %i!\" # ,:1 \\Q_ ]H:% 1 68 ' X& ,:H:,: JIg1 0 %C;O \" ! b)@ T 0 & !\" ,:1 1 0 ' 1 &/ $ & ]L H:& Q] 1 M L H:& #% M1 <,: 8 \" #%C ,: 9 ' 1 ' \" %#Q ,:LN # b%C ,:%C ,:!#% )@1 & ] ' T!#1 %C 1 0 ! 'b$ 18!# \"% % ,. %C #$E+ ,> %9 % % 18!#,. +A% #H: #!\" ,:L ,> KI867,> 9,:%T$@1 % % ,:) H: k 1b!#1 M$ & M L H:& 0 1 ' U $@ \"!\" +FL H:& 0 1 1 H G 18 r;=' # F ' b%CI/%C #
,:% 1 $@ ,: f)@ #H:1 ;,> % ! $E !\",: JI8Q x L \" H:1N +S,:% +/ \" #!\" #+S;=' # ' =1 ) % \" LN #+3G 1N 9+/ 1 $ %a% ,: ,>PE! 8 H:I9)@ #H:15;r ' ,:%aL H:& QWe1 + % ' #+ + ,: T,:%O,: LN1 lN +k %X ;X #I9 14+/ ,:LN = ' d%CI/%C # o) ! lg 14 !\"!# #$ ) H: 4G 18 @Q zJ O% ' 1 & H.+3)@ < 1 +M 'E 5 X;=' ,:H: +/ 1 $ $ ,: & $ H: #%O;=,:H.H\\!# \" ,: H>I",
"title": ""
}
] |
[
{
"docid": "b1c759ad00874d93209ba22418f9f8b0",
"text": "Antibodies with protective activity are critical for vaccine efficacy. Affinity maturation increases antibody activity through multiple rounds of somatic hypermutation and selection in the germinal center. Identification of HIV-1 specific and influenza-specific antibody developmental pathways, as well as characterization of B cell and virus co-evolution in patients, has informed our understanding of antibody development. In order to counteract HIV-1 and influenza viral diversity, broadly neutralizing antibodies precisely target specific sites of vulnerability and require high levels of affinity maturation. We present immunization strategies that attempt to recapitulate these natural processes and guide the affinity maturation process.",
"title": ""
},
{
"docid": "3580c05a6564e7e09c6577026da69fe9",
"text": "Inpainting based image compression approaches, especially linear and non-linear diffusion models, are an active research topic for lossy image compression. The major challenge in these compression models is to find a small set of descriptive supporting points, which allow for an accurate reconstruction of the original image. It turns out in practice that this is a challenging problem even for the simplest Laplacian interpolation model. In this paper, we revisit the Laplacian interpolation compression model and introduce two fast algorithms, namely successive preconditioning primal dual algorithm and the recently proposed iPiano algorithm, to solve this problem efficiently. Furthermore, we extend the Laplacian interpolation based compression model to a more general form, which is based on principles from bi-level optimization. We investigate two different variants of the Laplacian model, namely biharmonic interpolation and smoothed Total Variation regularization. Our numerical results show that significant improvements can be obtained from the biharmonic interpolation model, and it can recover an image with very high quality from only 5% pixels.",
"title": ""
},
{
"docid": "e9ce26c20369103f1f3549e921702cbf",
"text": "Vehicle navigation is one of the most important applications in the era of navigation which is mostly used by drivers. Therefore the efficiency of the maps given to the drivers has a great importance in the navigation system. In this paper we proposed a very efficient system which uses the GPS and earth maps to help the driver in navigation by robust display of the current position of the vehicle on a displayed map. The main aim of this project is designing a system which is capable of continuous monitoring of path of the vehicle on PC with Google Earth Application. Here the important issue is displaying the map on several various scales which are adopted by the users. The heart elements in the implementation of this project are GPS, GSM and MCU. The GPS-GSM integrated structure is designed to track the vehicles by using Google earth application. The micro controller is used to receive data from GPS and to transfer the latitude and longitude to the PC to map by using the VB.Net language and this map is generated using Google Earth information.",
"title": ""
},
{
"docid": "ad8b60be0abf430fa38c22b39f074df2",
"text": "Social media is playing an increasingly vital role in information dissemination. But with dissemination being more distributed, content often makes multiple hops, and consequently has opportunity to change. In this paper we focus on content that should be changing the least, namely quoted text. We find changes to be frequent, with their likelihood depending on the authority of the copied source and the type of site that is copying. We uncover patterns in the rate of appearance of new variants, their length, and popularity, and develop a simple model that is able to capture them. These patterns are distinct from ones produced when all copies are made from the same source, suggesting that information is evolving as it is being processed collectively in online social media.",
"title": ""
},
{
"docid": "35a0044724854f6fabeb777f80c8acd8",
"text": "Liposuction is one of the most commonly performed aesthetic procedures. It is performed worldwide as an outpatient procedure. However, the complications are underestimated and underreported by caregivers. We present a case of delayed diagnosis of bilothorax secondary to liver and gallbladder injury after tumescent liposuction. A 26-year-old female patient was transferred to our emergency department from an aesthetic clinic with worsening dyspnea, tachypnea and fatigue. She had undergone extensive liposuction of the thighs, buttocks, back and abdomen 5 days prior to presentation. A chest X-ray showed significant right-sided pleural effusion. Thoracentesis was performed and drained bilious fluid. CT scan of the abdomen revealed pleural, liver and gall bladder injury. An exploratory laparoscopy confirmed the findings, the collections were drained; cholecystectomy and intraoperative cholangiogram were performed. The patient did very well postoperatively and was discharged home in 2 days. Even though liposuction is considered a simple office-based procedure, its complications can be fatal. The lack of strict laws that exclusively place this procedure in the hands of medical professionals allow these procedures to still be done by less experienced hands and in outpatient-based settings. Our case serves to highlight yet another unique but potentially fatal complication of liposuction. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266.",
"title": ""
},
{
"docid": "37a8ea1b792466c6e39709879e7a7b41",
"text": "The lightning impulse withstand voltage for an oil-immersed power transformer is determined by the value of the lightning surge overvoltage generated at the transformer terminal. This overvoltage value has been conventionally obtained through lightning surge analysis using the electromagnetic transients program (EMTP), where the transformer is often simulated by a single lumped capacitance. However, since high frequency surge overvoltages ranging from several kHz to several MHz are generated in an actual system, a transformer circuit model capable of simulating the range up to this high frequency must be developed for further accurate analysis. In this paper, a high frequency circuit model for an oil-immersed transformer was developed and its validity was verified through comparison with the measurement results on the model winding actually produced. Consequently, it emerged that a high frequency model with three serially connected LC parallel circuits could adequately simulate the impedance characteristics of the winding up to a high frequency range of several MHz. Following lightning surge analysis for a 500 kV substation using this high frequency model, the peak value of the waveform was evaluated as lower than that simulated by conventional lumped capacitance even though the front rising was steeper. This phenomenon can be explained by the charging process of the capacitance circuit inside the transformer. Furthermore, the waveform analyzed by each model was converted into an equivalent standard lightning impulse waveform and the respective peak values were compared. As a result, the peak value obtained by the lumped capacitance simulation was evaluated as relatively higher under the present analysis conditions.",
"title": ""
},
{
"docid": "601318db5ca75c76cd44da78db9f4147",
"text": "Many accidents were happened because of fast driving, habitual working overtime or tired spirit. This paper presents a solution of remote warning for vehicles collision avoidance using vehicular communication. The development system integrates dedicated short range communication (DSRC) and global position system (GPS) with embedded system into a powerful remote warning system. To transmit the vehicular information and broadcast vehicle position; DSRC communication technology is adopt as the bridge. The proposed system is divided into two parts of the positioning and vehicular units in a vehicle. The positioning unit is used to provide the position and heading information from GPS module, and furthermore the vehicular unit is used to receive the break, throttle, and other signals via controller area network (CAN) interface connected to each mechanism. The mobile hardware are built with an embedded system using X86 processor in Linux system. A vehicle is communicated with other vehicles via DSRC in non-addressed protocol with wireless access in vehicular environments (WAVE) short message protocol. From the position data and vehicular information, this paper provided a conflict detection algorithm to do time separation and remote warning with error bubble consideration. And the warning information is on-line displayed in the screen. This system is able to enhance driver assistance service and realize critical safety by using vehicular information from the neighbor vehicles. Keywords—Dedicated short range communication, GPS, Control area network, Collision avoidance warning system.",
"title": ""
},
{
"docid": "f271fbf2cc674bd1fa7d5f0c8149ced4",
"text": "A wide range of inconsistencies can arise during requirements engineering as goals and requirements are elicited from multiple stakeholders. Resolving such inconsistencies sooner or later in the process is a necessary condition for successful development of the software implementing those requirements. The paper first reviews the main types of inconsistency that can arise during requirements elaboration, defining them in an integrated framework and exploring their interrelationships. It then concentrates on the specific case of conflicting formulations of goals and requirements among different stakeholder viewpoints or within a single viewpoint. A frequent, weaker form of conflict called divergence is introduced and studied in depth. Formal techniques and heuristics are proposed for detecting conflicts and divergences from specifications of goals/ requirements and of domain properties. Various techniques are then discussed for resolving conflicts and divergences systematically by introduction of new goals or by transformation of specifications of goals/objects towards conflict-free versions. Numerous examples are given throughout the paper to illustrate the practical relevance of the concepts and techniques presented. The latter are discussed in the framework of the KAOS methodology for goal-driven requirements engineering. Index Terms Goal-driven requirements engineering, divergent requirements, conflict management, viewpoints, specification transformation, lightweight formal methods. ,((( 7UDQVDFWLRQV RQ 6RIWZDUH (QJLQHHULQJ 6SHFLDO ,VVXH RQ 0DQDJLQJ ,QFRQVLVWHQF\\ LQ 6RIWZDUH 'HYHORSPHQW 1RY",
"title": ""
},
{
"docid": "9a332d9ffe0e08cc688a8644de736202",
"text": "Applications are increasingly using XML to represent semi-structured data and, consequently, a large amount of XML documents is available worldwide. As XML documents evolve over time, comparing XML documents to understand their evolution becomes fundamental. The main focus of existing research for comparing XML documents resides in identifying syntactic changes. However, a deeper notion of the change meaning is usually desired. This paper presents an inference-based XML evolution approach using Prolog to deal with this problem. Differently from existing XML diff approaches, our approach composes multiple syntactic changes, which usually have a common purpose, to infer semantic changes. We evaluated our approach through ten versions of an employment XML document. In this evaluation, we could observe that each new version introduced syntactic changes that could be summarized into semantic changes.",
"title": ""
},
{
"docid": "c35fa79bd405ec0fb6689d395929c055",
"text": "This study examines the potential profit of bull flag technical trading rules using a template matching technique based on pattern recognition for the Nasdaq Composite Index (NASDAQ) and Taiwan Weighted Index (TWI). To minimize measurement error due to data snooping, this study performed a series of experiments to test the effectiveness of the proposed method. The empirical results indicated that all of the technical trading rules correctly predict the direction of changes in the NASDAQ and TWI. This finding may provide investors with important information on asset allocation. Moreover, better bull flag template price fit is associated with higher average return. The empirical results demonstrated that the average return of trading rules conditioned on bull flag significantly better than buying every day for the study period, especially for TWI. 2006 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "cb2917b8e6ea5413ef25bb241ff17d1f",
"text": "can be found at: Journal of Language and Social Psychology Additional services and information for http://jls.sagepub.com/cgi/alerts Email Alerts: http://jls.sagepub.com/subscriptions Subscriptions: http://www.sagepub.com/journalsReprints.nav Reprints: http://www.sagepub.com/journalsPermissions.nav Permissions: http://jls.sagepub.com/cgi/content/refs/23/4/447 SAGE Journals Online and HighWire Press platforms): (this article cites 16 articles hosted on the Citations",
"title": ""
},
{
"docid": "98a3216257c9c2358d2a70247b185cb9",
"text": "Deep Neural Networks (DNNs) have achieved impressive accuracy in many application domains including im-age classification. Training of DNNs is an extremely compute-intensive process and is solved using variants of the stochastic gradient descent (SGD) algorithm. A lot of recent research has focused on improving the performance of DNN training. In this paper, we present optimization techniques to improve the performance of the data parallel synchronous SGD algorithm using the Torch framework: (i) we maintain data in-memory to avoid file I/O overheads, (ii) we propose optimizations to the Torch data parallel table framework that handles multi-threading, and (iii) we present MPI optimization to minimize communication overheads. We evaluate the performance of our optimizations on a Power 8 Minsky cluster with 64 nodes and 256 NVidia Pascal P100 GPUs. With our optimizations, we are able to train 90 epochs of the ResNet-50 model on the Imagenet-1k dataset using 256 GPUs in just 48 minutes. This significantly improves on the previously best known performance of training 90 epochs of the ResNet-50 model on the same dataset using the same number of GPUs in 65 minutes. To the best of our knowledge, this is the best known training performance demonstrated for the Imagenet-1k dataset using 256 GPUs.",
"title": ""
},
{
"docid": "654592f46fbc578c756cddf4887eafb6",
"text": "We investigate the vulnerability of convolutional neural network (CNN) based face-recognition (FR) systems to presentation attacks (PA) performed using custom-made silicone masks. Previous works have studied the vulnerability of CNN-FR systems to 2D PAs such as print-attacks, or digitalvideo replay attacks, and to rigid 3D masks. This is the first study to consider PAs performed using custom-made flexible silicone masks. Before embarking on research on detecting a new variety of PA, it is important to estimate the seriousness of the threat posed by the type of PA. In this work we demonstrate that PAs using custom silicone masks do pose a serious threat to state-of-the-art FR systems. Using a new dataset based on six custom silicone masks, we show that the vulnerability of each FR system in this study is at least 10 times higher than its false match rate. We also propose a simple but effective presentation attack detection method, based on a low-cost thermal camera.",
"title": ""
},
{
"docid": "112ce03ab823ab75d81a47048cf003cd",
"text": "Data generated on Twitter has become a rich source for various data mining tasks. Those data analysis tasks that are dependent on the tweet semantics, such as sentiment analysis, emotion mining, and rumor detection among others, suffer considerably if the tweet is not credible, not real, or spam. In this paper, we perform an extensive analysis on credibility of Arabic content on Twitter. We also build a classification model (CAT) to automatically predict the credibility of a given Arabic tweet. Of particular originality is the inclusion of features extracted directly or indirectly from the author’s profile and timeline. To train and test CAT, we annotated for credibility a data set of 9, 000 Arabic tweets that are topic independent. CAT achieved consistent improvements in predicting the credibility of the tweets when compared to several baselines and when compared to the state-of-the-art approach with an improvement of 21% in weighted average Fmeasure. We also conducted experiments to highlight the importance of the userbased features as opposed to the contentbased features. We conclude our work with a feature reduction experiment that highlights the best indicative features of credibility.",
"title": ""
},
{
"docid": "3ec63f1c1f74c5d11eaa9d360ceaac55",
"text": "High-level shape understanding and technique evaluation on large repositories of 3D shapes often benefit from additional information known about the shapes. One example of such information is the semantic segmentation of a shape into functional or meaningful parts. Generating accurate segmentations with meaningful segment boundaries is, however, a costly process, typically requiring large amounts of user time to achieve high quality results. In this paper we present an active learning framework for large dataset segmentation, which iteratively provides the user with new predictions by training new models based on already segmented shapes. Our proposed pipeline consists of three novel components. First, we a propose a fast and relatively accurate feature-based deep learning model to provide datasetwide segmentation predictions. Second, we propose an information theory measure to estimate the prediction quality and for ordering subsequent fast and meaningful shape selection. Our experiments show that such suggestive ordering helps reduce users time and effort, produce high quality predictions, and construct a model that generalizes well. Finally, we provide effective segmentation refinement features to help the user quickly correct any incorrect predictions. We show that our framework is more accurate and in general more efficient than state-of-the-art, for massive dataset segmentation with while also providing consistent segment boundaries.",
"title": ""
},
{
"docid": "2f2c36452ab45c4234904d9b11f28eb7",
"text": "Bitcoin is a potentially disruptive new crypto-currency based on a decentralized opensource protocol which is gradually gaining popularity. Perhaps the most important question that will affect Bitcoin’s success, is whether or not it will be able to scale to support the high volume of transactions required from a global currency system. We investigate the restrictions on the rate of transaction processing in Bitcoin as a function of both the bandwidth available to nodes and the network delay, both of which lower the efficiency of Bitcoin’s transaction processing. The security analysis done by Bitcoin’s creator Satoshi Nakamoto [12] assumes that block propagation delays are negligible compared to the time between blocks—an assumption that does not hold when the protocol is required to process transactions at high rates. We improve upon the original analysis and remove this assumption. Using our results, we are able to give bounds on the number of transactions per second the protocol can handle securely. Building on previously published measurements by Decker and Wattenhofer [5], we show these bounds are currently more restrictive by an order of magnitude than the bandwidth needed to stream all transactions. We additionally show how currently planned improvements to the protocol, namely the use of transaction hashes in blocks (instead of complete transaction records), will dramatically alleviate these restrictions. Finally, we present an easily implementable modification to the way Bitcoin constructs its main data structure, the blockchain, that immensely improves security from attackers, especially when the network operates at high rates. This improvement allows for further increases in the number of transactions processed per second. We show that with our proposed modification, significant speedups can be gained in confirmation time of transactions as well. The block generation rate can be securely increased to more than one block per second – a 600 fold speedup compared to today’s rate, while still allowing the network to processes many transactions per second.",
"title": ""
},
{
"docid": "2c48c5f63bafff889f3144b62751f97e",
"text": "This project explores approaches to learning the SQuAD dataset. It introduces a simple baseline model using an encoder-decoder architecture and shows how pointer networks and coattention techniques result in significant improvements over the baseline. The best model, combining coattention encoding with a pointer network decoder, reaches an F1 score of 65.036% and EM score of 53.205% on the SQuAD test set.",
"title": ""
},
{
"docid": "f8eff785cd8691392d41d3c54f72ae42",
"text": "The last ten years have seen a proliferation of introductory programming environments designed for learners across the K-12 spectrum. These environments include visual block-based tools, text-based languages designed for novices, and, increasingly, hybrid environments that blend features of block-based and text-based programming. This paper presents results from a quasi-experimental study investigating the affordances of a hybrid block/text programming environment relative to comparable block-based and textual versions in an introductory high school computer science class. The analysis reveals the hybrid environment demonstrates characteristics of both ancestors while outperforming the block-based and text-based versions in certain dimensions. This paper contributes to our understanding of the design of introductory programming environments and the design challenge of creating and evaluating novel representations for learning.",
"title": ""
},
{
"docid": "bfd57465a5d6f85fb55ffe13ef79f3a5",
"text": "We investigate the utility of different auxiliary objectives and training strategies within a neural sequence labeling approach to error detection in learner writing. Auxiliary costs provide the model with additional linguistic information, allowing it to learn general-purpose compositional features that can then be exploited for other objectives. Our experiments show that a joint learning approach trained with parallel labels on in-domain data improves performance over the previous best error detection system. While the resulting model has the same number of parameters, the additional objectives allow it to be optimised more efficiently and achieve better performance.",
"title": ""
},
{
"docid": "5645ebc42dcd7d06866667233a0b1f67",
"text": "In recent years, the rapid growth of information technology and digital communication has become very important to secure information transmission between the sender and receiver. Therefore, steganography introduces strongly to hide information and to communicate a secret data in an appropriate multimedia carrier, e.g., image, audio and video files. In this paper, a new algorithm for image steganography has been proposed to hide a large amount of secret data presented by secret color image. This algorithm is based on different size image segmentations (DSIS) and modified least significant bits (MLSB), where the DSIS algorithm has been applied to embed a secret image randomly instead of sequentially; this approach has been applied before embedding process. The number of bit to be replaced at each byte is non uniform, it bases on byte characteristics by constructing an effective hypothesis. The simulation results justify that the proposed approach is employed efficiently and satisfied high imperceptible with high payload capacity reached to four bits per byte.",
"title": ""
}
] |
scidocsrr
|
d20311cd85785a8283b2c0a956867149
|
Smart City as a Service (SCaaS): A Future Roadmap for E-Government Smart City Cloud Computing Initiatives
|
[
{
"docid": "0521f79f13cdbe05867b5db733feac16",
"text": "This conceptual paper discusses how we can consider a particular city as a smart one, drawing on recent practices to make cities smart. A set of the common multidimensional components underlying the smart city concept and the core factors for a successful smart city initiative is identified by exploring current working definitions of smart city and a diversity of various conceptual relatives similar to smart city. The paper offers strategic principles aligning to the three main dimensions (technology, people, and institutions) of smart city: integration of infrastructures and technology-mediated services, social learning for strengthening human infrastructure, and governance for institutional improvement and citizen engagement.",
"title": ""
},
{
"docid": "aa32bff910ce6c7b438dc709b28eefe3",
"text": "Here we sketch the rudiments of what constitutes a smart city which we define as a city in which ICT is merged with traditional infrastructures, coordinated and integrated using new digital technologies. We first sketch our vision defining seven goals which concern: developing a new understanding of urban problems; effective and feasible ways to coordinate urban technologies; models and methods for using urban data across spatial and temporal scales; developing new technologies for communication and dissemination; developing new forms of urban governance and organisation; defining critical problems relating to cities, transport, and energy; and identifying risk, uncertainty, and hazards in the smart city. To this, we add six research challenges: to relate the infrastructure of smart cities to their operational functioning and planning through management, control and optimisation; to explore the notion of the city as a laboratory for innovation; to provide portfolios of urban simulation which inform future designs; to develop technologies that ensure equity, fairness and realise a better quality of city life; to develop technologies that ensure informed participation and create shared knowledge for democratic city governance; and to ensure greater and more effective mobility and access to opportunities for a e-mail: m.batty@ucl.ac.uk 482 The European Physical Journal Special Topics urban populations. We begin by defining the state of the art, explaining the science of smart cities. We define six scenarios based on new cities badging themselves as smart, older cities regenerating themselves as smart, the development of science parks, tech cities, and technopoles focused on high technologies, the development of urban services using contemporary ICT, the use of ICT to develop new urban intelligence functions, and the development of online and mobile forms of participation. Seven project areas are then proposed: Integrated Databases for the Smart City, Sensing, Networking and the Impact of New Social Media, Modelling Network Performance, Mobility and Travel Behaviour, Modelling Urban Land Use, Transport and Economic Interactions, Modelling Urban Transactional Activities in Labour and Housing Markets, Decision Support as Urban Intelligence, Participatory Governance and Planning Structures for the Smart City. Finally we anticipate the paradigm shifts that will occur in this research and define a series of key demonstrators which we believe are important to progressing a science",
"title": ""
},
{
"docid": "8654b5134dadc076a6298526e60f66fb",
"text": "Ideas competitions appear to be a promising tool for crowdsourcing and open innovation processes, especially for business-to-business software companies. active participation of potential lead users is the key to success. Yet a look at existing ideas competitions in the software field leads to the conclusion that many information technology (It)–based ideas competitions fail to meet requirements upon which active participation is established. the paper describes how activation-enabling functionalities can be systematically designed and implemented in an It-based ideas competition for enterprise resource planning software. We proceeded to evaluate the outcomes of these design measures and found that participation can be supported using a two-step model. the components of the model support incentives and motives of users. Incentives and motives of the users then support the process of activation and consequently participation throughout the ideas competition. this contributes to the successful implementation and maintenance of the ideas competition, thereby providing support for the development of promising innovative ideas. the paper concludes with a discussion of further activation-supporting components yet to be implemented and points to rich possibilities for future research in these areas.",
"title": ""
}
] |
[
{
"docid": "2e9f6ac770ddeb9bbc50d9c55b4131f9",
"text": "IEEE 802.15.4 standard for Low Power Wireless Personal Area Networks (LoWPANs) is emerging as a promising technology to bring envisioned ubiquitous paragon, into realization. Considerable efforts are being carried on to integrate LoWPANs with other wired and wireless IP networks, in order to make use of pervasive nature and existing infrastructure associated with IP technologies. Designing a security solution becomes a challenging task as this involves threats from wireless domain of resource constrained devices as well as from extremely mature IP domain. In this paper we have i) identified security threats and requirements for LoWPANs ii) analyzed current security solutions and identified their shortcomings, iii) proposed a generic security framework that can be modified according to application requirements to provide desired level of security. We have also given example implementation scenario of our proposed framework for resource and security critical applications.",
"title": ""
},
{
"docid": "87a319361ad48711eff002942735258f",
"text": "This paper describes an innovative principle for climbing obstacles with a two-axle and four-wheel robot with articulated frame. It is based on axle reconfiguration while ensuring permanent static stability. A simple example is demonstrated based on the OpenWHEEL platform with a serial mechanism connecting front and rear axles of the robot. A generic tridimensional multibody simulation is provided with Adams software. It permits to validate the concept and to get an approach of control laws for every type of inter-axle mechanism. This climbing principle permits to climb obstacles as high as the wheel while keeping energetic efficiency of wheel propulsion and using only one supplemental actuator. Applications to electric wheelchairs, quads and all terrain vehicles (ATV) are envisioned",
"title": ""
},
{
"docid": "1ed93d114804da5714b7b612f40e8486",
"text": "Volleyball players are at high risk of overuse shoulder injuries, with spike biomechanics a perceived risk factor. This study compared spike kinematics between elite male volleyball players with and without a history of shoulder injuries. Height, mass, maximum jump height, passive shoulder rotation range of motion (ROM), and active trunk ROM were collected on elite players with (13) and without (11) shoulder injury history and were compared using independent samples t tests (P < .05). The average of spike kinematics at impact and range 0.1 s before and after impact during down-the-line and cross-court spike types were compared using linear mixed models in SPSS (P < .01). No differences were detected between the injured and uninjured groups. Thoracic rotation and shoulder abduction at impact and range of shoulder rotation velocity differed between spike types. The ability to tolerate the differing demands of the spike types could be used as return-to-play criteria for injured athletes.",
"title": ""
},
{
"docid": "3e98e6e61992d73d4b62cbf0b4e8fac2",
"text": "Privacy decision making has been investigated in the Information Systems literature using two contrasting frameworks. A first framework has largely focused on deliberative, rational processes by which individuals weigh the expected benefits of privacy allowances and disclosure against their resulting costs. Under this framework, consumer privacy decision making is broadly constructed as driven by stable, and therefore predictable, individual preferences for privacy. More recently, a second framework has leveraged theories and results from behavioral decision research to construe privacy decision making as a process in which cognitive heuristics and biases often occur, and individuals are significantly influenced by non-normative factors in choosing what to reveal or to protect about themselves. In three experiments, we combine and contrast these two perspectives by evaluating the impact of changes in objective risk of disclosure (normative factors), and the impact of changes in relative, and in particular reference-dependent, perceptions of risk (non-normative factors) on individual privacy decision making. We find that both relative and objective risks can impact individual privacy decisions. However, and surprisingly, we find that in experiments more closely modeled on real world contexts, and in experiments that capture actual privacy decisions as opposed to hypothetical choices, relative risk is a more pronounced driver of privacy decisions compared to objective risk. Our results suggest that while normative factors can influence consumers’ self-predicted, hypothetical behavior, nonnormative factors may sometimes be more important and consistent drivers of actual privacy choices.",
"title": ""
},
{
"docid": "11d3dc9169c914bfdff66d1d9afddfaf",
"text": "As most modern cryptographic Radio Frequency Identification (RFID) devices are based on ciphers that are secure from a purely theoretical point of view, e.g., (Triple-)DES or AES, adversaries have been adopting new methods to extract secret information and cryptographic keys from contactless smartcards: Side-Channel Analysis (SCA) targets the physical implementation of a cipher and allows to recover secret keys by exploiting a side-channel, for instance, the electro-magnetic (EM) emanation of an Integrated Circuit (IC). In this paper we present an analog demodulator specifically designed for refining the SCA of contactless smartcards. The customized analogue hardware increases the quality of EM measurements, facilitates the processing of the side-channel leakage and can serve as a plug-in component to enhance any existing SCA laboratory. Employing it to obtain power profiles of several real-world cryptographic RFIDs, we demonstrate the effectiveness of our measurement setup and evaluate the improvement of our new analog technique compared to previously proposed approaches. Using the example of the popular Mifare DESFire MF3ICD40 contactless smartcard, we show that commercial RFID devices are susceptible to the proposed SCA methods. The security analyses presented in this paper do not require expensive equipment and demonstrate that SCA poses a severe threat to many real-world systems. This novel attack vector has to be taken into account when employing contactless smartcards in security-sensitive applications, e.g., for wireless payment or identification.",
"title": ""
},
{
"docid": "d612aeb7f7572345bab8609571f4030d",
"text": "In conventional supervised training, a model is trained to fit all the training examples. However, having a monolithic model may not always be the best strategy, as examples could vary widely. In this work, we explore a different learning protocol that treats each example as a unique pseudo-task, by reducing the original learning problem to a few-shot meta-learning scenario with the help of a domain-dependent relevance function.1 When evaluated on the WikiSQL dataset, our approach leads to faster convergence and achieves 1.1%–5.4% absolute accuracy gains over the non-meta-learning counterparts.",
"title": ""
},
{
"docid": "83e4ee7cf7a82fcb8cb77f7865d67aa8",
"text": "A meta-analysis of the relationship between class attendance in college and college grades reveals that attendance has strong relationships with both class grades (k = 69, N = 21,195, r = .44) and GPA (k = 33, N = 9,243, r = .41). These relationships make class attendance a better predictor of college grades than any other known predictor of academic performance, including scores on standardized admissions tests such as the SAT, high school GPA, study habits, and study skills. Results also show that class attendance explains large amounts of unique variance in college grades because of its relative independence from SAT scores and high school GPA and weak relationship with student characteristics such as conscientiousness and motivation. Mandatory attendance policies appear to have a small positive impact on average grades (k = 3, N = 1,421, d = .21). Implications for theoretical frameworks of student academic performance and educational policy are discussed. Many college instructors exhort their students to attend class as frequently as possible, arguing that high levels of class attendance are likely to increase learning and improve student grades. Such arguments may hold intuitive appeal and are supported by findings linking class attendance to both learning (e.g., Jenne, 1973) and better grades (e.g., Moore et al., 2003), but both students and some educational researchers appear to be somewhat skeptical of the importance of class attendance. This skepticism is reflected in high class absenteeism rates ranging from 18. This article aims to help resolve the debate regarding the importance of class attendance by providing a quantitative review of the literature investigating the relationship of class attendance with both college grades and student characteristics that may influence attendance. 273 At a theoretical level class attendance fits well into frameworks that emphasize the joint role of cognitive ability and motivation in determining learning and work performance (e.g., Kanfer & Ackerman, 1989). Specifically, cognitive ability and motivation influence academic outcomes via two largely distinct mechanisms— one mechanism related to information processing and the other mechanism being behavioral in nature. Cognitive ability influences the degree to which students are able to process, integrate, and remember material presented to them (Humphreys, 1979), a mechanism that explains the substantial predictive validity of SAT scores for college grades (e. & Ervin, 2000). Noncognitive attributes such as conscientiousness and achievement motivation are thought to influence grades via their influence on behaviors that facilitate the understanding and …",
"title": ""
},
{
"docid": "16e6acd62753e8c0c206bde20f3cbe52",
"text": "In this paper we focus our attention on the comparison of various lemmatization and stemming algorithms, which are often used in nature language processing (NLP). Sometimes these two techniques are considered to be identical, but there is an important difference. Lemmatization is generally more utilizable, because it produces the basic word form which is required in many application areas (i.e. cross-language processing and machine translation). However, lemmatization is a difficult task especially for highly inflected natural languages having a lot of words for the same normalized word form. We present a novel lemmatization algorithm which utilizes the multilingual semantic thesaurus Eurowordnet (EWN). We describe the algorithm in detail and compare it with other widely used algorithms for word normalization on two different corpora. We present promising results obtained by our EWN-based lemmatization approach in comparison to other techniques. We also discuss the influence of the word normalization on classification task in general. In overall, the performance of our method is good and it achieves similar precision and recall in comparison with other word normalization methods. However, our experiments indicate that word normalization does not affect the text classification task significantly.",
"title": ""
},
{
"docid": "e9b78d6f0fd98d5ee27bc08864cdb6a1",
"text": "Mathematical models play a pivotal role in understanding and designing advanced low-power wireless systems. However, the distributed and uncoordinated operation of traditional multi-hop low-power wireless protocols greatly complicates their accurate modeling. This is mainly because these protocols build and maintain substantial network state to cope with the dynamics of low-power wireless links. Recent protocols depart from this design by leveraging synchronous transmissions (ST), whereby multiple nodes simultaneously transmit towards the same receiver, as opposed to pair wise link-based transmissions (LT). ST improve the one-hop packet reliability to an extent that efficient multi-hop protocols with little network state are feasible. This paper studies whether ST also enable simple yet accurate modeling of these protocols. Our contribution to this end is two-fold. First, we show, through experiments on a 139-node test bed, that characterizing packet receptions and losses as a sequence of independent and identically distributed (i.i.d.) Bernoulli trials-a common assumption in protocol modeling but often illegitimate for LT-is largely valid for ST. We then show how this finding simplifies the modeling of a recent ST-based protocol, by deriving (i) sufficient conditions for probabilistic guarantees on the end-to-end packet reliability, and (ii) a Markovian model to estimate the long-term energy consumption. Validation using test bed experiments confirms that our simple models are also highly accurate, for example, the model error in energy against real measurements is 0.25%, a figure never reported before in the related literature.",
"title": ""
},
{
"docid": "5aa8fb560e7d5c2621054da97c30ffec",
"text": "PURPOSE\nThe aim of this meta-analysis was to evaluate different methods for guided bone regeneration using collagen membranes and particulate grafting materials in implant dentistry.\n\n\nMATERIALS AND METHODS\nAn electronic database search and hand search were performed for all relevant articles dealing with guided bone regeneration in implant dentistry published between 1980 and 2014. Only randomized clinical trials and prospective controlled studies were included. The primary outcomes of interest were survival rates, membrane exposure rates, bone gain/defect reduction, and vertical bone loss at follow-up. A meta-analysis was performed to determine the effects of presence of membrane cross-linking, timing of implant placement, membrane fixation, and decortication.\n\n\nRESULTS\nTwenty studies met the inclusion criteria. Implant survival rates were similar between simultaneous and subsequent implant placement. The membrane exposure rate of cross-linked membranes was approximately 30% higher than that of non-cross-linked membranes. The use of anorganic bovine bone mineral led to sufficient newly regenerated bone and high implant survival rates. Membrane fixation was weakly associated with increased vertical bone gain, and decortication led to higher horizontal bone gain (defect depth).\n\n\nCONCLUSION\nGuided bone regeneration with particulate graft materials and resorbable collagen membranes is an effective technique for lateral alveolar ridge augmentation. Because implant survival rates for simultaneous and subsequent implant placement were similar, simultaneous implant placement is recommended when possible. Additional techniques like membrane fixation and decortication may represent beneficial implications for the practice.",
"title": ""
},
{
"docid": "ffccfdc91a1c0b30cf98d0461149580b",
"text": "This paper presents design guidelines for ultra-low power Low Noise Amplifier (LNA) design by comparing input matching, gain, and noise figure (NF) characteristics of common-source (CS) and common-gate (CG) topologies. A current-reused ultra-low power 2.2 GHz CG LNA is proposed and implemented based on 0.18 um CMOS technology. Measurement results show 13.9 dB power gain, 5.14 dB NF, and −9.3 dBm IIP3, respectively, while dissipating 140 uA from a 1.5 V supply, which shows best figure of merit (FOM) among all published ultra-low power LNAs.",
"title": ""
},
{
"docid": "c3b05f287192be94c6f3ea5a13d6ec5d",
"text": "Existing eye gaze tracking systems typically require an explicit personal calibration process in order to estimate certain person-specific eye parameters. For natural human computer interaction, such a personal calibration is often cumbersome and unnatural. In this paper, we propose a new probabilistic eye gaze tracking system without explicit personal calibration. Unlike the traditional eye gaze tracking methods, which estimate the eye parameter deterministically, our approach estimates the probability distributions of the eye parameter and the eye gaze, by combining image saliency with the 3D eye model. By using an incremental learning framework, the subject doesn't need personal calibration before using the system. His/her eye parameter and gaze estimation can be improved gradually when he/she is naturally viewing a sequence of images on the screen. The experimental result shows that the proposed system can achieve less than three degrees accuracy for different people without calibration.",
"title": ""
},
{
"docid": "e13874aa8c3fe19bb2a176fd3a039887",
"text": "As a typical deep learning model, Convolutional Neural Network (CNN) has shown excellent ability in solving complex classification problems. To apply CNN models in mobile ends and wearable devices, a fully pipelined hardware architecture adopting a Row Processing Tree (RPT) structure with small memory resource consumption between convolutional layers is proposed. A modified Row Stationary (RS) dataflow is implemented to evaluate the RPT architecture. Under the the same work frequency requirement for these two architectures, the experimental results show that the RPT architecture reduces 91% on-chip memory and 75% DRAM bandwidth compared with the modified RS dataflow, but the throughput of the modified RS dataflow is 3 times higher than the our proposed RPT architecture. The RPT architecture can achieve 121fps at 100MHZ while processing a CNN including 4 convolutional layers.",
"title": ""
},
{
"docid": "f79def9a56be8d91c81385abfc6dbee7",
"text": "Computational Creativity is the AI subfield in which we study how to build computational models of creative thought in science and the arts. From an engineering perspective, it is des irable to have concrete measures for assessing the progress made from one version of a program to another, or for comparing and contras ting different software systems for the same creative task. We de scribe the Turing Test and versions of it which have been used in orde r to measure progress in Computational Creativity. We show th at the versions proposed thus far lack the important aspect of inte rac ion, without which much of the power of the Turing Test is lost. We a rgue that the Turing Test is largely inappropriate for the purpos es of evaluation in Computational Creativity, since it attempts to ho mogenise creativity into a single (human) style, does not take into ac count the importance of background and contextual information for a c eative act, encourages superficial, uninteresting advances in fro nt-ends, and rewards creativity which adheres to a certain style over tha t which creates something which is genuinely novel. We further argu e that although there may be some place for Turing-style tests for C omputational Creativity at some point in the future, it is curren tly untenable to apply any defensible version of the Turing Test. As an alternative to Turing-style tests, we introduce two de scriptive models for evaluating creative software, the FACE mode l which describes creative acts performed by software in terms of tu ples of generative acts, and the IDEA model which describes how such creative acts can have an impact upon an ideal audience, given id eal information about background knowledge and the software de v lopment process. While these models require further study and e l boration, we believe that they can be usefully applied to current sys ems as well as guiding further development of creative systems. 1 The Turing Test and Computational Creativity The Turing Test (TT), in which a computer and human are interr ogated, with the computer considered intelligent if the huma n interrogator is unable to distinguish between them, is principal ly a philosophical construct proposed by Alan Turing as a way of determ ining whether AI has achieved its goal of simulating intelligence [1]. The TT has provoked much discussion, both historical and contem porary, however this has principally been within the philosophy of A I: most AI researchers see it as a distraction from their goals, enco uraging a mere trickery of intelligence and ever more sophisticated n atural language front ends, as opposed to focussing on real problems. D espite the appeal of the (as yet unawarded) Loebner Prize, most subfi elds of AI have developed and follow their own evaluation criteri a and methodologies, which have little to do with the TT. 1 School of Informatics, University of Edinburgh, UK 2 Department of Computing, Imperial College, London, UK Computational Creativity (CC) is a subfield of AI, in which re searchers aim to model creative thought by building program s which can produce ideas and artefacts which are novel, surprising and valuable, either autonomously or in conjunction with humans. 
Th ere are three main motivations for the study of Computational Creat ivity: • to provide a computational perspective on human creativity , in order to help us to understand it (cognitive science); • to enable machines to be creative, in order to enhance our liv es in some way (engineering); and • to produce tools which enhance human creativity (aids for cr eative individuals). Creativity can be subdivided into everyday problem-solvin g, and the sort of creativity reserved for the truly great, in which a problem is solved or an object created that has a major impact on other people. These are respectively known as “little-c” (mundane) a nd “bigC” (eminent) creativity [2]. Boden [3] draws a similar disti nction in her view of creativity as search within a conceptual space, w h re “exploratory creativity” searches within the space, and “tran sformational creativity” involves expanding the space by breaking one or m e of the defining characteristics and creating a new conceptua l space. Boden sees transformational creativity as more surprising , i ce, according to the defining rules of the conceptual space, ideas w ithin this space could not have been found before. There are two notions of evaluation in CC: ( i) judgements which determine whether an idea or artefact is valuable or not (an e ssential criterion for creativity) – these judgements may be made int rnally by whoever produced the idea, or externally, by someone else and (ii ) judgements to determine whether a system is acting creativ ely or not. In the following discussion, by evaluation, we mean the latter judgement. Finding measures of evaluation of CC is an active area of research, both influenced by, and influencing, practical a nd theoretical aspects of CC. It is a particularly important area, s ince such measures suggest ways of defining progress in the field, 3 as well as strongly guiding program design. While tests of creativity in humans are important for our understanding of creativity, they do n ot usually causehumans to be creative (creativity training programs, which train people to do well at such tests, notwithstanding). Way s in which CC is evaluated, on the other hand, will have a deep influence o future development of potentially creative programs. Clearl y, different modes of evaluation will be appropriate for the different mo tivations listed above. 3 The necessity for good measures of evaluation in CC is somewh at paralleled in the psychology of creativity: “Creativity is becoming a p opular topic in educational, economic and political circles throughout th e world – whether this popularity is just a passing fad or a lasting change in in terest in creativity and innovation will probably depend, in large part, on wh ether creativity assessment keeps pace with the rest of the field.” [4, p. 64] The Turing Test is of particular interest to CC for two reason s. Firstly, unlike the general situation in AI, the TT, or varia tions of it, arecurrently being used to evaluate candidate programs in CC. T hus, the TT is having a major influence on the development of CC. Thi s influence is usually neither noted nor questioned. Secondly , there are huge philosophical problems with using a test based on imita tion to evaluate competence in an area of thought which is based on or iginality. While there are varying definitions of creativity, t he majority consider some interpretation of novelty and utility to be es sential criteria. 
For instance, one of the commonalities found by Rothe nberg in a collection of international perspectives on creativit y is that “creativity involves thinking that is aimed at producing ideas o r products that are relatively novel” [5, p.2], and in CC the combin ation of novelty and usefulness is accepted as key (for instance, s ee [6] or [3]). In [4], Plucker and Makel list “similar, overlapping a nd possibly synonymous terms for creativity: imagination, ingenuity, innovation, inspiration, inventiveness, muse, novelty, originality, serendipity, talent and unique”. The term ‘imitation’ is simply antipodal to many of these terms. In the following sections, we firstly describe and discuss so me attempts to evaluate Computational Creativity using the Turi ng Test or versions of it ( §2), concluding that these attempts all omit the important aspect of interaction, and suggest the sort of directio n that a TT for a creative computer art system might follow. We then pres ent a series of arguments that the TT is inappropriate for measuring creativity in computers (or humans) in §3, and suggest that although there may be some place for Turing-style tests for Computational C reativity at some point in the future, it is currently untenable and impractical. As an alternative to Turing-style tests, in §4, we introduce two descriptive models for evaluating creative software, the F ACE model which describes creative acts performed by software in term s of tuples of generative acts, and the IDEA model which describes h ow such creative acts can have an impact upon an ideal audience, given ideal information about background knowledge and the softw are development process. We conclude our discussion in §5. 2 Attempts to evaluate Computational Creativity using the Turing Test or versions of it There have been several attempts to evaluate Computational Cre tivity using the Turing Test or versions of it. While these are us f l in terms of advancing our understanding of CC, they do not go f ar enough. In this section we discuss two such advances ( §2.1 and§2.2), and two further suggestions on using human creative behavio ur as a guide for evaluating Computational Creativity ( §2.3). We highlight the importance of interaction in §2.4. 2.1 Discrimination tests Pearce and Wiggins [7] assert for the need for objective, fal sifi ble measures of evaluation in cognitive musicology. They propo se the ‘discrimination test’, which is analogous to the TT, in whic subjects are played segments of both machine and human-generated mus ic and asked to distinguish between them. This might be in a part icular style, such as Bach’s music, or might be more general. The y also present one of the most considered analyses of whether Turin g-style tests such as the framework they propose might be appropriat e for evaluating Computational Creativity [7, §7]. While they do not directly refer to Boden’s exploratory creativity [3], instea d referring to Boden’s distinction between psychological (P-creativity , concerning ideas which are novel with resepct to a particular mind) and h istorical creativity (H-creativity, concerning ideas which are novel with respect to the whole of human history ), they do argue that much creative work is carried out within a particular style. They cite Garnham’s response ",
"title": ""
},
{
"docid": "34cab0c02d5f5ec5183bd63c01f932c7",
"text": "Autogynephilia is defined as a male’s propensity to be sexually aroused by the thought or image of himself as female. Autogynephilia explains the desire for sex reassignment of some maleto-female (MtF) transsexuals. It can be conceptualized as both a paraphilia and a sexual orientation. The concept of autogynephilia provides an alternative to the traditional model of transsexualism that emphasizes gender identity. Autogynephilia helps explain mid-life MtF gender transition, progression from transvestism to transsexualism, the prevalence of other paraphilias among MtF transsexuals, and late development of sexual interest in male partners. Hormone therapy and sex reassignment surgery can be effective treatments in autogynephilic transsexualism. The concept of autogynephilia can help clinicians better understand MtF transsexual clients who recognize a strong sexual component to their gender dysphoria. (Journal of Gay & Lesbian Psychotherapy, 8(1/2), 2004, pp. 69-87.)",
"title": ""
},
{
"docid": "9533193407869250854157e89d2815eb",
"text": "Life events are often described as major forces that are going to shape tomorrow's consumer need, behavior and mood. Thus, the prediction of life events is highly relevant in marketing and sociology. In this paper, we propose a data-driven, real-time method to predict individual life events, using readily available data from smartphones. Our large-scale user study with more than 2000 users shows that our method is able to predict life events with 64.5% higher accuracy, 183.1% better precision and 88.0% higher specificity than a random model on average.",
"title": ""
},
{
"docid": "75e5480b6a319e1c879eba50604a4f91",
"text": "Quantum circuits are time-dependent diagrams describing the process of quantum computation. Usually, a quantum algorithm must be mapped into a quantum circuit. Optimal synthesis of quantum circuits is intractable, and heuristic methods must be employed. With the use of heuristics, the optimality of circuits is no longer guaranteed. In this paper, we consider a local optimization technique based on templates to simplify and reduce the depth of nonoptimal quantum circuits. We present and analyze templates in the general case and provide particular details for the circuits composed of NOT, CNOT, and controlled-sqrt-of-NOT gates. We apply templates to optimize various common circuits implementing multiple control Toffoli gates and quantum Boolean arithmetic circuits. We also show how templates can be used to compact the number of levels of a quantum circuit. The runtime of our implementation is small, whereas the reduction in the number of quantum gates and number of levels is significant.",
"title": ""
},
{
"docid": "a7eec693523207e6a9547000c1fbf306",
"text": "Articulated hand tracking systems have been commonly used in virtual reality applications, including systems with human-computer interaction or interaction with game consoles. However, building an effective real-time hand pose tracker remains challenging. In this paper, we present a simple and efficient methodology for tracking and reconstructing 3d hand poses using a markered optical motion capture system. Markers were positioned at strategic points, and an inverse kinematics solver was incorporated to fit the rest of the joints to the hand model. The model is highly constrained with rotational and orientational constraints, allowing motion only within a feasible set. The method is real-time implementable and the results are promising, even with a low frame rate.",
"title": ""
},
{
"docid": "f56c5a623b29b88f42bf5d6913b2823e",
"text": "We describe a novel interface for composition of polygonal meshes based around two artist-oriented tools: Geometry Drag-and-Drop and Mesh Clone Brush. Our drag-and-drop interface allows a complex surface part to be selected and interactively dragged to a new location. We automatically fill the hole left behind and smoothly deform the part to conform to the target surface. The artist may increase the boundary rigidity of this deformation, in which case a fair transition surface is automatically computed. Our clone brush allows for transfer of surface details with precise spatial control. These tools support an interaction style that has not previously been demonstrated for 3D surfaces, allowing detailed 3D models to be quickly assembled from arbitrary input meshes. We evaluated this interface by distributing a basic tool to computer graphics hobbyists and professionals, and based on their feedback, describe potential workflows which could utilize our techniques.",
"title": ""
},
{
"docid": "4cff5279110ff2e45060f3ccec7d51ba",
"text": "Web site usability is a critical metric for assessing the quality of a firm’s Web presence. A measure of usability must not only provide a global rating for a specific Web site, ideally it should also illuminate specific strengths and weaknesses associated with site design. In this paper, we describe a heuristic evaluation procedure for examining the usability of Web sites. The procedure utilizes a comprehensive set of usability guidelines developed by Microsoft. We present the categories and subcategories comprising these guidelines, and discuss the development of an instrument that operationalizes the measurement of usability. The proposed instrument was tested in a heuristic evaluation study where 1,475 users rated multiple Web sites from four different industry sectors: airlines, online bookstores, automobile manufacturers, and car rental agencies. To enhance the external validity of the study, users were asked to assume the role of a consumer or an investor when assessing usability. Empirical results suggest that the evaluation procedure, the instrument, as well as the usability metric exhibit good properties. Implications of the findings for researchers, for Web site designers, and for heuristic evaluation methods in usability testing are offered. (Usability; Heuristic Evaluation; Microsoft Usability Guidelines; Human-Computer Interaction; Web Interface)",
"title": ""
}
] |
scidocsrr
|
25a20b6cbaedbd10c78015e685277e68
|
Extracting hand grasp and motion for intent expression in mid-air shape deformation: A concrete and iterative exploration through a virtual pottery application
|
[
{
"docid": "201f6b0491ecab7bc89f7f18a4d11f25",
"text": "Gesture and speech combine to form a rich basis for human conversational interaction. To exploit these modalities in HCI, we need to understand the interplay between them and the way in which they support communication. We propose a framework for the gesture research done to date, and present our work on the cross-modal cues for discourse segmentation in free-form gesticulation accompanying speech in natural conversation as a new paradigm for such multimodal interaction. The basis for this integration is the psycholinguistic concept of the coequal generation of gesture and speech from the same semantic intent. We present a detailed case study of a gesture and speech elicitation experiment in which a subject describes her living space to an interlocutor. We perform two independent sets of analyses on the video and audio data: video and audio analysis to extract segmentation cues, and expert transcription of the speech and gesture data by microanalyzing the videotape using a frame-accurate videoplayer to correlate the speech with the gestural entities. We compare the results of both analyses to identify the cues accessible in the gestural and audio data that correlate well with the expert psycholinguistic analysis. We show that \"handedness\" and the kind of symmetry in two-handed gestures provide effective supersegmental discourse cues.",
"title": ""
},
{
"docid": "759207b77a14edb08b81cbd53def9960",
"text": "Computer Aided Design (CAD) typically involves tasks such as adjusting the camera perspective and assembling pieces in free space that require specifying 6 degrees of freedom (DOF). The standard approach is to factor these DOFs into 2D subspaces that are mapped to the x and y axes of a mouse. This metaphor is inherently modal because one needs to switch between subspaces, and disconnects the input space from the modeling space. In this paper, we propose a bimanual hand tracking system that provides physically-motivated 6-DOF control for 3D assembly. First, we discuss a set of principles that guide the design of our precise, easy-to-use, and comfortable-to-use system. Based on these guidelines, we describe a 3D input metaphor that supports constraint specification classically used in CAD software, is based on only a few simple gestures, lets users rest their elbows on their desk, and works alongside the keyboard and mouse. Our approach uses two consumer-grade webcams to observe the user's hands. We solve the pose estimation problem with efficient queries of a precomputed database that relates hand silhouettes to their 3D configuration. We demonstrate efficient 3D mechanical assembly of several CAD models using our hand-tracking system.",
"title": ""
}
] |
[
{
"docid": "aa23a546d17572f6b79c72832d83308b",
"text": "Leader opening and closing behaviors are assumed to foster high levels of employee exploration and exploitation behaviors, hence motivating employee innovative performance. Applying the ambidexterity theory of leadership for innovation, results revealed that leader opening and closing behaviors positively predicted employee exploration and exploitation behaviors, respectively, above and beyond the control variables. Moreover, results showed that employee innovative performance was significantly predicted by leader opening behavior, leader closing behavior, and the interaction between leaders’ opening and closing behaviors, above and beyond control variables.",
"title": ""
},
{
"docid": "21031b55206dd330852b8d11e8e6a84a",
"text": "To predict the most salient regions of complex natural scenes, saliency models commonly compute several feature maps (contrast, orientation, motion...) and linearly combine them into a master saliency map. Since feature maps have different spatial distribution and amplitude dynamic ranges, determining their contributions to overall saliency remains an open problem. Most state-of-the-art models do not take time into account and give feature maps constant weights across the stimulus duration. However, visual exploration is a highly dynamic process shaped by many time-dependent factors. For instance, some systematic viewing patterns such as the center bias are known to dramatically vary across the time course of the exploration. In this paper, we use maximum likelihood and shrinkage methods to dynamically and jointly learn feature map and systematic viewing pattern weights directly from eye-tracking data recorded on videos. We show that these weights systematically vary as a function of time, and heavily depend upon the semantic visual category of the videos being processed. Our fusion method allows taking these variations into account, and outperforms other stateof-the-art fusion schemes using constant weights over time. The code, videos and eye-tracking data we used for this study are available online.",
"title": ""
},
{
"docid": "e0776e4e73d63d75ba959972be601f6c",
"text": "Mini-batch stochastic gradient methods are state of the art for distributed training of deep neural networks. In recent years, a push for efficiency for large-scale applications has lead to drastically large mini-batch sizes. However, two significant roadblocks remain for such large-batch variants. On one hand, increasing the number of workers introduces communication bottlenecks, and efficient algorithms need to be able to adapt to the changing computation vs. communication tradeoffs in heterogeneous systems. On the other hand, independent of communication, large-batch variants do not generalize well. We argue that variants of recently proposed local SGD, which performs several update steps on a local model before communicating with other workers can solve both these problems. Our experiments show performance gains in training efficiency, scalability, and adaptivity to the underlying system resources. We propose a variant, postlocal SGD that significantly improves the generalization performance of large batch sizes while reducing communication. Additionally, post-local SGD converges to flatter minima as opposed to large-batch methods, which can be understood by relating of local SGD to noise injection. Thus, local SGD is an enticing alternative to large-batch SGD.",
"title": ""
},
{
"docid": "5d1f3dbce3f5d33b4d0b251da060cab6",
"text": "Cyber-Physical Systems (CPS) is an exciting emerging research area that has drawn the attention of many researchers. Although the question of \"What is a CPS?\" remains open, widely recognized and accepted attributes of a CPS include timeliness, distributed, reliability, fault-tolerance, security, scalability and autonomous. In this paper, a CPS definition is given and a prototype architecture is proposed. It is argued that this architecture captures the essential attributes of a CPS and lead to identification of many research challenges.",
"title": ""
},
{
"docid": "b5453d9e4385d5a5ff77997ad7e3f4f0",
"text": "We propose a new measure, the method noise, to evaluate and compare the performance of digital image denoising methods. We first compute and analyze this method noise for a wide class of denoising algorithms, namely the local smoothing filters. Second, we propose a new algorithm, the nonlocal means (NL-means), based on a nonlocal averaging of all pixels in the image. Finally, we present some experiments comparing the NL-means algorithm and the local smoothing filters.",
"title": ""
},
{
"docid": "7f390d8dfd98d03ad4e7b56948c8adce",
"text": "Recent advances in deep learning have enabled the extraction of high-level features from raw sensor data which has opened up new possibilities in many different fields, including computer generated choreography. In this paper we present a system chorrnn for generating novel choreographic material in the nuanced choreographic language and style of an individual choreographer. It also shows promising results in producing a higher level compositional cohesion, rather than just generating sequences of movement. At the core of chor-rnn is a deep recurrent neural network trained on raw motion capture data and that can generate new dance sequences for a solo dancer. Chor-rnn can be used for collaborative human-machine choreography or as a creative catalyst, serving as inspiration for a choreographer.",
"title": ""
},
{
"docid": "2d0a82799d75c08f288d1105280a6d60",
"text": "The increasing complexity of deep learning architectures is resulting in training time requiring weeks or even months. This slow training is due in part to \"vanishing gradients,\" in which the gradients used by back-propagation are extremely large for weights connecting deep layers (layers near the output layer), and extremely small for shallow layers (near the input layer), this results in slow learning in the shallow layers. Additionally, it has also been shown that in highly non-convex problems, such as deep neural networks, there is a proliferation of high-error low curvature saddle points, which slows down learning dramatically [1]. In this paper, we attempt to overcome the two above problems by proposing an optimization method for training deep neural networks which uses learning rates which are both specific to each layer in the network and adaptive to the curvature of the function, increasing the learning rate at low curvature points. This enables us to speed up learning in the shallow layers of the network and quickly escape high-error low curvature saddle points. We test our method on standard image classification datasets such as MNIST, CIFAR10 and ImageNet, and demonstrate that our method increases accuracy as well as reduces the required training time over standard algorithms.",
"title": ""
},
{
"docid": "6f370d729b8e8172b218071af89af7ad",
"text": "In this article, we present an image-based modeling and rendering system, which we call pop-up light field, that models a sparse light field using a set of coherent layers. In our system, the user specifies how many coherent layers should be modeled or popped up according to the scene complexity. A coherent layer is defined as a collection of corresponding planar regions in the light field images. A coherent layer can be rendered free of aliasing all by itself, or against other background layers. To construct coherent layers, we introduce a Bayesian approach, coherence matting, to estimate alpha matting around segmented layer boundaries by incorporating a coherence prior in order to maintain coherence across images.We have developed an intuitive and easy-to-use user interface (UI) to facilitate pop-up light field construction. The key to our UI is the concept of human-in-the-loop where the user specifies where aliasing occurs in the rendered image. The user input is reflected in the input light field images where pop-up layers can be modified. The user feedback is instant through a hardware-accelerated real-time pop-up light field renderer. Experimental results demonstrate that our system is capable of rendering anti-aliased novel views from a sparse light field.",
"title": ""
},
{
"docid": "587c6f30cda5f45a6b43d55197d2ed40",
"text": "We present a mechanism that puts users in the center of control and empowers them to dictate the access to their collections of data. Revisiting the fundamental mechanisms in security for providing protection, our solution uses capabilities, access lists, and access rights following well-understood formal notions for reasoning about access. This contribution presents a practical, correct, auditable, transparent, distributed, and decentralized mechanism that is well-matched to the current emerging environments including Internet of Things, smart city, precision medicine, and autonomous cars. It is based on well-tested principles and practices used in distributed authorization, cryptocurrencies, and scalable computing.",
"title": ""
},
{
"docid": "c5f1d5fc5c5161bc9795cdc0362b8ca7",
"text": "Bayesian optimization has become a successful tool for optimizing the hyperparameters of machine learning algorithms, such as support vector machines or deep neural networks. Despite its success, for large datasets, training and validating a single configuration often takes hours, days, or even weeks, which limits the achievable performance. To accelerate hyperparameter optimization, we propose a generative model for the validation error as a function of training set size, which is learned during the optimization process and allows exploration of preliminary configurations on small subsets, by extrapolating to the full dataset. We construct a Bayesian optimization procedure, dubbed Fabolas, which models loss and training time as a function of dataset size and automatically trades off high information gain about the global optimum against computational cost. Experiments optimizing support vector machines and deep neural networks show that Fabolas often finds high-quality solutions 10 to 100 times faster than other state-of-the-art Bayesian optimization methods or the recently proposed bandit strategy Hyperband.",
"title": ""
},
{
"docid": "cbe70e9372d1588f075d2037164b3077",
"text": "Regularization is one of the crucial ingredients of deep learning, yet the term regularization has various definitions, and regularization methods are often studied separately from each other. In our work we present a systematic, unifying taxonomy to categorize existing methods. We distinguish methods that affect data, network architectures, error terms, regularization terms, and optimization procedures. We do not provide all details about the listed methods; instead, we present an overview of how the methods can be sorted into meaningful categories and sub-categories. This helps revealing links and fundamental similarities between them. Finally, we include practical recommendations both for users and for developers of new regularization methods.",
"title": ""
},
{
"docid": "0fdf2d74929fed2d4fe401afbf81d1d6",
"text": "The nasal surface is made up of several concave and convex surfaces separated from one another by ridges and valleys. Gonzalez-Ulloa has designated the nose an aesthetic unit of the face. These smaller parts (tip, dorsum, sidewalls, alar lobules, and soft triangles) may be called topographic subunits. When a large part of a subunit has been lost, replacing the entire subunit rather than simply patching the defect often gives a superior result. This subunit approach to nasal reconstruction causes unsatisfactory border scars of flaps to mimic the normal shadowed valleys and lighted ridges of the nasal surface. Furthermore, as trapdoor contraction occurs, the entire reconstructed subunit bulges in a way that simulates the normal contour of a nasal tip, dorsal hump, or alar lobule. Photographs show five patients in whom this principle was followed and one in whom it was not.",
"title": ""
},
{
"docid": "d6b213889ba6073b0987852e31b98c6a",
"text": "Nowadays, large volumes of multimedia data are outsourced to the cloud to better serve mobile applications. Along with this trend, highly correlated datasets can occur commonly, where the rich information buried in correlated data is useful for many cloud data generation/dissemination services. In light of this, we propose to enable a secure and efficient cloud-assisted image sharing architecture for mobile devices, by leveraging outsourced encrypted image datasets with privacy assurance. Different from traditional image sharing, we aim to provide a mobile-friendly design that saves the transmission cost for mobile clients, by directly utilizing outsourced correlated images to reproduce the image of interest inside the cloud for immediate dissemination. First, we propose a secure and efficient index design that allows the mobile client to securely find from encrypted image datasets the candidate selection pertaining to the image of interest for sharing. We then design two specialized encryption mechanisms that support secure image reproduction from encrypted candidate selection. We formally analyze the security strength of the design. Our experiments explicitly show that both the bandwidth and energy consumptions at the mobile client can be saved, while achieving all service requirements and security guarantees.",
"title": ""
},
{
"docid": "3c735e32191db854bbf39b9ba17b8c2b",
"text": "While many image colorization algorithms have recently shown the capability of producing plausible color versions from gray-scale photographs, they still suffer from limited semantic understanding. To address this shortcoming, we propose to exploit pixelated object semantics to guide image colorization. The rationale is that human beings perceive and distinguish colors based on the semantic categories of objects. Starting from an autoregressive model, we generate image color distributions, from which diverse colored results are sampled. We propose two ways to incorporate object semantics into the colorization model: through a pixelated semantic embedding and a pixelated semantic generator. Specifically, the proposed network includes two branches. One branch learns what the object is, while the other branch learns the object colors. The network jointly optimizes a color embedding loss, a semantic segmentation loss and a color generation loss, in an end-to-end fashion. Experiments on PASCAL VOC2012 and COCO-stuff reveal that our network, when trained with semantic segmentation labels, produces more realistic and finer results compared to the colorization state-of-the-art. Jiaojiao Zhao Universiteit van Amsterdam, Amsterdam, the Netherlands E-mail: j.zhao3@uva.nl Jungong Han Lancaster University, Lancaster, UK E-mail: jungonghan77@gmail.com Ling Shao Inception Institute of Artificial Intelligence, Abu Dhabi, UAE E-mail: ling.shao@ieee.org Cees G. M. Snoek Universiteit van Amsterdam, Amsterdam, the Netherlands E-mail: cgmsnoek@uva.nl",
"title": ""
},
{
"docid": "17f719b2bfe2057141e367afe39d7b28",
"text": "Identification of cancer subtypes plays an important role in revealing useful insights into disease pathogenesis and advancing personalized therapy. The recent development of high-throughput sequencing technologies has enabled the rapid collection of multi-platform genomic data (e.g., gene expression, miRNA expression, and DNA methylation) for the same set of tumor samples. Although numerous integrative clustering approaches have been developed to analyze cancer data, few of them are particularly designed to exploit both deep intrinsic statistical properties of each input modality and complex cross-modality correlations among multi-platform input data. In this paper, we propose a new machine learning model, called multimodal deep belief network (DBN), to cluster cancer patients from multi-platform observation data. In our integrative clustering framework, relationships among inherent features of each single modality are first encoded into multiple layers of hidden variables, and then a joint latent model is employed to fuse common features derived from multiple input modalities. A practical learning algorithm, called contrastive divergence (CD), is applied to infer the parameters of our multimodal DBN model in an unsupervised manner. Tests on two available cancer datasets show that our integrative data analysis approach can effectively extract a unified representation of latent features to capture both intra- and cross-modality correlations, and identify meaningful disease subtypes from multi-platform cancer data. In addition, our approach can identify key genes and miRNAs that may play distinct roles in the pathogenesis of different cancer subtypes. Among those key miRNAs, we found that the expression level of miR-29a is highly correlated with survival time in ovarian cancer patients. These results indicate that our multimodal DBN based data analysis approach may have practical applications in cancer pathogenesis studies and provide useful guidelines for personalized cancer therapy.",
"title": ""
},
{
"docid": "145ffb422e1fd1f4cd6b10ce7837495f",
"text": "In this work, we explore the problem of generating fantastic special-effects for the typography. It is quite challenging due to the model diversities to illustrate varied text effects for different characters. To address this issue, our key idea is to exploit the analytics on the high regularity of the spatial distribution for text effects to guide the synthesis process. Specifically, we characterize the stylized patches by their normalized positions and the optimal scales to depict their style elements. Our method first estimates these two features and derives their correlation statistically. They are then converted into soft constraints for texture transfer to accomplish adaptive multi-scale texture synthesis and to make style element distribution uniform. It allows our algorithm to produce artistic typography that fits for both local texture patterns and the global spatial distribution in the example. Experimental results demonstrate the superiority of our method for various text effects over conventional style transfer methods. In addition, we validate the effectiveness of our algorithm with extensive artistic typography library generation.",
"title": ""
},
{
"docid": "16b64bf865bae192b604faaf6f916ff1",
"text": "Recurrent Neural Networks (RNNs) have obtained excellent result in many natural language processing (NLP) tasks. However, understanding and interpreting the source of this success remains a challenge. In this paper, we propose Recurrent Memory Network (RMN), a novel RNN architecture, that not only amplifies the power of RNN but also facilitates our understanding of its internal functioning and allows us to discover underlying patterns in data. We demonstrate the power of RMN on language modeling and sentence completion tasks. On language modeling, RMN outperforms Long Short-Term Memory (LSTM) network on three large German, Italian, and English dataset. Additionally we perform indepth analysis of various linguistic dimensions that RMN captures. On Sentence Completion Challenge, for which it is essential to capture sentence coherence, our RMN obtains 69.2% accuracy, surpassing the previous state of the art by a large margin.1",
"title": ""
},
{
"docid": "be9b4827de5d58197e0611fdd69ee953",
"text": "Recent research on the mechanism underlying the interaction of bacterial pathogens with their host has shifted the focus to secreted microbial proteins affecting the physiology and innate immune response of the target cell. These proteins either traverse the plasma membrane via specific entry pathways involving host cell receptors or are directly injected via bacterial secretion systems into the host cell, where they frequently target mitochondria. The import routes of bacterial proteins are mostly unknown, whereas the effect of mitochondrial targeting by these proteins has been investigated in detail. For a number of them, classical leader sequences recognized by the mitochondrial protein import machinery have been identified. Bacterial outer membrane beta-barrel proteins can also be recognized and imported by mitochondrial transporters. Besides an obvious importance in pathogenicity, understanding import of bacterial proteins into mitochondria has a highly relevant evolutionary aspect, considering the endosymbiotic, proteobacterial origin of mitochondria. The review covers the current knowledge on the mitochondrial targeting and import of bacterial pathogenicity factors.",
"title": ""
},
{
"docid": "a99ecc9adbfc1c74c06051d6d2b77c7d",
"text": "In the last two decades Soft Sensors established themselves as a valuable alternative to the traditional means for the acquisition of critical process variables, process monitoring and other tasks which are related to process control. This paper discusses characteristics of the process industry data which are critical for the development of data-driven Soft Sensors. These characteristics are common to a large number of process industry fields, like the chemical industry, bioprocess industry, steel industry, etc. The focus of this work is put on the data-driven Soft Sensors because of their growing popularity, already demonstrated usefulness and huge, though yet not completely realised, potential. A comprehensive selection of case studies covering the three most important Soft Sensor application fields, a general introduction to the most popular Soft Sensor modelling techniques as well as a discussion of some open issues in the Soft Sensor development and maintenance and their possible solutions are the main contributions of this work.",
"title": ""
},
{
"docid": "235fc12dc2f741dacede5f501b028cd3",
"text": "Self-adaptive software is capable of evaluating and changing its own behavior, whenever the evaluation shows that the software is not accomplishing what it was intended to do, or when better functionality or performance may be possible. The topic of system adaptivity has been widely studied since the mid-60s and, over the past decade, several application areas and technologies relating to self-adaptivity have assumed greater importance. In all these initiatives, software has become the common element that introduces self-adaptability. Thus, the investigation of systematic software engineering approaches is necessary, in order to develop self-adaptive systems that may ideally be applied across multiple domains. The main goal of this study is to review recent progress on self-adaptivity from the standpoint of computer sciences and cybernetics, based on the analysis of state-of-the-art approaches reported in the literature. This review provides an over-arching, integrated view of computer science and software engineering foundations. Moreover, various methods and techniques currently applied in the design of self-adaptive systems are analyzed, as well as some European research initiatives and projects. Finally, the main bottlenecks for the effective application of self-adaptive technology, as well as a set of key research issues on this topic, are precisely identified, in order to overcome current constraints on the effective application of self-adaptivity in its emerging areas of application. 2013 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
875a0c8b9996acd05f79d9ee24fd7ab4
|
Reactors: A Case for Predictable, Virtualized OLTP Actor Database Systems
|
[
{
"docid": "9f45eff73f8e11306a240890b4db5eaf",
"text": "Distributed storage systems run transactions across machines to ensure serializability. Traditional protocols for distributed transactions are based on two-phase locking (2PL) or optimistic concurrency control (OCC). 2PL serializes transactions as soon as they conflict and OCC resorts to aborts, leaving many opportunities for concurrency on the table. This paper presents ROCOCO, a novel concurrency control protocol for distributed transactions that outperforms 2PL and OCC by allowing more concurrency. ROCOCO executes a transaction as a collection of atomic pieces, each of which commonly involves only a single server. Servers first track dependencies between concurrent transactions without actually executing them. At commit time, a transaction’s dependency information is sent to all servers so they can re-order conflicting pieces and execute them in a serializable order. We compare ROCOCO to OCC and 2PL using a scaled TPC-C benchmark. ROCOCO outperforms 2PL and OCC in workloads with varying degrees of contention. When the contention is high, ROCOCO’s throughput is 130% and 347% higher than that of 2PL and OCC.",
"title": ""
}
] |
[
{
"docid": "a5e23ca50545378ef32ed866b97fd418",
"text": "In the framework of computer assisted diagnosis of diabetic retinopathy, a new algorithm for detection of exudates is presented and discussed. The presence of exudates within the macular region is a main hallmark of diabetic macular edema and allows its detection with a high sensitivity. Hence, detection of exudates is an important diagnostic task, in which computer assistance may play a major role. Exudates are found using their high grey level variation, and their contours are determined by means of morphological reconstruction techniques. The detection of the optic disc is indispensable for this approach. We detect the optic disc by means of morphological filtering techniques and the watershed transformation. The algorithm has been tested on a small image data base and compared with the performance of a human grader. As a result, we obtain a mean sensitivity of 92.8% and a mean predictive value of 92.4%. Robustness with respect to changes of the parameters of the algorithm has been evaluated.",
"title": ""
},
{
"docid": "7acdc25c20b4aa16fc3391cb878a9577",
"text": "Recurrent Neural Networks (RNNs) have long been recognized for their potential to model complex time series. However, it remains to be determined what optimization techniques and recurrent architectures can be used to best realize this potential. The experiments presented take a deep look into Hessian free optimization, a powerful second order optimization method that has shown promising results, but still does not enjoy widespread use. This algorithm was used to train to a number of RNN architectures including standard RNNs, long short-term memory, multiplicative RNNs, and stacked RNNs on the task of character prediction. The insights from these experiments led to the creation of a new multiplicative LSTM hybrid architecture that outperformed both LSTM and multiplicative RNNs. When tested on a larger scale, multiplicative LSTM achieved character level modelling results competitive with the state of the art for RNNs using very different methodology.",
"title": ""
},
{
"docid": "b8573915765b33e1d57f34f7756cc235",
"text": "Data mining is the process of finding correlations in the relational databases. There are different techniques for identifying malicious database transactions. Many existing approaches which profile is SQL query structures and database user activities to detect intrusion, the log mining approach is the automatic discovery for identifying anomalous database transactions. Mining of the Data is very helpful to end users for extracting useful business information from large database. Multi-level and multi-dimensional data mining are employed to discover data item dependency rules, data sequence rules, domain dependency rules, and domain sequence rules from the database log containing legitimate transactions. Database transactions that do not comply with the rules are identified as malicious transactions. The log mining approach can achieve desired true and false positive rates when the confidence and support are set up appropriately. The implemented system incrementally maintain the data dependency rule sets and optimize the performance of the intrusion detection process.",
"title": ""
},
{
"docid": "a8b5f7a5ab729a7f1664c5a22f3b9d9b",
"text": "The smart grid is an electronically controlled electrical grid that connects power generation, transmission, distribution, and consumers using information communication technologies. One of the key characteristics of the smart grid is its support for bi-directional information flow between the consumer of electricity and the utility provider. This two-way interaction allows electricity to be generated in real-time based on consumers’ demands and power requests. As a result, consumer privacy becomes an important concern when collecting energy usage data with the deployment and adoption of smart grid technologies. To protect such sensitive information it is imperative that privacy protection mechanisms be used to protect the privacy of smart grid users. We present an analysis of recently proposed smart grid privacy solutions and identify their strengths and weaknesses in terms of their implementation complexity, efficiency, robustness, and simplicity.",
"title": ""
},
{
"docid": "a7fe6b1ba27c13c95d1a48ca401e25fd",
"text": "BACKGROUND\nselecting the correct statistical test and data mining method depends highly on the measurement scale of data, type of variables, and purpose of the analysis. Different measurement scales are studied in details and statistical comparison, modeling, and data mining methods are studied based upon using several medical examples. We have presented two ordinal-variables clustering examples, as more challenging variable in analysis, using Wisconsin Breast Cancer Data (WBCD).\n\n\nORDINAL-TO-INTERVAL SCALE CONVERSION EXAMPLE\na breast cancer database of nine 10-level ordinal variables for 683 patients was analyzed by two ordinal-scale clustering methods. The performance of the clustering methods was assessed by comparison with the gold standard groups of malignant and benign cases that had been identified by clinical tests.\n\n\nRESULTS\nthe sensitivity and accuracy of the two clustering methods were 98% and 96%, respectively. Their specificity was comparable.\n\n\nCONCLUSION\nby using appropriate clustering algorithm based on the measurement scale of the variables in the study, high performance is granted. Moreover, descriptive and inferential statistics in addition to modeling approach must be selected based on the scale of the variables.",
"title": ""
},
{
"docid": "6a252976282ba1d0d354d8a86d0c49f1",
"text": "Ethics of brain emulations Whole brain emulation attempts to achieve software intelligence by copying the function of biological nervous systems into software. This paper aims at giving an overview of the ethical issues of the brain emulation approach, and analyse how they should affect responsible policy for developing the field. Animal emulations have uncertain moral status, and a principle of analogy is proposed for judging treatment of virtual animals. Various considerations of developing and using human brain emulations are discussed. Introduction Whole brain emulation (WBE) is an approach to achieve software intelligence by copying the functional structure of biological nervous systems into software. Rather than attempting to understand the high-level processes underlying perception, action, emotions and intelligence, the approach assumes that they would emerge from a sufficiently close imitation of the low-level neural functions, even if this is done through a software process. (Sandberg 2013) of brain emulations have been discussed, little analysis of the ethics of the project so far has been done. The main questions of this paper are to what extent brain emulations are moral patients, and what new ethical concerns are introduced as a result of brain emulation technology. The basic idea is to take a particular brain, scan its structure in detail at some resolution, construct a software model of the physiology that is so faithful to the original that, when run on appropriate hardware, it will have an internal causal structure that is essentially the same as the original brain. All relevant functions on some level of description are present, and higher level functions supervene from these. While at present an unfeasibly ambitious challenge, the necessary computing power and various scanning methods are rapidly developing. Large scale computational brain models are a very active research area, at present reaching the size of mammalian nervous systems. al. 2012) WBE can be viewed as the logical endpoint of current trends in computational neuroscience and systems biology. Obviously the eventual feasibility depends on a number of philosophical issues (physicalism, functionalism, non-organicism) and empirical facts (computability, scale separation, detectability, scanning and simulation tractability) that cannot be predicted beforehand; WBE can be viewed as a program trying to test them empirically. (Sandberg 2013) Early projects are likely to merge data from multiple brains and studies, attempting to show that this can produce a sufficiently rich model to produce nontrivial behaviour but not attempting to emulate any particular individual. However, …",
"title": ""
},
{
"docid": "139859fa0f16125f1066c55b9d3cc0d4",
"text": "Knowledge graph embedding has been an active research topic for knowledge base completion, with progressive improvement from the initial TransE, TransH, DistMult et al to the current state-of-the-art ConvE. ConvE uses 2D convolution over embeddings and multiple layers of nonlinear features to model knowledge graphs. The model can be efficiently trained and scalable to large knowledge graphs. However, there is no structure enforcement in the embedding space of ConvE. The recent graph convolutional network (GCN) provides another way of learning graph node embedding by successfully utilizing graph connectivity structure. In this work, we propose a novel end-to-end StructureAware Convolutional Network (SACN) that takes the benefit of GCN and ConvE together. SACN consists of an encoder of a weighted graph convolutional network (WGCN), and a decoder of a convolutional network called Conv-TransE. WGCN utilizes knowledge graph node structure, node attributes and edge relation types. It has learnable weights that adapt the amount of information from neighbors used in local aggregation, leading to more accurate embeddings of graph nodes. Node attributes in the graph are represented as additional nodes in the WGCN. The decoder Conv-TransE enables the state-of-the-art ConvE to be translational between entities and relations while keeps the same link prediction performance as ConvE. We demonstrate the effectiveness of the proposed SACN on standard FB15k-237 and WN18RR datasets, and it gives about 10% relative improvement over the state-of-theart ConvE in terms of HITS@1, HITS@3 and HITS@10.",
"title": ""
},
{
"docid": "c746d527ed6112760f7b047c922a0d46",
"text": "New performance leaps has been achieved with multiprogramming and multi-core systems. Present parallel programming techniques and environment needs significant changes in programs to accomplish parallelism and also constitute complex, confusing and error-prone constructs and rules. Intel Cilk Plus is a C based computing system that presents a straight forward and well-structured model for the development, verification and analysis of multicore and parallel programming. In this article, two programs are developed using Intel Cilk Plus. Two sequential sorting programs in C/C++ language are converted to multi-core programs in Intel Cilk Plus framework to achieve parallelism and better performance. Converted program in Cilk Plus is then checked for various conditions using tools of Cilk and after that, comparison of performance and speedup achieved over the single-core sequential program is discussed and reported.",
"title": ""
},
{
"docid": "ffccfdc91a1c0b30cf98d0461149580b",
"text": "This paper presents design guidelines for ultra-low power Low Noise Amplifier (LNA) design by comparing input matching, gain, and noise figure (NF) characteristics of common-source (CS) and common-gate (CG) topologies. A current-reused ultra-low power 2.2 GHz CG LNA is proposed and implemented based on 0.18 um CMOS technology. Measurement results show 13.9 dB power gain, 5.14 dB NF, and −9.3 dBm IIP3, respectively, while dissipating 140 uA from a 1.5 V supply, which shows best figure of merit (FOM) among all published ultra-low power LNAs.",
"title": ""
},
{
"docid": "a8695230b065ae2e4c5308dfe4f8c10e",
"text": "The paper describes a solution for the Yandex Personalized Web Search Challenge. The goal of the challenge is to rerank top ten web search query results to bring most personally relevant results on the top, thereby improving the search quality. The paper focuses on feature engineering for learning to rank in web search, including a novel pair-wise feature, shortand long-term personal navigation features. The paper demonstrates that point-wise logistic regression can achieve the stat-of-the-art performance in terms of normalized discounted cumulative gain with capability to scale up.",
"title": ""
},
{
"docid": "f77d44a34563be204ef04a2ac2041901",
"text": "We introduce a tree-structured attention neural network for sentences and small phrases and apply it to the problem of sentiment classification. Our model expands the current recursive models by incorporating structural information around a node of a syntactic tree using both bottomup and top-down information propagation. Also, the model utilizes structural attention to identify the most salient representations during the construction of the syntactic tree. To our knowledge, the proposed models achieve state of the art performance on the Stanford Sentiment Treebank dataset.",
"title": ""
},
{
"docid": "c3f2726c10ebad60d715609f15b67b43",
"text": "Sleep-waking cycles are fundamental in human circadian rhythms and their disruption can have consequences for behaviour and performance. Such disturbances occur due to domestic or occupational schedules that do not permit normal sleep quotas, rapid travel across multiple meridians and extreme athletic and recreational endeavours where sleep is restricted or totally deprived. There are methodological issues in quantifying the physiological and performance consequences of alterations in the sleep-wake cycle if the effects on circadian rhythms are to be separated from the fatigue process. Individual requirements for sleep show large variations but chronic reduction in sleep can lead to immuno-suppression. There are still unanswered questions about the sleep needs of athletes, the role of 'power naps' and the potential for exercise in improving the quality of sleep.",
"title": ""
},
{
"docid": "10ef865d0c70369d64c900fb46a1399d",
"text": "This work introduces a set of scalable algorithms to identify patterns of human daily behaviors. These patterns are extracted from multivariate temporal data that have been collected from smartphones. We have exploited sensors that are available on these devices, and have identified frequent behavioral patterns with a temporal granularity, which has been inspired by the way individuals segment time into events. These patterns are helpful to both end-users and third parties who provide services based on this information. We have demonstrated our approach on two real-world datasets and showed that our pattern identification algorithms are scalable. This scalability makes analysis on resource constrained and small devices such as smartwatches feasible. Traditional data analysis systems are usually operated in a remote system outside the device. This is largely due to the lack of scalability originating from software and hardware restrictions of mobile/wearable devices. By analyzing the data on the device, the user has the control over the data, i.e., privacy, and the network costs will also be removed.",
"title": ""
},
{
"docid": "38036ea0a6f79ff62027e8475859acb9",
"text": "The constantly increasing demand for nutraceuticals is paralleled by a more pronounced request for natural ingredients and health-promoting foods. The multiple functional properties of cactus pear fit well this trend. Recent data revealed the high content of some chemical constituents, which can give added value to this fruit on a nutritional and technological functionality basis. High levels of betalains, taurine, calcium, magnesium, and antioxidants are noteworthy.",
"title": ""
},
{
"docid": "667a2ea2b8ed7d2c709f04d8cd6617c6",
"text": "Knowledge centric activities of developing new products and services are becoming the primary source of sustainable competitive advantage in an era characterized by short product life cycles, dynamic markets and complex processes. We Ž . view new product development NPD as a knowledge-intensive activity. Based on a case study in the consumer electronics Ž . industry, we identify problems associated with knowledge management KM in the context of NPD by cross-functional collaborative teams. We map these problems to broad Information Technology enabled solutions and subsequently translate these into specific system characteristics and requirements. A prototype system that meets these requirements developed to capture and manage tacit and explicit process knowledge is further discussed. The functionalities of the system include functions for representing context with informal components, easy access to process knowledge, assumption surfacing, review of past knowledge, and management of dependencies. We demonstrate the validity our proposed solutions using scenarios drawn from our case study. q 1999 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "46dc618a779bd658bfa019117c880d3a",
"text": "The concept and deployment of Internet of Things (IoT) has continued to develop momentum over recent years. Several different layered architectures for IoT have been proposed, although there is no consensus yet on a widely accepted architecture. In general, the proposed IoT architectures comprise three main components: an object layer, one or more middle layers, and an application layer. The main difference in detail is in the middle layers. Some include a cloud services layer for managing IoT things. Some propose virtual objects as digital counterparts for physical IoT objects. Sometimes both cloud services and virtual objects are included.In this paper, we take a first step toward our eventual goal of developing an authoritative family of access control models for a cloud-enabled Internet of Things. Our proposed access-control oriented architecture comprises four layers: an object layer, a virtual object layer, a cloud services layer, and an application layer. This 4-layer architecture serves as a framework to build access control models for a cloud-enabled IoT. Within this architecture, we present illustrative examples that highlight some IoT access control issues leading to a discussion of needed access control research. We identify the need for communication control within each layer and across adjacent layers (particularly in the lower layers), coupled with the need for data access control (particularly in the cloud services and application layers).",
"title": ""
},
{
"docid": "979a3ca422e92147b25ca1b8e8ff9e5a",
"text": "Open Information Extraction (Open IE) is a promising approach for unrestricted Information Discovery (ID). While Open IE is a highly scalable approach, allowing unsupervised relation extraction from open domains, it currently has some limitations. First, it lacks the expressiveness needed to properly represent and extract complex assertions that are abundant in text. Second, it does not consolidate the extracted propositions, which causes simple queries above Open IE assertions to return insufficient or redundant information. To address these limitations, we propose in this position paper a novel representation for ID – Propositional Knowledge Graphs (PKG). PKGs extend the Open IE paradigm by representing semantic inter-proposition relations in a traversable graph. We outline an approach for constructing PKGs from single and multiple texts, and highlight a variety of high-level applications that may leverage PKGs as their underlying information discovery and representation framework.",
"title": ""
},
{
"docid": "9e8cf31a711a77fa5c5dcc932473dc27",
"text": "The opening book is an important component of a chess engine, and thus computer chess programmers have been developing automated methods to improve the quality of their books. For chess, which has a very rich opening theory, large databases of highquality games can be used as the basis of an opening book, from which statistics relating to move choices from given positions can be collected. In order to nd out whether the opening books used by modern chess engines in machine versus machine competitions are \\comparable\" to those used by chess players in human versus human competitions, we carried out analysis on 26 test positions using statistics from two opening books one compiled from humans’ games and the other from machines’ games. Our analysis using several nonparametric measures, shows that, overall, there is a strong association between humans’ and machines’ choices of opening moves when using a book to guide their choices.",
"title": ""
},
{
"docid": "8c0cbfc060b3a6aa03fd8305baf06880",
"text": "Learning-to-Rank models based on additive ensembles of regression trees have been proven to be very effective for scoring query results returned by large-scale Web search engines. Unfortunately, the computational cost of scoring thousands of candidate documents by traversing large ensembles of trees is high. Thus, several works have investigated solutions aimed at improving the efficiency of document scoring by exploiting advanced features of modern CPUs and memory hierarchies. In this article, we present QuickScorer, a new algorithm that adopts a novel cache-efficient representation of a given tree ensemble, performs an interleaved traversal by means of fast bitwise operations, and supports ensembles of oblivious trees. An extensive and detailed test assessment is conducted on two standard Learning-to-Rank datasets and on a novel very large dataset we made publicly available for conducting significant efficiency tests. The experiments show unprecedented speedups over the best state-of-the-art baselines ranging from 1.9 × to 6.6 × . The analysis of low-level profiling traces shows that QuickScorer efficiency is due to its cache-aware approach in terms of both data layout and access patterns and to a control flow that entails very low branch mis-prediction rates.",
"title": ""
},
{
"docid": "a603c55eb09d858c629a71ab9285a1d1",
"text": "We propose a neural network method for turning emotion into art. Our approach relies on a class-conditioned generative adversarial network trained on a dataset of modern artworks labeled with emotions. We generate this dataset through a large-scale user study of art perception with human subjects. Preliminary results show our framework generates images which, apart from aesthetically appealing, exhibit various features associated with the emotions they are conditioned on.",
"title": ""
}
] |
scidocsrr
|
cdcaf30c3aa61f157db6005ad9a8559e
|
Social Comparison 2.0: Examining the Effects of Online Profiles on Social-Networking Sites
|
[
{
"docid": "409f3b2768a8adf488eaa6486d1025a2",
"text": "The aim of the study was to investigate prospectively the direction of the relationship between adolescent girls' body dissatisfaction and self-esteem. Participants were 242 female high school students who completed questionnaires at two points in time, separated by 2 years. The questionnaire contained measures of weight (BMI), body dissatisfaction (perceived overweight, figure dissatisfaction, weight satisfaction) and self-esteem. Initial body dissatisfaction predicted self-esteem at Time 1 and Time 2, and initial self-esteem predicted body dissatisfaction at Time 1 and Time 2. However, linear panel analysis (regression analyses controlling for Time 1 variables) found that aspects of Time 1 weight and body dissatisfaction predicted change in self-esteem, but not vice versa. It was concluded that young girls with heavier actual weight and perceptions of being overweight were particularly vulnerable to developing low self-esteem.",
"title": ""
}
] |
[
{
"docid": "a727d28ed4153d9d9744b3e2b5e47251",
"text": "Darts is enjoyed both as a pub game and as a professional competitive activity.Yet most players aim for the highest scoring region of the board, regardless of their level of skill. By modelling a dart throw as a two-dimensional Gaussian random variable, we show that this is not always the optimal strategy.We develop a method, using the EM algorithm, for a player to obtain a personalized heat map, where the bright regions correspond to the aiming locations with high (expected) pay-offs. This method does not depend in any way on our Gaussian assumption, and we discuss alternative models as well.",
"title": ""
},
{
"docid": "253a4482b462b134f915d89cbc57577a",
"text": "Ontology is one of the essential topics in the scope of an important area of current computer science and Semantic Web. Ontologies present well defined, straightforward and standardized form of the repositories (vast and reliable knowledge) where it can be interoperable and machine understandable. There are many possible utilization of ontologies from automatic annotation of web resources to domain representation and reasoning task. Ontology is an effective conceptualism used for the semantic web. However there is none of the research try to construct an ontology from Islamic knowledge which consist of Holy Quran, Hadiths and etc. Therefore as a first stage, in this paper we try to propose a simple methodology in order to extract a concept based on Al-Quran. Finally, we discuss about the experiment that have been conducted.",
"title": ""
},
{
"docid": "f6c6b1a9f3e21cc860e20860551d6c1d",
"text": "A longitudinal study of self-esteem in 22 adolescents with cerebral palsy is reported. The subjects were matched with nondisabled adolescents by age, sex, IQ, and school. Seven years later, 39 of the 44 subjects (mean age = 22.8 years) completed the Tennessee Self-Concept Scale (Roid & Fitts, 1988), the Social Support Inventory (McCubbin, Patterson, Rossman, & Cooke, 1982), and a demographic questionnaire with some open-ended questions. As adolescents, the girls with cerebral palsy scored significantly lower than the other groups on physical, social, and personal self-esteem; however, as adults, these subjects were no longer significantly different from the other groups. Male subjects with cerebral palsy had self-esteem scores similar to those of the nondisabled groups in both adolescence and adulthood. Demographic information is summarized. The factors that the subjects identified as leading to changes in self-esteem were relationships and experiences. The low self-esteem scores indicate that psychosocial occupational therapy intervention with adolescent girls with cerebral palsy and with some adults with cerebral palsy would be appropriate.",
"title": ""
},
{
"docid": "ecc31d1d7616e014a3a032d14e149e9b",
"text": "It has been proposed that sexual stimuli will be processed in a comparable manner to other evolutionarily meaningful stimuli (such as spiders or snakes) and therefore elicit an attentional bias and more attentional engagement (Spiering and Everaerd, In E. Janssen (Ed.), The psychophysiology of sex (pp. 166-183). Bloomington: Indiana University Press, 2007). To investigate early and late attentional processes while looking at sexual stimuli, heterosexual men (n = 12) viewed pairs of sexually preferred (images of women) and sexually non-preferred images (images of girls, boys or men), while eye movements were measured. Early attentional processing (initial orienting) was assessed by the number of first fixations and late attentional processing (maintenance of attention) was assessed by relative fixation time. Results showed that relative fixation time was significantly longer for sexually preferred stimuli than for sexually non-preferred stimuli. Furthermore, the first fixation was more often directed towards the preferred sexual stimulus, when simultaneously presented with a non-sexually preferred stimulus. Thus, the current study showed for the first time an attentional bias to sexually relevant stimuli when presented simultaneously with sexually irrelevant pictures. This finding, along with the discovery that heterosexual men maintained their attention to sexually relevant stimuli, highlights the importance of investigating early and late attentional processes while viewing sexual stimuli. Furthermore, the current study showed that sexually relevant stimuli are favored by the human attentional system.",
"title": ""
},
{
"docid": "38c96356f5fd3daef5f1f15a32971b57",
"text": "Recommendation systems make suggestions about artifacts to a user. For instance, they may predict whether a user would be interested in seeing a particular movie. Social recomendation methods collect ratings of artifacts from many individuals and use nearest-neighbor techniques to make recommendations to a user concerning new artifacts. However, these methods do not use the significant amount of other information that is often available about the nature of each artifact -such as cast lists or movie reviews, for example. This paper presents an inductive learning approach to recommendation that is able to use both ratings information and other forms of information about each artifact in predicting user preferences. We show that our method outperforms an existing social-filtering method in the domain of movie recommendations on a dataset of more than 45,000 movie ratings collected from a community of over 250 users. Introduction Recommendations are a part of everyday life. We usually rely on some external knowledge to make informed decisions about a particular artifact or action, for instance when we are going to see a movie or going to see a doctor. This knowledge can be derived from social processes. At other times, our judgments may be based on available information about an artifact and our known preferences. There are many factors which may influence a person in making choices, and ideally one would like to model as many of these factors as possible in a recommendation system. There are some general approaches to this problem. In one approach, the user of the system provides ratings of some artifacts or items. The system makes informed guesses about other items the user may like based on ratings other users have provided. This is the framework for social-filtering methods (Hill, Stead, Rosenstein Furnas 1995; Shardanand & Maes 1995). In a second approach, the system accepts information describing the nature of an item, and based on a sample of the user’s preferences, learns to predict which items the user will like (Lang 1995; Pazzani, Muramatsu, & Billsus 1996). We will call this approach content-based filtering, as it does not rely on social information (in the form of other users’ ratings). Both social and content-based filtering can be cast as learning problems: the objective is to *Department of Computer Science, Rutgers University, Piscataway, NJ 08855 We would like to thank Susan Dumais for useful discussions during the early stages of this work. Copyright ~)1998, American Association for Artificial Intelligence (www.aaai.org). All rights reserved. learn a function that can take a description of a user and an artifact and predict the user’s preferences concerning the artifact. Well-known recommendation systems like Recommender (Hill, Stead, Rosenstein & Furnas 1995) and Firefly (http: //www.firefly.net) (Shardanand & Maes 1995) are based on social-filtering principles. Recommender, the baseline system used in the work reported here, recommends as yet unseen movies to a user based on his prior ratings of movies and their similarity to the ratings of other users. Social-filtering systems perform well using only numeric assessments of worth, i.e., ratings. However, social-filtering methods leave open the question of what role content can play in the recommen-",
"title": ""
},
{
"docid": "819f6b62eb3f8f9d60437af28c657935",
"text": "The global electrical energy consumption is rising and there is a steady increase of the demand on the power capacity, efficient production, distribution and utilization of energy. The traditional power systems are changing globally, a large number of dispersed generation (DG) units, including both renewable and nonrenewable energy sources such as wind turbines, photovoltaic (PV) generators, fuel cells, small hydro, wave generators, and gas/steam powered combined heat and power stations, are being integrated into power systems at the distribution level. Power electronics, the technology of efficiently processing electric power, play an essential part in the integration of the dispersed generation units for good efficiency and high performance of the power systems. This paper reviews the applications of power electronics in the integration of DG units, in particular, wind power, fuel cells and PV generators.",
"title": ""
},
{
"docid": "3012eafa396cc27e8b05fd71dd9bc13b",
"text": "An assessment of Herman and Chomsky’s 1988 five-filter propaganda model suggests it is mainly valuable for identifying areas in which researchers should look for evidence of collaboration (whether intentional or otherwise) between mainstream media and the propaganda aims of the ruling establishment. The model does not identify methodologies for determining the relative weight of independent filters in different contexts, something that would be useful in its future development. There is a lack of precision in the characterization of some of the filters. The model privileges the structural factors that determine propagandized news selection, and therefore eschews or marginalizes intentionality. This paper extends the model to include the “buying out” of journalists or their publications by intelligence and related special interest organizations. It applies the extended six-filter model to controversies over reporting by The New York Times of the build-up towards the US invasion of Iraq in 2003, the issue of weapons of mass destruction in general, and the reporting of The New York Times correspondent Judith Miller in particular, in the context of broader critiques of US mainstream media war coverage. The controversies helped elicit evidence of the operation of some filters of the propaganda model, including dependence on official sources, fear of flak, and ideological convergence. The paper finds that the filter of routine news operations needs to be counterbalanced by its opposite, namely non-routine abuses of standard operating procedures. While evidence of the operation of other filters was weaker, this is likely due to difficulties of observability, as there are powerful deductive reasons for maintaining all six filters within the framework of media propaganda analysis.",
"title": ""
},
{
"docid": "ae2da83aaab6c272cdd6f2847e0801be",
"text": "In this work, we propose CyberKrisi, a machine learning based framework for cyber physical farming. IT based farming is very young and emerging with numerous IoT devices such as wireless sensors, surveillance cameras, drones and weather stations. These devices produce large amounts of data about crop, soil, fertilization, irrigation as well as environment. We exploit this data to assess crop performance and compute crop forecasts. We envision an IoT gateway and machine learning gateway in the vicinity of farm land which performs predictions and recommendations as well as relays this data to cloud. Our contribution are twofold: first, we show an application framework for farmers to provide an interface in understanding Farm data. Second, we built a prototype to provide illiterate Farmers an interactive experience with Farm land.",
"title": ""
},
{
"docid": "d6f1278ccb6de695200411137b85b89a",
"text": "The complexity of information systems is increasing in recent years, leading to increased effort for maintenance and configuration. Self-adaptive systems (SASs) address this issue. Due to new computing trends, such as pervasive computing, miniaturization of IT leads to mobile devices with the emerging need for context adaptation. Therefore, it is beneficial that devices are able to adapt context. Hence, we propose to extend the definition of SASs and include context adaptation. This paper presents a taxonomy of self-adaptation and a survey on engineering SASs. Based on the taxonomy and the survey, we motivate a new perspective on SAS including context adaptation.",
"title": ""
},
{
"docid": "cb47cc2effac1404dd60a91a099699d1",
"text": "We survey recent trends in practical algorithms for balanced graph partitioning, point to applications and discuss future research directions.",
"title": ""
},
{
"docid": "764f05288ff0a0bbf77f264fcefb07eb",
"text": "Recent advances in energy harvesting have been intensified due to urgent needs of portable, wireless electronics with extensive life span. The idea of energy harvesting is applicable to sensors that are placed and operated on some entities for a long time, or embedded into structures or human bodies, in which it is troublesome or detrimental to replace the sensor module batteries. Such sensors are commonly called “self-powered sensors.” The energy harvester devices are capable of capturing environmental energy and supplanting the battery in a standalone module, or working along with the battery to extend substantially its life. Vibration is considered one of the most high power and efficient among other ambient energy sources, such as solar energy and temperature difference. Piezoelectric and electromagnetic devices are mostly used to convert vibration to ac electric power. For vibratory harvesting, a delicately designed power conditioning circuit is required to store as much as possible of the device-output power into a battery. The design for this power conditioning needs to be consistent with the electric characteristics of the device and battery to achieve maximum power transfer and efficiency. This study offers an overview on various power conditioning electronic circuits designed for vibratory harvester devices and their applications to self-powered sensors. Comparative comments are provided in terms of circuit topology differences, conversion efficiencies and applicability to a sensor module.",
"title": ""
},
{
"docid": "7f6b4a74f88d5ae1a4d21948aac2e260",
"text": "The PEP-R (psychoeducational profile revised) is an instrument that has been used in many countries to assess abilities and formulate treatment programs for children with autism and related developmental disorders. To the end to provide further information on the PEP-R's psychometric properties, a large sample (N = 137) of children presenting Autistic Disorder symptoms under the age of 12 years, including low-functioning individuals, was examined. Results yielded data of interest especially in terms of: Cronbach's alpha, interrater reliability, and validation with the Vineland Adaptive Behavior Scales. These findings help complete the instrument's statistical description and augment its usefulness, not only in designing treatment programs for these individuals, but also as an instrument for verifying the efficacy of intervention.",
"title": ""
},
{
"docid": "f8821f651731943ce1652bc8a1d2c0d6",
"text": "business units and thus not even practiced in a cohesive, coherent manner. In the worst cases, busy business unit executives trade roving bands of developers like Pokémon cards in a fifth-grade classroom (in an attempt to get ahead). Suffice it to say, none of this is good. The disconnect between security and development has ultimately produced software development efforts that lack any sort of contemporary understanding of technical security risks. Today's complex and highly connected computing environments trigger myriad security concerns, so by blowing off the idea of security entirely, software builders virtually guarantee that their creations will have way too many security weaknesses that could—and should—have been avoided. This article presents some recommendations for solving this problem. Our approach is born out of experience in two diverse fields: software security and information security. Central among our recommendations is the notion of using the knowledge inherent in information security organizations to enhance secure software development efforts. Don't stand so close to me Best practices in software security include a manageable number of simple activities that should be applied throughout any software development process (see Figure 1). These lightweight activities should start at the earliest stages of software development and then continue throughout the development process and into deployment and operations. Although an increasing number of software shops and individual developers are adopting the software security touchpoints we describe here as their own, they often lack the requisite security domain knowledge required to do so. This critical knowledge arises from years of observing system intrusions, dealing with malicious hackers, suffering the consequences of software vulnera-bilities, and so on. Put in this position , even the best-intended development efforts can fail to take into account real-world attacks previously observed on similar application architectures. Although recent books 1,2 are starting to turn this knowledge gap around, the science of attack is a novel one. Information security staff—in particular, incident handlers and vulnerability/patch specialists— have spent years responding to attacks against real systems and thinking about the vulnerabilities that spawned them. In many cases, they've studied software vulnerabili-ties and their resulting attack profiles in minute detail. However, few information security professionals are software developers (at least, on a full-time basis), and their solution sets tend to be limited to reactive techniques such as installing software patches, shoring up firewalls, updating intrusion detection signature databases, and the like. It's very rare to find information security …",
"title": ""
},
{
"docid": "afdc8b3e00a4fe39b281e17056d97664",
"text": "This demo presents the features of the Proactive Insights (PI) engine, which uses machine learning and artificial intelligence capabilities to automatically identify weaknesses in business processes, to reveal their root causes, and to give intelligent advice on how to improve process inefficiencies. We demonstrate the four PI elements covering Conformance, Machine Learning, Social, and Companion. The new insights are especially valuable for process managers and academics interested in BPM and process mining.",
"title": ""
},
{
"docid": "9014cf924884777f81c10e2a173fdf13",
"text": "We study chemical reactions with complex mechanisms under t wo assumptions: (i) intermediates are present in small amounts (this is the q uasi-steady-state hypothesis or QSS) and (ii) they are in equilibrium relations with substra tes (this is the quasiequilibrium hypothesis or QE). Under these assumptions, we prove the gen ralized mass action law together with the basic relations between kinetic factors, which are sufficient for the positivity of the entropy production but hold even without m icroreversibility, when the detailed balance is not applicable. Even though QE and QSS pr oduce useful approximations by themselves, only the combination of these assumptions ca n render the possibility beyond the “rarefied gas” limit or the “molecular chaos” hypotheses . We do not use any a priori form of the kinetic law for the chemical reactions and describe th eir equilibria by thermodynamic relations. The transformations of the intermediate compou nds can be described by the Markov kinetics because of their low density ( low density of elementary events ). This combination of assumptions was introduced by Michaelis and Menten in 1913. In 1952, Stueckelberg used the same assumptions for the gas kinetics and produced the remarkable semi-detailed balance relations between collision rates i n the Boltzmann equation that are weaker than the detailed balance conditions but are stil l sufficient for the Boltzmann H-theorem to be valid. Our results are obtained within the Mic haelis-Menten-Stueckelbeg conceptual framework.",
"title": ""
},
{
"docid": "f81dd0c86a7b45e743e4be117b4030c2",
"text": "Stock market prediction is of great importance for financial analysis. Traditionally, many studies only use the news or numerical data for the stock market prediction. In the recent years, in order to explore their complementary, some studies have been conducted to equally treat dual sources of information. However, numerical data often play a much more important role compared with the news. In addition, the existing simple combination cannot exploit their complementarity. In this paper, we propose a numerical-based attention (NBA) method for dual sources stock market prediction. Our major contributions are summarized as follows. First, we propose an attention-based method to effectively exploit the complementarity between news and numerical data in predicting the stock prices. The stock trend information hidden in the news is transformed into the importance distribution of numerical data. Consequently, the news is encoded to guide the selection of numerical data. Our method can effectively filter the noise and make full use of the trend information in news. Then, in order to evaluate our NBA model, we collect news corpus and numerical data to build three datasets from two sources: the China Security Index 300 (CSI300) and the Standard & Poor’s 500 (S&P500). Extensive experiments are conducted, showing that our NBA is superior to previous models in dual sources stock price prediction.",
"title": ""
},
{
"docid": "5ae890862d844ce03359624c3cb2012b",
"text": "Spend your time even for only few minutes to read a book. Reading a book will never reduce and waste your time to be useless. Reading, for some people become a need that is to do every day such as spending time for eating. Now, what about you? Do you like to read a book? Now, we will show you a new book enPDFd software architecture in practice second edition that can be a new way to explore the knowledge. When reading this book, you can get one thing to always remember in every reading time, even step by step.",
"title": ""
},
{
"docid": "da302043eecd427e70c48c28df189aa3",
"text": "Recent advances in electronics and wireless communication technologies have enabled the development of large-scale wireless sensor networks that consist of many low-power, low-cost, and small-size sensor nodes. Sensor networks hold the promise of facilitating large-scale and real-time data processing in complex environments. Security is critical for many sensor network applications, such as military target tracking and security monitoring. To provide security and privacy to small sensor nodes is challenging, due to the limited capabilities of sensor nodes in terms of computation, communication, memory/storage, and energy supply. In this article we survey the state of the art in research on sensor network security.",
"title": ""
},
{
"docid": "1168c9e6ce258851b15b7e689f60e218",
"text": "Modern deep learning architectures produce highly accurate results on many challenging semantic segmentation datasets. State-of-the-art methods are, however, not directly transferable to real-time applications or embedded devices, since naïve adaptation of such systems to reduce computational cost (speed, memory and energy) causes a significant drop in accuracy. We propose ContextNet, a new deep neural network architecture which builds on factorized convolution, network compression and pyramid representation to produce competitive semantic segmentation in real-time with low memory requirement. ContextNet combines a deep network branch at low resolution that captures global context information efficiently with a shallow branch that focuses on highresolution segmentation details. We analyse our network in a thorough ablation study and present results on the Cityscapes dataset, achieving 66.1% accuracy at 18.3 frames per second at full (1024× 2048) resolution (23.2 fps with pipelined computations for streamed data).",
"title": ""
},
{
"docid": "58156df07590448d89c2b8d4a46696ad",
"text": "Gene PmAF7DS confers resistance to wheat powdery mildew (isolate Bgt#211 ); it was mapped to a 14.6-cM interval ( Xgwm350 a– Xbarc184 ) on chromosome 7DS. The flanking markers could be applied in MAS breeding. Wheat powdery mildew (Pm) is caused by the biotrophic pathogen Blumeria graminis tritici (DC.) (Bgt). An ongoing threat of breakdown of race-specific resistance to Pm requires a continuous effort to discover new alleles in the wheat gene pool. Developing new cultivars with improved disease resistance is an economically and environmentally safe approach to reduce yield losses. To identify and characterize genes for resistance against Pm in bread wheat we used the (Arina × Forno) RILs population. Initially, the two parental lines were screened with a collection of 61 isolates of Bgt from Israel. Three Pm isolates Bgt#210 , Bgt#211 and Bgt#213 showed differential reactions in the parents: Arina was resistant (IT = 0), whereas Forno was moderately susceptible (IT = −3). Isolate Bgt#211 was then used to inoculate the RIL population. The segregation pattern of plant reactions among the RILs indicates that a single dominant gene controls the conferred resistance. A genetic map of the region containing this gene was assembled with DNA markers and assigned to the 7D physical bin map. The gene, temporarily designated PmAF7DS, was located in the distal region of chromosome arm 7DS. The RILs were also inoculated with Bgt#210 and Bgt#213. The plant reactions to these isolates showed high identity with the reaction to Bgt#211, indicating the involvement of the same gene or closely linked, but distinct single genes. The genomic location of PmAF7DS, in light of other Pm genes on 7DS is discussed.",
"title": ""
}
] |
scidocsrr
|
dd9275e0abc322020a02a0cccf6ceadf
|
Human Social Interaction Modeling Using Temporal Deep Networks
|
[
{
"docid": "efd8a99b6fac8ca416f4eb6d825a611b",
"text": "A variety of theoretical frameworks predict the resemblance of behaviors between two people engaged in communication, in the form of coordination, mimicry, or alignment. However, little is known about the time course of the behavior matching, even though there is evidence that dyads synchronize oscillatory motions (e.g., postural sway). This study examined the temporal structure of nonoscillatory actions-language, facial, and gestural behaviors-produced during a route communication task. The focus was the temporal relationship between matching behaviors in the interlocutors (e.g., facial behavior in one interlocutor vs. the same facial behavior in the other interlocutor). Cross-recurrence analysis revealed that within each category tested (language, facial, gestural), interlocutors synchronized matching behaviors, at temporal lags short enough to provide imitation of one interlocutor by the other, from one conversational turn to the next. Both social and cognitive variables predicted the degree of temporal organization. These findings suggest that the temporal structure of matching behaviors provides low-level and low-cost resources for human interaction.",
"title": ""
}
] |
[
{
"docid": "a0e5c8945212e8cde979b4c5decb71d0",
"text": "Cybercrime is a pervasive threat for today's Internet-dependent society. While the real extent and economic impact is hard to quantify, scientists and officials agree that cybercrime is a huge and still growing problem. A substantial fraction of cybercrime's overall costs to society can be traced to indirect opportunity costs, resulting from unused online services. This paper presents a parsimonious model that builds on technology acceptance research and insights from criminology to identify factors that reduce Internet users' intention to use online services. We hypothesize that avoidance of online banking, online shopping and online social networking is increased by cybercrime victimization and media reports. The effects are mediated by the perceived risk of cybercrime and moderated by the user's confidence online. We test our hypotheses using a structural equation modeling analysis of a representative pan-European sample. Our empirical results confirm the negative impact of perceived risk of cybercrime on the use of all three online service categories and support the role of cybercrime experience as an antecedent of perceived risk of cybercrime. We further show that more confident Internet users perceive less cybercriminal risk and are more likely to use online banking and online shopping, which highlights the importance of consumer education.",
"title": ""
},
{
"docid": "5c90cd6c4322c30efb90589b1a65192e",
"text": "The sure thing principle and the law of total probability are basic laws in classic probability theory. A disjunction fallacy leads to the violation of these two classical laws. In this paper, an Evidential Markov (EM) decision making model based on Dempster-Shafer (D-S) evidence theory and Markov modelling is proposed to address this issue and model the real human decision-making process. In an evidential framework, the states are extended by introducing an uncertain state which represents the hesitance of a decision maker. The classical Markov model can not produce the disjunction effect, which assumes that a decision has to be certain at one time. However, the state is allowed to be uncertain in the EM model before the final decision is made. An extra uncertainty degree parameter is defined by a belief entropy, named Deng entropy, to assignment the basic probability assignment of the uncertain state, which is the key to predict the disjunction effect. A classical categorization decision-making experiment is used to illustrate the effectiveness and validity of EM model. The disjunction effect can be well predicted ∗Corresponding author at Wen Jiang: School of Electronics and Information, Northwestern Polytechnical University, Xi’an, Shaanxi 710072, China. Tel: (86-29)88431267. E-mail address: jiangwen@nwpu.edu.cn, jiangwenpaper@hotmail.com Preprint submitted to Elsevier May 19, 2017 and the free parameters are less compared with the existing models.",
"title": ""
},
{
"docid": "260c12152d9bd38bd0fde005e0394e17",
"text": "On the initiative of the World Health Organization, two meetings on the Standardization of Reporting Results of Cancer Treatment have been held with representatives and members of several organizations. Recommendations have been developed for standardized approaches to the recording of baseline data relating to the patient, the tumor, laboratory and radiologic data, the reporting of treatment, grading of acute and subacute toxicity, reporting of response, recurrence and disease-free interval, and reporting results of therapy. These recommendations, already endorsed by a number of organizations, are proposed for international acceptance and use to make it possible for investigators to compare validly their results with those of others.",
"title": ""
},
{
"docid": "4621f0bd002f8bd061dd0b224f27977c",
"text": "Organisations increasingly perceive their employees as a great asset that needs to be cared for; however, at the same time, they view employees as one of the biggest potential threats to their cyber security. Employees are widely acknowledged to be responsible for security breaches in organisations, and it is important that these are given as much attention as are technical issues. A significant number of researchers have argued that non-compliance with information security policy is one of the major challenges facing organisations. This is primarily considered to be a human problem rather than a technical issue. Thus, it is not surprising that employees are one of the major underlying causes of breaches in information security. In this paper, academic literature and reports of information security institutes relating to policy compliance are reviewed. The objective is to provide an overview of the key challenges surrounding the successful implementation of information security policies. A further aim is to investigate the factors that may have an influence upon employees' behaviour in relation to information security policy. As a result, challenges to information security policy have been classified into four main groups: security policy promotion; noncompliance with security policy; security policy management and updating; and shadow security. Furthermore, the factors influencing behaviour have been divided into organisational and human factors. Ultimately, this paper concludes that continuously subjecting users to targeted awareness raising and dynamically monitoring their adherence to information security policy should increase the compliance level.",
"title": ""
},
{
"docid": "7baf37974303e6f83f52ff47c441387f",
"text": "We present a novel Bayesian model for semi-supervised part-of-speech tagging. Our model extends the Latent Dirichlet Allocation model and incorporates the intuition that words’ distributions over tags, p(t|w), are sparse. In addition we introduce a model for determining the set of possible tags of a word which captures important dependencies in the ambiguity classes of words. Our model outperforms the best previously proposed model for this task on a standard dataset.",
"title": ""
},
{
"docid": "7834cad6190a019c3b0086a3f0231182",
"text": "In modern train control systems, a moving train retrieves its location information through passive transponders called balises, which are placed on the sleepers of the track at regular intervals. When the train-borne antenna energizes them using tele-powering signals, balises backscatter preprogrammed telegrams, which carry information about the train's current location. Since the telegrams are static in the existing implementations, the uplink signals from the balises could be recorded by an adversary and then replayed at a different location of the track, leading to what is well-known as the replay attack. Such an attack, while the legitimate balise is still functional, introduces ambiguity to the train about its location, can impact the physical operations of the trains. For balise-to-train communication, we propose a new communication framework referred to as cryptographic random fountains (CRF), where each balise, instead of transmitting telegrams with fixed information, transmits telegrams containing random signals. A salient feature of CRF is the use of challenge-response based interaction between the train and the balise for communication integrity. We present a thorough security analysis of CRF to showcase its ability to mitigate sophisticated replay attacks. Finally, we also discuss the implementation aspects of our framework.",
"title": ""
},
{
"docid": "e350e4a5baf6a9c1b701b27aba5405f4",
"text": "When a detector sensitive to the target plume IR seeker is used for tracking airborne targets, the seeker tends to follow the target hot point which is a point farther away from the target exhaust and its fuselage. In order to increase the missile effectiveness, it is necessary to modify the guidance law by adding a lead bias command. The resulting guidance is known as target adaptive guidance (TAG). First, the pure proportional navigation guidance (PPNG) in 3-dimensional state is explained in a new point of view. The main idea is based on the distinction between angular rate vector and rotation vector conceptions. The current innovation is based on selection of line of sight (LOS) coordinates. A comparison between two available choices for LOS coordinates system is proposed. An improvement is made by adding two additional terms. First term includes a cross range compensator which is used to provide and enhance path observability, and obtain convergent estimates of state variables. The second term is new concept lead bias term, which has been calculated by assuming an equivalent acceleration along the target longitudinal axis. Simulation results indicate that the lead bias term properly provides terminal conditions for accurate target interception.",
"title": ""
},
{
"docid": "da5fc78a9a1be5125fe668ac4ca20ee5",
"text": "This letter proposes a groundbreaking approach in the remote-sensing community to simulating the digital surface model (DSM) from a single optical image. This novel technique uses conditional generative adversarial networks whose architecture is based on an encoder–decoder network with skip connections (generator) and penalizing structures at the scale of image patches (discriminator). The network is trained on scenes where both the DSM and optical data are available to establish an image-to-DSM translation rule. The trained network is then utilized to simulate elevation information on target scenes where no corresponding elevation information exists. The capability of the approach is evaluated both visually (in terms of photographic interpretation) and quantitatively (in terms of reconstruction errors and classification accuracies) on subdecimeter spatial resolution data sets captured over Vaihingen, Potsdam, and Stockholm. The results confirm the promising performance of the proposed framework.",
"title": ""
},
{
"docid": "3baf11f31351e92c7ff56b066434ae2c",
"text": "Unlike images which are represented in regular dense grids, 3D point clouds are irregular and unordered, hence applying convolution on them can be difficult. In this paper, we extend the dynamic filter to a new convolution operation, named PointConv. PointConv can be applied on point clouds to build deep convolutional networks. We treat convolution kernels as nonlinear functions of the local coordinates of 3D points comprised of weight and density functions. With respect to a given point, the weight functions are learned with multi-layer perceptron networks and the density functions through kernel density estimation. A novel reformulation is proposed for efficiently computing the weight functions, which allowed us to dramatically scale up the network and significantly improve its performance. The learned convolution kernel can be used to compute translation-invariant and permutation-invariant convolution on any point set in the 3D space. Besides, PointConv can also be used as deconvolution operators to propagate features from a subsampled point cloud back to its original resolution. Experiments on ModelNet40, ShapeNet, and ScanNet show that deep convolutional neural networks built on PointConv are able to achieve state-ofthe-art on challenging semantic segmentation benchmarks on 3D point clouds. Besides, our experiments converting CIFAR-10 into a point cloud showed that networks built on PointConv can match the performance of convolutional networks in 2D images of a similar structure.",
"title": ""
},
{
"docid": "a1cd4a4ce70c9c8672eee5ffc085bf63",
"text": "Ternary logic is a promising alternative to conventional binary logic, since it is possible to achieve simplicity and energy efficiency due to the reduced circuit overhead. In this paper, a ternary magnitude comparator design based on Carbon Nanotube Field Effect Transistors (CNFETs) is presented. This design eliminates the usage of complex ternary decoder which is a part of existing designs. Elimination of decoder results in reduction of delay and power. Simulations of proposed and existing designs are done on HSPICE and results proves that the proposed 1-bit comparator consumes 81% less power and shows delay advantage of 41.6% compared to existing design. Further a methodology to extend the 1-bit comparator design to n-bit comparator design is also presented.",
"title": ""
},
{
"docid": "c91e966b803826908ae4dd82cc4a483e",
"text": "Many shallow natural language understanding tasks use dependency trees to extract relations between content words. However, strict surface-structure dependency trees tend to follow the linguistic structure of sentences too closely and frequently fail to provide direct relations between content words. To mitigate this problem, the original Stanford Dependencies representation also defines two dependency graph representations which contain additional and augmented relations that explicitly capture otherwise implicit relations between content words. In this paper, we revisit and extend these dependency graph representations in light of the recent Universal Dependencies (UD) initiative and provide a detailed account of an enhanced and an enhanced++ English UD representation. We further present a converter from constituency to basic, i.e., strict surface structure, UD trees, and a converter from basic UD trees to enhanced and enhanced++ English UD graphs. We release both converters as part of Stanford CoreNLP and the Stanford Parser.",
"title": ""
},
{
"docid": "48fea4f95e6b7dfa7bb371f28751ac5a",
"text": "The suppression mechanism of the differential-mode noise of an X capacitor in offline power supplies is, for the first time, attributed to two distinct concepts: 1) impedance mismatch (regarding a line impedance stabilization network or mains and the equivalent power supply noise source impedance) and 2) C(dv/dt) noise current balancing (to suppress mix-mode noise). The effectiveness of X capacitors is investigated with this theory, along with experimental supports. Understanding of the two aforementioned mechanisms gives better insight into filter effectiveness, which may lead to a more compact filter design.",
"title": ""
},
{
"docid": "8e077186aef0e7a4232eec0d8c73a5a2",
"text": "The appetite for up-to-date information about earth’s surface is ever increasing, as such information provides a base for a large number of applications, including local, regional and global resources monitoring, land-cover and land-use change monitoring, and environmental studies. The data from remote sensing satellites provide opportunities to acquire information about land at varying resolutions and has been widely used for change detection studies. A large number of change detection methodologies and techniques, utilizing remotely sensed data, have been developed, and newer techniques are still emerging. This paper begins with a discussion of the traditionally pixel-based and (mostly) statistics-oriented change detection techniques which focus mainly on the spectral values and mostly ignore the spatial context. This is succeeded by a review of object-based change detection techniques. Finally there is a brief discussion of spatial data mining techniques in image processing and change detection from remote sensing data. The merits and issues of different techniques are compared. The importance of the exponential increase in the image data volume and multiple sensors and associated challenges on the development of change detection techniques are highlighted. With the wide use of very-high-resolution (VHR) remotely sensed images, object-based methods and data mining techniques may have more potential in change detection. 2013 International Society for Photogrammetry and Remote Sensing, Inc. (ISPRS) Published by Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "266d3ff38aec23ae748fa515dfd7bf60",
"text": "Organizational learning (OL) and knowledge management (KM) research has gone through dramatic changes in the last twenty years and, without doubt, the fi eld will continue to change in the next ten years. Our research suggests that Cyert and March were the fi rst authors to reference organizational learning in their publication of 1963. It was just twenty years ago that a conference was held at Carnegie Mellon University to honor March and his contribution to the fi eld of organizational learning. Many of these presentations were published in a special issue of Organization Science in 1991. Since that time we have seen a rapid expansion in the number of journal articles— both academic and practitioner—devoted to organizational learning. Fields such as information technology, marketing and human resources have also jumped on the bandwagon. Doctoral programs are including seminars on organizational learning, and MBA courses on organizational learning are appearing. All of this refl ects acceptance of the concept that organizations have knowledge, do learn over time, and consider their knowledge base and social capital as valuable assets. It also reaffi rms the legitimacy of research on organizational learning and its practical applications to organizations. The fi rst edition of this Handbook was published in 2003 but most chapters were completed in 2001 or 2002. Our fi rst edition was widely used and it was clear—given the advancement of the fi eld—that a second edition was necessary. Some people might claim that it is foolhardy to seek to cover the full range of the literature within one volume. Our intent is to provide a resource that is useful to academics, practitioners, and students who want an overview of the current fi eld with full recognition that—to our delight—the fi eld continues to have major impact on research and management practices. Our response is",
"title": ""
},
{
"docid": "018df705607ea7a71bf8a2a89b988eb7",
"text": "Adult playfulness is a personality trait that enables people to frame or reframe everyday situations in such a way that they experience them as entertaining, intellectually stimulating, or personally interesting. Earlier research supports the notion that playfulness is associated with the pursuit of an active way of life. While playful children are typically described as being active, only limited knowledge exists on whether playfulness in adults is also associated with physical activity. Additionally, existing literature has not considered different facets of playfulness, but only global playfulness. Therefore, we employed a multifaceted model that allows distinguishing among Other-directed, Lighthearted, Intellectual, and Whimsical playfulness. For narrowing this gap in the literature, we conducted two studies addressing the associations of playfulness with health, activity, and fitness. The main aim of Study 1 was a comparison of self-ratings (N = 529) and ratings from knowledgeable others (N = 141). We tested the association of self- and peer-reported playfulness with self- and peer-reported physical activity, fitness, and health behaviors. There was a good convergence of playfulness among self- and peer-ratings (between r = 0.46 and 0.55, all p < 0.001). Data show that both self- and peer-ratings are differentially associated with physical activity, fitness, and health behaviors. For example, self-rated playfulness shared 3% of the variance with self-rated physical fitness and 14% with the pursuit of an active way of life. Study 2 provides data on the association between self-rated playfulness and objective measures of physical fitness (i.e., hand and forearm strength, lower body muscular strength and endurance, cardio-respiratory fitness, back and leg flexibility, and hand and finger dexterity) using a sample of N = 67 adults. Self-rated playfulness was associated with lower baseline and activity (climbing stairs) heart rate and faster recovery heart rate (correlation coefficients were between -0.19 and -0.24 for global playfulness). Overall, Study 2 supported the findings of Study 1 by showing positive associations of playfulness with objective indicators of physical fitness (primarily cardio-respiratory fitness). The findings represent a starting point for future studies on the relationships between playfulness, and health, activity, and physical fitness.",
"title": ""
},
{
"docid": "8de1acc08d32f8840de8375078f2369a",
"text": "Widespread acceptance of virtual reality has been partially handicapped by the inability of current systems to accommodate multiple viewpoints, thereby limiting their appeal for collaborative applications. We are exploring the ability to utilize passive, untracked participants in a powerwall environment. These participants see the same image as the active, immersive participant. This does present the passive user with a varying viewpoint that does not correspond to their current position. We demonstrate the impact this will have on the perceived image and show that human psychology is actually well adapted to compensating for what, on the surface, would seem to be a very drastic distortion. We present some initial guidelines for system design that minimize the negative impact of passive participation, allowing two or more collaborative participants. We then outline future experimentation to measure user compensation for these distorted viewpoints.",
"title": ""
},
{
"docid": "8106487f98bcc94c1310799e74e7a173",
"text": "We present a method to predict long-term motion of pedestrians, modeling their behavior as jump-Markov processes with their goal a hidden variable. Assuming approximately rational behavior, and incorporating environmental constraints and biases, including time-varying ones imposed by traffic lights, we model intent as a policy in a Markov decision process framework. We infer pedestrian state using a Rao-Blackwellized filter, and intent by planning according to a stochastic policy, reflecting individual preferences in aiming at the same goal.",
"title": ""
},
{
"docid": "d9f442d281de14651ca17ec5d160b2d2",
"text": "Query expansion of named entities can be employed in order to increase the retrieval effectiveness. A peculiarity of named entities compared to other vocabulary terms is that they are very dynamic in appearance, and synonym relationships between terms change with time. In this paper, we present an approach to extracting synonyms of named entities over time from the whole history of Wikipedia. In addition, we will use their temporal patterns as a feature in ranking and classifying them into two types, i.e., time-independent or time-dependent. Time-independent synonyms are invariant to time, while time-dependent synonyms are relevant to a particular time period, i.e., the synonym relationships change over time. Further, we describe how to make use of both types of synonyms to increase the retrieval effectiveness, i.e., query expansion with time-independent synonyms for an ordinary search, and query expansion with time-dependent synonyms for a search wrt. temporal criteria. Finally, through an evaluation based on TREC collections, we demonstrate how retrieval performance of queries consisting of named entities can be improved using our approach.",
"title": ""
},
{
"docid": "2fa75232c6080f2c79897579b78f31d5",
"text": "The rapid development of cloud computing promotes a wide deployment of data and computation outsourcing to cloud service providers by resource-limited entities. Based on a pay-per-use model, a client without enough computational power can easily outsource large-scale computational tasks to a cloud. Nonetheless, the issue of security and privacy becomes a major concern when the customer’s sensitive or confidential data is not processed in a fully trusted cloud environment. Recently, a number of publications have been proposed to investigate and design specific secure outsourcing schemes for different computational tasks. The aim of this survey is to systemize and present the cutting-edge technologies in this area. It starts by presenting security threats and requirements, followed with other factors that should be considered when constructing secure computation outsourcing schemes. In an organized way, we then dwell on the existing secure outsourcing solutions to different computational tasks such as matrix computations, mathematical optimization, and so on, treating data confidentiality as well as computation integrity. Finally, we provide a discussion of the literature and a list of open challenges in the area.",
"title": ""
}
] |
scidocsrr
|
342fae05a11c49e7dd3ea5edfbc13019
|
Automatic Hair Detection in the Wild
|
[
{
"docid": "01e35e372cde2ce0df50d1ff85e59df6",
"text": "In this paper, we present an automatic method for hair segmentation. Our algorithm is divided into two steps. Firstly, we take information from frequential and color analysis in order to create binary masks as descriptor of the hair location. Secondly, we perform a 'matting treatment' which is a process to extract foreground object from an image. This approach is based on markers which positions are initialized from the fusion of frequential and color masks. At the end the matting treatment result is use to segment the hair. Results are evaluated using semi- manual segmentation references.",
"title": ""
},
{
"docid": "72f17106ad48b144ccab55b564fece7d",
"text": "We present an efficient and robust model matching method which uses a joint shape and texture appearance model to generate a set of region template detectors. The model is fitted to an unseen image in an iterative manner by generating templates using the joint model and the current parameter estimates, correlating the templates with the target image to generate response images and optimising the shape parameters so as to maximise the sum of responses. The appearance model is similar to that used in the AAM [1]. However in our approach the appearance model is used to generate likely feature templates, instead of trying to approximate the image pixels directly. We show that when applied to human faces, our Constrained Local Model (CLM) algorithm is more robust and more accurate than the original AAM search method, which relies on the image reconstruction error to update the model parameters. We demonstrate improved localisation accuracy on two publicly available face data sets and improved tracking on a challenging set of in-car face sequences.",
"title": ""
}
] |
[
{
"docid": "4c0557527bb445c7d641028e2d88005f",
"text": "Small printed antennas will replace the commonly used normal-mode helical antennas of mobile handsets and systems in the future. This paper presents a novel small planar inverted-F antenna (PIFA) which is a common PIFA in which a U-shaped slot is etched to form a dual band operation for wearable and ubiquitous computing equipment. Health issues are considered in selecting suitable antenna topology and the placement of the antenna. Various applications are presented while the paper mainly discusses about the GSM applications.",
"title": ""
},
{
"docid": "759140ad09a5a8ce5c5e1ca78e238de1",
"text": "Various issues make framework development harder than regular development. Building product lines and frameworks requires increased coordination and communication between stakeholders and across the organization.\n The difficulty of building the right abstractions ranges from understanding the domain models, selecting and evaluating the framework architecture, to designing the right interfaces, and adds to the complexity of a framework project.",
"title": ""
},
{
"docid": "fb05cf398f1d50f9321e7745ad8bcdc9",
"text": "Occlusion is the key and challenging problem in stereo matching, because the results from depth maps are significantly influenced by occlusion regions. In this paper, we propose a method for occlusion and error regions detection and for efficient holefilling based on an energy minimization. First, we implement conventional global stereo matching algorithms to estimate depth information. Exploiting the result from a stereo matching method, we segments the depth map occlusion and error regions into nonocclusion regions. To detect occlusion and error regions, we model an energy function with three constraints such as ordering, uniqueness, and color similarity constraints. After labeling the occlusion and error regions, we optimize an energy function based MRF via dynamic programing. In order to evaluate the performance of our proposed method, we measure the percentages of mismatching pixels (BPR). And we subjectively compare the results of our proposed method with conventional methods. Consequently, the proposed method increases the accuracy of depth estimation, and experimental results show that the proposed method generates more stable depth maps compared to the conventional methods.",
"title": ""
},
{
"docid": "9d45c1deaf429be2a5c33cd44b04290e",
"text": "In this paper, a new omni-directional driving system with one spherical wheel is proposed. This system is able to overcome the existing driving systems with structural limitations in vertical, horizontal and diagonal movement. This driving system was composed of two stepping motors, a spherical wheel covered by a ball bearing, a weight balancer for the elimination of eccentricity, and ball plungers for balance. All parts of this structure is located at same distance on the center because the center of gravity of this system must be placed at the center of the system. An own ball bearing was designed for settled rotation and smooth direction change of a spherical wheel. The principle of an own ball bearing is the reversal of the ball mouse. Steel as the material of ball in the own ball bearing, was used for the prevention the slip with ground. One of the stepping motors is used for driving the spherical wheel. This spherical wheel is stable because of the support of ball bearing. And the other enables to move in a wanted direction while it rotates based on the central axis. The ATmega128 chip is used for the control of two stepping motors. To verify the proposed system, driving experiments was executed in variety of environments. Finally, the performance and the validity of the omni-directional driving system were confirmed.",
"title": ""
},
{
"docid": "5d02e3dafc37cd96789342d58cc69019",
"text": "The tremendous number of sensors and smart objects being deployed in the Internet of Things (IoT) pose the potential for IT systems to detect and react to live-situations. For using this hidden potential, complex event processing (CEP) systems offer means to efficiently detect event patterns (complex events) in the sensor streams and therefore, help in realizing a “distributed intelligence” in the IoT. With the increasing number of data sources and the increasing volume at which data is produced, parallelization of event detection is crucial to limit the time events need to be buffered before they actually can be processed. In this paper, we propose a pattern-sensitive partitioning model for data streams that is capable of achieving a high degree of parallelism in detecting event patterns, which formerly could only consistently be detected in a sequential manner or at a low parallelization degree. Moreover, we propose methods to dynamically adapt the parallelization degree to limit the buffering imposed on event detection in the presence of dynamic changes to the workload. Extensive evaluations of the system behavior show that the proposed partitioning model allows for a high degree of parallelism and that the proposed adaptation methods are able to meet a buffering limit for event detection under high and dynamic workloads.",
"title": ""
},
{
"docid": "a226b2f802496414b942d2ba6d95d285",
"text": "Employees are considered as one of the most important assets of any institution. University of Gondar is one of the known universities in Ethiopia and has large number employees. Success of this University depends on the productivity of its employees. Social media, which has become very popular, has infiltrated the workplace and most employees are utilizing social media in the workplace without any access restriction. The purpose of this study is to examine the extent of social media participation by employees and its effect on their productivity. A sample was stratified randomly selected from a population that has internet connectivity in the workplace. Primary data was collected by using a questionnaire and interview. The research found both negative and positive relationship between social media participation and employee productivity. The negative relationship was however found to be stronger as 68.4 % employees spend most of their time on social media enhancing personal networks and 86 % of employees use office hours to visit online social networks. Positive relationship exists in employee, who use of social media for seeking and viewing work related information. The study concluded that employees participate in social media in the workplace for both work and non-work related",
"title": ""
},
{
"docid": "e9dcc0eb5894907142dffdf2aa233c35",
"text": "The explosion of the web and the abundance of linked data demand for effective and efficient methods for storage, management and querying. More specifically, the ever-increasing size and number of RDF data collections raises the need for efficient query answering, and dictates the usage of distributed data management systems for effectively partitioning and querying them. To this direction, Apache Spark is one of the most active big-data approaches, with more and more systems adopting it, for efficient, distributed data management. The purpose of this paper is to provide an overview of the existing works dealing with efficient query answering, in the area of RDF data, using Apache Spark. We discuss on the characteristics and the key dimension of such systems, we describe novel ideas in the area, and the corresponding drawbacks, and provide directions for future work.",
"title": ""
},
{
"docid": "9f15297a7eab4084fa7d17b618d82a02",
"text": "Purpose – The purpose of this study is to update a global ranking of knowledge management and intellectual capital (KM/IC) academic journals. Design/methodology/approach – Two different approaches were utilized: a survey of 379 active KM/IC researchers; and the journal citation impact method. Scores produced by the application of these methods were combined to develop the final ranking. Findings – Twenty-five KM/IC-centric journals were identified and ranked. The top six journals are: Journal of Knowledge Management, Journal of Intellectual Capital, The Learning Organization, Knowledge Management Research & Practice, Knowledge and Process Management and International Journal of Knowledge Management. Knowledge Management Research & Practice has substantially improved its reputation. The Learning Organization and Journal of Intellectual Capital retained their previous positions due to their strong citation impact. The number of KM/IC-centric and KM/IC-relevant journals has been growing at the pace of one new journal launch per year. This demonstrates that KM/IC is not a scientific fad; instead, the discipline is progressing towards academic maturity and recognition. Practical implications – The developed ranking may be used by various stakeholders, including journal editors, publishers, reviewers, researchers, new scholars, students, policymakers, university administrators, librarians and practitioners. It is a useful tool to further promote the KM/IC discipline and develop its unique identity. It is important for all KM/IC journals to become included in Thomson Reuters’ Journal Citation Reports. Originality/value – This is the most up-to-date ranking of KM/IC journals.",
"title": ""
},
{
"docid": "40b129a9960e3d9dc51fa5fbe48eecbc",
"text": "We report the first case of tinea corporis bullosa due to Trichophyton schoenleinii in a 41-year-old Romanian woman, without any involvement of the scalp and hair. The species identification was performed using macroscopic and microscopic features of the dermatophyte and its physiological abilities. Epidemiological aspects of the case are also discussed. The general treatment with terbinafine and topical applications of ciclopiroxolamine cream have led to complete healing, with the lesions disappearing in 2 weeks.",
"title": ""
},
{
"docid": "639b9ff274e5242c4bfc6a99d9c6963e",
"text": "Construction management suffers from many problems which need to be solved or better understood. The research described in this paper evaluates the effectiveness of implementing the Last Planner System (LPS) to improve construction planning practice and enhance site management in the Saudi construction industry. To do so, LPS was implemented in two large state-owned construction projects through an action research process. The data collection methods employed included interviews, observations and a survey questionnaire. The findings identify major benefits covering many aspects of project management, including improved construction planning, enhanced site management and better communication and coordination between the parties involved. The fact that the structural work in one of the projects studied was completed two weeks ahead of schedule provides evidence of improvement of the specific site construction planning practices. The paper also describes barriers to the realisation the full potential of LPS, including the involvement of many subcontractors and people’s commitment and attitude to time.",
"title": ""
},
{
"docid": "a4b20f765b443168e0bced9926b2cb74",
"text": "Over the past decade, analysts have proposed several frameworks to explain the process of radicalization into violent extremism (RVE). These frameworks are based primarily on rational, conceptual models which are neither guided by theory nor derived from systematic research. This article reviews recent (post-9/11) conceptual models of the radicalization process and recent (post-9/11) empirical studies of RVE. It emphasizes the importance of distinguishing between ideological radicalization and terrorism involvement, though both issues deserve further empirical inquiry.Finally, it summarizes some recent RVE-related research efforts, identifies seven things that social science researchers and operational personnel still need to know about violent radicalization, and offers a set of starting assumptions to move forward with a research agenda that might help to thwart tomorrow's terrorists. This article is available in Journal of Strategic Security: http://scholarcommons.usf.edu/jss/ vol4/iss4/3 Journal of Strategic Security Volume 4 Issue 4 2011, pp. 37-62 DOI: 10.5038/1944-0472.4.4.2 Journal of Strategic Security (c) 2011 ISSN: 1944-0464 eISSN: 1944-0472 37 Radicalization into Violent Extremism II: A Review of Conceptual Models and Empirical Research Randy Borum University of South Florida wborum@usf.edu",
"title": ""
},
{
"docid": "b5070b6b55a7fe64fc18993ad9cd7325",
"text": "STUDY OBJECTIVE\nto determine the efficacy of fish-oil dietary supplements in active rheumatoid arthritis and their effect on neutrophil leukotriene levels.\n\n\nDESIGN\nnonrandomized, double-blinded, placebo-controlled, crossover trial with 14-week treatment periods and 4-week washout periods.\n\n\nSETTING\nacademic medical center, referral-based rheumatology clinic.\n\n\nPATIENTS\nforty volunteers with active, definite, or classical rheumatoid arthritis. Five patients dropped out, and two were removed for noncompliance.\n\n\nINTERVENTIONS\ntreatment with nonsteroidal anti-inflammatory drugs, slow-acting antirheumatic drugs, and prednisone was continued. Twenty-one patients began with a daily dosage of 2.7 g of eicosapaentanic acid and 1.8 g of docosahexenoic acid given in 15 MAX-EPA capsules (R.P. Scherer, Clearwater, Florida), and 19 began with identical-appearing placebos. The background diet was unchanged.\n\n\nMEASUREMENTS AND MAIN RESULTS\nthe following results favored fish oil placebo after 14 weeks: mean time to onset of fatigue improved by 156 minutes (95% confidence interval, 1.2 to 311.0 minutes), and number of tender joints decreased by 3.5 (95% Cl, -6.0 to -1.0). Other clinical measures favored fish oil as well but did reach statistical significance. Neutrophil leukotriene B4 production was correlated with the decrease in number of tender joints (Spearman rank correlation r=0.53; p less than 0.05). There were no statistically significant differences in hemoglobin level, sedimentation rate, or presence of rheumatoid factor or in patient-reported adverse effects. An effect from the fish oil persisted beyond the 4-week washout period.\n\n\nCONCLUSIONS\nfish-oil ingestion results in subjective alleviation of active rheumatoid arthritis and reduction in neutrophil leukotriene B4 production. Further studies are needed to elucidate mechanisms of action and optimal dose and duration of fish-oil supplementation.",
"title": ""
},
{
"docid": "4b8823bffcc77968b7ac087579ab84c9",
"text": "Numerous complains have been made by Android users who severely suffer from the sluggish response when interacting with their devices. However, very few studies have been conducted to understand the user-perceived latency or mitigate the UI-lagging problem. In this paper, we conduct the first systematic measurement study to quantify the user-perceived latency using typical interaction-intensive Android apps in running with and without background workloads. We reveal the insufficiency of Android system in ensuring the performance of foreground apps and therefore design a new system to address the insufficiency accordingly. We develop a lightweight tracker to accurately identify all delay-critical threads that contribute to the slow response of user interactions. We then build a resource manager that can efficiently schedule various system resources including CPU, I/O, and GPU, for optimizing the performance of these threads. We implement the proposed system on commercial smartphones and conduct comprehensive experiments to evaluate our implementation. Evaluation results show that our system is able to significantly reduce the user-perceived latency of foreground apps in running with aggressive background workloads, up to 10x, while incurring negligible system overhead of less than 3.1 percent CPU and 7 MB memory.",
"title": ""
},
{
"docid": "6084bf59cfd956d119692d00c442f93d",
"text": "Microbial biofilms are complex, self-organized communities of bacteria, which employ physiological cooperation and spatial organization to increase both their metabolic efficiency and their resistance to changes in their local environment. These properties make biofilms an attractive target for engineering, particularly for the production of chemicals such as pharmaceutical ingredients or biofuels, with the potential to significantly improve yields and lower maintenance costs. Biofilms are also a major cause of persistent infection, and a better understanding of their organization could lead to new strategies for their disruption. Despite this potential, the design of synthetic biofilms remains a major challenge, due to the complex interplay between transcriptional regulation, intercellular signaling, and cell biophysics. Computational modeling could help to address this challenge by predicting the behavior of synthetic biofilms prior to their construction; however, multiscale modeling has so far not been achieved for realistic cell numbers. This paper presents a computational method for modeling synthetic microbial biofilms, which combines three-dimensional biophysical models of individual cells with models of genetic regulation and intercellular signaling. The method is implemented as a software tool (CellModeller), which uses parallel Graphics Processing Unit architectures to scale to more than 30,000 cells, typical of a 100 μm diameter colony, in 30 min of computation time.",
"title": ""
},
{
"docid": "217742ed285e8de40d68188566475126",
"text": "It has been proposed that D-amino acid oxidase (DAO) plays an essential role in degrading D-serine, an endogenous coagonist of N-methyl-D-aspartate (NMDA) glutamate receptors. DAO shows genetic association with amyotrophic lateral sclerosis (ALS) and schizophrenia, in whose pathophysiology aberrant metabolism of D-serine is implicated. Although the pathology of both essentially involves the forebrain, in rodents, enzymatic activity of DAO is hindbrain-shifted and absent in the region. Here, we show activity-based distribution of DAO in the central nervous system (CNS) of humans compared with that of mice. DAO activity in humans was generally higher than that in mice. In the human forebrain, DAO activity was distributed in the subcortical white matter and the posterior limb of internal capsule, while it was almost undetectable in those areas in mice. In the lower brain centers, DAO activity was detected in the gray and white matters in a coordinated fashion in both humans and mice. In humans, DAO activity was prominent along the corticospinal tract, rubrospinal tract, nigrostriatal system, ponto-/olivo-cerebellar fibers, and in the anterolateral system. In contrast, in mice, the reticulospinal tract and ponto-/olivo-cerebellar fibers were the major pathways showing strong DAO activity. In the human corticospinal tract, activity-based staining of DAO did not merge with a motoneuronal marker, but colocalized mostly with excitatory amino acid transporter 2 and in part with GFAP, suggesting that DAO activity-positive cells are astrocytes seen mainly in the motor pathway. These findings establish the distribution of DAO activity in cerebral white matter and the motor system in humans, providing evidence to support the involvement of DAO in schizophrenia and ALS. Our results raise further questions about the regulation of D-serine in DAO-rich regions as well as the physiological/pathological roles of DAO in white matter astrocytes.",
"title": ""
},
{
"docid": "83e53a09792e434db2bb5bef32c7bf61",
"text": "Extractive document summarization aims to conclude given documents by extracting some salient sentences. Often, it faces two challenges: 1) how to model the information redundancy among candidate sentences; 2) how to select the most appropriate sentences. This paper attempts to build a strong summarizer DivSelect+CNNLM by presenting new algorithms to optimize each of them. Concretely, it proposes CNNLM, a novel neural network language model (NNLM) based on convolutional neural network (CNN), to project sentences into dense distributed representations, then models sentence redundancy by cosine similarity. Afterwards, it formulates the selection process as an optimization problem, constructing a diversified selection process (DivSelect) with the aim of selecting some sentences which have high prestige, meantime, are dis-similar with each other. Experimental results on DUC2002 and DUC2004 benchmark data sets demonstrate the effectiveness of our approach.",
"title": ""
},
{
"docid": "f22375b6d29a83815aedd999cb945027",
"text": "INTRODUCTION\nNumerous methods for motor unit number estimation (MUNE) have been developed. The objective of this article is to summarize and compare the major methods and the available data regarding their reproducibility, validity, application, refinement, and utility.\n\n\nMETHODS\nUsing specified search criteria, a systematic review of the literature was performed. Reproducibility, normative data, application to specific diseases and conditions, technical refinements, and practicality were compiled into a comprehensive database and analyzed.\n\n\nRESULTS\nThe most commonly reported MUNE methods are the incremental, multiple-point stimulation, spike-triggered averaging, and statistical methods. All have established normative data sets and high reproducibility. MUNE provides quantitative assessments of motor neuron loss and has been applied successfully to the study of many clinical conditions, including amyotrophic lateral sclerosis and normal aging.\n\n\nCONCLUSIONS\nMUNE is an important research technique in human subjects, providing important data regarding motor unit populations and motor unit loss over time.",
"title": ""
},
{
"docid": "b6f05fcc1face0dcf4981e6578b0330e",
"text": "The importance of accurate and timely information describing the nature and extent of land resources and changes over time is increasing, especially in rapidly growing metropolitan areas. We have developed a methodology to map and monitor land cover change using multitemporal Landsat Thematic Mapper (TM) data in the seven-county Twin Cities Metropolitan Area of Minnesota for 1986, 1991, 1998, and 2002. The overall seven-class classification accuracies averaged 94% for the four years. The overall accuracy of land cover change maps, generated from post-classification change detection methods and evaluated using several approaches, ranged from 80% to 90%. The maps showed that between 1986 and 2002 the amount of urban or developed land increased from 23.7% to 32.8% of the total area, while rural cover types of agriculture, forest and wetland decreased from 69.6% to 60.5%. The results quantify the land cover change patterns in the metropolitan area and demonstrate the potential of multitemporal Landsat data to provide an accurate, economical means to map and analyze changes in land cover over time that can be used as inputs to land management and policy decisions. D 2005 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "5d5e42cdb2521c5712b372acaf7fb25a",
"text": "Unsupervised anomaly detection on multior high-dimensional data is of great importance in both fundamental machine learning research and industrial applications, for which density estimation lies at the core. Although previous approaches based on dimensionality reduction followed by density estimation have made fruitful progress, they mainly suffer from decoupled model learning with inconsistent optimization goals and incapability of preserving essential information in the low-dimensional space. In this paper, we present a Deep Autoencoding Gaussian Mixture Model (DAGMM) for unsupervised anomaly detection. Our model utilizes a deep autoencoder to generate a low-dimensional representation and reconstruction error for each input data point, which is further fed into a Gaussian Mixture Model (GMM). Instead of using decoupled two-stage training and the standard Expectation-Maximization (EM) algorithm, DAGMM jointly optimizes the parameters of the deep autoencoder and the mixture model simultaneously in an end-to-end fashion, leveraging a separate estimation network to facilitate the parameter learning of the mixture model. The joint optimization, which well balances autoencoding reconstruction, density estimation of latent representation, and regularization, helps the autoencoder escape from less attractive local optima and further reduce reconstruction errors, avoiding the need of pre-training. Experimental results on several public benchmark datasets show that, DAGMM significantly outperforms state-of-the-art anomaly detection techniques, and achieves up to 14% improvement based on the standard F1 score.",
"title": ""
},
{
"docid": "70b6779247f28ddc2e153c7bc159c98d",
"text": "Radio-frequency identification (RFID) is a wireless technology for automatic identification using electromagnetic fields in the radio frequency spectrum. In addition to the easy deployment and decreasing prices for tags, this technology has many advantages to bar codes and other common identification methods, such as no required line of sight and the ability to read several tags simultaneously. Therefore it enjoys large popularity among large businesses and continues to spread in the consumer market. Common applications include the fields of electronic article surveillance, access control, tracking, and identification of objects and animals. This paper introduces RFID technology, analyzes modern applications, and tries to point out strengths and weaknesses of RFID systems.",
"title": ""
}
] |
scidocsrr
|
85465a7c369ef42605308baa7aa806f4
|
Analyzing Locally Coordinated Cyber-Physical Attacks for Undetectable Line Outages
|
[
{
"docid": "d3ce627360a466ac95de3a61d64995e1",
"text": "The large size of power systems makes behavioral analysis of electricity markets computationally taxing. Reducing the system into a smaller equivalent, based on congestion zones, can substantially reduce the computational requirements. In this paper, we propose a scheme to determine the equivalent reactance of interfaces of a reduced system based upon the zonal power transfer distribution factors of the original system. The dc power flow model is used to formulate the problem. Test examples are provided using both an illustrative six-bus system and a more realistically sized 12 925-bus system.",
"title": ""
}
] |
[
{
"docid": "600ecbb2ae0e5337a568bb3489cd5e29",
"text": "This paper presents a novel approach for haptic object recognition with an anthropomorphic robot hand. Firstly, passive degrees of freedom are introduced to the tactile sensor system of the robot hand. This allows the planar tactile sensor patches to optimally adjust themselves to the object's surface and to acquire additional sensor information for shape reconstruction. Secondly, this paper presents an approach to classify an object directly from the haptic sensor data acquired by a palpation sequence with the robot hand - without building a 3d-model of the object. Therefore, a finite set of essential finger positions and tactile contact patterns are identified which can be used to describe a single palpation step. A palpation sequence can then be merged into a simple statistical description of the object and finally be classified. The proposed approach for haptic object recognition and the new tactile sensor system are evaluated with an anthropomorphic robot hand.",
"title": ""
},
{
"docid": "0ee891e2f75553262ebaaaf2be1d8e27",
"text": "How do you know when your core needs to change? And how do you determine what should replace it? From an in-depth study of 25 companies, the author, a strategy consultant, has discovered that it's possible to measure the vitality of a business's core. If it needs reinvention, he says, the best course is to mine hidden assets. Some of the 25 companies were in deep crisis when they began the process of redefining themselves. But, says Zook, management teams can learn to recognize early signs of erosion. He offers five diagnostic questions with which to evaluate the customers, key sources of differentiation, profit pools, capabilities, and organizational culture of your core business. The next step is strategic regeneration. In four-fifths of the companies Zook examined, a hidden asset was the centerpiece of the new strategy. He provides a map for identifying the hidden assets in your midst, which tend to fall into three categories: undervalued business platforms, untapped insights into customers, and underexploited capabilities. The Swedish company Dometic, for example, was manufacturing small absorption refrigerators for boats and RVs when it discovered a hidden asset: its understanding of, and access to, customers in the RV market. The company took advantage of a boom in that market to refocus on complete systems for live-in vehicles. The Danish company Novozymes, which produced relatively low-tech commodity enzymes such as those used in detergents, realized that its underutilized biochemical capability in genetic and protein engineering was a hidden asset and successfully refocused on creating bioengineered specialty enzymes. Your next core business is not likely to announce itself with fanfare. Use the author's tools to conduct an internal audit of possibilities and pinpoint your new focus.",
"title": ""
},
{
"docid": "360f2eb720f51c29b5561215d709139e",
"text": "A statistical hypothesis test determines whether a hypothesis should be rejected based on samples from populations. In particular, randomized controlled experiments (or A/B testing) that compare population means using, e.g., t-tests, have been widely deployed in technology companies to aid in making data-driven decisions. Samples used in these tests are collected from users and may contain sensitive information. Both the data collection and the testing process may compromise individuals’ privacy. In this paper, we study how to conduct hypothesis tests to compare population means while preserving privacy. We use the notation of local differential privacy (LDP), which has recently emerged as the main tool to ensure each individual’s privacy without the need of a trusted data collector. We propose LDP tests that inject noise into every user’s data in the samples before collecting them (so users do not need to trust the data collector), and draw conclusions with bounded type-I (significance level) and type-II errors (1− power). Our approaches can be extended to the scenario where some users require LDP while some are willing to provide exact data. We report experimental results on real-world datasets to verify the effectiveness of our approaches.",
"title": ""
},
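The LDP testing passage above lends itself to a small illustration. The following is a minimal sketch, assuming Laplace perturbation of bounded per-user values and an ordinary Welch t-test on the noisy reports; the paper's actual test statistics, error calibration, and the function names used here are illustrative assumptions, not taken from the source.

```python
# Sketch: two-sample mean comparison where each user perturbs their own
# value before submission (local differential privacy via the Laplace
# mechanism). Names, epsilon, and data are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def ldp_perturb(values, epsilon, lower=0.0, upper=1.0):
    """Each user clips their value to [lower, upper] and adds Laplace noise
    scaled to the range, so the individual report satisfies epsilon-LDP."""
    clipped = np.clip(values, lower, upper)
    scale = (upper - lower) / epsilon          # sensitivity / epsilon
    return clipped + rng.laplace(0.0, scale, size=len(values))

group_a = rng.uniform(0.40, 0.60, size=5000)   # e.g. control variant
group_b = rng.uniform(0.45, 0.65, size=5000)   # e.g. treatment variant

noisy_a = ldp_perturb(group_a, epsilon=1.0)
noisy_b = ldp_perturb(group_b, epsilon=1.0)

# Welch's t-test on the noisy reports; the injected noise inflates variance,
# so statistical power is lower than on the raw data.
t_stat, p_value = stats.ttest_ind(noisy_a, noisy_b, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```

A proper LDP test would account for the injected noise when deriving the null distribution and error bounds; the sketch only shows where the local perturbation happens.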
{
"docid": "cbdfd886416664809046ff2e674f4ae1",
"text": "Domain adaptation addresses the problem where data instances of a source domain have different distributions from that of a target domain, which occurs frequently in many real life scenarios. This work focuses on unsupervised domain adaptation, where labeled data are only available in the source domain. We propose to interpolate subspaces through dictionary learning to link the source and target domains. These subspaces are able to capture the intrinsic domain shift and form a shared feature representation for cross domain recognition. Further, we introduce a quantitative measure to characterize the shift between two domains, which enables us to select the optimal domain to adapt to the given multiple source domains. We present experiments on face recognition across pose, illumination and blur variations, cross dataset object recognition, and report improved performance over the state of the art.",
"title": ""
},
{
"docid": "d0811a8c8b760b8dadfa9a51df568bd9",
"text": "A strain of the microalga Chlorella pyrenoidosa F-9 in our laboratory showed special characteristics when transferred from autotrophic to heterotrophic culture. In order to elucidate the possible metabolic mechanism, the gene expression profiles of the autonomous organelles in the green alga C. pyrenoidosa under autotrophic and heterotrophic cultivation were compared by suppression subtractive hybridization technology. Two subtracted libraries of autotrophic and heterotrophic C. pyrenoidosa F-9 were constructed, and 160 clones from the heterotrophic library were randomly selected for DNA sequencing. Dot blot hybridization showed that the ratio of positivity was 70.31% from the 768 clones. Five chloroplast genes (ftsH, psbB, rbcL, atpB, and infA) and two mitochondrial genes (cox2 and nad6) were selected to verify their expression levels by real-time quantitative polymerase chain reaction. Results showed that the seven genes were abundantly expressed in the heterotrophic culture. Among the seven genes, the least increment of gene expression was ftsH, which was expressed 1.31-1.85-fold higher under heterotrophy culture than under autotrophy culture, and the highest increment was psbB, which increased 28.07-39.36 times compared with that under autotrophy conditions. The expression levels of the other five genes were about 10 times higher in heterotrophic algae than in autotrophic algae. In inclusion, the chloroplast and mitochondrial genes in C. pyrenoidosa F-9 might be actively involved in heterotrophic metabolism.",
"title": ""
},
{
"docid": "eb3ce498729d7088a4acf525c6961f94",
"text": "Upon vascular injury, platelets are activated by adhesion to adhesive proteins, such as von Willebrand factor and collagen, or by soluble platelet agonists, such as ADP, thrombin, and thromboxane A(2). These adhesive proteins and soluble agonists induce signal transduction via their respective receptors. The various receptor-specific platelet activation signaling pathways converge into common signaling events that stimulate platelet shape change and granule secretion and ultimately induce the \"inside-out\" signaling process leading to activation of the ligand-binding function of integrin α(IIb)β(3). Ligand binding to integrin α(IIb)β(3) mediates platelet adhesion and aggregation and triggers \"outside-in\" signaling, resulting in platelet spreading, additional granule secretion, stabilization of platelet adhesion and aggregation, and clot retraction. It has become increasingly evident that agonist-induced platelet activation signals also cross talk with integrin outside-in signals to regulate platelet responses. Platelet activation involves a series of rapid positive feedback loops that greatly amplify initial activation signals and enable robust platelet recruitment and thrombus stabilization. Recent studies have provided novel insight into the molecular mechanisms of these processes.",
"title": ""
},
{
"docid": "01572c84840fe3449dca555a087d2551",
"text": "A printed two-multiple-input multiple-output (MIMO)-antenna system incorporating a neutralization line for antenna port decoupling for wireless USB-dongle applications is proposed. The two monopoles are located on the two opposite corners of the system PCB and spaced apart by a small ground portion, which serves as a layout area for antenna feeding network and connectors for the use of standalone antennas as an optional scheme. It was found that by removing only 1.5 mm long inwards from the top edge in the small ground portion and connecting the two antennas therein with a thin printed line, the antenna port isolation can be effectively improved. The neutralization line in this study occupies very little board space, and the design requires no conventional modification to the ground plane for mitigating mutual coupling. The behavior of the neutralization line was rigorously analyzed, and the MIMO characteristics of the proposed antennas was also studied and tested in the reverberation chamber. Details of the constructed prototype are described and discussed in this paper.",
"title": ""
},
{
"docid": "681360f20a662f439afaaa022079f7c0",
"text": "We present a multi-PC/camera system that can perform 3D reconstruction and ellipsoids fitting of moving humans in real time. The system consists of five cameras. Each camera is connected to a PC which locally extracts the silhouettes of the moving person in the image captured by the camera. The five silhouette images are then sent, via local network, to a host computer to perform 3D voxel-based reconstruction by an algorithm called SPOT. Ellipsoids are then used to fit the reconstructed data. By using a simple and user-friendly interface, the user can display and observe, in real time and from any view-point, the 3D models of the moving human body. With a rate of higher than 15 frames per second, the system is able to capture nonintrusively sequence of human motions.",
"title": ""
},
{
"docid": "c7162cc2e65c52d9575fe95e2c4f62f4",
"text": "The enactive approach to cognition is typically proposed as a viable alternative to traditional cognitive science. Enactive cognition displaces the explanatory focus from the internal representations of the agent to the direct sensorimotor interaction with its environment. In this paper, we investigate enactive learning through means of artificial agent simulations. We compare the performances of the enactive agent to an agent operating on classical reinforcement learning in foraging tasks within maze environments. The characteristics of the agents are analysed in terms of the accessibility of the environmental states, goals, and exploration/exploitation tradeoffs. We confirm that the enactive agent can successfully interact with its environment and learn to avoid unfavourable interactions using intrinsically defined goals. The performance of the enactive agent is shown to be limited by the number of affordable actions.",
"title": ""
},
{
"docid": "3130e666076d119983ac77c5d77d0aed",
"text": "of Ph.D. dissertation, University of Haifa, Israel.",
"title": ""
},
{
"docid": "b1dd6c2db60cae5405c07c3757ed6696",
"text": "In this paper, we present the Smartbin system that identifies fullness of litter bin. The system is designed to collect data and to deliver the data through wireless mesh network. The system also employs duty cycle technique to reduce power consumption and to maximize operational time. The Smartbin system was tested in an outdoor environment. Through the testbed, we collected data and applied sense-making methods to obtain litter bin utilization and litter bin daily seasonality information. With such information, litter bin providers and cleaning contractors are able to make better decision to increase productivity.",
"title": ""
},
{
"docid": "043b51b50f17840508b0dfb92c895fc9",
"text": "Over the years, several security measures have been employed to combat the menace of insecurity of lives and property. This is done by preventing unauthorized entrance into buildings through entrance doors using conventional and electronic locks, discrete access code, and biometric methods such as the finger prints, thumb prints, the iris and facial recognition. In this paper, a prototyped door security system is designed to allow a privileged user to access a secure keyless door where valid smart card authentication guarantees an entry. The model consists of hardware module and software which provides a functionality to allow the door to be controlled through the authentication of smart card by the microcontroller unit. (",
"title": ""
},
{
"docid": "73545ef815fb22fa048fed3e0bc2cc8b",
"text": "Redox-based resistive switching devices (ReRAM) are an emerging class of nonvolatile storage elements suited for nanoscale memory applications. In terms of logic operations, ReRAM devices were suggested to be used as programmable interconnects, large-scale look-up tables or for sequential logic operations. However, without additional selector devices these approaches are not suited for use in large scale nanocrossbar memory arrays, which is the preferred architecture for ReRAM devices due to the minimum area consumption. To overcome this issue for the sequential logic approach, we recently introduced a novel concept, which is suited for passive crossbar arrays using complementary resistive switches (CRSs). CRS cells offer two high resistive storage states, and thus, parasitic “sneak” currents are efficiently avoided. However, until now the CRS-based logic-in-memory approach was only shown to be able to perform basic Boolean logic operations using a single CRS cell. In this paper, we introduce two multi-bit adder schemes using the CRS-based logic-in-memory approach. We proof the concepts by means of SPICE simulations using a dynamical memristive device model of a ReRAM cell. Finally, we show the advantages of our novel adder concept in terms of step count and number of devices in comparison to a recently published adder approach, which applies the conventional ReRAM-based sequential logic concept introduced by Borghetti et al.",
"title": ""
},
{
"docid": "3ba011d181a4644c8667b139c63f50ff",
"text": "Recent studies have suggested that positron emission tomography (PET) imaging with 68Ga-labelled DOTA-somatostatin analogues (SST) like octreotide and octreotate is useful in diagnosing neuroendocrine tumours (NETs) and has superior value over both CT and planar and single photon emission computed tomography (SPECT) somatostatin receptor scintigraphy (SRS). The aim of the present study was to evaluate the role of 68Ga-DOTA-1-NaI3-octreotide (68Ga-DOTANOC) in patients with SST receptor-expressing tumours and to compare the results of 68Ga-DOTA-D-Phe1-Tyr3-octreotate (68Ga-DOTATATE) in the same patient population. Twenty SRS were included in the study. Patients’ age (n = 20) ranged from 25 to 75 years (mean 55.4 ± 12.7 years). There were eight patients with well-differentiated neuroendocrine tumour (WDNET) grade1, eight patients with WDNET grade 2, one patient with poorly differentiated neuroendocrine carcinoma (PDNEC) grade 3 and one patient with mixed adenoneuroendocrine tumour (MANEC). All patients had two consecutive PET studies with 68Ga-DOTATATE and 68Ga-DOTANOC. All images were evaluated visually and maximum standardized uptake values (SUVmax) were also calculated for quantitative evaluation. On visual evaluation both tracers produced equally excellent image quality and similar body distribution. The physiological uptake sites of pituitary and salivary glands showed higher uptake in 68Ga-DOTATATE images. Liver and spleen uptake values were evaluated as equal. Both 68Ga-DOTATATE and 68Ga-DOTANOC were negative in 6 (30 %) patients and positive in 14 (70 %) patients. In 68Ga-DOTANOC images only 116 of 130 (89 %) lesions could be defined and 14 lesions were missed because of lack of any uptake. SUVmax values of lesions were significantly higher on 68Ga-DOTATATE images. Our study demonstrated that the images obtained by 68Ga-DOTATATE and 68Ga-DOTANOC have comparable diagnostic accuracy. However, 68Ga-DOTATATE seems to have a higher lesion uptake and may have a potential advantage.",
"title": ""
},
{
"docid": "5546ec134b205144fed46a585db447b4",
"text": "Historically, the control of wound infection depended on antiseptic and aseptic techniques directed at coping with the infecting organism. In the 19th century and the early part of the 20th century, wound infections had devastating consequences and a measurable mortality. Even in the 1960s, before the correct use of antibiotics and the advent of modern preoperative and postoperative care, as much as one quarter of a surgical ward might have been occupied by patients with wound complications. As a result, wound management, in itself, became an important component of ward care and of medical education. It is fortunate that many factors have intervened so that the so-called wound rounds have become a practice of the past.The epidemiology of wound infection has changed as surgeons have learned to control bacteria and the inoculum as well as to focus increasingly on the patient (the host) for measures that will continue to provide improved results. The following three factors are the determinants of any infectious process:",
"title": ""
},
{
"docid": "ccddb0cb0f0fe28090d8e0540914ee6c",
"text": "Do online consumer reviews affect restaurant demand? I investigate this question using a novel dataset combining reviews from the website Yelp.com and restaurant data from the Washington State Department of Revenue. Because Yelp prominently displays a restaurant's rounded average rating, I can identify the causal impact of Yelp ratings on demand with a regression discontinuity framework that exploits Yelp’s rounding thresholds. I present three findings about the impact of consumer reviews on the restaurant industry: (1) a one-star increase in Yelp rating leads to a 5-9 percent increase in revenue, (2) this effect is driven by independent restaurants; ratings do not affect restaurants with chain affiliation, and (3) chain restaurants have declined in market share as Yelp penetration has increased. This suggests that online consumer reviews substitute for more traditional forms of reputation. I then test whether consumers use these reviews in a way that is consistent with standard learning models. I present two additional findings: (4) consumers do not use all available information and are more responsive to quality changes that are more visible and (5) consumers respond more strongly when a rating contains more information. Consumer response to a restaurant’s average rating is affected by the number of reviews and whether the reviewers are certified as “elite” by Yelp, but is unaffected by the size of the reviewers’ Yelp friends network. † Harvard Business School, mluca@hbs.edu",
"title": ""
},
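The regression discontinuity design described above can be sketched on simulated data. This is only an illustration of the identification idea, assuming a single rounding cutoff and a made-up effect size; none of the numbers, thresholds, or variable names come from the study.

```python
# Sketch: regression discontinuity at a rating-rounding threshold.
# Restaurants just above the cutoff get their displayed rating rounded up;
# the jump in (log) revenue at the cutoff identifies the rating effect.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 4000
raw_rating = rng.uniform(3.0, 3.5, size=n)       # continuous average rating
threshold = 3.25                                  # assumed rounding cutoff
rounded_up = (raw_rating >= threshold).astype(float)

# Simulated log revenue: smooth in the raw rating, plus a 5% jump from the
# displayed (rounded) rating, plus noise.
log_revenue = 10 + 0.3 * raw_rating + 0.05 * rounded_up + rng.normal(0, 0.2, n)

# Local linear RD: regress on the centered running variable, the treatment
# indicator, and their interaction.
running = raw_rating - threshold
X = sm.add_constant(np.column_stack([rounded_up, running, rounded_up * running]))
fit = sm.OLS(log_revenue, X).fit()
print("estimated jump at threshold:", round(float(fit.params[1]), 4))
```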
{
"docid": "c9394a05e7f18eece53d082e346605bc",
"text": "Machine learning (ML) is one of the intelligent methodologies that have shown promising results in the domains of classification and prediction. One of the expanding areas necessitating good predictive accuracy is sport prediction, due to the large monetary amounts involved in betting. In addition, club managers and owners are striving for classification models so that they can understand and formulate strategies needed to win matches. These models are based on numerous factors involved in the games, such as the results of historical matches, player performance indicators, and opposition information. This paper provides a critical analysis of the literature in ML, focusing on the application of Artificial Neural Network (ANN) to sport results prediction. In doing so, we identify the learning methodologies utilised, data sources, appropriate means of model evaluation, and specific challenges of predicting sport results. This then leads us to propose a novel sport prediction framework through which ML can be used as a learning strategy. Our research will hopefully be informative and of use to those performing future research in this application area. 2017 The Authors. Production and hosting by Elsevier B.V. on behalf of King Saud University. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).",
"title": ""
},
{
"docid": "719bb67f6c0ced91a2ba4d08b8708470",
"text": "We perform a series of 3-class sentiment classification experiments on a set of 2,624 tweets produced during the run-up to the Irish General Elections in February 2011. Even though tweets that have been labelled as sarcastic have been omitted from this set, it still represents a difficult test set and the highest accuracy we achieve is 61.6% using supervised learning and a feature set consisting of subjectivity-lexicon-based scores, Twitterspecific features and the top 1,000 most discriminative words. This is superior to various naive unsupervised approaches which use subjectivity lexicons to compute an overall sentiment score for a <tweet,political party> pair.",
"title": ""
},
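As a rough companion to the tweet-classification setup above, here is a minimal scikit-learn sketch that keeps only the top-k most discriminative unigrams (via chi-squared selection) before fitting a classifier. The subjectivity-lexicon scores, Twitter-specific features, and the actual election tweets are not included; the toy texts, labels, and k value are assumptions.

```python
# Sketch: 3-class sentiment classification keeping the k most discriminative
# unigrams, as one plausible reading of "top 1,000 most discriminative words".
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

tweets = ["great debate performance by the candidate",
          "another broken promise, so disappointing",
          "polls open tomorrow at 9am",
          "love the new manifesto",
          "worst government in years",
          "turnout figures released this afternoon"] * 50
labels = ["pos", "neg", "neu", "pos", "neg", "neu"] * 50

clf = Pipeline([
    ("bow", CountVectorizer(lowercase=True)),
    ("select", SelectKBest(chi2, k=20)),     # k=1000 in the paper's setup
    ("model", LogisticRegression(max_iter=1000)),
])

scores = cross_val_score(clf, tweets, labels, cv=5)
print("accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```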
{
"docid": "64635c4d7d372acdba1fc3c36ffaaf12",
"text": "We investigate a technique from the literature, called the phantom-types technique, that uses parametric polymorphism, type constraints, and unification of polymorphic types to model a subtyping hierarchy. Hindley-Milner type systems, such as the one found in Standard ML, can be used to enforce the subtyping relation, at least for first-order values. We show that this technique can be used to encode any finite subtyping hierarchy (including hierarchies arising from multiple interface inheritance). We formally demonstrate the suitability of the phantom-types technique for capturing first-order subtyping by exhibiting a type-preserving translation from a simple calculus with bounded polymorphism to a calculus embodying the type system of SML.",
"title": ""
}
] |
scidocsrr
|
8d4635e50565994ce3da1cfeeea5371f
|
Natural Language Inference over Interaction Space
|
[
{
"docid": "0201a5f0da2430ec392284938d4c8833",
"text": "Natural language sentence matching is a fundamental technology for a variety of tasks. Previous approaches either match sentences from a single direction or only apply single granular (wordby-word or sentence-by-sentence) matching. In this work, we propose a bilateral multi-perspective matching (BiMPM) model. Given two sentences P and Q, our model first encodes them with a BiLSTM encoder. Next, we match the two encoded sentences in two directions P against Q and Q against P . In each matching direction, each time step of one sentence is matched against all timesteps of the other sentence from multiple perspectives. Then, another BiLSTM layer is utilized to aggregate the matching results into a fixed-length matching vector. Finally, based on the matching vector, a decision is made through a fully connected layer. We evaluate our model on three tasks: paraphrase identification, natural language inference and answer sentence selection. Experimental results on standard benchmark datasets show that our model achieves the state-of-the-art performance on all tasks.",
"title": ""
}
] |
[
{
"docid": "60922247ab6ec494528d3a03c0909231",
"text": "This paper proposes a new \"zone controlled induction heating\" (ZCIH) system. The ZCIH system consists of two or more sets of a high-frequency inverter and a split work coil, which adjusts the coil current amplitude in each zone independently. The ZCIH system has capability of controlling the exothermic distribution on the work piece to avoid the strain caused by a thermal expansion. As a result, the ZCIH system enables a rapid heating performance as well as an temperature uniformity. This paper proposes current phase control making the coil current in phase with each other, to adjust the coil current amplitude even when a mutual inductance exists between the coils. This paper presents operating principle, theoretical analysis, and experimental results obtained from a laboratory setup and a six-zone prototype for a semiconductor processing.",
"title": ""
},
{
"docid": "3b39cb869ee94778c5c20bff169631f2",
"text": "Mobile app reviews by users contain a wealth of information on the issues that users are experiencing. For example, a review might contain a feature request, a bug report, and/or a privacy complaint. Developers, users and app store owners (e.g. Apple, Blackberry, Google, Microsoft) can benefit from a better understanding of these issues – developers can better understand users’ concerns, app store owners can spot anomalous apps, and users can compare similar apps to decide which ones to download or purchase. However, user reviews are not labelled, e.g. we do not know which types of issues are raised in a review. Hence, one must sift through potentially thousands of reviews with slang and abbreviations to understand the various types of issues. Moreover, the unstructured and informal nature of reviews complicates the automated labelling of such reviews. In this paper, we study the multi-labelled nature of reviews from 20 mobile apps in the Google Play Store and Apple App Store. We find that up to 30 % of the reviews raise various types of issues in a single review (e.g. a review might contain a feature request and a bug report). We then propose an approach that can automatically assign multiple labels to reviews based on the raised issues with a precision of 66 % and recall of 65 %. Finally, we apply our approach to address three proof-of-concept analytics use case scenarios: (i) we compare competing apps to assist developers and users, (ii) we provide an overview of 601,221 reviews from 12,000 apps in the Google Play Store to assist app store owners and developers and (iii) we detect anomalous apps in the Google Play Store to assist app store owners and users.",
"title": ""
},
{
"docid": "3d10e6337a4c4f1e8bc646f49873636d",
"text": "Glutamate neurotransmission plays a crucial role in a variety of functions in the central nervous system, including learning and memory. However, little is known about the mechanisms underlying this process in mammals because of the scarceness of experimental models that permit correlation of behavioral and biochemical changes occurring during the different stages of learning and the retrieval of the acquired information. One model that has been useful to study these mechanisms is conditioned taste aversion (CTA), a paradigm in which animals learn to avoid new tastes when they are associated with gastrointestinal malaise. Glutamate receptors of the N-methyl-D-aspartate (NMDA) type appear to be necessary in this process, because blockade of this receptor prevents CTA. Phosphorylation of the main subunits of the NMDA receptor is a well-established biochemical mechanism for the modulation of the receptor response. Such modulation seems to be involved in CTA, because inhibitors of protein kinase C (PKC) block CTA acquisition and because the exposure to an unfamiliar taste results in an increased phosphorylation of tyrosine and serine residues of the NR2B subunit of the receptor in the insular cortex, the cerebral region where gustatory and visceral information converge. In this work we review these mechanisms of NMDA receptor modulation in CTA.",
"title": ""
},
{
"docid": "c2f53cf694b43d779b11d98a0cc03c6e",
"text": "The cross entropy (CE) method is a model based search method to solve optimization problems where the objective function has minimal structure. The Monte-Carlo version of the CE method employs the naive sample averaging technique which is inefficient, both computationally and space wise. We provide a novel stochastic approximation version of the CE method, where the sample averaging is replaced with incremental geometric averaging. This approach can save considerable computational and storage costs. Our algorithm is incremental in nature and possesses additional attractive features such as accuracy, stability, robustness and convergence to the global optimum for a particular class of objective functions. We evaluate the algorithm on a variety of global optimization benchmark problems and the results obtained corroborate our theoretical findings.",
"title": ""
},
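The cross-entropy passage above can be illustrated with a small numpy sketch in which the Gaussian sampling parameters are updated incrementally with an exponentially weighted (geometric-style) step instead of a plain batch average. The objective function, step size, and population settings below are assumptions and do not reproduce the paper's exact update rule.

```python
# Sketch: cross-entropy (CE) method for continuous minimization with an
# incremental, geometrically weighted update of the Gaussian sampling
# distribution. Step sizes and the test objective are assumptions.
import numpy as np

def objective(x):
    # Rastrigin-style multimodal test function (global minimum at the origin).
    return np.sum(x**2 - 10 * np.cos(2 * np.pi * x) + 10, axis=-1)

def ce_minimize(dim=5, iters=200, pop=100, elite_frac=0.1, step=0.2, seed=0):
    rng = np.random.default_rng(seed)
    mu, sigma = np.zeros(dim) + 3.0, np.ones(dim) * 3.0
    n_elite = max(1, int(pop * elite_frac))
    for _ in range(iters):
        samples = rng.normal(mu, sigma, size=(pop, dim))
        elite = samples[np.argsort(objective(samples))[:n_elite]]
        # Incremental (exponentially weighted) move toward the elite statistics,
        # in place of recomputing a plain average over all samples.
        mu = (1 - step) * mu + step * elite.mean(axis=0)
        sigma = (1 - step) * sigma + step * elite.std(axis=0) + 1e-3
    return mu, objective(mu)

best_x, best_f = ce_minimize()
print("best point:", np.round(best_x, 3), "objective:", round(float(best_f), 4))
```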
{
"docid": "d62bded822aff38333a212ed1853b53c",
"text": "The design of an activity recognition and monitoring system based on the eWatch, multi-sensor platform worn on different body positions, is presented in this paper. The system identifies the user's activity in realtime using multiple sensors and records the classification results during a day. We compare multiple time domain feature sets and sampling rates, and analyze the tradeoff between recognition accuracy and computational complexity. The classification accuracy on different body positions used for wearing electronic devices was evaluated",
"title": ""
},
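To make the time-domain feature comparison above concrete, here is a minimal sketch of windowed feature extraction from a body-worn accelerometer signal followed by a lightweight classifier. The sampling rate, window length, feature list, and synthetic signals are illustrative assumptions, not the eWatch configuration.

```python
# Sketch: cheap time-domain features over sliding windows of an accelerometer
# signal, fed to a small classifier. All parameters are assumed for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

FS = 20          # assumed sampling rate in Hz
WINDOW = 2 * FS  # 2-second windows

def window_features(signal):
    """Low-cost time-domain features suitable for low-power devices."""
    return np.array([
        signal.mean(),
        signal.std(),
        np.abs(np.diff(signal)).mean(),                          # mean abs delta
        (np.diff(np.sign(signal - signal.mean())) != 0).sum(),   # zero crossings
    ])

def make_windows(signal, label):
    feats = [window_features(signal[i:i + WINDOW])
             for i in range(0, len(signal) - WINDOW, WINDOW)]
    return np.array(feats), np.full(len(feats), label)

rng = np.random.default_rng(0)
t = np.arange(0, 120, 1 / FS)
walking = np.sin(2 * np.pi * 2.0 * t) + 0.3 * rng.normal(size=len(t))
sitting = 0.05 * rng.normal(size=len(t))

Xw, yw = make_windows(walking, 1)
Xs, ys = make_windows(sitting, 0)
X, y = np.vstack([Xw, Xs]), np.concatenate([yw, ys])

clf = RandomForestClassifier(n_estimators=50, random_state=0)
print("cv accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```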
{
"docid": "79f5415cfc7f89685227abb130cd75e5",
"text": "Software engineering is knowledge-intensive work, and how to manage software engineering knowledge has received much attention. This systematic review identifies empirical studies of knowledge management initiatives in software engineering, and discusses the concepts studied, the major findings, and the research methods used. Seven hundred and sixty-two articles were identified, of which 68 were studies in an industry context. Of these, 29 were empirical studies and 39 reports of lessons learned. More than half of the empirical studies were case studies. The majority of empirical studies relate to technocratic and behavioural aspects of knowledge management, while there are few studies relating to economic, spatial and cartographic approaches. A finding reported across multiple papers was the need to not focus exclusively on explicit knowledge, but also consider tacit knowledge. We also describe implications for research and for practice.",
"title": ""
},
{
"docid": "c759c1a7f0479906750f4261f5659700",
"text": "Recently, it has been shown that excellent results can be achieved in both facial landmark localization and pose-invariant face recognition. These breakthroughs are attributed to the efforts of the community to manually annotate facial images in many different poses and to collect 3D facial data. In this paper, we propose a novel method for joint frontal view reconstruction and landmark localization using a small set of frontal images only. By observing that the frontal facial image is the one having the minimum rank of all different poses, an appropriate model which is able to jointly recover the frontalized version of the face as well as the facial landmarks is devised. To this end, a suitable optimization problem, involving the minimization of the nuclear norm and the matrix l1 norm is solved. The proposed method is assessed in frontal face reconstruction, face landmark localization, pose-invariant face recognition, and face verification in unconstrained conditions. The relevant experiments have been conducted on 8 databases. The experimental results demonstrate the effectiveness of the proposed method in comparison to the state-of-the-art methods for the target problems.",
"title": ""
},
{
"docid": "649faa18b01e86f6c2022880326373d7",
"text": "Is battery energy storage a feasible solution for lowering the operational costs of electric vehicle fast charging and reducing its impact on local grids? The thesis project aims at answering this question for the Swedish scenario. The proposed solution (fast charging station coupled with storage) is modelled in MATLAB, and its performance is tested in the framework provided by Swedish regulation and electricity tariff structure. The analysis is centred on the economic performance of the system. Its cost-effectiveness is assessed by means of an optimisation algorithm, designed for delivering the optimal control strategy and the required equipment sizing. A mixed-integer linear programming (MILP) formulation is utilised. The configuration and operative costs of conventional fast charging stations are used as a benchmark for the output of the optimisation. Sensitivity analysis is conducted on most relevant parameters: charging load, battery price and tariff structure. The modelling of the charging demand is based on statistics from currently implemented 50 kW DC chargers in Sweden. Overall, results show that with current figures the system may be an economically viable solution for both reducing costs and lowering the impact on the local distribution grid, at least during peak-period pricing. However, sensitivity analysis illustrates how system design and performance are highly dependent on input parameters. Among these, electricity tariff was identified as the most important. Consequently, detailed discussion on the influence of this parameter is conducted. Finally, the study shows how the system is in line with most recent directives proposed by the European Commission.",
"title": ""
},
{
"docid": "397a10734b9850629d9b0348baec95af",
"text": "Genetic algorithms (GAs) have been extensively used as a means for performing global optimization in a simple yet reliable manner. However, in some realistic engineering design optimization domains the simple, classical implementation of a GA based on binary encoding and bit mutation and crossover is often ineecient and unable to reach the global optimum. In this paper we describe a GA for continuous design-space optimization that uses new GA operators and strategies tailored to the structure and properties of engineering design domains. Empirical results in the domains of supersonic transport aircraft and supersonic missile inlets demonstrate that the newly formulated GA can be signiicantly better than the classical GA in both eeciency and reliability.",
"title": ""
},
{
"docid": "1dfbe95e53aeae347c2b42ef297a859f",
"text": "With the rapid growth of knowledge bases (KBs) on the web, how to take full advantage of them becomes increasingly important. Question answering over knowledge base (KB-QA) is one of the promising approaches to access the substantial knowledge. Meanwhile, as the neural networkbased (NN-based) methods develop, NNbased KB-QA has already achieved impressive results. However, previous work did not put more emphasis on question representation, and the question is converted into a fixed vector regardless of its candidate answers. This simple representation strategy is not easy to express the proper information in the question. Hence, we present an end-to-end neural network model to represent the questions and their corresponding scores dynamically according to the various candidate answer aspects via cross-attention mechanism. In addition, we leverage the global knowledge inside the underlying KB, aiming at integrating the rich KB information into the representation of the answers. As a result, it could alleviates the out-of-vocabulary (OOV) problem, which helps the crossattention model to represent the question more precisely. The experimental results on WebQuestions demonstrate the effectiveness of the proposed approach.",
"title": ""
},
{
"docid": "7ac04ab3385f6477e30db3ba62cec3ef",
"text": "Ensemble methods for classification and regression have focused a great deal of attention in recent years. They have shown, both theoretically and empirically, that they are able to perform substantially better than single models in a wide range of tasks. We have adapted an ensemble method to the problem of predicting future values of time series using recurrent neural networks (RNNs) as base learners. The improvement is made by combining a large number of RNNs, each of which is generated by training on a different set of examples. This algorithm is based on the boosting algorithm where difficult points of the time series are concentrated on during the learning process however, unlike the original algorithm, we introduce a new parameter for tuning the boosting influence on available examples. We test our boosting algorithm for RNNs on single-step-ahead and multi-step-ahead prediction problems. The results are then compared to other regression methods, including those of different local approaches. The overall results obtained through our ensemble method are more accurate than those obtained through the standard method, backpropagation through time, on these datasets and perform significantly better even when long-range dependencies play an important role. 2006 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "ea277c160544fb54bef69e2a4fa85233",
"text": "This paper proposes approaches to measure linkography in protocol studies of designing. It outlines the ideas behind using clustering and Shannon’s entropy as measures of designing behaviour. Hypothetical cases are used to illustrate the methods. The paper concludes that these methods may form the basis of a new tool to assess designer behaviour in terms of chunking of design ideas and the opportunities for idea development.",
"title": ""
},
{
"docid": "b41a8bbd52a0c6a25cb1a102eb5a2f8b",
"text": "Although the broad social and business success of recommender systems has been achieved across several domains, there is still a long way to go in terms of user satisfaction. One of the key dimensions for significant improvement is the concept of unexpectedness. In this article, we propose a method to improve user satisfaction by generating unexpected recommendations based on the utility theory of economics. In particular, we propose a new concept of unexpectedness as recommending to users those items that depart from what they would expect from the system - the consideration set of each user. We define and formalize the concept of unexpectedness and discuss how it differs from the related notions of novelty, serendipity, and diversity. In addition, we suggest several mechanisms for specifying the users’ expectations and propose specific performance metrics to measure the unexpectedness of recommendation lists. We also take into consideration the quality of recommendations using certain utility functions and present an algorithm for providing users with unexpected recommendations of high quality that are hard to discover but fairly match their interests. Finally, we conduct several experiments on “real-world” datasets and compare our recommendation results with other methods. The proposed approach outperforms these baseline methods in terms of unexpectedness and other important metrics, such as coverage, aggregate diversity and dispersion, while avoiding any accuracy loss.",
"title": ""
},
{
"docid": "ca599d7b637d25835d881c6803a9e064",
"text": "Accumulating research shows that prenatal exposure to maternal stress increases the risk for behavioral and mental health problems later in life. This review systematically analyzes the available human studies to identify harmful stressors, vulnerable periods during pregnancy, specificities in the outcome and biological correlates of the relation between maternal stress and offspring outcome. Effects of maternal stress on offspring neurodevelopment, cognitive development, negative affectivity, difficult temperament and psychiatric disorders are shown in numerous epidemiological and case-control studies. Offspring of both sexes are susceptible to prenatal stress but effects differ. There is not any specific vulnerable period of gestation; prenatal stress effects vary for different gestational ages possibly depending on the developmental stage of specific brain areas and circuits, stress system and immune system. Biological correlates in the prenatally stressed offspring are: aberrations in neurodevelopment, neurocognitive function, cerebral processing, functional and structural brain connectivity involving amygdalae and (pre)frontal cortex, changes in hypothalamo-pituitary-adrenal (HPA)-axis and autonomous nervous system.",
"title": ""
},
{
"docid": "cc45fefcf65e5ab30d5bb68d390beb4c",
"text": "In this paper, the basic running performance of the cylindrical tracked vehicle with sideways mobility is presented. The crawler mechanism is of circular cross-section and has active rolling axes at the center of the circles. Conventional crawler mechanisms can support massive loads, but cannot produce sideways motion. Additionally, previous crawler edges sink undesirably on soft ground, particularly when the vehicle body is subject to a sideways tilt. The proposed design solves these drawbacks by adopting a circular cross-section crawler. A prototype. Basic motion experiments with confirm the novel properties of this mechanism: sideways motion and robustness against edge-sink.",
"title": ""
},
{
"docid": "7bf5aaa12c9525909f39dc8af8774927",
"text": "Certain deterministic non-linear systems may show chaotic behaviour. Time series derived from such systems seem stochastic when analyzed with linear techniques. However, uncovering the deterministic structure is important because it allows constructing more realistic and better models and thus improved predictive capabilities. This paper provides a review of two main key features of chaotic systems, the dimensions of their strange attractors and the Lyapunov exponents. The emphasis is on state space reconstruction techniques that are used to estimate these properties, given scalar observations. Data generated from equations known to display chaotic behaviour are used for illustration. A compilation of applications to real data from widely di erent elds is given. If chaos is found to be present, one may proceed to build non-linear models, which is the topic of the second paper in this series.",
"title": ""
},
{
"docid": "2f00b33de4c500ac30098385dee3e280",
"text": "An algorithm is developed for computing the matrix cosine, building on a proposal of Serbin and Blalock. The algorithm scales the matrix by a power of 2 to make the ∞-norm less than or equal to 1, evaluates a Padé approximant, and then uses the double angle formula cos (2A)=2cos (A)2−I to recover the cosine of the original matrix. In addition, argument reduction and balancing is used initially to decrease the norm. We give truncation and rounding error analyses to show that an [8,8] Padé approximant produces the cosine of the scaled matrix correct to machine accuracy in IEEE double precision arithmetic, and we show that this Padé approximant can be more efficiently evaluated than a corresponding Taylor series approximation. We also provide error analysis to bound the propagation of errors in the double angle recurrence. Numerical experiments show that our algorithm is competitive in accuracy with the Schur–Parlett method of Davies and Higham, which is designed for general matrix functions, and it is substantially less expensive than that method for matrices of ∞-norm of order 1. The dominant computational kernels in the algorithm are matrix multiplication and solution of a linear system with multiple right-hand sides, so the algorithm is well suited to modern computer architectures.",
"title": ""
},
{
"docid": "826612712b3a44da30e6fb7e2dba95bc",
"text": "Flyback converters show the characteristics of current source when operating in discontinuous conduction mode (DCM) and boundary conduction mode (BCM), which makes it widely used in photovoltaic grid-connected micro-inverter. In this paper, an active clamp interleaved flyback converter operating with combination of DCM and BCM is proposed in micro-inverter to achieve zero voltage switching (ZVS) for both of primary switches and fully recycle the energy in the leakage inductance. The proposed control method makes active-clamping part include only one clamp capacitor. In DCM area, only one flyback converter operates and turn-off of its auxiliary switch is suggested here to reduce resonant conduction losses, which improve the efficiency at light loads. Performance of the proposed circuit is validated by the simulation results and experimental results.",
"title": ""
},
{
"docid": "f4e15eb37843ff4e2938b1b69ab88cb3",
"text": "Static analysis tools are often used by software developers to entail early detection of potential faults, vulnerabilities, code smells, or to assess the source code adherence to coding standards and guidelines. Also, their adoption within Continuous Integration (CI) pipelines has been advocated by researchers and practitioners. This paper studies the usage of static analysis tools in 20 Java open source projects hosted on GitHub and using Travis CI as continuous integration infrastructure. Specifically, we investigate (i) which tools are being used and how they are configured for the CI, (ii) what types of issues make the build fail or raise warnings, and (iii) whether, how, and after how long are broken builds and warnings resolved. Results indicate that in the analyzed projects build breakages due to static analysis tools are mainly related to adherence to coding standards, and there is also some attention to missing licenses. Build failures related to tools identifying potential bugs or vulnerabilities occur less frequently, and in some cases such tools are activated in a \"softer\" mode, without making the build fail. Also, the study reveals that build breakages due to static analysis tools are quickly fixed by actually solving the problem, rather than by disabling the warning, and are often properly documented.",
"title": ""
},
{
"docid": "a8b7d6b3a43d39c8200e7787c3d58a0e",
"text": "Being Scrum the agile software development framework most commonly used in the software industry, its applicability is attracting great attention to the academia. That is why this topic is quite often included in computer science and related university programs. In this article, we present a course design of a Software Engineering course where an educational framework and an open-source agile project management tool were used to develop real-life projects by undergraduate students. During the course, continuous guidance was given by the teaching staff to facilitate the students' learning of Scrum. Results indicate that students find it easy to use the open-source tool and helpful to apply Scrum to a real-life project. However, the unavailability of the client and conflicts among the team members have negative impact on the realization of projects. The guidance given to students along the course helped identify five common issues faced by students through the learning process.",
"title": ""
}
] |
scidocsrr
|
a11f320db8280a6143977c79ec8ce5d4
|
Probabilistic models for answer-ranking in multilingual question-answering
|
[
{
"docid": "c698f7d6b487cc7c87d7ff215d7f12b2",
"text": "This paper reports a controlled study with statistical signi cance tests on ve text categorization methods: the Support Vector Machines (SVM), a k-Nearest Neighbor (kNN) classi er, a neural network (NNet) approach, the Linear Leastsquares Fit (LLSF) mapping and a Naive Bayes (NB) classier. We focus on the robustness of these methods in dealing with a skewed category distribution, and their performance as function of the training-set category frequency. Our results show that SVM, kNN and LLSF signi cantly outperform NNet and NB when the number of positive training instances per category are small (less than ten), and that all the methods perform comparably when the categories are su ciently common (over 300 instances).",
"title": ""
},
{
"docid": "1ac4ac9b112c2554db37de2070d7c2df",
"text": "This paper studies empirically the effect of sampling and threshold-moving in training cost-sensitive neural networks. Both oversampling and undersampling are considered. These techniques modify the distribution of the training data such that the costs of the examples are conveyed explicitly by the appearances of the examples. Threshold-moving tries to move the output threshold toward inexpensive classes such that examples with higher costs become harder to be misclassified. Moreover, hard-ensemble and soft-ensemble, i.e., the combination of above techniques via hard or soft voting schemes, are also tested. Twenty-one UCl data sets with three types of cost matrices and a real-world cost-sensitive data set are used in the empirical study. The results suggest that cost-sensitive learning with multiclass tasks is more difficult than with two-class tasks, and a higher degree of class imbalance may increase the difficulty. It also reveals that almost all the techniques are effective on two-class tasks, while most are ineffective and even may cause negative effect on multiclass tasks. Overall, threshold-moving and soft-ensemble are relatively good choices in training cost-sensitive neural networks. The empirical study also suggests that some methods that have been believed to be effective in addressing the class imbalance problem may, in fact, only be effective on learning with imbalanced two-class data sets.",
"title": ""
}
] |
[
{
"docid": "c4e92e313fbad1299340c76902b5ef35",
"text": "This paper presents the simple and inexpensive method to implement a square-root extractor for voltage input signal. The proposed extractor is based on the use of two operational amplifiers (op amps) as only active elements. The proposed technique employs the op amp supply-current sensing to achieve an inherently quadratic characteristic. The low-output distortion in output signal can be achieved. Experimental results verifying the characteristic of the proposed circuit are also included.",
"title": ""
},
{
"docid": "179be5148a006cd12d0182686c36852b",
"text": "A simple, fast, and approximate voxel-based approach to 6-DOF haptic rendering is presented. It can reliably sustain a 1000 Hz haptic refresh rate without resorting to asynchronous physics and haptic rendering loops. It enables the manipulation of a modestly complex rigid object within an arbitrarily complex environment of static rigid objects. It renders a short-range force field surrounding the static objects, which repels the manipulated object and strives to maintain a voxel-scale minimum separation distance that is known to preclude exact surface interpenetration. Force discontinuities arising from the use of a simple penalty force model are mitigated by a dynamic simulation based on virtual coupling. A generalization of octree improves voxel memory efficiency. In a preliminary implementation, a commercially available 6-DOF haptic prototype device is driven at a constant 1000 Hz haptic refresh rate from one dedicated haptic processor, with a separate processor for graphics. This system yields stable and convincing force feedback for a wide range of user controlled motion inside a large, complex virtual environment, with very few surface interpenetration events. This level of performance appears suited to applications such as certain maintenance and assembly task simulations that can tolerate voxel-scale minimum separation distances.",
"title": ""
},
{
"docid": "5ea560095b752ca8e7fb6672f4092980",
"text": "Access control is a security aspect whose requirements evolve with technology advances and, at the same time, contemporary social contexts. Multitudes of access control models grow out of their respective application domains such as healthcare and collaborative enterprises; and even then, further administering means, human factor considerations, and infringement management are required to effectively deploy the model in the particular usage environment. This paper presents a survey of access control mechanisms along with their deployment issues and solutions available today. We aim to give a comprehensive big picture as well as pragmatic deployment details to guide in understanding, setting up and enforcing access control in its real world application.",
"title": ""
},
{
"docid": "f779f376172ae09ee8a5be4f36a7a114",
"text": "In this paper, we presented our study and benchmark on Reverse Image Search (RIS) methods, with a special focus on finding almost similar images in a very large image collection. In our framework we concentrate our study on radius (threshold) based image search methods. We focused our study on perceptual hash based solutions for their scalability, but other solutions seem to give also good results. We studied the speed and the accuracy (precision/recall) of several existing image features. We also proposed a two-layer method that combines a fast but not very precise method with a slower but more accurate method to provide a scalable and precise RIS system. MOTS-CLES : recherche d’images inversé, pHash, optimisation, SI images",
"title": ""
},
{
"docid": "d5641090db7579faff175e4548c25096",
"text": "Integration is central to HIV-1 replication and helps mold the reservoir of cells that persists in AIDS patients. HIV-1 interacts with specific cellular factors to target integration to interior regions of transcriptionally active genes within gene-dense regions of chromatin. The viral capsid interacts with several proteins that are additionally implicated in virus nuclear import, including cleavage and polyadenylation specificity factor 6, to suppress integration into heterochromatin. The viral integrase protein interacts with transcriptional co-activator lens epithelium-derived growth factor p75 to principally position integration within gene bodies. The integrase additionally senses target DNA distortion and nucleotide sequence to help fine-tune the specific phosphodiester bonds that are cleaved at integration sites. Research into virus–host interactions that underlie HIV-1 integration targeting has aided the development of a novel class of integrase inhibitors and may help to improve the safety of viral-based gene therapy vectors.",
"title": ""
},
{
"docid": "3bae971fce094c3ff6c34595bac60ef2",
"text": "In this work, we present a 3D 128Gb 2bit/cell vertical-NAND (V-NAND) Flash product. The use of barrier-engineered materials and gate all-around structure in the 3D V-NAND cell exhibits advantages over 1xnm planar NAND, such as small Vth shift due to small cell coupling and narrow natural Vth distribution. Also, a negative counter-pulse scheme realizes a tightly programmed cell distribution. In order to reduce the effect of a large WL coupling, a glitch-canceling discharge scheme and a pre-offset control scheme is implemented. Furthermore, an external high-voltage supply scheme along with the proper protection scheme for a high-voltage failure is used to achieve low power consumption. The chip accomplishes 50MB/s write throughput with 3K endurance for typical embedded applications. Also, extended endurance of 35K is achieved with 36MB/s of write throughput for data center and enterprise SSD applications. And 2nd generation of 3D V-NAND opens up a whole new world at SSD endurance, density and battery life for portables.",
"title": ""
},
{
"docid": "65fac26fc29ff492eb5a3e43f58ecfb2",
"text": "The introduction of new anticancer drugs into the clinic is often hampered by a lack of qualified biomarkers. Method validation is indispensable to successful biomarker qualification and is also a regulatory requirement. Recently, the fit-for-purpose approach has been developed to promote flexible yet rigorous biomarker method validation, although its full implications are often overlooked. This review aims to clarify many of the scientific and regulatory issues surrounding biomarker method validation and the analysis of samples collected from clinical trial subjects. It also strives to provide clear guidance on validation strategies for each of the five categories that define the majority of biomarker assays, citing specific examples.",
"title": ""
},
{
"docid": "7a5d22ae156d6a62cfd080c2a58103d2",
"text": "Stochastic neurons and hard non-linearities can be useful for a number of reasons in deep learning models, but in many cases they pose a challenging problem: how to estimate the gradient of a loss function with respect to the input of such stochastic or non-smooth neurons? I.e., can we “back-propagate” through these stochastic neurons? We examine this question, existing approaches, and compare four families of solutions, applicable in different settings. One of them is the minimum variance unbiased gradient estimator for stochatic binary neurons (a special case of the REINFORCE algorithm). A second approach, introduced here, decomposes the operation of a binary stochastic neuron into a stochastic binary part and a smooth differentiable part, which approximates the expected effect of the pure stochatic binary neuron to first order. A third approach involves the injection of additive or multiplicative noise in a computational graph that is otherwise differentiable. A fourth approach heuristically copies the gradient with respect to the stochastic output directly as an estimator of the gradient with respect to the sigmoid argument (we call this the straight-through estimator). To explore a context where these estimators are useful, we consider a small-scale version of conditional computation, where sparse stochastic units form a distributed representation of gaters that can turn off in combinatorially many ways large chunks of the computation performed in the rest of the neural network. In this case, it is important that the gating units produce an actual 0 most of the time. The resulting sparsity can be potentially be exploited to greatly reduce the computational cost of large deep networks for which conditional computation would be useful.",
"title": ""
},
{
"docid": "5f7b4857f0fdc614d615a7fedbf75145",
"text": "The authors evaluated the reliability and validity of a set of 7 behavioral decision-making tasks, measuring different aspects of the decision-making process. The tasks were administered to individuals from diverse populations. Participants showed relatively consistent performance within and across the 7 tasks, which were then aggregated into an Adult Decision-Making Competence (A-DMC) index that showed good reliability. The validity of the 7 tasks and of overall A-DMC emerges in significant relationships with measures of socioeconomic status, cognitive ability, and decision-making styles. Participants who performed better on the A-DMC were less likely to report negative life events indicative of poor decision making, as measured by the Decision Outcomes Inventory. Significant predictive validity remains when controlling for demographic measures, measures of cognitive ability, and constructive decision-making styles. Thus, A-DMC appears to be a distinct construct relevant to adults' real-world decisions.",
"title": ""
},
{
"docid": "5d2e0bd9c691af163e0e2221b9406c82",
"text": "Many high-throughput experimental technologies have been developed to assess the effects of large numbers of mutations (variation) on phenotypes. However, designing functional assays for these methods is challenging, and systematic testing of all combinations is impossible, so robust methods to predict the effects of genetic variation are needed. Most prediction methods exploit evolutionary sequence conservation but do not consider the interdependencies of residues or bases. We present EVmutation, an unsupervised statistical method for predicting the effects of mutations that explicitly captures residue dependencies between positions. We validate EVmutation by comparing its predictions with outcomes of high-throughput mutagenesis experiments and measurements of human disease mutations and show that it outperforms methods that do not account for epistasis. EVmutation can be used to assess the quantitative effects of mutations in genes of any organism. We provide pre-computed predictions for ∼7,000 human proteins at http://evmutation.org/.",
"title": ""
},
{
"docid": "194156892cbdb0161e9aae6a01f78703",
"text": "Model repositories play a central role in the model driven development of complex software-intensive systems by offering means to persist and manipulate models obtained from heterogeneous languages and tools. Complex models can be assembled by interconnecting model fragments by hard links, i.e., regular references, where the target end points to external resources using storage-specific identifiers. This approach, in certain application scenarios, may prove to be a too rigid and error prone way of interlinking models. As a flexible alternative, we propose to combine derived features with advanced incremental model queries as means for soft interlinking of model elements residing in different model resources. These soft links can be calculated on-demand with graceful handling for temporarily unresolved references. In the background, the links are maintained efficiently and flexibly by using incremental model query evaluation. The approach is applicable to modeling environments or even property graphs for representing query results as first-class relations, which also allows the chaining of soft links that is useful for modular applications. The approach is evaluated using the Eclipse Modeling Framework (EMF) and EMF-IncQuery in two complex industrial case studies. The first case study is motivated by a knowledge management project from the financial domain, involving a complex interlinked structure of concept and business process models. The second case study is set in the avionics domain with strict traceability requirements enforced by certification standards (DO-178b). It consists of multiple domain models describing the allocation scenario of software functions to hardware components.",
"title": ""
},
{
"docid": "011a791fdac8606744eef1df9a63651d",
"text": "Drug-Named Entity Recognition (DNER) for biomedical literature is a fundamental facilitator of Information Extraction. For this reason, the DDIExtraction2011 (DDI2011) and DDIExtraction2013 (DDI2013) challenge introduced one task aiming at recognition of drug names. State-of-the-art DNER approaches heavily rely on hand-engineered features and domain-specific knowledge which are difficult to collect and define. Therefore, we offer an automatic exploring words and characters level features approach: a recurrent neural network using bidirectional long short-term memory (LSTM) with Conditional Random Fields decoding (LSTM-CRF). Two kinds of word representations are used in this work: word embedding, which is trained from a large amount of text, and character-based representation, which can capture orthographic feature of words. Experimental results on the DDI2011 and DDI2013 dataset show the effect of the proposed LSTM-CRF method. Our method outperforms the best system in the DDI2013 challenge.",
"title": ""
},
{
"docid": "961372a5e1b21053894040a11e946c8d",
"text": "The main purpose of this paper is to introduce an approach to design a DC-DC boost converter with constant output voltage for grid connected photovoltaic application system. The boost converter is designed to step up a fluctuating solar panel voltage to a higher constant DC voltage. It uses voltage feedback to keep the output voltage constant. To do so, a microcontroller is used as the heart of the control system which it tracks and provides pulse-width-modulation signal to control power electronic device in boost converter. The boost converter will be able to direct couple with grid-tied inverter for grid connected photovoltaic system. Simulations were performed to describe the proposed design. Experimental works were carried out with the designed boost converter which has a power rating of 100 W and 24 V output voltage operated in continuous conduction mode at 20 kHz switching frequency. The test results show that the proposed design exhibits a good performance.",
"title": ""
},
{
"docid": "0114711538c0503912872448705a8c22",
"text": "With the rapid development of networking technology, grid computing has emerged as a source for satisfying the increasing demand of the computing power of scientific computing community. Mostly, the user applications in scientific and enterprise domains are constructed in the form of workflows in which precedence constraints between tasks are defined. Scheduling of workflow applications belongs to the class of NP-hard problems, so meta-heuristic approaches are preferred options. In this paper, $$\\varepsilon $$ ε -fuzzy dominance sort based discrete particle swarm optimization ( $$\\varepsilon $$ ε -FDPSO) approach is used to solve the workflow scheduling problem in the grid. The $$\\varepsilon $$ ε -FDPSO approach has never been used earlier in grid scheduling. The metric, fuzzy dominance which quantifies the relative fitness of solutions in multi-objective domain is used to generate the Pareto optimal solutions. In addition, the scheme also incorporates a fuzzy based mechanism to determine the best compromised solution. For the workflow applications two scheduling problems are solved. In one of the scheduling problems, we addressed two major conflicting objectives, i.e. makespan (execution time) and cost, under constraints (deadline and budget). While, in other, we optimized makespan, cost and reliability objectives simultaneously in order to incorporate the dynamic characteristics of grid resources. The performance of the approach has been compared with other acknowledged meta-heuristics like non-dominated sort genetic algorithm and multi-objective particle swarm optimization. The simulation analysis substantiates that the solutions obtained with $$\\varepsilon $$ ε -FDPSO deliver better convergence and uniform spacing among the solutions keeping the computation overhead limited.",
"title": ""
},
{
"docid": "e48da0cf3a09b0fd80f0c2c01427a931",
"text": "Timely analysis of information in cybersecurity necessitates automated information extraction from unstructured text. Unfortunately, state-of-the-art extraction methods require training data, which is unavailable in the cyber-security domain. To avoid the arduous task of handlabeling data, we develop a very precise method to automatically label text from several data sources by leveraging article-specific structured data and provide public access to corpus annotated with cyber-security entities. We then prototype a maximum entropy model that processes this corpus of auto-labeled text to label new sentences and present results showing the Collins Perceptron outperforms the MLE with LBFGS and OWL-QN optimization for parameter fitting. The main contribution of this paper is an automated technique for creating a training corpus from text related to a database. As a multitude of domains can benefit from automated extraction of domain-specific concepts for which no labeled data is available, we hope our solution is widely applicable.",
"title": ""
},
{
"docid": "4d74c40627965ac9c08bf05f629bc281",
"text": "UNLABELLED\nThe increased popularity and functionality of mobile devices has a number of implications for the delivery of mental health services. Effective use of mobile applications has the potential to (a) increase access to evidence-based care; (b) better inform consumers of care and more actively engage them in treatment; (c) increase the use of evidence-based practices; and (d) enhance care after formal treatment has concluded. The current paper presents an overview of the many potential uses of mobile applications as a means to facilitate ongoing care at various stages of treatment. Examples of current mobile applications in behavioural treatment and research are described, and the implications of such uses are discussed. Finally, we provide recommendations for methods to include mobile applications into current treatment and outline future directions for evaluation.\n\n\nKEY PRACTITIONER MESSAGE\nMobile devices are becoming increasingly common among the adult population and have tremendous potential to advance clinical care. Mobile applications have the potential to enhance clinical care at stages of treatment-from engaging patients in clinical care to facilitating adherence to practices and in maintaining treatment gains. Research is needed to validate the efficacy and effectiveness of mobile applications in clinical practice. Research on such devices must incorporate assessments of usability and adherence in addition to their incremental benefit to treatment.",
"title": ""
},
{
"docid": "58d2f5d181095fc59eaf9c7aa58405b0",
"text": "Principle objective of Image enhancement is to process an image so that result is more suitable than original image for specific application. Digital image enhancement techniques provide a multitude of choices for improving the visual quality of images. A frequency domain smoothingsharpening technique is proposed and its impact is assessed to beneficially enhance mammogram images. This technique aims to gain the advantages of enhance and sharpening process that aims to highlight sudden changes in the image intensity, it is usually applied to remove random noise from digital images. The already developed technique also eliminates the drawbacks of each of the two sharpening and smoothing techniques resulting from their individual application in image processing field. The selection of parameters is almost invariant of the type of background tissues and severity of the abnormality, giving significantly improved results even for denser mammographic images. The proposed technique is tested breast X-ray mammograms. The simulated results show that the high potential to advantageously enhance the image contrast hence giving extra aid to radiologists to detect and classify mammograms of breast cancer. Keywords— Fourier transform, Gabor filter, Image, enhancement, Mammograms, Segmentation",
"title": ""
},
{
"docid": "98110985cd175f088204db452a152853",
"text": "We propose an automatic method to infer high dynamic range illumination from a single, limited field-of-view, low dynamic range photograph of an indoor scene. In contrast to previous work that relies on specialized image capture, user input, and/or simple scene models, we train an end-to-end deep neural network that directly regresses a limited field-of-view photo to HDR illumination, without strong assumptions on scene geometry, material properties, or lighting. We show that this can be accomplished in a three step process: 1) we train a robust lighting classifier to automatically annotate the location of light sources in a large dataset of LDR environment maps, 2) we use these annotations to train a deep neural network that predicts the location of lights in a scene from a single limited field-of-view photo, and 3) we fine-tune this network using a small dataset of HDR environment maps to predict light intensities. This allows us to automatically recover high-quality HDR illumination estimates that significantly outperform previous state-of-the-art methods. Consequently, using our illumination estimates for applications like 3D object insertion, produces photo-realistic results that we validate via a perceptual user study.",
"title": ""
},
{
"docid": "025afef94063f764a9901a5b30598bcd",
"text": "We describe an ensemble of classifiers based algorithm for incremental learning in nonstationary environments. In this formulation, we assume that the learner is presented with a series of training datasets, each of which is drawn from a different snapshot of a distribution that is drifting at an unknown rate. Furthermore, we assume that the algorithm must learn the new environment in an incremental manner, that is, without having access to previously available data. Instead of a time window over incoming instances, or an aged based forgetting – as used by most ensemble based nonstationary learning algorithms – a strategic weighting mechanism is employed that tracks the classifiers’ performances over drifting environments to determine appropriate voting weights. Specifically, the proposed approach generates a single classifier for each dataset that becomes available, and then combines them through a dynamically modified weighted majority voting, where the voting weights themselves are computed as weighted averages of classifiers’ individual performances over all environments. We describe the implementation details of this approach, as well as its initial results on simulated non-stationary environments.",
"title": ""
}
] |
scidocsrr
|
904580785b09a8f2b89a98eae91a5a7f
|
Graph Theoretical Similarity Approach To Compare Molecular Electrostatic Potentials
|
[
{
"docid": "3c3f3a9d6897510d5d5d3d55c882502c",
"text": "Error-tolerant graph matching is a powerful concept that has various applications in pattern recognition and machine vision. In the present paper, a new distance measure on graphs is proposed. It is based on the maximal common subgraph of two graphs. The new measure is superior to edit distance based measures in that no particular edit operations together with their costs need to be defined. It is formally shown that the new distance measure is a metric. Potential algorithms for the efficient computation of the new measure are discussed. q 1998 Elsevier Science B.V. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "fec8129b24f30d4dbb93df4dce7885e8",
"text": "We propose a method to improve the translation of pronouns by resolving their coreference to prior mentions. We report results using two different co-reference resolution methods and point to remaining challenges.",
"title": ""
},
{
"docid": "0609806deb4a10695cce78375342b643",
"text": "In this paper, Faster R-CNN was employed to recognize the CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart). Unlike traditional method, the proposed method is based on deep learning object detection framework. By inputting the database into the network and training the Faster R-CNN, the feature map can be obtained through the convolutional layers. The proposed method can recognize the character and it is location. Experiments show that Faster R-CNN can be used in CAPTCHA recog‐ nition with promising speed and accuracy. The experimental results also show that the mAP (mean average precision) value will improve with the depth of the network increasing.",
"title": ""
},
{
"docid": "894cfbb522a356bba407481bd051d834",
"text": "We propose a novel method to handle thin structures in Image-Based Rendering (IBR), and specifically structures supported by simple geometric shapes such as planes, cylinders, etc. These structures, e.g. railings, fences, oven grills etc, are present in many man-made environments and are extremely challenging for multi-view 3D reconstruction, representing a major limitation of existing IBR methods. Our key insight is to exploit multi-view information. After a handful of user clicks to specify the supporting geometry, we compute multi-view and multi-layer alpha mattes to extract the thin structures. We use two multi-view terms in a graph-cut segmentation, the first based on multi-view foreground color prediction and the second ensuring multiview consistency of labels. Occlusion of the background can challenge reprojection error calculation and we use multiview median images and variance, with multiple layers of thin structures. Our end-to-end solution uses the multi-layer segmentation to create per-view mattes and the median colors and variance to create a clean background. We introduce a new multi-pass IBR algorithm based on depth-peeling to allow free-viewpoint navigation of multi-layer semi-transparent thin structures. Our results show significant improvement in rendering quality for thin structures compared to previous image-based rendering solutions.",
"title": ""
},
{
"docid": "2943f1d374a6a63ef1b140a83e5a8caf",
"text": "Gill morphometric and gill plasticity of the air-breathing striped catfish (Pangasianodon hypophthalmus) exposed to different temperatures (present day 27°C and future 33°C) and different air saturation levels (92% and 35%) during 6weeks were investigated using vertical sections to estimate the respiratory lamellae surface areas, harmonic mean barrier thicknesses, and gill component volumes. Gill respiratory surface area (SA) and harmonic mean water - blood barrier thicknesses (HM) of the fish were strongly affected by both environmental temperature and oxygen level. Thus initial values for 27°C normoxic fish (12.4±0.8g) were 211.8±21.6mm2g-1 and 1.67±0.12μm for SA and HM respectively. After 5weeks in same conditions or in the combinations of 33°C and/or PO2 of 55mmHg, this initial surface area scaled allometrically with size for the 33°C hypoxic group, whereas branchial SA was almost eliminated in the 27°C normoxic group, with other groups intermediate. In addition, elevated temperature had an astounding effect on growth with the 33°C group growing nearly 8-fold faster than the 27°C fish.",
"title": ""
},
{
"docid": "14ba02b92184c21cbbe2344313e09c23",
"text": "Smart meters are at high risk to be an attack target or to be used as an attacking means of malicious users because they are placed at the closest location to users in the smart gridbased infrastructure. At present, Korea is proceeding with 'Smart Grid Advanced Metering Infrastructure (AMI) Construction Project', and has selected Device Language Message Specification/ COmpanion Specification for Energy Metering (DLMS/COSEM) protocol for the smart meter communication. However, the current situation is that the vulnerability analysis technique is still insufficient to be applied to DLMS/COSEM-based smart meters. Therefore, we propose a new fuzzing architecture for analyzing vulnerabilities which is applicable to actual DLMS/COSEM-based smart meter devices. In addition, this paper presents significant case studies for verifying proposed fuzzing architecture through conducting the vulnerability analysis of the experimental results from real DLMS/COSEM-based smart meter devices used in Korea SmartGrid Testbed.",
"title": ""
},
{
"docid": "1a5b63ae29de488a64518abcde04fb2f",
"text": "A thorough review of available literature was conducted to inform of advancements in mobile LIDAR technology, techniques, and current and emerging applications in transportation. The literature review touches briefly on the basics of LIDAR technology followed by a more in depth description of current mobile LIDAR trends, including system components and software. An overview of existing quality control procedures used to verify the accuracy of the collected data is presented. A collection of case studies provides a clear description of the advantages of mobile LIDAR, including an increase in safety and efficiency. The final sections of the review identify current challenges the industry is facing, the guidelines that currently exist, and what else is needed to streamline the adoption of mobile LIDAR by transportation agencies. Unfortunately, many of these guidelines do not cover the specific challenges and concerns of mobile LIDAR use as many have been developed for airborne LIDAR acquisition and processing. From this review, there is a lot of discussion on “what” is being done in practice, but not a lot on “how” and “how well” it is being done. A willingness to share information going forward will be important for the successful use of mobile LIDAR.",
"title": ""
},
{
"docid": "7be2cc5662550a74c02c9fb9ec0eef2f",
"text": "In this article, we describe a system that reads news articles in four different languages and detects what happened, who is involved, where and when. This event-centric information is represented as episodic situational knowledge on individuals in an interoperable RDF format that allows for reasoning on the implications of the events. Our system covers the complete path from unstructured text to structured knowledge, for which we defined a formal model that links interpreted textual mentions of things to their representation as instances. The model forms the skeleton for interoperable interpretation across different sources and languages. The real content, however, is defined using multilingual and cross-lingual knowledge resources, both semantic and episodic. We explain how these knowledge resources are used for the processing of text and ultimately define the actual content of the episodic situational knowledge that is reported in the news. The knowledge and model in our system can be seen as an example how the Semantic Web helps NLP. However, our systems also generate massive episodic knowledge of the same type as the Semantic Web is built on. We thus envision a cycle of knowledge acquisition and NLP improvement on a massive scale. This article reports on the details of the system but also on the performance of various high-level components. We demonstrate that our system performs at state-of-the-art level for various subtasks in the four languages of the project, but that we also consider the full integration of these tasks in an overall system with the purpose of reading text. We applied our system to millions of news articles, generating billions of triples expressing formal semantic properties. This shows the capacity of the system to perform at an unprecedented scale.",
"title": ""
},
{
"docid": "7c9c047055d123aff65c9c7a3db59dfc",
"text": "Organizations publish the individual’s information in order to utilize the data for the research purpose. But the confidential information about the individual is revealed by the adversary by combining the various releases of the several organizations. This is called as linkage attacks. This attack can be avoided by the SLOMS method which vertically partitions the single quasi table and multiple sensitive tables. The SLOMS method uses MSB-KACA algorithm to generalize the quasi identifier table in order to implement k-Anonymity and bucketizes the sensitive attribute table to implement l-diversity. But there is a chance of probabilistic inference attack due to bucketization. So, the method called t-closeness can be applied over MSB-KACA algorithm which compute the value using Earth Mover Distance(EMD) and set the minimum value as threshold in order to equally distribute the attributes in the table based on the threshold ’t’. Such that the probabilistic inference attack can be avoided. The performance of t-closeness gets improved and evaluated by Disclosure rate which becomes minimal while comparing with MSB-KACA algorithm.",
"title": ""
},
{
"docid": "b98dcd982093b2454f9d490dc2433719",
"text": "Kinematics of space rovers has been studied by several past researchers. However most studies assume that all the wheels of the rover is always in contact with the terrain. As the lunar terrain is very uneven and contains ash, dust, rocks, etc this assumption need not always be valid. Hence in this study we have analyzed the kinematics of a rover in which one wheel can loose contact with the ground or slide. The model considered in this paper is a bicycle model of the rover with an assumption that the yaw angle would be negligible and thus the results give us a fair insight of 3-D modeling of rovers. A quasi static analysis of the robot was used to develop a slip control chart that guides the amount of slip expected while traversing different terrains.",
"title": ""
},
{
"docid": "5fc192fc2f5be64a69eea7c4e848dd95",
"text": "Hypertrophic scars and keloids are fibroproliferative disorders that may arise after any deep cutaneous injury caused by trauma, burns, surgery, etc. Hypertrophic scars and keloids are cosmetically problematic, and in combination with functional problems such as contractures and subjective symptoms including pruritus, these significantly affect patients' quality of life. There have been many studies on hypertrophic scars and keloids; but the mechanisms underlying scar formation have not yet been well established, and prophylactic and treatment strategies remain unsatisfactory. In this review, the authors introduce and summarize classical concepts surrounding wound healing and review recent understandings of the biology, prevention and treatment strategies for hypertrophic scars and keloids.",
"title": ""
},
{
"docid": "3f7c522505a3804264e691ced904d035",
"text": "We design and release BONIE, the first open numerical relation extractor, for extracting Open IE tuples where one of the arguments is a number or a quantity-unit phrase. BONIE uses bootstrapping to learn the specific dependency patterns that express numerical relations in a sentence. BONIE’s novelty lies in task-specific customizations, such as inferring implicit relations, which are clear due to context such as units (for e.g., ‘square kilometers’ suggests area, even if the word ‘area’ is missing in the sentence). BONIE obtains 1.5x yield and 15 point precision gain on numerical facts over a state-of-the-art Open IE system.",
"title": ""
},
{
"docid": "72c054c955a34fbac8e798665ece8f57",
"text": "In this paper, we propose and empirically validate a suite of hotspot patterns: recurring architecture problems that occur in most complex systems and incur high maintenance costs. In particular, we introduce two novel hotspot patterns, Unstable Interface and Implicit Cross-module Dependency. These patterns are defined based on Baldwin and Clark's design rule theory, and detected by the combination of history and architecture information. Through our tool-supported evaluations, we show that these patterns not only identify the most error-prone and change-prone files, they also pinpoint specific architecture problems that may be the root causes of bug-proneness and change-proneness. Significantly, we show that 1) these structure-history integrated patterns contribute more to error- and change-proneness than other hotspot patterns, and 2) the more hotspot patterns a file is involved in, the more error- and change-prone it is. Finally, we report on an industrial case study to demonstrate the practicality of these hotspot patterns. The architect and developers confirmed that our hotspot detector discovered the majority of the architecture problems causing maintenance pain, and they have started to improve the system's maintainability by refactoring and fixing the identified architecture issues.",
"title": ""
},
{
"docid": "0105070bd23400083850627b1603af0b",
"text": "This research covers an endeavor by the author on the usage of automated vision and navigation framework; the research is conducted by utilizing a Kinect sensor requiring minimal effort framework for exploration purposes in the zone of robot route. For this framework, GMapping (a highly efficient Rao-Blackwellized particle filer to learn grid maps from laser range data) parameters have been optimized to improve the accuracy of the map generation and the laser scan. With the use of Robot Operating System (ROS), the open source GMapping bundle was utilized as a premise for a map era and Simultaneous Localization and Mapping (SLAM). Out of the many different map generation techniques, the tele-operation used is interactive marker, which controls the TurtleBot 2 movements via RVIZ (3D visualization tool for ROS). Test results completed with the multipurpose robot in a counterfeit and regular environment represents the preferences of the proposed strategy. From experiments, it is found that Kinect sensor produces a more accurate map compared to non-filtered laser range finder data, which is excellent since the price of a Kinect sensor is much cheaper than a laser range finder. An expansion of experimental results was likewise done to test the performance of the portable robot frontier exploring in an obscure environment while performing SLAM alongside the proposed technique.",
"title": ""
},
{
"docid": "2aecaa95df956d905a39a7394a4b08ad",
"text": "Superpixels provide an efficient low/mid-level representation of image data, which greatly reduces the number of image primitives for subsequent vision tasks. Existing superpixel algorithms are not differentiable, making them difficult to integrate into otherwise end-to-end trainable deep neural networks. We develop a new differentiable model for superpixel sampling that leverages deep networks for learning superpixel segmentation. The resulting Superpixel Sampling Network (SSN) is end-to-end trainable, which allows learning task-specific superpixels with flexible loss functions and has fast runtime. Extensive experimental analysis indicates that SSNs not only outperform existing superpixel algorithms on traditional segmentation benchmarks, but can also learn superpixels for other tasks. In addition, SSNs can be easily integrated into downstream deep networks resulting in performance improvements.",
"title": ""
},
{
"docid": "1dd8fdb5f047e58f60c228e076aa8b66",
"text": "Recurrent Neural Network Language Models (RNN-LMs) have recently shown exceptional performance across a variety of applications. In this paper, we modify the architecture to perform Language Understanding, and advance the state-of-the-art for the widely used ATIS dataset. The core of our approach is to take words as input as in a standard RNN-LM, and then to predict slot labels rather than words on the output side. We present several variations that differ in the amount of word context that is used on the input side, and in the use of non-lexical features. Remarkably, our simplest model produces state-of-the-art results, and we advance state-of-the-art through the use of bagof-words, word embedding, named-entity, syntactic, and wordclass features. Analysis indicates that the superior performance is attributable to the task-specific word representations learned by the RNN.",
"title": ""
},
{
"docid": "7115c9872b05a20efeaafaaed7c2e173",
"text": "Today, bibliographic digital libraries play an important role in helping members of academic community search for novel research. In particular, author disambiguation for citations is a major problem during the data integration and cleaning process, since author names are usually very ambiguous. For solving this problem, we proposed two kinds of correlations between citations, namely, Topic Correlation and Web Correlation, to exploit relationships between citations, in order to identify whether two citations with the same author name refer to the same individual. The topic correlation measures the similarity between research topics of two citations; while the Web correlation measures the number of co-occurrence in web pages. We employ a pair-wise grouping algorithm to group citations into clusters. The results of experiments show that the disambiguation accuracy has great improvement when using topic correlation and Web correlation, and Web correlation provides stronger evidences about the authors of citations.",
"title": ""
},
{
"docid": "bb240f2e536e5e5cd80fcca8c9d98171",
"text": "We propose a novel metaphor interpretation method, Meta4meaning. It provides interpretations for nominal metaphors by generating a list of properties that the metaphor expresses. Meta4meaning uses word associations extracted from a corpus to retrieve an approximation to properties of concepts. Interpretations are then obtained as an aggregation or difference of the saliences of the properties to the tenor and the vehicle. We evaluate Meta4meaning using a set of humanannotated interpretations of 84 metaphors and compare with two existing methods for metaphor interpretation. Meta4meaning significantly outperforms the previous methods on this task.",
"title": ""
},
{
"docid": "e982954841e753aa0dd4f66fe2eb4f7a",
"text": "Background. Observational studies suggest that people who consume more fruits and vegetables containing beta carotene have somewhat lower risks of cancer and cardiovascular disease, and earlier basic research suggested plausible mechanisms. Because large randomized trials of long duration were necessary to test this hypothesis directly, we conducted a trial of beta carotene supplementation. Methods. In a randomized, double-blind, placebo-controlled trial of beta carotene (50 mg on alternate days), we enrolled 22,071 male physicians, 40 to 84 years of age, in the United States; 11 percent were current smokers and 39 percent were former smokers at the beginning of the study in 1982. By December 31, 1995, the scheduled end of the study, fewer than 1 percent had been lost to followup, and compliance was 78 percent in the group that received beta carotene. Results. Among 11,036 physicians randomly assigned to receive beta carotene and 11,035 assigned to receive placebo, there were virtually no early or late differences in the overall incidence of malignant neoplasms or cardiovascular disease, or in overall mortality. In the beta carotene group, 1273 men had any malignant neoplasm (except nonmelanoma skin cancer), as compared with 1293 in the placebo group (relative risk, 0.98; 95 percent confidence interval, 0.91 to 1.06). There were also no significant differences in the number of cases of lung cancer (82 in the beta carotene group vs. 88 in the placebo group); the number of deaths from cancer (386 vs. 380), deaths from any cause (979 vs. 968), or deaths from cardiovascular disease (338 vs. 313); the number of men with myocardial infarction (468 vs. 489); the number with stroke (367 vs. 382); or the number with any one of the previous three end points (967 vs. 972). Among current and former smokers, there were also no significant early or late differences in any of these end points. Conclusions. In this trial among healthy men, 12 years of supplementation with beta carotene produced neither benefit nor harm in terms of the incidence of malignant neoplasms, cardiovascular disease, or death from all causes. (N Engl J Med 1996;334:1145-9.) 1996, Massachusetts Medical Society. From the Divisions of Preventive Medicine (C.H.H., J.E.B., J.E.M., N.R.C., C.B., F.L., J.M.G., P.M.R.) and Cardiovascular Medicine (J.M.G., P.M.R.) and the Channing Laboratory (M.S., B.R., W.W.), Department of Medicine, Brigham and Women’s Hospital; the Department of Ambulatory Care and Prevention, Harvard Medical School (C.H.H., J.E.B., N.R.C.); and the Departments of Epidemiology (C.H.H., J.E.B., M.S., W.W.), Biostatistics (B.R.), and Nutrition (M.S., W.W.), Harvard School of Public Health — all in Boston; and the Imperial Cancer Research Fund Clinical Trial Service Unit, University of Oxford, Oxford, England (R.P.). Address reprint requests to Dr. Hennekens at 900 Commonwealth Ave. E., Boston, MA 02215. Supported by grants (CA-34944, CA-40360, HL-26490, and HL-34595) from the National Institutes of Health. O BSERVATIONAL epidemiologic studies suggest that people who consume higher dietary levels of fruits and vegetables containing beta carotene have a lower risk of certain types of cancer 1,2 and cardiovascular disease, 3 and basic research suggests plausible mechanisms. 
4-6 It is difficult to determine from observational studies, however, whether the apparent benefits are due to beta carotene itself, other nutrients in beta carotene– rich foods, other dietary habits, or other, nondietary lifestyle characteristics. 7 Long-term, large, randomized trials can provide a direct test of the efficacy of beta carotene in the prevention of cancer or cardiovascular disease. 8 For cancer, such trials should ideally last longer than the latency period (at least 5 to 10 years) of many types of cancer. A trial lasting 10 or more years could allow a sufficient period of latency and an adequate number of cancers for the detection of even a small reduction in overall risk due to supplementation with beta carotene. Two large, randomized, placebo-controlled trials in well-nourished populations (primarily cigarette smokers) have been reported. The Alpha-Tocopherol, Beta Carotene (ATBC) Cancer Prevention Study, a placebocontrolled trial, assigned 29,000 Finnish male smokers to receive beta carotene, vitamin E, both active agents, or neither, for an average of six years. 9 The BetaCarotene and Retinol Efficacy Trial (CARET) enrolled 18,000 men and women at high risk for lung cancer because of a history of cigarette smoking or occupational exposure to asbestos; this trial evaluated combined treatment with beta carotene and retinol for an average of less than four years. 10 Both studies found no benefits of such supplementation in terms of the incidence of Downloaded from www.nejm.org at UW MADISON on December 04, 2003. Copyright © 1996 Massachusetts Medical Society. All rights reserved. 1146 THE NEW ENGLAND JOURNAL OF MEDICINE May 2, 1996 cancer or cardiovascular disease; indeed, both found somewhat higher rates of lung cancer and cardiovascular disease among subjects given beta carotene. The estimated excess risks were small, and it remains unclear whether beta carotene was truly harmful. Moreover, since the duration of these studies was relatively short, they leave open the possibility that benefit, especially in terms of cancer, would become evident with longer treatment and follow-up. 11 In this report, we describe the findings of the beta carotene component of the Physicians’ Health Study, a randomized trial in which 22,071 U.S. male physicians were treated and followed for an average of 12 years.",
"title": ""
},
{
"docid": "6776ac2fe694e43b8fa138c95e387a79",
"text": "The arrival of the Data Web will bring an abundance of explicit semantics either complementary to or embedded within traditional web content. This body of semantics both demands and enables new interaction techniques to be introduced into the web experience. In this position paper, we proposes that the current web browsing paradigm of “one web page at a time” needs to be updated because the typical unit of web information to interact with will no longer be a whole and single web page but can be smaller and numerous bits of data. We introduce the set-based browsing paradigm that lets the user traverse graph-based data that will be found on the Data Web in an efficient manner, moving from a set of things to a related set of things rather than from one single thing to one single other thing. We demonstrate this paradigm as a standalone application on a web-like database and as a browser extension on existing web pages. categories and subject Descriptors H5.2 [Information Interfaces and Presentation]: User Interfaces – Graphical user interfaces (GUI). H5.4 [Information Interfaces and Presentation]: Hypertext/Hypermedia – User issues.",
"title": ""
},
{
"docid": "c6a649a1eed332be8fc39bfa238f4214",
"text": "The Internet of things (IoT), which integrates a variety of devices into networks to provide advanced and intelligent services, has to protect user privacy and address attacks such as spoofing attacks, denial of service (DoS) attacks, jamming, and eavesdropping. We investigate the attack model for IoT systems and review the IoT security solutions based on machine-learning (ML) techniques including supervised learning, unsupervised learning, and reinforcement learning (RL). ML-based IoT authentication, access control, secure offloading, and malware detection schemes to protect data privacy are the focus of this article. We also discuss the challenges that need to be addressed to implement these ML-based security schemes in practical IoT systems.",
"title": ""
}
] |
scidocsrr
|
8437bfc3f0cc62faaecb46ffc140d07a
|
Reconfigurable Magnetic Resonance-Coupled Wireless Power Transfer System
|
[
{
"docid": "fcc81dd8a3de04a1ccc7af7302653400",
"text": "Wireless power technology offers the promise of cutting the last cord, allowing users to seamlessly recharge mobile devices as easily as data are transmitted through the air. Initial work on the use of magnetically coupled resonators for this purpose has shown promising results. We present new analysis that yields critical insight into the design of practical systems, including the introduction of key figures of merit that can be used to compare systems with vastly different geometries and operating conditions. A circuit model is presented along with a derivation of key system concepts, such as frequency splitting, the maximum operating distance (critical coupling), and the behavior of the system as it becomes undercoupled. This theoretical model is validated against measured data and shows an excellent average coefficient of determination of 0.9875. An adaptive frequency tuning technique is demonstrated, which compensates for efficiency variations encountered when the transmitter-to-receiver distance and/or orientation are varied. The method demonstrated in this paper allows a fixed-load receiver to be moved to nearly any position and/or orientation within the range of the transmitter and still achieve a near-constant efficiency of over 70% for a range of 0-70 cm.",
"title": ""
}
] |
[
{
"docid": "4d9f0cf629cd3695a2ec249b81336d28",
"text": "We introduce an over-sketching interface for feature-preserving surface mesh editing. The user sketches a stroke that is the suggested position of part of a silhouette of the displayed surface. The system then segments all image-space silhouettes of the projected surface, identifies among all silhouette segments the best matching part, derives vertices in the surface mesh corresponding to the silhouette part, selects a sub-region of the mesh to be modified, and feeds appropriately modified vertex positions together with the sub-mesh into a mesh deformation tool. The overall algorithm has been designed to enable interactive modification of the surface --- yielding a surface editing system that comes close to the experience of sketching 3D models on paper.",
"title": ""
},
{
"docid": "785a0d51c9d105532a2e571afccd957b",
"text": "Facial recognition, one of the basic topics in computer vision and pattern recognition, has received substantial attention in recent years. However, for those traditional facial recognition algorithms, the facial images are reshaped to a long vector, thereby losing part of the original spatial constraints of each pixel. In this paper, a new tensor-based feature extraction algorithm termed tensor rank preserving discriminant analysis (TRPDA) for facial image recognition is proposed; the proposed method involves two stages: in the first stage, the low-dimensional tensor subspace of the original input tensor samples was obtained; in the second stage, discriminative locality alignment was utilized to obtain the ultimate vector feature representation for subsequent facial recognition. On the one hand, the proposed TRPDA algorithm fully utilizes the natural structure of the input samples, and it applies an optimization criterion that can directly handle the tensor spectral analysis problem, thereby decreasing the computation cost compared those traditional tensor-based feature selection algorithms. On the other hand, the proposed TRPDA algorithm extracts feature by finding a tensor subspace that preserves most of the rank order information of the intra-class input samples. Experiments on the three facial databases are performed here to determine the effectiveness of the proposed TRPDA algorithm.",
"title": ""
},
{
"docid": "874ad221d7ea2fc9fdc368b814e7f4de",
"text": "Tail labels in the multi-label learning problem undermine the low-rank assumption. Nevertheless, this problem has rarely been investigated. In addition to using the low-rank structure to depict label correlations, this paper explores and exploits an additional sparse component to handle tail labels behaving as outliers, in order to make the classical low-rank principle in multi-label learning valid. The divide-and-conquer optimization technique is employed to increase the scalability of the proposed algorithm while theoretically guaranteeing its performance. A theoretical analysis of the generalizability of the proposed algorithm suggests that it can be improved by the low-rank and sparse decomposition given tail labels. Experimental results on real-world data demonstrate the significance of investigating tail labels and the effectiveness of the proposed algorithm.",
"title": ""
},
{
"docid": "8f97eed7ae59062915b422cb65c7729b",
"text": "In this modern scientific world, technologies are transforming rapidly but along with the ease and comfort they also bring in a big concern for security. Taking into account the physical security of the system to ensure access control and authentication of users, made us to switch to a new system of Biometric combined with ATM PIN as PIN can easily be guessed, stolen or misused. Biometric is added with the existing technology to double the security in order to reduce ATM frauds but it has also put forward several issues which include sensor durability and time consumption. This paper envelops two questions “Is it really worthy to go through the entire biometric process to just debit a low amount?” and “What could be the maximum amount one can lose if one's card is misused?” As an answer we propose a constraint on transactions by ATM involving biometric to improve the system performance and to solve the defined issues. The proposal is divided in two parts. The first part solves sensor performance issue by adding a limit on amount of cash and number of transactions is defined in such a way that if one need to withdraw a big amount OR attempts for multiple transactions by withdrawing small amount again and again, it shall be necessary to present biometric. On the other hand if one need to make only balance enquiry or the cash is low and the number of transactions in a day is less than defined attempts, biometric presentation is not mandatory. It may help users to save time and maintain sensor performance by not furnishing their biometric for few hundred apart from maintaining security. In the second part this paper explains how fingerprint verification is conducted if the claimant is allowed to access the system and what could be the measures to increase performance of fingerprint biometric system which could be added to our proposed system to enhance the overall system performance.",
"title": ""
},
{
"docid": "eb0672f019c82dfe0614b39d3e89be2e",
"text": "The support of medical decisions comes from several sources. These include individual physician experience, pathophysiological constructs, pivotal clinical trials, qualitative reviews of the literature, and, increasingly, meta-analyses. Historically, the first of these four sources of knowledge largely informed medical and dental decision makers. Meta-analysis came on the scene around the 1970s and has received much attention. What is meta-analysis? It is the process of combining the quantitative results of separate (but similar) studies by means of formal statistical methods. Statistically, the purpose is to increase the precision with which the treatment effect of an intervention can be estimated. Stated in another way, one can say that meta-analysis combines the results of several studies with the purpose of addressing a set of related research hypotheses. The underlying studies can come in the form of published literature, raw data from individual clinical studies, or summary statistics in reports or abstracts. More broadly, a meta-analysis arises from a systematic review. There are three major components to a systematic review and meta-analysis. The systematic review starts with the formulation of the research question and hypotheses. Clinical or substantive insight about the particular domain of research often identifies not only the unmet investigative needs, but helps prepare for the systematic review by defining the necessary initial parameters. These include the hypotheses, endpoints, important covariates, and exposures or treatments of interest. Like any basic or clinical research endeavor, a prospectively defined and clear study plan enhances the expected utility and applicability of the final results for ultimately influencing practice or policy. After this foundational preparation, the second component, a systematic review, commences. The systematic review proceeds with an explicit and reproducible protocol to locate and evaluate the available data. The collection, abstraction, and compilation of the data follow a more rigorous and prospectively defined objective process. The definitions, structure, and methodologies of the underlying studies must be critically appraised. Hence, both “the content” and “the infrastructure” of the underlying data are analyzed, evaluated, and systematically recorded. Unlike an informal review of the literature, this systematic disciplined approach is intended to reduce the potential for subjectivity or bias in the subsequent findings. Typically, a literature search of an online database is the starting point for gathering the data. The most common sources are MEDLINE (United States Library of Overview, Strengths, and Limitations of Systematic Reviews and Meta-Analyses",
"title": ""
},
{
"docid": "a90fe1117e587d5b48a056278f48b01d",
"text": "The concept of a medical parallel robot applicable to chest compression in the process of cardiopulmonary resuscitation (CPR) is proposed in this paper. According to the requirement of CPR action, a three-prismatic-universal-universal (3-PUU) translational parallel manipulator (TPM) is designed and developed for such applications, and a detailed analysis has been performed for the 3-PUU TPM involving the issues of kinematics, dynamics, and control. In view of the physical constraints imposed by mechanical joints, both the robot-reachable workspace and the maximum inscribed cylinder-usable workspace are determined. Moreover, the singularity analysis is carried out via the screw theory, and the robot architecture is optimized to obtain a large well-conditioning usable workspace. Based on the principle of virtual work with a simplifying hypothesis adopted, the dynamic model is established, and dynamic control utilizing computed torque method is implemented. At last, the experimental results made for the prototype illustrate the performance of the control algorithm well. This research will lay a good foundation for the development of a medical robot to assist in CPR operation.",
"title": ""
},
{
"docid": "3d3e728e5587fe9fd686fca09a6a06f4",
"text": "Knowing how to manage one's own learning has become increasingly important in recent years, as both the need and the opportunities for individuals to learn on their own outside of formal classroom settings have grown. During that same period, however, research on learning, memory, and metacognitive processes has provided evidence that people often have a faulty mental model of how they learn and remember, making them prone to both misassessing and mismanaging their own learning. After a discussion of what learners need to understand in order to become effective stewards of their own learning, we first review research on what people believe about how they learn and then review research on how people's ongoing assessments of their own learning are influenced by current performance and the subjective sense of fluency. We conclude with a discussion of societal assumptions and attitudes that can be counterproductive in terms of individuals becoming maximally effective learners.",
"title": ""
},
{
"docid": "2036bbbb2fd5cb06e69e080d5e9ba656",
"text": "An electric traction system supplied at 2 /spl times/ 25 kV with autotransformers (ATs) is considered. The conductors' arrangement is that of the new European High Speed Railway Line (HSRL) under construction: The overhead supply conductors in contact with the train pantograph are connected to a symmetrical circuit (the feeder) with the purpose of current balancing; the traction return current flows from the rolling stock axles back to the supply (i.e., substation) through the traction rails and additional return conductors. The test campaign carried on the Rome-Naples HSRL allowed the validation of the multiconductor transmission line model used for system analysis. Measurements were performed in normal system configuration (2 /spl times/ 25-kV supply) and degraded configuration (1 /spl times/ 25 kV with AT and feeder not operating). Metrological issues (accuracy and consistency) are detailed for the presented results.",
"title": ""
},
{
"docid": "dc41eb4913c47c4b64d3ca4c1dac6e8d",
"text": "Applied Geostatistics with SGeMS: A User's Guide PetraSim: A graphical user interface for the TOUGH2 family of multiphase flow and transport codes. Applied Geostatistics with SGeMS: A User's Guide · Certain Death in Sierra Treatise on Fungi as Experimental Systems for Basic and Applied Research. Baixe grátis o arquivo SGeMS User's Guide enviado para a disciplina de Applied Geostatistics with SGeMS: A Users' Guide · S-GeMS Tutorial Notes. Applied Geostatistics with SGeMS: A User's Guide · Certain Death in Sierra Leone: Introduction to Stochastic Calculus Applied to Finance, Second Edition. Build Native Cross-Platform Apps with Appcelerator: A beginner's guide for Web Developers Applied GeostAtistics with SGeMS: A User's guide (Repost).",
"title": ""
},
{
"docid": "b69f2c426f86ad0e07172eb4d018b818",
"text": "Versatile motor skills for hitting and throwing motions can be observed in humans already in early ages. Future robots require high power-to-weight ratios as well as inherent long operational lifetimes without breakage in order to achieve similar perfection. Robustness due to passive compliance and high-speed catapult-like motions as possible with fast energy release are further beneficial characteristics. Such properties can be realized with antagonistic muscle-based designs. Additionally, control algorithms need to exploit the full potential of the robot. Learning control is a promising direction due to its the potential to capture uncertainty and control of complex systems. The aim of this paper is to build a robotic arm that is capable of generating high accelerations and sophisticated trajectories as well as enable exploration at such speeds for robot learning approaches. Hence, we have designed a light-weight robot arm with moving masses below 700 g with powerful antagonistic compliant actuation with pneumatic artificial muscles. Rather than recreating human anatomy, our system is designed to be easy to control in order to facilitate future learning of fast trajectory tracking control. The resulting robot is precise at low speeds using a simple PID controller while reaching high velocities of up to 12 m/s in task space and 1500 deg/s in joint space. This arm will enable new applications in fast changing and uncertain task like robot table tennis while being a sophisticated and reproducible test-bed for robot skill learning methods. Construction details are available.",
"title": ""
},
{
"docid": "450b95f0eafae4af0f643cb04b2cee40",
"text": "This paper investigates a consensus-based auction algorithm in the context of decentralized traffic control. In particular, we study the automation of a road intersection, where a set of vehicles is required to cross without collisions. The crossing order will be negotiated in a decentralized fashion. An on-board model predictive controller (MPC) will compute an optimal trajectory which avoids collisions with higher priority vehicles, thus retaining convex safety constraints. Simulations are then performed in a time-variant traffic environment.",
"title": ""
},
{
"docid": "92e955705aa333923bb7b14af946fc2f",
"text": "This study examines the role of online daters’ physical attractiveness in their profile selfpresentation and, in particular, their use of deception. Sixty-nine online daters identified the deceptions in their online dating profiles and had their photograph taken in the lab. Independent judges rated the online daters’ physical attractiveness. Results show that the lower online daters’ attractiveness, the more likely they were to enhance their profile photographs and lie about their physical descriptors (height, weight, age). The association between attractiveness and deception did not extend to profile elements unrelated to their physical appearance (e.g., income, occupation), suggesting that their deceptions were limited and strategic. Results are discussed in terms of (a) evolutionary theories about the importance of physical attractiveness in the dating realm and (b) the technological affordances that allow online daters to engage in selective self-presentation.",
"title": ""
},
{
"docid": "c391b0cddadc4fb8dde78e453e501b57",
"text": "In this paper, we explore how privacy settings and privacy policy consumption (reading the privacy policy) affect the relationship between privacy attitudes and disclosure behaviors. We present results from a survey completed by 122 users of Facebook regarding their information disclosure practices and their attitudes about privacy. Based on our data, we develop and evaluate a model for understanding factors that affect how privacy attitudes influence disclosure and discuss implications for social network sites. Our analysis shows that the relationship between privacy attitudes and certain types of disclosures (those furthering contact) are controlled by privacy policy consumption and privacy behaviors. This provides evidence that social network sites could help mitigate concerns about disclosure by providing transparent privacy policies and privacy controls. 2010 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "eb24b80d6651350e93cd0b2c614ffcb8",
"text": "This document reviews the main design decisions that have been made for the physical layer design of the Device-to-Device (D2D) synchronization signals and channels as part of the 3GPP Release-12 specification work. The intention is to provide some context and background to the agreed design decisions because this rationale will not be described in the 3GPP specifications.",
"title": ""
},
{
"docid": "3eff8dca65a9a119a9f5c38dbf8dc978",
"text": "Advances in predicting in vivo performance of drug products has the potential to change how drug products are developed and reviewed. Modeling and simulation methods are now more commonly used in drug product development and regulatory drug review. These applications include, but are not limited to: the development of biorelevant specifications, the determination of bioequivalence metrics for modified release products with rapid therapeutic onset, the design of in vitro-in vivo correlations in a mechanistic framework, and prediction of food effect. As new regulatory concepts such as quality by design require better application of biopharmaceutical modeling in drug product development, regulatory challenges in bioequivalence demonstration of complex drug products also present exciting opportunities for creative modeling and simulation approaches. A collaborative effort among academia, government and industry in modeling and simulation will result in improved safe and effective new/generic drugs to the American public.",
"title": ""
},
{
"docid": "f3e382102c57e9d8f5349e374d1e6907",
"text": "In SCARA robots, which are often used in industrial applications, all joint axes are parallel, covering three degrees of freedom in translation and one degree of freedom in rotation. Therefore, conventional approaches for the handeye calibration of articulated robots cannot be used for SCARA robots. In this paper, we present a new linear method that is based on dual quaternions and extends the work of [1] for SCARA robots. To improve the accuracy, a subsequent nonlinear optimization is proposed. We address several practical implementation issues and show the effectiveness of the method by evaluating it on synthetic and real data.",
"title": ""
},
{
"docid": "6a3695fd6a358fa39a2641a478caf38c",
"text": "With the increase in the number of vehicles, many intelligent systems have been developed to help drivers to drive safely. Lane detection is a crucial element of any driver assistance system. At present, researchers working on lane detection are confronted with several major challenges, such as attaining robustness to inconsistencies in lighting and background clutter. To address these issues in this work, we propose a method named Lane Detection with Two-stage Feature Extraction (LDTFE) to detect lanes, whereby each lane has two boundaries. To enhance robustness, we take lane boundary as collection of small line segments. In our approach, we apply a modified HT (Hough Transform) to extract small line segments of the lane contour, which are then divided into clusters by using the DBSCAN (Density Based Spatial Clustering of Applications with Noise) clustering algorithm. Then, we can identify the lanes by curve fitting. The experimental results demonstrate that our modified HT works better for LDTFE than LSD (Line Segment Detector). Through extensive experiments, we demonstrate the outstanding performance of our method on the challenging dataset of road images compared with state-of-the-art lanedetection methods. & 2015 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "b31286a1ed91cffcfab7cb5e17392fb9",
"text": "This paper presents the use of frequency modulation as a spread spectrum technique to reduce conducted electromagnetic interference (EMI) in the A frequency band (9-150 kHz) caused by resonant inverters used in induction heating home appliances. For sinusoidal, triangular, and sawtooth modulation profiles, the influence of peak period deviation in EMI reduction and in the power delivered to the load is analyzed. A digital circuit that generates the best of the analyzed modulation profiles is implemented in a field programmable gate array. The design is modeled in a very-high-speed integrated circuits hardware description language (VHDL). The digital circuit, the power converter, and the spectrum analyzer are simulated all together using a mixed-signal simulation tool to verify the functionality of the VHDL description. The spectrum analyzer is modeled in VHDL-analog and mixed-signal extension language (VHDL-AMS) and takes into account the resolution bandwidth stipulated by the EMI measurement standard. Finally, the simulations are experimentally verified on a 3.5 kW resonant inverter operating at 35 kHz.",
"title": ""
},
{
"docid": "8c38fa79c02e9b9aabd107f5b02d2587",
"text": "Graph computation approaches such as GraphChi and TurboGraph recently demonstrated that a single PC can perform efficient computation on billion-node graphs. To achieve high speed and scalability, they often need sophisticated data structures and memory management strategies. We propose a minimalist approach that forgoes such requirements, by leveraging the fundamental memory mapping (MMap) capability found on operating systems. We contribute: (1) a new insight that MMap is a viable technique for creating fast and scalable graph algorithms that surpasses some of the best techniques; (2) the design and implementation of popular graph algorithms for billion-scale graphs with little code, thanks to memory mapping; (3) extensive experiments on real graphs, including the 6.6 billion edge Yahoo Web graph, and show that this new approach is significantly faster or comparable to the highly-optimized methods (e.g., 9.5X faster than GraphChi for computing PageRank on 1.47B edge Twitter graph). We believe our work provides a new direction in the design and development of scalable algorithms. Our packaged code is available at http://poloclub.gatech.edu/mmap/.",
"title": ""
},
{
"docid": "fe126ffb1d1868539bf0ecae638afb38",
"text": "Networks, also called graphs by mathematicians, provide a useful abstraction of the structure of many complex systems, ranging from social systems and computer networks to biological networks and the state spaces of physical systems. In the past decade there have been significant advances in experiments to determine the topological structure of networked systems, but there remain substantial challenges in extracting scientific understanding from the large quantities of data produced by the experiments. A variety of basic measures and metrics are available that can tell us about small-scale structure in networks, such as correlations, connections and recurrent patterns, but it is considerably more difficult to quantify structure on medium and large scales, to understand the ‘big picture’. Important progress has been made, however, within the past few years, a selection of which is reviewed here.",
"title": ""
}
] |
scidocsrr
|
a3f195e484bb88140ae0511465acafcc
|
Triggering the sintering of silver nanoparticles at room temperature.
|
[
{
"docid": "2e964b14ff4e45e3f1c339d7247a50d0",
"text": "We report a method to additively build threedimensional (3-D) microelectromechanical systems (MEMS) and electrical circuitry by ink-jet printing nanoparticle metal colloids. Fabricating metallic structures from nanoparticles avoids the extreme processing conditions required for standard lithographic fabrication and molten-metal-droplet deposition. Nanoparticles typically measure 1 to 100 nm in diameter and can be sintered at plastic-compatible temperatures as low as 300 C to form material nearly indistinguishable from the bulk material. Multiple ink-jet print heads mounted to a computer-controlled 3-axis gantry deposit the 10% by weight metal colloid ink layer-by-layer onto a heated substrate to make two-dimensional (2-D) and 3-D structures. We report a high-Q resonant inductive coil, linear and rotary electrostatic-drive motors, and in-plane and vertical electrothermal actuators. The devices, printed in minutes with a 100 m feature size, were made out of silver and gold material with high conductivity,and feature as many as 400 layers, insulators, 10 : 1 vertical aspect ratios, and etch-released mechanical structure. These results suggest a route to a desktop or large-area MEMS fabrication system characterized by many layers, low cost, and data-driven fabrication for rapid turn-around time, and represent the first use of ink-jet printing to build active MEMS. [657]",
"title": ""
}
] |
[
{
"docid": "de6348bb8e3b4c1cfd1fa83557ae50c9",
"text": "Cerebellar lesions can cause motor deficits and/or the cerebellar cognitive affective syndrome (CCAS; Schmahmann's syndrome). We used voxel-based lesion-symptom mapping to test the hypothesis that the cerebellar motor syndrome results from anterior lobe damage whereas lesions in the posterolateral cerebellum produce the CCAS. Eighteen patients with isolated cerebellar stroke (13 males, 5 females; 20-66 years old) were evaluated using measures of ataxia and neurocognitive ability. Patients showed a wide range of motor and cognitive performance, from normal to severely impaired; individual deficits varied according to lesion location within the cerebellum. Patients with damage to cerebellar lobules III-VI had worse ataxia scores: as predicted, the cerebellar motor syndrome resulted from lesions involving the anterior cerebellum. Poorer performance on fine motor tasks was associated primarily with strokes affecting the anterior lobe extending into lobule VI, with right-handed finger tapping and peg-placement associated with damage to the right cerebellum, and left-handed finger tapping associated with left cerebellar damage. Patients with the CCAS in the absence of cerebellar motor syndrome had damage to posterior lobe regions, with lesions leading to significantly poorer scores on language (e.g. right Crus I and II extending through IX), spatial (bilateral Crus I, Crus II, and right lobule VIII), and executive function measures (lobules VII-VIII). These data reveal clinically significant functional regions underpinning movement and cognition in the cerebellum, with a broad anterior-posterior distinction. Motor and cognitive outcomes following cerebellar damage appear to reflect the disruption of different cerebro-cerebellar motor and cognitive loops.",
"title": ""
},
{
"docid": "105fe384f9dfb13aef82f4ff16f87821",
"text": "Dengue hemorrhagic fever (DHF), a severe manifestation of dengue viral infection that can cause severe bleeding, organ impairment, and even death, affects between 15,000 and 105,000 people each year in Thailand. While all Thai provinces experience at least one DHF case most years, the distribution of cases shifts regionally from year to year. Accurately forecasting where DHF outbreaks occur before the dengue season could help public health officials prioritize public health activities. We develop statistical models that use biologically plausible covariates, observed by April each year, to forecast the cumulative DHF incidence for the remainder of the year. We perform cross-validation during the training phase (2000-2009) to select the covariates for these models. A parsimonious model based on preseason incidence outperforms the 10-y median for 65% of province-level annual forecasts, reduces the mean absolute error by 19%, and successfully forecasts outbreaks (area under the receiver operating characteristic curve = 0.84) over the testing period (2010-2014). We find that functions of past incidence contribute most strongly to model performance, whereas the importance of environmental covariates varies regionally. This work illustrates that accurate forecasts of dengue risk are possible in a policy-relevant timeframe.",
"title": ""
},
{
"docid": "0ebdf5dae3ce2265b9b740aba5484a7c",
"text": "The aim in high-resolution connectomics is to reconstruct complete neuronal connectivity in a tissue. Currently, the only technology capable of resolving the smallest neuronal processes is electron microscopy (EM). Thus, a common approach to network reconstruction is to perform (error-prone) automatic segmentation of EM images, followed by manual proofreading by experts to fix errors. We have developed an algorithm and software library to not only improve the accuracy of the initial automatic segmentation, but also point out the image coordinates where it is likely to have made errors. Our software, called gala (graph-based active learning of agglomeration), improves the state of the art in agglomerative image segmentation. It is implemented in Python and makes extensive use of the scientific Python stack (numpy, scipy, networkx, scikit-learn, scikit-image, and others). We present here the software architecture of the gala library, and discuss several designs that we consider would be generally useful for other segmentation packages. We also discuss the current limitations of the gala library and how we intend to address them.",
"title": ""
},
{
"docid": "8565471c18407fc0741548d11d44a7d2",
"text": "This study evaluated the clinical efficacy of 2% chlorhexidine (CHX) gel on intracanal bacteria reduction during root canal instrumentation. The additional antibacterial effect of an intracanal dressing (Ca[OH](2) mixed with 2% CHX gel) was also assessed. Forty-three patients with apical periodontitis were recruited. Four patients with irreversible pulpitis were included as negative controls. Teeth were instrumented using rotary instruments and 2% CHX gel as the disinfectant. Bacterial samples were taken upon access (S1), after instrumentation (S2), and after 2 weeks of intracanal dressing (S3). Anaerobic culture was performed. Four samples showed no bacteria growth at S1, which were excluded from further analysis. Of the samples cultured positively at S1, 10.3% (4/39) and 8.3% (4/36) sampled bacteria at S2 and S3, respectively. A significant difference in the percentage of positive culture between S1 and S2 (p < 0.001) but not between S2 and S3 (p = 0.692) was found. These results suggest that 2% CHX gel is an effective root canal disinfectant and additional intracanal dressing did not significantly improve the bacteria reduction on the sampled root canals.",
"title": ""
},
{
"docid": "9c9a410422360df950a16bdddc0c71ca",
"text": "We introduce a multiagent blackboard system for poetry generation with a special focus on emotional modelling. The emotional content is extracted from text, particularly blog posts, and is used as inspiration for generating poems. Our main objective is to create a system with an empathic emotional personality that would change its mood according to the affective content of the text, and express its feelings in the form of a poem. We describe here the system structure including experts with distinct roles in the process, and explain how they cooperate within the blackboard model by presenting an illustrative example of generation process. The system is evaluated considering the final outputs and the generation process. This computational creativity tool can be extended by incorporating new experts into the blackboard model, and used as an artistic enrichment of",
"title": ""
},
{
"docid": "6bb01ba5f5d20c9c1b9e14573a825d04",
"text": "Let K be a subset of the Euclidean sphere S d−1. As seen in Lecture #1, in analyzing how well a given random projection matrix S ∈ R m×d preserves vectors in K, a central object is the random variable Z(K) = sup u∈K Su 2 2 m − 1. (2.1) Suppose that our goal is to establish that, for some δ ∈ (0, 1), we have Z(K) ≤ δ with high probability. How large must the projection dimension m be, as a function of (δ, K), for this type of inequality to hold? In this lecture, we give a precise answer for Gaussian random projections.",
"title": ""
},
{
"docid": "e1efeca0d73be6b09f5cf80437809bdb",
"text": "Deep convolutional neural networks have been shown to be vulnerable to arbitrary geometric transformations. However, there is no systematic method to measure the invariance properties of deep networks to such transformations. We propose ManiFool as a simple yet scalable algorithm to measure the invariance of deep networks. In particular, our algorithm measures the robustness of deep networks to geometric transformations in a worst-case regime as they can be problematic for sensitive applications. Our extensive experimental results show that ManiFool can be used to measure the invariance of fairly complex networks on high dimensional datasets and these values can be used for analyzing the reasons for it. Furthermore, we build on ManiFool to propose a new adversarial training scheme and we show its effectiveness on improving the invariance properties of deep neural networks.1",
"title": ""
},
{
"docid": "0ca3676df82502041647e3c5612b0ff2",
"text": "OBJECTIVE\nTo evaluate the effects of 6 months of pool exercise combined with a 6 session education program for patients with fibromyalgia syndrome (FM).\n\n\nMETHODS\nThe study population comprised 58 patients, randomized to a treatment or a control group. Patients were instructed to match the pool exercises to their threshold of pain and fatigue. The education focused on strategies for coping with symptoms and encouragement of physical activity. The primary outcome measurements were the total score of the Fibromyalgia Impact Questionnaire (FIQ) and the 6 min walk test, recorded at study start and after 6 mo. Several other tests and instruments assessing functional limitations, severity of symptoms, disabilities, and quality of life were also applied.\n\n\nRESULTS\nSignificant differences between the treatment group and the control group were found for the FIQ total score (p = 0.017) and the 6 min walk test (p < 0.0001). Significant differences were also found for physical function, grip strength, pain severity, social functioning, psychological distress, and quality of life.\n\n\nCONCLUSION\nThe results suggest that a 6 month program of exercises in a temperate pool combined with education will improve the consequences of FM.",
"title": ""
},
{
"docid": "cef6d9eb15f00eedcb7241d62e5a1b02",
"text": "There has been a rapid increase in the use of social networking websites in the last few years. People most conveniently express their views and opinions on a wide array of topics via such websites. Sentiment analysis of such data which comprises of people's views is very important in order to gauge public opinion on a particular topic of interest. This paper reviews a number of techniques, both lexicon-based approaches as well as learning based methods that can be used for sentiment analysis of text. In order to adapt these techniques for sentiment analysis of data procured from one of the social networking websites, Twitter, a number of issues and challenges need to be addressed, which are put forward in this paper.",
"title": ""
},
{
"docid": "99d99ce673dfc4a6f5bf3e7d808a5570",
"text": "We introduce an online popularity prediction and tracking task as a benchmark task for reinforcement learning with a combinatorial, natural language action space. A specified number of discussion threads predicted to be popular are recommended, chosen from a fixed window of recent comments to track. Novel deep reinforcement learning architectures are studied for effective modeling of the value function associated with actions comprised of interdependent sub-actions. The proposed model, which represents dependence between sub-actions through a bi-directional LSTM, gives the best performance across different experimental configurations and domains, and it also generalizes well with varying numbers of recommendation requests.",
"title": ""
},
{
"docid": "fa3641ad1afc65ca0a96c68aaf87c261",
"text": "Recent work has explored methods for learning continuous vector space word representations reflecting the underlying semantics of words. Simple vector space arithmetic using cosine distances has been shown to capture certain types of analogies, such as reasoning about plurals from singulars, past tense from present tense, etc. In this paper, we introduce a new approach to capture analogies in continuous word representations, based on modeling not just individual word vectors, but rather the subspaces spanned by groups of words. We exploit the property that the set of subspaces in n-dimensional Euclidean space form a curved manifold space called the Grassmannian, a quotient subgroup of the Lie group of rotations in ndimensions. Based on this mathematical model, we develop a modified cosine distance model based on geodesic kernels that captures relation-specific distances across word categories. Our experiments on analogy tasks show that our approach performs significantly better than the previous approaches for the given task.",
"title": ""
},
{
"docid": "21a917abee792625539e7eabb3a81f4c",
"text": "This paper investigates the power operation in information system development (ISD) processes. Due to the fact that key actors in different departments possess different professional knowledge, their different contexts lead to some employees supporting IS, while others resist it to achieve their goals. We aim to interpret these power operations in ISD from the theory of technological frames. This study is based on qualitative data collected from KaoKang (pseudonym), a port authority in Taiwan. We attempt to understand the situations of different key actors (e.g. top manager, MIS professionals, employees of DP-1 division, consultants of KaoKang, and customers (outside users)) who wield power in ISD in different situations. In this respect, we interpret the data using a technological frame. Finally, we aim to gain fresh insight into power operation in ISD from this perspective.",
"title": ""
},
{
"docid": "acdcdae606f9c046aab912075d4ec609",
"text": "Community sensing, fusing information from populations of privately-held sensors, presents a great opportunity to create efficient and cost-effective sensing applications. Yet, reasonable privacy concerns often limit the access to such data streams. How should systems valuate and negotiate access to private information, for example in return for monetary incentives? How should they optimally choose the participants from a large population of strategic users with privacy concerns, and compensate them for information shared? In this paper, we address these questions and present a novel mechanism, SEQTGREEDY, for budgeted recruitment of participants in community sensing. We first show that privacy tradeoffs in community sensing can be cast as an adaptive submodular optimization problem. We then design a budget feasible, incentive compatible (truthful) mechanism for adaptive submodular maximization, which achieves near-optimal utility for a large class of sensing applications. This mechanism is general, and of independent interest. We demonstrate the effectiveness of our approach in a case study of air quality monitoring, using data collected from the Mechanical Turk platform. Compared to the state of the art, our approach achieves up to 30% reduction in cost in order to achieve a desired level of utility.",
"title": ""
},
{
"docid": "4620525bfbfd492f469e948b290d73a2",
"text": "This thesis contains the complete end-to-end simulation, development, implementation, and calibration of the wide bandwidth, low-Q, Kiwi-SAS synthetic aperture sonar (SAS). Through the use of a very stable towfish, a new novel wide bandwidth transducer design, and autofocus procedures, high-resolution diffraction limited imagery is produced. As a complete system calibration was performed, this diffraction limited imagery is not only geometrically calibrated, it is also calibrated for target cross-section or target strength estimation. Is is important to note that the diffraction limited images are formed without access to any form of inertial measurement information. Previous investigations applying the synthetic aperture technique to sonar have developed processors based on exact, but inefficient, spatial-temporal domain time-delay and sum beamforming algorithms, or they have performed equivalent operations in the frequency domain using fast-correlation techniques (via the fast Fourier transform (FFT)). In this thesis, the algorithms used in the generation of synthetic aperture radar (SAR) images are derived in their wide bandwidth forms and it is shown that these more efficient algorithms can be used to form diffraction limited SAS images. Several new algorithms are developed; accelerated chirp scaling algorithm represents an efficient method for processing synthetic aperture data, while modified phase gradient autofocus and a low-Q autofocus routine based on prominent point processing are used to focus both simulated and real target data that has been corrupted by known and unknown motion or medium propagation errors.",
"title": ""
},
{
"docid": "d5a343b290765b934b0dfdf553383bfa",
"text": "The advent of RGB-D cameras which provide synchronized range and video data creates new opportunities for exploiting both sensing modalities for various robotic applications. This paper exploits the strengths of vision and range measurements and develops a novel robust algorithm for localization using RGB-D cameras. We show how correspondences established by matching visual SIFT features can effectively initialize the generalized ICP algorithm as well as demonstrate situations where such initialization is not viable. We propose an adaptive architecture which computes the pose estimate from the most reliable measurements in a given environment and present thorough evaluation of the resulting algorithm against a dataset of RGB-D benchmarks, demonstrating superior or comparable performance in the absence of the global optimization stage. Lastly we demonstrate the proposed algorithm on a challenging indoor dataset and demonstrate improvements where pose estimation from either pure range sensing or vision techniques perform poorly.",
"title": ""
},
{
"docid": "260f7258c3739efec1910028ec429471",
"text": "Cryptography is considered to be a disciple of science of achieving security by converting sensitive information to an un-interpretable form such that it cannot be interpreted by anyone except the transmitter and intended recipient. An innumerable set of cryptographic schemes persist in which each of it has its own affirmative and feeble characteristics. In this paper we have we have developed a traditional or character oriented Polyalphabetic cipher by using a simple algebraic equation. In this we made use of iteration process and introduced a key K0 obtained by permuting the elements of a given key seed value. This key strengthens the cipher and it does not allow the cipher to be broken by the known plain text attack. The cryptanalysis performed clearly indicates that the cipher is a strong one.",
"title": ""
},
{
"docid": "8ac596c8360e2d56b24fee750d58a8b8",
"text": "Stemming is a process of reducing inflected words to their stem or root from a generally written word form. This process is used in many text mining application as a feature selection technique. Moreover, Arabic text summarization has increasingly become an important task in natural language processing area (NLP). Therefore, the aim of this paper is to evaluate the impact of three different Arabic stemmers (i.e. Khoja, Larekey and Alkhalil's stemmer) on the text summarization performance for Arabic language. The evaluation of the proposed system, with the three different stemmers and without stemming, on the dataset used shows that the best performance was achieved by Khoja stemmer in term of recall, precision and F1-measure. The evaluation also shows that the performances of the proposed system are significantly improved by applying the stemming process in the pre-processing stage.",
"title": ""
},
{
"docid": "7e6a3a04c24a0fc24012619d60ebb87b",
"text": "The recent trend toward democratization in countries throughout the globe has challenged scholars to pursue two potentially contradictory goals: to develop a differentiated conceptualization of democracy that captures the diverse experiences of these countries; and to extend the analysis to this broad range of cases without ‘stretching’ the concept. This paper argues that this dual challenge has led to a proliferation of conceptual innovations, including hundreds of subtypes of democracy—i.e., democracy ‘with adjectives.’ The paper explores the strengths and weaknesses of three important strategies of innovation that have emerged: ‘precising’ the definition of democracy; shifting the overarching concept with which democracy is associated; and generating various forms of subtypes. Given the complex structure of meaning produced by these strategies for refining the concept of democracy, we conclude by offering an old piece of advice with renewed urgency: It is imperative that scholars situate themselves in relation to this structure of meaning by clearly defining and explicating the conception of democracy they are employing.",
"title": ""
},
{
"docid": "5546cbb6fac77d2d9fffab8ba0a50ed8",
"text": "The next-generation electric power systems (smart grid) are studied intensively as a promising solution for energy crisis. One important feature of the smart grid is the integration of high-speed, reliable and secure data communication networks to manage the complex power systems effectively and intelligently. We provide in this paper a comprehensive survey on the communication architectures in the power systems, including the communication network compositions, technologies, functions, requirements, and research challenges. As these communication networks are responsible for delivering power system related messages, we discuss specifically the network implementation considerations and challenges in the power system settings. This survey attempts to summarize the current state of research efforts in the communication networks of smart grid, which may help us identify the research problems in the continued studies. 2011 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "11ed7e0742ddb579efe6e1da258b0d3c",
"text": "Supervisory Control and Data Acquisition(SCADA) systems are deeply ingrained in the fabric of critical infrastructure sectors. These computerized real-time process control systems, over geographically dispersed continuous distribution operations, are increasingly subject to serious damage and disruption by cyber means due to their standardization and connectivity to other networks. However, SCADA systems generally have little protection from the escalating cyber threats. In order to understand the potential danger and to protect SCADA systems, in this paper, we highlight their difference from standard IT systems and present a set of security property goals. Furthermore, we focus on systematically identifying and classifying likely cyber attacks including cyber-induced cyber-physical attack son SCADA systems. Determined by the impact on control performance of SCADA systems, the attack categorization criteria highlights commonalities and important features of such attacks that define unique challenges posed to securing SCADA systems versus traditional Information Technology(IT) systems.",
"title": ""
}
] |
scidocsrr
|
635687dbe5ba0279dbfe86d389cbb6a9
|
Pharmacovigilance from social media: mining adverse drug reaction mentions using sequence labeling with word embedding cluster features
|
[
{
"docid": "48c157638090b3168b6fd3cb50780184",
"text": "Adverse reactions to drugs are among the most common causes of death in industrialized nations. Expensive clinical trials are not sufficient to uncover all of the adverse reactions a drug may cause, necessitating systems for post-marketing surveillance, or pharmacovigilance. These systems have typically relied on voluntary reporting by health care professionals. However, self-reported patient data has become an increasingly important resource, with efforts such as MedWatch from the FDA allowing reports directly from the consumer. In this paper, we propose mining the relationships between drugs and adverse reactions as reported by the patients themselves in user comments to health-related websites. We evaluate our system on a manually annotated set of user comments, with promising performance. We also report encouraging correlations between the frequency of adverse drug reactions found by our system in unlabeled data and the frequency of documented adverse drug reactions. We conclude that user comments pose a significant natural language processing challenge, but do contain useful extractable information which merits further exploration.",
"title": ""
}
] |
[
{
"docid": "71265ce00f3e8823b126a1b892e2e15d",
"text": "Camera calibration has always been an essential component of photogrammetric measurement, with self-calibration nowadays being an integral and routinely applied operation within photogrammetric triangulation, especially in high-accuracy close-range measurement. With the very rapid growth in adoption of off-the-shelf digital cameras for a host of new 3D measurement applications, however, there are many situations where the geometry of the image network will not support robust recovery of camera parameters via on-the-job calibration. For this reason, stand-alone camera calibration has again emerged as an important issue in close-range photogrammetry, and it also remains a topic of research interest in computer vision. This paper overviews the current approaches adopted for camera calibration in close-range photogrammetry and computer vision, and discusses operational aspects for self-calibration. Also, the results of camera calibrations using different algorithms are summarized. Finally, the impact of chromatic aberration on modelled radial distortion is touched upon to highlight the fact that there are still issues of research interest in the photogrammetric calibration of consumer-grade digital cameras.",
"title": ""
},
{
"docid": "9b8d4b855bab5e2fdcadd1fe1632f197",
"text": "Men report more permissive sexual attitudes and behavior than do women. This experiment tested whether these differences might result from false accommodation to gender norms (distorted reporting consistent with gender stereotypes). Participants completed questionnaires under three conditions. Sex differences in self-reported sexual behavior were negligible in a bogus pipeline condition in which participants believed lying could be detected, moderate in an anonymous condition, and greatest in an exposure threat condition in which the experimenter could potentially view participants responses. This pattern was clearest for behaviors considered less acceptable for women than men (e.g., masturbation, exposure to hardcore & softcore erotica). Results suggest that some sex differences in self-reported sexual behavior reflect responses influenced by normative expectations for men and women.",
"title": ""
},
{
"docid": "b56467b5761a1294bb2b1739d6504ef2",
"text": "This paper presents the creation of a robot capable of drawing artistic portraits. The application is purely entertaining and based on existing tools for face detection and image reconstruction, as well as classical tools for trajectory planning of a 4 DOFs robot arm. The innovation of the application lies in the care we took to make the whole process as human-like as possible. The robot's motions and its drawings follow a style characteristic to humans. The portraits conserve the esthetics features of the original images. The whole process is interactive, using speech recognition and speech synthesis to conduct the scenario",
"title": ""
},
{
"docid": "a80c83fd7bdf2a8550c80c32b98352ec",
"text": "In this paper, we propose an online learning algorithm for optimal execution in the limit order book of a financial asset. Given a certain number of shares to sell and an allocated time window to complete the transaction, the proposed algorithm dynamically learns the optimal number of shares to sell via market orders at prespecified time slots within the allocated time interval. We model this problem as a Markov Decision Process (MDP), which is then solved by dynamic programming. First, we prove that the optimal policy has a specific form, which requires either selling no shares or the maximum allowed amount of shares at each time slot. Then, we consider the learning problem, in which the state transition probabilities are unknown and need to be learned on the fly. We propose a learning algorithm that exploits the form of the optimal policy when choosing the amount to trade. Interestingly, this algorithm achieves bounded regret with respect to the optimal policy computed based on the complete knowledge of the market dynamics. Our numerical results on several finance datasets show that the proposed algorithm performs significantly better than the traditional Q-learning algorithm by exploiting the structure of the problem.",
"title": ""
},
{
"docid": "69b78ff6fd67def0e0c1ee016630270b",
"text": "In the world of digitization, the growth of big data is raising at large scale with usage of high performance computing. The huge data in English and Hindi is available on internet and social media which need to be extracted or summarized in user required form. In this paper we are presenting Bilingual (Hindi and English) unsupervised automatic text summarization using deep learning. which is an important research area with in Natural Language Processing, Machine Learning and data mining, to improve result accuracy, we are using restricted Boltzmann machine to generate a shorter version of original document without losing its important information. In this algorithm we are exploring the features to improve the relevance of sentences in the dataset.",
"title": ""
},
{
"docid": "f16b013db80ad448ab31040f75b8bcb2",
"text": "In the world of recommender systems, it is a common practice to use public available datasets from different application environments (e.g. MovieLens, Book-Crossing, or Each-Movie) in order to evaluate recommendation algorithms. These datasets are used as benchmarks to develop new recommendation algorithms and to compare them to other algorithms in given settings. In this paper, we explore datasets that capture learner interactions with tools and resources. We use the datasets to evaluate and compare the performance of different recommendation algorithms for learning. We present an experimental comparison of the accuracy of several collaborative filtering algorithms applied to these TEL datasets and elaborate on implicit relevance data, such as downloads and tags, that can be used to improve the performance of recommendation algorithms.",
"title": ""
},
{
"docid": "3deced64cd17210f7e807e686c0221af",
"text": "How should we measure metacognitive (\"type 2\") sensitivity, i.e. the efficacy with which observers' confidence ratings discriminate between their own correct and incorrect stimulus classifications? We argue that currently available methods are inadequate because they are influenced by factors such as response bias and type 1 sensitivity (i.e. ability to distinguish stimuli). Extending the signal detection theory (SDT) approach of Galvin, Podd, Drga, and Whitmore (2003), we propose a method of measuring type 2 sensitivity that is free from these confounds. We call our measure meta-d', which reflects how much information, in signal-to-noise units, is available for metacognition. Applying this novel method in a 2-interval forced choice visual task, we found that subjects' metacognitive sensitivity was close to, but significantly below, optimality. We discuss the theoretical implications of these findings, as well as related computational issues of the method. We also provide free Matlab code for implementing the analysis.",
"title": ""
},
{
"docid": "b4e4cb13eae7915d101deabd02a02df8",
"text": "OBJECTIVE\nThe aim of this study was to investigate the effectiveness of a 6-week traditional exercise program with supplementary whole-body vibration (WBV) in improving health status, physical functioning, and main symptoms of fibromyalgia (FM) in women with FM.\n\n\nMETHODS\nThirty-six (36) women with FM (mean +/- standard error of the mean age 55.97 +/- 1.55) were randomized into 3 treatment groups: exercise and vibration (EVG), exercise (EG), and control (CG). Exercise therapy, consisting of aerobic activities, stretching, and relaxation techniques, was performed twice a week (90 min/day). Following each exercise session, the EVG underwent a protocol with WBV, whereas the EG performed the same protocol without vibratory stimulus. The Fibromyalgia Impact Questionnaire (FIQ) was administered at baseline and 6 weeks following the initiation of the treatments. Estimates of pain, fatigue, stiffness, and depression were also reported using the visual analogue scale.\n\n\nRESULTS\nA significant 3 x 2 (group x time)-repeated measures analysis of variance interaction was found for pain (p = 0.018) and fatigue (p = 0.002) but not for FIQ (p = 0.069), stiffness (p = 0.142), or depression (p = 0.654). Pain and fatigue scores were significantly reduced from baseline in the EVG, but not in the EG or CG. In addition, the EVG showed significantly lower pain and fatigue scores at week 6 compared to the CG, whereas no significant differences were found between the EG and CG (p > 0.05).\n\n\nCONCLUSION\nResults suggest that a 6-week traditional exercise program with supplementary WBV safely reduces pain and fatigue, whereas exercise alone fails to induce improvements.",
"title": ""
},
{
"docid": "924a9b5ff2a60a46ef3dfd8b40abb0fc",
"text": "We extend the conceptual model developed by Amelinckx et al. (2008) by relating electronic reverse auction (ERA) project outcomes to ERA project satisfaction. We formulate hypotheses about the relationships among organizational and project antecedents, a set of financial, operational, and strategic ERA project outcomes, and ERA project satisfaction. We empirically test the extended model with a sample of 180 buying professionals from ERA project teams at large global companies. Our results show that operational and strategic outcomes are positively related to ERA project satisfaction, while price savings are not. We also find positive relationships between financial outcomes and project team expertise; operational outcomes and organizational commitment, cross-functional project team composition, and procedural fairness ; and strategic outcomes and top management support, organizational commitment, and procedural fairness. An electronic reverse auction (ERA) is ''an online, real-time dynamic auction between a buying organization and a group of pre-qualified suppliers who compete against each other to win the business to supply goods or services that have clearly defined specifications for design, quantity, quality, delivery, and related terms and conditions. These suppliers compete by bidding against each other online over the Internet using specialized software by submitting successively lower priced bids during a scheduled time period'' (Beall et al. 2003). Over the past two decades, ERAs have been used in various industries, (Beall et al. 2003, Ray et al. 2011, Wang et al. 2013). ERAs are increasingly popular among buying organizations, although their use sparks controversy and ethical concerns in the sourcing world (Charki et al. 2010). Indeed, the one-sided focus on price savings in ERAs is considered to be at odds with the benefits of long-term cooperative buyer–supplier relationships (Beall et al. 2003, Hunt et al. 2006). However, several researchers have declared that ERAs are here to stay, as they are relatively easy to install and use and have resulted in positive outcomes across a range of offerings and contexts (Beall et al. 2003, Hur et al. 2006). In prior research work on ERAs, Amelinckx et al. (2008) developed a conceptual model based on an extensive review of the electronic sourcing literature and exploratory research involving multiple case studies. The authors identified operational and strategic outcomes that buying organizations can obtain in ERAs, in addition to financial gains. Furthermore, the authors asserted that the different outcomes can be obtained jointly, through the implementation of important organizational and project antecedents, and as such alleviate …",
"title": ""
},
{
"docid": "ea9f5956e09833c107d79d5559367e0e",
"text": "This research is to search for alternatives to the resolution of complex medical diagnosis where human knowledge should be apprehended in a general fashion. Successful application examples show that human diagnostic capabilities are significantly worse than the neural diagnostic system. This paper describes a modified feedforward neural network constructive algorithm (MFNNCA), a new algorithm for medical diagnosis. The new constructive algorithm with backpropagation; offer an approach for the incremental construction of near-minimal neural network architectures for pattern classification. The algorithm starts with minimal number of hidden units in the single hidden layer; additional units are added to the hidden layer one at a time to improve the accuracy of the network and to get an optimal size of a neural network. The MFNNCA was tested on several benchmarking classification problems including the cancer, heart disease and diabetes. Experimental results show that the MFNNCA can produce optimal neural network architecture with good generalization ability.",
"title": ""
},
{
"docid": "040d94d33e04889e06ddcc2241f6a4b6",
"text": "Existing chatbot knowledge bases are mostly hand-constructed, which is time consuming and difficult to adapt to new domains. Automatic chatbot knowledge acquisition method from online forums is presented in this paper. It includes a classification model based on rough set, and the theory of ensemble learning is combined to make a decision. Given a forum, multiple rough set classifiers are constructed and trained first. Then all replies are classified with these classifiers. The final recognition results are drawn by voting to the output of these classifiers. Finally, the related replies are selected as chatbot knowledge. Relevant experiments on a child-care forum prove that the method based on rough set has high recognition efficiency to related replies and the combination of ensemble learning improves the results.",
"title": ""
},
{
"docid": "ab2ba70d4e1e9d59bebb89cb70632e4a",
"text": "Binarization is an extreme network compression approach that provides large computational speedups along with energy and memory savings, albeit at significant accuracy costs. We investigate the question of where to binarize inputs at layer-level granularity and show that selectively binarizing the inputs to specific layers in the network could lead to significant improvements in accuracy while preserving most of the advantages of binarization. We analyze the binarization tradeoff using a metric that jointly models the input binarization-error and computational cost and introduce an efficient algorithm to select layers whose inputs are to be binarized. Practical guidelines based on insights obtained from applying the algorithm to a variety of models are discussed. Experiments on Imagenet dataset using AlexNet and ResNet-18 models show 3-4% improvements in accuracy over fully binarized networks with minimal impact on compression and computational speed. The improvements are even more substantial on sketch datasets like TU-Berlin, where we match state-of-the-art accuracy as well, getting over 8% increase in accuracies. We further show that our approach can be applied in tandem with other forms of compression that deal with individual layers or overall model compression (e.g., SqueezeNets). Unlike previous quantization approaches, we are able to binarize the weights in the last layers of a network, which often have a large number of parameters, resulting in significant improvement in accuracy over fully binarized models.",
"title": ""
},
{
"docid": "f94ba438b2c5079069c25602c57ef705",
"text": "Search with local intent is becoming increasingly useful due to the popularity of the mobile device. The creation and maintenance of accurate listings of local businesses world wide is time consuming and expensive. In this paper, we propose an approach to automatically discover businesses that are visible on street level imagery. Precise business store-front detection enables accurate geo-location of bu sinesses, and further provides input for business categoriza tion, listing generation,etc. The large variety of business categories in different countries makes this a very challen ging problem. Moreover, manual annotation is prohibitive due to the scale of this problem. We propose the use of a MultiBox [4] based approach that takes input image pixels and directly outputs store front bounding boxes. This end-to-end learning approach instead preempts the need for hand modelling either the proposal generation phase or the post-processing phase, leveraging large labelled trai ning datasets. We demonstrate our approach outperforms the state of the art detection techniques with a large margin in terms of performance and run-time efficiency. In the evaluation, we show this approach achieves human accuracy in the low-recall settings. We also provide an end-to-end eval uation of business discovery in the real world.",
"title": ""
},
{
"docid": "48c9877043b59f3ed69aef3cbd807de7",
"text": "This paper presents an ontology-based approach for data quality inference on streaming observation data originating from large-scale sensor networks. We evaluate this approach in the context of an existing river basin monitoring program called the Intelligent River®. Our current methods for data quality evaluation are compared with the ontology-based inference methods described in this paper. We present an architecture that incorporates semantic inference into a publish/subscribe messaging middleware, allowing data quality inference to occur on real-time data streams. Our preliminary benchmark results indicate delays of 100ms for basic data quality checks based on an existing semantic web software framework. We demonstrate how these results can be maintained under increasing sensor data traffic rates by allowing inference software agents to work in parallel. These results indicate that data quality inference using the semantic sensor network paradigm is viable solution for data intensive, large-scale sensor networks.",
"title": ""
},
{
"docid": "1c915d0ffe515aa2a7c52300d86e90ba",
"text": "This paper presents a tool developed for the purpose of assessing teaching presence in online courses that make use of computer conferencing, and preliminary results from the use of this tool. The method of analysis is based on Garrison, Anderson, and Archer’s [1] model of critical thinking and practical inquiry in a computer conferencing context. The concept of teaching presence is constitutively defined as having three categories – design and organization, facilitating discourse, and direct instruction. Indicators that we search for in the computer conference transcripts identify each category. Pilot testing of the instrument reveals interesting differences in the extent and type of teaching presence found in different graduate level online courses.",
"title": ""
},
{
"docid": "06828ad9df335232621153273fd84942",
"text": "We propose a new paradigm for searching for sound by allowing users to graphically sketch their mental representation of sound as query. By conducting interviews with professional music producers and creators, we find that existing, text-based indexing and retrieval methods based on file names and tags to search for sound material in large collections (e.g., sample databases) do not reflect their mental concepts, which are often rooted in the visual domain and hence are far from their actual needs, work practices, and intuition. As a consequence, when creating new music on the basis of existing sounds, the process of finding these sounds is cumbersome and breaks their work flow by being forced to resort to browsing the collection. Prior work on organizing sound repositories aiming at bridging this conceptual gap between sound and vision builds upon psychological findings (often alluding to synaesthetic phenomena) or makes use of ad-hoc, technology-driven mappings. These methods foremost aim at visualizing the contents of collections or individual sounds and, again, facilitating browsing therein. For the purpose of indexing and querying, such methods have not been applied yet. We argue that the development of a search system that allows for visual queries to audio collections is desired by users and should inform and drive future research in audio retrieval. To explore this notion, we test the idea of a sketch interface with music producers in a semi-structured interview process by making use of a physical non-functional prototype. Based on the outcomes of this study, we propose a conceptual software prototype for visually querying sound repositories using image manipulation metaphors.",
"title": ""
},
{
"docid": "911b5f12a7939773605f63db1d71c049",
"text": "In this paper, we propose a tool, named Static UML Model Generator from Analysis of Requirements (SUGAR), which generates both use-case and class models by emphasizing on natural language requirements. SUGAR aims at integrating both requirement analysis and design phases by identifying use-cases, actors, classes along with its attributes and methods with proper association among classes. This tool extends the idea of previously existing tools and implemented with the help of efficient natural language processing tools of Stanford NLP Group, WordNet and JavaRAP using the modified approach of Rational Unified Process with better accuracy. SUGAR has added new features and also able to incorporate solution for those problems existed in previous tools by developing both analysis and design class models. SUGAR generates all static UML models in Java in conjunction with Rational Rose and provides all functionalities of the system even though the developer is having less domain knowledge.",
"title": ""
},
{
"docid": "ae0d8d1dec27539502cd7e3030a3fe42",
"text": "Thee KL divergence is the most commonly used measure for comparing query and document language models in the language modeling framework to ad hoc retrieval. Since KL is rank equivalent to a specific weighted geometric mean, we examine alternative weighted means for language-model comparison, as well as alternative divergence measures. The study includes analysis of the inverse document frequency (IDF) effect of the language-model comparison methods. Empirical evaluation, performed with different types of queries (short and verbose) and query-model induction approaches, shows that there are methods that often outperform the KL divergence in some settings.",
"title": ""
},
{
"docid": "6d813684a21e3ccc7fb2e09c866be1f1",
"text": "Cross-site scripting (XSS) is a code injection attack that allows an attacker to execute malicious script in another user’s browser. Once the attacker gains control over the Website vulnerable to XSS attack, it can perform actions like cookie-stealing, malware-spreading, session-hijacking and malicious redirection. Malicious JavaScripts are the most conventional ways of performing XSS attacks. Although several approaches have been proposed, XSS is still a live problem since it is very easy to implement, but di cult to detect. In this paper, we propose an e↵ective approach for XSS attack detection. Our method focuses on balancing the load between client and the server. Our method performs an initial checking in the client side for vulnerability using divergence measure. If the suspicion level exceeds beyond a threshold value, then the request is discarded. Otherwise, it is forwarded to the proxy for further processing. In our approach we introduce an attribute clustering method supported by rank aggregation technique to detect confounded JavaScripts. The approach is validated using real life data.",
"title": ""
},
{
"docid": "904c8b4be916745c7d1f0777c2ae1062",
"text": "In this paper, we address the problem of continuous access control enforcement in dynamic data stream environments, where both data and query security restrictions may potentially change in real-time. We present FENCE framework that ffectively addresses this problem. The distinguishing characteristics of FENCE include: (1) the stream-centric approach to security, (2) the symmetric model for security settings of both continuous queries and streaming data, and (3) two alternative security-aware query processing approaches that can optimize query execution based on regular and security-related selectivities. In FENCE, both data and query security restrictions are modeled symmetrically in the form of security metadata, called \"security punctuations\" embedded inside data streams. We distinguish between two types of security punctuations, namely, the data security punctuations (or short, dsps) which represent the access control policies of the streaming data, and the query security punctuations (or short, qsps) which describe the access authorizations of the continuous queries. We also present our encoding method to support XACML(eXtensible Access Control Markup Language) standard. We have implemented FENCE in a prototype DSMS and present our performance evaluation. The results of our experimental study show that FENCE's approach has low overhead and can give great performance benefits compared to the alternative security solutions for streaming environments.",
"title": ""
}
] |
scidocsrr
|
7520907d2132f6aad89f90922f81b147
|
Robust representations for face recognition: The power of averages
|
[
{
"docid": "58039fbc0550c720c4074c96e866c025",
"text": "We argue that to best comprehend many data sets, plotting judiciously selected sample statistics with associated confidence intervals can usefully supplement, or even replace, standard hypothesis-testing procedures. We note that most social science statistics textbooks limit discussion of confidence intervals to their use in between-subject designs. Our central purpose in this article is to describe how to compute an analogous confidence interval that can be used in within-subject designs. This confidence interval rests on the reasoning that because between-subject variance typically plays no role in statistical analyses of within-subject designs, it can legitimately be ignored; hence, an appropriate confidence interval can be based on the standard within-subject error term-that is, on the variability due to the subject × condition interaction. Computation of such a confidence interval is simple and is embodied in Equation 2 on p. 482 of this article. This confidence interval has two useful properties. First, it is based on the same error term as is the corresponding analysis of variance, and hence leads to comparable conclusions. Second, it is related by a known factor (√2) to a confidence interval of the difference between sample means; accordingly, it can be used to infer the faith one can put in some pattern of sample means as a reflection of the underlying pattern of population means. These two properties correspond to analogous properties of the more widely used between-subject confidence interval.",
"title": ""
}
] |
[
{
"docid": "c2b6708a14988e3af68ae9a6d55d8095",
"text": "Background: The Big Five are seen as stable personality traits. This study hypothesized that their measurement via self-ratings is differentially biased by participants’ emotions. The relationship between habitual emotions and personality should be mirrored in a patterned influence of emotional states upon personality scores. Methods: We experimentally induced emotional states and compared baseline Big Five scores of ninety-eight German participants (67 female; mean age 22.2) to their scores after the induction of happiness or sadness. Manipulation checks included the induced emotion’s intensity and durability. Results: The expected differential effect could be detected for neuroticism and extraversion and as a trend for agreeableness. Post-hoc analyses showed that only sadness led to increased neuroticism and decreased extraversion scores. Oppositely, happiness did not decrease neuroticism, but there was a trend for an elevation on extraversion scores. Conclusion: Results suggest a specific effect of sadness on self-reported personality traits, particularly on neuroticism. Sadness may trigger different self-concepts in susceptible people, biasing perceived personality. This bias could be minimised by tracking participants’ emotional states prior to personality measurement.",
"title": ""
},
{
"docid": "b9b0b6974353d4cad948b0681d8bf23b",
"text": "We describe a novel approach to modeling idiosyncra tic prosodic behavior for automatic speaker recognition. The approach computes various duration , pitch, and energy features for each estimated syl lable in speech recognition output, quantizes the featur s, forms N-grams of the quantized values, and mode ls normalized counts for each feature N-gram using sup port vector machines (SVMs). We refer to these features as “SNERF-grams” (N-grams of Syllable-base d Nonuniform Extraction Region Features). Evaluation of SNERF-gram performance is conducted o n two-party spontaneous English conversational telephone data from the Fisher corpus, using one co versation side in both training and testing. Resul ts show that SNERF-grams provide significant performance ga ins when combined with a state-of-the-art baseline system, as well as with two highly successful longrange feature systems that capture word usage and lexically constrained duration patterns. Further ex periments examine the relative contributions of fea tures by quantization resolution, N-gram length, and feature type. Results show that the optimal number of bins depends on both feature type and N-gram length, but is roughly in the range of 5 to 10 bins. We find t hat longer N-grams are better than shorter ones, and th at pitch features are most useful, followed by dura tion and energy features. The most important pitch features are those capturing pitch level, whereas the most important energy features reflect patterns of risin g a d falling. For duration features, nucleus dura tion is more important for speaker recognition than are dur ations from the onset or coda of a syllable. Overal l, we find that SVM modeling of prosodic feature sequence s yields valuable information for automatic speaker recognition. It also offers rich new opportunities for exploring how speakers differ from each other i n voluntary but habitual ways.",
"title": ""
},
{
"docid": "e4f5b27939721395a78beec0fe6a6de7",
"text": "We review recent work concerning optimal proposal scalings for Metropolis-Hastings MCMC algorithms, and adaptive MCMC algorithms for trying to improve the algorithm on the fly.",
"title": ""
},
{
"docid": "0b71777f8b4d03fb147ff41d1224136e",
"text": "Mobile broadband demand keeps growing at an overwhelming pace. Though emerging wireless technologies will provide more bandwidth, the increase in demand may easily consume the extra bandwidth. To alleviate this problem, we propose using the content available on individual devices as caches. Particularly, when a user reaches areas with dense clusters of mobile devices, \"data spots\", the operator can instruct the user to connect with other users sharing similar interests and serve the requests locally. This paper presents feasibility study as well as prototype implementation of this idea.",
"title": ""
},
{
"docid": "b12925c3dd50b2d350c705b8bbc982a3",
"text": "K-means clustering has been widely used in processing large datasets in many fields of studies. Advancement in many data collection techniques has been generating enormous amount of data, leaving scientists with the challenging task of processing them. Using General Purpose Processors or GPPs to process large datasets may take a long time, therefore many acceleration methods have been proposed in the literature to speed-up the processing of such large datasets. In this work, we propose a parameterized Field Programmable Gate Array (FPGA) implementation of the Kmeans algorithm and compare it with previous FPGA implementation as well as recent implementations on Graphics Processing Units (GPUs) and with GPPs. The proposed FPGA implementation has shown higher performance in terms of speed-up over previous FPGA GPU and GPP implementations, and is more energy efficient.",
"title": ""
},
{
"docid": "be3721ebf2c55972146c3e87aee475ba",
"text": "Advances in computation and communication are taking shape in the form of the Internet of Things, Machine-to-Machine technology, Industry 4.0, and Cyber-Physical Systems (CPS). The impact on engineering such systems is a new technical systems paradigm based on ensembles of collaborating embedded software systems. To successfully facilitate this paradigm, multiple needs can be identified along three axes: (i) online configuring an ensemble of systems, (ii) achieving a concerted function of collaborating systems, and (iii) providing the enabling infrastructure. This work focuses on the collaborative function dimension and presents a set of concrete examples of CPS challenges. The examples are illustrated based on a pick and place machine that solves a distributed version of the Towers of Hanoi puzzle. The system includes a physical environment, a wireless network, concurrent computing resources, and computational functionality such as, service arbitration, various forms of control, and processing of streaming video. The pick and place machine is of medium-size complexity. It is representative of issues occurring in industrial systems that are coming online. The entire study is provided at a computational model level, with the intent to contribute to the model-based research agenda in terms of design methods and implementation technologies necessary to make the next generation systems a reality.",
"title": ""
},
{
"docid": "2e0da6288ec95c989afa84811a0aea6e",
"text": "Graph keyword search has drawn many research interests, since graph models can generally represent both structured and unstructured databases and keyword searches can extract valuable information for users without the knowledge of the underlying schema and query language. In practice, data graphs can be extremely large, e.g., a Web-scale graph containing billions of vertices. The state-of-the-art approaches employ centralized algorithms to process graph keyword searches, and thus they are infeasible for such large graphs, due to the limited computational power and storage space of a centralized server. To address this problem, we investigate keyword search for Web-scale graphs deployed in a distributed environment. We first give a naive search algorithm to answer the query efficiently. However, the naive search algorithm uses a flooding search strategy that incurs large time and network overhead. To remedy this shortcoming, we then propose a signature-based search algorithm. Specifically, we design a vertex signature that encodes the shortest-path distance from a vertex to any given keyword in the graph. As a result, we can find query answers by exploring fewer paths, so that the time and communication costs are low. Moreover, we reorganize the graph data in the cluster after its initial random partitioning so that the signature-based techniques are more effective. Finally, our experimental results demonstrate the feasibility of our proposed approach in performing keyword searches over Web-scale graph data.",
"title": ""
},
{
"docid": "8014c32fa820e1e2c54e1004b62dc33e",
"text": "Signature-based malicious code detection is the standard technique in all commercial anti-virus software. This method can detect a virus only after the virus has appeared and caused damage. Signature-based detection performs poorly whe n attempting to identify new viruses. Motivated by the standard signature-based technique for detecting viruses, and a recent successful text classification method, n-grams analysis, we explo re the idea of automatically detecting new malicious code. We employ n-grams analysis to automatically generate signatures from malicious and benign software collections. The n-gramsbased signatures are capable of classifying unseen benign and malicious code. The datasets used are large compared to earlier applications of n-grams analysis.",
"title": ""
},
{
"docid": "20cd8e811764d1432eeb60a0f4fa58d2",
"text": "PURPOSE\nThis article presents a review of the literature on biomechanical factors affecting the treatment outcome of prosthetic treatment of structurally compromised dentitions, with the main emphasis on often-compromised endodontically treated teeth.\n\n\nMATERIALS AND METHODS\nArticles cited in a MEDLINE/PubMed search were reviewed with a focus on factors influencing the risk for fatigue failures.\n\n\nRESULTS\nTechnical failures in connection with fixed prosthodontics are often caused by fatigue fractures. The abutments, cement, and reconstruction are all subjected to stress caused by occlusal forces, and fatigue fracture may occur at the weakest point or where the maximum stress occurs. The weakest point is frequently in connection with endodontically treated teeth restored with posts and cores.\n\n\nCONCLUSION\nThe literature points to nonaxial forces as a risk for fatigue fracture of teeth, cement, and restorative material. Favorable occlusal prosthesis design is probably more important for survival of structurally compromised endodontically treated teeth than is the type of post used.",
"title": ""
},
{
"docid": "5e9dce428a2bcb6f7bc0074d9fe5162c",
"text": "This paper describes a real-time motion planning algorithm, based on the rapidly-exploring random tree (RRT) approach, applicable to autonomous vehicles operating in an urban environment. Extensions to the standard RRT are predominantly motivated by: 1) the need to generate dynamically feasible plans in real-time; 2) safety requirements; 3) the constraints dictated by the uncertain operating (urban) environment. The primary novelty is in the use of closed-loop prediction in the framework of RRT. The proposed algorithm was at the core of the planning and control software for Team MIT's entry for the 2007 DARPA Urban Challenge, where the vehicle demonstrated the ability to complete a 60 mile simulated military supply mission, while safely interacting with other autonomous and human driven vehicles.",
"title": ""
},
{
"docid": "a92d7d73204f29f84e1213d8f1b8fbdd",
"text": "Directional arrays of branched microscopic setae constitute a dry adhesive on the toes of pad-bearing geckos, nature's supreme climbers. Geckos are easily and rapidly able to detach their toes as they climb. There are two known mechanisms of detachment: (1) on the microscale, the seta detaches when the shaft reaches a critical angle with the substrate, and (2) on the macroscale, geckos hyperextend their toes, apparently peeling like tape. This raises the question of how geckos prevent detachment while inverted on the ceiling, where body weight should cause toes to peel and setal angles to increase. Geckos use opposing feet and toes while inverted, possibly to maintain shear forces that prevent detachment of setae or peeling of toes. If detachment occurs by macroscale peeling of toes, the peel angle should monotonically decrease with applied force. In contrast, if adhesive force is limited by microscale detachment of setae at a critical angle, the toe detachment angle should be independent of applied force. We tested the hypothesis that adhesion is increased by shear force in isolated setal arrays and live gecko toes. We also tested the corollary hypotheses that (1) adhesion in toes and arrays is limited as on the microscale by a critical angle, or (2) on the macroscale by adhesive strength as predicted for adhesive tapes. We found that adhesion depended directly on shear force, and was independent of detachment angle. Therefore we reject the hypothesis that gecko toes peel like tape. The linear relation between adhesion and shear force is consistent with a critical angle of release in live gecko toes and isolated setal arrays, and also with our prior observations of single setae. We introduced a new model, frictional adhesion, for gecko pad attachment and compared it to existing models of adhesive contacts. In an analysis of clinging stability of a gecko on an inclined plane each adhesive model predicted a different force control strategy. The frictional adhesion model provides an explanation for the very low detachment forces observed in climbing geckos that does not depend on toe peeling.",
"title": ""
},
{
"docid": "73973ae6c858953f934396ab62276e0d",
"text": "The unsolicited bulk messages are widespread in the applications of short messages. Although the existing spam filters have satisfying performance, they are facing the challenge of an adversary who misleads the spam filters by manipulating samples. Until now, the vulnerability of spam filtering technique for short messages has not been investigated. Different from the other spam applications, a short message only has a few words and its length usually has an upper limit. The current adversarial learning algorithms may not work efficiently in short message spam filtering. In this paper, we investigate the existing good word attack and its counterattack method, i.e. the feature reweighting, in short message spam filtering in an effort to understand whether, and to what extent, they can work efficiently when the length of a message is limited. This paper proposes a good word attack strategy which maximizes the influence to a classifier with the least number of inserted characters based on the weight values and also the length of words. On the other hand, we also proposes the feature reweighting method with a new rescaling function which minimizes the importance of the feature representing a short word in order to require more inserted characters for a successful evasion. The methods are evaluated experimentally by using the SMS and the comment spam dataset. The results confirm that the length of words is a critical factor of the robustness of short message spam filtering to good word attack. & 2014 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "aec14ffcc8e2f2cea1e00fd6f0a0d425",
"text": "BACKGROUND\nOne of the reasons women with macromastia chose to undergo a breast reduction is to relieve their complaints of back, neck, and shoulder pain. We hypothesized that changes in posture after surgery may be the reason for the pain relief and that patient posture may correlate with symptomatic macromastia and may serve as an objective measure for complaints. The purpose of our study was to evaluate the effect of reduction mammaplasty on the posture of women with macromastia.\n\n\nMETHODS\nA prospective controlled study at a university medical center. Forty-two patients that underwent breast reduction were studied before surgery and an average of 4.3 years following surgery. Thirty-seven healthy women served as controls. Standardized lateral photos were taken. The inclination angle of the back was measured. Regression analysis was performed for the inclination angle.\n\n\nRESULTS\nPreoperatively, the mean inclination angle was 1.61 degrees ventrally; this diminished postoperatively to 0.72 degrees ventrally. This change was not significant (P-value=0.104). In the control group that angle was 0.28 degrees dorsally. Univariate regression analysis revealed that the inclination was dependent on body mass index (BMI) and having symptomatic macromastia; on multiple regression it was only dependent on BMI.\n\n\nCONCLUSIONS\nThe inclination angle of the back in breast reduction candidates is significantly different from that of controls; however, this difference is small and probably does not account for the symptoms associated with macromastia. Back inclination should not be used as a surrogate \"objective\" measure for symptomatic macromastia.",
"title": ""
},
{
"docid": "e5a3119470420024b99df2d6eb14b966",
"text": "Why should wait for some days to get or receive the rules of play game design fundamentals book that you order? Why should you take it if you can get the faster one? You can find the same book that you order right here. This is it the book that you can receive directly after purchasing. This rules of play game design fundamentals is well known book in the world, of course many people will try to own it. Why don't you become the first? Still confused with the way?",
"title": ""
},
{
"docid": "ca29fee64e9271e8fce675e970932af1",
"text": "This paper considers univariate online electricity demand forecasting for lead times from a half-hour-ahead to a day-ahead. A time series of demand recorded at half-hourly intervals contains more than one seasonal pattern. A within-day seasonal cycle is apparent from the similarity of the demand profile from one day to the next, and a within-week seasonal cycle is evident when one compares the demand on the corresponding day of adjacent weeks. There is strong appeal in using a forecasting method that is able to capture both seasonalities. The multiplicative seasonal ARIMA model has been adapted for this purpose. In this paper, we adapt the Holt-Winters exponential smoothing formulation so that it can accommodate two seasonalities. We correct for residual autocorrelation using a simple autoregressive model. The forecasts produced by the new double seasonal Holt-Winters method outperform those from traditional Holt-Winters and from a well-specified multiplicative double seasonal ARIMA model.",
"title": ""
},
{
"docid": "ca6b0e6e97054bf70cee8179114d94f1",
"text": "Although the maximum transmission speed in IEEE 802.11a WLAN is 54 Mbps, the real throughput is actually limited to 20~30 Mbps. Except for the main effect from multi-path, we should also consider some non-ideal effects from imperfect hardware design, such as the IQ imbalance from direct conversion in RF front-end. IQ imbalance is not apparent in lower-order QAM modulation. However, in higher-order QAM modulation, it will become serious interference. In this paper, an IQ imbalance compensation circuit in IEEE802.11a baseband receiver is proposed. A low complexity time-domain compensation algorithm is used to replace the traditional high-order equalizer. MATLAB is used to simulate the whole transceiver including the channel model. After system verification, we use Verilog to implement the IQ imbalance compensation circuit with UMC 0.18 mum CMOS 1p6m technology. Post-layout simulation results show that this scheme contributes to a very robust and easily implemented OFDM WLAN receiver",
"title": ""
},
{
"docid": "55bee435842ff69aec83c280d8ba506b",
"text": "We propose a fully automatic and computationally efficient framework for analysis and summarization of soccer videos using cinematic and object-based features. The proposed framework includes some novel low-level processing algorithms, such as dominant color region detection, robust shot boundary detection, and shot classification, as well as some higher-level algorithms for goal detection, referee detection, and penalty-box detection. The system can output three types of summaries: i) all slow-motion segments in a game; ii) all goals in a game; iii) slow-motion segments classified according to object-based features. The first two types of summaries are based on cinematic features only for speedy processing, while the summaries of the last type contain higher-level semantics. The proposed framework is efficient, effective, and robust. It is efficient in the sense that there is no need to compute object-based features when cinematic features are sufficient for the detection of certain events, e.g., goals in soccer. It is effective in the sense that the framework can also employ object-based features when needed to increase accuracy (at the expense of more computation). The efficiency, effectiveness, and robustness of the proposed framework are demonstrated over a large data set, consisting of more than 13 hours of soccer video, captured in different countries and under different conditions.",
"title": ""
},
{
"docid": "588fa44a37a2f932182f01f0f0010f3e",
"text": "High attrition rates in massive open online courses (MOOCs) have motivated growing interest in the automatic detection of student “stopout”. Stopout classifiers can be used to orchestrate an intervention before students quit, and to survey students dynamically about why they ceased participation. In this paper we expand on existing stop-out detection research by (1) exploring important elements of classifier design such as generalizability to new courses; (2) developing a novel framework inspired by control theory for how to use a classifier’s outputs to make intelligent decisions; and (3) presenting results from a “dynamic survey intervention” conducted on 2 HarvardX MOOCs, containing over 40000 students, in early 2015. Our results suggest that surveying students based on an automatic stopout classifier achieves higher response rates compared to traditional post-course surveys, and may boost students’ propensity to “come back” into the course.",
"title": ""
},
{
"docid": "3f1939623798f46dec5204793bedab9e",
"text": "Predictive business process monitoring exploits event logs to predict how ongoing (uncompleted) cases will unfold up to their completion. A predictive process monitoring framework collects a range of techniques that allow users to get accurate predictions about the achievement of a goal or about the time required for such an achievement for a given ongoing case. These techniques can be combined and their parameters configured in different framework instances. Unfortunately, a unique framework instance that is general enough to outperform others for every dataset, goal or type of prediction is elusive. Thus, the selection and configuration of a framework instance needs to be done for a given dataset. This paper presents a predictive process monitoring framework armed with a hyperparameter optimization method to select a suitable framework instance for a given dataset.",
"title": ""
},
{
"docid": "d8d9bc717157d03c884962999c514033",
"text": "Topic models have been widely used to identify topics in text corpora. It is also known that purely unsupervised models often result in topics that are not comprehensible in applications. In recent years, a number of knowledge-based models have been proposed, which allow the user to input prior knowledge of the domain to produce more coherent and meaningful topics. In this paper, we go one step further to study how the prior knowledge from other domains can be exploited to help topic modeling in the new domain. This problem setting is important from both the application and the learning perspectives because knowledge is inherently accumulative. We human beings gain knowledge gradually and use the old knowledge to help solve new problems. To achieve this objective, existing models have some major difficulties. In this paper, we propose a novel knowledge-based model, called MDK-LDA, which is capable of using prior knowledge from multiple domains. Our evaluation results will demonstrate its effectiveness.",
"title": ""
}
] |
scidocsrr
|
efbc9851e9e54f6f4ad4a13ee48083bc
|
Image Compression: Sparse Coding vs. Bottleneck Autoencoders
|
[
{
"docid": "a3960e34df2846baa277389ba01229de",
"text": "Single image super-resolution is the task of inferring a high-resolution image from a single low-resolution input. Traditionally, the performance of algorithms for this task is measured using pixel-wise reconstruction measures such as peak signal-to-noise ratio (PSNR) which have been shown to correlate poorly with the human perception of image quality. As a result, algorithms minimizing these metrics tend to produce over-smoothed images that lack highfrequency textures and do not look natural despite yielding high PSNR values.,,We propose a novel application of automated texture synthesis in combination with a perceptual loss focusing on creating realistic textures rather than optimizing for a pixelaccurate reproduction of ground truth images during training. By using feed-forward fully convolutional neural networks in an adversarial training setting, we achieve a significant boost in image quality at high magnification ratios. Extensive experiments on a number of datasets show the effectiveness of our approach, yielding state-of-the-art results in both quantitative and qualitative benchmarks.",
"title": ""
}
] |
[
{
"docid": "8fa9a91bb08c82830140e484456c5a16",
"text": "Artificial intelligence (AI) is an extensive scientific discipline which enables computer systems to solve problems by emulating complex biological processes such as learning, reasoning and self-correction. This paper presents a comprehensive review of the application of AI techniques for improving performance of optical communication systems and networks. The use of AI-based techniques is first studied in applications related to optical transmission, ranging from the characterization and operation of network components to performance monitoring, mitigation of nonlinearities, and quality of transmission estimation. Then, applications related to optical network control and management are also reviewed, including topics like optical network planning and operation in both transport and access networks. Finally, the paper also presents a summary of opportunities and challenges in optical networking where AI is expected to play a key role in the near future.",
"title": ""
},
{
"docid": "add873f3e77fff9be117bbc7e904b8ca",
"text": "As software continues to grow, locating code for maintenance tasks becomes increasingly difficult. Software search tools help developers find source code relevant to their maintenance tasks. One major challenge to successful search tools is locating relevant code when the user's query contains words with multiple meanings or words that occur frequently throughout the program. Traditional search techniques, which treat each word individually, are unable to distinguish relevant and irrelevant methods under these conditions. In this paper, we present a novel search technique that uses information such as the position of the query word and its semantic role to calculate relevance. Our evaluation shows that this approach is more consistently effective than three other state of the art search techniques.",
"title": ""
},
{
"docid": "269387b9115c35ea339184bd175224d2",
"text": "Whereas outdoor navigation systems typically rely upon GPS, indoor systems have to rely upon different techniques for localizing the user, as GPS signals cannot be received indoors. Over the past decade various indoor navigation systems have been developed. This paper provides a comprehensive overview of existing indoor navigation systems and analyzes the different techniques used for: (1) locating the user; (2) planning a path; (3) representing the environment; and (4) interacting with the user. Our survey identifies a number of research issues that could facilitate large scale deployment of indoor navigation systems.",
"title": ""
},
{
"docid": "d5a6e7add07b104e2a285c139ae1b727",
"text": "Breakfast skipping is common in adolescents, but research on the effects of breakfast skipping on school performance is scarce. This current cross-sectional survey study of 605 adolescents aged 11–18 years investigated whether adolescents who habitually skip breakfast have lower endof-term grades than adolescents who eat breakfast daily. Additionally, the roles of sleep behavior, namely chronotype, and attention were explored. Results showed that breakfast skippers performed lower at school than breakfast eaters. The findings were similar for younger and older adolescents and for boys and girls. Adolescents with an evening chronotype were more likely to skip breakfast, but chronotype was unrelated to school performance. Furthermore, attention problems partially mediated the relation between breakfast skipping and school performance. This large-scale study emphasizes the importance of breakfast as a determinant for school performance. The results give reason to investigate the mechanisms underlying the relation between skipping breakfast, attention, and school performance in more detail. Proper nutrition is commonly believed to be important for school performance; it is considered to be an essential prerequisite for the potential to learn in children (Taras, 2005). In the Western world, where most school-aged children are well nourished, emphasis is placed on eating breakfast for optimal school performance. Eating breakfast might be particularly important during adolescence. Adolescents have 1Department of Educational Neuroscience, VU University Amsterdam 2Centre for Learning Sciences and Technology, Open Universiteit Nederland 3School for Mental Health and Neuroscience, Maastricht University Address correspondence to Annemarie Boschloo, Faculty of Psychology and Education, VU University Amsterdam, Van der Boechorststraat 1, 1081 BT Amsterdam, The Netherlands; e-mail: a.m.boschloo@vu.nl. high nutritional needs, due to brain development processes and physical growth, while at the same time they have the highest rate of breakfast skipping among school-aged children (Hoyland, Dye, & Lawton, 2009; Rampersaud, 2009). However, not much is known about the effects of breakfast skipping on their school performance. Reviews indicate that only few studies have investigated the relationship between breakfast skipping and school performance in adolescents (Ells et al., 2008; Hoyland et al., 2009; Rampersaud, 2009; Taras, 2005). Therefore, the current study investigated the relation between habitual breakfast consumption and school performance in adolescents attending secondary school (age range 11–18 years). In addition, we explored two potentially important mechanisms underlying this relationship by investigating the roles of sleep behavior and attention. Depending on the definition of breakfast skipping, 10–30% of the adolescents (age range 11–18 years) can be classified as breakfast skippers (Rampersaud, Pereira, Girard, Adams, & Metzl, 2005). Adolescent breakfast skippers are more often girls and more often have a lower level of education (Keski-Rahkonen, Kaprio, Rissanen, Virkkunen, & Rose, 2003; Rampersaud et al., 2005; Shaw, 1998). Adolescent breakfast skippers are characterized by an unhealthy lifestyle, with behaviors such as smoking, irregular exercise, and alcohol and drug use. They make more unhealthy food choices and have a higher body mass index than breakfast eaters. Furthermore, they show more disinhibited behavior (Keski-Rahkonen et al., 2003; Rampersaud et al., 2005). 
Reasons adolescents give for skipping breakfast are that they are not hungry or do not have enough time (Shaw, 1998), although dieting seems to play a role as well (Rampersaud et al., 2005; Shaw, 1998). Experimental studies have investigated the relationship between breakfast skipping and cognitive functioning, which is assumed to underlie school performance. Breakfast skipping in children and adolescents appeared to affect memory and attention, especially toward the end of the morning (Ells et al., 2008; Hoyland et al., 2009; Rampersaud et al., 2005).",
"title": ""
},
{
"docid": "661a5c7f49d4232f61a4a2ee0c1ddbff",
"text": "Power is now a first-order design constraint in large-scale parallel computing. Used carefully, dynamic voltage scaling can execute parts of a program at a slower CPU speed to achieve energy savings with a relatively small (possibly zero) time delay. However, the problem of when to change frequencies in order to optimize energy savings is NP-complete, which has led to many heuristic energy-saving algorithms. To determine how closely these algorithms approach optimal savings, we developed a system that determines a bound on the energy savings for an application. Our system uses a linear programming solver that takes as inputs the application communication trace and the cluster power characteristics and then outputs a schedule that realizes this bound. We apply our system to three scientific programs, two of which exhibit load imbalance---particle simulation and UMT2K. Results from our bounding technique show particle simulation is more amenable to energy savings than UMT2K.",
"title": ""
},
{
"docid": "8c29f90a844a7f38d0b622d7729eaa9e",
"text": "One of the challenges in 3D shape matching arises from the fact that in many applications, models should be considered to be the same if they differ by a rotation. Consequently, when comparing two models, a similarity metric implicitly provides the measure of similarity at the optimal alignment. Explicitly solving for the optimal alignment is usually impractical. So, two general methods have been proposed for addressing this issue: (1) Every model is represented using rotation invariant descriptors. (2) Every model is described by a rotation dependent descriptor that is aligned into a canonical coordinate system defined by the model. In this paper, we describe the limitations of canonical alignment and discuss an alternate method, based on spherical harmonics, for obtaining rotation invariant representations. We describe the properties of this tool and show how it can be applied to a number of existing, orientation dependent descriptors to improve their matching performance. The advantages of this tool are two-fold: First, it improves the matching performance of many descriptors. Second, it reduces the dimensionality of the descriptor, providing a more compact representation, which in turn makes comparing two models more efficient.",
"title": ""
},
{
"docid": "b8fe5687c8b18a8cfdac14a198b77033",
"text": "1 Sia Siew Kien, Michael Rosemann and Phillip Yetton are the accepting senior editors for this article. 2 This research was partly funded by an Australian Research Council Discovery grant. The authors are grateful to the interviewees, whose willingness to share their valuable insights and experiences made this study possible, and to the senior editors and reviewers for their very helpful feedback and advice throughout the review process. 3 All quotes in this article are from employees of “RetailCo,” the subject of this case study. The names of the organization and its business divisions have been anonymized. 4 A digital business platform is “an integrated set of electronic business processes and the technologies, applications and data supporting those processes” Weill, P. and Ross, J. W. IT Savvy: What Top Executives Must Know to Go from Pain to Gain, Harvard Business School Publishing, 2009, p. 4; for more on digitized platforms, see pp. 67-87 of this publication. How an Australian Retailer Enabled Business Transformation Through Enterprise Architecture",
"title": ""
},
{
"docid": "72ad9915e3f4afb9be4528ac04a9e5aa",
"text": "A sensor isolation system was developed to reduce vibrational and noise effects on MEMS IMU sensors. A single degree of freedom model of an isolator was developed and simulated. Then a prototype was constructed for use with a Microstrain 3DM-GX3-25 IMU sensor and experimentally tested on a six DOF motion platform. An order of magnitude noise reduction was observed on the z accelerometer up to seven Hz. The isolator was then deployed on a naval ship along with a DMS TSS-25 IMU used as a truth measurement and a rigid mounted 3DM sensor was used for comparison. Signal quality improvements of the IMU were characterized and engine noise at 20 Hz was reduced by tenfold on x, y, and z accelerometers. A heave estimation algorithm was implemented and several types of filters were evaluated. Lab testing with a six DOF motion platform with pure sinusoidal motion, a fixed frequency four pole bandpass filter provided the least heave error at 12.5% of full scale or 0.008m error. When the experimental sea data was analyzed a fixed three pole highpass filter yielded the most accurate results of the filters tested. A heave period estimator was developed to adjust the filter cutoff frequencies for varying sea conditions. Since the ship motions were small, the errors w.r.t. full scale were rather large at 78% RMS as a worst case and 44% for a best case. In absolute terms when the variable filters and isolator were implemented, the best case peak and RMS errors were 0.015m and 0.050m respectively. The isolator improves the heave accuracy by 200% to 570% when compared with a rigidly mounted 3DM sensor.",
"title": ""
},
{
"docid": "611f7b5564c9168f73f778e7466d1709",
"text": "A fold-back current-limit circuit, with load-insensitive quiescent current characteristic for CMOS low dropout regulator (LDO), is proposed in this paper. This method has been designed in 0.35 µm CMOS technology and verified by Hspice simulation. The quiescent current of the LDO is 5.7 µA at 100-mA load condition. It is only 2.2% more than it in no-load condition, 5.58 µA. The maximum current limit is set to be 197 mA, and the short-current limit is 77 mA. Thus, the power consumption can be saved up to 61% at the short-circuit condition, which also decreases the risk of damaging the power transistor. Moreover, the thermal protection can be simplified and the LDO will be more reliable.",
"title": ""
},
{
"docid": "cb2d8e7b01de6cdb5a303a38cc11e211",
"text": "Developing sensor network applications demands a new set of tools to aid programmers. A number of simulation environments have been developed that provide varying degrees of scalability, realism, and detail for understanding the behavior of sensor networks. To date, however, none of these tools have addressed one of the most important aspects of sensor application design: that of power consumption. While simple approximations of overall power usage can be derived from estimates of node duty cycle and communication rates, these techniques often fail to capture the detailed, low-level energy requirements of the CPU, radio, sensors, and other peripherals.\n In this paper, we present, a scalable simulation environment for wireless sensor networks that provides an accurate, per-node estimate of power consumption. PowerTOSSIM is an extension to TOSSIM, an event-driven simulation environment for TinyOS applications. In PowerTOSSIM, TinyOS components corresponding to specific hardware peripherals (such as the radio, EEPROM, LEDs, and so forth) are instrumented to obtain a trace of each device's activity during the simulation runPowerTOSSIM employs a novel code-transformation technique to estimate the number of CPU cycles executed by each node, eliminating the need for expensive instruction-level simulation of sensor nodes. PowerTOSSIM includes a detailed model of hardware energy consumption based on the Mica2 sensor node platform. Through instrumentation of actual sensor nodes, we demonstrate that PowerTOSSIM provides accurate estimation of power consumption for a range of applications and scales to support very large simulations.",
"title": ""
},
{
"docid": "f497ae2f4e4188f483fe8ffa10d2e0e9",
"text": "Contemporary deep neural networks exhibit impressive results on practical problems. These networks generalize well although their inherent capacity may extend significantly beyond the number of training examples. We analyze this behavior in the context of deep, infinite neural networks. We show that deep infinite layers are naturally aligned with Gaussian processes and kernel methods, and devise stochastic kernels that encode the information of these networks. We show that stability results apply despite the size, offering an explanation for their empir-",
"title": ""
},
{
"docid": "f17a6c34a7b3c6a7bf266f04e819af94",
"text": "BACKGROUND\nPatients with advanced squamous-cell non-small-cell lung cancer (NSCLC) who have disease progression during or after first-line chemotherapy have limited treatment options. This randomized, open-label, international, phase 3 study evaluated the efficacy and safety of nivolumab, a fully human IgG4 programmed death 1 (PD-1) immune-checkpoint-inhibitor antibody, as compared with docetaxel in this patient population.\n\n\nMETHODS\nWe randomly assigned 272 patients to receive nivolumab, at a dose of 3 mg per kilogram of body weight every 2 weeks, or docetaxel, at a dose of 75 mg per square meter of body-surface area every 3 weeks. The primary end point was overall survival.\n\n\nRESULTS\nThe median overall survival was 9.2 months (95% confidence interval [CI], 7.3 to 13.3) with nivolumab versus 6.0 months (95% CI, 5.1 to 7.3) with docetaxel. The risk of death was 41% lower with nivolumab than with docetaxel (hazard ratio, 0.59; 95% CI, 0.44 to 0.79; P<0.001). At 1 year, the overall survival rate was 42% (95% CI, 34 to 50) with nivolumab versus 24% (95% CI, 17 to 31) with docetaxel. The response rate was 20% with nivolumab versus 9% with docetaxel (P=0.008). The median progression-free survival was 3.5 months with nivolumab versus 2.8 months with docetaxel (hazard ratio for death or disease progression, 0.62; 95% CI, 0.47 to 0.81; P<0.001). The expression of the PD-1 ligand (PD-L1) was neither prognostic nor predictive of benefit. Treatment-related adverse events of grade 3 or 4 were reported in 7% of the patients in the nivolumab group as compared with 55% of those in the docetaxel group.\n\n\nCONCLUSIONS\nAmong patients with advanced, previously treated squamous-cell NSCLC, overall survival, response rate, and progression-free survival were significantly better with nivolumab than with docetaxel, regardless of PD-L1 expression level. (Funded by Bristol-Myers Squibb; CheckMate 017 ClinicalTrials.gov number, NCT01642004.).",
"title": ""
},
{
"docid": "041b308fe83ac9d5a92e33fd9c84299a",
"text": "Spaceborne synthetic aperture radar systems are severely constrained to a narrow swath by ambiguity limitations. Here a vertically scanned-beam synthetic aperture system (SCANSAR) is proposed as a solution to this problem. The potential length of synthetic aperture must be shared between beam positions, so the along-track resolution is poorer; a direct tradeoff exists between resolution and swath width. The length of the real aperture is independently traded against the number of scanning positions. Design curves and equations are presented for spaceborne SCANSARs for altitudes between 400 and 1400 km and inner angles of incidence between 20° and 40°. When the real antenna is approximately square, it may also be used for a microwave radiometer. The combined radiometer and synthetic-aperture (RADISAR) should be useful for those applications where the poorer resolution of the radiometer is useful for some purposes, but the finer resolution of the radar is needed for others.",
"title": ""
},
{
"docid": "13f1b9cf251b3b37de00cb68b17652c0",
"text": "This is an updated and expanded version of TR2000-26, but it is still in draft form. More importantly, our analysis lets us build on the progress made in statistical physics since Bethe’s approximation was introduced in 1935. Kikuchi and others have shown how to construct more accurate free energy approximations, of which Bethe’s approximation is the simplest. Exploiting the insights from our analysis, we derive generalized belief propagation (GBP) versions of these Kikuchi approximations. These new message passing algorithms can be significantly more accurate than ordinary BP, at an adjustable increase in complexity. We illustrate such a new GBP algorithm on a grid Markov network and show that it gives much more accurate marginal probabilities than those found using ordinary BP. This work may not be copied or reproduced in whole or in part for any commercial purpose. Permission to copy in whole or in part without payment of fee is granted for nonprofit educational and research purposes provided that all such whole or partial copies include the following: a notice that such copying is by permission of Mitsubishi Electric Research Laboratories, Inc.; an acknowledgment of the authors and individual contributions to the work; and all applicable portions of the copyright notice. Copying, reproduction, or republishing for any other purpose shall require a license with payment of fee to Mitsubishi Electric Research Laboratories, Inc. All rights reserved. Copyright c ©Mitsubishi Electric Research Laboratories, Inc., 2001 201 Broadway, Cambridge, Massachusetts 02139",
"title": ""
},
{
"docid": "4ff50e433ba7a5da179c7d8e5e05cb22",
"text": "Social network information is now being used in ways for which it may have not been originally intended. In particular, increased use of smartphones capable ofrunning applications which access social network information enable applications to be aware of a user's location and preferences. However, current models forexchange of this information require users to compromise their privacy and security. We present several of these privacy and security issues, along withour design and implementation of solutions for these issues. Our work allows location-based services to query local mobile devices for users' social network information, without disclosing user identity or compromising users' privacy and security. We contend that it is important that such solutions be acceptedas mobile social networks continue to grow exponentially.",
"title": ""
},
{
"docid": "dd0074bd8b057002efc02e17f69d3ad1",
"text": "The purpose of this study is to recognize modeling methods for coal combustion and gasification in commercial process analysis codes. Many users have appreciated the reliability of commercial process analysis simulation codes; however, it is necessary to understand the physical meaning and limitations of the modeling results. Modeling of coal gasification phenomena has been embodied in commercial process analysis simulators such as Aspen. Commercial code deals with modeling of the gasification system with a number of reactor blocks supported by the specific code, not as a coal gasifier. However, the primary purpose of using process analysis simulation code is to interpret the whole plant cycle rather than an individual unit such as a gasifier. Equilibrium models of a coal gasifier are generally adopted in the commercial codes, where the method of Gibbs free energy minimization of chemical species is applied at the given temperature and pressure. The equilibrium model of the coal gasifier, RGibbs, in commercial codes provides users with helpful information, such as exit syngas temperature, composition, flow rate, performance of coal gasifier model, etc. with various input and operating conditions. This simulation code is being used to generate simple and fast response of results. Limitations and uncertainties are interpreted in the view of the gasification process, chemical reaction, char reactivity, and reactor geometry. In addition, case studies are introduced with examples. Finally, a way to improve the coal gasifier model is indicated, and a kinetically modified model considering reaction rate is proposed.",
"title": ""
},
{
"docid": "fc453b8e101a0eae542cc69881bbe7d4",
"text": "The statistical properties of Clarke's fading model with a finite number of sinusoids are analyzed, and an improved reference model is proposed for the simulation of Rayleigh fading channels. A novel statistical simulation model for Rician fading channels is examined. The new Rician fading simulation model employs a zero-mean stochastic sinusoid as the specular (line-of-sight) component, in contrast to existing Rician fading simulators that utilize a non-zero deterministic specular component. The statistical properties of the proposed Rician fading simulation model are analyzed in detail. It is shown that the probability density function of the Rician fading phase is not only independent of time but also uniformly distributed over [-pi, pi). This property is different from that of existing Rician fading simulators. The statistical properties of the new simulators are confirmed by extensive simulation results, showing good agreement with theoretical analysis in all cases. An explicit formula for the level-crossing rate is derived for general Rician fading when the specular component has non-zero Doppler frequency",
"title": ""
},
{
"docid": "e016d5fc261def252f819f350b155c1a",
"text": "Risk reduction is one of the key objectives pursued by transport safety policies. Particularly, the formulation and implementation of transport safety policies needs the systematic assessment of the risks, the specification of residual risk targets and the monitoring of progresses towards those ones. Risk and safety have always been considered critical in civil aviation. The purpose of this paper is to describe and analyse safety aspects in civil airports. An increase in airport capacity usually involves changes to runways layout, route structures and traffic distribution, which in turn effect the risk level around the airport. For these reasons third party risk becomes an important issue in airports development. To avoid subjective interpretations and to increase model accuracy, risk information are colleted and evaluated in a rational and mathematical manner. The method may be used to draw risk contour maps so to provide a guide to local and national authorities, to population who live around the airport, and to airports operators. Key-Words: Risk Management, Risk assessment methodology, Safety Civil aviation.",
"title": ""
},
{
"docid": "ce1384d061248cbb96e77ea482b2ba62",
"text": "Preventable behaviors contribute to many life threatening health problems. Behavior-change technologies have been deployed to modify these, but such systems typically draw on traditional behavioral theories that overlook affect. We examine the importance of emotion tracking for behavior change. First, we conducted interviews to explore how emotions influence unwanted behaviors. Next, we deployed a system intervention, in which 35 participants logged information for a self-selected, unwanted behavior (e.g., smoking or overeating) over 21 days. 16 participants engaged in standard behavior tracking using a Fact-Focused system to record objective information about goals. 19 participants used an Emotion-Focused system to record emotional consequences of behaviors. Emotion-Focused logging promoted more successful behavior change and analysis of logfiles revealed mechanisms for success: greater engagement of negative affect for unsuccessful days and increased insight were key to motivating change. We present design implications to improve behavior-change technologies with emotion tracking.",
"title": ""
},
{
"docid": "8a243ba8bb385373230719f733fb947b",
"text": "The insider threat is one of the most pernicious in computer security. Traditional approaches typically instrument systems with decoys or intrusion detection mechanisms to detect individuals who abuse their privileges (the quintessential \"insider\"). Such an attack requires that these agents have access to resources or data in order to corrupt or disclose them. In this work, we examine the application of process modeling and subsequent analyses to the insider problem. With process modeling, we first describe how a process works in formal terms. We then look at the agents who are carrying out particular tasks, perform different analyses to determine how the process can be compromised, and suggest countermeasures that can be incorporated into the process model to improve its resistance to insider attack.",
"title": ""
}
] |
scidocsrr
|
464f6bffb81fd218184ec334d9015921
|
A Wideband Voltage-Controlled Oscillator With Gain Linearized Varactor Bank
|
[
{
"docid": "de38b6e86542e8f8301d8c8ddb510d19",
"text": "A pseudo-exponential capacitor bank structure is proposed to implement a wide-band CMOS LC voltage-controlled oscillator (VCO) with linearized coarse tuning characteristics. An octave bandwidth VCO employing the proposed 6-bit pseudo-exponential capacitor bank structure has been realized in 0.18-mum CMOS. Compared to a conventional VCO employing a binary weighted capacitor bank, the proposed VCO has considerably reduced the variations of the VCO gain (K VCO) and the frequency step per a capacitor bank code (f step/code) by 2.7 and 2.1 times, respectively, across the tuning range of 924-1850 MHz. Measurement results have also shown that the VCO provides the phase noise of - 127.1 dBc/Hz at 1-MHz offset for 1.752-GHz output frequency while dissipating 6 mA from a 1.8-V supply.",
"title": ""
}
] |
[
{
"docid": "2eceea4794905434366fd140af1abe76",
"text": "The flow structure in the wake of a model wind turbine is explored under negligible and high turbulence in the freestream region of a wind tunnel at Re ∼ 7× 104. Attention is placed on the evolution of the integral scale and the contribution of the large-scale motions from the background flow. Hotwire anemometry was used to obtain the streamwise velocity at various streamwise and spanwise locations. The pre-multiplied spectral difference of the velocity fluctuations between the two cases shows a significant energy contribution from the background turbulence on scales larger than the rotor diameter. The integral scale along the rotor axis is found to grow linearly with distance, independent of the incoming turbulence levels. This scale appears to reach that of the incoming flow in the high turbulence case at x/d ∼ 35–40. The energy contribution from the turbine to the large-scale flow structures in the low turbulence case increases monotonically with distance. Its growth rate is reduced past x/d ∼ 6–7. There, motions larger than the rotor contribute ∼50% of the total energy, suggesting that the population of large-scale motions is more intense in the intermediate field. In contrast, the wake in the high incoming turbulence is quickly populated with large-scale motions and plateau at x/d ∼ 3.",
"title": ""
},
{
"docid": "303548167773a86d20a3ea13209a0ef3",
"text": "This paper reports empirical evidence that a neural network model is applicable to the prediction of foreign exchange rates. Time series data and technical indicators, such as moving average, are fed to neural networks to capture the underlying `rulesa of the movement in currency exchange rates. The exchange rates between American Dollar and \"ve other major currencies, Japanese Yen, Deutsch Mark, British Pound, Swiss Franc and Australian Dollar are forecast by the trained neural networks. The traditional rescaled range analysis is used to test the `e$ciencya of each market before using historical data to train the neural networks. The results presented here show that without the use of extensive market data or knowledge, useful prediction can be made and signi\"cant paper pro\"ts can be achieved for out-of-sample data with simple technical indicators. A further research on exchange rates between Swiss Franc and American Dollar is also conducted. However, the experiments show that with e$cient market it is not easy to make pro\"ts using technical indicators or time series input neural networks. This article also discusses several issues on the frequency of sampling, choice of network architecture, forecasting periods, and measures for evaluating the model's predictive power. After presenting the experimental results, a discussion on future research concludes the paper. ( 2000 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "6df12ee53551f4a3bd03bca4ca545bf1",
"text": "We present a technique for automatically assigning a neuroanatomical label to each voxel in an MRI volume based on probabilistic information automatically estimated from a manually labeled training set. In contrast to existing segmentation procedures that only label a small number of tissue classes, the current method assigns one of 37 labels to each voxel, including left and right caudate, putamen, pallidum, thalamus, lateral ventricles, hippocampus, and amygdala. The classification technique employs a registration procedure that is robust to anatomical variability, including the ventricular enlargement typically associated with neurological diseases and aging. The technique is shown to be comparable in accuracy to manual labeling, and of sufficient sensitivity to robustly detect changes in the volume of noncortical structures that presage the onset of probable Alzheimer's disease.",
"title": ""
},
{
"docid": "b3dbabfd0bac15007790d89de7d75f1d",
"text": "Microblogging platforms facilitate fast and frequent communication among very large numbers of users. Rumours, especially in times of crisis, tend to spread quickly, causing confusion, and impair people's ability to make decisions. Hence, it is of utmost importance to automatically detect a rumour as soon as possible. Using the PHEME dataset that contains rumours and non-rumours pertaining to five major events, we have developed a rumour detection system that classifies posts from Twitter, a popular microblogging website. We have first analyzed and ranked a number of content-based and user-based features. Some content-based features are derived using natural language processing techniques. We then trained multiple machine learning models (Naive Bayes, Random Forests and Support Vector Machines) using different combinations of the features. Finally, we compared the performance of these models. The performance of the models on one such event resulted in 78% accuracy.",
"title": ""
},
{
"docid": "aa29363af27241f29d7fe272c09d9020",
"text": "The person re-identification problem is a well known retrieval task that requires finding a person of interest in a network of cameras. In a real-world scenario, state of the art algorithms are likely to fail due to serious perspective and pose changes as well as variations in lighting conditions across the camera network. The most effective approaches try to cope with all these changes by applying metric learning tools to find a transfer function between a camera pair. Unfortunately, this transfer function is usually dependent on the camera pair and requires labeled training data for each camera. This might be unattainable in a large camera network. In this paper, instead of learning the transfer function that addresses all appearance changes, we propose to learn a generic metric pool that only focuses on pose changes. This pool consists of metrics, each one learned to match a specific pair of poses. Automatically estimated poses determine the proper metric, thus improving matching. We show that metrics learned using a single camera improve the matching across the whole camera network, providing a scalable solution. We validated our approach on a publicly available dataset demonstrating increase in the re-identification performance.",
"title": ""
},
{
"docid": "04013595912b4176574fb81b38beade5",
"text": "This chapter presents an overview of the current state of cognitive task analysis (CTA) in research and practice. CTA uses a variety of interview and observation strategies to capture a description of the explicit and implicit knowledge that experts use to perform complex tasks. The captured knowledge is most often transferred to training or the development of expert systems. The first section presents descriptions of a variety of CTA techniques, their common characteristics, and the typical strategies used to elicit knowledge from experts and other sources. The second section describes research on the impact of CTA and synthesizes a number of studies and reviews pertinent to issues underlying knowledge elicitation. In the third section, we discuss the integration of CTA with training design. Finally, in the fourth section, we present a number of recommendations for future research and conclude with general comments.",
"title": ""
},
{
"docid": "d924282668c0c5dfc0908205402dfabf",
"text": "Performance appraisal (PA) is a crucial HR process that enables an organization to periodically measure and evaluate every employee’s performance and also to drive performance improvements. In this paper, we describe a novel system called HiSPEED to analyze PA data using automated statistical, data mining and text mining techniques, to generate novel and actionable insights/patterns and to help in improving the quality and effectiveness of the PA process. The goal is to produce insights that can be used to answer (in part) the crucial “business questions” that HR executives and business leadership face in talent management. The business questions pertain to (1) improving the quality of the goal setting process, (2) improving the quality of the self-appraisal comments and supervisor feedback comments, (3) discovering high-quality supervisor suggestions for performance improvements, (4) discovering evidence provided by employees to support their self-assessments, (5) measuring the quality of supervisor assessments, (6) understanding the root causes of poor and exceptional performances, (7) detecting instances of personal and systemic biases and so forth. The paper discusses specially designed algorithms to answer these business questions and illustrates them by reporting the insights produced on a real-life PA dataset from a large multinational IT services organization.",
"title": ""
},
{
"docid": "f0cd43ff855d6b10623504bf24a40fdc",
"text": "Neural network-based encoder-decoder models are among recent attractive methodologies for tackling natural language generation tasks. This paper investigates the usefulness of structural syntactic and semantic information additionally incorporated in a baseline neural attention-based model. We encode results obtained from an abstract meaning representation (AMR) parser using a modified version of Tree-LSTM. Our proposed attention-based AMR encoder-decoder model improves headline generation benchmarks compared with the baseline neural attention-based model.",
"title": ""
},
{
"docid": "6469b318a84d5865e304a8afd4408cfa",
"text": "5-hydroxytryptamine (5-HT, serotonin) is an ancient biochemical manipulated through evolution to be utilized extensively throughout the animal and plant kingdoms. Mammals employ 5-HT as a neurotransmitter within the central and peripheral nervous systems, and also as a local hormone in numerous other tissues, including the gastrointestinal tract, the cardiovascular system and immune cells. This multiplicity of function implicates 5-HT in a vast array of physiological and pathological processes. This plethora of roles has consequently encouraged the development of many compounds of therapeutic value, including various antidepressant, antipsychotic and antiemetic drugs.",
"title": ""
},
{
"docid": "eff45b92173acbc2f6462c3802d19c39",
"text": "There are shortcomings in traditional theorizing about effective ways of coping with bereavement, most notably, with respect to the so-called \"grief work hypothesis.\" Criticisms include imprecise definition, failure to represent dynamic processing that is characteristic of grieving, lack of empirical evidence and validation across cultures and historical periods, and a limited focus on intrapersonal processes and on health outcomes. Therefore, a revised model of coping with bereavement, the dual process model, is proposed. This model identifies two types of stressors, loss- and restoration-oriented, and a dynamic, regulatory coping process of oscillation, whereby the grieving individual at times confronts, at other times avoids, the different tasks of grieving. This model proposes that adaptive coping is composed of confrontation--avoidance of loss and restoration stressors. It also argues the need for dosage of grieving, that is, the need to take respite from dealing with either of these stressors, as an integral part of adaptive coping. Empirical research to support this conceptualization is discussed, and the model's relevance to the examination of complicated grief, analysis of subgroup phenomena, as well as interpersonal coping processes, is described.",
"title": ""
},
{
"docid": "2118c129bd6246f57d990bca41a6b5da",
"text": "We describe PRISM, a video coding paradigm based on the principles of lossy distributed compression (also called source coding with side information or Wyner-Ziv coding) from multiuser information theory. PRISM represents a major departure from conventional video coding architectures (e.g., the MPEGx, H.26x families) that are based on motion-compensated predictive coding, with the goal of addressing some of their architectural limitations. PRISM allows for two key architectural enhancements: (1) inbuilt robustness to \"drift\" between encoder and decoder and (2) the feasibility of a flexible distribution of computational complexity between encoder and decoder. Specifically, PRISM enables transfer of the computationally expensive video encoder motion-search module to the video decoder. Based on this capability, we consider an instance of PRISM corresponding to a near reversal in codec complexities with respect to today's codecs (leading to a novel light encoder and heavy decoder paradigm), in this paper. We present encouraging preliminary results on real-world video sequences, particularly in the realm of transmission losses, where PRISM exhibits the characteristic of rapid recovery, in contrast to contemporary codecs. This renders PRISM as an attractive candidate for wireless video applications.",
"title": ""
},
{
"docid": "44febfc1f3c0ac795f19d31b6eeb4e4a",
"text": "What are the neural correlates of attractiveness? Using functional MRI (fMRI), the authors addressed this question in the specific context of the apprehension of faces. When subjects judged facial beauty explicitly, neural activity in a widely distributed network involving the ventral occipital, anterior insular, dorsal posterior parietal, inferior dorsolateral, and medial prefrontal cortices correlated parametrically with the degree of facial attractiveness. When subjects were not attending explicitly to attractiveness, but rather were judging facial identity, the ventral occipital region remained responsive to facial beauty. The authors propose that this region, which includes the fusiform face area (FFA), the lateral occipital cortex (LOC), and medially adjacent regions, is activated automatically by beauty and may serve as a neural trigger for pervasive effects of attractiveness in social interactions.",
"title": ""
},
{
"docid": "9ff9732a71ab0ac540fee31ad4af40a2",
"text": "The Internet of Things (IoT) is undeniably transforming the way that organizations communicate and organize everyday businesses and industrial procedures. Its adoption has proven well suited for sectors that manage a large number of assets and coordinate complex and distributed processes. This survey analyzes the great potential for applying IoT technologies (i.e., data-driven applications or embedded automation and intelligent adaptive systems) to revolutionize modern warfare and provide benefits similar to those in industry. It identifies scenarios where Defense and Public Safety (PS) could leverage better commercial IoT capabilities to deliver greater survivability to the warfighter or first responders, while reducing costs and increasing operation efficiency and effectiveness. This article reviews the main tactical requirements and the architecture, examining gaps and shortcomings in existing IoT systems across the military field and mission-critical scenarios. The review characterizes the open challenges for a broad deployment and presents a research roadmap for enabling an affordable IoT for defense and PS.",
"title": ""
},
{
"docid": "567d165eb9ad5f9860f3e0602cbe3e03",
"text": "This paper presents new image sensors with multi- bucket pixels that enable time-multiplexed exposure, an alter- native imaging approach. This approach deals nicely with scene motion, and greatly improves high dynamic range imaging, structured light illumination, motion corrected photography, etc. To implement an in-pixel memory or a bucket, the new image sensors incorporate the virtual phase CCD concept into a standard 4-transistor CMOS imager pixel. This design allows us to create a multi-bucket pixel which is compact, scalable, and supports true correlated double sampling to cancel kTC noise. Two image sensors with dual and quad-bucket pixels have been designed and fabricated. The dual-bucket sensor consists of a 640H × 576V array of 5.0 μm pixel in 0.11 μm CMOS technology while the quad-bucket sensor comprises 640H × 512V array of 5.6 μm pixel in 0.13 μm CMOS technology. Some computational photography applications were implemented using the two sensors to demonstrate their values in eliminating artifacts that currently plague computational photography.",
"title": ""
},
{
"docid": "647ede4f066516a0343acef725e51d01",
"text": "This work proposes a dual-polarized planar antenna; two post-wall slotted waveguide arrays with orthogonal 45/spl deg/ linearly-polarized waves interdigitally share the aperture on a single layer substrate. Uniform excitation of the two-dimensional slot array is confirmed by experiment in the 25 GHz band. The isolation between two slot arrays is also investigated in terms of the relative displacement along the radiation waveguide axis in the interdigital structure. The isolation is 33.0 dB when the relative shift of slot position between the two arrays is -0.5/spl lambda//sub g/, while it is only 12.8 dB when there is no shift. The cross-polarization level in the far field is -25.2 dB for a -0.5/spl lambda//sub g/ shift, which is almost equal to that of the isolated single polarization array. It is degraded down to -9.6 dB when there is no shift.",
"title": ""
},
{
"docid": "190bf6cd8a2e9a5764b42d01b7aec7c8",
"text": "We propose a method for compiling a class of Σ-protocols (3-move public-coin protocols) into non-interactive zero-knowledge arguments. The method is based on homomorphic encryption and does not use random oracles. It only requires that a private/public key pair is set up for the verifier. The method applies to all known discrete-log based Σ-protocols. As applications, we obtain non-interactive threshold RSA without random oracles, and non-interactive zero-knowledge for NP more efficiently than by previous methods.",
"title": ""
},
{
"docid": "c0c74dfff1e87a83586f3953ec39b595",
"text": "When designing an industrial installation, construction engineers often make use of a library of standardized CAD components. For instance, in the case of a servicing plant, such a library contains descriptions of simple components such as straight pipes, elbows, and T-junctions. A new installation is constructed by selecting and connecting the appropriate components from the library. This article demonstrates that one can use the same approach for reverse engineering by photogrammetry. In our technique, the operator interprets images and selects the appropriate CAD component from a library. By aligning the edges of the component’s wire frame to the visible edges in the images, we implicitly determine the position, orientation, and shape of the real component. For a fast object reconstruction the alignment process has been split in two parts. Initially, the operator approximately aligns a component to the images. In a second step a fitting algorithm is invoked for an automatic and precise alignment. Further improvement in the efficiency of the reconstruction is obtained by imposing geometric constraints on the CAD components of adjacent object",
"title": ""
},
{
"docid": "80adf87179f4b3b61bf99d946da4cb2a",
"text": "In modern intensive care units (ICUs) a vast and varied amount of physiological data is measured and collected, with the intent of providing clinicians with detailed information about the physiological state of each patient. The data include measurements from the bedside monitors of heavily instrumented patients, imaging studies, laboratory test results, and clinical observations. The clinician’s task of integrating and interpreting the data, however, is complicated by the sheer volume of information and the challenges of organizing it appropriately. This task is made even more difficult by ICU patients’ frequently-changing physiological state. Although the extensive clinical information collected in ICUs presents a challenge, it also opens up several opportunities. In particular, we believe that physiologically-based computational models and model-based estimation methods can be harnessed to better understand and track patient state. These methods would integrate a patient’s hemodynamic data streams by analyzing and interpreting the available information, and presenting resultant pathophysiological hypotheses to the clinical staff in an efficient manner. In this thesis, such a possibility is developed in the context of cardiovascular dynamics. The central results of this thesis concern averaged models of cardiovascular dynamics and a novel estimation method for continuously tracking cardiac output and total peripheral resistance. This method exploits both intra-beat and inter-beat dynamics of arterial blood pressure, and incorporates a parametrized model of arterial compliance. We validated our method with animal data from laboratory experiments and ICU patient data. The resulting root-mean-square-normalized errors – at most 15% depending on the data set – are quite low and clinically acceptable. In addition, we describe a novel estimation scheme for continuously monitoring left ventricular ejection fraction and left ventricular end-diastolic volume. We validated this method on an animal data set. Again, the resulting root-mean-square-normalized errors were quite low – at most 13%. By continuously monitoring cardiac output, total peripheral resistance, left ventricular ejection fraction, left ventricular end-diastolic volume, and arterial blood pressure, one has the basis for distinguishing between cardiogenic, hypovolemic, and septic shock. We hope that the results in this thesis will contribute to the development of a next-generation patient monitoring system. Thesis Supervisor: Professor George C. Verghese Title: Professor of Electrical Engineering Thesis Supervisor: Dr. Thomas Heldt Title: Postdoctoral Associate",
"title": ""
},
{
"docid": "4c8e08daa7310e0a21c234565a033e56",
"text": "Using a cross-panel design and data from 2 successive cohorts of college students (N = 357), we examined the stability of maladaptive perfectionism, procrastination, and psychological distress across 3 time points within a college semester. Each construct was substantially stable over time, with procrastination being especially stable. We also tested, but failed to support, a mediational model with Time 2 (mid-semester) procrastination as a hypothesized mechanism through which Time 1 (early-semester) perfectionism would affect Time 3 (end-semester) psychological distress. An alternative model with Time 2 perfectionism as a mediator of the procrastination-distress association also was not supported. Within-time analyses revealed generally consistent strength of effects in the correlations between the 3 constructs over the course of the semester. A significant interaction effect also emerged. Time 1 procrastination had no effect on otherwise high levels of psychological distress at the end of the semester for highly perfectionistic students, but at low levels of Time 1 perfectionism, the most distressed students by the end of the term were those who were more likely to have procrastinated earlier in the semester. Implications of the stability of the constructs and their association over time, as well as the moderating effects of procrastination, are discussed in the context of maladaptive perfectionism and problematic procrastination.",
"title": ""
},
{
"docid": "3d77bbe5b8ad89417900ed7b2d30bf72",
"text": "This paper presents a joint model for performing unsupervised morphological analysis on words, and learning a character-level composition function from morphemes to word embeddings. Our model splits individual words into segments, and weights each segment according to its ability to predict context words. Our morphological analysis is comparable to dedicated morphological analyzers at the task of morpheme boundary recovery, and also performs better than word-based embedding models at the task of syntactic analogy answering. Finally, we show that incorporating morphology explicitly into character-level models helps them produce embeddings for unseen words which correlate better with human judgments.",
"title": ""
}
] |
scidocsrr
|
8cdee7101e0e22ea85dd0ee171513909
|
Phoneme recognition using time-delay neural networks
|
[
{
"docid": "9ff2e30bbd34906f6a57f48b1e63c3f1",
"text": "In this paper, we extend hidden Markov modeling to speaker-independent phone recognition. Using multiple codebooks of various LPC parameters and discrete HMMs, we obtain a speakerindependent phone recognition accuracy of 58.8% to 73.8% on the TIMTT database, depending on the type of acoustic and language models used. In comparison, the performance of expert spectrogram readers is only 69% without use of higher level knowledge. We also introduce the co-occurrence smoothing algorithm which enables accurate recognition even with very limited training data. Since our results were evaluated on a standard database, they can be used as benchmarks to evaluate future systems. This research was partly sponsored by a National Science Foundation Graduate Fellowship, and by Defense Advanced Research Projects Agency Contract N00039-85-C-0163. The views and conclusions contained in this document are those of the author and should not be interpreted as representing the official policies, either expressed or implied, of the National Science Foundation, the Defense Advanced Research Projects Agency, or the US Government.",
"title": ""
}
] |
[
{
"docid": "41eab64d00f1a4aaea5c5899074d91ca",
"text": "Informally described design patterns are useful for communicating proven solutions for recurring design problems to developers, but they cannot be used as compliance points against which solutions that claim to conform to the patterns are checked. Pattern specification languages that utilize mathematical notation provide the needed formality, but often at the expense of usability. We present a rigorous and practical technique for specifying pattern solutions expressed in the unified modeling language (UML). The specification technique paves the way for the development of tools that support rigorous application of design patterns to UML design models. The technique has been used to create specifications of solutions for several popular design patterns. We illustrate the use of the technique by specifying observer and visitor pattern solutions.",
"title": ""
},
{
"docid": "7cc41229d0368f702a4dde3ccf597604",
"text": "State Machines",
"title": ""
},
{
"docid": "7ec9f6b40242a732282520f1a4808d49",
"text": "In this paper, a novel technique to enhance the bandwidth of substrate integrated waveguide cavity backed slot antenna is demonstrated. The feeding technique to the cavity backed antenna has been modified by introducing offset feeding of microstrip line along with microstrip to grounded coplanar waveguide transition which helps to excite TE120 mode in the cavity and also to get improvement in impedance matching to the slot antenna simultaneously. The proposed antenna is designed to resonate in X band (8-12 GHz) and shows a resonance at 10.2 GHz with a bandwidth of 4.2% and a gain of 5.6 dBi, 15.6 dB front to back ratio and -30 dB maximum cross polarization level.",
"title": ""
},
{
"docid": "43baeb87f1798d52399ba8c78ffa7fef",
"text": "ECONOMISTS are frequently asked to measure the effects of an economic event on the value of firms. On the surface this seems like a difficult task, but a measure can be constructed easily using an event study. Using financial market data, an event study measures the impact of a specific event on the value of a firm. The usefulness of such a study comes from the fact that, given rationality in the marketplace, the effects of an event will be reflected immediately in security prices. Thus a measure of the event’s economic impact can be constructed using security prices observed over a relatively short time period. In contrast, direct productivity related measures may require many months or even years of observation. The event study has many applications. In accounting and finance research, event studies have been applied to a variety of firm specific and economy wide events. Some examples include mergers and acquisitions, earnings announcements, issues of new debt or equity, and announcements of macroeconomic variables such as the trade deficit.1 However, applications in other fields are also abundant. For example, event studies are used in the field of law and economics to measure the impact on the value of a firm of a change in the regulatory environment (see G. William Schwert 1981) and in legal liability cases event studies are used to assess damages (see Mark Mitchell and Jeffry Netter 1994). In the majority of applications, the focus is the effect of an event on the price of a particular class of securities of the firm, most often common equity. In this paper the methodology is discussed in terms of applications that use common equity. However, event studies can be applied using debt securities with little modification. Event studies have a long history. Perhaps the first published study is James Dolley (1933). In this work, he examines the price effects of stock splits, studying nominal price changes at the time of the split. Using a sample of 95 splits from 1921 to 1931, he finds that the price in-",
"title": ""
},
{
"docid": "fcf6136271b04ac78717799d43017d74",
"text": "STUDY DESIGN\nPragmatic, multicentered randomized controlled trial, with 12-month follow-up.\n\n\nOBJECTIVE\nTo evaluate the effect of adding specific spinal stabilization exercises to conventional physiotherapy for patients with recurrent low back pain (LBP) in the United Kingdom.\n\n\nSUMMARY OF BACKGROUND DATA\nSpinal stabilization exercises are a popular form of physiotherapy management for LBP, and previous small-scale studies on specific LBP subgroups have identified improvement in outcomes as a result.\n\n\nMETHODS\nA total of 97 patients (18-60 years old) with recurrent LBP were recruited. Stratified randomization was undertaken into 2 groups: \"conventional,\" physiotherapy consisting of general active exercise and manual therapy; and conventional physiotherapy plus specific spinal stabilization exercises. Stratifying variables used were laterality of symptoms, duration of symptoms, and Roland Morris Disability Questionnaire score at baseline. Both groups received The Back Book, by Roland et al. Back-specific functional disability (Roland Morris Disability Questionnaire) at 12 months was the primary outcome. Pain, quality of life, and psychologic measures were also collected at 6 and 12 months. Analysis was by intention to treat.\n\n\nRESULTS\nA total of 68 patients (70%) provided 12-month follow-up data. Both groups showed improved physical functioning, reduced pain intensity, and an improvement in the physical component of quality of life. Mean change in physical functioning, measured by the Roland Morris Disability Questionnaire, was -5.1 (95% confidence interval -6.3 to -3.9) for the specific spinal stabilization exercises group and -5.4 (95% confidence interval -6.5 to -4.2) for the conventional physiotherapy group. No statistically significant differences between the 2 groups were shown for any of the outcomes measured, at any time.\n\n\nCONCLUSIONS\nPatients with LBP had improvement with both treatment packages to a similar degree. There was no additional benefit of adding specific spinal stabilization exercises to a conventional physiotherapy package for patients with recurrent LBP.",
"title": ""
},
{
"docid": "f70bd0a47eac274a1bb3b964f34e0a63",
"text": "Although deep neural network (DNN) has achieved many state-of-the-art results, estimating the uncertainty presented in the DNN model and the data is a challenging task. Problems related to uncertainty such as classifying unknown classes (class which does not appear in the training data) data as known class with high confidence, is critically concerned in the safety domain area (e.g, autonomous driving, medical diagnosis). In this paper, we show that applying current Bayesian Neural Network (BNN) techniques alone does not effectively capture the uncertainty. To tackle this problem, we introduce a simple way to improve the BNN by using one class classification (in this paper, we use the term ”set classification” instead). We empirically show the result of our method on an experiment which involves three datasets: MNIST, notMNIST and FMNIST.",
"title": ""
},
{
"docid": "1b0046cbee1afd3e7471f92f115f3d74",
"text": "We present an approach to improve statistical machine translation of image descriptions by multimodal pivots defined in visual space. The key idea is to perform image retrieval over a database of images that are captioned in the target language, and use the captions of the most similar images for crosslingual reranking of translation outputs. Our approach does not depend on the availability of large amounts of in-domain parallel data, but only relies on available large datasets of monolingually captioned images, and on state-ofthe-art convolutional neural networks to compute image similarities. Our experimental evaluation shows improvements of 1 BLEU point over strong baselines.",
"title": ""
},
{
"docid": "563045a67d06819b0b79c8232e2e16fa",
"text": "The impacts of climate change are felt by most critical systems, such as infrastructure, ecological systems, and power-plants. However, contemporary Earth System Models (ESM) are run at spatial resolutions too coarse for assessing effects this localized. Local scale projections can be obtained using statistical downscaling, a technique which uses historical climate observations to learn a low-resolution to high-resolution mapping. Depending on statistical modeling choices, downscaled projections have been shown to vary significantly terms of accuracy and reliability. The spatio-temporal nature of the climate system motivates the adaptation of super-resolution image processing techniques to statistical downscaling. In our work, we present DeepSD, a generalized stacked super resolution convolutional neural network (SRCNN) framework for statistical downscaling of climate variables. DeepSD augments SRCNN with multi-scale input channels to maximize predictability in statistical downscaling. We provide a comparison with Bias Correction Spatial Disaggregation as well as three Automated-Statistical Downscaling approaches in downscaling daily precipitation from 1 degree (~100km) to 1/8 degrees (~12.5km) over the Continental United States. Furthermore, a framework using the NASA Earth Exchange (NEX) platform is discussed for downscaling more than 20 ESM models with multiple emission scenarios.",
"title": ""
},
{
"docid": "4cdf61ea145da38c37201b85d38bf8a2",
"text": "Ontologies are powerful to support semantic based applications and intelligent systems. While ontology learning are challenging due to its bottleneck in handcrafting structured knowledge sources and training data. To address this difficulty, many researchers turn to ontology enrichment and population using external knowledge sources such as DBpedia. In this paper, we propose a method using DBpedia in a different manner. We utilize relation instances in DBpedia to supervise the ontology learning procedure from unstructured text, rather than populate the ontology structure as a post-processing step. We construct three language resources in areas of computer science: enriched Wikipedia concept tree, domain ontology, and gold standard from NSFC taxonomy. Experiment shows that the result of ontology learning from corpus of computer science can be improved via the relation instances extracted from DBpedia in the same field. Furthermore, making distinction between the relation instances and applying a proper weighting scheme in the learning procedure lead to even better result.",
"title": ""
},
{
"docid": "13a777b2c5edcf9cb342b1290ec50a3c",
"text": "Call for Book Chapters Introduction The history of robotics and artificial intelligence in many ways is also the history of humanity’s attempts to control such technologies. From the Golem of Prague to the military robots of modernity, the debate continues as to what degree of independence such entities should have and how to make sure that they do not turn on us, its inventors. Numerous recent advancements in all aspects of research, development and deployment of intelligent systems are well publicized but safety and security issues related to AI are rarely addressed. This book is proposed to mitigate this fundamental problem. It will be comprised of chapters from leading AI Safety researchers addressing different aspects of the AI control problem as it relates to the development of safe and secure artificial intelligence. The book would be the first textbook to address challenges of constructing safe and secure advanced machine intelligence.",
"title": ""
},
{
"docid": "7d5d2f819a5b2561db31645d534836b8",
"text": "Recent work has suggested enhancing Bloom filters by using a pre-filter, based on applying machine learning to model the data set the Bloom filter is meant to represent. Here we model such learned Bloom filters, clarifying what guarantees can and cannot be associated with such a structure.",
"title": ""
},
{
"docid": "b38603115c4dbce4ea5f11767a7a49ab",
"text": "Hydroa vacciniforme (HV) is a rare and chronic pediatric disorder that is characterized by photosensitivity and recurrent vesicles that heal with vacciniforme scarring. The pathogenesis of HV is unknown; no chromosome abnormality has been identified. HV patients have no abnormal laboratory results, so the diagnosis of HV is based on identifying the associated histological findings in a biopsy specimen and using repetitive ultraviolet phototesting to reproduce the characteristic vesicles on a patient's skin. Herein, we present a case of HV in a 7-year-old female who was diagnosed with HV according to histopathology and ultraviolet phototesting.",
"title": ""
},
{
"docid": "7292ceb6718d0892a154d294f6434415",
"text": "This article illustrates the application of a nonlinear system identification technique to the problem of STLF. Five NARX models are estimated using fixed-size LS-SVM, and two of the models are later modified into AR-NARX structures following the exploration of the residuals. The forecasting performance, assessed for different load series, is satisfactory. The MSE levels on the test data are below 3% in most cases. The models estimated with fixed-size LS-SVM give better results than a linear model estimated with the same variables and also better than a standard LS-SVM in dual space estimated using only the last 1000 data points. Furthermore, the good performance of the fixed-size LS-SVM is obtained based on a subset of M = 1000 initial support vectors, representing a small fraction of the available sample. Further research on a more dedicated definition of the initial input variables (for example, incorporation of external variables to reflect industrial activity, use of explicit seasonal information) might lead to further improvements and the extension toward other types of load series.",
"title": ""
},
{
"docid": "f3cb6de57ba293be0b0833a04086b2ce",
"text": "Due to increasing globalization, urban societies are becoming more multicultural. The availability of large-scale digital mobility traces e.g. from tweets or checkins provides an opportunity to explore multiculturalism that until recently could only be addressed using survey-based methods. In this paper we examine a basic facet of multiculturalism through the lens of language use across multiple cities in Switzerland. Using data obtained from Foursquare over 330 days, we present a descriptive analysis of linguistic differences and similarities across five urban agglomerations in a multicultural, western European country.",
"title": ""
},
{
"docid": "a1e6a95d2eb2f5f36caf43b5133bd384",
"text": "The RealSense F200 represents a new generation of economically viable 4-dimensional imaging (4D) systems for home use. However, its 3D geometric (depth) accuracy has not been clinically tested. Therefore, this study determined the depth accuracy of the RealSense, in a cohort of patients with a unilateral facial palsy (n = 34), by using the clinically validated 3dMD system as a gold standard. The patients were simultaneously recorded with both systems, capturing six Sunnybrook poses. This study has shown that the RealSense depth accuracy was not affected by a facial palsy (1.48 ± 0.28 mm), compared to a healthy face (1.46 ± 0.26 mm). Furthermore, the Sunnybrook poses did not influence the RealSense depth accuracy (p = 0.76). However, the distance of the patients to the RealSense was shown to affect the accuracy of the system, where the highest depth accuracy of 1.07 mm was measured at a distance of 35 cm. Overall, this study has shown that the RealSense can provide reliable and accurate depth data when recording a range of facial movements. Therefore, when the portability, low-costs, and availability of the RealSense are taken into consideration, the camera is a viable option for 4D close range imaging in telehealth.",
"title": ""
},
{
"docid": "b123916f2795ab6810a773ac69bdf00b",
"text": "The acceptance of open data practices by individuals and organizations lead to an enormous explosion in data production on the Internet. The access to a large number of these data is carried out through Web services, which provide a standard way to interact with data. This class of services is known as data services. In this context, users' queries often require the composition of multiple data services to be answered. On the other hand, the data returned by a data service is not always certain due to various raisons, e.g., the service accesses different data sources, privacy constraints, etc. In this paper, we study the basic activities of data services that are affected by the uncertainty of data, more specifically, modeling, invocation and composition. We propose a possibilistic approach that treats the uncertainty in all these activities.",
"title": ""
},
{
"docid": "b011b5e9ed5c96a59399603f4200b158",
"text": "The word list memory test from the Consortium to establish a registry for Alzheimer's disease (CERAD) neuropsychological battery (Morris et al. 1989) was administered to 230 psychiatric outpatients. Performance of a selected, age-matched psychiatric group and normal controls was compared using an ANCOVA design with education as a covariate. Results indicated that controls performed better than psychiatric patients on most learning and recall indices. The exception to this was the savings index that has been found to be sensitive to the effects of progressive dementias. The current data are compared and integrated with published CERAD data for Alzheimer's disease patients. The CERAD list memory test is recommended as a brief, efficient, and sensitive memory measure that can be used with a range of difficult patients.",
"title": ""
},
{
"docid": "e41079edd8ad3d39b22397d669f7af61",
"text": "Using the masked priming paradigm, we examined which phonological unit is used when naming Kanji compounds. Although the phonological unit in the Japanese language has been suggested to be the mora, Experiment 1 found no priming for mora-related Kanji prime-target pairs. In Experiment 2, significant priming was only found when Kanji pairs shared the whole sound of their initial Kanji characters. Nevertheless, when the same Kanji pairs used in Experiment 2 were transcribed into Kana, significant mora priming was observed in Experiment 3. In Experiment 4, matching the syllable structure and pitch-accent of the initial Kanji characters did not lead to mora priming, ruling out potential alternative explanations for the earlier absence of the effect. A significant mora priming effect was observed, however, when the shared initial mora constituted the whole sound of their initial Kanji characters in Experiments 5. Lastly, these results were replicated in Experiment 6. Overall, these results indicate that the phonological unit involved when naming Kanji compounds is not the mora but the whole sound of each Kanji character. We discuss how different phonological units may be involved when processing Kanji and Kana words as well as the implications for theories dealing with language production processes. (PsycINFO Database Record",
"title": ""
},
{
"docid": "1164e5b54ce970b55cf65cca0a1fbcb1",
"text": "We present a technique for automatic placement of authorization hooks, and apply it to the Linux security modules (LSM) framework. LSM is a generic framework which allows diverse authorization policies to be enforced by the Linux kernel. It consists of a kernel module which encapsulates an authorization policy, and hooks into the kernel module placed at appropriate locations in the Linux kernel. The kernel enforces the authorization policy using hook calls. In current practice, hooks are placed manually in the kernel. This approach is tedious, and as prior work has shown, is prone to security holes.Our technique uses static analysis of the Linux kernel and the kernel module to automate hook placement. Given a non-hook-placed version of the Linux kernel, and a kernel module that implements an authorization policy, our technique infers the set of operations authorized by each hook, and the set of operations performed by each function in the kernel. It uses this information to infer the set of hooks that must guard each kernel function. We describe the design and implementation of a prototype tool called TAHOE (Tool for Authorization Hook Placement) that uses this technique. We demonstrate the effectiveness of TAHOE by using it with the LSM implementation of security-enhanced Linux (selinux). While our exposition in this paper focuses on hook placement for LSM, our technique can be used to place hooks in other LSM-like architectures as well.",
"title": ""
},
{
"docid": "b039138e9c0ef8456084891c45d7b36d",
"text": "Over the last few years or so, the use of artificial neural networks (ANNs) has increased in many areas of engineering. In particular, ANNs have been applied to many geotechnical engineering problems and have demonstrated some degree of success. A review of the literature reveals that ANNs have been used successfully in pile capacity prediction, modelling soil behaviour, site characterisation, earth retaining structures, settlement of structures, slope stability, design of tunnels and underground openings, liquefaction, soil permeability and hydraulic conductivity, soil compaction, soil swelling and classification of soils. The objective of this paper is to provide a general view of some ANN applications for solving some types of geotechnical engineering problems. It is not intended to describe the ANNs modelling issues in geotechnical engineering. The paper also does not intend to cover every single application or scientific paper that found in the literature. For brevity, some works are selected to be described in some detail, while others are acknowledged for reference purposes. The paper then discusses the strengths and limitations of ANNs compared with the other modelling approaches.",
"title": ""
}
] |
scidocsrr
|
cf67efe5867d322be8bafa5244d5bfb8
|
A hierarchical type-2 fuzzy logic control architecture for autonomous mobile robots
|
[
{
"docid": "fdbca2e02ac52afd687331048ddee7d3",
"text": "Type-2 fuzzy sets let us model and minimize the effects of uncertainties in rule-base fuzzy logic systems. However, they are difficult to understand for a variety of reasons which we enunciate. In this paper, we strive to overcome the difficulties by: 1) establishing a small set of terms that let us easily communicate about type-2 fuzzy sets and also let us define such sets very precisely, 2) presenting a new representation for type-2 fuzzy sets, and 3) using this new representation to derive formulas for union, intersection and complement of type-2 fuzzy sets without having to use the Extension Principle.",
"title": ""
},
{
"docid": "c2aed51127b8753e4b71da3b331527cd",
"text": "In this paper, we present the theory and design of interval type-2 fuzzy logic systems (FLSs). We propose an efficient and simplified method to compute the input and antecedent operations for interval type-2 FLSs; one that is based on a general inference formula for them. We introduce the concept of upper and lower membership functions (MFs) and illustrate our efficient inference method for the case of Gaussian primary MFs. We also propose a method for designing an interval type-2 FLS in which we tune its parameters. Finally, we design type-2 FLSs to perform time-series forecasting when a nonstationary time-series is corrupted by additive noise where SNR is uncertain and demonstrate improved performance over type-1 FLSs.",
"title": ""
},
{
"docid": "338a8efaaf4a790b508705f1f88872b2",
"text": "During the past several years, fuzzy control has emerged as one of the most active and fruitful areas for research in the applications of fuzzy set theory, especially in the realm of industrial processes, which do not lend themselves to control by conventional methods because of a lack of quantitative data regarding the input-output relations. Fuzzy control is based on fuzzy logic-a logical system that is much closer in spirit to human thinking and natural language than traditional logical systems. The fuzzy logic controller (FLC) based on fuzzy logic provides a means of converting a linguistic control strategy based on expert knowledge into an automatic control strategy. A survey of the FLC is presented ; a general methodology for constructing an FLC and assessing its performance is described; and problems that need further research are pointed out. In particular, the exposition includes a discussion of fuzzification and defuzzification strategies, the derivation of the database and fuzzy control rules, the definition of fuzzy implication, and an analysis of fuzzy reasoning mechanisms. A may be regarded as a means of emulating a skilled human operator. More generally, the use of an FLC may be viewed as still another step in the direction of model-ing human decisionmaking within the conceptual framework of fuzzy logic and approximate reasoning. In this context, the forward data-driven inference (generalized modus ponens) plays an especially important role. In what follows, we shall investigate fuzzy implication functions, the sentence connectives and and also, compositional operators, inference mechanisms, and other concepts that are closely related to the decisionmaking logic of an FLC. In general, a fuzzy control rule is a fuzzy relation which is expressed as a fuzzy implication. In fuzzy logic, there are many ways in which a fuzzy implication may be defined. The definition of a fuzzy implication may be expressed as a fuzzy implication function. The choice of a fuzzy implication function reflects not only the intuitive criteria for implication but also the effect of connective also. I) Basic Properties of a Fuuy Implication Function: The choice of a fuzzy implication function involves a number of criteria, which are discussed in considered the following basic characteristics of a fuzzy implication function: fundamental property, smoothness property, unrestricted inference, symmetry of generalized modus ponens and generalized modus tollens, and a measure of propagation of fuzziness. All of these properties are justified on purely intuitive grounds. We prefer to say …",
"title": ""
}
] |
[
{
"docid": "9a921d579e9a9a213939b6cf9fa2ac9a",
"text": "This paper presents a generic methodology to optimize constellations based on their geometrical shaping for bit-interleaved coded modulation (BICM) systems. While the method can be applicable to any wireless standard design it has been tailored to two delivery scenarios typical of broadcast systems: 1) robust multimedia delivery and 2) UHDTV quality bitrate services. The design process is based on maximizing the BICM channel capacity for a given power constraint. The major contribution of this paper is a low complexity optimization algorithm for the design of optimal constellation schemes. The proposal consists of a set of initial conditions for a particle swarm optimization algorithm, and afterward, a customized post processing procedure for further improving the constellation alphabet. According to the broadcast application cases, the sizes of the constellations proposed range from 16 to 4096 symbols. The BICM channel capacities and performance of the designed constellations are compared to conventional quadrature amplitude modulation constellations for different application scenarios. The results show a significant improvement in terms of system performance and BICM channel capacities under additive white Gaussian noise and Rayleigh independently and identically distributed channel conditions.",
"title": ""
},
{
"docid": "f370a8ff8722d341d6e839ec2c7217c1",
"text": "We give the first O(mpolylog(n)) time algorithms for approximating maximum flows in undirected graphs and constructing polylog(n)-quality cut-approximating hierarchical tree decompositions. Our algorithm invokes existing algorithms for these two problems recursively while gradually incorporating size reductions. These size reductions are in turn obtained via ultra-sparsifiers, which are key tools in solvers for symmetric diagonally dominant (SDD) linear systems.",
"title": ""
},
{
"docid": "7e439ac3ff2304b6e1aaa098ff44b0cb",
"text": "Geological structures, such as faults and fractures, appear as image discontinuities or lineaments in remote sensing data. Geologic lineament mapping is a very important issue in geo-engineering, especially for construction site selection, seismic, and risk assessment, mineral exploration and hydrogeological research. Classical methods of lineaments extraction are based on semi-automated (or visual) interpretation of optical data and digital elevation models. We developed a freely available Matlab based toolbox TecLines (Tectonic Lineament Analysis) for locating and quantifying lineament patterns using satellite data and digital elevation models. TecLines consists of a set of functions including frequency filtering, spatial filtering, tensor voting, Hough transformation, and polynomial fitting. Due to differences in the mathematical background of the edge detection and edge linking procedure as well as the breadth of the methods, we introduce the approach in two-parts. In this first study, we present the steps that lead to edge detection. We introduce the data pre-processing using selected filters in spatial and frequency domains. We then describe the application of the tensor-voting framework to improve position and length accuracies of the detected lineaments. We demonstrate the robustness of the approach in a complex area in the northeast of Afghanistan using a panchromatic QUICKBIRD-2 image with 1-meter resolution. Finally, we compare the results of TecLines with manual lineament extraction, and other lineament extraction algorithms, as well as a published fault map of the study area. OPEN ACCESS Remote Sens. 2014, 6 5939",
"title": ""
},
{
"docid": "453af7094a854afd1dfb2e7dc36a7cca",
"text": "In this paper, we propose a new approach for the static detection of malicious code in executable programs. Our approach rests on a semantic analysis based on behaviour that even makes possible the detection of unknown malicious code. This analysis is carried out directly on binary code. Static analysis offers techniques for predicting properties of the behaviour of programs without running them. The static analysis of a given binary executable is achieved in three major steps: construction of an intermediate representation, flow-based analysis that catches securityoriented program behaviour, and static verification of critical behaviours against security policies (model checking). 1. Motivation and Background With the advent and the rising popularity of networks, Internet, intranets and distributed systems, security is becoming one of the focal points of research. As a matter of fact, more and more people are concerned with malicious code that could exist in software products. A malicious code is a piece of code that can affect the secrecy, the integrity, the data and control flow, and the functionality of a system. Therefore, ∗This research is jointly funded by a research grant from the Natural Sciences and Engineering Research Council, NSERC, Canada and also by a research contract from the Defence Research Establishment, Valcartier (DREV), 2459, Pie XI Nord, Val-Bélair, QC, Canada, G3J 1X5 their detection is a major concern within the computer science community as well as within the user community. As malicious code can affect the data and control flow of a program, static flow analysis may naturally be helpful as part of the detection process. In this paper, we address the problem of static detection of malicious code in binary executables. The primary objective of this research initiative is to elaborate practical methods and tools with robust theoretical foundations for the static detection of malicious code. The rest of the paper is organized in the following way. Section 2 is devoted to a comparison of static and dynamic approaches. Section 3 presents our approach to the detection of malices in binary executable code. Section 4 discusses the implementation of our approach. Finally, a few remarks and a discussion of future research are ultimately sketched as a conclusion in Section 5. 2. Static vs dynamic analysis There are two main approaches for the detection of malices : static analysis and dynamic analysis. Static analysis consists in examining the code of programs to determine properties of the dynamic execution of these programs without running them. This technique has been used extensively in the past by compiler developers to carry out various analyses and transformations aiming at optimizing the code [10]. Static analysis is also used in reverse engineering of software systems and for program understanding [3, 4]. Its use for the detection of malicious code is fairly recent. Dynamic analysis mainly consists in monitoring the execution of a program to detect malicious behaviour. Static analysis has the following advantages over dynamic analysis: • Static analysis techniques permit to make exhaustive analysis. They are not bound to a specific execution of a program and can give guarantees that apply to all executions of the program. In contrast, dynamic analysis techniques only allow examination of behaviours that correspond to selected test cases. 
• A verdict can be given before execution, where it may be difficult to determine the proper action to take in the presence of malices. • There is no run-time overhead. However, it may be impossible to certify statically that certain properties hold (e.g., due to undecidability). In this case, dynamic monitoring may be the only solution. Thus, static analysis and dynamic analysis are complementary. Static analysis can be used first, and properties that cannot be asserted statically can be monitored dynamically. As mentioned in the introduction, in this paper, we are concerned with static analysis techniques. Not much has been published about their use for the detection of malicious code. In [8], the authors propose a method for statically detecting malicious code in C programs. Their method is based on so-called tell-tale signs, which are program properties that allow one to distinguish between malicious and benign programs. The authors combine the tell-tale sign approach with program slicing in order to produce small fragments of large programs that can be easily analyzed. 3. Description of the Approach Static analysis techniques are generally used to operate on source code. However, as we explained in the introduction, we need to apply them to binary code, and thus, we had to adapt and evolve these techniques. Our approach is structured in three major steps: Firstly, the binary code is translated into an internal intermediate form (see Section 3.1) ; secondly, this intermediate form is abstracted through flowbased analysis as various relevant graphs (controlflow graph, data-flow graph, call graph, critical-API 1 graph, etc.) (Section 3.2); the third step is the static verification and consists in checking these graphs against security policies (Section 3.3). 3.1 Intermediate Representation A binary executable is the machine code version of a high-level or assembly program that has been compiled (or assembled) and linked for a particular platform and operating system. The general format of binary executables varies widely among operating systems. For example, the Portable Executable format (PE) is used by the Windows NT/98/95 operating system. The PE format includes comprehensive information about the different sections of the program that form the main part of the file, including the following segments: • .text, which contains the code and the entry point of the application, • .data, which contains various type of data, • .idata and .edata, which contain respectively the list of imported and exported APIs for an application or a Dynamic-Linking Library (DLL). The code segment (.text) constitutes the main part of the file; in fact, this section contains all the code that is to be analyzed. In order to translate an executable program into an equivalent high-level-language program, we use the disassembly tool IDA32 Pro [7], which can disassemble various types of executable files (ELF, EXE, PE, etc.) for several processors and operating systems (Windows 98, Windows NT, etc.). Also, IDA32 automatically recognizes calls to the standard libraries (i.e., API calls) for a long list of compilers. Statically analysing a program requires the construction of the syntax tree of this program, also called intermediate representation. The various techniques of static analysis are based on this abstract representation. The goal of the first step is to disassemble the binary code and then to parse the assembly code thus generated to produce the syntax tree (Figure 1). API: Application Program Interface.",
"title": ""
},
{
"docid": "406e6a8966aa43e7538030f844d6c2f0",
"text": "The idea of developing software components was envisioned more than forty years ago. In the past two decades, Component-Based Software Engineering (CBSE) has emerged as a distinguishable approach in software engineering, and it has attracted the attention of many researchers, which has led to many results being published in the research literature. There is a huge amount of knowledge encapsulated in conferences and journals targeting this area, but a systematic analysis of that knowledge is missing. For this reason, we aim to investigate the state-of-the-art of the CBSE area through a detailed literature review. To do this, 1231 studies dating from 1984 to 2012 were analyzed. Using the available evidence, this paper addresses five dimensions of CBSE: main objectives, research topics, application domains, research intensity and applied research methods. The main objectives found were to increase productivity, save costs and improve quality. The most addressed application domains are homogeneously divided between commercial-off-the-shelf (COTS), distributed and embedded systems. Intensity of research showed a considerable increase in the last fourteen years. In addition to the analysis, this paper also synthesizes the available evidence, identifies open issues and points out areas that call for further research. © 2015 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "1adc476c1e322d7cc7a0c93e726a8e2c",
"text": "A wireless body area network is a radio-frequency- based wireless networking technology that interconnects tiny nodes with sensor or actuator capabilities in, on, or around a human body. In a civilian networking environment, WBANs provide ubiquitous networking functionalities for applications varying from healthcare to safeguarding of uniformed personnel. This article surveys pioneer WBAN research projects and enabling technologies. It explores application scenarios, sensor/actuator devices, radio systems, and interconnection of WBANs to provide perspective on the trade-offs between data rate, power consumption, and network coverage. Finally, a number of open research issues are discussed.",
"title": ""
},
{
"docid": "d6bcf73a0237416318896154dfb0a764",
"text": "Singular Value Decomposition (SVD) is a popular approach in various network applications, such as link prediction and network parameter characterization. Incremental SVD approaches are proposed to process newly changed nodes and edges in dynamic networks. However, incremental SVD approaches suffer from serious error accumulation inevitably due to approximation on incremental updates. SVD restart is an effective approach to reset the aggregated error, but when to restart SVD for dynamic networks is not addressed in literature. In this paper, we propose TIMERS, Theoretically Instructed Maximum-Error-bounded Restart of SVD, a novel approach which optimally sets the restart time in order to reduce error accumulation in time. Specifically, we monitor the margin between reconstruction loss of incremental updates and the minimum loss in SVD model. To reduce the complexity of monitoring, we theoretically develop a lower bound of SVD minimum loss for dynamic networks and use the bound to replace the minimum loss in monitoring. By setting a maximum tolerated error as a threshold, we can trigger SVD restart automatically when the margin exceeds this threshold. We prove that the time complexity of our method is linear with respect to the number of local dynamic changes, and our method is general across different types of dynamic networks. We conduct extensive experiments on several synthetic and real dynamic networks. The experimental results demonstrate that our proposed method significantly outperforms the existing methods by reducing 27% to 42% in terms of the maximum error for dynamic network reconstruction when fixing the number of restarts. Our method reduces the number of restarts by 25% to 50% when fixing the maximum error tolerated.",
"title": ""
},
{
"docid": "8ac0bb34c0c393dddf91e81182632551",
"text": "The choice of activation functions in deep networks has a significant effect on the training dynamics and task performance. Currently, the most successful and widely-used activation function is the Rectified Linear Unit (ReLU). Although various hand-designed alternatives to ReLU have been proposed, none have managed to replace it due to inconsistent gains. In this work, we propose to leverage automatic search techniques to discover new activation functions. Using a combination of exhaustive and reinforcement learning-based search, we discover multiple novel activation functions. We verify the effectiveness of the searches by conducting an empirical evaluation with the best discovered activation function. Our experiments show that the best discovered activation function, f(x) = x · sigmoid(βx), which we name Swish, tends to work better than ReLU on deeper models across a number of challenging datasets. For example, simply replacing ReLUs with Swish units improves top-1 classification accuracy on ImageNet by 0.9% for Mobile NASNet-A and 0.6% for Inception-ResNet-v2. The simplicity of Swish and its similarity to ReLU make it easy for practitioners to replace ReLUs with Swish units in any neural network.",
"title": ""
},
{
"docid": "56d31440ed955158ecb29ff743029bb2",
"text": "We propose a systematic method for creating constellations of unitary space-time signals for multipleantenna communication links. Unitary space-time signals, which are orthonormal in time across the antennas, have been shown to be well-tailored to a Rayleigh fading channel where neither the transmitter nor the receiver knows the fading coefficients. The signals can achieve low probability of error by exploiting multiple-antenna diversity. Because the fading coefficients are not known, the criterion for creating and evaluating the constellation is nonstandard and differs markedly from the familiar maximum-Euclidean-distance norm. Our construction begins with the first signal in the constellation—an oblong complex-valued matrix whose columns are orthonormal—and systematically produces the remaining signals by successively rotating this signal in a high-dimensional complex space. This construction easily produces large constellations of high-dimensional signals. We demonstrate its efficacy through examples involving one, two, and three transmitter antennas. Index Terms —Multi-element antenna arrays, wireless communications, fading channels, transmit diversity, receive diversity, Unitary Space-Time Modulation",
"title": ""
},
{
"docid": "76dcd35124d95bffe47df5decdc5926a",
"text": "While kernel drivers have long been know to poses huge security risks, due to their privileged access and lower code quality, bug-finding tools for drivers are still greatly lacking both in quantity and effectiveness. This is because the pointer-heavy code in these drivers present some of the hardest challenges to static analysis, and their tight coupling with the hardware make dynamic analysis infeasible in most cases. In this work, we present DR. CHECKER, a soundy (i.e., mostly sound) bug-finding tool for Linux kernel drivers that is based on well-known program analysis techniques. We are able to overcome many of the inherent limitations of static analysis by scoping our analysis to only the most bug-prone parts of the kernel (i.e., the drivers), and by only sacrificing soundness in very few cases to ensure that our technique is both scalable and precise. DR. CHECKER is a fully-automated static analysis tool capable of performing general bug finding using both pointer and taint analyses that are flow-sensitive, context-sensitive, and fieldsensitive on kernel drivers. To demonstrate the scalability and efficacy of DR. CHECKER, we analyzed the drivers of nine production Linux kernels (3.1 million LOC), where it correctly identified 158 critical zero-day bugs with an overall precision of 78%.",
"title": ""
},
{
"docid": "8f0ed599cec42faa0928a0931ee77b28",
"text": "This paper describes the Connector and Acceptor patterns. The intent of these patterns is to decouple the active and passive connection roles, respectively, from the tasks a communication service performs once connections are established. Common examples of communication services that utilize these patterns include WWW browsers, WWW servers, object request brokers, and “superservers” that provide services like remote login and file transfer to client applications. This paper illustrates how the Connector and Acceptor patterns can help decouple the connection-related processing from the service processing, thereby yielding more reusable, extensible, and efficient communication software. When used in conjunction with related patterns like the Reactor [1], Active Object [2], and Service Configurator [3], the Acceptor and Connector patterns enable the creation of highly extensible and efficient communication software frameworks [4] and applications [5]. This paper is organized as follows: Section 2 outlines background information on networking and communication protocols necessary to appreciate the patterns in this paper; Section 3 motivates the need for the Acceptor and Connector patterns and illustrates how they have been applied to a production application-level Gateway; Section 4 describes the Acceptor and Connector patterns in detail; and Section 5 presents concluding remarks.",
"title": ""
},
{
"docid": "df00815ab7f96a286ca336ecd85ed821",
"text": "In Compressive Sensing Magnetic Resonance Imaging (CS-MRI), one can reconstruct a MR image with good quality from only a small number of measurements. This can significantly reduce MR scanning time. According to structured sparsity theory, the measurements can be further reduced to O(K + log n) for tree-sparse data instead of O(K +K log n) for standard K-sparse data with length n. However, few of existing algorithms have utilized this for CS-MRI, while most of them model the problem with total variation and wavelet sparse regularization. On the other side, some algorithms have been proposed for tree sparse regularization, but few of them have validated the benefit of wavelet tree structure in CS-MRI. In this paper, we propose a fast convex optimization algorithm to improve CS-MRI. Wavelet sparsity, gradient sparsity and tree sparsity are all considered in our model for real MR images. The original complex problem is decomposed into three simpler subproblems then each of the subproblems can be efficiently solved with an iterative scheme. Numerous experiments have been conducted and show that the proposed algorithm outperforms the state-of-the-art CS-MRI algorithms, and gain better reconstructions results on real MR images than general tree based solvers or algorithms.",
"title": ""
},
{
"docid": "6a6063c05941c026b083bfcc573520f8",
"text": "This paper describes how semantic indexing can help to generate a contextual overview of topics and visually compare clusters of articles. The method was originally developed for an innovative information exploration tool, called Ariadne, which operates on bibliographic databases with tens of millions of records (Koopman et al. in Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems. doi: 10.1145/2702613.2732781 , 2015b). In this paper, the method behind Ariadne is further developed and applied to the research question of the special issue “Same data, different results”—the better understanding of topic (re-)construction by different bibliometric approaches. For the case of the Astro dataset of 111,616 articles in astronomy and astrophysics, a new instantiation of the interactive exploring tool, LittleAriadne, has been created. This paper contributes to the overall challenge to delineate and define topics in two different ways. First, we produce two clustering solutions based on vector representations of articles in a lexical space. These vectors are built on semantic indexing of entities associated with those articles. Second, we discuss how LittleAriadne can be used to browse through the network of topical terms, authors, journals, citations and various cluster solutions of the Astro dataset. More specifically, we treat the assignment of an article to the different clustering solutions as an additional element of its bibliographic record. Keeping the principle of semantic indexing on the level of such an extended list of entities of the bibliographic record, LittleAriadne in turn provides a visualization of the context of a specific clustering solution. It also conveys the similarity of article clusters produced by different algorithms, hence representing a complementary approach to other possible means of comparison.",
"title": ""
},
{
"docid": "b776307764d3946fc4e7f6158b656435",
"text": "Recent development advances have allowed silicon (Si) semiconductor technology to approach the theoretical limits of the Si material; however, power device requirements for many applications are at a point that the present Si-based power devices can not handle. The requirements include higher blocking voltages, switching frequencies, efficiency, and reliability. To overcome these limitations, new semiconductor materials for power device applications are needed. For high power requirements, wide band gap semiconductors like silicon carbide (SiC), gallium nitride (GaN), and diamond with their superior electrical properties are likely candidates to replace Si in the near future. This paper compares all the aforementioned wide bandgap semiconductors with respect to their promise and applicability for power applications and predicts the future of power device semiconductor materials.",
"title": ""
},
{
"docid": "0d2e9d514586f083007f5e93d8bb9844",
"text": "Recovering Matches: Analysis-by-Synthesis Results Starting point: Unsupervised learning of image matching Applications: Feature matching, structure from motion, dense optical flow, recognition, motion segmentation, image alignment Problem: Difficult to do directly (e.g. based on video) Insights: Image matching is a sub-problem of frame interpolation Frame interpolation can be learned from natural video sequences",
"title": ""
},
{
"docid": "796af76343bbf770afb521b6c096fbdf",
"text": "This paper presents a rapid hierarchical radiosity algorithm for illuminating scenes containing large polygonal patches. The algorithm constructs a hierarchical representation of the form factor matrix by adaptively subdividing patches into subpatches according to a user-supplied error bound. The algorithm guarantees that all form factors are calculated to the same precision, removing many common image artifacts due to inaccurate form factors. More importantly, the algorithm decomposes the form factor matrix into at most O(n) blocks (where n is the number of elements). Previous radiosity algorithms represented the element-to-element transport interactions with n2 form factors. Visibility algorithms are given that work well with this approach. Standard techniques for shooting and gathering can be used with the hierarchical representation to solve for equilibrium radiosities, but we also discuss using a brightness-weighted error criteria, in conjunction with multigridding, to even more rapidly progressively refine the image.",
"title": ""
},
{
"docid": "df83a6388ce2b16060aa9da62a86894a",
"text": "Embodied agents have received large amounts of interest in recent years. They are often equipped with the ability to express emotion, but without understanding the impact this can have on the user. Given the amount of research studies that are utilising agent technology with affective capabilities, now is an important time to review the influence of synthetic agent emotion on user attitudes, perceptions and behaviour. We therefore present a structured overview of the research into emotional simulation in agents, providing a summary of the main studies, re-formulating appropriate results in terms of the emotional effects demonstrated, and an in-depth analysis illustrating the similarities and inconsistencies between different experiments across a variety of different domains. We highlight important lessons, future areas for research, and provide a set of guidelines for conducting further research. r 2009 Published by Elsevier Ltd.",
"title": ""
},
{
"docid": "4122375a509bf06cc7e8b89cb30357ff",
"text": "Textile-based sensors offer an unobtrusive method of continually monitoring physiological parameters during daily activities. Chemical analysis of body fluids, noninvasively, is a novel and exciting area of personalized wearable healthcare systems. BIOTEX was an EU-funded project that aimed to develop textile sensors to measure physiological parameters and the chemical composition of body fluids, with a particular interest in sweat. A wearable sensing system has been developed that integrates a textile-based fluid handling system for sample collection and transport with a number of sensors including sodium, conductivity, and pH sensors. Sensors for sweat rate, ECG, respiration, and blood oxygenation were also developed. For the first time, it has been possible to monitor a number of physiological parameters together with sweat composition in real time. This has been carried out via a network of wearable sensors distributed around the body of a subject user. This has huge implications for the field of sports and human performance and opens a whole new field of research in the clinical setting.",
"title": ""
},
{
"docid": "31d03e6933e1e289cd8cda641bd08b68",
"text": "BACKGROUND\nAnterior cruciate ligament reconstruction (ACLR) has been established as the gold standard for treatment of complete ruptures of the anterior cruciate ligament (ACL) in active, symptomatic individuals. In contrast, treatment of partial tears of the ACL remains controversial. Biologically augmented ACL-repair techniques are expanding in an attempt to regenerate and improve healing and outcomes of both the native ACL and the reconstructed graft tissue.\n\n\nPURPOSE\nTo review the biologic treatment options for partial tears of the ACL.\n\n\nSTUDY DESIGN\nReview.\n\n\nMETHODS\nA literature review was performed that included searches of PubMed, Medline, and Cochrane databases using the following keywords: partial tear of the ACL, ACL repair, bone marrow concentrate, growth factors/healing enhancement, platelet-rich plasma (PRP), stem cell therapy.\n\n\nRESULTS\nThe use of novel biologic ACL repair techniques, including growth factors, PRP, stem cells, and bioscaffolds, have been reported to result in promising preclinical and short-term clinical outcomes.\n\n\nCONCLUSION\nThe potential benefits of these biological augmentation approaches for partial ACL tears are improved healing, better proprioception, and a faster return to sport and activities of daily living when compared with standard reconstruction procedures. However, long-term studies with larger cohorts of patients and with technique validation are necessary to assess the real effect of these approaches.",
"title": ""
},
{
"docid": "fb0e9f6f58051b9209388f81e1d018ff",
"text": "Because many databases contain or can be embellished with structural information, a method for identifying interesting and repetitive substructures is an essential component to discovering knowledge in such databases. This paper describes the SUBDUE system, which uses the minimum description length (MDL) principle to discover substructures that compress the database and represent structural concepts in the data. By replacing previously-discovered substructures in the data, multiple passes of SUBDUE produce a hierarchical description of the structural regularities in the data. Inclusion of background knowledgeguides SUBDUE toward appropriate substructures for a particular domain or discovery goal, and the use of an inexact graph match allows a controlled amount of deviations in the instance of a substructure concept. We describe the application of SUBDUE to a variety of domains. We also discuss approaches to combining SUBDUE with non-structural discovery systems.",
"title": ""
}
] |
scidocsrr
|
ebd178ab1d662c33810ae7939e795523
|
HEMD: a highly efficient random forest-based malware detection framework for Android
|
[
{
"docid": "3e26fe227e8c270fda4fe0b7d09b2985",
"text": "With the recent emergence of mobile platforms capable of executing increasingly complex software and the rising ubiquity of using mobile platforms in sensitive applications such as banking, there is a rising danger associated with malware targeted at mobile devices. The problem of detecting such malware presents unique challenges due to the limited resources avalible and limited privileges granted to the user, but also presents unique opportunity in the required metadata attached to each application. In this article, we present a machine learning-based system for the detection of malware on Android devices. Our system extracts a number of features and trains a One-Class Support Vector Machine in an offline (off-device) manner, in order to leverage the higher computing power of a server or cluster of servers.",
"title": ""
}
] |
[
{
"docid": "e4d1a0be0889aba00b80a2d6cdc2335b",
"text": "This study uses a multi-period structural model developed by Chen and Yeh (2006), which extends the Geske-Johnson (1987) compound option model to evaluate the performance of capital structure arbitrage under a multi-period debt structure. Previous studies exploring capital structure arbitrage have typically employed single-period structural models, which have very limited empirical scopes. In this paper, we predict the default situations of a firm using the multi-period Geske-Johnson model that assumes endogenous default barriers. The Geske-Johnson model is the only model that accounts for the entire debt structure and imputes the default barrier to the asset value of the firm. This study also establishes trading strategies and analyzes the arbitrage performance of 369 North American obligators from 2004 to 2008. Comparing the performance of capital structure arbitrage between the Geske-Johnson and CreditGrades models, we find that the extended Geske-Johnson model is more suitable than the CreditGrades model for exploiting the mispricing between equity prices and credit default swap spreads.",
"title": ""
},
{
"docid": "75ce2ccca2afcae56101e141a42ac2a2",
"text": "Hip disarticulation is an amputation through the hip joint capsule, removing the entire lower extremity, with closure of the remaining musculature over the exposed acetabulum. Tumors of the distal and proximal femur were treated by total femur resection; a hip disarticulation sometimes is performance for massive trauma with crush injuries to the lower extremity. This article discusses the design a system for rehabilitation of a patient with bilateral hip disarticulations. The prosthetics designed allowed the patient to do natural gait suspended between parallel articulate crutches with the body weight support between the crutches. The care of this patient was a challenge due to bilateral amputations at such a high level and the special needs of a patient mobility. Keywords— Amputation, prosthesis, mobility,",
"title": ""
},
{
"docid": "add72d66c626f1a4df3e0820c629c75f",
"text": "Cybersecurity is a complex and dynamic area where multiple actors act against each other through computer networks largely without any commonly accepted rules of engagement. Well-managed cybersecurity operations need a clear terminology to describe threats, attacks and their origins. In addition, cybersecurity tools and technologies need semantic models to be able to automatically identify threats and to predict and detect attacks. This paper reviews terminology and models of cybersecurity operations, and proposes approaches for semantic modelling of cybersecurity threats and attacks.",
"title": ""
},
{
"docid": "64a3fec90138f6786dd8257a5ecd73e4",
"text": "Unlabeled high-dimensional text-image web news data are produced every day, presenting new challenges to unsupervised feature selection on multi-view data. State-of-the-art multi-view unsupervised feature selection methods learn pseudo class labels by spectral analysis, which is sensitive to the choice of similarity metric for each view. For text-image data, the raw text itself contains more discriminative information than similarity graph which loses information during construction, and thus the text feature can be directly used for label learning, avoiding information loss as in spectral analysis. We propose a new multi-view unsupervised feature selection method in which image local learning regularized orthogonal nonnegative matrix factorization is used to learn pseudo labels and simultaneously robust joint $l_{2,1}$-norm minimization is performed to select discriminative features. Cross-view consensus on pseudo labels can be obtained as much as possible. We systematically evaluate the proposed method in multi-view text-image web news datasets. Our extensive experiments on web news datasets crawled from two major US media channels: CNN and FOXNews demonstrate the efficacy of the new method over state-of-the-art multi-view and single-view unsupervised feature selection methods.",
"title": ""
},
{
"docid": "a3e6b2dabc7c84fef5894a2e946f46a1",
"text": "Developing a technique for the automatic analysis of surveillance videos in order to identify the presence of violence is of broad interest. In this work, we propose a deep neural network for the purpose of recognizing violent videos. A convolutional neural network is used to extract frame level features from a video. The frame level features are then aggregated using a variant of the long short term memory that uses convolutional gates. The convolutional neural network along with the convolutional long short term memory is capable of capturing localized spatio-temporal features which enables the analysis of local motion taking place in the video. We also propose to use adjacent frame differences as the input to the model thereby forcing it to encode the changes occurring in the video. The performance of the proposed feature extraction pipeline is evaluated on three standard benchmark datasets in terms of recognition accuracy. Comparison of the results obtained with the state of the art techniques revealed the promising capability of the proposed method in recognizing violent videos.",
"title": ""
},
{
"docid": "2dc2b9d60244e819a85b33581800ae56",
"text": "In this study, a simple and effective silver ink formulation was developed to generate silver tracks with high electrical conductivity on flexible substrates at low sintering temperatures. Diethanolamine (DEA), a self-oxidizing compound at moderate temperatures, was mixed with a silver ammonia solution to form a clear and stable solution. After inkjet-printed or pen-written on plastic sheets, DEA in the silver ink decomposes at temperatures higher than 50 °C and generates formaldehyde, which reacts spontaneously with silver ammonia ions to form silver thin films. The electrical conductivity of the inkjet-printed silver films can be 26% of the bulk silver after heating at 75 °C for 20 min and show great adhesion on plastic sheets.",
"title": ""
},
{
"docid": "50b6028ad48757789cfe203b9120bca1",
"text": "Social sensing is a new paradigm that inherits the main ideas of sensor networks and considers the users as new sensor types. For instance, by the time the users find out that an event has happened, they start to share the related posts and express their feelings through the social networks. Consequently, these networks are becoming a powerful news media in a wide range of topics. Existing event detection methods mostly focus on either the keyword burst or sentiment of posts, and ignore some natural aspects of social networks such as the dynamic rate of arriving posts. In this paper, we devised Dynamic Social Event Detection approach that exploits a new dynamic windowing method. Besides, we add a mechanism to combine the sentiment of posts with the keywords burst in the dynamic windows. The combination of sentiment analysis and the frequently used keywords enhances our approach to detect events with a different level of user engagement. To analyze the behavior of the devised approach, we use a wide range of metrics including histogram of window sizes, sentiment oscillations of posts, topic recall, keyword precision, and keyword recall on two benchmarked datasets. One of the significant outcomes of the devised method is the topic recall of 100% for FA Cup dataset.",
"title": ""
},
{
"docid": "f672df401b24571f81648066b3181890",
"text": "We consider the general problem of modeling temporal data with long-range dependencies, wherein new observations are fully or partially predictable based on temporally-distant, past observations. A sufficiently powerful temporal model should separate predictable elements of the sequence from unpredictable elements, express uncertainty about those unpredictable elements, and rapidly identify novel elements that may help to predict the future. To create such models, we introduce Generative Temporal Models augmented with external memory systems. They are developed within the variational inference framework, which provides both a practical training methodology and methods to gain insight into the models’ operation. We show, on a range of problems with sparse, long-term temporal dependencies, that these models store information from early in a sequence, and reuse this stored information efficiently. This allows them to perform substantially better than existing models based on well-known recurrent neural networks, like LSTMs.",
"title": ""
},
{
"docid": "68ff3d26d1990a138095e2222cf06d98",
"text": "In this paper, we report an experimental investigation of the effect of framing on social preferences, as revealed in a one-shot linear public goods game. We use two indicators to measure social preferences: self-reported emotional responses; and, as a behavioural indicator of disapproval, punishment. Our findings are that, for a given pattern of contributions, neither punishment nor emotion depends on the Give versus Take framing that we manipulate. To this extent, they suggest that the social preferences we observe are robust to framing effects.",
"title": ""
},
{
"docid": "deec27aef7ae2e498fdc9466f4783910",
"text": "This paper considers the problem of obtaining high quality attitude extraction and gyros bias estimation from typical low cost intertial measurement units for applications in control of unmanned aerial vehiccles. Two different non-linear complementary filters are proposed: Direct complementary filter and Passive non-linear complementary filter. Both filters evolve explicity on the special orthogonal group SO(3) and can be expressed in quaternion form for easy implementation. An extension to the passive ocmplementary filter is proposed to provide adaptive gyro bias estimation.",
"title": ""
},
{
"docid": "c2edf373d60d4165afec75d70117530d",
"text": "In her book Introducing Arguments, Linda Pylkkänen distinguishes between the core and noncore arguments of verbs by means of a detailed discussion of applicative and causative constructions. The term applicative refers to structures that in more general linguistic terms are defined as ditransitive, i.e. when both a direct and an indirect object are associated with the verb, as exemplified in (1) (Pylkkänen, 2008: 13):",
"title": ""
},
{
"docid": "ff2b53e0cecb849d1cbb503300f1ab9a",
"text": "Receiving rapid, accurate and comprehensive knowledge about the conditions of damaged buildings after earthquake strike and other natural hazards is the basis of many related activities such as rescue, relief and reconstruction. Recently, commercial high-resolution satellite imagery such as IKONOS and QuickBird is becoming more powerful data resource for disaster management. In this paper, a method for automatic detection and classification of damaged buildings using integration of high-resolution satellite imageries and vector map is proposed. In this method, after extracting buildings position from vector map, they are located in the pre-event and post-event satellite images. By measuring and comparing different textural features for extracted buildings in both images, buildings conditions are evaluated through a Fuzzy Inference System. Overall classification accuracy of 74% and kappa coefficient of 0.63 were acquired. Results of the proposed method, indicates the capability of this method for automatic determination of damaged buildings from high-resolution satellite imageries.",
"title": ""
},
{
"docid": "32a97a3d9f010c7cdd542c34f02afb46",
"text": "Extraction-Transformation-Loading (ETL) tools are pieces of software responsible for the extraction of data from several sources, their cleansing, customization and insertion into a data warehouse. In this paper, we delve into the logical design of ETL scenarios and provide a generic and customizable framework in order to support the DW designer in his task. First, we present a metamodel particularly customized for the definition of ETL activities. We follow a workflow-like approach, where the output of a certain activity can either be stored persistently or passed to a subsequent activity. Also, we employ a declarative database programming language, LDL, to define the semantics of each activity. The metamodel is generic enough to capture any possible ETL activity. Nevertheless, in the pursuit of higher reusability and flexibility, we specialize the set of our generic metamodel constructs with a palette of frequently-used ETL activities, which we call templates. Moreover, in order to achieve a uniform extensibility mechanism for this library of built-ins, we have to deal with specific language issues. Therefore, we also discuss the mechanics of template instantiation to concrete activities. The design concepts that we introduce have been implemented in a tool, ARKTOS II, which is also presented.",
"title": ""
},
{
"docid": "9b61ddcc5312a33ac9b22fe185a95e18",
"text": "INTRODUCTION Treatments for gait pathologies associated with neuromuscular disorders (such as dropfoot, spasticity, etc.) may employ a passive mechanical brace [1]. Depending on the gait pathology, the brace may be applied to the hip, knee, ankle, or any combination thereof. While passive mechanical braces provide certain benefits, they may lead to additional medical problems. For example, an ankle-foot orthotic (AFO) is typically used to prevent the toe from dragging on the ground in the case of drop-foot. Rigid versions of the AFO constrain the ankle to a specific position. By limiting the range of motion, the toe can clear the ground, thus allowing gait to progress more naturally. However, the use of the AFO may result in a reduction in power generation at the ankle, as it limits active plantar flexion. Moreover, the AFO may lead to disuse atrophy of the muscles, such as the tibialis anterior muscle, potentially leading to long-term dependence [2]. While previous researchers have examined actuating a rigid orthotic [3], we examine using NiTi shape memory alloy (SMA) wires to embed actuation within a soft material. In this manner, the orthotic can provide variable assistance depending on the gait cycle phase, activity level, and needs of the wearer. Thus, the subject can have individualized control, causing the muscles to be used more appropriately, possibly leading to a reeducation of the motor system and eventual independence from the orthotic system.",
"title": ""
},
{
"docid": "4b8823bffcc77968b7ac087579ab84c9",
"text": "Numerous complains have been made by Android users who severely suffer from the sluggish response when interacting with their devices. However, very few studies have been conducted to understand the user-perceived latency or mitigate the UI-lagging problem. In this paper, we conduct the first systematic measurement study to quantify the user-perceived latency using typical interaction-intensive Android apps in running with and without background workloads. We reveal the insufficiency of Android system in ensuring the performance of foreground apps and therefore design a new system to address the insufficiency accordingly. We develop a lightweight tracker to accurately identify all delay-critical threads that contribute to the slow response of user interactions. We then build a resource manager that can efficiently schedule various system resources including CPU, I/O, and GPU, for optimizing the performance of these threads. We implement the proposed system on commercial smartphones and conduct comprehensive experiments to evaluate our implementation. Evaluation results show that our system is able to significantly reduce the user-perceived latency of foreground apps in running with aggressive background workloads, up to 10x, while incurring negligible system overhead of less than 3.1 percent CPU and 7 MB memory.",
"title": ""
},
{
"docid": "a71bfbdbb8c78578d186caaef55d593b",
"text": "[Excerpt] Entrepreneurship is the process by which \"opportunities to create future goods and services are discovered, evaluated, and exploited\" (Shane and Venkataraman, 2000: 218). In other words, it is the process by which organizations and individuals convert new knowledge into new opportunities in the form of new products and services. Strategic human resource management (SHRM) has been defined as the system of organizational practices and policies used to manage employees in a manner that leads to higher organizational performance (Wright and McMahan, 1992). Further, one perspective suggests that sets of HR practices do not themselves create competitive advantage; instead, they foster the development of organizational capabilities which in turn create such advantages (Lado and Wilson, 1994; Wright, Dunford, and Snell, 2001). Specifically, this body of literature suggests that HR practices lead to firm performance when they are aligned to work together to create and support the employee-based capabilities that lead to competitive advantage (Wright and Snell, 2000; Wright, Dunford, and Snell, 2001). Thus, entrepreneurial human resource strategy is best defined as the set or sets of human resources practices that will increase the likelihood that new knowledge will be converted to new products or services.",
"title": ""
},
{
"docid": "3fd551696803695056dd759d8f172779",
"text": "The aim of this research essay is to examine the structural nature of theory in Information Systems. Despite the impor tance of theory, questions relating to its form and structure are neglected in comparison with questions relating to episte mology. The essay addresses issues of causality, explanation, prediction, and generalization that underlie an understanding of theory. A taxonomy is proposed that classifies information systems theories with respect to the manner in which four central goals are addressed: analysis, explanation, predic tion, and prescription. Five interrelated types of theory are distinguished: (I) theory for analyzing, (2) theory for ex plaining, (3) theory for predicting, (4) theory for explaining and predicting, and (5) theory for design and action. Examples illustrate the nature of each theory type. The appli cability of the taxonomy is demonstrated by classifying a sample of journal articles. The paper contributes by showing that multiple views of theory exist and by exposing the assumptions underlying different viewpoints. In addition, it is suggested that the type of theory under development can influence the choice of an epistemological approach. Support Allen Lee was the accepting senior editor for this paper. M. Lynne Markus, Michael D. Myers, and Robert W. Zmud served as reviewers. is given for the legitimacy and value of each theory type. The building of integrated bodies of theory that encompass all theory types is advocated.",
"title": ""
},
{
"docid": "0e06a34cbce3e212423a3d4eb8ba373e",
"text": "Cooperative sensors are an emerging technology consisting of autonomous sensor units working in concert to measure physiological signals requiring distant sensing points, such as biopotential (e.g., ECG) or bioimpedance (e.g., EIT). Their advantage with respect to the state-of-the-art technology is that they do not require shielded and even insulated cables to measure best quality biopotential or bioimpedance signals. Moreover, as all sensors are simply connected to a single electrical connection (which can be for instance a conductive vest) there is no connecting limitation to the miniaturization of the system or to its extension to large numbers of sensors. This results in an increase of wearability and comfort, as well as in a decrease of costs and integration challenges. However, cooperative sensors must communicate to be synchronized and to centralize the data. This paper presents possible communication strategies and focuses on the implementation of one of them that is particularly well suited for biopotential and bioimpedance measurements.",
"title": ""
},
{
"docid": "e0e00fdfecc4a23994315579938f740e",
"text": "Budget allocation in online advertising deals with distributing the campaign (insertion order) level budgets to different sub-campaigns which employ different targeting criteria and may perform differently in terms of return-on-investment (ROI). In this paper, we present the efforts at Turn on how to best allocate campaign budget so that the advertiser or campaign-level ROI is maximized. To do this, it is crucial to be able to correctly determine the performance of sub-campaigns. This determination is highly related to the action-attribution problem, i.e. to be able to find out the set of ads, and hence the sub-campaigns that provided them to a user, that an action should be attributed to. For this purpose, we employ both last-touch (last ad gets all credit) and multi-touch (many ads share the credit) attribution methodologies. We present the algorithms deployed at Turn for the attribution problem, as well as their parallel implementation on the large advertiser performance datasets. We conclude the paper with our empirical comparison of last-touch and multi-touch attribution-based budget allocation in a real online advertising setting.",
"title": ""
},
{
"docid": "30604dca66bbf3f0abe63c101f02e434",
"text": "This paper presents a novel feature based parameterization approach of human bodies from the unorganized cloud points and the parametric design method for generating new models based on the parameterization. The parameterization consists of two phases. Firstly, the semantic feature extraction technique is applied to construct the feature wireframe of a human body from laser scanned 3D unorganized points. Secondly, the symmetric detail mesh surface of the human body is modeled. Gregory patches are utilized to generate G 1 continuous mesh surface interpolating the curves on feature wireframe. After that, a voxel-based algorithm adds details on the smooth G 1 continuous surface by the cloud points. Finally, the mesh surface is adjusted to become symmetric. Compared to other template fitting based approaches, the parameterization approach introduced in this paper is more efficient. The parametric design approach synthesizes parameterized sample models to a new human body according to user input sizing dimensions. It is based on a numerical optimization process. The strategy of choosing samples for synthesis is also introduced. Human bodies according to a wide range of dimensions can be generated by our approach. Different from the mathematical interpolation function based human body synthesis methods, the models generated in our method have the approximation errors minimized. All mannequins constructed by our approach have consistent feature patches, which benefits the design automation of customized clothes around human bodies a lot.",
"title": ""
}
] |
scidocsrr
|
e905f15547256fb7462ba5d1d09f4d39
|
Diatom Autofocusing in Brightfield Microscopy: a Comparative Study
|
[
{
"docid": "134173c98bceafddbf7f12a108525ff4",
"text": "Rough surfaces pose a challenging shape extraction problem. Images of rough surfaces are often characterized by high frequency intensity variations, and it is difficult to perceive the shapes of these surfaces from their images. The shape-from-focus method described in this paper uses different focus levels to obtain a sequence of object images. The sum-modified-Laplacian (SML) operator is developed to compute local measures of the quality of image focus. The SML operator is applied to the image sequence, and the set of focus measures obtained at each image point are used to compute local depth estimates. We present two algorithms for depth estimation. The first algorithm simply looks for the focus level that maximizes the focus measure at each point. The other algorithm models the SML focus measure variations at each point as a Gaussian distribution and use this model to interpolate the computed focus measures to obtain more accurate depth estimates. The algorithms were implemented and tested using surfaces of different roughness and reflectance properties. We conclude with a brief discussion on how the proposed method can be applied to smooth textured and smooth non-textured surfaces.",
"title": ""
}
] |
[
{
"docid": "959478680a53cab7ea7ab6c453480b24",
"text": "LSB techniques generally embed data in the same LSB position of consecutive samples which helps intruders to extract secret information easily. This paper solve this problem by introducing a robust audio steganography technique where data is embedded in multiple layers of LSB chosen randomly and in non-consecutive samples. The choice of random LSB layers and non-consecutive pixels for embedding increases robustness as well as the strength of proposed steganography algorithm. It is seriously a problem that the data hiding at non-contiguous sample locations loses the capacity of stego audio. This problem is solved here by embedding three bits within a target samples. The capacity is also increased by using 6 bits ASCII representation of secret message instead of 7. The proposed technique is tested by embedding text of different payloads within the cover audio and also compared with existing techniques based on quality and capacity.",
"title": ""
},
{
"docid": "c10d33abc6ed1d47c11bf54ed38e5800",
"text": "The past decade has seen a steady growth of interest in statistical language models for information retrieval, and much research work has been conducted on this subject. This book by ChengXiang Zhai summarizes most of this research. It opens with an introduction covering the basic concepts of information retrieval and statistical languagemodels, presenting the intuitions behind these concepts. This introduction is then followed by a chapter providing an overview of:",
"title": ""
},
{
"docid": "6da632d61dbda324da5f74b38f25b1b9",
"text": "Deep neural networks have shown good data modelling capabilities when dealing with challenging and large datasets from a wide range of application areas. Convolutional Neural Networks (CNNs) offer advantages in selecting good features and Long Short-Term Memory (LSTM) networks have proven good abilities of learning sequential data. Both approaches have been reported to provide improved results in areas such image processing, voice recognition, language translation and other Natural Language Processing (NLP) tasks. Sentiment classification for short text messages from Twitter is a challenging task, and the complexity increases for Arabic language sentiment classification tasks because Arabic is a rich language in morphology. In addition, the availability of accurate pre-processing tools for Arabic is another current limitation, along with limited research available in this area. In this paper, we investigate the benefits of integrating CNNs and LSTMs and report obtained improved accuracy for Arabic sentiment analysis on different datasets. Additionally, we seek to consider the morphological diversity of particular Arabic words by using different sentiment classification levels.",
"title": ""
},
{
"docid": "fdf31df593f5e3349e5610a5a760559f",
"text": "OBJECTIVE\nFilicide, or parental murder of offspring, constitutes a major portion of lethal violence perpetrated against children worldwide. Despite the global nature of the phenomenon, researchers have focused their studies on the developed industrialized societies with the consequent neglect of small, developing societies. Second, there is a paucity of empirical data on child homicide committed by fathers. This study therefore explores the nature and extent of paternal filicides in Fiji, a non-Western society, and the social and cultural forces underlying them in order to enhance our knowledge of the phenomenon.\n\n\nMETHOD\nInformation was obtained from a number of sources, including (a) a police homicide logbook, (b) newspaper reports of homicide, and (c) detailed interviews conducted with criminal justice and medical personnel. Information from these data sources were consolidated to construct case histories of paternal filicides. These cases were then analyzed for dominant themes. Case illustrations are presented in the text.\n\n\nRESULTS\nSeveral of the study's findings are congruent with other studies of paternal filicides: poor, working class fathers were the offenders in all cases. As a corollary, their victims were from low socioeconomic backgrounds. Regarding location, paternal filicides occurred in the home of the offender and victim. The filicides were the culmination of stresses and strains associated with marital disharmony and excessive corporal child-control strategies.\n\n\nCONCLUSIONS\nThe general conclusion of this study is that further research in non-Western societies has the potential to increase our understanding of the social factors and processes involved in paternal child murders. We will then be better positioned to craft effective intervention strategies.",
"title": ""
},
{
"docid": "6a757e3bb48be08ce6a56a08a3fc84d4",
"text": "The completion of a high-quality, comprehensive sequence of the human genome, in this fiftieth anniversary year of the discovery of the double-helical structure of DNA, is a landmark event. The genomic era is now a reality. In contemplating a vision for the future of genomics research,it is appropriate to consider the remarkable path that has brought us here. The rollfold (Figure 1) shows a timeline of landmark accomplishments in genetics and genomics, beginning with Gregor Mendel’s discovery of the laws of heredity and their rediscovery in the early days of the twentieth century.Recognition of DNA as the hereditary material, determination of its structure, elucidation of the genetic code, development of recombinant DNA technologies, and establishment of increasingly automatable methods for DNA sequencing set the stage for the Human Genome Project (HGP) to begin in 1990 (see also www.nature.com/nature/DNA50). Thanks to the vision of the original planners, and the creativity and determination of a legion of talented scientists who decided to make this project their overarching focus, all of the initial objectives of the HGP have now been achieved at least two years ahead of expectation, and a revolution in biological research has begun. The project’s new research strategies and experimental technologies have generated a steady stream of ever-larger and more complex genomic data sets that have poured into public databases and have transformed the study of virtually all life processes. The genomic approach of technology development and large-scale generation of community resource data sets has introduced an important new dimension into biological and biomedical research. Interwoven advances in genetics, comparative genomics, highthroughput biochemistry and bioinformatics",
"title": ""
},
{
"docid": "325796828b9d25d50eb69f62d9eabdbb",
"text": "We present a new algorithm to reduce the space complexity of heuristic search. It is most effective for problem spaces that grow polynomially wi th problem size, but contain large numbers of short cycles. For example, the problem of finding a lowest-cost corner-to-corner path in a d-dimensional grid has application to gene sequence alignment in computational biology. The main idea is to perform a bidirectional search, but saving only the Open lists and not the Closed lists. Once the search completes, we have one node on an optimal path, but don't have the solution path itself. The path is then reconstructed by recursively applying the same algorithm between the in i t ia l node and the in termediate node, and also between the intermediate node and the goal node. If n is the length of the grid in each dimension, and d is the number of dimensions, this algorithm reduces the memory requirement from to The time complexity only increases by a constant factor of in two dimensions, and 1.8 in three dimensions.",
"title": ""
},
{
"docid": "c56063a72110b03e7cadcedc2982cbb5",
"text": "We present a system for keyframe-based dense camera tracking and depth map estimation that is entirely learned. For tracking, we estimate small pose increments between the current camera image and a synthetic viewpoint. This significantly simplifies the learning problem and alleviates the dataset bias for camera motions. Further, we show that generating a large number of pose hypotheses leads to more accurate predictions. For mapping, we accumulate information in a cost volume centered at the current depth estimate. The mapping network then combines the cost volume and the keyframe image to update the depth prediction, thereby effectively making use of depth measurements and image-based priors. Our approach yields state-of-the-art results with few images and is robust with respect to noisy camera poses. We demonstrate that the performance of our 6 DOF tracking competes with RGB-D tracking algorithms.We compare favorably against strong classic and deep learning powered dense depth algorithms.",
"title": ""
},
{
"docid": "90bc3c8dfdbf9de97be4417d57e7abf9",
"text": "Autonomous vehicles are being developed rapidly in recent years. In advance implementation stages, many particular problems must be solved to bring this technology into the market place. This paper focuses on the problem of driving in snow and wet road surface environments. First, the quality of laser imaging detection and ranging (LIDAR) reflectivity decreases on wet road surfaces. Therefore, an accumulation strategy is designed to increase the density of online LIDAR images. In order to enhance the texture of the accumulated images, principal component analysis is used to understand the geometrical structures and texture patterns in the map images. The LIDAR images are then reconstructed using the leading principal components with respect to the variance distribution accounted by each eigenvector. Second, the appearance of snow lines deforms the expected road context in LIDAR images. Accordingly, the edge profiles of the LIDAR and map images are extracted to encode the lane lines and roadside edges. Edge matching between the two profiles is then calculated to improve localization in the lateral direction. The proposed method has been tested and evaluated using real data that are collected during the winter of 2016–2017 in Suzu and Kanazawa, Japan. The experimental results show that the proposed method increases the robustness of autonomous driving on wet road surfaces, provides a stable performance in laterally localizing the vehicle in the presence of snow lines, and significantly reduces the overall localization error at a speed of 60 km/h.",
"title": ""
},
{
"docid": "fce11219cdd4d85dde1d3d893f252e14",
"text": "Smartphones and tablets with rich graphical user interfaces (GUI) are becoming increasingly popular. Hundreds of thousands of specialized applications, called apps, are available for such mobile platforms. Manual testing is the most popular technique for testing graphical user interfaces of such apps. Manual testing is often tedious and error-prone. In this paper, we propose an automated technique, called Swift-Hand, for generating sequences of test inputs for Android apps. The technique uses machine learning to learn a model of the app during testing, uses the learned model to generate user inputs that visit unexplored states of the app, and uses the execution of the app on the generated inputs to refine the model. A key feature of the testing algorithm is that it avoids restarting the app, which is a significantly more expensive operation than executing the app on a sequence of inputs. An important insight behind our testing algorithm is that we do not need to learn a precise model of an app, which is often computationally intensive, if our goal is to simply guide test execution into unexplored parts of the state space. We have implemented our testing algorithm in a publicly available tool for Android apps written in Java. Our experimental results show that we can achieve significantly better coverage than traditional random testing and L*-based testing in a given time budget. Our algorithm also reaches peak coverage faster than both random and L*-based testing.",
"title": ""
},
{
"docid": "d78cd7f5736a0ee5f4feaf390971da61",
"text": "Cloud computing is changing the way that organizations manage their data, due to its robustness, low cost and ubiquitous nature. Privacy concerns arise whenever sensitive data is outsourced to the cloud. This paper introduces a cloud database storage architecture that prevents the local administrator as well as the cloud administrator to learn about the outsourced database content. Moreover, machine readable rights expressions are used in order to limit users of the database to a need-to-know basis. These limitations are not changeable by administrators after the database related application is launched, since a new role of rights editors is defined once an application is launced. Furthermore, trusted computing is applied to bind cryptographic key information to trusted states. By limiting the necessary trust in both corporate as well as external administrators and service providers, we counteract the often criticized privacy and confidentiality risks of corporate cloud computing.",
"title": ""
},
{
"docid": "cec6cf7e47b87f148e187a11c98d251f",
"text": "With the rise of user-generated content in social media coupled with almost non-existent moderation in many such systems, aggressive contents have been observed to rise in such forums. In this paper, we work on the problem of aggression detection in social media. Aggression can sometimes be expressed directly or overtly or it can be hidden or covert in the text. On the other hand, most of the content in social media is non-aggressive in nature. We propose an ensemble based system to classify an input post into one of three classes, namely, Overtly Aggressive, Covertly Aggressive, and Non-aggressive. Our approach uses three deep learning methods, namely, Convolutional Neural Networks (CNN) with five layers (input, convolution, pooling, hidden, and output), Long Short Term Memory networks (LSTM), and Bi-directional Long Short Term Memory networks (Bi-LSTM). A majority voting based ensemble method is used to combine these classifiers (CNN, LSTM, and Bi-LSTM). We trained our method on Facebook comments dataset and tested on Facebook comments (in-domain) and other social media posts (cross-domain). Our system achieves the F1-score (weighted) of 0.604 for Facebook posts and 0.508 for social media posts.",
"title": ""
},
{
"docid": "2e5981a41d13ee2d588ee0e9fe04e1ec",
"text": "Malicious software (malware) has been extensively employed for illegal purposes and thousands of new samples are discovered every day. The ability to classify samples with similar characteristics into families makes possible to create mitigation strategies that work for a whole class of programs. In this paper, we present a malware family classification approach using VGG16 deep neural network’s bottleneck features. Malware samples are represented as byteplot grayscale images and the convolutional layers of a VGG16 deep neural network pre-trained on the ImageNet dataset is used for bottleneck features extraction. These features are used to train a SVM classifier for the malware family classification task. The experimental results on a dataset comprising 10,136 samples from 20 different families showed that our approach can effectively be used to classify malware families with an accuracy of 92.97%, outperforming similar approaches proposed in the literature which require feature engineering and considerable domain expertise.",
"title": ""
},
{
"docid": "de0482515de1d6134b8ff907be49d4dc",
"text": "In this paper, we describe the Adaptive Place Advi sor, a conversational recommendation system designed to he lp users decide on a destination. We view the selection of destinations a an interactive, conversational process, with the advisory system in quiring about desired item characteristics and the human responding. The user model, which contains preferences regarding items, attributes, values and v lue combinations, is also acquired during the conversation. The system enhanc es the user’s requirements with the user model and retrieves suitable items fr om a case-base. If the number of items found by the system is unsuitable (too hig h, too low) the next attribute to be constrained or relaxed is selected based on t he information gain associated with the attributes. We also describe the current s tatu of the system and future work.",
"title": ""
},
{
"docid": "4ec5cb3b9b6d2fdf492b64ca695217b1",
"text": "Deep Neural Network (DNN) acoustic models have yielded many state-of-the-art results in Automatic Speech Recognition (ASR) tasks. More recently, Recurrent Neural Network (RNN) models have been shown to outperform DNNs counterparts. However, state-of-the-art DNN and RNN models tend to be impractical to deploy on embedded systems with limited computational capacity. Traditionally, the approach for embedded platforms is to either train a small DNN directly, or to train a small DNN that learns the output distribution of a large DNN. In this paper, we utilize a state-of-the-art RNN to transfer knowledge to small DNN. We use the RNN model to generate soft alignments and minimize the Kullback-Leibler divergence against the small DNN. The small DNN trained on the soft RNN alignments achieved a 3.93 WER on the Wall Street Journal (WSJ) eval92 task compared to a baseline 4.54 WER or more than 13% relative improvement.",
"title": ""
},
{
"docid": "ef1bc2fc31f465300ed74863c350298a",
"text": "Work on the problem of contextualized word representation—the development of reusable neural network components for sentence understanding—has recently seen a surge of progress centered on the unsupervised pretraining task of language modeling with methods like ELMo (Peters et al., 2018). This paper contributes the first large-scale systematic study comparing different pretraining tasks in this context, both as complements to language modeling and as potential alternatives. The primary results of the study support the use of language modeling as a pretraining task and set a new state of the art among comparable models using multitask learning with language models. However, a closer look at these results reveals worryingly strong baselines and strikingly varied results across target tasks, suggesting that the widely-used paradigm of pretraining and freezing sentence encoders may not be an ideal platform for further work.",
"title": ""
},
{
"docid": "1e2a64369279d178ee280ed7e2c0f540",
"text": "We describe what is to our knowledge a novel technique for phase unwrapping. Several algorithms based on unwrapping the most-reliable pixels first have been proposed. These were restricted to continuous paths and were subject to difficulties in defining a starting pixel. The technique described here uses a different type of reliability function and does not follow a continuous path to perform the unwrapping operation. The technique is explained in detail and illustrated with a number of examples.",
"title": ""
},
{
"docid": "19a0954fb21092853d9577e25019aaee",
"text": "In this paper the design of a CMOS cascoded operational amplifier is described. Due to technology scaling the design of a former developed operational amplifier has now overcome its stability problems. A stable three stage operational amplifier is presented. A layout has been created automatically by using the ALADIN tool. With help of the extracted layout the performance data of the amplifier is simulated.",
"title": ""
},
{
"docid": "b2768017b8db6d8d4d0697800a556a49",
"text": "The recently proposed information bottleneck (IB) theory of deep nets suggests that during training, each layer attempts to maximize its mutual information (MI) with the target labels (so as to allow good prediction accuracy), while minimizing its MI with the input (leading to effective compression and thus good generalization). To date, evidence of this phenomenon has been indirect and aroused controversy due to theoretical and practical complications. In particular, it has been pointed out that the MI with the input is theoretically infinite in many cases of interest, and that the MI with the target is fundamentally difficult to estimate in high dimensions. As a consequence, the validity of this theory has been questioned. In this paper, we overcome these obstacles by two means. First, as previously suggested, we replace the MI with the input by a noise-regularized version, which ensures it is finite. As we show, this modified penalty in fact acts as a form of weight-decay regularization. Second, to obtain accurate (noise regularized) MI estimates between an intermediate representation and the input, we incorporate the strong prior-knowledge we have about their relation, into the recently proposed MI estimator of Belghazi et al. (2018). With this scheme, we are able to stably train each layer independently to explicitly optimize the IB functional. Surprisingly, this leads to enhanced prediction accuracy, thus directly validating the IB theory of deep nets for the first time.",
"title": ""
},
{
"docid": "48ad56eb4b866806bc99e941fbde49b9",
"text": "Mosaic trisomy 8 is a relatively common chromosomal abnormality, which shows a great variability in clinical expression, however cases with phenotypic abnormalities tend to present with a distinct, recognizable clinical syndrome with a characteristic facial appearance, a long, slender trunk, limitation of movement in multiple joints, and mild-to-moderate mental retardation; the deep plantar furrows are a typical finding, the agenesis of the corpus callosum occurs frequently. We report a case, which in addition to certain characteristic features of mosaic trisomy 8, presented with craniofacial midline defects, including notched nasal tip, cleft maxillary alveolar ridge, bifid tip of tongue, grooved uvula and left choanal atresia, previously not described in this chromosomal disorder and a severe delay in psychomotor development, uncommon in trisomy 8 mosaicism.",
"title": ""
},
{
"docid": "b18ee7faf7d9fff2cc62a49c4ca3d69d",
"text": "In this paper, we present a novel approach of face identification by formulating the pattern recognition problem in terms of linear regression. Using a fundamental concept that patterns from a single-object class lie on a linear subspace, we develop a linear model representing a probe image as a linear combination of class-specific galleries. The inverse problem is solved using the least-squares method and the decision is ruled in favor of the class with the minimum reconstruction error. The proposed Linear Regression Classification (LRC) algorithm falls in the category of nearest subspace classification. The algorithm is extensively evaluated on several standard databases under a number of exemplary evaluation protocols reported in the face recognition literature. A comparative study with state-of-the-art algorithms clearly reflects the efficacy of the proposed approach. For the problem of contiguous occlusion, we propose a Modular LRC approach, introducing a novel Distance-based Evidence Fusion (DEF) algorithm. The proposed methodology achieves the best results ever reported for the challenging problem of scarf occlusion.",
"title": ""
}
] |
scidocsrr
|
e9ed20e73a8daba9692ece80db45bae1
|
Work engagement and Machiavellianism in the ethical leadership process
|
[
{
"docid": "edf548598375ea1e36abd57dd3bad9c7",
"text": "processes associated with social identity. Group identification, as self-categorization, constructs an intragroup prototypicality gradient that invests the most prototypical member with the appearance of having influence; the appearance arises because members cognitively and behaviorally conform to the prototype. The appearance of influence becomes a reality through depersonalized social attraction processes that makefollowers agree and comply with the leader's ideas and suggestions. Consensual social attraction also imbues the leader with apparent status and creates a status-based structural differentiation within the group into leader(s) and followers, which has characteristics ofunequal status intergroup relations. In addition, afundamental attribution process constructs a charismatic leadership personality for the leader, which further empowers the leader and sharpens the leader-follower status differential. Empirical supportfor the theory is reviewed and a range of implications discussed, including intergroup dimensions, uncertainty reduction and extremism, power, and pitfalls ofprototype-based leadership.",
"title": ""
},
{
"docid": "ecbdb56c52a59f26cf8e33fc533d608f",
"text": "The ethical nature of transformational leadership has been hotly debated. This debate is demonstrated in the range of descriptors that have been used to label transformational leaders including narcissistic, manipulative, and self-centred, but also ethical, just and effective. Therefore, the purpose of the present research was to address this issue directly by assessing the statistical relationship between perceived leader integrity and transformational leadership using the Perceived Leader Integrity Scale (PLIS) and the Multi-Factor Leadership Questionnaire (MLQ). In a national sample of 1354 managers a moderate to strong positive relationship was found between perceived integrity and the demonstration of transformational leadership behaviours. A similar relationship was found between perceived integrity and developmental exchange leadership. A systematic leniency bias was identified when respondents rated subordinates vis-à-vis peer ratings. In support of previous findings, perceived integrity was also found to correlate positively with leader and organisational effectiveness measures.",
"title": ""
}
] |
[
{
"docid": "884121d37d1b16d7d74878fb6aff9cdb",
"text": "All models are wrong, but some are useful. 2 Acknowledgements The authors of this guide would like to thank David Warde-Farley, Guillaume Alain and Caglar Gulcehre for their valuable feedback. Special thanks to Ethan Schoonover, creator of the Solarized color scheme, 1 whose colors were used for the figures. Feedback Your feedback is welcomed! We did our best to be as precise, informative and up to the point as possible, but should there be anything you feel might be an error or could be rephrased to be more precise or com-prehensible, please don't refrain from contacting us. Likewise, drop us a line if you think there is something that might fit this technical report and you would like us to discuss – we will make our best effort to update this document. Source code and animations The code used to generate this guide along with its figures is available on GitHub. 2 There the reader can also find an animated version of the figures.",
"title": ""
},
{
"docid": "bee18c0e11ec5db199861ef74b06bfe1",
"text": "Financial time series are complex, non-stationary and deterministically chaotic. Technical indicators are used with principal component analysis (PCA) in order to identify the most influential inputs in the context of the forecasting model. Neural networks (NN) and support vector regression (SVR) are used with different inputs. Our assumption is that the future value of a stock price depends on the financial indicators although there is no parametric model to explain this relationship. This relationship comes from technical analysis. Comparison shows that SVR and MLP networks require different inputs. The MLP networks outperform the SVR technique.",
"title": ""
},
{
"docid": "15bf072dd0195fa8a9eb19fb82862a4e",
"text": "Recent developments in Graphics Processing Units (GPUs) have enabled inexpensive high performance computing for general-purpose applications. Due to GPU's tremendous computing capability, it has emerged as the co-processor of the CPU to achieve a high overall throughput. CUDA programming model provides the programmers adequate C language like APIs to better exploit the parallel power of the GPU. K-nearest neighbor (KNN) is a widely used classification technique and has significant applications in various domains, especially in text classification. The computational-intensive nature of KNN requires a high performance implementation. In this paper, we present a CUDA-based parallel implementation of KNN, CUKNN, using CUDA multi-thread model, where the data elements are processed in a data-parallel fashion. Various CUDA optimization techniques are applied to maximize the utilization of the GPU. CUKNN outperforms the serial KNN on an HP xw8600 workstation significantly, achieving up to 46.71X speedup including I/O time. It also shows good scalability when varying the dimension of the reference dataset, the number of records in the reference dataset, and the number of records in the query dataset.",
"title": ""
},
{
"docid": "fac9d443e9b9aab923576de449dc1c38",
"text": "The construct of mindfulness appears to be compatible with theories of !ow and peak performance in sport. The present study assessed how Mindful Sport Performance Enhancement (MSPE), a new 4-week program, affected !ow states, performance, and psychological characteristics of 11 archers and 21 golfers from the community. Participants completed trait measures of anxiety, perfectionism, thought disruption, con\"dence, mindfulness, and !ow. They additionally provided data on their performances and state levels of mindfulness and !ow. Analyses revealed that some signi\"cant changes in dimensions of the trait variables occurred during the training. Levels of state !ow attained by the athletes also increased between the \"rst and \"nal sessions. The \"ndings suggest that MSPE is a promising intervention to enhance !ow, mindfulness, and aspects of sport con\"dence. An expanded workshop to allot more time for mindfulness practice is recommended for future studies.",
"title": ""
},
{
"docid": "35e73af4b9f6a32c0fd4e31fde871f8a",
"text": "In this paper, a novel three-phase soft-switching inverter is presented. The inverter-switch turn on and turn off are performed under zero-voltage switching condition. This inverter has only one auxiliary switch, which is also soft switched. Having one auxiliary switch simplifies the control circuit considerably. The proposed inverter is analyzed, and its operating modes are explained in details. The design considerations of the proposed inverter are presented. The experimental results of the prototype inverter confirm the theoretical analysis.",
"title": ""
},
{
"docid": "059463f31fcb83c346f96ed8345ff9a6",
"text": "Cancer incidence is projected to increase in the future and an effectual preventive strategy is required to face this challenge. Alteration of dietary habits is potentially an effective approach for reducing cancer risk. Assessment of biological effects of a specific food or bioactive component that is linked to cancer and prediction of individual susceptibility as a function of nutrient-nutrient interactions and genetics is an essential element to evaluate the beneficiaries of dietary interventions. In general, the use of biomarkers to evaluate individuals susceptibilities to cancer must be easily accessible and reliable. However, the response of individuals to bioactive food components depends not only on the effective concentration of the bioactive food components, but also on the target tissues. This fact makes the response of individuals to food components vary from one individual to another. Nutrigenomics focuses on the understanding of interactions between genes and diet in an individual and how the response to bioactive food components is influenced by an individual's genes. Nutrients have shown to affect gene expression and to induce changes in DNA and protein molecules. Nutrigenomic approaches provide an opportunity to study how gene expression is regulated by nutrients and how nutrition affects gene variations and epigenetic events. Finding the components involved in interactions between genes and diet in an individual can potentially help identify target molecules important in preventing and/or reducing the symptoms of cancer.",
"title": ""
},
{
"docid": "7e2b4e6a887d99a58e4ae9d9666d05e0",
"text": "Much has been written on shortest path problems with weight, or resource, constraints. However, relatively little of it has provided systematic computational comparisons for a representative selection of algorithms. Furthermore, there has been almost no work showing numerical performance of scaling algorithms, although worst-case complexity guarantees for these are well known, nor has the effectiveness of simple preprocessing techniques been fully demonstrated. Here, we provide a computational comparison of three scaling techniques and a standard label-setting method. We also describe preprocessing techniques which take full advantage of cost and upper-bound information that can be obtained from simple shortest path information. We show that integrating information obtained in preprocessing within the label-setting method can lead to very substantial improvements in both memory required and run time, in some cases, by orders of magnitude. Finally, we show how the performance of the label-setting method can be further improved by making use of all Lagrange multiplier information collected in a Lagrangean relaxation first step. © 2003 Wiley Periodicals, Inc.",
"title": ""
},
{
"docid": "90c91082dd6bb7c23e0597b56b5a72f4",
"text": "Cache-partitioned architectures allow subsections of the shared last-level cache (LLC) to be exclusively reserved for some applications. This technique dramatically limits interactions between applications that are concurrently executing on a multi-core machine. Consider n applications that execute concurrently, with the objective to minimize the makespan, defined as the maximum completion time of the n applications. Key scheduling questions are: (i) which proportion of cache and (ii) how many processors should be given to each application? In this paper, we provide answers to (i) and (ii) for Amdahl applications. Even though the problem is shown to be NP-complete, we give key elements to determine the subset of applications that should share the LLC (while remaining ones only use their smaller private cache). Building upon these results, we design efficient heuristics for Amdahl applications. Extensive simulations demonstrate the usefulness of co-scheduling when our efficient cache partitioning strategies are deployed.",
"title": ""
},
{
"docid": "0c8d6441b5756d94cd4c3a0376f94fdc",
"text": "Electronic word of mouth (eWOM) has been an important factor influencing consumer purchase decisions. Using the ABC model of attitude, this study proposes a model to explain how eWOM affects online discussion forums. Specifically, we propose that platform (Web site reputation and source credibility) and customer (obtaining buying-related information and social orientation through information) factors influence purchase intentions via perceived positive eWOM review credibility, as well as product and Web site attitudes in an online community context. A total of 353 online discussion forum users in an online community (Fashion Guide) in Taiwan were recruited, and structural equation modeling (SEM) was used to test the research hypotheses. The results indicate that Web site reputation, source credibility, obtaining buying-related information, and social orientation through information positively influence perceived positive eWOM review credibility. In turn, perceived positive eWOM review credibility directly influences purchase intentions and also indirectly influences purchase intentions via product and Web site attitudes. Finally, we discuss the theoretical and managerial implications of the findings.",
"title": ""
},
{
"docid": "d62b3a328257253bcb41bf0fbdeb9242",
"text": "Logging has been a common practice for monitoring and diagnosing performance issues. However, logging comes at a cost, especially for large-scale online service systems. First, the overhead incurred by intensive logging is non-negligible. Second, it is costly to diagnose a performance issue if there are a tremendous amount of redundant logs. Therefore, we believe that it is important to limit the overhead incurred by logging, without sacrificing the logging effectiveness. In this paper we propose Log2, a cost-aware logging mechanism. Given a “budget” (defined as the maximum volume of logs allowed to be output in a time interval), Log2 makes the “whether to log” decision through a two-phase filtering mechanism. In the first phase, a large number of irrelevant logs are discarded efficiently. In the second phase, useful logs are cached and output while complying with logging budget. In this way, Log2 keeps the useful logs and discards the less useful ones. We have implemented Log2 and evaluated it on an open source system as well as a real-world online service system from Microsoft. The experimental results show that Log2 can control logging overhead while preserving logging effectiveness.",
"title": ""
},
{
"docid": "a5cd7d46dc74d15344e2f3e9b79388a3",
"text": "A number of differences have emerged between modern and classic approaches to constituency parsing in recent years, with structural components like grammars and featurerich lexicons becoming less central while recurrent neural network representations rise in popularity. The goal of this work is to analyze the extent to which information provided directly by the model structure in classical systems is still being captured by neural methods. To this end, we propose a high-performance neural model (92.08 F1 on PTB) that is representative of recent work and perform a series of investigative experiments. We find that our model implicitly learns to encode much of the same information that was explicitly provided by grammars and lexicons in the past, indicating that this scaffolding can largely be subsumed by powerful general-purpose neural machinery.",
"title": ""
},
{
"docid": "907489a354a9d711fb4bd613fb93b874",
"text": "A popular approach for dimensionality reduction and data analysis is principal component analysis (PCA). A limiting factor with PCA is that it does not inform us on which of the original features are important. There is a recent interest in sparse PCA (SPCA). By applying an L1 regularizer to PCA, a sparse transformation is achieved. However, true feature selection may not be achieved as non-sparse coefficients may be distributed over several features. Feature selection is an NP-hard combinatorial optimization problem. This paper relaxes and re-formulates the feature selection problem as a convex continuous optimization problem that minimizes a mean-squared-reconstruction error (a criterion optimized by PCA) and considers feature redundancy into account (an important property in PCA and feature selection). We call this new method Convex Principal Feature Selection (CPFS). Experiments show that CPFS performed better than SPCA in selecting features that maximize variance or minimize the mean-squaredreconstruction error.",
"title": ""
},
{
"docid": "dfc6455cb7c12037faeb8c02c0027570",
"text": "This paper proposes efficient and powerful deep networks for action prediction from partially observed videos containing temporally incomplete action executions. Different from after-the-fact action recognition, action prediction task requires action labels to be predicted from these partially observed videos. Our approach exploits abundant sequential context information to enrich the feature representations of partial videos. We reconstruct missing information in the features extracted from partial videos by learning from fully observed action videos. The amount of the information is temporally ordered for the purpose of modeling temporal orderings of action segments. Label information is also used to better separate the learned features of different categories. We develop a new learning formulation that enables efficient model training. Extensive experimental results on UCF101, Sports-1M and BIT datasets demonstrate that our approach remarkably outperforms state-of-the-art methods, and is up to 300x faster than these methods. Results also show that actions differ in their prediction characteristics, some actions can be correctly predicted even though only the beginning 10% portion of videos is observed.",
"title": ""
},
{
"docid": "4fb372431e28398691c079936e2f9fe6",
"text": "Entity matching (EM) is a critical part of data integration. We study how to synthesize entity matching rules from positive-negative matching examples. The core of our solution is program synthesis, a powerful tool to automatically generate rules (or programs) that satisfy a given highlevel specification, via a predefined grammar. This grammar describes a General Boolean Formula (GBF) that can include arbitrary attribute matching predicates combined by conjunctions ( Ź",
"title": ""
},
{
"docid": "023d51f3dadddd872f41f75af7f63494",
"text": "The term sequential I/O is widely used in systems research with the intuitive understanding that it means consecutive access. From a survey of the literature, though, this intuitive understanding has translated into numerous, inconsistent definitions. Since sequential I/O is such a fundamental concept in systems research, we believe that a sequentiality metric should allow us to compare access patterns in a meaningful way. We explore access properties that could be incorporated into potential metrics for sequential I/O including: access size, gaps between accesses, multi-stream, and inter-arrival time. We then analyze hundreds of largescale storage traces and discuss how potential metrics compare. Interestingly, we find I/O traces considered highly sequential by one metric can be highly random to another metric. We further demonstrate that many plausible metrics are weakly correlated, though metrics weighted by size have more consistency. While there may not be a single metric for sequential I/O that is best in all cases, we believe systems researchers should more carefully consider, and state, which definition they use.",
"title": ""
},
{
"docid": "0a7db914781aacb79a7139f3da41efbb",
"text": "This work studies the reliability behaviour of gate oxides grown by in situ steam generation technology. A comparison with standard steam oxides is performed, investigating interface and bulk properties. A reduced conduction at low fields and an improved reliability is found for ISSG oxide. The initial lower bulk trapping, but with similar degradation rate with respect to standard oxides, explains the improved reliability results. 2004 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "fb2ab8efc11c371e7183eacaee707f71",
"text": "Direct current (DC) motors are controlled easily and have very high performance. The speed of the motors could be adjusted within a wide range. Today, classical control techniques (such as Proportional Integral Differential PID) are very commonly used for speed control purposes. However, it is observed that the classical control techniques do not have an adequate performance in the case of nonlinear systems. Thus, instead, a modern technique is preferred: fuzzy logic. In this paper the control system is modelled using MATLAB/Simulink. Using both PID controller and fuzzy logic techniques, the results are compared for different speed values.",
"title": ""
},
{
"docid": "3df56e37955b28b0c4f16a82b574e7d1",
"text": "Enterprises are creating domain-specific knowledge bases by curating and integrating all their business data, structured, unstructured and semi-structured, and using them in enterprise applications to derive better business decisions. One distinct characteristic of these enterprise knowledge bases, compared to the open-domain general purpose knowledge bases like DBpedia [16] and Freebase [6], is their deep domain specialization. This deep domain understanding empowers many applications in various domains, such as health care and finance. Exploring such knowledge bases, and operational data stores requires different querying capabilities. In addition to search, these databases also require very precise structured queries, including aggregations, as well as complex graph queries to understand the various relationships between various entities of the domain. For example, in a financial knowledge base, users may want to find out “which startups raised the most VC funding in the first quarter of 2017”; a very precise query that is best expressed in SQL. The users may also want to find all possible relationships between two specific board members of these startups, a query which is naturally expressed as an all-paths graph query. It is important to note that general purpose knowledge bases could also benefit from different query capabilities, but in this paper we focus on domain-specific knowledge graphs and their query needs. Instead of learning and using many complex query languages, one natural way to query the data in these cases is using natural language interfaces to explore the data. In fact, human interaction with technology through conversational services is making big strides in many application domains in recent years [13]. Such interfaces are very desirable because they do not require the users to learn a complex query language, such as SQL, and the users do not need to know the exact schema of the data, or how it is stored. There are several challenges in building a natural language interface to query data sets. The most difficult task is understanding the semantics of the query, hence the user intent. Early systems [3, 30] allowed only a set of keywords, which had very limited expressive power. There have been works to interpret the semantics of a full-blown English language query. These works in general try to disambiguate among the potentially multiple meanings of the words and their relationships. Some of these are machine-learning based [5, 24, 29] that require good training sets, which are hard to obtain. Others require user feedback [14, 17, 18]. However, excessive user interaction to resolve ambiguities can be detrimental to user experience. In this paper, we describe a unique end-to-end ontology-based system for natural language querying over complex data sets. The system uses domain ontologies, which describe the semantic entities and their relationships, to reason about and capture user intent. To support multiple query types, the system provides a poly store",
"title": ""
},
{
"docid": "6bca96b8ae247f9333dee176aa6adab5",
"text": "A total of 43 plant substances provided as raw material and different kinds of extracts (aqueous, ethanol, and heptane) from 18 different organic wastes obtained from the food/feed industry were investigated for their in vitro activities against clonal cultures of Histomonas meleagridis, Tetratrichomonas gallinarum, and Blastocystis sp. Ethanolic extracts of thyme, saw palmetto, grape seed, and pumpkin fruit proved to be most efficacious. Thus, these extracts were further tested in vivo in turkeys experimentally infected with H. meleagridis by administrating the substances to the birds through the drinking water. Even though a delayed mortality was noticed in some birds medicated with the extracts of thyme, grape seed, and pumpkin fruit, all birds died or had to be euthanized the latest within 5 weeks post infection—with the exception of one bird which was probably never infected with histomonads—due to a severe typhlohepatitis indicative for histomonosis. In addition, none of the substances were able to prevent the spreading of H. meleagridis from infected to in-contact birds. Thus, these studies clearly demonstrate that in vitro studies are of limited value to assess the efficacy of plant substances against histomonosis.",
"title": ""
},
{
"docid": "5c444fcd85dd89280eee016fd1cbd175",
"text": "Over the last years, object detection has become a more and more active field of research in robotics. An important problem in object detection is the need for sufficient labeled training data to learn good classifiers. In this paper we show how to significantly reduce the need for manually labeled training data by leveraging data sets available on the World Wide Web. Specifically, we show how to use objects from Google’s 3D Warehouse to train an object detection system for 3D point clouds collected by robots navigating through both urban and indoor environments. In order to deal with the different characteristics of the web data and the real robot data, we additionally use a small set of labeled point clouds and perform domain adaptation. Our experiments demonstrate that additional data taken from the 3D Warehouse along with our domain adaptation greatly improves the classification accuracy on real-world environments.",
"title": ""
}
] |
scidocsrr
|
a161b1e58e28a32e1a520e5155de115d
|
Measurement of individual differences: lessons from memory assessment in research and clinical practice.
|
[
{
"docid": "58039fbc0550c720c4074c96e866c025",
"text": "We argue that to best comprehend many data sets, plotting judiciously selected sample statistics with associated confidence intervals can usefully supplement, or even replace, standard hypothesis-testing procedures. We note that most social science statistics textbooks limit discussion of confidence intervals to their use in between-subject designs. Our central purpose in this article is to describe how to compute an analogous confidence interval that can be used in within-subject designs. This confidence interval rests on the reasoning that because between-subject variance typically plays no role in statistical analyses of within-subject designs, it can legitimately be ignored; hence, an appropriate confidence interval can be based on the standard within-subject error term-that is, on the variability due to the subject × condition interaction. Computation of such a confidence interval is simple and is embodied in Equation 2 on p. 482 of this article. This confidence interval has two useful properties. First, it is based on the same error term as is the corresponding analysis of variance, and hence leads to comparable conclusions. Second, it is related by a known factor (√2) to a confidence interval of the difference between sample means; accordingly, it can be used to infer the faith one can put in some pattern of sample means as a reflection of the underlying pattern of population means. These two properties correspond to analogous properties of the more widely used between-subject confidence interval.",
"title": ""
}
] |
[
{
"docid": "2c9ee5e88db62a4de92dbefb72bb61de",
"text": "Surveys are probably the most commonly-used research method world-wide. Survey work is visible not only because we see many examples of it in software engineering research, but also because we are often asked to participate in surveys in our private capacity, as electors, consumers, or service users. This widespread use of surveys may give us the impression that surveybased research is straightforward, an easy option for researchers to gather important information about products, context, processes, workers and more. In our personal experience with applying and evaluating research methods and their results, we certainly did not expect to encounter major problems with a survey that we planned, to investigate issues associated with technology adoption. This article and subsequent ones in this series describe how wrong we were. We do not want to give the impression that there is any way of turning a bad survey into a good one; if a survey is a lemon, it stays a lemon. However, we believe that learning from our mistakes is the way to make lemonade from lemons. So this series of articles shares with you our lessons learned, in the hope of improving survey research in software engineering.",
"title": ""
},
{
"docid": "768336582eb1aece4454ec461f3840d2",
"text": "This paper presents an Iterative Linear Quadratic Regulator (ILQR) me thod for locally-optimal feedback control of nonlinear dynamical systems. The method is applied to a musculo-s ke etal arm model with 10 state dimensions and 6 controls, and is used to compute energy-optimal reach ing movements. Numerical comparisons with three existing methods demonstrate that the new method converge s substantially faster and finds slightly better solutions.",
"title": ""
},
{
"docid": "16b78e470af247cc65fd1ef4e17ace4b",
"text": "OBJECTIVES\nTo examine the effectiveness of using the 'mind map' study technique to improve factual recall from written information.\n\n\nDESIGN\nTo obtain baseline data, subjects completed a short test based on a 600-word passage of text prior to being randomly allocated to form two groups: 'self-selected study technique' and 'mind map'. After a 30-minute interval the self-selected study technique group were exposed to the same passage of text previously seen and told to apply existing study techniques. Subjects in the mind map group were trained in the mind map technique and told to apply it to the passage of text. Recall was measured after an interfering task and a week later. Measures of motivation were taken.\n\n\nSETTING\nBarts and the London School of Medicine and Dentistry, University of London.\n\n\nSUBJECTS\n50 second- and third-year medical students.\n\n\nRESULTS\nRecall of factual material improved for both the mind map and self-selected study technique groups at immediate test compared with baseline. However this improvement was only robust after a week for those in the mind map group. At 1 week, the factual knowledge in the mind map group was greater by 10% (adjusting for baseline) (95% CI -1% to 22%). However motivation for the technique used was lower in the mind map group; if motivation could have been made equal in the groups, the improvement with mind mapping would have been 15% (95% CI 3% to 27%).\n\n\nCONCLUSION\nMind maps provide an effective study technique when applied to written material. However before mind maps are generally adopted as a study technique, consideration has to be given towards ways of improving motivation amongst users.",
"title": ""
},
{
"docid": "106fefb169c7e95999fb411b4e07954e",
"text": "Additional contents in web pages, such as navigation panels, advertisements, copyrights and disclaimer notices, are typically not related to the main subject and may hamper the performance of Web data mining. They are traditionally taken as noises and need to be removed properly. To achieve this, two intuitive and crucial kinds of information—the textual information and the visual information of web pages—is considered in this paper. Accordingly, Text Density and Visual Importance are defined for the Document Object Model (DOM) nodes of a web page. Furthermore, a content extraction method with these measured values is proposed. It is a fast, accurate and general method for extracting content from diverse web pages. And with the employment of DOM nodes, the original structure of the web page can be preserved. Evaluated with the CleanEval benchmark and with randomly selected pages from well-known Web sites, where various web domains and styles are tested, the effect of the method is demonstrated. The average F1-scores with our method were 8.7 % higher than the best scores among several alternative methods.",
"title": ""
},
{
"docid": "951f79f828d3375c7544129cdb575940",
"text": "In this paper, we deal with imitation learning of arm movements in humanoid robots. Hidden Markov models (HMM) are used to generalize movements demonstrated to a robot multiple times. They are trained with the characteristic features (key points) of each demonstration. Using the same HMM, key points that are common to all demonstrations are identified; only those are considered when reproducing a movement. We also show how HMM can be used to detect temporal dependencies between both arms in dual-arm tasks. We created a model of the human upper body to simulate the reproduction of dual-arm movements and generate natural-looking joint configurations from tracked hand paths. Results are presented and discussed",
"title": ""
},
{
"docid": "7650efa35c554fd282f841f7348fb49e",
"text": "By inserting a microlens array at the intermediate image plane of an optical microscope, one can record four-dimensional light fields of biological specimens in a single snapshot. Unlike a conventional photograph, light fields permit manipulation of viewpoint and focus after the snapshot has been taken, subject to the resolution of the camera and the diffraction limit of the optical system. By inserting a second microlens array and video projector into the microscope’s illumination path, one can control the incident light field falling on the specimen in a similar way. In this paper, we describe a prototype system we have built that implements these ideas, and we demonstrate two applications for it: simulating exotic microscope illumination modalities and correcting for optical aberrations digitally.",
"title": ""
},
{
"docid": "7023226f1e77729ec38eeb5158e8811d",
"text": "Combinatory Categorial Grammar (CCG) is a grammar formalism used for natural language parsing. CCG assigns structured lexical categories to words and uses a small set of combinatory rules to combine these categories in order to parse sentences. In this work we describe and implement a new approach to CCG parsing that relies on Answer Set Programming (ASP) — a declarative programming paradigm. Different from previous work, we present an encoding that is inspired by the algorithm due to Cocke, Younger, and Kasami (CYK). We also show encoding extensions for parse tree normalization and best-effort parsing and outline possible future extensions which are possible due to the usage of ASP as computational mechanism. We analyze performance of our approach on a part of the Brown corpus and discuss lessons learned during experiments with the ASP tools dlv, gringo, and clasp. The new approach is available in the open source CCG parsing toolkit AspCcgTk which uses the C&C supertagger as a preprocessor to achieve wide-coverage natural language parsing.",
"title": ""
},
{
"docid": "4b426740b321249b63eaca19dbc29370",
"text": "The healthcare data is an important asset and rich source of healthcare intellect. Medical databases, if created properly, will be large, complex, heterogeneous and time varying. The main challenge nowadays is to store and process this data efficiently so that it can benefit humans. Heterogeneity in the healthcare sector in the form of medical data is also considered to be one of the biggest challenges for researchers. Sometimes, this data is referred to as large-scale data or big data. Blockchain technology and the Cloud environment have proved their usability separately. Though these two technologies can be combined to enhance the exciting applications in healthcare industry. Blockchain is a highly secure and decentralized networking platform of multiple computers called nodes. It is changing the way medical information is being stored and shared. It makes the work easier, keeps an eye on the security and accuracy of the data and also reduces the cost of maintenance. A Blockchain-based platform is proposed that can be used for storing and managing electronic medical records in a Cloud environment.",
"title": ""
},
{
"docid": "e93eaa695003cb409957e5c7ed19bf2a",
"text": "Prominent research argues that consumers often use personal budgets to manage self-control problems. This paper analyzes the link between budgeting and selfcontrol problems in consumption-saving decisions. It shows that the use of goodspecific budgets depends on the combination of a demand for commitment and the demand for flexibility resulting from uncertainty about intratemporal trade-offs between goods. It explains the subtle mechanism which renders budgets useful commitments, their interaction with minimum-savings rules (another widely-studied form of commitment), and how budgeting depends on the intensity of self-control problems. This theory matches several empirical findings on personal budgeting. JEL CLASSIFICATION: D23, D82, D86, D91, E62, G31",
"title": ""
},
{
"docid": "3dcf6c5e59d4472c0b0e25c96b992f3e",
"text": "This paper presents the design of Ultra Wideband (UWB) microstrip antenna consisting of a circular monopole patch antenna with 3 block stepped (wing). The antenna design is an improvement from previous research and it is simulated using CST Microwave Studio software. This antenna was designed on Rogers 5880 printed circuit board (PCB) with overall size of 26 × 40 × 0.787 mm3 and dielectric substrate, εr = 2.2. The performance of the designed antenna was analyzed in term of bandwidth, gain, return loss, radiation pattern, and verified through actual measurement of the fabricated antenna. 10 dB return loss bandwidth from 3.37 GHz to 10.44 GHz based on 50 ohm characteristic impedance for the transmission line model was obtained.",
"title": ""
},
{
"docid": "01e35a79d042275d7935e7531b4c7fde",
"text": "Biometrics technologies are gaining popularity today since they provide more reliable and efficient means of authentication and verification. Keystroke Dynamics is one of the famous biometric technologies, which will try to identify the authenticity of a user when the user is working via a keyboard. The authentication process is done by observing the change in the typing pattern of the user. A comprehensive survey of the existing keystroke dynamics methods, metrics, different approaches are given in this study. This paper also discusses about the various security issues and challenges faced by keystroke dynamics. KeywordsBiometris; Keystroke Dynamics; computer Security; Information Security; User Authentication.",
"title": ""
},
{
"docid": "d01b8d59f5e710bcf75978d1f7dcdfa3",
"text": "Over the last few decades, the use of electroencephalography (EEG) signals for motor imagery based brain-computer interface (MI-BCI) has gained widespread attention. Deep learning have also gained widespread attention and used in various application such as natural language processing, computer vision and speech processing. However, deep learning has been rarely used for MI EEG signal classification. In this paper, we present a deep learning approach for classification of MI-BCI that uses adaptive method to determine the threshold. The widely used common spatial pattern (CSP) method is used to extract the variance based CSP features, which is then fed to the deep neural network for classification. Use of deep neural network (DNN) has been extensively explored for MI-BCI classification and the best framework obtained is presented. The effectiveness of the proposed framework has been evaluated using dataset IVa of the BCI Competition III. It is found that the proposed framework outperforms all other competing methods in terms of reducing the maximum error. The framework can be used for developing BCI systems using wearable devices as it is computationally less expensive and more reliable compared to the best competing methods.",
"title": ""
},
{
"docid": "a3cb6d84445bea04c5da888d34928c94",
"text": "In this paper, we address referring expression comprehension: localizing an image region described by a natural language expression. While most recent work treats expressions as a single unit, we propose to decompose them into three modular components related to subject appearance, location, and relationship to other objects. This allows us to flexibly adapt to expressions containing different types of information in an end-to-end framework. In our model, which we call the Modular Attention Network (MAttNet), two types of attention are utilized: language-based attention that learns the module weights as well as the word/phrase attention that each module should focus on; and visual attention that allows the subject and relationship modules to focus on relevant image components. Module weights combine scores from all three modules dynamically to output an overall score. Experiments show that MAttNet outperforms previous state-of-the-art methods by a large margin on both bounding-box-level and pixel-level comprehension tasks. Demo1 and code2 are provided.",
"title": ""
},
{
"docid": "256b56bf5eb3a99de4b889d8e1eb735b",
"text": "This paper presents the design of a single layer, compact, tapered balun with a >20:1 bandwidth and less than λ/17 in length at the lowest frequency of operation. The balun operates from 0.7GHz to over 15GHz. It can provide both impedance transformation as well as a balanced feed for tightly coupled arrays. Its performance is compared with that of a full-length balun operating over the same frequency band. There is a high degree of agreement between the two baluns.",
"title": ""
},
{
"docid": "4060ad2f4d733aa5687fe585bbe8c550",
"text": "The vector space model is the usual representation of texts database for computational treatment. However, in such representation synonyms and/or related terms are treated as independent. Furthermore, there are some terms that do not add any information at all to the set of text documents, on the contrary they even might harm the performance of the information retrieval techniques. In an attempt to reduce this problem, some techniques have been proposed in the literature. In this work we present a method to tackle this problem. In order to validate our approach, we carried out a serie of experiments on four databases and we compare the achieved results with other well known techniques. The evaluation results is such that our method obtained in all cases a better or equal performance compared to the other literature techniques.",
"title": ""
},
{
"docid": "4b3c69e446dcf1d237db63eb4f106dd7",
"text": "Creating linguistic annotations requires more than just a reliable annotation scheme. Annotation can be a complex endeavour potentially involving many people, stages, and tools. This chapter outlines the process of creating end-toend linguistic annotations, identifying specific tasks that researchers often perform. Because tool support is so central to achieving high quality, reusable annotations with low cost, the focus is on identifying capabilities that are necessary or useful for annotation tools, as well as common problems these tools present that reduce their utility. Although examples of specific tools are provided in many cases, this chapter concentrates more on abstract capabilities and problems because new tools appear continuously, while old tools disappear into disuse or disrepair. The two core capabilities tools must have are support for the chosen annotation scheme and the ability to work on the language under study. Additional capabilities are organized into three categories: those that are widely provided; those that often useful but found in only a few tools; and those that have as yet little or no available tool support. 1 Annotation: More than just a scheme Creating manually annotated linguistic corpora requires more than just a reliable annotation scheme. A reliable scheme, of course, is a central ingredient to successful annotation; but even the most carefully designed scheme will not answer a number of practical questions about how to actually create the annotations, progressing from raw linguistic data to annotated linguistic artifacts that can be used to answer interesting questions or do interesting things. Annotation, especially high-quality annotation of large language datasets, can be a complex process potentially involving many people, stages, and tools, and the scheme only specifies the conceptual content of the annotation. By way of example, the following questions are relevant to a text annotation project and are not answered by a scheme: How should linguistic artifacts be prepared? Will the originals be annotated directly, or will their textual content be extracted into separate files for annotation? In the latter case, what layout or formatting will be kept (lines, paragraphs page breaks, section headings, highlighted text)? What file format will be used? How will typographical errors be handled? Will typos be ignored, changed in the original, changed in extracted content, or encoded as an additional annotation? Who will be allowed to make corrections: the annotators themselves, adjudicators, or perhaps only the project manager? How will annotators be provided artifacts to annotate? How will the order of annotation be specified (if at all), and how will this order be enforced? How will the project manager ensure that each document is annotated the appropriate number of times (e.g., by two different people for double annotation). What inter-annotator agreement measures (IAAs) will be measured, and when? Will IAAs be measured continuously, on batches, or on other subsets of the corpus? How will their measurement at the right time be enforced? Will IAAs be used to track annotator training? If so, what level of IAA will be considered to indicate that training has succeeded? These questions are only a small selection of those that arise during the practical process of conducting annotation. 
The first goal of this chapter is to give an overview of the process of annotation from start to finish, pointing out these sorts of questions and subtasks for each stage. We will start with a known conceptual framework for the annotation process, the MATTER framework (Pustejovsky & Stubbs, 2013) and expand upon it. Our expanded framework is not guaranteed to be complete, but it will give a reader a very strong flavor of the kind of issues that arise so that they can start to anticipate them in the design of their own annotation project. The second goal is to explore the capabilities required by annotation tools. Tool support is central to effecting high quality, reusable annotations with low cost. The focus will be on identifying capabilities that are necessary or useful for annotation tools. Again, this list will not be exhaustive but it will be fairly representative, as the majority of it was generated by surveying a number of annotation experts about their opinions of available tools. Also listed are common problems that reduce tool utility (gathered during the same survey). Although specific examples of tools will be provided in many cases, the focus will be on more abstract capabilities and problems because new tools appear all the time while old tools disappear into disuse or disrepair. Before beginning, it is well to first introduce a few terms. By linguistic artifact, or just artifact, we mean the object to which annotations are being applied. These could be newspaper articles, web pages, novels, poems, TV shows, radio broadcasts, images, movies, or something else that involves language being captured in a semipermanent form. When we use the term document we will generally mean textual linguistic artifacts such as books, articles, transcripts, and the like. By annotation scheme, or just scheme, we follow the terminology as given in the early chapters of this volume, where a scheme comprises a linguistic theory, a derived model of a phenomenon of interest, a specification that defines the actual physical format of the annotation, and the guidelines that explain to an annotator how to apply the specification to linguistic artifacts. (citation to Chapter III by Ide et al.) By computing platform, or just platform, we mean any computational system on which an annotation tool can be run; classically this has meant personal computers, either desktops or laptops, but recently the range of potential computing platforms has expanded dramatically, to include on the one hand things like web browsers and mobile devices, and, on the other, internet-connected annotation servers and service oriented architectures. Choice of computing platform is driven by many things, including the identity of the annotators and their level of sophistication. We will speak of the annotation process or just process within an annotation project. By process, we mean any procedure or activity, at any level of granularity, involved in the production of annotation. This potentially encompasses everything from generating the initial idea, applying the annotation to the artifacts, to archiving the annotated documents for distribution. Although traditionally not considered part of annotation per se, we might also include here writing academic papers about the results of the annotation, as these activities also sometimes require annotation-focused tool support. We will also speak of annotation tools.
By tool we mean any piece of computer software that runs on a computing platform that can be used to implement or carry out a process in the annotation project. Classically conceived annotation tools include software such as the Alembic workbench, Callisto, or brat (Day et al., 1997; Day, McHenry, Kozierok, & Riek, 2004; Stenetorp et al., 2012), but tools can also include software like Microsoft Word or Excel, Apache Tomcat (to run web servers), Subversion or Git (for document revision control), or mobile applications (apps). Tools usually have user interfaces (UIs), but they are not always graphical, fully functional, or even all that helpful. There is a useful distinction between a tool and a component (also called an NLP component, or an NLP algorithm; in UIMA (Apache, 2014) called an annotator), which are pieces of software that are intended to be integrated as libraries into software and can often be strung together in annotation pipelines for applying automatic annotations to linguistic artifacts. Software like tokenizers, part of speech taggers, parsers (Manning et al., 2014), multiword expression detectors (Kulkarni & Finlayson, 2011) or coreference resolvers (Pradhan et al., 2011) are all components. Sometimes the distinction between a tool and a component is not especially clear cut, but it is a useful one nonetheless. The main reason a chapter like this one is needed is that there is no one tool that does everything. There are multiple stages and tasks within every annotation project, typically requiring some degree of customization, and no tool does it all. That is why one needs multiple tools in annotation, and why a detailed consideration of the tool capabilities and problems is needed. 2 Overview of the Annotation Process The first step in an annotation project is, naturally, defining the scheme, but many other tasks must be executed to go from an annotation scheme to an actual set of cleanly annotated files useful for other tasks. 2.1 MATTER & MAMA A good starting place for organizing our conception of the various stages of the process of annotation is the MATTER cycle, proposed by Pustejovsky & Stubbs (2013). This framework outlines six major stages to annotation, corresponding to each letter in the word, defined as follows: M = Model: In this stage, the first of the process, the project leaders set up the conceptual framework for the project. Subtasks may include: Search background work to understand existing theories of the phenomena Create or adopt an abstract model of the phenomenon Define an annotation scheme based on the model Search libraries, the web, and online repositories for potential linguistic artifacts Create corpus artifacts if appropriate artifacts do not exist Measure overall characteristics of artifacts to ground estimates of representativeness and balance Collect the artifacts on which the annotation will be performed Track artifact licenses Measure various statistics of the collected corpus Choose an annotation specification language Build an annotation specification that disti",
"title": ""
},
{
"docid": "50afcbdf0482c75ae41afd8525274933",
"text": "Adhesive devices of digital pads of gecko lizards are formed by microscopic hair-like structures termed setae that derive from the interaction between the oberhautchen and the clear layer of the epidermis. The two layers form the shedding complex and permit skin shedding in lizards. Setae consist of a resistant but flexible corneous material largely made of keratin-associated beta-proteins (KA beta Ps, formerly called beta-keratins) of 8-22 kDa and of alpha-keratins of 45-60 kDa. In Gekko gecko, 19 sauropsid keratin-associated beta-proteins (sKAbetaPs) and at least two larger alpha-keratins are expressed in the setae. Some sKA beta Ps are rich in cysteine (111-114 amino acids), while others are rich in glycine (169-219 amino acids). In the entire genome of Anolis carolinensis 40 Ka beta Ps are present and participate in the formation of all types of scales, pad lamellae and claws. Nineteen sKA beta Ps comprise cysteine-rich 9.2-14.4 kDa proteins of 89-142 amino acids, and 19 are glycine-rich 16.5-22.0 kDa proteins containing 162-225 amino acids, and only two types of sKA beta Ps are cysteine- and glycine-poor proteins. Genes coding for these proteins contain an intron in the 5'-non-coding region, a typical characteristic of most sauropsid Ka beta Ps. Gecko KA beta Ps show a central amino acid region of high homology and a beta-pleated conformation that is likely responsible for the polymerization of Ka beta Ps into long and resistant filaments. The association of numerous filaments, probably over a framework of alpha-keratins, permits the formation of bundles of corneous material for the elongation of setae, which may be over 100 microm long. The terminals branching off each seta may derive from the organization of the cytoskeleton and from the mechanical separation of keratin bundles located at the terminal apex of setae.",
"title": ""
},
{
"docid": "4d389e4f6e33d9f5498e3071bf116a49",
"text": "This paper reviews the origins and definitions of social capital in the writings of Bourdieu, Loury, and Coleman, among other authors. It distinguishes four sources of social capital and examines their dynamics. Applications of the concept in the sociological literature emphasize its role in social control, in family support, and in benefits mediated by extrafamilial networks. I provide examples of each of these positive functions. Negative consequences of the same processes also deserve attention for a balanced picture of the forces at play. I review four such consequences and illustrate them with relevant examples. Recent writings on social capital have extended the concept from an individual asset to a feature of communities and even nations. The final sections describe this conceptual stretch and examine its limitations. I argue that, as shorthand for the positive consequences of sociability, social capital has a definite place in sociological theory. However, excessive extensions of the concept may jeopardize its heuristic value. Alejandro Portes: Biographical Sketch Alejandro Portes is professor of sociology at Princeton University and faculty associate of the Woodrow Wilson School of Public Affairs. He formerly taught at Johns Hopkins where he held the John Dewey Chair in Arts and Sciences, Duke University, and the University of Texas-Austin. In 1997 he held the Emilio Bacardi distinguished professorship at the University of Miami. In the same year he was elected president of the American Sociological Association. Born in Havana, Cuba, he came to the United States in 1960. He was educated at the University of Havana, Catholic University of Argentina, and Creighton University. He received his MA and PhD from the University of Wisconsin-Madison. 0360-0572/98/0815-0001$08.00 1 A nn u. R ev . S oc io l. 19 98 .2 4: 124 . D ow nl oa de d fr om w w w .a nn ua lr ev ie w s. or g A cc es s pr ov id ed b y St an fo rd U ni ve rs ity M ai n C am pu s R ob er t C ro w n L aw L ib ra ry o n 03 /1 0/ 17 . F or p er so na l u se o nl y. Portes is the author of some 200 articles and chapters on national development, international migration, Latin American and Caribbean urbanization, and economic sociology. His most recent books include City on the Edge, the Transformation of Miami (winner of the Robert Park award for best book in urban sociology and of the Anthony Leeds award for best book in urban anthropology in 1995); The New Second Generation (Russell Sage Foundation 1996); Caribbean Cities (Johns Hopkins University Press); and Immigrant America, a Portrait. The latter book was designated as a centennial publication by the University of California Press. It was originally published in 1990; the second edition, updated and containing new chapters on American immigration policy and the new second generation, was published in 1996.",
"title": ""
},
{
"docid": "8f54f2c6e9736a63ea4a99f89090e0a2",
"text": "This article demonstrates how documents prepared in hypertext or word processor format can be saved in portable document format (PDF). These files are self-contained documents that that have the same appearance on screen and in print, regardless of what kind of computer or printer are used, and regardless of what software package was originally used to for their creation. PDF files are compressed documents, invariably smaller than the original files, hence allowing rapid dissemination and download.",
"title": ""
},
{
"docid": "26de3db8dd19ed38c6c85368c4c59573",
"text": "This article reviews the recent literature on physical findings related to the hymen in pubertal and prepubertal girls with and without a history of sexual abuse. Characteristics of normal hymenal anatomy, acute traumatic findings, and characteristics of healed trauma are discussed, particularly with regard to changes in the interpretation of these findings that have occurred over time.",
"title": ""
}
] |
scidocsrr
|
ca8fa8d74354af18b5e72679d6565533
|
IntentNet: Learning to Predict Intention from Raw Sensor Data
|
[
{
"docid": "76ad212ccd103c93d45c1ffa0e208b45",
"text": "The highest accuracy object detectors to date are based on a two-stage approach popularized by R-CNN, where a classifier is applied to a sparse set of candidate object locations. In contrast, one-stage detectors that are applied over a regular, dense sampling of possible object locations have the potential to be faster and simpler, but have trailed the accuracy of two-stage detectors thus far. In this paper, we investigate why this is the case. We discover that the extreme foreground-background class imbalance encountered during training of dense detectors is the central cause. We propose to address this class imbalance by reshaping the standard cross entropy loss such that it down-weights the loss assigned to well-classified examples. Our novel Focal Loss focuses training on a sparse set of hard examples and prevents the vast number of easy negatives from overwhelming the detector during training. To evaluate the effectiveness of our loss, we design and train a simple dense detector we call RetinaNet. Our results show that when trained with the focal loss, RetinaNet is able to match the speed of previous one-stage detectors while surpassing the accuracy of all existing state-of-the-art two-stage detectors.",
"title": ""
},
{
"docid": "2da84ca7d7db508a6f9a443f2dbae7c1",
"text": "This paper proposes a computationally efficient approach to detecting objects natively in 3D point clouds using convolutional neural networks (CNNs). In particular, this is achieved by leveraging a feature-centric voting scheme to implement novel convolutional layers which explicitly exploit the sparsity encountered in the input. To this end, we examine the trade-off between accuracy and speed for different architectures and additionally propose to use an L1 penalty on the filter activations to further encourage sparsity in the intermediate representations. To the best of our knowledge, this is the first work to propose sparse convolutional layers and L1 regularisation for efficient large-scale processing of 3D data. We demonstrate the efficacy of our approach on the KITTI object detection benchmark and show that VoteSDeep models with as few as three layers outperform the previous state of the art in both laser and laser-vision based approaches by margins of up to 40% while remaining highly competitive in terms of processing time.",
"title": ""
}
] |
[
{
"docid": "70edece0ace41f32630a3b813dfba58e",
"text": "Although a wide range of virtual reality (VR) systems are in use, there are few guidelines to help system and application developers select the components most appropriate for the domain problem they are investigating. Using the results of an empirical study, we developed such guidelines for the choice of display environment for four specific, but common, volume visualization problems: identification and judgment of the size, shape, density, and connectivity of objects present in a volume. These tasks are derived from questions being asked by collaborators studying Cystic Fibrosis (CF). We compared user performance in three different stereo VR systems: (1) head-mounted display (HMD); (2) fish tank VR (fish tank); and (3) fish tank VR augmented with a haptic device (haptic). HMD participants were placed \"inside\" the volume and walked within it to explore its structure. Fish tank and haptic participants saw the entire volume on-screen and rotated it to view it from different perspectives. Response time and accuracy were used to measure performance. Results showed that the fish tank and haptic groups were significantly more accurate at judging the shape, density, and connectivity of objects and completed the tasks significantly faster than the HMD group. Although the fish tank group was itself significantly faster than the haptic group, there were no statistical differences in accuracy between the two. Participants classified the HMD system as an \"inside-out\" display (looking outwards from inside the volume), and the fish tank and haptic systems as \"outside-in\" displays (looking inwards from outside the volume). Including haptics added an inside-out capability to the fish tank system through the use of touch. We recommend an outside-in system because it offers both overview and context, two visual properties that are important for the volume visualization tasks we studied. In addition, based on the haptic group's opinion (80% positive) that haptic feedback aided comprehension, we recommend supplementing the outside-in visual display with inside-out haptics when possible.",
"title": ""
},
{
"docid": "a54bc0f529d047aa273d834c53c15bd3",
"text": "This paper presents an optimized methodology to folded cascode operational transconductance amplifier (OTA) design. The design is done in different regions of operation, weak inversion, strong inversion and moderate inversion using the gm/ID methodology in order to optimize MOS transistor sizing. Using 0.35μm CMOS process, the designed folded cascode OTA achieves a DC gain of 77.5dB and a unity-gain frequency of 430MHz in strong inversion mode. In moderate inversion mode, it has a 92dB DC gain and provides a gain bandwidth product of around 69MHz. The OTA circuit has a DC gain of 75.5dB and unity-gain frequency limited to 19.14MHZ in weak inversion region. Keywords—CMOS IC design, Folded Cascode OTA, gm/ID methodology, optimization.",
"title": ""
},
{
"docid": "17c54cad1666e22db0c5dd9c81d43b8b",
"text": "With the prevalence of e-commence websites and the ease of online shopping, consumers are embracing huge amounts of various options in products. Undeniably, shopping is one of the most essential activities in our society and studying consumer’s shopping behavior is important for the industry as well as sociology and psychology. Not surprisingly, one of the most popular e-commerce categories is clothing business. There arises the needs for analysis of popular and attractive clothing features which could further boost many emerging applications, such as clothing recommendation and advertising. In this work, we design a novel system that consists of three major components: 1) exploring and organizing a large-scale clothing dataset from a online shopping website, 2) pruning and extracting images of best-selling products in clothing item data and user transaction history, and 3) utilizing a machine learning based approach to discovering clothing attributes as the representative and discriminative characteristics of popular clothing style elements. Through the experiments over a large-scale online clothing dataset, we demonstrate the effectiveness of our proposed system, and obtain useful insights on clothing consumption trends and profitable clothing features.",
"title": ""
},
{
"docid": "2c5b384a66fe8b3abef31fc605f9daf0",
"text": "Since achieving W3C recommendation status in 2004, the Web Ontology Language (OWL) has been successfully applied to many problems in computer science. Practical experience with OWL has been quite positive in general; however, it has also revealed room for improvement in several areas. We systematically analyze the identified shortcomings of OWL, such as expressivity issues, problems with its syntaxes, and deficiencies in the definition of OWL species. Furthermore, we present an overview of OWL 2—an extension to and revision of OWL that is currently being developed within the W3C OWL Working Group. Many aspects of OWL have been thoroughly reengineered in OWL 2, thus producing a robust platform for future development of the language.",
"title": ""
},
{
"docid": "5e42cdbe42b9fafb53b8bbd82ec96d5a",
"text": "Fifty years ago, the author published a paper in Operations Research with the title, “A proof for the queuing formula: L = W ” [Little, J. D. C. 1961. A proof for the queuing formula: L = W . Oper. Res. 9(3) 383–387]. Over the years, L = W has become widely known as “Little’s Law.” Basically, it is a theorem in queuing theory. It has become well known because of its theoretical and practical importance. We report key developments in both areas with the emphasis on practice. In the latter, we collect new material and search for insights on the use of Little’s Law within the fields of operations management and computer architecture.",
"title": ""
},
{
"docid": "ea9f43aaab4383369680c85a040cedcf",
"text": "Efforts toward automated detection and identification of multistep cyber attack scenarios would benefit significantly from a methodology and language for modeling such scenarios. The Correlated Attack Modeling Language (CAML) uses a modular approach, where a module represents an inference step and modules can be linked together to detect multistep scenarios. CAML is accompanied by a library of predicates, which functions as a vocabulary to describe the properties of system states and events. The concept of attack patterns is introduced to facilitate reuse of generic modules in the attack modeling process. CAML is used in a prototype implementation of a scenario recognition engine that consumes first-level security alerts in real time and produces reports that identify multistep attack scenarios discovered in the alert stream.",
"title": ""
},
{
"docid": "d950407cfcbc5457b299e05c8352107e",
"text": "Pedicle screw instrumentation in AIS has advantages of rigid fixation, improved deformity correction and a shorter fusion, but needs an exacting technique. The author has been using the K-wire method with intraoperative single PA and lateral radiographs, because it is safe, accurate and fast. Pedicle screws are inserted in every segment on the correction side (thoracic concave) and every 2–3 on the supportive side (thoracic convex). After an over-bent rod is inserted on the corrective side, the rod is rotated 90° counterclockwise. This maneuver corrects the coronal and sagittal curves. Then the vertebra is derotated by direct vertebral rotation (DVR) correcting the rotational deformity. The direction of DVR should be opposite to that of the vertebral rotation. A rigid rod has to be used to prevent the rod from straightening out during the rod derotation and DVR. The ideal classification of AIS should address all curve patterns, predicts accurate fusion extent and have good inter/intraobserver reliability. The Suk classification matches the ideal classification is simple and memorable, and has only four structural curve patterns; single thoracic, double thoracic, double major and thoracolumbar/lumbar. Each curve has two types, A and B. When using pedicle screws in thoracic AIS, curves are usually fused from upper neutral to lower neutral vertebra. Identification of the end vertebra and the neutral vertebra is important in deciding the fusion levels and the direction of DVR. In lumbar AIS, fusion is performed from upper neutral vertebra to L3 or L4 depending on its curve types. Rod derotation and DVR using pedicle screw instrumentation give true three dimensional deformity correction in the treatment of AIS. Suk classification with these methods predicts exact fusion extent and is easy to understand and remember.",
"title": ""
},
{
"docid": "8762106693491e46772c2efade5929dc",
"text": "A collection of technologies termed social computing is driving a dramatic evolution of the Web, matching the dot-com era in growth, excitement, and investment. All of these share a high degree of community formation, user level content creation, and a variety of other characteristics. We provide an overview of social computing and identify salient characteristics. We argue that social computing holds tremendous disruptive potential in the business world and can significantly impact society, and outline possible changes in organized human action that could be brought about. Social computing can also have deleterious effects associated with it, including security issues. We suggest that social computing should be a priority for researchers and business leaders and illustrate the fundamental shifts in communication, computing, collaboration, and commerce brought about by this trend.",
"title": ""
},
{
"docid": "e59b203f3b104553a84603240ea467eb",
"text": "Experimental art deployed in the Augmented Reality (AR) medium is contributing to a reconfiguration of traditional perceptions of interface, audience participation, and perceptual experience. Artists, critical engineers, and programmers, have developed AR in an experimental topology that diverges from both industrial and commercial uses of the medium. In a general technical sense, AR is considered as primarily an information overlay, a datafied window that situates virtual information in the physical world. In contradistinction, AR as experimental art practice activates critical inquiry, collective participation, and multimodal perception. As an emergent hybrid form that challenges and extends already established 'fine art' categories, augmented reality art deployed on Portable Media Devices (PMD’s) such as tablets & smartphones fundamentally eschews models found in the conventional 'art world.' It should not, however, be considered as inscribing a new 'model:' rather, this paper posits that the unique hybrids advanced by mobile augmented reality art–– also known as AR(t)–– are closely related to the notion of the 'machinic assemblage' ( Deleuze & Guattari 1987), where a deep capacity to re-assemble marks each new artevent. This paper develops a new formulation, the 'software assemblage,’ to explore some of the unique mixed reality situations that AR(t) has set in motion.",
"title": ""
},
{
"docid": "522cf7baa14071f2196263ccd061fdc2",
"text": "To understand and identify the attack surfaces of a Cyber-Physical System (CPS) is an essential step towards ensuring its security. The growing complexity of the cybernetics and the interaction of independent domains such as avionics, robotics and automotive is a major hindrance against a holistic view CPS. Furthermore, proliferation of communication networks have extended the reach of CPS from a user-centric single platform to a widely distributed network, often connecting to critical infrastructure, e.g., through smart energy initiative. In this manuscript, we reflect on this perspective and provide a review of current security trends and tools for secure CPS. We emphasize on both the design and execution flows and particularly highlight the necessity of efficient attack surface detection. We provide a detailed characterization of attacks reported on different cyber-physical systems, grouped according to their application domains, attack complexity, attack source and impact. Finally, we review the current tools, point out their inadequacies and present a roadmap of future research.",
"title": ""
},
{
"docid": "fb54ca0c25ffe37cf9bab5677f52c341",
"text": "Convolutional networks (ConvNets) have become a popular approach to computer vision. Here we consider the parallelization of ConvNet training, which is computationally costly. Our novel parallel algorithm is based on decomposition into a set of tasks, most of which are convolutions or FFTs. Theoretical analysis suggests that linear speedup with the number of processors is attainable. To attain such performance on real shared-memory machines, our algorithm computes convolutions converging on the same node of the network with temporal locality to reduce cache misses, and sums the convergent convolution outputs via an almost wait-free concurrent method to reduce time spent in critical sections. Benchmarking with multi-core CPUs shows speedup roughly equal to the number of physical cores. We also demonstrate 90x speedup on a many-core CPU (Xeon Phi Knights Corner). Our algorithm can be either faster or slower than certain GPU implementations depending on specifics of the network architecture, kernel sizes, and density and size of the output patch.",
"title": ""
},
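The entry above decomposes ConvNet training into convolution and FFT tasks. The paper's shared-memory parallel algorithm is not reproduced here; the following is only a minimal sketch, assuming SciPy is available, of the equivalence the FFT route relies on: an FFT-based 2D convolution matches direct convolution up to floating-point error and is typically much faster for large kernels. Array sizes are arbitrary placeholders.

```python
# Minimal illustration (not the paper's parallel algorithm): the two task types
# the decomposition relies on compute the same linear convolution.
import numpy as np
from scipy.signal import convolve2d, fftconvolve

rng = np.random.default_rng(0)
image = rng.standard_normal((256, 256))
kernel = rng.standard_normal((15, 15))

direct = convolve2d(image, kernel, mode="same")
viafft = fftconvolve(image, kernel, mode="same")

# Same result up to floating-point error; the FFT route wins for large kernels.
print(np.allclose(direct, viafft, atol=1e-8))
```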
{
"docid": "cdaa99f010b20906fee87d8de08e1106",
"text": "We propose a novel hierarchical clustering algorithm for data-sets in which only pairwise distances between the points are provided. The classical Hungarian method is an efficient algorithm for solving the problem of minimal-weight cycle cover. We utilize the Hungarian method as the basic building block of our clustering algorithm. The disjoint cycles, produced by the Hungarian method, are viewed as a partition of the data-set. The clustering algorithm is formed by hierarchical merging. The proposed algorithm can handle data that is arranged in non-convex sets. The number of the clusters is automatically found as part of the clustering process. We report an improved performance of our algorithm in a variety of examples and compare it to the spectral clustering algorithm.",
"title": ""
},
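As a rough illustration of the building block described above, the sketch below computes a minimal-weight cycle cover by solving an assignment problem on the pairwise distance matrix (self-loops discouraged with a large cost) and reads the initial clusters off the cycles of the resulting permutation. SciPy's linear_sum_assignment stands in for the Hungarian method, the paper's hierarchical merging step is omitted, and the function name and toy data are my own.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def cycle_cover_partition(dist):
    """Minimal-weight cycle cover of the complete graph given by `dist`.

    Solving the assignment problem on the distance matrix (with the diagonal
    made prohibitively expensive) yields a permutation; its cycles form the
    initial clusters.
    """
    cost = np.asarray(dist, dtype=float).copy()
    big = cost.max() * len(cost) + 1.0
    np.fill_diagonal(cost, big)              # discourage self-loops
    rows, cols = linear_sum_assignment(cost)
    succ = dict(zip(rows, cols))             # permutation: i -> succ[i]

    clusters, seen = [], set()
    for start in succ:
        if start in seen:
            continue
        cycle, node = [], start
        while node not in seen:
            seen.add(node)
            cycle.append(int(node))
            node = succ[node]
        clusters.append(cycle)
    return clusters

# Toy example: two well-separated groups of points on a line.
pts = np.array([0.0, 0.1, 0.2, 5.0, 5.1, 5.2])[:, None]
dist = np.abs(pts - pts.T)
print(cycle_cover_partition(dist))
```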
{
"docid": "e33dd9c497488747f93cfcc1aa6fee36",
"text": "The phrase Internet of Things (IoT) heralds a vision of the future Internet where connecting physical things, from banknotes to bicycles, through a network will let them take an active part in the Internet, exchanging information about themselves and their surroundings. This will give immediate access to information about the physical world and the objects in it leading to innovative services and increase in efficiency and productivity. This paper studies the state-of-the-art of IoT and presents the key technological drivers, potential applications, challenges and future research areas in the domain of IoT. IoT definitions from different perspective in academic and industry communities are also discussed and compared. Finally some major issues of future research in IoT are identified and discussed briefly.",
"title": ""
},
{
"docid": "d23c5fc626d0f7b1d9c6c080def550b8",
"text": "Gamification of education is a developing approach for increasing learners’ motivation and engagement by incorporating game design elements in educational environments. With the growing popularity of gamification and yet mixed success of its application in educational contexts, the current review is aiming to shed a more realistic light on the research in this field by focusing on empirical evidence rather than on potentialities, beliefs or preferences. Accordingly, it critically examines the advancement in gamifying education. The discussion is structured around the used gamification mechanisms, the gamified subjects, the type of gamified learning activities, and the study goals, with an emphasis on the reliability and validity of the reported outcomes. To improve our understanding and offer a more realistic picture of the progress of gamification in education, consistent with the presented evidence, we examine both the outcomes reported in the papers and how they have been obtained. While the gamification in education is still a growing phenomenon, the review reveals that (i) insufficient evidence exists to support the long-term benefits of gamification in educational contexts; (ii) the practice of gamifying learning has outpaced researchers’ understanding of its mechanisms and methods; (iii) the knowledge of how to gamify an activity in accordance with the specifics of the educational context is still limited. The review highlights the need for systematically designed studies and rigorously tested approaches confirming the educational benefits of gamification, if gamified learning is to become a recognized instructional approach.",
"title": ""
},
{
"docid": "04ed876237214c1366f966b80ebb7fd4",
"text": "Load Balancing is essential for efficient operations indistributed environments. As Cloud Computing is growingrapidly and clients are demanding more services and betterresults, load balancing for the Cloud has become a veryinteresting and important research area. Many algorithms weresuggested to provide efficient mechanisms and algorithms forassigning the client's requests to available Cloud nodes. Theseapproaches aim to enhance the overall performance of the Cloudand provide the user more satisfying and efficient services. Inthis paper, we investigate the different algorithms proposed toresolve the issue of load balancing and task scheduling in CloudComputing. We discuss and compare these algorithms to providean overview of the latest approaches in the field.",
"title": ""
},
{
"docid": "35e4a1519cbeaa46fe63f0f6aec8c28a",
"text": "Decision trees and Random Forest are most popular methods of machine learning techniques. C4.5 which is an extension version of ID.3 algorithm and CART are one of these most commonly use algorithms to generate decision trees. Random Forest which constructs a lot of number of trees is one of another useful technique for solving both classification and regression problems. This study compares classification performances of different decision trees (C4.5, CART) and Random Forest which was generated using 50 trees. Data came from OECD countries health expenditures for the year 2011. AUC and ROC curve graph was used for performance comparison. Experimental results show that Random Forest outperformed in classification accuracy [AUC=0.98] in comparison with CART (0.95) and C4.5 (0.90) respectively. Future studies more focus on performance comparisons of different machine learning techniques using several datasets and different hyperparameter optimization techniques.",
"title": ""
},
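A minimal sketch of the comparison protocol described above, assuming scikit-learn: a single CART-style decision tree versus a 50-tree Random Forest, scored by ROC AUC. The paper's OECD health-expenditure data is not reproduced, so a bundled dataset stands in, and scikit-learn's tree learner is CART-like rather than C4.5.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Stand-in binary classification data (the paper's OECD data is not public here).
X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "CART-style tree": DecisionTreeClassifier(random_state=0),
    "Random Forest (50 trees)": RandomForestClassifier(n_estimators=50, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    scores = model.predict_proba(X_te)[:, 1]      # probability of the positive class
    print(name, "AUC = %.3f" % roc_auc_score(y_te, scores))
```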
{
"docid": "d1662ef8103d5513268a604253de122a",
"text": "Highly-interconnected networks of nonlinear analog neurons are shown to be extremely effective in computing. The networks can rapidly provide a collectively-computed solution (a digital output) to a problem on the basis of analog input information. The problems to be solved must be formulated in terms of desired optima, often subject to constraints. The general principles involved in constructing networks to solve specific problems are discussed. Results of computer simulations of a network designed to solve a difficult but well-defined optimization problem-the Traveling-Salesman Problem-are presented and used to illustrate the computational power of the networks. Good solutions to this problem are collectively computed within an elapsed time of only a few neural time constants. The effectiveness of the computation involves both the nonlinear analog response of the neurons and the large connectivity among them. Dedicated networks of biological or microelectronic neurons could provide the computational capabilities described for a wide class of problems having combinatorial complexity. The power and speed naturally displayed by such collective networks may contribute to the effectiveness of biological information processing.",
"title": ""
},
{
"docid": "7c4f18a980cfba4cc02385fd66a0cd0c",
"text": "As an indispensable component, Batch Normalization (BN) has successfully improved the training of deep neural networks (DNNs) with mini-batches, by normalizing the distribution of the internal representation for each hidden layer. However, the effectiveness of BN would diminish with scenario of micro-batch (e.g. less than 10 samples in a mini-batch), since the estimated statistics in a mini-batch are not reliable with insufficient samples. In this paper, we present a novel normalization method, called Batch Kalman Normalization (BKN), for improving and accelerating the training of DNNs, particularly under the context of microbatches. Specifically, unlike the existing solutions treating each hidden layer as an isolated system, BKN treats all the layers in a network as a whole system, and estimates the statistics of a certain layer by considering the distributions of all its preceding layers, mimicking the merits of Kalman Filtering. BKN has two appealing properties. First, it enables more stable training and faster convergence compared to previous works. Second, training DNNs using BKN performs substantially better than those using BN and its variants, especially when very small mini-batches are presented. On the image classification benchmark of ImageNet, using BKN powered networks we improve upon the best-published model-zoo results: reaching 74.0% top-1 val accuracy for InceptionV2. More importantly, using BKN achieves the comparable accuracy with extremely smaller batch size, such as 64 times smaller on CIFAR-10/100 and 8 times smaller on ImageNet.",
"title": ""
},
{
"docid": "ca898f6e889632dc01576e36ca5b4b8b",
"text": "In recent years, deep learning has had a profound impact on machine learning and artificial intelligence. Here we investigate if quantum algorithms for deep learning lead to an advantage over existing classical deep learning algorithms. We develop two quantum machine learning algorithms that reduce the time required to train a deep Boltzmann machine and allow richer classes of models, namely multi–layer, fully connected networks, to be efficiently trained without the use of contrastive divergence or similar approximations. Our algorithms may be used to efficiently train either full or restricted Boltzmann machines. By using quantum state preparation methods, we avoid the use of contrastive divergence approximation and obtain improved maximization of the underlying objective function.",
"title": ""
}
] |
scidocsrr
|
58a12aa372ad5577c9cd5087a46c16df
|
Object Recognition and Tracking based on Object Feature Extracting
|
[
{
"docid": "3d0103c34fcc6a65ad56c85a9fe10bad",
"text": "This paper approaches the problem of finding correspondences between images in which there are large changes in viewpoint, scale and illumination. Recent work has shown that scale-space ‘interest points’ may be found with good repeatability in spite of such changes. Furthermore, the high entropy of the surrounding image regions means that local descriptors are highly discriminative for matching. For descriptors at interest points to be robustly matched between images, they must be as far as possible invariant to the imaging process. In this work we introduce a family of features which use groups of interest points to form geometrically invariant descriptors of image regions. Feature descriptors are formed by resampling the image relative to canonical frames defined by the points. In addition to robust matching, a key advantage of this approach is that each match implies a hypothesis of the local 2D (projective) transformation. This allows us to immediately reject most of the false matches using a Hough transform. We reject remaining outliers using RANSAC and the epipolar constraint. Results show that dense feature matching can be achieved in a few seconds of computation on 1GHz Pentium III machines.",
"title": ""
},
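The canonical-frame descriptors described above are not available off the shelf; the sketch below only mirrors the overall pipeline shape with OpenCV: detect local features, match descriptors with a ratio test, then reject outliers by fitting a projective transformation with RANSAC. ORB is a stand-in detector/descriptor, the thresholds are placeholders, and the image paths are hypothetical.

```python
import cv2
import numpy as np

def match_with_ransac(img1, img2):
    # Stand-in local features (ORB) instead of the paper's canonical-frame descriptors.
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # Nearest-neighbour matching with a Lowe-style ratio test.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
            if m.distance < 0.75 * n.distance]

    # Reject remaining outliers by estimating a projective transform with RANSAC.
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, ransacReprojThreshold=3.0)
    inliers = [m for m, keep in zip(good, mask.ravel()) if keep]
    return H, inliers

# Usage (paths are placeholders):
# img1 = cv2.imread("view1.png", cv2.IMREAD_GRAYSCALE)
# img2 = cv2.imread("view2.png", cv2.IMREAD_GRAYSCALE)
# H, inliers = match_with_ransac(img1, img2)
```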
{
"docid": "1ff51e3f6b73aa6fe8eee9c1fb404e4e",
"text": "The goal of this article is to review the state-of-the-art tracking methods, classify them into different categories, and identify new trends. Object tracking, in general, is a challenging problem. Difficulties in tracking objects can arise due to abrupt object motion, changing appearance patterns of both the object and the scene, nonrigid object structures, object-to-object and object-to-scene occlusions, and camera motion. Tracking is usually performed in the context of higher-level applications that require the location and/or shape of the object in every frame. Typically, assumptions are made to constrain the tracking problem in the context of a particular application. In this survey, we categorize the tracking methods on the basis of the object and motion representations used, provide detailed descriptions of representative methods in each category, and examine their pros and cons. Moreover, we discuss the important issues related to tracking including the use of appropriate image features, selection of motion models, and detection of objects.",
"title": ""
}
] |
[
{
"docid": "804cee969d47d912d8bdc40f3a3eeb32",
"text": "The problem of matching a forensic sketch to a gallery of mug shot images is addressed in this paper. Previous research in sketch matching only offered solutions to matching highly accurate sketches that were drawn while looking at the subject (viewed sketches). Forensic sketches differ from viewed sketches in that they are drawn by a police sketch artist using the description of the subject provided by an eyewitness. To identify forensic sketches, we present a framework called local feature-based discriminant analysis (LFDA). In LFDA, we individually represent both sketches and photos using SIFT feature descriptors and multiscale local binary patterns (MLBP). Multiple discriminant projections are then used on partitioned vectors of the feature-based representation for minimum distance matching. We apply this method to match a data set of 159 forensic sketches against a mug shot gallery containing 10,159 images. Compared to a leading commercial face recognition system, LFDA offers substantial improvements in matching forensic sketches to the corresponding face images. We were able to further improve the matching performance using race and gender information to reduce the target gallery size. Additional experiments demonstrate that the proposed framework leads to state-of-the-art accuracys when matching viewed sketches.",
"title": ""
},
{
"docid": "477ebb38817e57276074ea4e89da0d73",
"text": "OBJECTIVE\nThe objective of this paper is to highlight the state-of-the-art machine learning (ML) techniques in computational docking. The use of smart computational methods in the life cycle of drug design is relatively a recent development that has gained much popularity and interest over the last few years. Central to this methodology is the notion of computational docking which is the process of predicting the best pose (orientation + conformation) of a small molecule (drug candidate) when bound to a target larger receptor molecule (protein) in order to form a stable complex molecule. In computational docking, a large number of binding poses are evaluated and ranked using a scoring function. The scoring function is a mathematical predictive model that produces a score that represents the binding free energy, and hence the stability, of the resulting complex molecule. Generally, such a function should produce a set of plausible ligands ranked according to their binding stability along with their binding poses. In more practical terms, an effective scoring function should produce promising drug candidates which can then be synthesized and physically screened using high throughput screening process. Therefore, the key to computer-aided drug design is the design of an efficient highly accurate scoring function (using ML techniques).\n\n\nMETHODS\nThe methods presented in this paper are specifically based on ML techniques. Despite many traditional techniques have been proposed, the performance was generally poor. Only in the last few years started the application of the ML technology in the design of scoring functions; and the results have been very promising.\n\n\nMATERIAL\nThe ML-based techniques are based on various molecular features extracted from the abundance of protein-ligand information in the public molecular databases, e.g., protein data bank bind (PDBbind).\n\n\nRESULTS\nIn this paper, we present this paradigm shift elaborating on the main constituent elements of the ML approach to molecular docking along with the state-of-the-art research in this area. For instance, the best random forest (RF)-based scoring function on PDBbind v2007 achieves a Pearson correlation coefficient between the predicted and experimentally determined binding affinities of 0.803 while the best conventional scoring function achieves 0.644. The best RF-based ranking power ranks the ligands correctly based on their experimentally determined binding affinities with accuracy 62.5% and identifies the top binding ligand with accuracy 78.1%.\n\n\nCONCLUSIONS\nWe conclude with open questions and potential future research directions that can be pursued in smart computational docking; using molecular features of different nature (geometrical, energy terms, pharmacophore), advanced ML techniques (e.g., deep learning), combining more than one ML models.",
"title": ""
},
{
"docid": "d97e9181f01f195c0b299ce8893ddbbd",
"text": "Linear algebra is a powerful and proven tool in Web search. Techniques, such as the PageRank algorithm of Brin and Page and the HITS algorithm of Kleinberg, score Web pages based on the principal eigenvector (or singular vector) of a particular non-negative matrix that captures the hyperlink structure of the Web graph. We propose and test a new methodology that uses multilinear algebra to elicit more information from a higher-order representation of the hyperlink graph. We start by labeling the edges in our graph with the anchor text of the hyperlinks so that the associated linear algebra representation is a sparse, three-way tensor. The first two dimensions of the tensor represent the Web pages while the third dimension adds the anchor text. We then use the rank-1 factors of a multilinear PARAFAC tensor decomposition, which are akin to singular vectors of the SVD, to automatically identify topics in the collection along with the associated authoritative Web pages.",
"title": ""
},
{
"docid": "8471c34489a205dd738dadd2ecf83348",
"text": "Recent neuroimaging studies have shown the importance of the prefrontal and anterior cingulate cortices in deception. However, little is known about the role of each of these regions during deception. Using positron emission tomography (PET), we measured brain activation while participants told truths or lies about two types of real-world events: experienced and unexperienced. The imaging data revealed that activity of the dorsolateral, ventrolateral and medial prefrontal cortices was commonly associated with both types of deception (pretending to know and pretending not to know), whereas activity of the anterior cingulate cortex (ACC) was only associated with pretending not to know. Regional cerebral blood flow (rCBF) increase in the ACC was positively correlated with that in the dorsolateral prefrontal cortex only during pretending not to know. These results suggest that the lateral and medial prefrontal cortices have general roles in deception, whereas the ACC contributes specifically to pretending not to know.",
"title": ""
},
{
"docid": "cfe31ce3a6a23d9148709de6032bd90b",
"text": "I argue that Non-Photorealistic Rendering (NPR) research will play a key role in the scientific understanding of visual art and illustration. NPR can contribute to scientific understanding of two kinds of problems: how do artists create imagery, and how do observers respond to artistic imagery? I sketch out some of the open problems, how NPR can help, and what some possible theories might look like. Additionally, I discuss the thorny problem of how to evaluate NPR research and theories.",
"title": ""
},
{
"docid": "121b2c0146580661a6aba6e17b334b5c",
"text": "With the rapid development of Internet and the emergence of Web 2.0 sites, user interaction has increased considerably, widening the scope of word-of mouth communication. People can now access information, exchange ideas and opinions, join various networks and socializing groups, regardless of their position on the globe. This new means of communication, electronic word of mouth (e-WOM), makes it possible for users to be inter-connected and participate in a continuous flow of information exchange. In literature, there have been identified multiple factors that can define electronic word of mouth and it has been noted that this phenomenon can influence the purchase intention of online consumers. However, due to the complexity of e-WOM, it’s very difficult to offer a single definition of this concept. Considering the ample area that electronic word of mouth may have an impact on, this article proposes a literature review of possible factors that influence the purchase intentions. Results will provide a baseline for developing a possible model for electronic word of mouth and its role in the purchase intention. Intercultural Communication and the Future of Education THE HUB Fusion between Social Sciences, Arts, Theology, Engineering and Management",
"title": ""
},
{
"docid": "06f421d0f63b9dc08777c573840654d5",
"text": "This paper presents the implementation of a modified state observer-based adaptive dynamic inverse controller for the Black Kite micro aerial vehicle. The pitch and velocity adaptations are computed by the modified state observer in the presence of turbulence to simulate atmospheric conditions. This state observer uses the estimation error to generate the adaptations and, hence, is more robust than model reference adaptive controllers which use modeling or tracking error. In prior work, a traditional proportional-integral-derivative control law was tested in simulation for its adaptive capability in the longitudinal dynamics of the Black Kite micro aerial vehicle. This controller tracks the altitude and velocity commands during normal conditions, but fails in the presence of both parameter uncertainties and system failures. The modified state observer-based adaptations, along with the proportional-integral-derivative controller enables tracking despite these conditions. To simulate flight of the micro aerial vehicle with turbulence, a Dryden turbulence model is included. The turbulence levels used are based on the absolute load factor experienced by the aircraft. The length scale was set to 2.0 meters with a turbulence intensity of 5.0 m/s that generates a moderate turbulence. Simulation results for various flight conditions show that the modified state observer-based adaptations were able to adapt to the uncertainties and the controller tracks the commanded altitude and velocity. The summary of results for all of the simulated test cases and the response plots of various states for typical flight cases are presented.",
"title": ""
},
{
"docid": "36162ebd7d7c5418e4c78bad5bbba8ab",
"text": "In this paper we discuss the design of human-robot interaction focussing especially on social robot communication and multimodal information presentation. As a starting point we use the WikiTalk application, an open-domain conversational system which has been previously developed using a robotics simulator. We describe how it can be implemented on the Nao robot platform, enabling Nao to make informative spoken contributions on a wide range of topics during conversation. Spoken interaction is further combined with gesturing in order to support Nao’s presentation by natural multimodal capabilities, and to enhance and explore natural communication between human users and robots.",
"title": ""
},
{
"docid": "7e36f0f54e34801c9b50a97a12473328",
"text": "The amalgamation of polymer and pharmaceutical sciences led to the introduction of polymer in the design and development of drug delivery systems. Polymeric delivery systems are mainly intended to achieve controlled or sustained drug delivery. Polysaccharides fabricated into hydrophilic matrices remain popular biomaterials for controlled-release dosage forms and the most abundant naturally occurring biopolymer is cellulose; so hdroxypropylmethyl cellulose, hydroxypropyl cellulose, microcrystalline cellulose and hydroxyethyl cellulose can be used for production of time controlled delivery systems. Additionally microcrystalline cellulose, sodium carboxymethyl cellulose, hydroxypropylmethyl cellulose, hydroxyethyl cellulose as well as hydroxypropyl cellulose are used to coat tablets. Cellulose acetate phthalate and hydroxymethyl cellulose phthalate are also used for enteric coating of tablets. Targeting of drugs to the colon following oral administration has also been accomplished by using polysaccharides such as hdroxypropylmethyl cellulose and hydroxypropyl cellulose in hydrated form; also they act as binders that swell when hydrated by gastric media and delay absorption. This paper assembles the current knowledge on the structure and chemistry of cellulose, and in the development of innovative cellulose esters and ethers for pharmaceuticals.",
"title": ""
},
{
"docid": "d3b0a831715bd2f2de9d94811bdd47e7",
"text": "Aspect Term Extraction (ATE) identifies opinionated aspect terms in texts and is one of the tasks in the SemEval Aspect Based Sentiment Analysis (ABSA) contest. The small amount of available datasets for supervised ATE and the costly human annotation for aspect term labelling give rise to the need for unsupervised ATE. In this paper, we introduce an architecture that achieves top-ranking performance for supervised ATE. Moreover, it can be used efficiently as feature extractor and classifier for unsupervised ATE. Our second contribution is a method to automatically construct datasets for ATE. We train a classifier on our automatically labelled datasets and evaluate it on the human annotated SemEval ABSA test sets. Compared to a strong rule-based baseline, we obtain a dramatically higher F-score and attain precision values above 80%. Our unsupervised method beats the supervised ABSA baseline from SemEval, while preserving high precision scores.",
"title": ""
},
{
"docid": "939145a5fed6b08b78d81a6721753a19",
"text": "This study explores and compares the portrayal of women in the news reporting of crimes of sexual violence against women between two newspapers from different cultures, the Jakarta Post and the Guardian. The Jakarta Post is an English quality newspaper published in Indonesia, and the Guardian is a quality broadsheet from Great Britain. To explore the representation of women, this study accounts the portrayal of men as well since the two entities are strongly inter-related. The analytical tool used in this study is naming analysis of social actors, which is a part of critical discourse analysis. This analysis is aimed at probing the representation through the choice of lexical items in representing the main news actors. The findings of the analysis indicate that the choices of the naming categories used by both newspapers are different. The Jakarta Post mostly functionalises both the victims and the perpetrators in terms of their legal status in the criminal cases. This suggests that the broadsheet tends to view them as part of the legal processes instead of as people. The Guardian typically classifies the victims in terms of their age and gender and refers to the perpetrators with their surnames instead of as parts of the criminal cases. The Guardian’s tendency to represent both perpetrators and victims as people instead of parts of legal processes indicates that the paper is attempting to focus the reports more on the crimes themselves rather than the participants involved in the cases.",
"title": ""
},
{
"docid": "1ba1b3bb1ef0fb0b6b10b8f4dcaa6716",
"text": "Lichen sclerosus et atrophicus (LSA) is a chronic inflammatory scarring disease with a predilection for the anogenital area; however, 15%-20% of LSA cases are extragenital. The folliculocentric variant is rarely reported and less well understood. The authors report a rare case of extragenital, folliculocentric LSA in a 10-year-old girl. The patient presented to the dermatology clinic for evaluation of an asymptomatic eruption of the arms and legs, with no vaginal or vulvar involvement. Physical examination revealed the presence of numerous 2-4 mm, mostly perifollicular, hypopigmented, slightly atrophic papules and plaques. Many of the lesions had a central keratotic plug. Cutaneous histopathological examination showed features of LSA. Based on clinical and histological findings, folliculocentric extragenital LSA was diagnosed.",
"title": ""
},
{
"docid": "c467fe65c242436822fd72113b99c033",
"text": "Line Integral Convolution (LIC), introduced by Cabral and Leedom in 1993, is a powerful technique for generating striking images of vector data. Based on local ltering of an input texture along a curved stream line segment in a vector eld, it is possible to depict directional information of the vector eld at pixel resolution. The methods suggested so far can handle structured grids only. Now we present an approach that works both on two-dimensional unstructured grids and directly on triangulated surfaces in three-dimensional space. Because unstructured meshes often occur in real applications, this feature makes LIC available for a number of new applications.",
"title": ""
},
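The entry above extends LIC to unstructured grids and triangulated surfaces, which is not attempted here. As a reference point, the following is a minimal sketch of plain LIC on a regular grid: for each pixel, a noise texture is averaged along a streamline traced through the vector field. The parameters and the demo field are arbitrary choices of mine.

```python
import numpy as np

def lic_regular_grid(u, v, noise, length=15, step=0.5):
    """Very small LIC sketch on a regular grid (not the unstructured-grid method).

    For every pixel, a streamline is traced forward and backward with Euler
    steps and the noise texture is averaged along it.
    """
    h, w = noise.shape
    out = np.zeros_like(noise)
    for i in range(h):
        for j in range(w):
            acc, cnt = 0.0, 0
            for direction in (+1.0, -1.0):
                x, y = float(j), float(i)
                for _ in range(length):
                    ix, iy = int(round(x)), int(round(y))
                    if not (0 <= ix < w and 0 <= iy < h):
                        break
                    acc += noise[iy, ix]
                    cnt += 1
                    vx, vy = u[iy, ix], v[iy, ix]
                    norm = np.hypot(vx, vy)
                    if norm < 1e-12:
                        break
                    x += direction * step * vx / norm
                    y += direction * step * vy / norm
            out[i, j] = acc / max(cnt, 1)
    return out

# Circular vector field over a 64x64 grid, convolved with white noise.
yy, xx = np.mgrid[0:64, 0:64].astype(float)
u, v = -(yy - 32), (xx - 32)
image = lic_regular_grid(u, v, np.random.default_rng(0).random((64, 64)))
```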
{
"docid": "5b97d597534e65bf5d00f89d8df97767",
"text": "Research into online gaming has steadily increased over the last decade, although relatively little research has examined the relationship between online gaming addiction and personality factors. This study examined the relationship between a number of personality traits (sensation seeking, self-control, aggression, neuroticism, state anxiety, and trait anxiety) and online gaming addiction. Data were collected over a 1-month period using an opportunity sample of 123 university students at an East Midlands university in the United Kingdom. Gamers completed all the online questionnaires. Results of a multiple linear regression indicated that five traits (neuroticism, sensation seeking, trait anxiety, state anxiety, and aggression) displayed significant associations with online gaming addiction. The study suggests that certain personality traits may be important in the acquisition, development, and maintenance of online gaming addiction, although further research is needed to replicate the findings of the present study.",
"title": ""
},
{
"docid": "e7b9c3ef571770788cd557f8c4843bcf",
"text": "Different efforts have been done to address the problem of information overload on the Internet. Recommender systems aim at directing users through this information space, toward the resources that best meet their needs and interests by extracting knowledge from the previous users’ interactions. In this paper, we propose an algorithm to solve the web page recommendation problem. In our algorithm, we use distributed learning automata to learn the behavior of previous users’ and recommend pages to the current user based on learned pattern. Our experiments on real data set show that the proposed algorithm performs better than the other algorithms that we compared to and, at the same time, it is less complex than other algorithms with respect to memory usage and computational cost too.",
"title": ""
},
{
"docid": "866e7819b0389f26daab015c6ff40b69",
"text": "This study examined the effects of multiple risk, promotive, and protective factors on three achievement-related measures (i.e., grade point average, number of absences, and math achievement test scores) for African American 7th-grade students (n = 837). There were 3 main findings. First, adolescents had lower grade point averages, more absences, and lower achievement test scores as their exposure to risk factors increased. Second, different promotive and protective factors emerged as significant contributors depending on the nature of the achievement-related outcome that was being assessed. Third, protective factors were identified whose effects were magnified in the presence of multiple risks. Results were discussed in light of the developmental tasks facing adolescents and the contexts in which youth exposed to multiple risks and their families live.",
"title": ""
},
{
"docid": "495791a3cdf75dc368423fac310851df",
"text": "Though marketers have made great strides in understanding the Internet, they still understand little about what makes for a compelling consumer experience online. Recently, the flow construct has been proposed as important for understanding consumer behavior on the World Wide Web. Although widely studied over the past twenty years, quantitative modeling efforts of the flow construct have been neither systematic nor comprehensive. In large parts, these efforts have been hampered by considerable confusion regarding the exact conceptual definition of flow. Lacking precise definition, it has been difficult to measure flow empirically, let alone apply the concept in practice. Following the conceptual model of flow proposed by Hoffman and Novak (1996), we conceptualize flow as a complex multidimensional construct characterized by directed relationships among a set of unidimensional constructs, most of which have previously been incorporated in various definitions of flow. In a quantitative modeling framework, we use data collected from a large-sample Web-based consumer survey to measure this set of constructs, and fit a series of structural equation models that test Hoffman and Novak’s (1996) theory. The conceptual model is largely supported and the improved fit offered by the revised model provides additional insights into the antecedents and consequences of flow. A key insight from the paper is that the degree to which the online experience is compelling can be defined and measured. As such, our flow model may be useful both theoretically and in practice as marketers strive to decipher the secrets of commercial success in interactive online environments.",
"title": ""
},
{
"docid": "ad5b8a1bcea8265351669be4f4c49476",
"text": "Software startups are newly created companies with little operating history and oriented towards producing cutting-edge products. As their time and resources are extremely scarce, and one failed project can put them out of business, startups need effective practices to face with those unique challenges. However, only few scientific studies attempt to address characteristics of failure, especially during the earlystage. With this study we aim to raise our understanding of the failure of early-stage software startup companies. This state-of-practice investigation was performed using a literature review followed by a multiple-case study approach. The results present how inconsistency between managerial strategies and execution can lead to failure by means of a behavioral framework. Despite strategies reveal the first need to understand the problem/solution fit, actual executions prioritize the development of the product to launch on the market as quickly as possible to verify product/market fit, neglecting the necessary learning process.",
"title": ""
},
{
"docid": "245b313fa0a72707949f20c28ce7e284",
"text": "We consider the class of Iterative Shrinkage-Thresholding Algorithms (ISTA) for solving linear inverse problems arising in signal/image processing. This class of methods is attractive due to its simplicity, however, they are also known to converge quite slowly. In this paper we present a Fast Iterative Shrinkage-Thresholding Algorithm (FISTA) which preserves the computational simplicity of ISTA, but with a global rate of convergence which is proven to be significantly better, both theoretically and practically. Initial promising numerical results for wavelet-based image deblurring demonstrate the capabilities of FISTA.",
"title": ""
},
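A minimal sketch of FISTA for the l1-regularized least-squares problem, the setting behind the wavelet-based deblurring experiments mentioned above. It shows the two ingredients the abstract refers to: the ISTA shrinkage step and the extra momentum update that yields the faster convergence rate. The step size uses the exact Lipschitz constant, and the synthetic data is a placeholder.

```python
import numpy as np

def soft_threshold(x, tau):
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def fista(A, b, lam, n_iter=200):
    """FISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1 (a minimal sketch)."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)
        x_new = soft_threshold(y - grad / L, lam / L)   # the ISTA shrinkage step
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum step: the "fast" part
        x, t = x_new, t_new
    return x

# Small synthetic sparse-recovery example.
rng = np.random.default_rng(0)
A = rng.standard_normal((80, 200))
x_true = np.zeros(200)
x_true[:5] = 3.0
b = A @ x_true + 0.01 * rng.standard_normal(80)
x_hat = fista(A, b, lam=0.1)
print("recovered nonzeros:", np.count_nonzero(np.abs(x_hat) > 1e-3))
```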
{
"docid": "6f3a85fb7b35d2ec59732cadb39f7b81",
"text": "INTRODUCTION\nAlthough a considerable amount of research has addressed psychopathological and personality correlates of creativity, the relationship between these characteristics and the phenomenology of creativity has been neglected. Relating these characteristics to the phenomenology of creativity may assist in clarifying the precise nature of the relationship between psychopathology and creativity. The current article reports on an empirical study of the relationship between the phenomenology of the creative process and psychopathological and personality characteristics in a sample of artists.\n\n\nMETHOD\nA total of 100 artists (43 males, 57 females, mean age = 34.69 years) from a range of disciplines completed the Experience of Creativity Questionnaire and measures of \"positive\" schizotypy, affective disturbance, mental boundaries, and normal personality.\n\n\nRESULTS\nThe sample of artists was found to be elevated on \"positive\" schizotypy, unipolar affective disturbance, thin boundaries, and the personality dimensions of Openness to Experience and Neuroticism, compared with norm data. Schizotypy was found to be the strongest predictor of a range of creative experience scales (Distinct Experience, Anxiety, Absorption, Power/Pleasure), suggesting a strong overlap of schizotypal and creative experience.\n\n\nDISCUSSION\nThese findings indicate that \"positive\" schizotypy is associated with central features of \"flow\"-type experience, including distinct shift in phenomenological experience, deep absorption, focus on present experience, and sense of pleasure. The neurologically based construct of latent inhibition may be a mechanism that facilitates entry into flow-type states for schizotypal individuals. This may occur by reduced latent inhibition providing a \"fresh\" awareness and therefore a greater absorption in present experience, thus leading to flow-type states.",
"title": ""
}
] |
scidocsrr
|
47547553b4abbac9675503e48ae8c0bd
|
Understanding Plagiarism Linguistic Patterns, Textual Features, and Detection Methods
|
[
{
"docid": "fe6fa144846269c7b2c9230ca9d8217b",
"text": "The paper is dedicated to plagiarism problem. The ways how to reduce plagiarism: both: plagiarism prevention and plagiarism detection are discussed. Widely used plagiarism detection methods are described. The most known plagiarism detection tools are analysed.",
"title": ""
}
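The survey above does not commit to a single algorithm; as a concrete example of one widely used external plagiarism-detection family it covers, the sketch below fingerprints documents by word n-grams and scores a suspicious/source pair by containment overlap. The function names, the n-gram order and the toy strings are my own.

```python
import re

def ngrams(text, n=3):
    """Set of word n-grams used as a simple document fingerprint."""
    words = re.findall(r"[a-z0-9]+", text.lower())
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def containment(suspicious, source, n=3):
    """Share of the suspicious document's n-grams that also occur in the source."""
    s, t = ngrams(suspicious, n), ngrams(source, n)
    return len(s & t) / len(s) if s else 0.0

src = "Plagiarism detection compares documents by their overlapping word sequences."
sus = "Detection of plagiarism compares documents by their overlapping word sequences and phrases."
print(round(containment(sus, src), 2))   # high values flag likely reuse
```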
] |
[
{
"docid": "c63d32013627d0bcea22e1ad62419e62",
"text": "According to its proponents, open source style software development has the capacity to compete successfully, and perhaps in many cases displace, traditional commercial development methods. In order to begin investigating such claims, we examine the development process of a major open source application, the Apache web server. By using email archives of source code change history and problem reports we quantify aspects of developer participation, core team size, code ownership, productivity, defect density, and problem resolution interval for this OSS project. This analysis reveals a unique process, which performs well on important measures. We conclude that hybrid forms of development that borrow the most effective techniques from both the OSS and commercial worlds may lead to high performance software processes.",
"title": ""
},
{
"docid": "091c57447d5a3c97d3ff1afb57ebb4e3",
"text": "We present ORB-SLAM2, a complete simultaneous localization and mapping (SLAM) system for monocular, stereo and RGB-D cameras, including map reuse, loop closing, and relocalization capabilities. The system works in real time on standard central processing units in a wide variety of environments from small hand-held indoors sequences, to drones flying in industrial environments and cars driving around a city. Our back-end, based on bundle adjustment with monocular and stereo observations, allows for accurate trajectory estimation with metric scale. Our system includes a lightweight localization mode that leverages visual odometry tracks for unmapped regions and matches with map points that allow for zero-drift localization. The evaluation on 29 popular public sequences shows that our method achieves state-of-the-art accuracy, being in most cases the most accurate SLAM solution. We publish the source code, not only for the benefit of the SLAM community, but with the aim of being an out-of-the-box SLAM solution for researchers in other fields.",
"title": ""
},
{
"docid": "e0c71e449f4c155a993ae04ece4bc822",
"text": "This paper shows how one can directly apply natural language processing (NLP) methods to classification problems in cheminformatics. Connection between these seemingly separate fields is shown by considering standard textual representation of compound, SMILES. The problem of activity prediction against a target protein is considered, which is a crucial part of computer aided drug design process. Conducted experiments show that this way one can not only outrank state of the art results of hand crafted representations but also gets direct structural insights into the way decisions are made.",
"title": ""
},
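In the spirit of the entry above, treating SMILES strings as text, the sketch below applies a standard NLP pipeline (character n-gram TF-IDF plus a linear classifier) to an activity-classification toy problem. The paper's own models are not reproduced here; the SMILES strings and activity labels are made-up placeholders.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy placeholder data: SMILES strings with made-up activity labels.
smiles = ["CCO", "CC(=O)O", "c1ccccc1", "CCN(CC)CC", "C1CCCCC1", "CC(C)O"]
active = [0, 1, 0, 1, 0, 1]

# Treat each SMILES string as text: character n-grams plus a linear classifier.
model = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(1, 3), lowercase=False),
    LogisticRegression(max_iter=1000),
)
model.fit(smiles, active)
print(model.predict(["CCOC(=O)C"]))   # predicted activity for an unseen molecule
```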
{
"docid": "345bd0959cf210e4afd47e9bf6fad76d",
"text": "Smartphone applications are getting more multi-farious and demanding of increased energy and computing resources. Mobile Cloud Computing (MCC) made a novel platform which allows personal Smartphones to execute heavy computing tasks with the assistance of powerful cloudlet servers attached to numerous wireless access points (APs). Furthermore, due to users' mobility in anywhere, ensuring the continuous connectivity of mobile devices in given wireless network access point is quite difficult because the signal strength becomes sporadic at that time. In this paper, we develop a QoS and mobility aware optimal resource allocation architecture, namely Q-MAC, for remote code execution in MCC that offers higher efficiency in timeliness and reliability domains. By carrying continuous track of user, our proposed architecture performs the offloading process. Our test-bed implementation results show that the Q-MAC outperforms the state-of-the-art methods in terms of success percentage, execution time and workload distribution.",
"title": ""
},
{
"docid": "e8e796774aa6e16ff022ab155237f402",
"text": "Mobile payment is the killer application in mobile commerce. We classify the payment methods according to several standards, analyze and point out the merits and drawbacks of each method. To enable future applications and technologies handle mobile payment, we provide a general layered framework and a new process for mobile payment. The framework is composed of load-bearing layer, network interface and core application platform layer, business layer, and decision-making layer. And it can be extended and improved by the developers. Then a pre-pay and account-based payment process is described. Our method has the advantages of low cost and technical requirement, high scalability and security.",
"title": ""
},
{
"docid": "73872cb92a522a222a3e8ee28a21e263",
"text": "All the power of computational techniques for data processing and analysis is worthless without human analysts choosing appropriate methods depending on data characteristics, setting parameters and controlling the work of the methods, interpreting results obtained, understanding what to do next, reasoning, and drawing conclusions. To enable effective work of human analysts, relevant information must be presented to them in an adequate way. Since visual representation of information greatly promotes man’s perception and cognition, visual displays of data and results of computational processing play a very important role in analysis. However, a simple combination of visualization with computational analysis is not sufficient. The challenge is to build analytical tools and environments where the power of computational methods is synergistically combined with man’s background knowledge, flexible thinking, imagination, and capacity for insight. This is the main goal of the emerging multidisciplinary research field of Visual Analytics (Thomas and Cook [45]), which is defined as the science of analytical reasoning facilitated by interactive visual interfaces. Analysis of movement data is an appropriate target for a synergy of diverse technologies, including visualization, computations, database queries, data transformations, and other computer-based operations. In this chapter, we try to define what combination of visual and computational techniques can support the analysis of massive movement data and how these techniques should interact. Before that, we shall briefly overview the existing computer-based tools and techniques for visual analysis of movement data.",
"title": ""
},
{
"docid": "d352913b60263d12072a9b79bfe36d18",
"text": "Jauhar et al. (2015) recently proposed to learn sense-specific word representations by “retrofitting” standard distributional word representations to an existing ontology. We observe that this approach does not require an ontology, and can be generalized to any graph defining word senses and relations between them. We create such a graph using translations learned from parallel corpora. On a set of lexical semantic tasks, representations learned using parallel text perform roughly as well as those derived from WordNet, and combining the two representation types significantly improves performance.",
"title": ""
},
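A minimal sketch of the retrofitting update that the entry above generalises to arbitrary sense graphs: each vector is repeatedly pulled toward its original embedding and toward the current vectors of its graph neighbours. Uniform edge weights are assumed, and the two-word graph stands in for one built from, e.g., shared translations.

```python
import numpy as np

def retrofit(embeddings, neighbours, n_iter=10, alpha=1.0):
    """Retrofit vectors to a graph (uniform edge weights assumed).

    Each vector is repeatedly replaced by a weighted average of its original
    embedding and the current vectors of its neighbours in the graph.
    """
    new = {w: v.copy() for w, v in embeddings.items()}
    for _ in range(n_iter):
        for word, nbrs in neighbours.items():
            nbrs = [n for n in nbrs if n in new]
            if not nbrs:
                continue
            total = alpha * embeddings[word] + sum(new[n] for n in nbrs)
            new[word] = total / (alpha + len(nbrs))
    return new

# Toy example: a two-node graph built, for instance, from shared translations.
emb = {"car": np.array([1.0, 0.0]), "automobile": np.array([0.0, 1.0])}
graph = {"car": ["automobile"], "automobile": ["car"]}
print(retrofit(emb, graph))
```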
{
"docid": "543a4aacf3d0f3c33071b0543b699d3c",
"text": "This paper describes a buffer sharing technique that strikes a balance between the use of disk bandwidth and memory in order to maximize the performance of a video-on-demand server. We make the key observation that the configuration parameters of the system should be independent of the physical characteristics of the data (e.g., popularity of a clip). Instead, the configuration parameters are fixed and our strategy adjusts itself dynamically at run-time to support a pattern of access to the video clips.",
"title": ""
},
{
"docid": "72e9f82070605ca5f0467f29ad9ca780",
"text": "Social media are pervaded by unsubstantiated or untruthful rumors, that contribute to the alarming phenomenon of misinformation. The widespread presence of a heterogeneous mass of information sources may affect the mechanisms behind the formation of public opinion. Such a scenario is a florid environment for digital wildfires when combined with functional illiteracy, information overload, and confirmation bias. In this essay, we focus on a collection of works aiming at providing quantitative evidence about the cognitive determinants behind misinformation and rumor spreading. We account for users’ behavior with respect to two distinct narratives: a) conspiracy and b) scientific information sources. In particular, we analyze Facebook data on a time span of five years in both the Italian and the US context, and measure users’ response to i) information consistent with one’s narrative, ii) troll contents, and iii) dissenting information e.g., debunking attempts. Our findings suggest that users tend to a) join polarized communities sharing a common narrative (echo chambers), b) acquire information confirming their beliefs (confirmation bias) even if containing false claims, and c) ignore dissenting information.",
"title": ""
},
{
"docid": "7c2960e9fd059e57b5a0172e1d458250",
"text": "The main goal of this research is to discover the structure of home appliances usage patterns, hence providing more intelligence in smart metering systems by taking into account the usage of selected home appliances and the time of their usage. In particular, we present and apply a set of unsupervised machine learning techniques to reveal specific usage patterns observed at an individual household. The work delivers the solutions applicable in smart metering systems that might: (1) contribute to higher energy awareness; (2) support accurate usage forecasting; and (3) provide the input for demand response systems in homes with timely energy saving recommendations for users. The results provided in this paper show that determining household characteristics from smart meter data is feasible and allows for quickly grasping general trends in data.",
"title": ""
},
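The entry above does not name its algorithms, so the following is only one plausible illustration: clustering synthetic daily load profiles with k-means to surface "morning-peak" versus "evening-peak" usage patterns. The data generator, cluster count and feature layout are my own assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
hours = np.arange(24)

# Synthetic daily load profiles (one row per household-day): two usage patterns.
morning = 1.0 + np.exp(-0.5 * ((hours - 7) / 2.0) ** 2)[None, :] + 0.1 * rng.standard_normal((50, 24))
evening = 1.0 + np.exp(-0.5 * ((hours - 19) / 2.0) ** 2)[None, :] + 0.1 * rng.standard_normal((50, 24))
profiles = np.vstack([morning, evening])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(profiles)
for k in range(2):
    centre = kmeans.cluster_centers_[k]
    print("cluster", k, "peak hour:", int(centre.argmax()))
```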
{
"docid": "6b125ab0691988a5836855346f277970",
"text": "Cardol (C₁₅:₃), isolated from cashew (Anacardium occidentale L.) nut shell liquid, has been shown to exhibit bactericidal activity against various strains of Staphylococcus aureus, including methicillin-resistant strains. The maximum level of reactive oxygen species generation was detected at around the minimum bactericidal concentration of cardol, while reactive oxygen species production drastically decreased at doses above the minimum bactericidal concentration. The primary response for bactericidal activity around the bactericidal concentration was noted to primarily originate from oxidative stress such as intracellular reactive oxygen species generation. High doses of cardol (C₁₅:₃) were shown to induce leakage of K⁺ from S. aureus cells, which may be related to the decrease in reactive oxygen species. Antioxidants such as α-tocopherol and ascorbic acid restricted reactive oxygen species generation and restored cellular damage induced by the lipid. Cardol (C₁₅:₃) overdose probably disrupts the native membrane-associated function as it acts as a surfactant. The maximum antibacterial activity of cardols against S. aureus depends on their log P values (partition coefficient in octanol/water) and is related to their similarity to those of anacardic acids isolated from the same source.",
"title": ""
},
{
"docid": "b83e784d3ec4afcf8f6ed49dbe90e157",
"text": "In this paper, the impact of an increased number of layers on the performance of axial flux permanent magnet synchronous machines (AFPMSMs) is studied. The studied parameters are the inductance, terminal voltages, PM losses, iron losses, the mean value of torque, and the ripple torque. It is shown that increasing the number of layers reduces the fundamental winding factor. In consequence, the rated torque for the same current reduces. However, the reduction of harmonics associated with a higher number of layers reduces the ripple torque, PM losses, and iron losses. Besides studying the performance of the AFPMSMs for the rated conditions, the study is broadened for the field weakening (FW) region. During the FW region, the flux of the PMs is weakened by an injection of a reversible d-axis current. This keeps the terminal voltage of the machine fixed at the rated value. The inductance plays an important role in the FW study. A complete study for the FW shows that the two layer winding has the optimum performance compared to machines with an other number of winding layers.",
"title": ""
},
{
"docid": "2b8d90c11568bb8b172eca20a48fd712",
"text": "INTRODUCTION\nCancer incidence and mortality estimates for 25 cancers are presented for the 40 countries in the four United Nations-defined areas of Europe and for the European Union (EU-27) for 2012.\n\n\nMETHODS\nWe used statistical models to estimate national incidence and mortality rates in 2012 from recently-published data, predicting incidence and mortality rates for the year 2012 from recent trends, wherever possible. The estimated rates in 2012 were applied to the corresponding population estimates to obtain the estimated numbers of new cancer cases and deaths in Europe in 2012.\n\n\nRESULTS\nThere were an estimated 3.45 million new cases of cancer (excluding non-melanoma skin cancer) and 1.75 million deaths from cancer in Europe in 2012. The most common cancer sites were cancers of the female breast (464,000 cases), followed by colorectal (447,000), prostate (417,000) and lung (410,000). These four cancers represent half of the overall burden of cancer in Europe. The most common causes of death from cancer were cancers of the lung (353,000 deaths), colorectal (215,000), breast (131,000) and stomach (107,000). In the European Union, the estimated numbers of new cases of cancer were approximately 1.4 million in males and 1.2 million in females, and around 707,000 men and 555,000 women died from cancer in the same year.\n\n\nCONCLUSION\nThese up-to-date estimates of the cancer burden in Europe alongside the description of the varying distribution of common cancers at both the regional and country level provide a basis for establishing priorities to cancer control actions in Europe. The important role of cancer registries in disease surveillance and in planning and evaluating national cancer plans is becoming increasingly recognised, but needs to be further advocated. The estimates and software tools for further analysis (EUCAN 2012) are available online as part of the European Cancer Observatory (ECO) (http://eco.iarc.fr).",
"title": ""
},
{
"docid": "ab572c22a75656c19e50b311eb4985ec",
"text": "With the increasingly complex electromagnetic environment of communication, as well as the gradually increased radar signal types, how to effectively identify the types of radar signals at low SNR becomes a hot topic. A radar signal recognition algorithm based on entropy features, which describes the distribution characteristics for different types of radar signals by extracting Shannon entropy, Singular spectrum Shannon entropy and Singular spectrum index entropy features, was proposed to achieve the purpose of signal identification. Simulation results show that, the algorithm based on entropies has good anti-noise performance, and it can still describe the characteristics of signals well even at low SNR, which can achieve the purpose of identification and classification for different radar signals.",
"title": ""
},
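A sketch of the two kinds of features named above, under my own (common) definitions: Shannon entropy of a signal's amplitude histogram, and singular-spectrum entropy computed from the singular values of a trajectory (Hankel-style) matrix. The paper's exact feature definitions and the linear-FM test signal parameters may differ.

```python
import numpy as np

def shannon_entropy(signal, bins=64):
    """Shannon entropy of the signal's amplitude histogram (one common definition)."""
    hist, _ = np.histogram(np.abs(signal), bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def singular_spectrum_entropy(signal, window=32):
    """Entropy of the normalized singular values of a trajectory (Hankel) matrix."""
    n = len(signal) - window + 1
    traj = np.stack([signal[i:i + window] for i in range(n)])
    s = np.linalg.svd(traj, compute_uv=False)
    p = s / s.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

t = np.linspace(0, 1, 1024)
lfm = np.cos(2 * np.pi * (50 * t + 100 * t ** 2))      # linear-FM pulse
noise = np.random.default_rng(0).standard_normal(1024)
for name, sig in [("LFM", lfm), ("noise", noise)]:
    print(name, round(shannon_entropy(sig), 2), round(singular_spectrum_entropy(sig), 2))
```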
{
"docid": "5a4c9b6626d2d740246433972ad60f16",
"text": "We propose a new approach to the problem of neural network expressivity, which seeks to characterize how structural properties of a neural network family affect the functions it is able to compute. Our approach is based on an interrelated set of measures of expressivity, unified by the novel notion of trajectory length, which measures how the output of a network changes as the input sweeps along a one-dimensional path. Our findings can be summarized as follows:",
"title": ""
},
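A small numerical sketch of the trajectory-length measure described above: sweep the input along a one-dimensional path, push it through a random ReLU network, and sum the distances between consecutive outputs. The architecture, initialisation scale and path are my own choices, not the paper's exact setup.

```python
import numpy as np

def trajectory_length(depth, width, n_points=500, input_dim=10, seed=0):
    """Length of a random ReLU network's output along a 1-D input path (a sketch)."""
    rng = np.random.default_rng(seed)
    # One-dimensional path in input space: interpolate between two random points.
    a, b = rng.standard_normal(input_dim), rng.standard_normal(input_dim)
    ts = np.linspace(0.0, 1.0, n_points)[:, None]
    h = (1 - ts) * a + ts * b                       # shape (n_points, input_dim)

    for _ in range(depth):
        fan_in = h.shape[1]
        W = rng.standard_normal((fan_in, width)) * np.sqrt(2.0 / fan_in)
        h = np.maximum(h @ W, 0.0)                  # ReLU layer

    # Sum of distances between consecutive outputs along the swept path.
    return float(np.linalg.norm(np.diff(h, axis=0), axis=1).sum())

for depth in (1, 3, 6):
    print("depth", depth, "trajectory length ~", round(trajectory_length(depth, width=64), 1))
```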
{
"docid": "36d6d14ab816a2fea62df31e370d7b1a",
"text": "Modern applications provide interfaces for scripting, but many users do not know how to write script commands. However, many users are familiar with the idea of entering keywords into a web search engine. Hence, if a user is familiar with the vocabulary of an application domain, we anticipate that they could write a set of keywords expressing a command in that domain. For instance, in the web browsing domain, a user might enter <B>click search button</B>. We call expressions of this form keyword commands, and we present a novel approach for translating keyword commands directly into executable code. Our prototype of this system in the web browsing domain translates <B>click search button</B> into the Chickenfoot code <B>click(findButton(\"search\"))</B>. This code is then executed in the context of a web browser to carry out the effect. We also present an implementation of this system in the domain of Microsoft Word. A user study revealed that subjects could use keyword commands to successfully complete 90% of the web browsing tasks in our study without instructions or training. Conversely, we would expect users to complete close to 0% of the tasks if they had to guess the underlying JavaScript commands with no instructions or training.",
"title": ""
},
{
"docid": "3cc07ea28720245f9c4983b0a4b1a66d",
"text": "A first line of attack in exploratory data analysis is data visualization, i.e., generating a 2-dimensional representation of data that makes clusters of similar points visually identifiable. Standard JohnsonLindenstrauss dimensionality reduction does not produce data visualizations. The t-SNE heuristic of van der Maaten and Hinton, which is based on non-convex optimization, has become the de facto standard for visualization in a wide range of applications. This work gives a formal framework for the problem of data visualization – finding a 2-dimensional embedding of clusterable data that correctly separates individual clusters to make them visually identifiable. We then give a rigorous analysis of the performance of t-SNE under a natural, deterministic condition on the “ground-truth” clusters (similar to conditions assumed in earlier analyses of clustering) in the underlying data. These are the first provable guarantees on t-SNE for constructing good data visualizations. We show that our deterministic condition is satisfied by considerably general probabilistic generative models for clusterable data such as mixtures of well-separated log-concave distributions. Finally, we give theoretical evidence that t-SNE provably succeeds in partially recovering cluster structure even when the above deterministic condition is not met.",
"title": ""
},
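For readers who want to try the visualization setting analyzed above, this is how t-SNE is typically invoked for a 2-D embedding of clusterable data with scikit-learn. The blob data and perplexity value are assumptions, and the snippet does not reproduce the paper's theoretical analysis.

```python
import numpy as np
from sklearn.manifold import TSNE

# Toy clusterable data: three well-separated Gaussian blobs in 50 dimensions.
rng = np.random.default_rng(0)
centers = rng.normal(scale=10.0, size=(3, 50))
X = np.vstack([c + rng.normal(size=(100, 50)) for c in centers])

# 2-D embedding for visualization; perplexity is the usual knob to tune.
emb = TSNE(n_components=2, perplexity=30, init="pca", random_state=0).fit_transform(X)
print(emb.shape)   # (300, 2)
```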
{
"docid": "11851c0615ad483b6c4f9d0e4ccc30b2",
"text": "In the era of information technology, human tend to develop better and more convenient lifestyle. Nowadays, almost all the electronic devices are equipped with wireless technology. A wireless communication network has numerous advantages and becomes an important application. The enhancements provide by the wireless technology gives the ease of control to the users and not least the mobility of the devices within the network. It is use the Zigbee as the wireless modules. The Smart Ordering System introduced current and fast way to order food at a restaurant. The system uses a small keypad to place orders and the order made by inserting the code on the keypad menu. This code comes along with the menu. The signal will be delivered to the order by the Zigbee technology, and it will automatically be displayed on the screen in the kitchen. Keywords— smart, ordering, S.O.S, Zigbee.",
"title": ""
},
{
"docid": "2399755bed6b1fc5fac495d54886acc0",
"text": "Lately fire outbreak is common issue happening in Malays and the damage caused by these type of incidents is tremendous toward nature and human interest. Due to this the need for application for fire detection has increases in recent years. In this paper we proposed a fire detection algorithm based on image processing techniques which is compatible in surveillance devices like CCTV, wireless camera to UAVs. The algorithm uses RGB colour model to detect the colour of the fire which is mainly comprehended by the intensity of the component R which is red colour. The growth of fire is detected using sobel edge detection. Finally a colour based segmentation technique was applied based on the results from the first technique and second technique to identify the region of interest (ROI) of the fire. After analysing 50 different fire scenarios images, the final accuracy obtained from testing the algorithm was 93.61% and the efficiency was 80.64%.",
"title": ""
},
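A hedged sketch of the two processing steps named in the abstract above: an RGB rule that looks for a dominant red channel, and a Sobel gradient magnitude whose strong responses mark the candidate region. The thresholds and the random stand-in image are illustrative, not the values used in the paper.

```python
import numpy as np

def fire_mask(rgb, r_min=180, margin=20):
    """Crude fire-colour rule: strong red channel that dominates green and blue.
    Thresholds are illustrative, not the paper's values."""
    r, g, b = rgb[..., 0].astype(int), rgb[..., 1].astype(int), rgb[..., 2].astype(int)
    return (r > r_min) & (r > g + margin) & (g > b)

def sobel_magnitude(gray):
    """Gradient magnitude with 3x3 Sobel kernels (edges of the candidate region)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(gray.astype(float), 1, mode="edge")
    h, w = gray.shape
    gx, gy = np.zeros((h, w)), np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(win * kx)
            gy[i, j] = np.sum(win * ky)
    return np.hypot(gx, gy)

# Region of interest: fire-coloured pixels lying on strong edges.
img = (np.random.rand(64, 64, 3) * 255).astype(np.uint8)   # stand-in image
gray = img.mean(axis=2)
roi = fire_mask(img) & (sobel_magnitude(gray) > 100)
print(roi.sum(), "candidate fire pixels")
```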
{
"docid": "45ec93ccf4b2f6a6b579a4537ca73e9c",
"text": "Concurrent collections provide thread-safe, highly-scalable operations, and are widely used in practice. However, programmers can misuse these concurrent collections when composing two operations where a check on the collection (such as non-emptiness) precedes an action (such as removing an entry). Unless the whole composition is atomic, the program contains an atomicity violation bug. In this paper we present the first empirical study of CHECK-THEN-ACT idioms of Java concurrent collections in a large corpus of open-source applications. We catalog nine commonly misused CHECK-THEN-ACT idioms and show the correct usage. We quantitatively and qualitatively analyze 28 widely-used open source Java projects that use Java concurrency collections - comprising 6.4M lines of code. We classify the commonly used idioms, the ones that are the most error-prone, and the evolution of the programs with respect to misused idioms. We implemented a tool, CTADetector, to detect and correct misused CHECK-THEN-ACT idioms. Using CTADetector we found 282 buggy instances. We reported 155 to the developers, who examined 90 of them. The developers confirmed 60 as new bugs and accepted our patch. This shows that CHECK-THEN-ACT idioms are commonly misused in practice, and correcting them is important.",
"title": ""
}
] |
scidocsrr
|
d83d63e45d53a3b93225ce6797cdb26a
|
Encoder-decoder with focus-mechanism for sequence labelling based spoken language understanding
|
[
{
"docid": "1dd8fdb5f047e58f60c228e076aa8b66",
"text": "Recurrent Neural Network Language Models (RNN-LMs) have recently shown exceptional performance across a variety of applications. In this paper, we modify the architecture to perform Language Understanding, and advance the state-of-the-art for the widely used ATIS dataset. The core of our approach is to take words as input as in a standard RNN-LM, and then to predict slot labels rather than words on the output side. We present several variations that differ in the amount of word context that is used on the input side, and in the use of non-lexical features. Remarkably, our simplest model produces state-of-the-art results, and we advance state-of-the-art through the use of bagof-words, word embedding, named-entity, syntactic, and wordclass features. Analysis indicates that the superior performance is attributable to the task-specific word representations learned by the RNN.",
"title": ""
}
] |
[
{
"docid": "d945ae2fe20af58c2ca4812c797d361d",
"text": "Triple-negative breast cancers (TNBC) are genetically characterized by aberrations in TP53 and a low rate of activating point mutations in common oncogenes, rendering it challenging in applying targeted therapies. We performed whole-exome sequencing (WES) and RNA sequencing (RNA-seq) to identify somatic genetic alterations in mouse models of TNBCs driven by loss of Trp53 alone or in combination with Brca1 Amplifications or translocations that resulted in elevated oncoprotein expression or oncoprotein-containing fusions, respectively, as well as frameshift mutations of tumor suppressors were identified in approximately 50% of the tumors evaluated. Although the spectrum of sporadic genetic alterations was diverse, the majority had in common the ability to activate the MAPK/PI3K pathways. Importantly, we demonstrated that approved or experimental drugs efficiently induce tumor regression specifically in tumors harboring somatic aberrations of the drug target. Our study suggests that the combination of WES and RNA-seq on human TNBC will lead to the identification of actionable therapeutic targets for precision medicine-guided TNBC treatment.Significance: Using combined WES and RNA-seq analyses, we identified sporadic oncogenic events in TNBC mouse models that share the capacity to activate the MAPK and/or PI3K pathways. Our data support a treatment tailored to the genetics of individual tumors that parallels the approaches being investigated in the ongoing NCI-MATCH, My Pathway Trial, and ESMART clinical trials. Cancer Discov; 8(3); 354-69. ©2017 AACR.See related commentary by Natrajan et al., p. 272See related article by Matissek et al., p. 336This article is highlighted in the In This Issue feature, p. 253.",
"title": ""
},
{
"docid": "f2a677515866e995ff8e0e90561d7cbc",
"text": "Pattern matching and data abstraction are important concepts in designing programs, but they do not fit well together. Pattern matching depends on making public a free data type representation, while data abstraction depends on hiding the representation. This paper proposes the views mechanism as a means of reconciling this conflict. A view allows any type to be viewed as a free data type, thus combining the clarity of pattern matching with the efficiency of data abstraction.",
"title": ""
},
{
"docid": "26b8ec80d9fe7317e306bed3cd5c9fa4",
"text": "We describe a method for disambiguating Chinese commas that is central to Chinese sentence segmentation. Chinese sentence segmentation is viewed as the detection of loosely coordinated clauses separated by commas. Trained and tested on data derived from the Chinese Treebank, our model achieves a classification accuracy of close to 90% overall, which translates to an F1 score of 70% for detecting commas that signal sentence boundaries.",
"title": ""
},
{
"docid": "99bd908e217eb9f56c40abd35839e9b3",
"text": "How does the physical structure of an arithmetic expression affect the computational processes engaged in by reasoners? In handwritten arithmetic expressions containing both multiplications and additions, terms that are multiplied are often placed physically closer together than terms that are added. Three experiments evaluate the role such physical factors play in how reasoners construct solutions to simple compound arithmetic expressions (such as \"2 + 3 × 4\"). Two kinds of influence are found: First, reasoners incorporate the physical size of the expression into numerical responses, tending to give larger responses to more widely spaced problems. Second, reasoners use spatial information as a cue to hierarchical expression structure: More narrowly spaced subproblems within an expression tend to be solved first and tend to be multiplied. Although spatial relationships besides order are entirely formally irrelevant to expression semantics, reasoners systematically use these relationships to support their success with various formal properties.",
"title": ""
},
{
"docid": "a9f23b7a6e077d7e9ca1a3165948cdf3",
"text": "In most problem-solving activities, feedback is received at the end of an action sequence. This creates a credit-assignment problem where the learner must associate the feedback with earlier actions, and the interdependencies of actions require the learner to either remember past choices of actions (internal state information) or rely on external cues in the environment (external state information) to select the right actions. We investigated the nature of explicit and implicit learning processes in the credit-assignment problem using a probabilistic sequential choice task with and without external state information. We found that when explicit memory encoding was dominant, subjects were faster to select the better option in their first choices than in the last choices; when implicit reinforcement learning was dominant subjects were faster to select the better option in their last choices than in their first choices. However, implicit reinforcement learning was only successful when distinct external state information was available. The results suggest the nature of learning in credit assignment: an explicit memory encoding process that keeps track of internal state information and a reinforcement-learning process that uses state information to propagate reinforcement backwards to previous choices. However, the implicit reinforcement learning process is effective only when the valences can be attributed to the appropriate states in the system – either internally generated states in the cognitive system or externally presented stimuli in the environment.",
"title": ""
},
{
"docid": "3f8860bc21f26b81b066f4c75b9390e1",
"text": "Adaptive filter algorithms are extensively use in active control applications and the availability of low cost powerful digital signal processor (DSP) platforms has opened the way for new applications and further research opportunities in e.g. the active control area. The field of active control demands a solid exposure to practical systems and DSP platforms for a comprehensive understanding of the theory involved. Traditional laboratory experiments prove to be insufficient to fulfill these demands and need to be complemented with more flexible and economic remotely controlled laboratories. The purpose of this thesis project is to implement a number of different adaptive control algorithms in the recently developed remotely controlled Virtual Instrument Systems in Reality (VISIR) ANC/DSP remote laboratory at Blekinge Institute of Technology and to evaluate the performance of these algorithms in the remote laboratory. In this thesis, performance of different filtered-x versions adaptive algorithms (NLMS, LLMS, RLS and FuRLMS) has been evaluated in a remote Laboratory. The adaptive algorithms were implemented remotely on a Texas Instrument DSP TMS320C6713 in an ANC system to attenuate low frequency noise which ranges from 0-200 Hz in a circular ventilation duct using single channel feed forward control. Results show that the remote lab can handle complex and advanced control algorithms. These algorithms were tested and it was found that remote lab works effectively and the achieved attenuation level for the algorithms used on the duct system is comparable to similar applications.",
"title": ""
},
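One of the algorithm families evaluated above (NLMS) can be sketched in its basic system-identification form; the remote ANC experiments use the filtered-x variant running on a DSP, which this plain-Python illustration does not reproduce. The filter order, step size, and the toy unknown path are assumptions.

```python
import numpy as np

def nlms(x, d, order=32, mu=0.5, eps=1e-6):
    """Normalized LMS: adapt weights w so the filtered reference tracks the desired signal d.
    Basic form only; the abstract's experiments use the filtered-x variant for ANC."""
    w = np.zeros(order)
    y = np.zeros(len(x))
    e = np.zeros(len(x))
    for n in range(order, len(x)):
        xn = x[n - order + 1:n + 1][::-1]        # x[n], x[n-1], ..., x[n-order+1]
        y[n] = w @ xn
        e[n] = d[n] - y[n]
        w += (mu / (eps + xn @ xn)) * e[n] * xn  # normalized step size
    return w, y, e

# Toy run: identify an unknown 32-tap FIR path from noisy observations.
rng = np.random.default_rng(1)
x = rng.standard_normal(5000)
h_true = 0.3 * rng.standard_normal(32)
d = np.convolve(x, h_true, mode="full")[:len(x)] + 0.01 * rng.standard_normal(len(x))
w, y, e = nlms(x, d)
print("residual error power:", float(np.mean(e[-500:] ** 2)))
```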
{
"docid": "e79df31bd411d7c62d625a047dde61ce",
"text": "The depth resolution achieved by a continuous wave time-of-flight (C-ToF) imaging system is determined by the coding (modulation and demodulation) functions that it uses. Almost all current C-ToF systems use sinusoid or square coding functions, resulting in a limited depth resolution. In this article, we present a mathematical framework for exploring and characterizing the space of C-ToF coding functions in a geometrically intuitive space. Using this framework, we design families of novel coding functions that are based on Hamiltonian cycles on hypercube graphs. Given a fixed total source power and acquisition time, the new Hamiltonian coding scheme can achieve up to an order of magnitude higher resolution as compared to the current state-of-the-art methods, especially in low signal-to-noise ratio (SNR) settings. We also develop a comprehensive physically-motivated simulator for C-ToF cameras that can be used to evaluate various coding schemes prior to a real hardware implementation. Since most off-the-shelf C-ToF sensors use sinusoid or square functions, we develop a hardware prototype that can implement a wide range of coding functions. Using this prototype and our software simulator, we demonstrate the performance advantages of the proposed Hamiltonian coding functions in a wide range of imaging settings.",
"title": ""
},
{
"docid": "579db3cec4e49d53090ee13f35385c35",
"text": "In cloud computing environments, multiple tenants are often co-located on the same multi-processor system. Thus, preventing information leakage between tenants is crucial. While the hypervisor enforces software isolation, shared hardware, such as the CPU cache or memory bus, can leak sensitive information. For security reasons, shared memory between tenants is typically disabled. Furthermore, tenants often do not share a physical CPU. In this setting, cache attacks do not work and only a slow cross-CPU covert channel over the memory bus is known. In contrast, we demonstrate a high-speed covert channel as well as the first side-channel attack working across processors and without any shared memory. To build these attacks, we use the undocumented DRAM address mappings. We present two methods to reverse engineer the mapping of memory addresses to DRAM channels, ranks, and banks. One uses physical probing of the memory bus, the other runs entirely in software and is fully automated. Using this mapping, we introduce DRAMA attacks, a novel class of attacks that exploit the DRAM row buffer that is shared, even in multi-processor systems. Thus, our attacks work in the most restrictive environments. First, we build a covert channel with a capacity of up to 2 Mbps, which is three to four orders of magnitude faster than memory-bus-based channels. Second, we build a side-channel template attack that can automatically locate and monitor memory accesses. Third, we show how using the DRAM mappings improves existing attacks and in particular enables practical Rowhammer attacks on DDR4.",
"title": ""
},
{
"docid": "1769133301f9292d5c2b4e81e1d213be",
"text": "Smart contracts are an innovation built on top of the blockchain technology. It provides a platform for automatically executing contracts in an anonymous, distributed, and trusted way, which has the potential to revolutionize many industries. The most popular programming language for creating smart contracts is called Solidity, which is supported by Ethereum. Like ordinary programs, Solidity programs may contain vulnerabilities, which potentially lead to attacks. The problem is magnified by the fact that smart contracts, unlike ordinary programs, cannot be patched easily once deployed. It is thus important that smart contracts are checked against potential vulnerabilities. Existing approaches tackle the problem by developing methods which aim to automatically analyze or verify smart contracts. Such approaches often results in false alarms or poor scalability, fundamentally because Solidity is Turing-complete. In this work, we propose an alternative approach to automatically identify critical program paths (with multiple function calls including inter-contract function calls) in a smart contract, rank the paths according to their criticalness, discard them if they are infeasible or otherwise present them with user friendly warnings for user inspection. We identify paths which involve monetary transaction as critical paths, and prioritize those which potentially violate important properties. For scalability, symbolic execution techniques are only applied to top ranked critical paths. Our approach has been implemented in a tool called sCompile, which has been applied to 36,099 smart contracts. The experiment results show that sCompile is efficient, i.e., 5 seconds on average for one smart contract. Furthermore, we show that many known vulnerability can be captured if the user inspects as few as 10 program paths generated by sCompile. Lastly, sCompile discovered 224 unknown vulnerabilities with a false positive rate of 15.4% before user inspection.",
"title": ""
},
{
"docid": "dfcc931d9cd7d084bbbcf400f44756a5",
"text": "In this paper we address the problem of aligning very long (often more than one hour) audio files to their corresponding textual transcripts in an effective manner. We present an efficient recursive technique to solve this problem that works well even on noisy speech signals. The key idea of this algorithm is to turn the forced alignment problem into a recursive speech recognition problem with a gradually restricting dictionary and language model. The algorithm is tolerant to acoustic noise and errors or gaps in the text transcript or audio tracks. We report experimental results on a 3 hour audio file containing TV and radio broadcasts. We will show accurate alignments on speech under a variety of real acoustic conditions such as speech over music and speech over telephone lines. We also report results when the same audio stream has been corrupted with white additive noise or compressed using a popular web encoding format such as RealAudio. This algorithm has been used in our internal multimedia indexing project. It has processed more than 200 hours of audio from varied sources, such as WGBH NOVA documentaries and NPR web audio files. The system aligns speech media content in about one to five times realtime, depending on the acoustic conditions of the audio signal.",
"title": ""
},
{
"docid": "807885753c1d75be3c3cd37a066e718d",
"text": "Secure deletion of data from non-volatile storage is a well-recognized problem. While numerous solutions have been proposed, advances in storage technologies have stymied efforts to solve the problem. For instance, SSDs make use of techniques such as wear leveling that involve replication of data; this is in direct opposition to efforts to securely delete sensitive data from storage. We present a technique to provide secure deletion guarantees at file granularity, independent of the characteristics of the underlying storage medium. The approach builds on prior seminal work on cryptographic erasure, encrypting every file on an insecure medium with a unique key that can later be discarded to cryptographically render the data irrecoverable. To make the approach scalable and, therefore, usable on commodity systems, keys are organized in an efficient tree structure where a single master key is confined to a secure store. We describe an implementation of this scheme as a fileaware stackable block device, deployed as a standalone Linux kernel module that does not require modifications to the operating system. Our prototype demonstrates that secure deletion independent of the underlying storage medium can be achieved with comparable overhead to existing full disk encryption implementations.",
"title": ""
},
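The core idea described above -- cryptographic erasure with one key per file, so that discarding the key renders the ciphertext unrecoverable -- can be sketched with the Fernet recipe from the `cryptography` package. This is an in-memory stand-in, not the paper's stackable block device or its key tree.

```python
from cryptography.fernet import Fernet

class KeyPerFileStore:
    """Each file gets its own key; 'secure deletion' means forgetting the key.
    In the paper the keys live in a tree under a master key on a secure store;
    here they are simply held in a dict for illustration."""
    def __init__(self):
        self._keys = {}    # filename -> key (stand-in for the key tree)
        self._blobs = {}   # filename -> ciphertext (stand-in for the disk)

    def write(self, name, data: bytes):
        key = Fernet.generate_key()
        self._keys[name] = key
        self._blobs[name] = Fernet(key).encrypt(data)

    def read(self, name) -> bytes:
        return Fernet(self._keys[name]).decrypt(self._blobs[name])

    def secure_delete(self, name):
        # The ciphertext may linger (wear levelling, backups, ...) but without
        # the key it is computationally unrecoverable.
        del self._keys[name]

store = KeyPerFileStore()
store.write("secret.txt", b"do not keep me")
print(store.read("secret.txt"))
store.secure_delete("secret.txt")
```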
{
"docid": "b67a4add892b8bd88be792f24ff0c2ba",
"text": "A 2.1-to-2.8-GHz low-power consumption all-digital phase locked loop (ADPLL) with a time-windowed time-to-digital converter (TDC) is presented. The time-windowed TDC uses a two-step structure with an inverter- and a Vernier-delay time-quantizer to improve time resolution, which results in low phase noise. Time-windowed operation is implemented in the TDC, in which a single-shot pulse-based operation is used for low power consumption. The test chip implemented in 90-nm CMOS technology exhibits in-band phase noise of , where the loop-bandwidth is set to 500 kHz with a 40-MHz reference signal, and out-band noise of at a 1-MHz offset frequency. The chip core occupies 0.37 and the measured power consumption is 8.1 mA from a 1.2-V power supply.",
"title": ""
},
{
"docid": "9a7ef5c9f6ceca7a88d2351504404954",
"text": "In this paper, we propose a 3D HMM (Three-dimensional Hidden Markov Models) approach to recognizing human facial expressions and associated emotions. Human emotion is usually classified by psychologists into six categories: Happiness, Sadness, Anger, Fear, Disgust and Surprise. Further, psychologists categorize facial movements based on the muscles that produce those movements using a Facial Action Coding System (FACS). We look beyond pure muscle movements and investigate facial features – brow, mouth, nose, eye height and facial shape – as a means of determining associated emotions. Histogram of Optical Flow is used as the descriptor for extracting and describing the key features, while training and testing are performed on 3D Hidden Markov Models. Experiments on datasets show our approach is promising and robust.",
"title": ""
},
{
"docid": "a0420ded12a1fd704ce3ba939acb48db",
"text": "Deep belief nets (DBNs) have been successfully applied in various fields ranging from image classification and audio recognition to information retrieval. Compared with traditional shallow neural networks, DBNs can use unlabeled data to pretrain a multi-layer generative model, which can better solve the overfitting problem during training neural networks. In this study we represent malware as opcode sequences and use DBNs to detect malware. We compare the performance of DBNs with three widely used classification algorithms: Support Vector Machines (SVM), Decision Tree and k-Nearest Neighbor algorithm (KNN). The DBN model gives detection accuracy that is equal to the best of the other models. When using additional unlabeled data for DBN pre-training, DBNs performed better than the compared classification algorithms. We also use the DBNs as an autoencoder to extract the feature vectors of the input data. The experiments shows that the autoencoder can effectively model the underlying structure of the input data, and can significantly reduce the dimensions of feature vectors.",
"title": ""
},
{
"docid": "e8f4c76dad3f7888f00d82ca68b3d297",
"text": "A novel algorithm to segment out objects in a video sequence is proposed in this work. First, we extract object instances in each frame. Then, we select a visually important object instance in each frame to construct the salient object track through the sequence. This can be formulated as finding the maximal weight clique in a complete k-partite graph, which is NP hard. Therefore, we develop the sequential clique optimization (SCO) technique to efficiently determine the cliques corresponding to salient object tracks. We convert these tracks into video object segmentation results. Experimental results show that the proposed algorithm significantly outperforms the state-of-the-art video object segmentation and video salient object detection algorithms on recent benchmark datasets.",
"title": ""
},
{
"docid": "595cb7698c38b9f5b189ded9d270fe69",
"text": "Sentiment Analysis can help to extract knowledge related to opinions and emotions from user generated text information. It can be applied in medical field for patients monitoring purposes. With the availability of large datasets, deep learning algorithms have become a state of the art also for sentiment analysis. However, deep models have the drawback of not being non human-interpretable, raising various problems related to model’s interpretability. Very few work have been proposed to build models that explain their decision making process and actions. In this work, we review the current sentiment analysis approaches and existing explainable systems. Moreover, we present a critical review of explainable sentiment analysis models and discussed the insight of applying explainable sentiment analysis in the medical field.",
"title": ""
},
{
"docid": "0dc5a8b5b0c3d8424b510f5910f26976",
"text": "In 1992, Tani et al. proposed remotely operating machines in a factory by manipulating a live video image on a computer screen. In this paper we revisit this metaphor and investigate its suitability for mobile use. We present Touch Projector, a system that enables users to interact with remote screens through a live video image on their mobile device. The handheld device tracks itself with respect to the surrounding displays. Touch on the video image is \"projected\" onto the target display in view, as if it had occurred there. This literal adaptation of Tani's idea, however, fails because handheld video does not offer enough stability and control to enable precise manipulation. We address this with a series of improvements, including zooming and freezing the video image. In a user study, participants selected targets and dragged targets between displays using the literal and three improved versions. We found that participants achieved highest performance with automatic zooming and temporary image freezing.",
"title": ""
},
{
"docid": "462a0746875e35116f669b16d851f360",
"text": "We previously have applied deep autoencoder (DAE) for noise reduction and speech enhancement. However, the DAE was trained using only clean speech. In this study, by using noisyclean training pairs, we further introduce a denoising process in learning the DAE. In training the DAE, we still adopt greedy layer-wised pretraining plus fine tuning strategy. In pretraining, each layer is trained as a one-hidden-layer neural autoencoder (AE) using noisy-clean speech pairs as input and output (or transformed noisy-clean speech pairs by preceding AEs). Fine tuning was done by stacking all AEs with pretrained parameters for initialization. The trained DAE is used as a filter for speech estimation when noisy speech is given. Speech enhancement experiments were done to examine the performance of the trained denoising DAE. Noise reduction, speech distortion, and perceptual evaluation of speech quality (PESQ) criteria are used in the performance evaluations. Experimental results show that adding depth of the DAE consistently increase the performance when a large training data set is given. In addition, compared with a minimum mean square error based speech enhancement algorithm, our proposed denoising DAE provided superior performance on the three objective evaluations.",
"title": ""
}
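The noisy-clean training-pair idea described above can be illustrated with a small stand-in: a multilayer perceptron regressor mapping noisy frames to clean frames. The layer-wise pretraining, speech data, and evaluation criteria from the abstract are omitted; the toy sinusoid frames and the network size are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Stand-in data: "clean" frames are sinusoid snippets, "noisy" frames add white
# noise -- a toy for the noisy/clean training pairs described in the abstract.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 64)
clean = np.stack([np.sin(2 * np.pi * f * t) for f in rng.uniform(2, 8, size=2000)])
noisy = clean + 0.3 * rng.standard_normal(clean.shape)

# One wide hidden layer trained to map noisy frames back to clean frames.
# (The paper pretrains each layer as an autoencoder first; that step is omitted here.)
dae = MLPRegressor(hidden_layer_sizes=(256,), max_iter=200, random_state=0)
dae.fit(noisy[:1800], clean[:1800])

denoised = dae.predict(noisy[1800:])
mse_before = float(np.mean((noisy[1800:] - clean[1800:]) ** 2))
mse_after = float(np.mean((denoised - clean[1800:]) ** 2))
print(f"frame MSE: {mse_before:.4f} noisy -> {mse_after:.4f} denoised")
```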
] |
scidocsrr
|
dff9d0d7f03f37aa0d5db61a741a0580
|
Survey on Intrusion Detection System using Machine Learning Techniques
|
[
{
"docid": "b50efa7b82d929c1b8767e23e8359a06",
"text": "Intrusion detection (ID) is an important component of infrastructure protection mechanisms. Intrusion detection systems (IDSs) need to be accurate, adaptive, and extensible. Given these requirements and the complexities of today's network environments, we need a more systematic and automated IDS development process rather that the pure knowledge encoding and engineering approaches. This article describes a novel framework, MADAM ID, for Mining Audit Data for Automated Models for Instrusion Detection. This framework uses data mining algorithms to compute activity patterns from system audit data and extracts predictive features from the patterns. It then applies machine learning algorithms to the audit records taht are processed according to the feature definitions to generate intrusion detection rules. Results from the 1998 DARPA Intrusion Detection Evaluation showed that our ID model was one of the best performing of all the participating systems. We also briefly discuss our experience in converting the detection models produced by off-line data mining programs to real-time modules of existing IDSs.",
"title": ""
},
{
"docid": "0f853c6ccf6ce4cf025050135662f725",
"text": "This paper describes a technique of applying Genetic Algorithm (GA) to network Intrusion Detection Systems (IDSs). A brief overview of the Intrusion Detection System, genetic algorithm, and related detection techniques is presented. Parameters and evolution process for GA are discussed in detail. Unlike other implementations of the same problem, this implementation considers both temporal and spatial information of network connections in encoding the network connection information into rules in IDS. This is helpful for identification of complex anomalous behaviors. This work is focused on the TCP/IP network protocols.",
"title": ""
}
] |
[
{
"docid": "26140dbe32672dc138c46e7fd6f39b1a",
"text": "The state of the art in probabilistic demand forecasting [40] minimizes Quantile Loss to predict the future demand quantiles for different horizons. However, since quantiles aren’t additive, in order to predict the total demand for any wider future interval all required intervals are usually appended to the target vector during model training. The separate optimization of these overlapping intervals can lead to inconsistent forecasts, i.e. forecasts which imply an invalid joint distribution between different horizons. As a result, inter-temporal decision making algorithms that depend on the joint or step-wise conditional distribution of future demand cannot utilize these forecasts. In this work, we address the problem by using sample paths to predict future demand quantiles in a consistent manner and propose several novel methodologies to solve this problem. Our work covers the use of covariance shrinkage methods, autoregressive models, generative adversarial networks and also touches on the use of variational autoencoders and Bayesian Dropout.",
"title": ""
},
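Two pieces of the setup described above can be sketched directly: the quantile (pinball) loss that per-horizon forecasts minimize, and how sample paths make quantiles of aggregates (for example, a weekly total) consistent by construction. The toy Poisson demand is an assumption.

```python
import numpy as np

def quantile_loss(y_true, y_pred, q):
    """Pinball loss: penalizes under-prediction by q and over-prediction by (1 - q)."""
    diff = y_true - y_pred
    return float(np.mean(np.maximum(q * diff, (q - 1) * diff)))

rng = np.random.default_rng(0)
demand = rng.poisson(lam=20, size=1000).astype(float)

# The empirical 0.9-quantile minimizes the q=0.9 pinball loss; the mean does worse.
print(quantile_loss(demand, np.full_like(demand, np.quantile(demand, 0.9)), q=0.9))
print(quantile_loss(demand, np.full_like(demand, demand.mean()), q=0.9))

# With sample paths of shape (n_samples, horizon), quantiles of any aggregate
# stay consistent: just aggregate each path first, then take the quantile.
paths = rng.poisson(lam=20, size=(5000, 7)).astype(float)   # 7-step-ahead samples
print(np.quantile(paths.sum(axis=1), 0.9))                  # P90 of the weekly total
```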
{
"docid": "26d0e97bbb14bc52b8dbb3c03522ac38",
"text": "Intraocular injections of rhodamine and horseradish peroxidase in chameleon, labelled retrogradely neurons in the ventromedial tegmental region of the mesencephalon and the ventrolateral thalamus of the diencephalon. In both areas, staining was observed contralaterally to the injected eye. Labelling was occasionally observed in some rhombencephalic motor nuclei. These results indicate that chameleons, unlike other reptilian species, have two retinopetal nuclei.",
"title": ""
},
{
"docid": "15e2fc773fb558e55d617f4f9ac22f69",
"text": "Recent advances in ASR and spoken language processing have led to improved systems for automated assessment for spoken language. However, it is still challenging for automated scoring systems to achieve high performance in terms of the agreement with human experts when applied to non-native children’s spontaneous speech. The subpar performance is mainly caused by the relatively low recognition rate on non-native children’s speech. In this paper, we investigate different neural network architectures for improving non-native children’s speech recognition and the impact of the features extracted from the corresponding ASR output on the automated assessment of speaking proficiency. Experimental results show that bidirectional LSTM-RNN can outperform feed-forward DNN in ASR, with an overall relative WER reduction of 13.4%. The improved speech recognition can then boost the language proficiency assessment performance. Correlations between the rounded automated scores and expert scores range from 0.66 to 0.70 for the three speaking tasks studied, similar to the humanhuman agreement levels for these tasks.",
"title": ""
},
{
"docid": "aa223de93696eec79feb627f899f8e8d",
"text": "The standard life events methodology for the prediction of psychological symptoms was compared with one focusing on relatively minor events, namely, the hassles and uplifts of everyday life. Hassles and Uplifts Scales were constructed and administered once a month for 10 consecutive months to a community sample of middle-aged adults. It was found that the Hassles Scale was a better predictor of concurrent and subsequent psychological symptoms than were the life events scores, and that the scale shared most of the variance in symptoms accounted for by life events. When the effects of life events scores were removed, hassles and symptoms remained significantly correlated. Uplifts were positively related to symptoms for women but not for men. Hassles and uplifts were also shown to be related, although only modestly so, to positive and negative affect, thus providing discriminate validation for hassles and uplifts in comparison to measures of emotion. It was concluded that the assessment of daily hassles and uplifts may be a better approach to the prediction of adaptational outcomes than the usual life events approach.",
"title": ""
},
{
"docid": "d704917077795fbe16e52ea2385e19ef",
"text": "The objectives of this review were to summarize the evidence from randomized controlled trials (RCTs) on the effects of animal-assisted therapy (AAT). Studies were eligible if they were RCTs. Studies included one treatment group in which AAT was applied. We searched the following databases from 1990 up to October 31, 2012: MEDLINE via PubMed, CINAHL, Web of Science, Ichushi Web, GHL, WPRIM, and PsycINFO. We also searched all Cochrane Database up to October 31, 2012. Eleven RCTs were identified, and seven studies were about \"Mental and behavioral disorders\". Types of animal intervention were dog, cat, dolphin, bird, cow, rabbit, ferret, and guinea pig. The RCTs conducted have been of relatively low quality. We could not perform meta-analysis because of heterogeneity. In a study environment limited to the people who like animals, AAT may be an effective treatment for mental and behavioral disorders such as depression, schizophrenia, and alcohol/drug addictions, and is based on a holistic approach through interaction with animals in nature. To most effectively assess the potential benefits for AAT, it will be important for further research to utilize and describe (1) RCT methodology when appropriate, (2) reasons for non-participation, (3) intervention dose, (4) adverse effects and withdrawals, and (5) cost.",
"title": ""
},
{
"docid": "37f4da100d31ad1da1ba21168c95d7e9",
"text": "An AC chopper controller with symmetrical Pulse-Width Modulation (PWM) is proposed to achieve better performance for a single-phase induction motor compared to phase-angle control line-commutated voltage controllers and integral-cycle control of thyristors. Forced commutated device IGBT controlled by a microcontroller was used in the AC chopper which has the advantages of simplicity, ability to control large amounts of power and low waveform distortion. In this paper the simulation and hardware models of a simple single phase IGBT An AC controller has been developed which showed good results.",
"title": ""
},
{
"docid": "554a3f5f19503a333d3788cf46ffcef2",
"text": "Hospital overcrowding has been a problem in Thai public healthcare system. The main cause of this problem is the limited available resources, including a limited number of doctors, nurses, and limited capacity and availability of medical devices. There have been attempts to alleviate the problem through various strategies. In this paper, a low-cost system was developed and tested in a public hospital with limited budget. The system utilized QR code and smartphone application to capture as-is hospital processes and the time spent on individual activities. With the available activity data, two algorithms were developed to identify two quantities that are valuable to conduct process improvement: the most congested time and bottleneck activities. The system was implemented in a public hospital and results were presented.",
"title": ""
},
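A minimal sketch of how the two quantities named above -- the most congested time and the bottleneck activity -- could be computed from timestamped activity records such as QR-code scans. The log format, field names, and example rows are hypothetical.

```python
from collections import Counter
from datetime import datetime

# Hypothetical scan log: (patient, activity, start, end) captured by the smartphone app.
log = [
    ("p1", "registration", "2023-05-02 08:05", "2023-05-02 08:20"),
    ("p1", "doctor visit",  "2023-05-02 09:40", "2023-05-02 09:55"),
    ("p2", "registration", "2023-05-02 08:10", "2023-05-02 08:50"),
    ("p2", "lab test",      "2023-05-02 09:10", "2023-05-02 10:30"),
]

fmt = "%Y-%m-%d %H:%M"
rows = [(p, a, datetime.strptime(s, fmt), datetime.strptime(e, fmt)) for p, a, s, e in log]

# Most congested time: the hour during which the most activities are in progress.
congestion = Counter(h for _, _, s, e in rows for h in range(s.hour, e.hour + 1))
print("busiest hour:", congestion.most_common(1))

# Bottleneck activity: the one with the largest average duration (in minutes).
durations = {}
for _, act, s, e in rows:
    durations.setdefault(act, []).append((e - s).total_seconds() / 60)
print("bottleneck:", max(durations, key=lambda a: sum(durations[a]) / len(durations[a])))
```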
{
"docid": "9eae7dded031b37956ceea6e68f1076c",
"text": "One of the core principles of the SAP HANA database system is the comprehensive support of distributed query facility. Supporting scale-out scenarios was one of the major design principles of the system from the very beginning. Within this paper, we first give an overview of the overall functionality with respect to data allocation, metadata caching and query routing. We then dive into some level of detail for specific topics and explain features and methods not common in traditional disk-based database systems. In summary, the paper provides a comprehensive overview of distributed query processing in SAP HANA database to achieve scalability to handle large databases and heterogeneous types of workloads.",
"title": ""
},
{
"docid": "54ed287c473d796c291afda23848338e",
"text": "Shared memory and message passing are two opposing communication models for parallel multicomputer architectures. Comparing such architectures has been difficult, because applications must be hand-crafted for each architecture, often resulting in radically different sources for comparison. While it is clear that shared memory machines are currently easier to program, in the future, programs will be written in high-level languages and compiled to the specific parallel target, thus eliminating this difference.In this paper, we evaluate several parallel architecture alternatives --- message passing, NUMA, and cachecoherent shared memory --- for a collection of scientific benchmarks written in C*, a data-parallel language. Using a single suite of C* source programs, we compile each benchmark and simulate the interconnect for the alternative models. Our objective is to examine underlying, technology-independent costs inherent in each alternative. Our results show the relative work required to execute these data parallel programs on the different architectures, and point out where some models have inherent advantages for particular data-parallel program styles.",
"title": ""
},
{
"docid": "9c799b4d771c724969be7b392697ebee",
"text": "Search engines need to model user satisfaction to improve their services. Since it is not practical to request feedback on searchers' perceptions and search outcomes directly from users, search engines must estimate satisfaction from behavioral signals such as query refinement, result clicks, and dwell times. This analysis of behavior in the aggregate leads to the development of global metrics such as satisfied result clickthrough (typically operationalized as result-page clicks with dwell time exceeding a particular threshold) that are then applied to all searchers' behavior to estimate satisfac-tion levels. However, satisfaction is a personal belief and how users behave when they are satisfied can also differ. In this paper we verify that searcher behavior when satisfied and dissatisfied is indeed different among individual searchers along a number of dimensions. As a result, we introduce and evaluate learned models of satisfaction for individual searchers and searcher cohorts. Through experimentation via logs from a large commercial Web search engine, we show that our proposed models can predict search satisfaction more accurately than a global baseline that applies the same satisfaction model across all users. Our findings have implications for the study and application of user satisfaction in search systems.",
"title": ""
},
{
"docid": "2aade03834c6db2ecc2912996fd97501",
"text": "User contributions in the form of posts, comments, and votes are essential to the success of online communities. However, allowing user participation also invites undesirable behavior such as trolling. In this paper, we characterize antisocial behavior in three large online discussion communities by analyzing users who were banned from these communities. We find that such users tend to concentrate their efforts in a small number of threads, are more likely to post irrelevantly, and are more successful at garnering responses from other users. Studying the evolution of these users from the moment they join a community up to when they get banned, we find that not only do they write worse than other users over time, but they also become increasingly less tolerated by the community. Further, we discover that antisocial behavior is exacerbated when community feedback is overly harsh. Our analysis also reveals distinct groups of users with different levels of antisocial behavior that can change over time. We use these insights to identify antisocial users early on, a task of high practical importance to community maintainers.",
"title": ""
},
{
"docid": "2aefddf5e19601c8338f852811cebdee",
"text": "This paper presents a system that allows online building of 3D wireframe models through a combination of user interaction and automated methods from a handheld camera-mouse. Crucially, the model being built is used to concurrently compute camera pose, permitting extendable tracking while enabling the user to edit the model interactively. In contrast to other model building methods that are either off-line and/or automated but computationally intensive, the aim here is to have a system that has low computational requirements and that enables the user to define what is relevant (and what is not) at the time the model is being built. OutlinAR hardware is also developed which simply consists of the combination of a camera with a wide field of view lens and a wheeled computer mouse.",
"title": ""
},
{
"docid": "c3f2726c10ebad60d715609f15b67b43",
"text": "Sleep-waking cycles are fundamental in human circadian rhythms and their disruption can have consequences for behaviour and performance. Such disturbances occur due to domestic or occupational schedules that do not permit normal sleep quotas, rapid travel across multiple meridians and extreme athletic and recreational endeavours where sleep is restricted or totally deprived. There are methodological issues in quantifying the physiological and performance consequences of alterations in the sleep-wake cycle if the effects on circadian rhythms are to be separated from the fatigue process. Individual requirements for sleep show large variations but chronic reduction in sleep can lead to immuno-suppression. There are still unanswered questions about the sleep needs of athletes, the role of 'power naps' and the potential for exercise in improving the quality of sleep.",
"title": ""
},
{
"docid": "7ab5f56b615848ba5d8dc2f149fd8bf2",
"text": "At present, most outdoor video-surveillance, driver-assistance and optical remote sensing systems have been designed to work under good visibility and weather conditions. Poor visibility often occurs in foggy or hazy weather conditions and can strongly influence the accuracy or even the general functionality of such vision systems. Consequently, it is important to import actual weather-condition data to the appropriate processing mode. Recently, significant progress has been made in haze removal from a single image [1,2]. Based on the hazy weather classification, specialized approaches, such as a dehazing process, can be employed to improve recognition. Figure 1 shows a sample processing flow of our dehazing program.",
"title": ""
},
{
"docid": "7a356a485b46c6fc712a0174947e142e",
"text": "A systematic review of the literature related to effective occupational therapy interventions in rehabilitation of individuals with work-related forearm, wrist, and hand injuries and illnesses was conducted as part of the Evidence-Based Literature Review Project of the American Occupational Therapy Association. This review provides a comprehensive overview and analysis of 36 studies that addressed many of the interventions commonly used in hand rehabilitation. Findings reveal that the use of occupation-based activities has reasonable yet limited evidence to support its effectiveness. This review supports the premise that many client factors can be positively affected through the use of several commonly used occupational therapy-related modalities and methods. The implications for occupational therapy practice, research, and education and limitations of reviewed studies are also discussed.",
"title": ""
},
{
"docid": "7ac42bef7a9e0c8bd33f359a157f24e0",
"text": "Monte Carlo tree search (MCTS) is a heuristic search method that is used to efficiently search decision trees. The method is particularly efficient in searching trees with a high branching factor. MCTS has a number of advantages over traditional tree search algorithms like simplicity, adaptability etc. This paper is a study of existing literature on different types of MCTS, specifically on using Genetic Algorithms with MCTS. It studies the advantages and disadvantages of this approach, and applies an enhanced variant to Gomoku, a board game with a high branching factor.",
"title": ""
},
{
"docid": "fead6ca9612b29697f73cb5e57c0a1cc",
"text": "This research examines the effect of online social capital and Internet use on the normally negative effects of technology addiction, especially for individuals prone to self-concealment. Self-concealment is a personality trait that describes individuals who are more likely to withhold personal and private information, inhibiting catharsis and wellbeing. Addiction, in any context, is also typically associated with negative outcomes. However, we investigate the hypothesis that communication technology addiction may positively affect wellbeing for self-concealing individuals when online interaction is positive, builds relationships, or fosters a sense of community. Within these parameters, increased communication through mediated channels (and even addiction) may reverse the otherwise negative effects of self-concealment on wellbeing. Overall, the proposed model offers qualified support for the continued analysis of mediated communication as a potential source for improving the wellbeing for particular individuals. This study is important because we know that healthy communication in relationships, including disclosure, is important to wellbeing. This study recognizes that not all people are comfortable communicating in face-to-face settings. Our findings offer evidence that the presence of computers in human behaviors (e.g., mediated channels of communication and NCTs) enables some individuals to communicate and fos ter beneficial interpersonal relationships, and improve their wellbeing.",
"title": ""
},
{
"docid": "4c61d388acfde29dbf049842ef54a800",
"text": "Image matting plays an important role in image and video editing. However, the formulation of image matting is inherently ill-posed. Traditional methods usually employ interaction to deal with the image matting problem with trimaps and strokes, and cannot run on the mobile phone in real-time. In this paper, we propose a real-time automatic deep matting approach for mobile devices. By leveraging the densely connected blocks and the dilated convolution, a light full convolutional network is designed to predict a coarse binary mask for portrait image. And a feathering block, which is edge-preserving and matting adaptive, is further developed to learn the guided filter and transform the binary mask into alpha matte. Finally, an automatic portrait animation system based on fast deep matting is built on mobile devices, which does not need any interaction and can realize real-time matting with 15 fps. The experiments show that the proposed approach achieves comparable results with the state-of-the-art matting solvers.",
"title": ""
},
{
"docid": "fde2aefec80624ff4bc21d055ffbe27b",
"text": "Object detector with region proposal networks such as Fast/Faster R-CNN [1, 2] have shown the state-of-the art performance on several benchmarks. However, they have limited success for detecting small objects. We argue the limitation is related to insufficient performance of Fast R-CNN block in Faster R-CNN. In this paper, we propose a refining block for Fast R-CNN. We further merge the block and Faster R-CNN into a single network (RF-RCNN). The RF-RCNN was applied on plate and human detection in RoadView image that consists of high resolution street images (over 30M pixels). As a result, the RF-RCNN showed great improvement over the Faster-RCNN.",
"title": ""
},
{
"docid": "a9fc5418c0b5789b02dd6638a1b61b5d",
"text": "As the homeostatis characteristics of nerve systems show, artificial neural networks are considered to be robust to variation of circuit components and interconnection faults. However, the tolerance of neural networks depends on many factors, such as the fault model, the network size, and the training method. In this study, we analyze the fault tolerance of fixed-point feed-forward deep neural networks for the implementation in CMOS digital VLSI. The circuit errors caused by the interconnection as well as the processing units are considered. In addition to the conventional and dropout training methods, we develop a new technique that randomly disconnects weights during the training to increase the error resiliency. Feed-forward deep neural networks for phoneme recognition are employed for the experiments.",
"title": ""
}
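The training technique described above -- randomly disconnecting weights during training to improve resiliency to interconnection faults -- can be sketched on a toy linear classifier. The data, disconnect probability, and fault model (weights stuck at zero) are assumptions, and the sketch does not use the fixed-point arithmetic or the phoneme-recognition networks from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class data in 20 dimensions.
X = np.vstack([rng.normal(-1, 1, size=(200, 20)), rng.normal(+1, 1, size=(200, 20))])
y = np.r_[np.zeros(200), np.ones(200)]

def train(p_disconnect=0.0, lr=0.1, steps=300):
    """Logistic regression where a random subset of weights is disconnected
    (zeroed) at every step -- a stand-in for the training technique above."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        mask = (rng.random(w.shape) > p_disconnect).astype(float)
        z = X @ (w * mask)
        p = 1.0 / (1.0 + np.exp(-z))
        grad = X.T @ (p - y) / len(y)
        w -= lr * grad * mask          # only the connected weights are updated
    return w

def accuracy_under_faults(w, fault_rate):
    """Accuracy when a fraction of interconnections fails (weights stuck at zero)."""
    faulty = w * (rng.random(w.shape) > fault_rate)
    return float(np.mean((X @ faulty > 0) == y))

for p in (0.0, 0.3):
    w = train(p_disconnect=p)
    print(f"trained with p_disconnect={p}: accuracy with 30% faults =",
          accuracy_under_faults(w, 0.3))
```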
] |
scidocsrr
|
916fb7599c7605b8f46ed3646f7e429f
|
Coco: Runtime Reasoning about Conflicting Commitments
|
[
{
"docid": "0c1f01d9861783498c44c7c3d0acd57e",
"text": "We understand a sociotechnical system as a multistakeholder cyber-physical system. We introduce governance as the administration of such a system by the stakeholders themselves. In this regard, governance is a peer-to-peer notion and contrasts with traditional management, which is a top-down hierarchical notion. Traditionally, there is no computational support for governance and it is achieved through out-of-band interactions among system administrators. Not surprisingly, traditional approaches simply do not scale up to large sociotechnical systems.\n We develop an approach for governance based on a computational representation of norms in organizations. Our approach is motivated by the Ocean Observatory Initiative, a thirty-year $400 million project, which supports a variety of resources dealing with monitoring and studying the world's oceans. These resources include autonomous underwater vehicles, ocean gliders, buoys, and other instrumentation as well as more traditional computational resources. Our approach has the benefit of directly reflecting stakeholder needs and assuring stakeholders of the correctness of the resulting governance decisions while yielding adaptive resource allocation in the face of changes in both stakeholder needs and physical circumstances.",
"title": ""
}
] |
[
{
"docid": "4b97e5694dc8f1d2e1b5bf8f28bd9b10",
"text": "Poor eating habits are an important public health issue that has large health and economic implications. Many food preferences are established early, but because people make more and more independent eating decisions as they move through adolescence, the transition to independent living during the university days is an important event. To study the phenomenon of food selection, the heath belief model was applied to predict the likelihood of healthy eating among university students. Structural equation modeling was used to investigate the validity of the health belief model (HBM) among 194 students, followed by gender-based analyses. The data strongly supported the HBM. Social change campaign implications are discussed.",
"title": ""
},
{
"docid": "bc244181de9e85cacb2e797585fcb9c1",
"text": "Recent tourism research increasingly explored the opportunities of using Augmented Reality (AR) in order to boost tourism and increase the value for tourists while travelling within a destination. The Technology Acceptance Model (TAM) has been applied to a number of research disciplines, lately also AR however, studies focusing on the tourism context are still scarce. As this field is expected to increase in importance rapidly due to technological advancements and research into functionality, acceptance and usefulness, it is important to identify what the basic requirements are for AR to be accepted by users. Furthermore, the provision of a conceptual model provides researchers with a starting point on which they can base their future research. Therefore, this paper proposes an AR acceptance model including five external variables that might be included in future AR acceptance research.",
"title": ""
},
{
"docid": "74bb1f11761857bf876c9869ed47baeb",
"text": "This paper describes the automatic design of methods for detecting fraudulent behavior. Much of the de&,, ic nrrnm,-,li~h~rl ,,&,a n .am.L~ nf mn.-h;na lm..~:~~ e-. .. ..--..*.*yYYA’“.. UY.“b Y UISLUY “I III-Yllr IxuIY11~ methods. In particular, we combine data mining and constructive induction with more standard machine learning techniques to design methods for detecting fraudulent usage of cellular telephones based on profiling customer behavior. Specifically, we use a rulelearning program to uncover indicators of fraudulent behavior from a large database of cellular calls. These indicators are used to create profilers, which then serve as features to a system that combines evidence from multiple profilers to generate high-confidence alarms. Experiments indicate that this automatic approach performs nearly as well as the best hand-tuned methods for detecting fraud.",
"title": ""
},
{
"docid": "28fd803428e8f40a4627e05a9464e97b",
"text": "We present a generic objectness measure, quantifying how likely it is for an image window to contain an object of any class. We explicitly train it to distinguish objects with a well-defined boundary in space, such as cows and telephones, from amorphous background elements, such as grass and road. The measure combines in a Bayesian framework several image cues measuring characteristics of objects, such as appearing different from their surroundings and having a closed boundary. These include an innovative cue to measure the closed boundary characteristic. In experiments on the challenging PASCAL VOC 07 dataset, we show this new cue to outperform a state-of-the-art saliency measure, and the combined objectness measure to perform better than any cue alone. We also compare to interest point operators, a HOG detector, and three recent works aiming at automatic object segmentation. Finally, we present two applications of objectness. In the first, we sample a small numberof windows according to their objectness probability and give an algorithm to employ them as location priors for modern class-specific object detectors. As we show experimentally, this greatly reduces the number of windows evaluated by the expensive class-specific model. In the second application, we use objectness as a complementary score in addition to the class-specific model, which leads to fewer false positives. As shown in several recent papers, objectness can act as a valuable focus of attention mechanism in many other applications operating on image windows, including weakly supervised learning of object categories, unsupervised pixelwise segmentation, and object tracking in video. Computing objectness is very efficient and takes only about 4 sec. per image.",
"title": ""
},
{
"docid": "d21213e0dbef657d5e7ec8689fe427ed",
"text": "Cutaneous infections due to Listeria monocytogenes are rare. Typically, infections manifest as nonpainful, nonpruritic, self-limited, localized, papulopustular or vesiculopustular eruptions in healthy persons. Most cases follow direct inoculation of the skin in veterinarians or farmers who have exposure to animal products of conception. Less commonly, skin lesions may arise from hematogenous dissemination in compromised hosts with invasive disease. Here, we report the first case in a gardener that occurred following exposure to soil and vegetation.",
"title": ""
},
{
"docid": "57dbe095ca124fbf0fc394b927e9883f",
"text": "How much is 131 million US dollars? To help readers put such numbers in context, we propose a new task of automatically generating short descriptions known as perspectives, e.g. “$131 million is about the cost to employ everyone in Texas over a lunch period”. First, we collect a dataset of numeric mentions in news articles, where each mention is labeled with a set of rated perspectives. We then propose a system to generate these descriptions consisting of two steps: formula construction and description generation. In construction, we compose formulae from numeric facts in a knowledge base and rank the resulting formulas based on familiarity, numeric proximity and semantic compatibility. In generation, we convert a formula into natural language using a sequence-to-sequence recurrent neural network. Our system obtains a 15.2% F1 improvement over a non-compositional baseline at formula construction and a 12.5 BLEU point improvement over a baseline description generation.",
"title": ""
},
{
"docid": "e7107b7d552ae2dc44f4ac51210f433e",
"text": "A one-step process was applied to directly converting wet oil-bearing microalgae biomass of Chlorella pyrenoidosa containing about 90% of water into biodiesel. In order to investigate the effects of water content on biodiesel production, distilled water was added to dried microalgae biomass to form wet biomass used to produce biodiesel. The results showed that at lower temperature of 90°C, water had a negative effect on biodiesel production. The biodiesel yield decreased from 91.4% to 10.3% as water content increased from 0% to 90%. Higher temperature could compensate the negative effect. When temperature reached 150°C, there was no negative effect, and biodiesel yield was over 100%. Based on the above research, wet microalgae biomass was directly applied to biodiesel production, and the optimal conditions were investigated. Under the optimal conditions of 100 mg dry weight equivalent wet microalgae biomass, 4 mL methanol, 8 mL n-hexane, 0.5 M H2SO4, 120°C, and 180 min reaction time, the biodiesel yield reached as high as 92.5% and the FAME content was 93.2%. The results suggested that biodiesel could be effectively produced directly from wet microalgae biomass and this effort may offer the benefits of energy requirements for biodiesel production.",
"title": ""
},
{
"docid": "555e3bbc504c7309981559a66c584097",
"text": "The hippocampus has been implicated in the regulation of anxiety and memory processes. Nevertheless, the precise contribution of its ventral (VH) and dorsal (DH) division in these issues still remains a matter of debate. The Trial 1/2 protocol in the elevated plus-maze (EPM) is a suitable approach to assess features associated with anxiety and memory. Information about the spatial environment on initial (Trial 1) exploration leads to a subsequent increase in open-arm avoidance during retesting (Trial 2). The objective of the present study was to investigate whether transient VH or DH deactivation by lidocaine microinfusion would differently interfere with the performance of EPM-naive and EPM-experienced rats. Male Wistar rats were bilaterally-implanted with guide cannulas aimed at the VH or the DH. One-week after surgery, they received vehicle or lidocaine 2.0% in 1.0 microL (0.5 microL per side) at pre-Trial 1, post-Trial 1 or pre-Trial 2. There was an increase in open-arm exploration after the intra-VH lidocaine injection on Trial 1. Intra-DH pre-Trial 2 administration of lidocaine also reduced the open-arm avoidance. No significant changes were observed in enclosed-arm entries, an EPM index of general exploratory activity. The cautious exploration of potentially dangerous environment requires VH functional integrity, suggesting a specific role for this region in modulating anxiety-related behaviors. With regard to the DH, it may be preferentially involved in learning and memory since the acquired response of inhibitory avoidance was no longer observed when lidocaine was injected pre-Trial 2.",
"title": ""
},
{
"docid": "72f3800a072c2844f6ec145788c0749e",
"text": "In Augmented Reality (AR), interfaces consist of a blend of both real and virtual content. In this paper we examine existing gaming styles played in the real world or on computers. We discuss the strengths and weaknesses of these mediums within an informal model of gaming experience split into four aspects; physical, mental, social and emotional. We find that their strengths are mostly complementary, and argue that games built in AR can blend them to enhance existing game styles and open up new ones. To illustrate these ideas, we present our work on AR Worms, a re-implementation of the classic computer game Worms using Augmented Reality. We discuss how AR has enabled us to start exploring interfaces for gaming, and present informal observations of players at several demonstrations. Finally, we present some ideas for AR games in the area of strategy and role playing games.",
"title": ""
},
{
"docid": "ab0c814984386934d6e99ba04d07ef18",
"text": "Semisupervised dimensionality reduction has been attracting much attention as it not only utilizes both labeled and unlabeled data simultaneously, but also works well in the situation of out-of-sample. This paper proposes an effective approach of semisupervised dimensionality reduction through label propagation and label regression. Different from previous efforts, the new approach propagates the label information from labeled to unlabeled data with a well-designed mechanism of random walks, in which outliers are effectively detected and the obtained virtual labels of unlabeled data can be well encoded in a weighted regression model. These virtual labels are thereafter regressed with a linear model to calculate the projection matrix for dimensionality reduction. By this means, when the manifold or the clustering assumption of data is satisfied, the labels of labeled data can be correctly propagated to the unlabeled data; and thus, the proposed approach utilizes the labeled and the unlabeled data more effectively than previous work. Experimental results are carried out upon several databases, and the advantage of the new approach is well demonstrated.",
"title": ""
},
{
"docid": "5706118011df482fdd1e3690c638e963",
"text": "This paper proposes a novel approach for segmenting primary video objects by using Complementary Convolutional Neural Networks (CCNN) and neighborhood reversible flow. The proposed approach first pre-trains CCNN on massive images with manually annotated salient objects in an end-to-end manner, and the trained CCNN has two separate branches that simultaneously handle two complementary tasks, i.e., foregroundness and backgroundness estimation. By applying CCNN on each video frame, the spatial foregroundness and backgroundness maps can be initialized, which are then propagated between various frames so as to segment primary video objects and suppress distractors. To enforce efficient temporal propagation, we divide each frame into superpixels and construct neighborhood reversible flow that reflects the most reliable temporal correspondences between superpixels in far-away frames. Within such flow, the initialized foregroundness and backgroundness can be efficiently and accurately propagated along the temporal axis so that primary video objects gradually pop-out and distractors are well suppressed. Extensive experimental results on three video datasets show that the proposed approach achieves impressive performance in comparisons with 18 state-of-the-art models.",
"title": ""
},
{
"docid": "3b26f9c91ee0eb76768403fcb9579003",
"text": "The major task of network embedding is to learn low-dimensional vector representations of social-network nodes. It facilitates many analytical tasks such as link prediction and node clustering and thus has attracted increasing attention. The majority of existing embedding algorithms are designed for unsigned social networks. However, many social media networks have both positive and negative links, for which unsigned algorithms have little utility. Recent findings in signed network analysis suggest that negative links have distinct properties and added value over positive links. This brings about both challenges and opportunities for signed network embedding. In addition, user attributes, which encode properties and interests of users, provide complementary information to network structures and have the potential to improve signed network embedding. Therefore, in this paper, we study the novel problem of signed social network embedding with attributes. We propose a novel framework SNEA, which exploits the network structure and user attributes simultaneously for network representation learning. Experimental results on link prediction and node clustering with real-world datasets demonstrate the effectiveness of SNEA.",
"title": ""
},
{
"docid": "09c7331d77c5a9a2812df90e6e9256ea",
"text": "We present a technique for approximating a light probe image as a constellation of light sources based on a median cut algorithm. The algorithm is efficient, simple to implement, and can realistically represent a complex lighting environment with as few as 64 point light sources.",
"title": ""
},
{
"docid": "95395c693b4cdfad722ae0c3545f45ef",
"text": "Aiming at automatic, convenient and non-instrusive motion capture, this paper presents a new generation markerless motion capture technique, the FlyCap system, to capture surface motions of moving characters using multiple autonomous flying cameras (autonomous unmanned aerial vehicles(UAVs) each integrated with an RGBD video camera). During data capture, three cooperative flying cameras automatically track and follow the moving target who performs large-scale motions in a wide space. We propose a novel non-rigid surface registration method to track and fuse the depth of the three flying cameras for surface motion tracking of the moving target, and simultaneously calculate the pose of each flying camera. We leverage the using of visual-odometry information provided by the UAV platform, and formulate the surface tracking problem in a non-linear objective function that can be linearized and effectively minimized through a Gaussian-Newton method. Quantitative and qualitative experimental results demonstrate the plausible surface and motion reconstruction results.",
"title": ""
},
{
"docid": "21db70be88df052de82990109941e49a",
"text": "We present an approach to automatically assign semantic labels to rooms reconstructed from 3D RGB maps of apartments. Evidence for the room types is generated using state-of-the-art deep-learning techniques for scene classification and object detection based on automatically generated virtual RGB views, as well as from a geometric analysis of the map's 3D structure. The evidence is merged in a conditional random field, using statistics mined from different datasets of indoor environments. We evaluate our approach qualitatively and quantitatively and compare it to related methods.",
"title": ""
},
{
"docid": "01a21dde4e7e14ed258cb05025ee4efc",
"text": "Computerized and, more recently, Internet-based treatments for depression have been developed and tested in controlled trials. The aim of this meta-analysis was to summarize the effects of these treatments and investigate characteristics of studies that may be related to the effects. In particular, the authors were interested in the role of personal support when completing a computerized treatment. Following a literature search and coding, the authors included 12 studies, with a total of 2446 participants. Ten of the 12 studies were delivered via the Internet. The mean effect size of the 15 comparisons between Internet-based and other computerized psychological treatments vs. control groups at posttest was d = 0.41 (95% confidence interval [CI]: 0.29-0.54). However, this estimate was moderated by a significant difference between supported (d = 0.61; 95% CI: 0.45-0.77) and unsupported (d = 0.25; 95% CI: 0.14-0.35) treatments. The authors conclude that although more studies are needed, Internet and other computerized treatments hold promise as potentially evidence-based treatments of depression.",
"title": ""
},
{
"docid": "a57aa7ff68f7259a9d9d4d969e603dcd",
"text": "Society has changed drastically over the last few years. But this is nothing new, or so it appears. Societies are always changing, just as people are always changing. And seeing as it is the people who form the societies, a constantly changing society is only natural. However something more seems to have happened over the last few years. Without wanting to frighten off the reader straight away, we can point to a diversity of social developments that indicate that the changes seem to be following each other faster, especially over the last few decades. We can for instance, point to the pluralisation (or a growing versatility), differentialisation and specialisation of society as a whole. On a more personal note, we see the diversification of communities, an emphasis on emancipation, individualisation and post-materialism and an increasing wish to live one's life as one wishes, free from social, religious or ideological contexts.",
"title": ""
},
{
"docid": "30235f6663f14ee6e9b963064566eabe",
"text": "Intentional change theory (ICT) explains sustainable leadership development in terms of the essential components of behavior, thoughts, feelings, and perceptions related to leadership effectiveness as a complex system (Boyatzis, 2001, 2006a, 2006b). This article reviews previous studies and expands the interpretation using complexity theory concepts focused on leadership development. Sustained, desired change represents a metamorphosis in actions, habits, or competencies associated with leadership effectiveness. It may be in dreams or aspirations. It may be in the way someone acts in certain situations. A person may refine her sensitivity to others (Boyatzis, in press), become more optimistic (Seligman & Csikszentmihalyi, 2000), or learn how to articulate a shared vision for those in her organization (Bennis & Nanus, 1985). These changes are desired in that the person thinks, feels, or acts in a specified manner. They are sustainable in that they endure. A sustained, desired change may also include the wish to maintain a current state, relationship, or habit, but maintaining the current state appears to require an investment of energy. In either situation, it requires intentional effort.",
"title": ""
},
{
"docid": "a987f009509e9c4f5c29b27275713eac",
"text": "PURPOSE\nThis article provides a critical overview of problem-based learning (PBL), its effectiveness for knowledge acquisition and clinical performance, and the underlying educational theory. The focus of the paper is on (1) the credibility of claims (both empirical and theoretical) about the ties between PBL and educational outcomes and (2) the magnitude of the effects.\n\n\nMETHOD\nThe author reviewed the medical education literature, starting with three reviews published in 1993 and moving on to research published from 1992 through 1998 in the primary sources for research in medical education. For each study the author wrote a summary, which included study design, outcome measures, effect sizes, and any other information relevant to the research conclusion.\n\n\nRESULTS AND CONCLUSION\nThe review of the literature revealed no convincing evidence that PBL improves knowledge base and clinical performance, at least not of the magnitude that would be expected given the resources required for a PBL curriculum. The results were considered in light of the educational theory that underlies PBL and its basic research. The author concludes that the ties between educational theory and research (both basic and applied) are loose at best.",
"title": ""
},
{
"docid": "45e5227a5b156806a3bdc560ce895651",
"text": "This paper presents reconfigurable RF integrated circuits (ICs) for a compact implementation of an intelligent RF front-end for multiband and multistandard applications. Reconfigurability has been addressed at each level starting from the basic elements to the RF blocks and the overall front-end architecture. An active resistor tunable from 400 to 1600 /spl Omega/ up to 10 GHz has been designed and an equivalent model has been extracted. A fully tunable active inductor using a tunable feedback resistor has been proposed that provides inductances between 0.1-15 nH with Q>50 in the C-band. To demonstrate reconfigurability at the block level, voltage-controlled oscillators with very wide tuning ranges have been implemented in the C-band using the proposed active inductor, as well as using a switched-spiral resonator with capacitive tuning. The ICs have been implemented using 0.18-/spl mu/m Si-CMOS and 0.18-/spl mu/m SiGe-BiCMOS technologies.",
"title": ""
}
] |
scidocsrr
|
c330cf827d5d780e09c2bba7eb78dcb0
|
Fast malware classification by automated behavioral graph matching
|
[
{
"docid": "b37de4587fbadad9258c1c063b03a07a",
"text": "Numerous attacks, such as worms, phishing, and botnets, threaten the availability of the Internet, the integrity of its hosts, and the privacy of its users. A core element of defense against these attacks is anti-virus(AV)–a service that detects, removes, and characterizes these threats. The ability of these products to successfully characterize these threats has far-reaching effects—from facilitating sharing across organizations, to detecting the emergence of new threats, and assessing risk in quarantine and cleanup. In this paper, we examine the ability of existing host-based anti-virus products to provide semantically meaningful information about the malicious software and tools (or malware) used by attackers. Using a large, recent collection of malware that spans a variety of attack vectors (e.g., spyware, worms, spam), we show that different AV products characterize malware in ways that are inconsistent across AV products, incomplete across malware, and that fail to be concise in their semantics. To address these limitations, we propose a new classification technique that describes malware behavior in terms of system state changes (e.g., files written, processes created) rather than in sequences or patterns of system calls. To address the sheer volume of malware and diversity of its behavior, we provide a method for automatically categorizing these profiles of malware into groups that reflect similar classes of behaviors and demonstrate how behavior-based clustering provides a more direct and effective way of classifying and analyzing Internet malware.",
"title": ""
},
{
"docid": "f395e3d72341bd20e1a16b97259bad7d",
"text": "Malicious software in form of Internet worms, computer viru ses, and Trojan horses poses a major threat to the security of network ed systems. The diversity and amount of its variants severely undermine the effectiveness of classical signature-based detection. Yet variants of malware f milies share typical behavioral patternsreflecting its origin and purpose. We aim to exploit these shared patterns for classification of malware and propose a m thod for learning and discrimination of malware behavior. Our method proceed s in three stages: (a) behavior of collected malware is monitored in a sandbox envi ro ment, (b) based on a corpus of malware labeled by an anti-virus scanner a malware behavior classifieris trained using learning techniques and (c) discriminativ e features of the behavior models are ranked for explanation of classifica tion decisions. Experiments with di fferent heterogeneous test data collected over several month s using honeypots demonstrate the e ffectiveness of our method, especially in detecting novel instances of malware families previously not recognized by commercial anti-virus software.",
"title": ""
},
{
"docid": "3c3f3a9d6897510d5d5d3d55c882502c",
"text": "Error-tolerant graph matching is a powerful concept that has various applications in pattern recognition and machine vision. In the present paper, a new distance measure on graphs is proposed. It is based on the maximal common subgraph of two graphs. The new measure is superior to edit distance based measures in that no particular edit operations together with their costs need to be defined. It is formally shown that the new distance measure is a metric. Potential algorithms for the efficient computation of the new measure are discussed. q 1998 Elsevier Science B.V. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "122c3bb1eef57338f841d9ad6b2756c0",
"text": "In this paper the concept of interval valued intuitionistic fuzzy soft rough sets is introduced. Also interval valued intuitionistic fuzzy soft rough set based multi criteria group decision making scheme is presented, which refines the primary evaluation of the whole expert group and enables us to select the optimal object in a most reliable manner. The proposed scheme is illustrated by an example regarding the candidate selection problem. 2010 AMS Classification: 54A40, 03E72, 03E02, 06D72",
"title": ""
},
{
"docid": "93d74028598d9d654ce198df606ba0ef",
"text": "Continually advancing technology has made it feasible to capture data online for onward transmission as a steady flow of newly generated data points, termed as data stream. Continuity and unboundedness of data streams make storage of data and multiple scans of data an impractical proposition for the purpose of knowledge discovery. Need to learn structures from data in streaming environment has been a driving force for making clustering a popular technique for knowledge discovery from data streams. Continuous nature of streaming data makes it infeasible to look for point membership among the clusters discovered so far, necessitating employment of a synopsis structure to consolidate incoming data points. This synopsis is exploited for building clustering scheme to meet subsequent user demands. The proposed Exclusive and Complete Clustering (ExCC) algorithm captures non-overlapping clusters in data streams with mixed attributes, such that each point either belongs to some cluster or is an outlier/noise. The algorithm is robust, adaptive to changes in data distribution and detects succinct outliers on-the-fly. It deploys a fixed granularity grid structure as synopsis and performs clustering by coalescing dense regions in grid. Speed-based pruning is applied to synopsis prior to clustering to ensure currency of discovered clusters. Extensive experimentation demonstrates that the algorithm is robust, identifies succinct outliers on-the-fly and is adaptive to change in the data distribution. ExCC algorithm is further evaluated for performance and compared with other contemporary algorithms.",
"title": ""
},
{
"docid": "3e62ac4e3476cc2999808f0a43a24507",
"text": "We present a detailed description of a new Bioconductor package, phyloseq, for integrated data and analysis of taxonomically-clustered phylogenetic sequencing data in conjunction with related data types. The phyloseq package integrates abundance data, phylogenetic information and covariates so that exploratory transformations, plots, and confirmatory testing and diagnostic plots can be carried out seamlessly. The package is built following the S4 object-oriented framework of the R language so that once the data have been input the user can easily transform, plot and analyze the data. We present some examples that highlight the methods and the ease with which we can leverage existing packages.",
"title": ""
},
{
"docid": "5288f4bbc2c9b8531042ce25b8df05b0",
"text": "Existing neural machine translation systems do not explicitly model what has been translated and what has not during the decoding phase. To address this problem, we propose a novel mechanism that separates the source information into two parts: translated Past contents and untranslated Future contents, which are modeled by two additional recurrent layers. The Past and Future contents are fed to both the attention model and the decoder states, which provides Neural Machine Translation (NMT) systems with the knowledge of translated and untranslated contents. Experimental results show that the proposed approach significantly improves the performance in Chinese-English, German-English, and English-German translation tasks. Specifically, the proposed model outperforms the conventional coverage model in terms of both the translation quality and the alignment error rate.",
"title": ""
},
{
"docid": "8d6ebefca528255bc14561e1106522af",
"text": "Constant power loads may yield instability due to the well-known negative impedance characteristic. This paper analyzes the factors that cause instability of a dc microgrid with multiple dc–dc converters. Two stabilization methods are presented for two operation modes: 1) constant voltage source mode; and 2) droop mode, and sufficient conditions for the stability of the dc microgrid are obtained by identifying the eigenvalues of the Jacobian matrix. The key is to transform the eigenvalue problem to a quadratic eigenvalue problem. When applying the methods in practical engineering, the salient feature is that the stability parameter domains can be estimated by the available constraints, such as the values of capacities, inductances, maximum load power, and distances of the cables. Compared with some classical methods, the proposed methods have wider stability region. The simulation results based on MATLAB/simulink platform verify the feasibility of the methods.",
"title": ""
},
{
"docid": "10b1bcf25d8a96c076c32d3c20ecb664",
"text": "Barrett’s esophagus (BE) is characterized by a distinct Th2-predominant cytokine profile. However, antigens that shift the immune response toward the Th2 profile are unknown. We examined the effects of rebamipide on the esophageal microbiome and BE development in a rat model. BE was induced by esophagojejunostomy in 8-week-old male Wistar rats. Rats were divided into control and rebamipide-treated group receiving either a normal or a 0.225 % rebamipide-containing diet, respectively, and killed 8, 16, 24, and 32 weeks after the operation. PCR-amplified 16S rDNAs extracted from esophageal samples were examined by terminal-restriction fragment length polymorphism (T-RFLP) analysis to assess microbiome composition. The dynamics of four bacterial genera (Lactobacillus, Clostridium, Streptococcus, and Enterococcus) were analyzed by real-time PCR. The incidences of BE in the control and rebamipide group at 24 and 32 weeks were 80 and 100, and 20 and 33 %, respectively. T-RFLP analysis of normal esophagus revealed that the proportion of Clostridium was 8.3 %, while that of Lactobacillales was 71.8 %. The proportions of Clostridium increased and that of Lactobacillales decreased at 8 weeks in both groups. Such changes were consistently observed in the control but not in the rebamipide group. Clostridium and Lactobacillus expression was lower and higher, respectively, in the rebamipide group than in the control group. Rebamipide reduced BE development and altered the esophageal microbiome composition, which might play a role in BE development.",
"title": ""
},
{
"docid": "4d73e82edf2a5ab8ee9191e1c1492af7",
"text": "This analysis focuses on several masculinities conceptualized by Holden Caulfield within the sociohistorical context of the 1950s. When it is understood by heterosexual men that to be masculine means to be emotionally detached and viewing women as sexual objects, they help propagate the belief that expressions of femininity and non-hegemonic masculinities are abnormal behavior and something to be demonized and punished. I propose that Holden’s “craziness” is the result of there being no positive images of heteronormative masculinity and no alternative to the rigid and strictly enforced roles offered to boys as they enter manhood. I suggest that a paradigm shift is needed that starts by collectively recognizing our current forms of patriarchy as harmful to everyone, followed by a reevaluation of how gender is prescribed to youth.",
"title": ""
},
{
"docid": "fa7682dc85d868e57527fdb3124b309c",
"text": "The seminal 2003 paper by Cosley, Lab, Albert, Konstan, and Reidl, demonstrated the susceptibility of recommender systems to rating biases. To facilitate browsing and selection, almost all recommender systems display average ratings before accepting ratings from users which has been shown to bias ratings. This effect is called Social Inuence Bias (SIB); the tendency to conform to the perceived \\norm\" in a community. We propose a methodology to 1) learn, 2) analyze, and 3) mitigate the effect of SIB in recommender systems. In the Learning phase, we build a baseline dataset by allowing users to rate twice: before and after seeing the average rating. In the Analysis phase, we apply a new non-parametric significance test based on the Wilcoxon statistic to test whether the data is consistent with SIB. If significant, we propose a Mitigation phase using polynomial regression and the Bayesian Information Criterion (BIC) to predict unbiased ratings. We evaluate our approach on a dataset of 9390 ratings from the California Report Card (CRC), a rating-based system designed to encourage political engagement. We found statistically significant evidence of SIB. Mitigating models were able to predict changed ratings with a normalized RMSE of 12.8% and reduce bias by 76.3%. The CRC, our data, and experimental code are available at: http://californiareportcard.org/data/",
"title": ""
},
{
"docid": "8ecd81f0078666a91a8c4183c2cb5a11",
"text": "Due to the broadcast nature of radio propagation, the wireless air interface is open and accessible to both authorized and illegitimate users. This completely differs from a wired network, where communicating devices are physically connected through cables and a node without direct association is unable to access the network for illicit activities. The open communications environment makes wireless transmissions more vulnerable than wired communications to malicious attacks, including both the passive eavesdropping for data interception and the active jamming for disrupting legitimate transmissions. Therefore, this paper is motivated to examine the security vulnerabilities and threats imposed by the inherent open nature of wireless communications and to devise efficient defense mechanisms for improving the wireless network security. We first summarize the security requirements of wireless networks, including their authenticity, confidentiality, integrity, and availability issues. Next, a comprehensive overview of security attacks encountered in wireless networks is presented in view of the network protocol architecture, where the potential security threats are discussed at each protocol layer. We also provide a survey of the existing security protocols and algorithms that are adopted in the existing wireless network standards, such as the Bluetooth, Wi-Fi, WiMAX, and the long-term evolution (LTE) systems. Then, we discuss the state of the art in physical-layer security, which is an emerging technique of securing the open communications environment against eavesdropping attacks at the physical layer. Several physical-layer security techniques are reviewed and compared, including information-theoretic security, artificial-noise-aided security, security-oriented beamforming, diversity-assisted security, and physical-layer key generation approaches. Since a jammer emitting radio signals can readily interfere with the legitimate wireless users, we also introduce the family of various jamming attacks and their countermeasures, including the constant jammer, intermittent jammer, reactive jammer, adaptive jammer, and intelligent jammer. Additionally, we discuss the integration of physical-layer security into existing authentication and cryptography mechanisms for further securing wireless networks. Finally, some technical challenges which remain unresolved at the time of writing are summarized and the future trends in wireless security are discussed.",
"title": ""
},
{
"docid": "30cd626772ad8c8ced85e8312d579252",
"text": "An off-state leakage current unique for short-channel SOI MOSFETs is reported. This off-state leakage is the amplification of gate-induced-drain-leakage current by the lateral bipolar transistor in an SOI device due to the floating body. The leakage current can be enhanced by as much as 100 times for 1/4 mu m SOI devices. This can pose severe constraints in future 0.1 mu m SOI device design. A novel technique was developed based on this mechanism to measure the lateral bipolar transistor current gain beta of SOI devices without using a body contact.<<ETX>>",
"title": ""
},
{
"docid": "cf369f232ba023e675f322f42a20b2c2",
"text": "Ring topology local area networks (LAN’s) using the “buffer insertion” access method have as yet received relatively little attention. In this paper we present details of a LAN of this.-, called SILK-system for integrated local communication (in German, “Kommunikation”). Sections of the paper describe the synchronous transmission technique of the ring channel, the time-multiplexed access of eight ports at each node, the “braided” interconnection for bypassing defective nodes, and the role of interface transformation units and user interfaces, as well as some traffic,characteristics and reliability aspects. SILK’S modularity and open system concept are demonstrated by the already implemented applications such as distributed text editing, local telephone or teletex exchange, and process control in a TV studio.",
"title": ""
},
{
"docid": "02cdba46c543f4fdf6489bd2eeb29929",
"text": "Deep neural networks excel at function approximation, yet they are typically trained from scratch for each new function. On the other hand, Bayesian methods, such as Gaussian Processes (GPs), exploit prior knowledge to quickly infer the shape of a new function at test time. Yet GPs are computationally expensive, and it can be hard to design appropriate priors. In this paper we propose a family of neural models, Conditional Neural Processes (CNPs), that combine the benefits of both. CNPs are inspired by the flexibility of stochastic processes such as GPs, but are structured as neural networks and trained via gradient descent. CNPs make accurate predictions after observing only a handful of training data points, yet scale to complex functions and large datasets. We demonstrate the performance and versatility of the approach on a range of canonical machine learning tasks, including regression, classification and image completion.",
"title": ""
},
{
"docid": "af459f8f89bd1f27595dd3c9be4baf13",
"text": "The recent successes in applying deep learning techniques to solve standard computer vision problems has aspired researchers to propose new computer vision problems in different domains. As previously established in the field, training data itself plays a significant role in the machine learning process, especially deep learning approaches which are data hungry. In order to solve each new problem and get a decent performance, a large amount of data needs to be captured which may in many cases pose logistical difficulties. Therefore, the ability to generate de novo data or expand an existing dataset, however small, in order to satisfy data requirement of current networks may be invaluable. Herein, we introduce a novel way to partition an action video clip into action, subject and context. Each part is manipulated separately and reassembled with our proposed video generation technique. Furthermore, our novel human skeleton trajectory generation along with our proposed video generation technique, enables us to generate unlimited action recognition training data. These techniques enables us to generate video action clips from an small set without costly and time-consuming data acquisition. Lastly, we prove through extensive set of experiments on two small human action recognition datasets, that this new data generation technique can improve the performance of current action recognition neural nets.",
"title": ""
},
{
"docid": "d2e0276ddab166a1126d832298042632",
"text": "Accurate knowledge characteristics of a power system are very essential to have an idea of its operation conditions. The ferroresonant circuit exhibits oscillations characterized by frequency that is in accordance to the voltage applied on, and the currents circulating through the components of the circuit. The dominant frequency of these periodic oscillations could be the network frequency (fundamental ferroresonance) or a fraction of it (sub-harmonic ferroresonance). This phenomenon (both fundamental and sub-harmonic ferroresonance) is characterized by overvoltages or overcurrents, which may provoke the equipment damage and malfunctioning of the protective devices. The article presents an approach to teaching high voltage laboratory using specially designed exercises that can be done using MATLAB 6.5. This article present a MATLAB based technology to simulate ferroresonance in power system. Evaluation of the simulation with more than 30 students is very positive in terms of their developing confidence in and understanding of this simulation. 2009 Wiley Periodicals, Inc. Comput Appl Eng Educ 19: 347 357, 2011; View this article online at wileyonlinelibrary.com; DOI 10.1002/cae.20316",
"title": ""
},
{
"docid": "e9c450173afdd9aa329e290a18dafac8",
"text": "The gap between domain experts and natural language processing expertise is a barrier to extracting understanding from clinical text. We describe a prototype tool for interactive review and revision of natural language processing models of binary concepts extracted from clinical notes. We evaluated our prototype in a user study involving 9 physicians, who used our tool to build and revise models for 2 colonoscopy quality variables. We report changes in performance relative to the quantity of feedback. Using initial training sets as small as 10 documents, expert review led to final F1scores for the \"appendiceal-orifice\" variable between 0.78 and 0.91 (with improvements ranging from 13.26% to 29.90%). F1for \"biopsy\" ranged between 0.88 and 0.94 (-1.52% to 11.74% improvements). The average System Usability Scale score was 70.56. Subjective feedback also suggests possible design improvements.",
"title": ""
},
{
"docid": "6d56e0db0ebdfe58152cb0faa73453c4",
"text": "Chatbot is a computer application that interacts with users using natural language in a similar way to imitate a human travel agent. A successful implementation of a chatbot system can analyze user preferences and predict collective intelligence. In most cases, it can provide better user-centric recommendations. Hence, the chatbot is becoming an integral part of the future consumer services. This paper is an implementation of an intelligent chatbot system in travel domain on Echo platform which would gather user preferences and model collective user knowledge base and recommend using the Restricted Boltzmann Machine (RBM) with Collaborative Filtering. With this chatbot based on DNN, we can improve human to machine interaction in the travel domain.",
"title": ""
},
{
"docid": "32fa965f20be0ae72c32ef7f096b32d4",
"text": "We systematically explored a spectrum of normalization algorithms related to Batch Normalization (BN) and propose a generalized formulation that simultaneously solves two major limitations of BN: (1) online learning and (2) recurrent learning. Our proposal is simpler and more biologically-plausible. Unlike previous approaches, our technique can be applied out of the box to all learning scenarios (e.g., online learning, batch learning, fully-connected, convolutional, feedforward, recurrent and mixed — recurrent and convolutional) and compare favorably with existing approaches. We also propose Lp Normalization for normalizing by different orders of statistical moments. In particular, L1 normalization is well-performing, simple to implement, fast to compute, more biologically-plausible and thus ideal for GPU or hardware implementations. This work was supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF 1231216. 1 ar X iv :1 61 0. 06 16 0v 1 [ cs .L G ] 1 9 O ct 2 01 6 Approach FF & FC FF & Conv Rec & FC Rec & Conv Online Learning Small Batch All Combined Original Batch Normalization(BN) 3 3 7 7 7 Suboptimal 7 Time-specific BN 3 3 Limited Limited 7 Suboptimal 7 Layer Normalization 3 7* 3 7* 3 3 7* Streaming Normalization 3 3 3 3 3 3 3 Table 1: An overview of normalization techiques for different tasks. 3: works well. 7: does not work well. FF: Feedforward. Rec: Recurrent. FC: Fully-connected. Conv: convolutional. Limited: time-specific BN requires recording normalization statistics for each timestep and thus may not generalize to novel sequence length. *Layer normalization does not fail on these tasks but perform significantly worse than the best approaches.",
"title": ""
},
{
"docid": "2856ac46d6618b64d6b29969b13f1f4b",
"text": "The predominant focus in the neurobiological study of memory has been on remembering (persistence). However, recent studies have considered the neurobiology of forgetting (transience). Here we draw parallels between neurobiological and computational mechanisms underlying transience. We propose that it is the interaction between persistence and transience that allows for intelligent decision-making in dynamic, noisy environments. Specifically, we argue that transience (1) enhances flexibility, by reducing the influence of outdated information on memory-guided decision-making, and (2) prevents overfitting to specific past events, thereby promoting generalization. According to this view, the goal of memory is not the transmission of information through time, per se. Rather, the goal of memory is to optimize decision-making. As such, transience is as important as persistence in mnemonic systems.",
"title": ""
},
{
"docid": "7272ebab22d3efec95792acece86b4dd",
"text": "Many of today's machine learning (ML) systems are built by reusing an array of, often pre-trained, primitive models, each fulfilling distinct functionality (e.g., feature extraction). The increasing use of primitive models significantly simplifies and expedites the development cycles of ML systems. Yet, because most of such models are contributed and maintained by untrusted sources, their lack of standardization or regulation entails profound security implications, about which little is known thus far. In this paper, we demonstrate that malicious primitive models pose immense threats to the security of ML systems. We present a broad class of model-reuse attacks wherein maliciously crafted models trigger host ML systems to misbehave on targeted inputs in a highly predictable manner. By empirically studying four deep learning systems (including both individual and ensemble systems) used in skin cancer screening, speech recognition, face verification, and autonomous steering, we show that such attacks are (i) effective - the host systems misbehave on the targeted inputs as desired by the adversary with high probability, (ii) evasive - the malicious models function indistinguishably from their benign counterparts on non-targeted inputs, (iii) elastic - the malicious models remain effective regardless of various system design choices and tuning strategies, and (iv) easy - the adversary needs little prior knowledge about the data used for system tuning or inference. We provide analytical justification for the effectiveness of model-reuse attacks, which points to the unprecedented complexity of today's primitive models. This issue thus seems fundamental to many ML systems. We further discuss potential countermeasures and their challenges, which lead to several promising research directions.",
"title": ""
},
{
"docid": "e7865d56e092376493090efc48a7e238",
"text": "Machine learning techniques are applied to the task of context awareness, or inferring aspects of the user's state given a stream of inputs from sensors worn by the person. We focus on the task of indoor navigation and show that, by integrating information from accelerometers, magnetometers and temperature and light sensors, we can collect enough information to infer the user's location. However, our navigation algorithm performs very poorly, with almost a 50% error rate, if we use only the raw sensor signals. Instead, we introduce a \"data cooking\" module that computes appropriate high-level features from the raw sensor data. By introducing these high-level features, we are able to reduce the error rate to 2% in our example environment.",
"title": ""
}
] |
scidocsrr
|
c131c4df99ebd796b36f4054268c444b
|
Mobile medical and health apps: state of the art, concerns, regulatory control and
certification
|
[
{
"docid": "f9d2ccdbbc2dd5a0ea5635c53a6b1e50",
"text": "OBJECTIVES\nThe article provides an overview of current trends in personal sensor, signal and imaging informatics, that are based on emerging mobile computing and communications technologies enclosed in a smartphone and enabling the provision of personal, pervasive health informatics services.\n\n\nMETHODS\nThe article reviews examples of these trends from the PubMed and Google scholar literature search engines, which, by no means claim to be complete, as the field is evolving and some recent advances may not be documented yet.\n\n\nRESULTS\nThere exist critical technological advances in the surveyed smartphone technologies, employed in provision and improvement of diagnosis, acute and chronic treatment and rehabilitation health services, as well as in education and training of healthcare practitioners. However, the most emerging trend relates to a routine application of these technologies in a prevention/wellness sector, helping its users in self-care to stay healthy.\n\n\nCONCLUSIONS\nSmartphone-based personal health informatics services exist, but still have a long way to go to become an everyday, personalized healthcare-provisioning tool in the medical field and in a clinical practice. Key main challenge for their widespread adoption involve lack of user acceptance striving from variable credibility and reliability of applications and solutions as they a) lack evidence- based approach; b) have low levels of medical professional involvement in their design and content; c) are provided in an unreliable way, influencing negatively its usability; and, in some cases, d) being industry-driven, hence exposing bias in information provided, for example towards particular types of treatment or intervention procedures.",
"title": ""
}
] |
[
{
"docid": "cb5d3d06c46266c9038aea9d18d4ae69",
"text": "Signal distortion of photoplethysmographs (PPGs) due to motion artifacts has been a limitation for developing real-time, wearable health monitoring devices. The artifacts in PPG signals are analyzed by comparing the frequency of the PPG with a reference pulse and daily life motions, including typing, writing, tapping, gesturing, walking, and running. Periodical motions in the range of pulse frequency, such as walking and running, cause motion artifacts. To reduce these artifacts in real-time devices, a least mean square based active noise cancellation method is applied to the accelerometer data. Experiments show that the proposed method recovers pulse from PPGs efficiently.",
"title": ""
},
{
"docid": "fd3e8dda962bf5f9f47540f1ec22bf63",
"text": "Dandruff is a common scalp disorder affecting almost half of the population at the prepubertal age and of any gender and ethnicity. The fungus Malassezia furfur, a lipophilic yeast is widely accepted as the causative agent of dandruff, which due to its lipase activity releases proinflammatory free fatty acids causing dermal inflammation and tissue damage. Currently available treatment options of chemical origin have various limitations, either due to poor clinical efficiency or compliance issues. Also, these drugs are unable to prevent recurrence, which is the most common problem. Due to this, attention is shifting towards herbal remedies with aromatic plants and their essential oils being widely popular for their empirically antifungal properties. The present study was aimed to evaluate the potential inhibitory effects of essential oils of indigenous plants on M. furfur. The antifungal activity of four aromatic oils of kapur tulsi, cinnamon, eucalyptus, cajeput along with one fixed oil of karanj was screened alone or in combinations using tea tree oil and ketoconazole as standards. Out of five selected oils, three oils showed activity in the order cinnamon oil >kapur tulsi oil >cajeput while karanj oil and eucalyptus oil were inactive against the fungus. The minimum inhibitory concentration (MICs) of the active oils was evaluated using broth dilution method. The cinnamon oil, kapur tulsi oil and cajeput oil on evaluating in different combinations showed synergistic effect with a mixture of cinnamon oil and kapur tulsi oil exhibiting the best activity. The study reports effectiveness of kapur tulsi oil against M. furfur for the first time and further the synergistic combinations of oils is also being reported for the first time. The findings provide promising information on the potential use of essential oils for the treatment of dandruff.",
"title": ""
},
{
"docid": "d6f15e49f3ecdbe3e2949520c3e0c643",
"text": "In this paper we explore the connection between clustering categorical data and entropy: clusters of similar poi lower entropy than those of dissimilar ones. We use this connection to design an incremental heuristic algorithm, COOLCAT, which is capable of efficiently clustering large data sets of records with categorical attributes, and data streams. In contrast with other categorical clustering algorithms published in the past, COOLCAT's clustering results are very stable for different sample sizes and parameter settings. Also, the criteria for clustering is a very intuitive one, since it is deeply rooted on the well-known notion of entropy. Most importantly, COOLCAT is well equipped to deal with clustering of data streams(continuously arriving streams of data point) since it is an incremental algorithm capable of clustering new points without having to look at every point that has been clustered so far. We demonstrate the efficiency and scalability of COOLCAT by a series of experiments on real and synthetic data sets.",
"title": ""
},
{
"docid": "855bffa65ab1a459223ae73aee777e85",
"text": "We present a program synthesis-oriented dataset consisting of human written problem statements and solutions for these problems. The problem statements were collected via crowdsourcing and the program solutions were extracted from humanwritten solutions in programming competitions, accompanied by input/output examples. We propose using this dataset for the program synthesis tasks aimed for working with real user-generated data. As a baseline we present few models, with best model achieving 5.6% accuracy, showcasing both complexity of the dataset and large room for",
"title": ""
},
{
"docid": "8ed1f9194914b5529b4e89444b5feb45",
"text": "support for the camera interfaces. My colleagues Felix Woelk and Kevin Köser I would like to thank for many fruitful discussions. I thank our system administrator Torge Storm for always fixing my machine and providing enough data space for all my sequences which was really a hard job. Of course I also would like to thank the other members of the group Jan Woetzel, Daniel Grest, Birger Streckel and Renate Staecker for their help, the discussions and providing the exciting working environment. Last but not least, I would like to express my gratitude to my wife Miriam for always supporting me and my work. I also want to thank my sons Joshua and Noah for suffering under my paper writing. Finally I thank my parents for always supporting my education and my work.",
"title": ""
},
{
"docid": "3bab4fb7a4d062c80e80f12b77c228ac",
"text": "Holoprosencephaly encompasses a series of midline defects of the brain and face. Most cases are associated with severe malformations of the brain which are incompatible with life. At the other end of the spectrum, however, are patients with midline facial defects and normal or near-normal brain development. Although some are mentally retarded, others have the potential for achieving near-normal mentality and a full life expectancy. The latter patients do not fit clearly into the previously defined classification system. Proposed is a new classification focusing on those patients with normal or lobar brain morphology but with a wide range of facial anomalies. The classification aids in planning treatment. Coupled with CT scan findings of the brain and a period of observation, patients unlikely to thrive can be distinguished from those who will benefit from surgical intervention. Repair of the false median cleft lip and palate may suffice in patients with moderate mental retardation. Patients exhibiting normal or near-normal mentality with hypotelorbitism and nasomaxillary hypoplasia can be treated with a simultaneous midface advancement, facial bipartition expansion, and nasal reconstruction.",
"title": ""
},
{
"docid": "1cd2270eb217e6233a60e002478c1ea0",
"text": "We describe work on the visualization of bibliographic data and, to aid in this task, the application of numerical techniques for multidimensional scaling.\nMany areas of scientific research involve complex multivariate data. One example of this is Information Retrieval. Document comparisons may be done using a large number of variables. Such conditions do not favour the more well-known methods of visualization and graphical analysis, as it is rarely feasible to map each variable onto one aspect of even a three-dimensional, coloured and textured space.\nBead is a prototype system for the graphically-based exploration of information. In this system, articles in a bibliography are represented by particles in 3-space. By using physically-based modelling techniques to take advantage of fast methods for the approximation of potential fields, we represent the relationships between articles by their relative spatial positions. Inter-particle forces tend to make similar articles move closer to one another and dissimilar ones move apart. The result is a 3D scene which can be used to visualize patterns in the high-D information space.",
"title": ""
},
{
"docid": "ccee41a49596a7cdf19ac6e7893e7766",
"text": "In industrial fabric productions, real time systems are needed to detect the fabric defects. This paper presents a real time defect detection approach which compares the time performances of Matlab and C++ programming languages. In the proposed method, important texture features of the fabric images are extracted using CoHOG method. Artificial neural network is used to classify the fabric defects. The developed method has been applied to detect the knitting fabric defects on a circular knitting machine. An overall defect detection success rate of 93% is achieved for the Matlab and C++ applications. To give an idea to the researches in defect detection area, real time operation speeds of Matlab and C++ codes have been examined. Especially, the number of images that can be processed in one second has been determined. While the Matlab based coding can process 3 images in 1 second, C++/Opencv based coding can process 55 images in 1 second. Previous works have rarely included the practical comparative evaluations of software environments. Therefore, we believe that the results of our industrial experiments will be a valuable resource for future works in this area.",
"title": ""
},
{
"docid": "08b845d6e8770e7f4ee17c977f6878d1",
"text": "PURPOSE\nThe present study describes the results of using a processed nerve allograft, Avance Nerve Graft, as an extracellular matrix scaffold for the reconstruction of lingual nerve (LN) and inferior alveolar nerve (IAN) discontinuities.\n\n\nPATIENTS AND METHODS\nA retrospective analysis of the neurosensory outcomes for 26 subjects with 28 LN and IAN discontinuities reconstructed with a processed nerve allograft was conducted to determine the treatment effectiveness and safety. Sensory assessments were conducted preoperatively and 3, 6, and 12 months after surgical reconstruction. The outcomes population, those with at least 6 months of postoperative follow-up, included 21 subjects with 23 nerve defects. The neurosensory assessments included brush stroke directional sensation, static 2-point discrimination, contact detection, pressure pain threshold, and pressure pain tolerance. Using the clinical neurosensory testing scale, sensory impairment scores were assigned preoperatively and at each follow-up appointment. Improvement was defined as a score of normal, mild, or moderate.\n\n\nRESULTS\nThe neurosensory outcomes from LNs and IANs that had been microsurgically repaired with a processed nerve allograft were promising. Of those with nerve discontinuities treated, 87% had improved neurosensory scores with no reported adverse experiences. Similar levels of improvement, 87% for the LNs and 88% for the IANs, were achieved for both nerve types. Also, 100% sensory improvement was achieved in injuries repaired within 90 days of the injury compared with 77% sensory improvement in injuries repaired after 90 days.\n\n\nCONCLUSIONS\nThese results suggest that processed nerve allografts are an acceptable treatment option for reconstructing trigeminal nerve discontinuities. Additional studies will focus on reviewing the outcomes of additional cases.",
"title": ""
},
{
"docid": "63fef6099108f7990da0a7687e422e14",
"text": "The IWSLT 2017 evaluation campaign has organised three tasks. The Multilingual task, which is about training machine translation systems handling many-to-many language directions, including so-called zero-shot directions. The Dialogue task, which calls for the integration of context information in machine translation, in order to resolve anaphoric references that typically occur in human-human dialogue turns. And, finally, the Lecture task, which offers the challenge of automatically transcribing and translating real-life university lectures. Following the tradition of these reports, we will described all tasks in detail and present the results of all runs submitted by their participants.",
"title": ""
},
{
"docid": "ea959ccd4eb6b6ac1d2acd2bfde7c633",
"text": "This paper proposes a mixed-initiative feature engineering approach using explicit knowledge captured in a knowledge graph complemented by a novel interactive visualization method. Using the explicitly captured relations and dependencies between concepts and their properties, feature engineering is enabled in a semi-automatic way. Furthermore, the results (and decisions) obtained throughout the process can be utilized for refining the features and the knowledge graph. Analytical requirements can then be conveniently captured for feature engineering -- enabling integrated semantics-driven data analysis and machine learning.",
"title": ""
},
{
"docid": "7063d3eb38008bcd344f0ae1508cca61",
"text": "The fitness of an evolutionary individual can be understood in terms of its two basic components: survival and reproduction. As embodied in current theory, trade-offs between these fitness components drive the evolution of life-history traits in extant multicellular organisms. Here, we argue that the evolution of germ-soma specialization and the emergence of individuality at a new higher level during the transition from unicellular to multicellular organisms are also consequences of trade-offs between the two components of fitness-survival and reproduction. The models presented here explore fitness trade-offs at both the cell and group levels during the unicellular-multicellular transition. When the two components of fitness negatively covary at the lower level there is an enhanced fitness at the group level equal to the covariance of components at the lower level. We show that the group fitness trade-offs are initially determined by the cell level trade-offs. However, as the transition proceeds to multicellularity, the group level trade-offs depart from the cell level ones, because certain fitness advantages of cell specialization may be realized only by the group. The curvature of the trade-off between fitness components is a basic issue in life-history theory and we predict that this curvature is concave in single-celled organisms but becomes increasingly convex as group size increases in multicellular organisms. We argue that the increasingly convex curvature of the trade-off function is driven by the initial cost of reproduction to survival which increases as group size increases. To illustrate the principles and conclusions of the model, we consider aspects of the biology of the volvocine green algae, which contain both unicellular and multicellular members.",
"title": ""
},
{
"docid": "77f5216ede8babf4fb3b2bcbfc9a3152",
"text": "Various aspects of the theory of random walks on graphs are surveyed. In particular, estimates on the important parameters of access time, commute time, cover time and mixing time are discussed. Connections with the eigenvalues of graphs and with electrical networks, and the use of these connections in the study of random walks is described. We also sketch recent algorithmic applications of random walks, in particular to the problem of sampling.",
"title": ""
},
{
"docid": "be70a14152656eb886c8a28e7e0dd613",
"text": "OBJECTIVES\nTranscutaneous electrical nerve stimulation (TENS) is an analgesic current that is used in many acute and chronic painful states. The aim of this study was to investigate central pain modulation by low-frequency TENS.\n\n\nMETHODS\nTwenty patients diagnosed with subacromial impingement syndrome of the shoulder were enrolled in the study. Patients were randomized into 2 groups: low-frequency TENS and sham TENS. Painful stimuli were delivered during which functional magnetic resonance imaging scans were performed, both before and after treatment. Ten central regions of interest that were reported to have a role in pain perception were chosen and analyzed bilaterally on functional magnetic resonance images. Perceived pain intensity during painful stimuli was evaluated using visual analog scale (VAS).\n\n\nRESULTS\nIn the low-frequency TENS group, there was a statistically significant decrease in the perceived pain intensity and pain-specific activation of the contralateral primary sensory cortex, bilateral caudal anterior cingulate cortex, and of the ipsilateral supplementary motor area. There was a statistically significant correlation between the change of VAS value and the change of activity in the contralateral thalamus, prefrontal cortex, and the ipsilateral posterior parietal cortex. In the sham TENS group, there was no significant change in VAS value and activity of regions of interest.\n\n\nDISCUSSION\nWe suggest that a 1-session low-frequency TENS may induce analgesic effect through modulation of discriminative, affective, and motor aspects of central pain perception.",
"title": ""
},
{
"docid": "e73de235f2b54eb62159fa52d75be74f",
"text": "Building on the success of recent discriminative midlevel elements, we propose a surprisingly simple approach for object detection which performs comparable to the current state-of-the-art approaches on PASCAL VOC comp-3 detection challenge (no external data). Through extensive experiments and ablation analysis, we show how our approach effectively improves upon the HOG-based pipelines by adding an intermediate mid-level representation for the task of object detection. This representation is easily interpretable and allows us to visualize what our object detector “sees”. We also discuss the insights our approach shares with CNN-based methods, such as sharing representation between categories helps.",
"title": ""
},
{
"docid": "ae5c21fa28694728aca532a582f612c3",
"text": "The purpose of this study was to apply cross-education during 4 wk of unilateral limb immobilization using a shoulder sling and swathe to investigate the effects on muscle strength, muscle size, and muscle activation. Twenty-five right-handed participants were assigned to one of three groups as follows: the Immob + Train group wore a sling and swathe and strength trained (n = 8), the Immob group wore a sling and swathe and did not strength train (n = 8), and the Control group received no treatment (n = 9). Immobilization was applied to the nondominant (left) arm. Strength training consisted of maximal isometric elbow flexion and extension of the dominant (right) arm 3 days/wk. Torque (dynamometer), muscle thickness (ultrasound), maximal voluntary activation (interpolated twitch), and electromyography (EMG) were measured. The change in right biceps and triceps brachii muscle thickness [7.0 ± 1.9 and 7.1 ± 2.2% (SE), respectively] was greater for Immob + Train than Immob (0.4 ± 1.2 and -1.9 ± 1.7%) and Control (0.8 ± 0.5 and 0.0 ± 1.1%, P < 0.05). Left biceps and triceps brachii muscle thickness for Immob + Train (2.2 ± 0.7 and 3.4 ± 2.1%, respectively) was significantly different from Immob (-2.8 ± 1.1 and -5.2 ± 2.7%, respectively, P < 0.05). Right elbow flexion strength for Immob + Train (18.9 ± 5.5%) was significantly different from Immob (-1.6 ± 4.0%, P < 0.05). Right and left elbow extension strength for Immob + Train (68.1 ± 25.9 and 32.2 ± 9.0%, respectively) was significantly different from the respective limb of Immob (1.3 ± 7.7 and -6.1 ± 7.8%) and Control (4.7 ± 4.7 and -0.2 ± 4.5%, P < 0.05). Immobilization in a sling and swathe decreased strength and muscle size but had no effect on maximal voluntary activation or EMG. The cross-education effect on the immobilized limb was greater after elbow extension training. This study suggests that strength training the nonimmobilized limb benefits the immobilized limb for muscle size and strength.",
"title": ""
},
{
"docid": "6490b984de3a9769cdae92208e7bb26d",
"text": "A new perspective on the topic of antibiotic resistance is beginning to emerge based on a broader evolutionary and ecological understanding rather than from the traditional boundaries of clinical research of antibiotic-resistant bacterial pathogens. Phylogenetic insights into the evolution and diversity of several antibiotic resistance genes suggest that at least some of these genes have a long evolutionary history of diversification that began well before the 'antibiotic era'. Besides, there is no indication that lateral gene transfer from antibiotic-producing bacteria has played any significant role in shaping the pool of antibiotic resistance genes in clinically relevant and commensal bacteria. Most likely, the primary antibiotic resistance gene pool originated and diversified within the environmental bacterial communities, from which the genes were mobilized and penetrated into taxonomically and ecologically distant bacterial populations, including pathogens. Dissemination and penetration of antibiotic resistance genes from antibiotic producers were less significant and essentially limited to other high G+C bacteria. Besides direct selection by antibiotics, there is a number of other factors that may contribute to dissemination and maintenance of antibiotic resistance genes in bacterial populations.",
"title": ""
},
{
"docid": "362ce6581dee5023c9d548b634153345",
"text": "In most practical situations, the compression or transmission of images and videos creates distortions that will eventually be perceived by a human observer. Vice versa, image and video restoration techniques, such as inpainting or denoising, aim to enhance the quality of experience of human viewers. Correctly assessing the similarity between an image and an undistorted reference image as subjectively experienced by a human viewer can thus lead to significant improvements in any transmission, compression, or restoration system. This paper introduces the Haar wavelet-based perceptual similarity index (HaarPSI), a novel and computationally inexpensive similarity measure for full reference image quality assessment. The HaarPSI utilizes the coefficients obtained from a Haar wavelet decomposition to assess local similarities between two images, as well as the relative importance of image areas. The consistency of the HaarPSI with the human quality of experience was validated on four large benchmark databases containing thousands of differently distorted images. On these databases, the HaarPSI achieves higher correlations with human opinion scores than state-of-the-art full reference similarity measures like the structural similarity index (SSIM), the feature similarity index (FSIM), and the visual saliency-based index (VSI). Along with the simple computational structure and the short execution time, these experimental results suggest a high applicability of the HaarPSI in real world tasks.",
"title": ""
},
{
"docid": "06fbad57a95cd6cefb08ebaf668bc9de",
"text": "This paper presents an approach to predict future motion of a moving object based on its past movement. This approach is capable of learning ob ject movement in an open environment, which is one of the limitions in some prior w rks. The proposed approach exploits the similarities of short-term movement behaviors by modeling a trajectory as concatenation of short segments. These short egments are assumed to be noisy realizations of latent segments. The transitions b etween the underlying latent segments are assumed to follow a Markov model. This predicti ve model was applied to two real-world applications and yielded favorable perfo rmance on both tasks.",
"title": ""
}
] |
scidocsrr
|
7563e1651f2361aa859ec0d45ff4a1c8
|
Data Noising as Smoothing in Neural Network Language Models
|
[
{
"docid": "6a4cd21704bfbdf6fb3707db10f221a8",
"text": "Learning long term dependencies in recurrent networks is difficult due to vanishing and exploding gradients. To overcome this difficulty, researchers have developed sophisticated optimization techniques and network architectures. In this paper, we propose a simpler solution that use recurrent neural networks composed of rectified linear units. Key to our solution is the use of the identity matrix or its scaled version to initialize the recurrent weight matrix. We find that our solution is comparable to a standard implementation of LSTMs on our four benchmarks: two toy problems involving long-range temporal structures, a large language modeling problem and a benchmark speech recognition problem.",
"title": ""
},
{
"docid": "f2fb948a8e133be27dd2d27a3601606f",
"text": "If a document is about travel, we may expect that short snippets of the document should also be about travel. We introduce a general framework for incorporating these types of invariances into a discriminative classifier. The framework imagines data as being drawn from a slice of a Lévy process. If we slice the Lévy process at an earlier point in time, we obtain additional pseudo-examples, which can be used to train the classifier. We show that this scheme has two desirable properties: it preserves the Bayes decision boundary, and it is equivalent to fitting a generative model in the limit where we rewind time back to 0. Our construction captures popular schemes such as Gaussian feature noising and dropout training, as well as admitting new generalizations.",
"title": ""
}
] |
[
{
"docid": "d20068a72753d8c7238b1c0734ed5b2e",
"text": "Left atrial ablation is increasingly used to treat patients with symptomatic atrial fibrillation (AF). Prior to ablation, exclusion of left atrial appendage (LAA) thrombus is important. Whether ECG-gated dual-source computed tomography (DSCT) provides a sensitive means of detecting LAA thrombus in patients undergoing percutaneous AF ablation is unknown. Thus, we sought to determine the utility of ECG-gated DSCT in detecting LAA thrombus in patients with AF. A total of 255 patients (age 58 ± 11 years, 78% male, ejection fraction 58 ± 9%) who underwent ECG-gated DSCT and transesophageal echocardiography (TEE) prior to AF ablation between February 2006 and October 2007 were included. CHADS2 score and demographic data were obtained prospectively. Gated DSCT images were independently reviewed by two cardiac imagers blinded to TEE findings. The LAA was either defined as normal (fully opacified) or abnormal (under-filled) by DSCT. An under-filled LAA was identified in 33 patients (12.9%), of whom four had thrombus confirmed by TEE. All patients diagnosed with LAA thrombus using TEE also had an abnormal LAA by gated DSCT. Thus, sensitivity and specificity for gated DSCT were 100% and 88%, respectively. No cases of LAA filling defects were observed in patients <51 years old with a CHADS2 of 0. In patients referred for AF ablation, thrombus is uncommon in the absence of additional risk factors. Gated DSCT provides excellent sensitivity for the detection of thrombus. Thus, in AF patients with a CHADS2 of 0, gated DSCT may provide a useful stand-alone imaging modality.",
"title": ""
},
{
"docid": "5cd6debed0333d480aeafe406f526d2b",
"text": "In the coming advanced age society, an innovative technology to assist the activities of daily living of elderly and disabled people and the heavy work in nursing is desired. To develop such a technology, an actuator safe and friendly for human is required. It should be small, lightweight and has to provide a proper softness. A pneumatic rubber artificial muscle is available as such actuators. We have developed some types of pneumatic rubber artificial muscles and applied them to wearable power assist devices. A wearable power assist device is equipped to the human body to assist the muscular force, which supports activities of daily living, rehabilitation, heavy working, training and so on. In this paper, some types of pneumatic rubber artificial muscles developed in our laboratory are introduced. Further, two kinds of wearable power assist devices driven with the rubber artificial muscles are described. Some evaluations can clarify the effectiveness of pneumatic rubber artificial muscle for such an innovative human assist technology.",
"title": ""
},
{
"docid": "87e315548e67f8de46ad0cb3db8b7aaa",
"text": "We study answer selection for question answering, in which given a question and a set of candidate answer sentences, the goal is to identify the subset that contains the answer. Unlike previous work which treats this task as a straightforward pointwise classification problem, we model this problem as a ranking task and propose a pairwise ranking approach that can directly exploit existing pointwise neural network models as base components. We extend the Noise-Contrastive Estimation approach with a triplet ranking loss function to exploit interactions in triplet inputs over the question paired with positive and negative examples. Experiments on TrecQA and WikiQA datasets show that our approach achieves state-of-the-art effectiveness without the need for external knowledge sources or feature engineering.",
"title": ""
},
{
"docid": "99511c1267d396d3745f075a40a06507",
"text": "Problem Description: It should be well known that processors are outstripping memory performance: specifically that memory latencies are not improving as fast as processor cycle time or IPC or memory bandwidth. Thought experiment: imagine that a cache miss takes 10000 cycles to execute. For such a processor instruction level parallelism is useless, because most of the time is spent waiting for memory. Branch prediction is also less effective, since most branches can be determined with data already in registers or in the cache; branch prediction only helps for branches which depend on outstanding cache misses. At the same time, pressures for reduced power consumption mount. Given such trends, some computer architects in industry (although not Intel EPIC) are talking seriously about retreating from out-of-order superscalar processor architecture, and instead building simpler, faster, dumber, 1-wide in-order processors with high degrees of speculation. Sometimes this is proposed in combination with multiprocessing and multithreading: tolerate long memory latencies by switching to other processes or threads. I propose something different: build narrow fast machines but use intelligent logic inside the CPU to increase the number of outstanding cache misses that can be generated from a single program. By MLP I mean simply the number of outstanding cache misses that can be generated (by a single thread, task, or program) and executed in an overlapped manner. It does not matter what sort of execution engine generates the multiple outstanding cache misses. An out-of-order superscalar ILP CPU may generate multiple outstanding cache misses, but 1-wide processors can be just as effective. Change the metrics: total execution time remains the overall goal, but instead of reporting IPC as an approximation to this, we must report MLP. Limit studies should be in terms of total number of non-overlapped cache misses on critical path. Now do the research: Many present-day hot topics in computer architecture help ILP, but do not help MLP. As mentioned above, predicting branch directions for branches that can be determined from data already in the cache or in registers does not help MLP for extremely long latencies. Similarly, prefetching of data cache misses for array processing codes does not help MLP – it just moves it around. Instead, investigate microarchitectures that help MLP: (0) Trivial case – explicit multithreading, like SMT. (1) Slightly less trivial case – implicitly multithread single programs, either by compiler software on an MT machine, or by a hybrid, such as …",
"title": ""
},
{
"docid": "4de437aa5fe1b27ebba232f0efe82b02",
"text": "Most people do not interact with Semantic Web data directly. Unless they have the expertise to understand the underlying technology, they need textual or visual interfaces to help them make sense of it. We explore the problem of generating natural language summaries for Semantic Web data. This is non-trivial, especially in an open-domain context. To address this problem, we explore the use of neural networks. Our system encodes the information from a set of triples into a vector of fixed dimensionality and generates a textual summary by conditioning the output on the encoded vector. We train and evaluate our models on two corpora of loosely aligned Wikipedia snippets and DBpedia and Wikidata triples with promising results.",
"title": ""
},
{
"docid": "e6662ebd9842e43bd31926ac171807ca",
"text": "INTRODUCTION\nDisruptions in sleep and circadian rhythms are observed in individuals with bipolar disorders (BD), both during acute mood episodes and remission. Such abnormalities may relate to dysfunction of the molecular circadian clock and could offer a target for new drugs.\n\n\nAREAS COVERED\nThis review focuses on clinical, actigraphic, biochemical and genetic biomarkers of BDs, as well as animal and cellular models, and highlights that sleep and circadian rhythm disturbances are closely linked to the susceptibility to BDs and vulnerability to mood relapses. As lithium is likely to act as a synchronizer and stabilizer of circadian rhythms, we will review pharmacogenetic studies testing circadian gene polymorphisms and prophylactic response to lithium. Interventions such as sleep deprivation, light therapy and psychological therapies may also target sleep and circadian disruptions in BDs efficiently for treatment and prevention of bipolar depression.\n\n\nEXPERT OPINION\nWe suggest that future research should clarify the associations between sleep and circadian rhythm disturbances and alterations of the molecular clock in order to identify critical targets within the circadian pathway. The investigation of such targets using human cellular models or animal models combined with 'omics' approaches are crucial steps for new drug development.",
"title": ""
},
{
"docid": "d7f4776aca7400ed28396f6f8dc44498",
"text": "Many problems, such as cognitive radio, parameter control of a scanning tunnelling microscope or internet advertisement, can be modelled as non-stationary bandit problems where the distributions of rewards changes abruptly at unknown time instants. In this paper, we analyze two algorithms designed for solving this issue: discounted UCB (D-UCB) and sliding-window UCB (SW-UCB). We establish an upperbound for the expected regret by upper-bounding the expectation of the number of times suboptimal arms are played. The proof relies on an interesting Hoeffding type inequality for self normalized deviations with a random number of summands. We establish a lower-bound for the regret in presence of abrupt changes in the arms reward distributions. We show that the discounted UCB and the sliding-window UCB both match the lower-bound up to a logarithmic factor. Numerical simulations show that D-UCB and SW-UCB perform significantly better than existing soft-max methods like EXP3.S.",
"title": ""
},
{
"docid": "ebd65c03599cc514e560f378f676cc01",
"text": "The purpose of this paper is to examine an integrated model of TAM and D&M to explore the effects of quality features, perceived ease of use, perceived usefulness on users’ intentions and satisfaction, alongside the mediating effect of usability towards use of e-learning in Iran. Based on the e-learning user data collected through a survey, structural equations modeling (SEM) and path analysis were employed to test the research model. The results revealed that ‘‘intention’’ and ‘‘user satisfaction’’ both had positive effects on actual use of e-learning. ‘‘System quality’’ and ‘‘information quality’’ were found to be the primary factors driving users’ intentions and satisfaction towards use of e-learning. At last, ‘‘perceived usefulness’’ mediated the relationship between ease of use and users’ intentions. The sample consisted of e-learning users of four public universities in Iran. Past studies have seldom examined an integrated model in the context of e-learning in developing countries. Moreover, this paper tries to provide a literature review of recent published studies in the field of e-learning. 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "a9c07fb7a8ca7115bfc5591aa082e1ef",
"text": "In this paper we introduce a variant of Memory Networks (Weston et al., 2015b) that needs significantly less supervision to perform question and answering tasks. The original model requires that the sentences supporting the answer be explicitly indicated during training. In contrast, our approach only requires the answer to the question during training. We apply the model to the synthetic bAbI tasks, showing that our approach is competitive with the supervised approach, particularly when trained on a sufficiently large amount of data. Furthermore, it decisively beats other weakly supervised approaches based on LSTMs. The approach is quite general and can potentially be applied to many other tasks that require capturing long-term dependencies.",
"title": ""
},
{
"docid": "1461157186183f11d7270d89eecd926a",
"text": "This review analyzes trends and commonalities among prominent theories of media effects. On the basis of exemplary meta-analyses of media effects and bibliometric studies of well-cited theories, we identify and discuss five features of media effects theories as well as their empirical support. Each of these features specifies the conditions under which media may produce effects on certain types of individuals. Our review ends with a discussion of media effects in newer media environments. This includes theories of computer-mediated communication, the development of which appears to share a similar pattern of reformulation from unidirectional, receiver-oriented views, to theories that recognize the transactional nature of communication. We conclude by outlining challenges and promising avenues for future research.",
"title": ""
},
{
"docid": "6470c8a921a9095adb96afccaa0bf97b",
"text": "Complex tasks with a visually rich component, like diagnosing seizures based on patient video cases, not only require the acquisition of conceptual but also of perceptual skills. Medical education has found that besides biomedical knowledge (knowledge of scientific facts) clinical knowledge (actual experience with patients) is crucial. One important aspect of clinical knowledge that medical education has hardly focused on, yet, are perceptual skills, like visually searching, detecting, and interpreting relevant features. Research on instructional design has shown that in a visually rich, but simple classification task perceptual skills could be conveyed by means of showing the eye movements of a didactically behaving expert. The current study applied this method to medical education in a complex task. This was done by example video cases, which were verbally explained by an expert. In addition the experimental groups saw a display of the expert’s eye movements recorded, while he performed the task. Results show that blurring non-attended areas of the expert enhances diagnostic performance of epileptic seizures by medical students in contrast to displaying attended areas as a circle and to a control group without attention guidance. These findings show that attention guidance fosters learning of perceptual aspects of clinical knowledge, if implemented in a spotlight manner.",
"title": ""
},
{
"docid": "040fbc1d1d75855fbf15f47880c2aefd",
"text": "The emotional connections students foster in their classrooms are likely to impact their success in school. Using a multimethod, multilevel approach, this study examined the link between classroom emotional climate and academic achievement, including the role of student engagement as a mediator. Data were collected from 63 fifthand sixth-grade classrooms (N 1,399 students) and included classroom observations, student reports, and report card grades. As predicted, multilevel mediation analyses showed that the positive relationship between classroom emotional climate and grades was mediated by engagement, while controlling for teacher characteristics and observations of both the organizational and instructional climates of the classrooms. Effects were robust across grade level and student gender. The discussion highlights the role of classroom-based, emotion-related interactions to promote academic achievement.",
"title": ""
},
{
"docid": "5ca6f2aaa70a7c7593e68f25999697d8",
"text": "Traditional text detection methods mostly focus on quadrangle text. In this study we propose a novel method named sliding line point regression (SLPR) in order to detect arbitrary-shape text in natural scene. SLPR regresses multiple points on the edge of text line and then utilizes these points to sketch the outlines of the text. The proposed SLPR can be adapted to many object detection architectures such as Faster R-CNN and R-FCN. Specifically, we first generate the smallest rectangular box including the text with region proposal network (RPN), then isometrically regress the points on the edge of text by using the vertically and horizontally sliding lines. To make full use of information and reduce redundancy, we calculate x-coordinate or y-coordinate of target point by the rectangular box position, and just regress the remaining y-coordinate or x-coordinate. Accordingly we can not only reduce the parameters of system, but also restrain the points which will generate more regular polygon. Our approach achieved competitive results on traditional ICDAR2015 Incidental Scene Text benchmark and curve text detection dataset CTW1500.",
"title": ""
},
{
"docid": "575208e6df214fa4378fa18be48af51d",
"text": "A parser based on logic programming language (DCG) has very useful features; perspicuity, power, generality and so on. However, it does have some drawbacks in which it cannot deal with CFG with left recursive rules, for example. To overcome these drawbacks, a Bottom-Up parser embedded in Prolog (BUP) has been developed. In BUP, CFG rules are translated into Prolog clauses which work as a bottom-up left corner parser with top-down expectation. BUP is augmented by introducing a “link” relation to reduce the size of a search space. Furthermore, BUP can be revised to maintain partial parsing results to avoid computational duplication. A BUP translator and a BUP tracer which support the development of grammar rules are described.",
"title": ""
},
{
"docid": "ac0b4babbe59570c801ae3efbb6dcbe3",
"text": "In recent years, RNA has attracted widespread attention as a unique biomaterial with distinct biophysical properties for designing sophisticated architectures in the nanometer scale. RNA is much more versatile in structure and function with higher thermodynamic stability compared to its nucleic acid counterpart DNA. Larger RNA molecules can be viewed as a modular structure built from a combination of many 'Lego' building blocks connected via different linker sequences. By exploiting the diversity of RNA motifs and flexibility of structure, varieties of RNA architectures can be fabricated with precise control of shape, size, and stoichiometry. Many structural motifs have been discovered and characterized over the years and the crystal structures of many of these motifs are available for nanoparticle construction. For example, using the flexibility and versatility of RNA structure, RNA triangles, squares, pentagons, and hexagons can be constructed from phi29 pRNA three-way-junction (3WJ) building block. This review will focus on 2D RNA triangles, squares, and hexamers; 3D and 4D structures built from basic RNA building blocks; and their prospective applications in vivo as imaging or therapeutic agents via specific delivery and targeting. Methods for intracellular cloning and expression of RNA molecules and the in vivo assembly of RNA nanoparticles will also be reviewed. WIREs RNA 2018, 9:e1452. doi: 10.1002/wrna.1452 This article is categorized under: RNA Methods > RNA Nanotechnology RNA Structure and Dynamics > RNA Structure, Dynamics and Chemistry RNA in Disease and Development > RNA in Disease Regulatory RNAs/RNAi/Riboswitches > Regulatory RNAs.",
"title": ""
},
{
"docid": "c2875f69b6a5d51f3fb3f3cf4ad0f346",
"text": "Cancer cells often have characteristic changes in metabolism. Cellular proliferation, a common feature of all cancers, requires fatty acids for synthesis of membranes and signaling molecules. Here, we provide a view of cancer cell metabolism from a lipid perspective, and we summarize evidence that limiting fatty acid availability can control cancer cell proliferation.",
"title": ""
},
{
"docid": "c3c1d2ec9e60300043070ea93a3c3e1b",
"text": "chology Today, March. Sherif, C. W., Sherif, W., and Nebergall, R. (1965). Attitude and Altitude Change. Philadelphia: W. B. Saunders. Stewart, E. C., and Bennett, M. J. (1991). American Cultural Patterns. Yarmouth, Maine: Intercultural Press. Tai, E. (1986). Modification of the Western Approach to Intercultural Communication for the Japanese Context. Unpublished master's thesis, Portland State University, Portland, Oregon. Thaler, A. (1970). Future Shock. New York: Bantam. Ursin, H. (1978). \"Activation, Coping and Psychosomatics.\" In E. Baade, S. Levine, and H. Ursin (Eds ) Psychobiology of Stress: A Study of Coping Men. New York: Academic Press. A Model of Intercultural Communication Competence",
"title": ""
},
{
"docid": "b79684c412309d136121ab083623f53b",
"text": "Many types of human mobility data, such as flows of taxicabs, card swiping data of subways, bike trip data and Call Details Records (CDR), can be modeled by a Spatio-Temporal Graph (STG). STG is a directed graph in which vertices and edges are associated with spatio-temporal properties (e.g. the traffic flow on a road and the geospatial location of an intersection). In this paper, we instantly detect interesting phenomena, entitled black holes and volcanos, from an STG. Specifically, a black hole is a subgraph (of an STG) that has the overall inflow greater than the overall outflow by a threshold, while a volcano is a subgraph with the overall outflow greater than the overall inflow by a threshold (detecting volcanos from an STG is proved to be equivalent to the detection of black holes). The online detection of black holes/volcanos can timely reflect anomalous events, such as disasters, catastrophic accidents, and therefore help keep public safety. The patterns of black holes/volcanos and the relations between them reveal human mobility patterns in a city, thus help formulate a better city planning or improve a system's operation efficiency. Based on a well-designed STG index, we propose a two-step black hole detection algorithm: The first step identifies a set of candidate grid cells to start from; the second step expands an initial edge in a candidate cell to a black hole and prunes other candidate cells after a black hole is detected. Then, we adapt this detection algorithm to a continuous black hole detection scenario. We evaluate our method based on Beijing taxicab data and the bike trip data in New York, finding urban anomalies and human mobility patterns.",
"title": ""
},
{
"docid": "65b7621599e1215f4d59a8c0ea46411c",
"text": "Digital microfluidic biochips (DMFBs) are revolutionizing many biochemical analysis procedures, e.g., high-throughput DNA sequencing and point-of-care clinical diagnosis. However, today’s DMFBs suffer from several limitations: (1) constraints on droplet size and the inability to vary droplet volume in a fine-grained manner; (2) the lack of integrated sensors for real-time detection; (3) the need for special fabrication processes and the associated reliability/yield concerns. To overcome the above limitations, DMFBs based on a micro-electrode-dot-array (MEDA) architecture have recently been proposed. Unlike conventional digital microfluidics, where electrodes of equal size are arranged in a regular pattern, the MEDA architecture is based on the concept of a sea-ofmicro-electrodes. The MEDA architecture allows microelectrodes to be dynamically grouped to form a micro-component that can perform different microfluidic operations on the chip. Design-automation tools can reduce the difficulty of MEDA biochip design and help to ensure that the manufactured biochips are versatile and reliable. In order to fully exploit MEDA-specific advantages (e.g., real-time droplet sensing), this dissertation research targets new design, optimization, and test problems for MEDA biochips.",
"title": ""
}
] |
scidocsrr
|
1629d77fd536505c5f23aaa09b159501
|
Single-Site Colectomy With Miniature In Vivo Robotic Platform
|
[
{
"docid": "28e9bb0eef126b9969389068b6810073",
"text": "This paper presents the task specifications for designing a novel Insertable Robotic Effectors Platform (IREP) with integrated stereo vision and surgical intervention tools for Single Port Access Surgery (SPAS). This design provides a compact deployable mechanical architecture that may be inserted through a single Ø15 mm access port. Dexterous surgical intervention and stereo vision are achieved via the use of two snake-like continuum robots and two controllable CCD cameras. Simulations and dexterity evaluation of our proposed design are compared to several design alternatives with different kinematic arrangements. Results of these simulations show that dexterity is improved by using an independent revolute joint at the tip of a continuum robot instead of achieving distal rotation by transmission of rotation about the backbone of the continuum robot. Further, it is shown that designs with two robotic continuum robots as surgical arms have diminished dexterity if the bases of these arms are close to each other. This result justifies our design and points to ways of improving the performance of existing designs that use continuum robots as surgical arms.",
"title": ""
}
] |
[
{
"docid": "a019dd7cfca3cc019212d8a81219ce27",
"text": "Over the past few decades, remarkable advances in imaging technology have been made that allow more accurate diagnosis of biliary tract diseases and better planning of surgical procedures and other interventions aimed at managing these conditions. Operative techniques have also improved as a result of a better understanding of biliary and hepatic anatomy and physiology. Moreover, the continuing evolution of minimally invasive surgery has promoted the gradual adoption of laparoscopic approaches to these complex operations. Accordingly, biliary tract surgery, like many other areas of modern surgery, is constantly changing. In what follows, we describe common operations performed to treat diseases of the biliary tract, emphasizing details of operative planning and intraoperative technique and suggesting specific strategies for preventing common problems. It should be remembered that complex biliary tract procedures, whether open or laparoscopic, are best done in specialized units where surgeons, anesthetists, intensivists, and nursing staff all are accustomed to handling the special problems and requirements of patients undergoing such procedures.",
"title": ""
},
{
"docid": "f85163403b153c7577548567405839ec",
"text": "We used PIC18F452 microcontroller for hardware and software implementation of our home security system design. Based on PIC18F452, our system can monitor doors and windows of a house and can set alarm and warming signal to a nearest police station if anybody tries to break in. This security system also provides the functionality to identify the residents ID card to get access to the house without turning on the warning signal and alarm. Also, the security system provides a status that is not monitoring the door and windows since there is some possibility that the host do not want the system always checks the status of their house.",
"title": ""
},
{
"docid": "c66bdb0dd09b73557c76c08d2f0a03a3",
"text": "The search for the most potent strength training intervention is continuous. Maximal strength training (MST) yields large improvements in force generating capacity (FGC), largely attributed to efferent neural drive enhancement. However, it remains elusive whether eccentric overload, prior to the concentric phase, may augment training-induced neuromuscular adaptations. A total of 53 23±3(SD)-year-old untrained males were randomized to either a non-training control group (CG) or one of two training groups performing leg press strength training with linear progression, 3x per week for eight weeks. The first training group carried out MST with 4x4 repetitions at ~90% one-repetition maximum (1RM) in both action phases. The second group performed MST with an augmented eccentric load of 150% 1RM (eMST). Measurements were taken of 1RM and rate of force development (RFD), countermovement jump (CMJ) performance, and evoked potentials recordings (V-wave (V) and H-reflex (H) normalized to M-wave (M) in m. soleus). 1RM increased from 133±16kg to 157±23kg and 123±18kg to 149±22kg, and CMJ by 2.3±3.6cm and 2.2±3.7cm for MST and eMST, respectively (all p<0.05). Early, late, and maximal RFD increased in both groups (634-1501·s-1 (MST); 644-2111N·s-1 (eMST); p<0.05). These functional improvements were accompanied by increased V/M-ratio (MST:0.34±0.11 to 0.42±14; eMST:0.36±0.14 to 0.43±13; p<0.05). Resting H/M-ratio remained unchanged. Training-induced improvements did not differ. All increases, except for CMJ, were different from the CG. MST is an enterprise for large gains in FGC and functional performance. Eccentric overload did not induce additional improvements, suggesting firing frequency and motor unit recruitment during MST may be maximal.",
"title": ""
},
{
"docid": "f285815e47ea0613fb1ceb9b69aee7df",
"text": "Communication at millimeter wave (mmWave) frequencies is defining a new era of wireless communication. The mmWave band offers higher bandwidth communication channels versus those presently used in commercial wireless systems. The applications of mmWave are immense: wireless local and personal area networks in the unlicensed band, 5G cellular systems, not to mention vehicular area networks, ad hoc networks, and wearables. Signal processing is critical for enabling the next generation of mmWave communication. Due to the use of large antenna arrays at the transmitter and receiver, combined with radio frequency and mixed signal power constraints, new multiple-input multiple-output (MIMO) communication signal processing techniques are needed. Because of the wide bandwidths, low complexity transceiver algorithms become important. There are opportunities to exploit techniques like compressed sensing for channel estimation and beamforming. This article provides an overview of signal processing challenges in mmWave wireless systems, with an emphasis on those faced by using MIMO communication at higher carrier frequencies.",
"title": ""
},
{
"docid": "634b30b81da7139082927109b4c22d5e",
"text": "Compressive image recovery is a challenging problem that requires fast and accurate algorithms. Recently, neural networks have been applied to this problem with promising results. By exploiting massively parallel GPU processing architectures and oodles of training data, they can run orders of magnitude faster than existing techniques. However, these methods are largely unprincipled black boxes that are difficult to train and often-times specific to a single measurement matrix. It was recently demonstrated that iterative sparse-signal-recovery algorithms can be “unrolled” to form interpretable deep networks. Taking inspiration from this work, we develop a novel neural network architecture that mimics the behavior of the denoising-based approximate message passing (D-AMP) algorithm. We call this new network Learned D-AMP (LDAMP). The LDAMP network is easy to train, can be applied to a variety of different measurement matrices, and comes with a state-evolution heuristic that accurately predicts its performance. Most importantly, it outperforms the state-of-the-art BM3D-AMP and NLR-CS algorithms in terms of both accuracy and run time. At high resolutions, and when used with sensing matrices that have fast implementations, LDAMP runs over 50× faster than BM3D-AMP and hundreds of times faster than NLR-CS.",
"title": ""
},
{
"docid": "8c5f09f3c7c5a8bc1b7c26602fd8102a",
"text": "With increasing interest in sentiment analysis research and opinionated web content always on the rise, focus on analysis of text in various domains and different languages is a relevant and important task. This paper explores the problems of sentiment analysis and opinion strength measurement using a rule-based approach tailored to the Arabic language. The approach takes into account language-specific traits that are valuable to syntactically segment a text, and allow for closer analysis of opinion-bearing language queues. By using an adapted sentiment lexicon along with sets of opinion indicators, a rule-based methodology for opinion-phrase extraction is introduced, followed by a method to rate the parsed opinions and offer a measure of opinion strength for the text under analysis. The proposed method, even with a small set of rules, shows potential for a simple and scalable opinion-rating system, which is of particular interest for morphologically-rich languages such as Arabic.",
"title": ""
},
{
"docid": "da93678f1b1070d68cfcbc9b7f6f88fe",
"text": "Dermal fat grafts have been utilized in plastic surgery for both reconstructive and aesthetic purposes of the face, breast, and body. There are multiple reports in the literature on the male phallus augmentation with the use of dermal fat grafts. Few reports describe female genitalia aesthetic surgery, in particular rejuvenation of the labia majora. In this report we describe an indication and use of autologous dermal fat graft for labia majora augmentation in a patient with loss of tone and volume in the labia majora. We found that this procedure is an option for labia majora augmentation and provides a stable result in volume-restoration.",
"title": ""
},
{
"docid": "19c93bdba44de7d2d8e2f7e1a412d35a",
"text": "Intense interest in applying convolutional neural networks (CNNs) in biomedical image analysis is wide spread, but its success is impeded by the lack of large annotated datasets in biomedical imaging. Annotating biomedical images is not only tedious and time consuming, but also demanding of costly, specialty - oriented knowledge and skills, which are not easily accessible. To dramatically reduce annotation cost, this paper presents a novel method called AIFT (active, incremental fine-tuning) to naturally integrate active learning and transfer learning into a single framework. AIFT starts directly with a pre-trained CNN to seek worthy samples from the unannotated for annotation, and the (fine-tuned) CNN is further fine-tuned continuously by incorporating newly annotated samples in each iteration to enhance the CNNs performance incrementally. We have evaluated our method in three different biomedical imaging applications, demonstrating that the cost of annotation can be cut by at least half. This performance is attributed to the several advantages derived from the advanced active and incremental capability of our AIFT method.",
"title": ""
},
{
"docid": "4c48a0be3e0194e57d9e08c1befeb7f7",
"text": "During preclinical investigations into the safety of drugs and chemicals, many are found to interfere with reproductive function in the female rat. This interference is commonly expressed as a change in normal morphology of the reproductive tract or a disturbance in the duration of particular phases of the estrous cycle. Such alterations can be recognized only if the pathologist has knowledge of the continuously changing histological appearance of the various components of the reproductive tract during the cycle and can accurately and consistently ascribe an individual tract to a particular phase of the cycle. Unfortunately, although comprehensive reports illustrating the normal appearance of the tract during the rat estrous cycle have been available over many years, they are generally somewhat ambiguous about distinct criteria for defining the end of one stage and the beginning of another. This detail is absolutely essential to achieve a consistent approach to staging the cycle. For the toxicologic pathologist, this report illustrates a pragmatic and practical approach to staging the estrous cycle in the rat based on personal experience and a review of the literature from the last century.",
"title": ""
},
{
"docid": "c24c7131a24b478beff8e682845588ab",
"text": "Modern technologies of mobile computing and wireless sensing prompt the concept of pervasive social network (PSN)-based healthcare. To realize the concept, the core problem is how a PSN node can securely share health data with other nodes in the network. In this paper, we propose a secure system for PSN-based healthcare. Two protocols are designed for the system. The first one is an improved version of the IEEE 802.15.6 display authenticated association. It establishes secure links with unbalanced computational requirements for mobile devices and resource-limited sensor nodes. The second protocol uses blockchain technique to share health data among PSN nodes. We realize a protocol suite to study protocol runtime and other factors. In addition, human body channels are proposed for PSN nodes in some use cases. The proposed system illustrates a potential method of using blockchain for PSN-based applications.",
"title": ""
},
{
"docid": "b6dd22ef29a87dac6b56373ce3c5f9cd",
"text": "Traditionally, object-oriented software adopts the Observer pattern to implement reactive behavior. Its drawbacks are well-documented and two families of alternative approaches have been proposed, extending object-oriented languages with concepts from functional reactive and dataflow programming, respectively event-driven programming. The former hardly escape the functional setting; the latter do not achieve the declarativeness of more functional approaches.\n In this paper, we present REScala, a reactive language which integrates concepts from event-based and functional-reactive programming into the object-oriented world. REScala supports the development of reactive applications by fostering a functional declarative style which complements the advantages of object-oriented design.",
"title": ""
},
{
"docid": "7419fa101c2471e225c976da196ed813",
"text": "A 4×40 Gb/s collaborative digital CDR is implemented in 28nm CMOS. The CDR is capable of recovering a low jitter clock from a partially-equalized or un-equalized eye by using a phase detection scheme that inherently filters out ISI edges. The CDR uses split feedback that simultaneously allows wider bandwidth and lower recovered clock jitter. A shared frequency tracking is also introduced that results in lower periodic jitter. Combining these techniques the CDR recovers a 10GHz clock from an eye containing 0.8UIpp DDJ and still achieves 1-10 MHz of tracking bandwidth while adding <; 300fs of jitter. Per lane CDR occupies only .06 mm2 and consumes 175 mW.",
"title": ""
},
{
"docid": "7fe86801de04054ffca61eb1b3334872",
"text": "Images rendered with traditional computer graphics techniques, such as scanline rendering and ray tracing, appear focused at all depths. However, there are advantages to having blur, such as adding realism to a scene or drawing attention to a particular place in a scene. In this paper we describe the optics underlying camera models that have been used in computer graphics, and present object space techniques for rendering with those models. In our companion paper [3], we survey image space techniques to simulate these models. These techniques vary in both speed and accuracy.",
"title": ""
},
{
"docid": "1a0a7a059ec05fda5b0fb689d4017603",
"text": "This paper presents a model-based approach which efficiently retrieves correct hypotheses using properties of triangles formed by the triplets of minutiae as the basic representation unit. We show that the uncertainty of minutiae locations associated with feature extraction and shear does not affect the angles of a triangle arbitrarily. Geometric constraints based on characteristics of minutiae are used to eliminate erroneous correspondences. We present an analysis to characterize the discriminating power of our indexing approach. Experimental results on fingerprint images of varying quali ty show that our approach efficiently narrows down the number of candidate hypotheses in the presence of translation, rotation, scale, shear, occlusion and clutter.",
"title": ""
},
{
"docid": "74af5749afb36c63dbf38bb8118807c9",
"text": "Modern mobile platforms like Android enable applications to read aggregate power usage on the phone. This information is considered harmless and reading it requires no user permission or notification. We show that by simply reading the phone’s aggregate power consumption over a period of a few minutes an application can learn information about the user’s location. Aggregate phone power consumption data is extremely noisy due to the multitude of components and applications that simultaneously consume power. Nevertheless, by using machine learning algorithms we are able to successfully infer the phone’s location. We discuss several ways in which this privacy leak can be remedied.",
"title": ""
},
{
"docid": "5514e96453a996a4d36f7682fa23813c",
"text": "This paper first introduces a <inline-formula> <tex-math notation=\"LaTeX\">$(k,n)$ </tex-math></inline-formula>-sharing matrix <inline-formula> <tex-math notation=\"LaTeX\">$S^{(k, n)}$ </tex-math></inline-formula> and its generation algorithm. Mathematical analysis is provided to show its potential for secret image sharing. Combining sharing matrix with image encryption, we further propose a lossless <inline-formula> <tex-math notation=\"LaTeX\">$(k,n)$ </tex-math></inline-formula>-secret image sharing scheme (SMIE-SIS). Only with no less than <inline-formula> <tex-math notation=\"LaTeX\">$k$ </tex-math></inline-formula> shares, all the ciphertext information and security key can be reconstructed, which results in a lossless recovery of original information. This can be proved by the correctness and security analysis. Performance evaluation and security analysis demonstrate that the proposed SMIE-SIS with arbitrary settings of <inline-formula> <tex-math notation=\"LaTeX\">$k$ </tex-math></inline-formula> and <inline-formula> <tex-math notation=\"LaTeX\">$n$ </tex-math></inline-formula> has at least five advantages: 1) it is able to fully recover the original image without any distortion; 2) it has much lower pixel expansion than many existing methods; 3) its computation cost is much lower than the polynomial-based secret image sharing methods; 4) it is able to verify and detect a fake share; and 5) even using the same original image with the same initial settings of parameters, every execution of SMIE-SIS is able to generate completely different secret shares that are unpredictable and non-repetitive. This property offers SMIE-SIS a high level of security to withstand many different attacks.",
"title": ""
},
{
"docid": "ead93ea218664f371de64036e1788aa5",
"text": "OBJECTIVE\nTo assess the diagnostic efficacy of the first-trimester anomaly scan including first-trimester fetal echocardiography as a screening procedure in a 'medium-risk' population.\n\n\nMETHODS\nIn a prospective study, we evaluated 3094 consecutive fetuses with a crown-rump length (CRL) of 45-84 mm and gestational age between 11 + 0 and 13 + 6 weeks, using transabdominal and transvaginal ultrasonography. The majority of patients were referred without prior abnormal scan or increased nuchal translucency (NT) thickness, the median maternal age was, however, 35 (range, 15-46) years, and 53.8% of the mothers (1580/2936) were 35 years or older. This was therefore a self-selected population reflecting an increased percentage of older mothers opting for prenatal diagnosis. The follow-up rate was 92.7% (3117/3363).\n\n\nRESULTS\nThe prevalence of major abnormalities in 3094 fetuses was 2.8% (86/3094). The detection rate of major anomalies at the 11 + 0 to 13 + 6-week scan was 83.7% (72/86), 51.9% (14/27) for NT < 2.5 mm and 98.3% (58/59) for NT >or= 2.5 mm. The prevalence of major congenital heart defects (CHD) was 1.2% (38/3094). The detection rate of major CHD at the 11 to 13 + 6-week scan was 84.2% (32/38), 37.5% (3/8) for NT < 2.5 mm and 96.7% (29/30) for NT >or= 2.5 mm.\n\n\nCONCLUSION\nThe overall detection rate of fetal anomalies including fetal cardiac defects following a specialist scan at 11 + 0 to 13 + 6 weeks' gestation is about 84% and is increased when NT >or= 2.5 mm. This extends the possibilities of a first-trimester scan beyond risk assessment for fetal chromosomal defects. In experienced hands with adequate equipment, the majority of severe malformations as well as major CHD may be detected at the end of the first trimester, which offers parents the option of deciding early in pregnancy how to deal with fetuses affected by genetic or structural abnormalities without pressure of time.",
"title": ""
},
{
"docid": "c560dd620b3c9c6718ce717ac33f0c21",
"text": "This paper investigates the autocalibration of microelectromechanical systems (MEMS) triaxial accelerometer (TA) based on experimental design (DoE). First, for a special 6-parameter second-degree model, a six-point experimental scheme is proposed, and its G-optimality has been proven based on optimal DoE. Then, a new linearization approach is introduced, by which the TA model for autocalibration can be simplified as the expected second-degree form so that the proposed optimal experimental scheme can be applied. To reliably estimate the model parameter, a convergence-guaranteed iterative algorithm is also proposed, which can significantly reduce the bias caused by linearization. Thereafter, the effectiveness and robustness of the proposed approach have been demonstrated by simulation. Finally, the proposed calibration method has been experimentally verified using two typical types of MEMS TA, and desired experimental results effectively demonstrate the efficiency and accuracy of the proposed calibration approach.",
"title": ""
},
{
"docid": "8123ab525ce663e44b104db2cacd59a9",
"text": "Extractive summarization is the strategy of concatenating extracts taken from a corpus into a summary, while abstractive summarization involves paraphrasing the corpus using novel sentences. We define a novel measure of corpus controversiality of opinions contained in evaluative text, and report the results of a user study comparing extractive and NLG-based abstractive summarization at different levels of controversiality. While the abstractive summarizer performs better overall, the results suggest that the margin by which abstraction outperforms extraction is greater when controversiality is high, providing aion outperforms extraction is greater when controversiality is high, providing a context in which the need for generationbased methods is especially great.",
"title": ""
},
{
"docid": "068935eccad836eefae34908e15467b7",
"text": "We study the problem of k-means clustering in the presence of outliers. The goal is to cluster a set of data points to minimize the variance of the points assigned to the same cluster, with the freedom of ignoring a small set of data points that can be labeled as outliers. Clustering with outliers has received a lot of attention in the data processing community, but practical, efficient, and provably good algorithms remain unknown for the most popular k-means objective. Our work proposes a simple local search-based algorithm for k-means clustering with outliers. We prove that this algorithm achieves constant-factor approximate solutions and can be combined with known sketching techniques to scale to large data sets. Using empirical evaluation on both synthetic and large-scale real-world data, we demonstrate that the algorithm dominates recently proposed heuristic approaches for the problem.",
"title": ""
}
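
The passage above describes k-means clustering with outliers. As a minimal illustration of that objective (not the authors' local-search algorithm), here is a numpy sketch of a Lloyd-style iteration that simply ignores the z farthest points when recomputing centers; the data, k, and z values are invented for the example.

```python
import numpy as np

def kmeans_with_outliers(X, k, z, n_iter=50, seed=0):
    """Lloyd-style k-means that ignores the z farthest points ("outliers")
    when recomputing centers. Illustrative only; not the local-search
    algorithm of the paper above."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # squared distance of every point to every center
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        assign = d2.argmin(1)
        nearest = d2[np.arange(len(X)), assign]
        # mark the z points farthest from their centers as outliers
        inlier = np.ones(len(X), dtype=bool)
        if z > 0:
            inlier[np.argsort(nearest)[-z:]] = False
        for j in range(k):
            pts = X[inlier & (assign == j)]
            if len(pts):
                centers[j] = pts.mean(0)
    return centers, assign, ~inlier

# toy usage: two Gaussian clusters plus 10 planted outliers
X = np.vstack([np.random.randn(100, 2), np.random.randn(100, 2) + 5,
               np.random.uniform(-20, 20, (10, 2))])
centers, labels, outliers = kmeans_with_outliers(X, k=2, z=10)
```
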
] |
scidocsrr
|
7c981a689a5c77324565400dde2cb603
|
Customer Satisfaction and Loyalty in an Online Shop : An Experiential Marketing Perspective
|
[
{
"docid": "80ce6c8c9fc4bf0382c5f01d1dace337",
"text": "Customer loyalty is viewed as the strength of the relationship between an individual's relative attitude and repeat patronage. The relationship is seen as mediated by social norms and situational factors. Cognitive, affective, and conative antecedents of relative attitude are identified as contributing to loyalty, along with motivational, perceptual, and behavioral consequences. Implications for research and for the management of loyalty are derived.",
"title": ""
}
] |
[
{
"docid": "b726c5812eecb0e846f9089edc64deca",
"text": "Distant metastases harbor unique genomic characteristics not detectable in the corresponding primary tumor of the same patient and metastases located at different sites show a considerable intrapatient heterogeneity. Thus, the mere analysis of the resected primary tumor alone (current standard practice in oncology) or, if possible, even reevaluation of tumor characteristics based on the biopsy of the most accessible metastasis may not reveal sufficient information for treatment decisions. Here, we propose that this dilemma can be solved by a new diagnostic concept: liquid biopsy, that is, analysis of therapeutic targets and drug resistance-conferring gene mutations on circulating tumor cells (CTC) and cell-free circulating tumor DNA (ctDNA) released into the peripheral blood from metastatic deposits. We discuss the current challenges and future perspectives of CTCs and ctDNA as biomarkers in clinical oncology. Both CTCs and ctDNA are interesting complementary technologies that can be used in parallel in future trials assessing new drugs or drug combinations. We postulate that the liquid biopsy concept will contribute to a better understanding and clinical management of drug resistance in patients with cancer.",
"title": ""
},
{
"docid": "faf9f552aa52fcf615447e73c54bda5e",
"text": "Physicists use quantum models to describe the behavior of physical systems. Quantum models owe their success to their interpretability, to their relation to probabilistic models (quantization of classical models) and to their high predictive power. Beyond physics, these properties are valuable in general data science. This motivates the use of quantum models to analyze general nonphysical datasets. Here we provide both empirical and theoretical insights into the application of quantum models in data science. In the theoretical part of this paper, we firstly show that quantum models can be exponentially more efficient than probabilistic models because there exist datasets that admit low-dimensional quantum models and only exponentially high-dimensional probabilistic models. Secondly, we explain in what sense quantum models realize a useful relaxation of compressed probabilistic models. Thirdly, we show that sparse datasets admit low-dimensional quantum models and finally, we introduce a method to compute hierarchical orderings of properties of users (e.g., personality traits) and items (e.g., genres of movies). In the empirical part of the paper, we evaluate quantum models in item recommendation and observe that the predictive power of quantum-inspired recommender systems can compete with state-of-the-art recommender systems like SVD++ and PureSVD. Furthermore, we make use of the interpretability of quantum models by computing hierarchical orderings of properties of users and items. This work establishes a connection between data science (item recommendation), information theory (communication complexity), mathematical programming (positive semidefinite factorizations) and physics (quantum models).",
"title": ""
},
{
"docid": "333b21433d17a9d271868e203c8a9481",
"text": "The aim of stock prediction is to effectively predict future stock market trends (or stock prices), which can lead to increased profit. One major stock analysis method is the use of candlestick charts. However, candlestick chart analysis has usually been based on the utilization of numerical formulas. There has been no work taking advantage of an image processing technique to directly analyze the visual content of the candlestick charts for stock prediction. Therefore, in this study we apply the concept of image retrieval to extract seven different wavelet-based texture features from candlestick charts. Then, similar historical candlestick charts are retrieved based on different texture features related to the query chart, and the “future” stock movements of the retrieved charts are used for stock prediction. To assess the applicability of this approach to stock prediction, two datasets are used, containing 5-year and 10-year training and testing sets, collected from the Dow Jones Industrial Average Index (INDU) for the period between 1990 and 2009. Moreover, two datasets (2010 and 2011) are used to further validate the proposed approach. The experimental results show that visual content extraction and similarity matching of candlestick charts is a new and useful analytical method for stock prediction. More specifically, we found that the extracted feature vectors of 30, 90, and 120, the number of textual features extracted from the candlestick charts in the BMP format, are more suitable for predicting stock movements, while the 90 feature vector offers the best performance for predicting short- and medium-term stock movements. That is, using the 90 feature vector provides the lowest MAPE (3.031%) and Theil’s U (1.988%) rates in the twenty-year dataset, and the best MAPE (2.625%, 2.945%) and Theil’s U (1.622%, 1.972%) rates in the two validation datasets (2010 and 2011).",
"title": ""
},
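
The passage above reports forecast quality with MAPE and Theil's U. As a small illustration, the sketch below computes MAPE and one common form of Theil's U (the U1 inequality coefficient); the paper may define its statistic slightly differently, and the index values used here are made up.

```python
import numpy as np

def mape(actual, pred):
    """Mean absolute percentage error, in percent."""
    actual, pred = np.asarray(actual, float), np.asarray(pred, float)
    return 100.0 * np.mean(np.abs((actual - pred) / actual))

def theils_u1(actual, pred):
    """Theil's U1 inequality coefficient (one common form; the paper may
    use a different variant)."""
    actual, pred = np.asarray(actual, float), np.asarray(pred, float)
    rmse = np.sqrt(np.mean((actual - pred) ** 2))
    return rmse / (np.sqrt(np.mean(actual ** 2)) + np.sqrt(np.mean(pred ** 2)))

# toy usage with made-up index levels and forecasts
a = np.array([100.0, 102.0, 101.5, 103.0])
p = np.array([ 99.0, 101.0, 102.5, 102.0])
print(mape(a, p), theils_u1(a, p))
```
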
{
"docid": "aa55e655c7fa8c86d189d03c01d5db87",
"text": "Best practice reference models like COBIT, ITIL, and CMMI offer methodical support for the various tasks of IT management and IT governance. Observations reveal that the ways of using these models as well as the motivations and further aspects of their application differ significantly. Rather the models are used in individual ways due to individual interpretations. From an academic point of view we can state, that how these models are actually used as well as the motivations using them is not well understood. We develop a framework in order to structure different dimensions and modes of reference model application in practice. The development is based on expert interviews and a literature review. Hence we use design oriented and qualitative research methods to develop an artifact, a ‘framework of reference model application’. This framework development is the first step in a larger research program which combines different methods of research. The first goal is to deepen insight and improve understanding. In future research, the framework will be used to survey and analyze reference model application. The authors assume that “typical” application patterns exist beyond individual dimensions of application. The framework developed provides an opportunity of a systematically collection of data thereon. Furthermore, the so far limited knowledge of reference model application complicates their implementation as well as their use. Thus, detailed knowledge of different application patterns is required for effective support of enterprises using reference models. We assume that the deeper understanding of different patterns will support method development for implementation and use.",
"title": ""
},
{
"docid": "c96bb8540cfb1b4c0c1b8e4c30496b57",
"text": "0747-5632/$ see front matter 2011 Elsevier Ltd. A doi:10.1016/j.chb.2011.10.014 ⇑ Corresponding author. Present address: Departme Max Stern Yezreel Valley College, Emek Yezreel 193 5153868. E-mail address: noaml@yvc.ac.il (N. Lapidot-Lefler The present research studied the impact of three typical online communication factors on inducing the toxic online disinhibition effect: anonymity, invisibility, and lack of eye-contact. Using an experimental design with 142 participants, we examined the extent to which these factors lead to flaming behaviors, the typical products of online disinhibition. Random pairs of participants were presented with a dilemma for discussion and a common solution through online chat. The effects were measured using participants’ self-reports, expert judges’ ratings of chat transcripts, and textual analyses of participants’ conversations. A 2 2 2 (anonymity/non-anonymity visibility/invisibility eye-contact/lack of eye-contact) MANOVA was employed to analyze the findings. The results suggested that of the three independent variables, lack of eye-contact was the chief contributor to the negative effects of online disinhibition. Consequently, it appears that previous studies might have defined the concept of anonymity too broadly by not addressing other online communication factors, especially lack of eye-contact, that impact disinhibition. The findings are explained in the context of an online sense of unidentifiability, which apparently requires a more refined view of the components that create a personal sense of anonymity. 2011 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "a2ef78d1fb6d20bd9b808df55dfa800a",
"text": "OBJECTIVES\nThe purpose of this study was to determine whether apoptosis is a major mechanism of cell death in patients with sepsis. The activities of caspase-3 and the antiapoptotic protein, BCL-2, were investigated also.\n\n\nDESIGN\nA prospective study of 20 patients who died of sepsis and multiple organ dysfunction was performed. The control group of 16 patients consisted of critically ill, nonseptic patients who were evaluated either prospectively (7) or retrospectively (9). In addition, normal colon sections from seven patients who had bowel resections were included. Apoptosis was evaluated in hematoxylin and eosin-stained specimens by deoxyuridine triphosphate nick end-labeling (TUNEL) and by DNA gel electrophoresis.\n\n\nSETTING\nTwo academic medical centers.\n\n\nPATIENTS\nCritically ill patients.\n\n\nMEASUREMENTS AND MAIN RESULTS\nIn septic patients, apoptosis was detected in diverse organs by all three methods with a predominance in lymphocytes and intestinal epithelial cells. Hematoxylin and eosin-stained specimens from septic patients demonstrated at least focal apoptosis in 56.3% of spleens, 47.1% of colons, and 27.7% of ileums. Indirect evidence of lymphocyte apoptosis in septic patients included extensive depletion of lymphocytes in white pulp and a marked lymphocytopenia in 15 of 19 patients. Hematoxylin and eosin from nonseptic patients' tissues revealed a low level of apoptosis in one patient only. The TUNEL method increased in positivity with a delay in tissue fixation and was highly positive in many tissues from both septic and nonseptic patients. Immunohistochemical staining for active caspase-3 showed a marked increase in septic vs. nonseptic patients (p < .01), with >25% to 50% of cells being positive focally in the splenic white pulp of six septic but in no nonseptic patients.\n\n\nCONCLUSIONS\nWe conclude that caspase-3-mediated apoptosis causes extensive lymphocyte apoptosis in sepsis and may contribute to the impaired immune response that characterizes the disorder.",
"title": ""
},
{
"docid": "738f9ec1c9232b874406fc777ec3a8c3",
"text": "In this paper we analyze in some detail the geometry of a pair of cameras, i.e., a stereo rig. Contrarily to what has been done in the past and is still done currently, for example in stereo or motion analysis, we do not assume that the intrinsic parameters of the cameras are known (coordinates of the principal points, pixels aspect ratio and focal lengths). This is important for two reasons. First, it is more realistic in applications where these parameters may vary according to the task (active vision). Second, the general case considered here, captures all the relevant information that is necessary for establishing correspondences between two pairs of images. This information is fundamentally projective and is hidden in a confusing manner in the commonly used formalism of the Essential matrix introduced by Longuet-Higgins (1981). This paper clarifies the projective nature of the correspondence problem in stereo and shows that the epipolar geometry can be summarized in one 3×3 matrix of rank 2 which we propose to call the Fundamental matrix. After this theoretical analysis, we embark on the task of estimating the Fundamental matrix from point correspondences, a task which is of practical importance. We analyze theoretically, and compare experimentally using synthetic and real data, several methods of estimation. The problem of the stability of the estimation is studied from two complementary viewpoints. First we show that there is an interesting relationship between the Fundamental matrix and three-dimensional planes which induce homographies between the images and create unstabilities in the estimation procedures. Second, we point to a deep relation between the unstability of the estimation procedure and the presence in the scene of so-called critical surfaces which have been studied in the context of motion analysis. Finally we conclude by stressing the fact that we believe that the Fundamental matrix will play a crucial role in future applications of three-dimensional Computer Vision by greatly increasing its versatility, robustness and hence applicability to real difficult problems.",
"title": ""
},
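
The passage above centers on estimating the Fundamental matrix from point correspondences. The sketch below is a standard normalized eight-point estimator, included only to make the linear estimation step concrete; it is not the specific robust estimation methods analyzed in the paper.

```python
import numpy as np

def normalize(pts):
    """Translate points to their centroid and scale so the mean distance is sqrt(2)."""
    c = pts.mean(0)
    s = np.sqrt(2) / np.mean(np.linalg.norm(pts - c, axis=1))
    T = np.array([[s, 0, -s * c[0]],
                  [0, s, -s * c[1]],
                  [0, 0, 1.0]])
    ph = np.column_stack([pts, np.ones(len(pts))]) @ T.T
    return ph, T

def fundamental_matrix(x1, x2):
    """Normalized eight-point estimate of F with x2^T F x1 = 0.
    x1, x2: (N, 2) corresponding points, N >= 8."""
    p1, T1 = normalize(np.asarray(x1, float))
    p2, T2 = normalize(np.asarray(x2, float))
    # each correspondence gives one row of the linear system A f = 0
    A = np.column_stack([p2[:, 0] * p1[:, 0], p2[:, 0] * p1[:, 1], p2[:, 0],
                         p2[:, 1] * p1[:, 0], p2[:, 1] * p1[:, 1], p2[:, 1],
                         p1[:, 0], p1[:, 1], np.ones(len(p1))])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    # enforce the rank-2 constraint
    U, S, Vt = np.linalg.svd(F)
    S[2] = 0.0
    F = U @ np.diag(S) @ Vt
    F = T2.T @ F @ T1          # undo the normalizations
    return F / F[2, 2]
```

Given eight or more matched points from a feature matcher, the resulting F maps a homogeneous point x1 in the first image to its epipolar line F x1 in the second.
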
{
"docid": "0afcf50fa7bbe82263b8c4dec7b44fd2",
"text": "Motion sensing is of fundamental importance for user interfaces and input devices. In applications, where optical sensing is preferred, traditional camera-based approaches can be prohibitive due to limited resolution, low frame rates and the required computational power for image processing. We introduce a novel set of motion-sensing configurations based on laser speckle sensing that are particularly suitable for human-computer interaction. The underlying principles allow these configurations to be fast, precise, extremely compact and low cost. We provide an overview and design guidelines for laser speckle sensing for user interaction and introduce four general speckle projector/sensor configurations. We describe a set of prototypes and applications that demonstrate the versatility of our laser speckle sensing techniques.",
"title": ""
},
{
"docid": "a66b5b6dea68e5460b227af4caa14ef3",
"text": "This paper will discuss and compare event representations across a variety of types of event annotation: Rich Entities, Relations, and Events (Rich ERE), Light Entities, Relations, and Events (Light ERE), Event Nugget (EN), Event Argument Extraction (EAE), Richer Event Descriptions (RED), and Event-Event Relations (EER). Comparisons of event representations are presented, along with a comparison of data annotated according to each event representation. An event annotation experiment is also discussed, including annotation for all of these representations on the same set of sample data, with the purpose of being able to compare actual annotation across all of these approaches as directly as possible. We walk through a brief example to illustrate the various annotation approaches, and to show the intersections among the various annotated data sets.",
"title": ""
},
{
"docid": "c4dfe9eb3aa4d082e96815d8c610968d",
"text": "In this paper, we consider the problem of predicting demographics of geographic units given geotagged Tweets that are composed within these units. Traditional survey methods that offer demographics estimates are usually limited in terms of geographic resolution, geographic boundaries, and time intervals. Thus, it would be highly useful to develop computational methods that can complement traditional survey methods by offering demographics estimates at finer geographic resolutions, with flexible geographic boundaries (i.e. not confined to administrative boundaries), and at different time intervals. While prior work has focused on predicting demographics and health statistics at relatively coarse geographic resolutions such as the county-level or state-level, we introduce an approach to predict demographics at finer geographic resolutions such as the blockgroup-level. For the task of predicting gender and race/ethnicity counts at the blockgrouplevel, an approach adapted from prior work to our problem achieves an average correlation of 0.389 (gender) and 0.569 (race) on a held-out test dataset. Our approach outperforms this prior approach with an average correlation of 0.671 (gender) and 0.692 (race).",
"title": ""
},
{
"docid": "0efaed9de3e67e0f70be538471b6da69",
"text": "The virtualization of mobile devices such as smartphones, tablets, netbooks, and MIDs offers significant potential in addressing the mobile manageability, security, cost, compliance, application development and deployment challenges that exist in the enterprise today. Advances in mobile processor performance, memory and storage capacities have led to the availability of many of the virtualization techniques that have previously been applied in the desktop and server domains. Leveraging these opportunities, VMware's Mobile Virtualization Platform (MVP) makes use of system virtualization to deliver an end-to-end solution for facilitating employee-owned mobile phones in the enterprise. In this paper we describe the use case behind MVP, and provide an overview of the hypervisor's design and implementation. We present a novel system architecture for mobile virtualization and describe key aspects of both core and platform virtualization on mobile devices",
"title": ""
},
{
"docid": "8aacdb790ddec13f396a0591c0cd227a",
"text": "This paper reports on a qualitative study of journal entries written by students in six health professions participating in the Interprofessional Health Mentors program at the University of British Columbia, Canada. The study examined (1) what health professions students learn about professional language and communication when given the opportunity, in an interprofessional group with a patient or client, to explore the uses, meanings, and effects of common health care terms, and (2) how health professional students write about their experience of discussing common health care terms, and what this reveals about how students see their development of professional discourse and participation in a professional discourse community. Using qualitative thematic analysis to address the first question, the study found that discussion of these health care terms provoked learning and reflection on how words commonly used in one health profession can be understood quite differently in other health professions, as well as on how health professionals' language choices may be perceived by patients and clients. Using discourse analysis to address the second question, the study further found that many of the students emphasized accuracy and certainty in language through clear definitions and intersubjective agreement. However, when prompted by the discussion they were willing to consider other functions and effects of language.",
"title": ""
},
{
"docid": "6fca80896fe3493072a1bc360cd680a7",
"text": "The physical formats used to represent linguistic data and its annotations have evolved over the past four decades, accommodating different needs and perspectives as well as incorporating advances in data representation generally. This chapter provides an overview of representation formats with the aim of surveying the relevant issues for representing different data types together with current stateof-the-art solutions, in order to provide sufficient information to guide others in the choice of a representation format or formats.",
"title": ""
},
{
"docid": "f7fa80456b0fb479bc694cb89fbd84e5",
"text": "In the past two decades, social capital in its various forms and contexts has emerged as one of the most salient concepts in social sciences. While much excitement has been generated, divergent views, perspectives, and expectations have also raised the serious question : is it a fad or does it have enduring qualities that will herald a new intellectual enterprise? This presentation's purpose is to review social capital as discussed in the literature, identify controversies and debates, consider some critical issues, and propose conceptual and research strategies in building a theory. I will argue that such a theory and the research enterprise must be based on the fundamental understanding that social capital is captured from embedded resources in social networks . Deviations from this understanding in conceptualization and measurement lead to confusion in analyzing causal mechanisms in the macroand microprocesses. It is precisely these mechanisms and processes, essential for an interactive theory about structure and action, to which social capital promises to make contributions .",
"title": ""
},
{
"docid": "a78d3cc8cdd93d36a1cd1440550bb878",
"text": "Coin identification and recognition and is important to enhance the extended operation of Vending machines, Pay phone system and coin counting machines. Coin recognition is a difficult task in machine intelligence and computer vision problems because of its various rotations and widely changed patterns. Therefore, an efficient algorithm is designed to be robust and invariant to rotation, translation and scaling. The objective of this work is to find whether the object is coin or not if so denomination of the coin is found. The Fourier approximation of the coin image is used to reduce the variations on surface of coin such as light reflection effect. Then coins can be distinguished by feeding those features into a multi-layered BP neural network.",
"title": ""
},
{
"docid": "0170bcdc662628fb46142e62bc8e011d",
"text": "Agriculture is the sole provider of human food. Most farm machines are driven by fossil fuels, which contribute to greenhouse gas emissions and, in turn, accelerate climate change. Such environmental damage can be mitigated by the promotion of renewable resources such as solar, wind, biomass, tidal, geo-thermal, small-scale hydro, biofuels and wave-generated power. These renewable resources have a huge potential for the agriculture industry. The farmers should be encouraged by subsidies to use renewable energy technology. The concept of sustainable agriculture lies on a delicate balance of maximizing crop productivity and maintaining economic stability, while minimizing the utilization of finite natural resources and detrimental environmental impacts. Sustainable agriculture also depends on replenishing the soil while minimizing the use of non-renewable resources, such as natural gas, which is used in converting atmospheric nitrogen into synthetic fertilizer, and mineral ores, e.g. phosphate or fossil fuel used in diesel generators for water pumping for irrigation. Hence, there is a need for promoting use of renewable energy systems for sustainable agriculture, e.g. solar photovoltaic water pumps and electricity, greenhouse technologies, solar dryers for post-harvest processing, and solar hot water heaters. In remote agricultural lands, the underground submersible solar photovoltaic water pump is economically viable and also an environmentally-friendly option as compared with a diesel generator set. If there are adverse climatic conditions for the growth of particular plants in cold climatic zones then there is need for renewable energy technology such as greenhouses for maintaining the optimum plant ambient temperature conditions for the growth of plants and vegetables. The economics of using greenhouses for plants and vegetables, and solar photovoltaic water pumps for sustainable agriculture and the environment are presented in this article. Clean development provides industrialized countries with an incentive to invest in emission reduction projects in developing countries to achieve a reduction in CO2 emissions at the lowest cost. The mechanism of clean development is discussed in brief for the use of renewable systems for sustainable agricultural development specific to solar photovoltaic water pumps in India and the world. This article explains in detail the role of renewable energy in farming by connecting all aspects of agronomy with ecology, the environment, economics and societal change.",
"title": ""
},
{
"docid": "db597c88e71a8397b81216282d394623",
"text": "In many real applications, graph data is subject to uncertainties due to incompleteness and imprecision of data. Mining such uncertain graph data is semantically different from and computationally more challenging than mining conventional exact graph data. This paper investigates the problem of mining uncertain graph data and especially focuses on mining frequent subgraph patterns on an uncertain graph database. A novel model of uncertain graphs is presented, and the frequent subgraph pattern mining problem is formalized by introducing a new measure, called expected support. This problem is proved to be NP-hard. An approximate mining algorithm is proposed to find a set of approximately frequent subgraph patterns by allowing an error tolerance on expected supports of discovered subgraph patterns. The algorithm uses efficient methods to determine whether a subgraph pattern can be output or not and a new pruning method to reduce the complexity of examining subgraph patterns. Analytical and experimental results show that the algorithm is very efficient, accurate, and scalable for large uncertain graph databases. To the best of our knowledge, this paper is the first one to investigate the problem of mining frequent subgraph patterns from uncertain graph data.",
"title": ""
},
{
"docid": "67033d89acee89763fa1b2a06fe00dc4",
"text": "We demonstrate a novel query interface that enables users to construct a rich search query without any prior knowledge of the underlying schema or data. The interface, which is in the form of a single text input box, interacts in real-time with the users as they type, guiding them through the query construction. We discuss the issues of schema and data complexity, result size estimation, and query validity; and provide novel approaches to solving these problems. We demonstrate our query interface on two popular applications; an enterprise-wide personnel search, and a biological information database.",
"title": ""
},
{
"docid": "30decb72388cd024661c552670a28b11",
"text": "The increasing volume and unstructured nature of data available on the World Wide Web (WWW) makes information retrieval a tedious and mechanical task. Lots of this information is not semantic driven, and hence not machine process able, but its only in human readable form. The WWW is designed to builds up a source of reference for web of meaning. Ontology information on different subjects spread globally is made available at one place. The Semantic Web (SW), moreover as an extension of WWW is designed to build as a foundation of vocabularies and effective communication of Semantics. The promising area of Semantic Web is logical and lexical semantics. Ontology plays a major role to represent information more meaningfully for humans and machines for its later effective retrieval. This paper constitutes the requisite with a unique approach for a representation and reasoning with ontology for semantic analysis of various type of document and also surveys multiple approaches for ontology learning that enables reasoning with uncertain, incomplete and contradictory information in a domain context.",
"title": ""
}
] |
scidocsrr
|
2ff6e7a2ff4a1da37e5407ef2d2b8044
|
Review of Hybrid Prognostics Approaches for Remaining Useful Life Prediction of Engineered Systems, and an Application to Battery Life Prediction
|
[
{
"docid": "6f56fca8d3df57619866d9520f79e1a8",
"text": "This paper explores how the remaining useful life (RUL) can be assessed for complex systems whose internal state variables are either inaccessible to sensors or hard to measure under operational conditions. Consequently, inference and estimation techniques need to be applied on indirect measurements, anticipated operational conditions, and historical data for which a Bayesian statistical approach is suitable. Models of electrochemical processes in the form of equivalent electric circuit parameters were combined with statistical models of state transitions, aging processes, and measurement fidelity in a formal framework. Relevance vector machines (RVMs) and several different particle filters (PFs) are examined for remaining life prediction and for providing uncertainty bounds. Results are shown on battery data.",
"title": ""
},
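
The passage above combines an aging model with particle filters for remaining-useful-life (RUL) prediction with uncertainty bounds. The following sketch is a generic bootstrap particle filter over an assumed linear capacity-fade model (a stand-in for the paper's equivalent-circuit aging model); all parameters and the simulated capacity readings are placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)

def propagate(C, r):
    """Assumed linear capacity-fade model with a random-walk fade rate."""
    r = np.abs(r + rng.normal(0, 1e-4, r.shape))
    C = C - r + rng.normal(0, 5e-4, C.shape)
    return C, r

def particle_filter_rul(measurements, n_particles=2000, fail_threshold=0.7,
                        meas_sigma=0.01, horizon=500):
    C = np.full(n_particles, measurements[0]) + rng.normal(0, 0.01, n_particles)
    r = np.abs(rng.normal(2e-3, 1e-3, n_particles))
    for z in measurements[1:]:
        C, r = propagate(C, r)                           # predict
        w = np.exp(-0.5 * ((z - C) / meas_sigma) ** 2) + 1e-300  # Gaussian likelihood
        w /= w.sum()
        idx = rng.choice(n_particles, n_particles, p=w)  # resample
        C, r = C[idx], r[idx]
    # propagate each particle forward until it crosses the failure threshold
    rul = np.full(n_particles, horizon, dtype=float)
    Cf, rf = C.copy(), r.copy()
    alive = np.ones(n_particles, dtype=bool)
    for k in range(1, horizon + 1):
        Cf, rf = propagate(Cf, rf)
        hit = alive & (Cf < fail_threshold)
        rul[hit] = k
        alive &= ~hit
    return rul   # distribution of remaining-useful-life estimates (in cycles)

# toy usage: simulated normalized capacity readings
caps = 1.0 - 0.002 * np.arange(60) + rng.normal(0, 0.005, 60)
rul = particle_filter_rul(caps)
print(rul.mean(), np.percentile(rul, [5, 95]))
```
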
{
"docid": "9cc5fddebc5c45c4c7f5535136275076",
"text": "This paper details the winning method in the IEEE GOLD category of the PHM psila08 Data Challenge. The task was to estimate the remaining useable life left of an unspecified complex system using a purely data driven approach. The method involves the construction of Multi-Layer Perceptron and Radial Basis Function networks for regression. A suitable selection of these networks has been successfully combined in an ensemble using a Kalman filter. The Kalman filter provides a mechanism for fusing multiple neural network model predictions over time. The essential initial stages of pre-processing and data exploration are also discussed.",
"title": ""
}
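
The passage above fuses multiple neural-network predictions over time with a Kalman filter. A minimal sketch of that idea, assuming a scalar random-walk state and treating each model's prediction as an independent noisy measurement, is given below; the process/measurement variances and the toy ensemble are invented.

```python
import numpy as np

def kalman_fuse(predictions, q=4.0, r=25.0):
    """Fuse several regressors' per-cycle predictions of one quantity with a
    scalar Kalman filter (random-walk state). `predictions` has shape
    (n_cycles, n_models). q and r are assumed process/measurement variances."""
    x, p = predictions[0].mean(), r      # initialize from the first ensemble
    fused = []
    for z_row in predictions:
        p += q                           # predict step (random-walk state)
        for z in z_row:                  # sequential scalar updates, one per model
            k = p / (p + r)
            x += k * (z - x)
            p *= (1 - k)
        fused.append(x)
    return np.array(fused)

# toy usage: three noisy models predicting the same slowly drifting quantity
t = np.arange(100)
truth = 200 - 1.5 * t
preds = truth[:, None] + np.random.normal(0, 8, (100, 3))
print(kalman_fuse(preds)[:5])
```
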
] |
[
{
"docid": "d878db1f1d4c9eee3cf061143bb95f7a",
"text": "Reputation is the opinion of the public toward a person, a group of people, or an organization. Reputation systems are particularly important in e-markets, where they help buyers to decide whether to purchase a product or not. Since a higher reputation means more profit, some users try to deceive such systems to increase their reputation. E-markets should protect their reputation systems from attacks in order to maintain a sound environment. This work addresses the task of finding attempts to deceive reputation systems in e-markets. Our goal is to generate a list of users (sellers) ranked by the probability of fraud. Firstly we describe characteristics related to transactions that may indicate frauds evidence and they are expanded to the sellers. We describe results of a simple approach that ranks sellers by counting characteristics of fraud. Then we incorporate characteristics that cannot be used by the counting approach, and we apply logistic regression to both, improved and not improved. We use real data from a large Brazilian e-market to train and evaluate our methods and the improved set with logistic regression performs better, specially when we apply stepwise optimization. We validate our results with specialists of fraud detection in this market place. In the end, we increase by 112% the number of identified fraudsters against the reputation system. In terms of ranking, we reach 93% of average precision after specialists' review in the list that uses Logistic Regression and Stepwise optimization. We also detect 55% of fraudsters with a precision of 100%.",
"title": ""
},
{
"docid": "a9399439831a970fcce8e0101696325f",
"text": "We describe the design, implementation, and evaluation of EMBERS, an automated, 24x7 continuous system for forecasting civil unrest across 10 countries of Latin America using open source indicators such as tweets, news sources, blogs, economic indicators, and other data sources. Unlike retrospective studies, EMBERS has been making forecasts into the future since Nov 2012 which have been (and continue to be) evaluated by an independent T&E team (MITRE). Of note, EMBERS has successfully forecast the June 2013 protests in Brazil and Feb 2014 violent protests in Venezuela. We outline the system architecture of EMBERS, individual models that leverage specific data sources, and a fusion and suppression engine that supports trading off specific evaluation criteria. EMBERS also provides an audit trail interface that enables the investigation of why specific predictions were made along with the data utilized for forecasting. Through numerous evaluations, we demonstrate the superiority of EMBERS over baserate methods and its capability to forecast significant societal happenings.",
"title": ""
},
{
"docid": "cb086fa252f4db172b9c7ac7e1081955",
"text": "Drivable free space information is vital for autonomous vehicles that have to plan evasive maneu vers in realtime. In this paper, we present a new efficient met hod for environmental free space detection with laser scann er based on 2D occupancy grid maps (OGM) to be used for Advance d Driving Assistance Systems (ADAS) and Collision Avo idance Systems (CAS). Firstly, we introduce an enhanced in verse sensor model tailored for high-resolution laser scanners f or building OGM. It compensates the unreflected beams and deals with the ray casting to grid cells accuracy and computationa l effort problems. Secondly, we introduce the ‘vehicle on a circle for grid maps’ map alignment algorithm that allows building more accurate local maps by avoiding the computationally expensive inaccurate operations of image sub-pixel shifting a nd rotation. The resulted grid map is more convenient for ADAS f eatures than existing methods, as it allows using less memo ry sizes, and hence, results into a better real-time performance. Thirdly, we present an algorithm to detect what we call the ‘in-sight edges’. These edges guarantee modeling the free space area with a single polygon of a fixed number of vertices regardless th e driving situation and map complexity. The results from real world experiments show the effectiveness of our approach. Keywords— Occupancy Grid Map; Static Free Space Detection; Advanced Driving Assistance Systems; las er canner; autonomous driving",
"title": ""
},
{
"docid": "2e5d6c99ac0d02711d9586176e9f176f",
"text": "Every year billions of Euros are lost worldwide due to credit card fraud. Thus, forcing financial institutions to continuously improve their fraud detection systems. In recent years, several studies have proposed the use of machine learning and data mining techniques to address this problem. However, most studies used some sort of misclassification measure to evaluate the different solutions, and do not take into account the actual financial costs associated with the fraud detection process. Moreover, when constructing a credit card fraud detection model, it is very important how to extract the right features from the transactional data. This is usually done by aggregating the transactions in order to observe the spending behavioral patterns of the customers. In this paper we expand the transaction aggregation strategy, and propose to create a new set of features based on analyzing the periodic behavior of the time of a transaction using the von Mises distribution. Then, using a real credit card fraud dataset provided by a large European card processing company, we compare state-of-the-art credit card fraud detection models, and evaluate how the different sets of features have an impact on the results. By including the proposed periodic features into the methods, the results show an average increase in savings of 13%. © 2016 Elsevier Ltd. All rights reserved. o t W s i c s 2 2 & t s p e a t h a M b t",
"title": ""
},
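
The passage above builds periodic time-of-transaction features with the von Mises distribution. The sketch below shows one way such a feature could be computed: fit a von Mises to a cardholder's historical transaction hours (using the standard moment-based kappa approximation) and score a new transaction's time under it. The history and the way the score is used are illustrative assumptions, not the paper's exact feature set.

```python
import numpy as np

def fit_von_mises(times_hours):
    """Fit a von Mises distribution to transaction times of day (in hours)."""
    theta = 2 * np.pi * np.asarray(times_hours, float) / 24.0
    C, S = np.cos(theta).mean(), np.sin(theta).mean()
    R = np.hypot(C, S)                       # mean resultant length
    mu = np.arctan2(S, C)
    # standard approximation for the concentration parameter kappa
    if R < 0.53:
        kappa = 2 * R + R**3 + 5 * R**5 / 6
    elif R < 0.85:
        kappa = -0.4 + 1.39 * R + 0.43 / (1 - R)
    else:
        kappa = 1 / (R**3 - 4 * R**2 + 3 * R)
    return mu, kappa

def von_mises_score(hour, mu, kappa):
    """Density of the fitted von Mises at the (angular) transaction time."""
    theta = 2 * np.pi * hour / 24.0
    return np.exp(kappa * np.cos(theta - mu)) / (2 * np.pi * np.i0(kappa))

# toy usage: a cardholder who usually pays around 19:00
history = [18.5, 19.2, 20.1, 18.9, 19.5, 21.0, 19.8]
mu, kappa = fit_von_mises(history)
print(von_mises_score(19.0, mu, kappa), von_mises_score(4.0, mu, kappa))
```
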
{
"docid": "43d307f1e7aa43350399e7343946ac47",
"text": "Computer based medical decision support system (MDSS) can be useful for the physicians with its fast and accurate decision making process. Predicting the existence of heart disease accurately, results in saving life of patients followed by proper treatment. The main objective of our paper is to present a MDSS for heart disease classification based on sequential minimal optimization (SMO) technique in support vector machine (SVM). In this we illustrated the UCI machine learning repository data of Cleveland heart disease database; we trained SVM by using SMO technique. Training a SVM requires the solution of a very large QP optimization problem..SMO algorithm breaks this large optimization problem into small sub-problems. Both the training and testing phases give the accuracy on each record. The results proved that the MDSS is able to carry out heart disease diagnosis accurately in fast way and on a large dataset it shown good ability of prediction.",
"title": ""
},
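
The passage above trains an SVM with SMO for heart-disease classification. As a hedged sketch, scikit-learn's SVC (backed by libsvm, which uses an SMO-style decomposition) can stand in for the training step; the feature matrix below is a random placeholder for the 13 Cleveland attributes rather than the real UCI data.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# placeholder features standing in for the 13 Cleveland attributes
# (age, sex, chest-pain type, resting BP, cholesterol, ...); the real data
# would be loaded from the UCI repository instead.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 13))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(0, 1, 300) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
# SVC is solved with an SMO-style decomposition (libsvm) under the hood
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```
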
{
"docid": "913e167521f0ce7a7f1fb0deac58ae9c",
"text": "Prospect theory is a descriptive theory of how individuals choose among risky alternatives. The theory challenged the conventional wisdom that economic decision makers are rational expected utility maximizers. We present a number of empirical demonstrations that are inconsistent with the classical theory, expected utility, but can be explained by prospect theory. We then discuss the prospect theory model, including the value function and the probability weighting function. We conclude by highlighting several applications of the theory.",
"title": ""
},
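
The passage above summarizes prospect theory's value function and probability weighting function. The sketch below evaluates a simple prospect with the commonly cited Tversky-Kahneman parameter estimates (alpha = 0.88, lambda = 2.25, gamma = 0.61); it omits the rank-dependent (cumulative) weighting, so it is an illustration rather than a full cumulative prospect theory implementation.

```python
import numpy as np

def value(x, alpha=0.88, lam=2.25):
    """Prospect-theory value function: concave for gains, convex and steeper
    (loss aversion) for losses. Parameter values are the commonly cited
    Tversky-Kahneman estimates, used here only for illustration."""
    x = np.asarray(x, float)
    return np.where(x >= 0, np.abs(x) ** alpha, -lam * np.abs(x) ** alpha)

def weight(p, gamma=0.61):
    """Inverse-S probability weighting function w(p)."""
    p = np.asarray(p, float)
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

def prospect_value(outcomes, probs):
    """Valuation of a simple prospect (no rank-dependent cumulation)."""
    return float(np.sum(weight(probs) * value(outcomes)))

# a small gamble: 50% chance to win $100, 50% chance to lose $100
print(prospect_value([100, -100], [0.5, 0.5]))   # negative: losses loom larger
```
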
{
"docid": "a110e4872095e8daf0974fa9cb051c39",
"text": "The present study provides the first evidence that illiteracy can be reliably predicted from standard mobile phone logs. By deriving a broad set of mobile phone indicators reflecting users’ financial, social and mobility patterns we show how supervised machine learning can be used to predict individual illiteracy in an Asian developing country, externally validated against a large-scale survey. On average the model performs 10 times better than random guessing with a 70% accuracy. Further we show how individual illiteracy can be aggregated and mapped geographically at cell tower resolution. Geographical mapping of illiteracy is crucial to know where the illiterate people are, and where to put in resources. In underdeveloped countries such mappings are often based on out-dated household surveys with low spatial and temporal resolution. One in five people worldwide struggle with illiteracy, and it is estimated that illiteracy costs the global economy more than $1 trillion dollars each year [1]. These results potentially enable costeffective, questionnaire-free investigation of illiteracy-related questions on an unprecedented scale.",
"title": ""
},
{
"docid": "934875351d5fa0c9b5c7499ca13727ab",
"text": "Computation of the simplicial complexes of a large point cloud often relies on extracting a sample, to reduce the associated computational burden. The study considers sampling critical points of a Morse function associated to a point cloud, to approximate the Vietoris-Rips complex or the witness complex and compute persistence homology. The effectiveness of the novel approach is compared with the farthest point sampling, in a context of classifying human face images into ethnics groups using persistence homology.",
"title": ""
},
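
The passage above compares critical-point sampling against farthest point sampling for approximating Vietoris-Rips or witness complexes. For reference, the baseline greedy farthest point (maxmin) sampler is easy to state; the sketch below is that baseline only, applied to a toy noisy-circle point cloud.

```python
import numpy as np

def farthest_point_sampling(X, m, seed=0):
    """Greedy farthest-point (maxmin) sampling of m landmarks from a point
    cloud X of shape (n, d)."""
    rng = np.random.default_rng(seed)
    n = len(X)
    chosen = [int(rng.integers(n))]
    d = np.linalg.norm(X - X[chosen[0]], axis=1)   # distance to the chosen set
    for _ in range(m - 1):
        nxt = int(d.argmax())                      # farthest from current landmarks
        chosen.append(nxt)
        d = np.minimum(d, np.linalg.norm(X - X[nxt], axis=1))
    return np.array(chosen)

# toy usage: subsample a noisy circle before building a Vietoris-Rips complex
t = np.random.uniform(0, 2 * np.pi, 2000)
X = np.column_stack([np.cos(t), np.sin(t)]) + np.random.normal(0, 0.05, (2000, 2))
landmarks = farthest_point_sampling(X, 50)
```
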
{
"docid": "5329edd5259cf65d62922b17765fce0d",
"text": "T emergence of software-based platforms is shifting competition toward platform-centric ecosystems, although this phenomenon has not received much attention in information systems research. Our premise is that the coevolution of the design, governance, and environmental dynamics of such ecosystems influences how they evolve. We present a framework for understanding platform-based ecosystems and discuss five broad research questions that present significant research opportunities for contributing homegrown theory about their evolutionary dynamics to the information systems discipline and distinctive information technology-artifactcentric contributions to the strategy, economics, and software engineering reference disciplines.",
"title": ""
},
{
"docid": "9cbb3369c6276e74c60d2f5c01aa9778",
"text": "This paper presents some of the ground mobile robots under development at the Robotics and Mechanisms Laboratory (RoMeLa) at Virginia Tech that use biologically inspired novel locomotion strategies. By studying nature's models and then imitating or taking inspiration from these designs and processes, we apply and implement new ways for mobile robots to move. Unlike most ground mobile robots that use conventional means of locomotion such as wheels or tracks, these robots display unique mobility characteristics that make them suitable for certain environments where conventional ground robots have difficulty moving. These novel ground robots include; the whole skin locomotion robot inspired by amoeboid motility mechanisms, the three-legged walking machine STriDER (Self-excited Tripedal Dynamic Experimental Robot) that utilizes the concept of actuated passive-dynamic locomotion, the hexapod robot MARS (Multi Appendage Robotic System) that uses dry-adhesive “gecko feet” for walking in zero-gravity environments, the humanoid robot DARwIn (Dynamic Anthropomorphic Robot with Intelligence) that uses dynamic bipedal gaits, and the high mobility robot IMPASS (Intelligent Mobility Platform with Active Spoke System) that uses a novel wheel-leg hybrid locomotion strategy. Each robot and the novel locomotion strategies it uses are described, followed by a discussion of their capabilities and challenges.",
"title": ""
},
{
"docid": "b8471f4d02599bccfcfbbc04a756f0b6",
"text": "This paper describes the design and realization of an electrostatic actuated MEMS mirror operating at a resonance frequency of 23.5 KHz with a PLL feedback loop. The design is based upon a thorough understanding of the (non-linear) dynamical behavior of the mirror. Using an external position sensitive device (PSD) the proper working of the PLL is demonstrated. Next we study the possibility to replace the PSD sensor with an embedded capacitive phase-angle sensor. We show measurements of capacitance changes with large parasitic influences while actuating the mirror in a feed forward mode. This demonstrates the feasibility of a fully embedded control for a resonant scanning MEMS mirror.",
"title": ""
},
{
"docid": "0fad39b1f264193cd95d14ac7fbad10f",
"text": "A potential diagnostic pitfall in the histologic assessment of melanoma is the inability to recognize unusual melanoma variants. Of these, the more treacherous examples include the desmoplastic melanoma, the nevoid melanoma, the so-called ‘minimal-deviation melanoma,’ melanoma with prominent pigment synthesis or ‘animal-type melanoma,’ and the malignant blue nevus. Also problematic are the unusual phenotypic profiles seen in vertical growth phase melanomas; these include those tumors whose morphological peculiarities mimic cancers of nonmelanocytic lineage and those melanomas that express aberrant antigenic profiles not commonly associated with a melanocytic histogenesis. Metaplastic change in melanoma, balloon cell melanoma, signet-ring cell melanoma, myxoid melanoma, small cell melanoma and rhabdoid melanoma all have the potential to mimic metastatic and primary neoplasms of different lineage derivations. Abnormal immunohistochemical expression of CD 34, cytokeratins, epithelial membrane antigen, and smooth muscle markers as well as the deficient expression of S100 protein and melanocyte lineage-specific markers such as GP100 protein (ie HMB-45 antibody) and A103 (ie Melan-A) also present confusing diagnostic challenges. In this review, we will discuss in some detail certain of these novel clinicopathologic types of melanoma, as well as the abnormal phenotypic expressions seen in vertical growth phase melanoma.",
"title": ""
},
{
"docid": "2a2bbb4d749ec9cde3625215db2a004f",
"text": "A tropism is a growth movement exhibited by part of an organism in response to a unidirectional stimulus. When the stimulus is due to the same organism which displays the tropism, or to a neighbouring organism of the same species, the oriented growth which occurs is an autotropism. For the purpose of this review autotropic responses are divided into two main groups. One group arises from interactions of neighbouring spores and affects the point of emergence and initial direction of growth of each germ-tube; the other group arises from interactions of neighbouring somatic hyphae and can result in the orientated growth of hyphae to give the characteristic growth pattern of a fungal colony grown on a solid medium. In the simplest sense there is an interaction of two spores or two hyphae but these responses may be modified by the size or density of the population to which the spores or hyphae belong and also by other environmental conditions. In the broadest sense autotropism can be investigated between neighbouring spore suspensions (or hyphal populations) of the same species. Many autotropic responses in fungi have been attributed to the production of labile metabolites to which germ-tubes and mature hyphae react in a negative chemotropic manner. Whilst such metabolites may play a major part in tropic responses between two spores which are in contact or slightly separated it is intended to point out that there is",
"title": ""
},
{
"docid": "e31af9137176dd39efe0a9e286dd981b",
"text": "This paper presents a novel automated procedure for discovering expressive shape specifications for sophisticated functional data structures. Our approach extracts potential shape predicates based on the definition of constructors of arbitrary user-defined inductive data types, and combines these predicates within an expressive first-order specification language using a lightweight data-driven learning procedure. Notably, this technique requires no programmer annotations, and is equipped with a type-based decision procedure to verify the correctness of discovered specifications. Experimental results indicate that our implementation is both efficient and effective, capable of automatically synthesizing sophisticated shape specifications over a range of complex data types, going well beyond the scope of existing solutions.",
"title": ""
},
{
"docid": "4517d9de6951e8d1f2a0033029a87c0f",
"text": "This article surveys the recent developments in computational methods for second order fully nonlinear partial differential equations (PDEs), a relatively new subarea within numerical PDEs. Due to their ever increasing importance in mathematics itself (e.g., differential geometry and PDEs) and in many scientific and engineering fields (e.g., astrophysics, geostrophic fluid dynamics, grid generation, image processing, optimal transport, meteorology, mathematical finance, and optimal control), numerical solutions to fully nonlinear second order PDEs have garnered a great deal of interest from the numerical PDE and scientific communities. Significant progress has been made for this class of problems in the past few years, but many problems still remain open. This article intends to introduce these current advancements and new results to the SIAM community and generate more interest in numerical methods for fully nonlinear PDEs.",
"title": ""
},
{
"docid": "50ee4a43d4d261c6ebc1c06ba6be4db3",
"text": "We run a selection of algorithms on two state-of-the-art 5-qubit quantum computers that are based on different technology platforms. One is a publicly accessible superconducting transmon device (www.\n\n\nRESEARCH\nibm.com/ibm-q) with limited connectivity, and the other is a fully connected trapped-ion system. Even though the two systems have different native quantum interactions, both can be programed in a way that is blind to the underlying hardware, thus allowing a comparison of identical quantum algorithms between different physical systems. We show that quantum algorithms and circuits that use more connectivity clearly benefit from a better-connected system of qubits. Although the quantum systems here are not yet large enough to eclipse classical computers, this experiment exposes critical factors of scaling quantum computers, such as qubit connectivity and gate expressivity. In addition, the results suggest that codesigning particular quantum applications with the hardware itself will be paramount in successfully using quantum computers in the future.",
"title": ""
},
{
"docid": "bc4ed7182695c62d7a2c8af82cdeb9fc",
"text": "The work in this paper is driven by the question how to exploit the temporal cues available in videos for their accurate classification, and for human action recognition in particular? Thus far, the vision community has focused on spatio-temporal approaches with fixed temporal convolution kernel depths. We introduce a new temporal layer that models variable temporal convolution kernel depths. We embed this new temporal layer in our proposed 3D CNN. We extend the DenseNet architecture which normally is 2D with 3D filters and pooling kernels. We name our proposed video convolutional network ‘Temporal 3D ConvNet’ (T3D) and its new temporal layer ‘Temporal Transition Layer’ (TTL). Our experiments show that T3D outperforms the current state-of-the-art methods on the HMDB51, UCF101 and Kinetics datasets. The other issue in training 3D ConvNets is about training them from scratch with a huge labeled dataset to get a reasonable performance. So the knowledge learned in 2D ConvNets is completely ignored. Another contribution in this work is a simple and effective technique to transfer knowledge from a pre-trained 2D CNN to a randomly initialized 3D CNN for a stable weight initialization. This allows us to significantly reduce the number of training samples for 3D CNNs. Thus, by finetuning this network, we beat the performance of generic and recent methods in 3D CNNs, which were trained on large video datasets, e.g. Sports-1M, and finetuned on the target datasets, e.g. HMDB51/UCF101. The T3D codes will be released soon1.",
"title": ""
}
] |
scidocsrr
|
dfbc38a49a35e0519f42b8da546c8212
|
Natural Language Communication with Robots
|
[
{
"docid": "485cda7203863d2ff0b2070ca61b1126",
"text": "Interestingly, understanding natural language that you really wait for now is coming. It's significant to wait for the representative and beneficial books to read. Every book that is provided in better way and utterance will be expected by many peoples. Even you are a good reader or not, feeling to read this book will always appear when you find it. But, when you feel hard to find it as yours, what to do? Borrow to your friends and don't know when to give back it to her or him.",
"title": ""
}
] |
[
{
"docid": "d18c3e9f7b77a130dc7b7d76a3e2bbe4",
"text": "Laser grooving process is crucial to thin chip strength. Much of paper has been put on the mechanism of laser grooving, but only few investigations were taken for chip strength enhancement. In this paper, thin chip (less than 100um chip thickness) is adopted, and the experiment of laser grooving process is carried out for mechanical characteristics evaluation. Based on experiment results, the optimal process condition has been determined. A notable improvement for thin chip strength is achieved after process optimization.",
"title": ""
},
{
"docid": "c91cc6de1e26d9ac9b5ba03ba67fa9b9",
"text": "As in most of the renewable energy sources it is not possible to generate high voltage directly, the study of high gain dc-dc converters is an emerging area of research. This paper presents a high step-up dc-dc converter based on current-fed Cockcroft-Walton multiplier. This converter not only steps up the voltage gain but also eliminates the use of high frequency transformer which adds to cost and design complexity. N-stage Cockcroft-Walton has been utilized to increase the voltage gain in place of a transformer. This converter also provides dual input operation, interleaved mode and maximum power point tracking control (if solar panel is used as input). This converter is utilized for resistive load and a pulsed power supply and the effect is studied in high voltage application. Simulation has been performed by designing a converter of 450 W, 400 V with single source and two stage of Cockcroft-Walton multiplier and interleaved mode of operation is performed. Design parameters as well as simulation results are presented and verified in this paper.",
"title": ""
},
{
"docid": "8a6a26094a9752010bb7297ecc80cd15",
"text": "This paper provides standard instructions on how to protect short text messages with one-time pad encryption. The encryption is performed with nothing more than a pencil and paper, but provides absolute message security. If properly applied, it is mathematically impossible for any eavesdropper to decrypt or break the message without the proper key.",
"title": ""
},
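
The passage above describes pencil-and-paper one-time pad encryption of short text messages. The sketch below mirrors the letter-by-letter modulo-26 variant in code; many field systems instead convert text to digits and add modulo 10, and of course the point of the paper is that no computer is required at all.

```python
import secrets
import string

ALPHA = string.ascii_uppercase

def make_key(length):
    """Truly random key, at least as long as the message, used once and destroyed."""
    return "".join(secrets.choice(ALPHA) for _ in range(length))

def encrypt(plaintext, key):
    """Letter-by-letter modulo-26 addition (paper-and-pencil style)."""
    pt = [c for c in plaintext.upper() if c in ALPHA]
    return "".join(ALPHA[(ALPHA.index(p) + ALPHA.index(k)) % 26]
                   for p, k in zip(pt, key))

def decrypt(ciphertext, key):
    """Modulo-26 subtraction with the same one-time key."""
    return "".join(ALPHA[(ALPHA.index(c) - ALPHA.index(k)) % 26]
                   for c, k in zip(ciphertext, key))

msg = "MEET AT DAWN"
key = make_key(len(msg))
ct = encrypt(msg, key)
print(ct, decrypt(ct, key))
```
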
{
"docid": "6b236f1e123dd27e7c52392e8efa500d",
"text": "An ordered probit regression model estimated using 15 years’ data is used to model English league football match results. As well as past match results data, the significance of the match for end-ofseason league outcomes; the involvement of the teams in cup competition; the geographical distance between the two teams’ home towns; and the average attendances of the two teams all contribute to the model’s performance. The model is used to test the weak-form efficiency of prices in the fixedodds betting market, and betting strategies with a positive expected return are identified.",
"title": ""
},
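
The passage above estimates an ordered probit model for match results (away win / draw / home win). The sketch below writes out the ordered-probit negative log-likelihood directly and fits it by maximum likelihood on simulated data; the single "form difference" covariate and the cut-point parameterization are assumptions for illustration, not the paper's specification.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def ordered_probit_nll(params, X, y):
    """Negative log-likelihood of an ordered probit with outcomes
    0 = away win, 1 = draw, 2 = home win and cut-points c0 < c1."""
    beta = params[:X.shape[1]]
    c0, gap = params[-2], np.exp(params[-1])   # enforce c1 = c0 + exp(.) > c0
    cuts = np.array([-np.inf, c0, c0 + gap, np.inf])
    xb = X @ beta
    p = norm.cdf(cuts[y + 1] - xb) - norm.cdf(cuts[y] - xb)
    return -np.sum(np.log(np.clip(p, 1e-12, None)))

# toy data: one covariate (e.g. difference in recent form), 500 matches
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 1))
latent = 0.8 * X[:, 0] + rng.normal(size=500)
y = np.digitize(latent, [-0.5, 0.5])          # categories 0, 1, 2

res = minimize(ordered_probit_nll, x0=np.zeros(X.shape[1] + 2),
               args=(X, y), method="BFGS")
print(res.x)   # [beta, c0, log(c1 - c0)]
```
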
{
"docid": "974daec8be06ab2c386257751a69e6e4",
"text": "Inspired by recent successes of deep learning in computer vision, we propose a novel framework for encoding time series as different types of images, namely, Gramian Angular Summation/Difference Fields (GASF/GADF) and Markov Transition Fields (MTF). This enables the use of techniques from computer vision for time series classification and imputation. We used Tiled Convolutional Neural Networks (tiled CNNs) on 20 standard datasets to learn high-level features from the individual and compound GASF-GADF-MTF images. Our approaches achieve highly competitive results when compared to nine of the current best time series classification approaches. Inspired by the bijection property of GASF on 0/1 rescaled data, we train Denoised Auto-encoders (DA) on the GASF images of four standard and one synthesized compound dataset. The imputation MSE on test data is reduced by 12.18%-48.02% when compared to using the raw data. An analysis of the features and weights learned via tiled CNNs and DAs explains why the approaches work.",
"title": ""
},
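
The passage above encodes time series as Gramian Angular Fields and Markov Transition Fields before applying CNNs. The sketch below computes a basic GASF/GADF and an MTF for a toy series, following the definitions summarized in the abstract (rescale to [-1, 1], take angles phi = arccos(x), then cos(phi_i + phi_j) or sin(phi_i - phi_j)); the bin count and the example series are arbitrary.

```python
import numpy as np

def gramian_angular_field(x, method="summation"):
    """Encode a 1-D series as a GASF or GADF image."""
    x = np.asarray(x, float)
    x = 2 * (x - x.min()) / (x.max() - x.min()) - 1      # rescale to [-1, 1]
    phi = np.arccos(np.clip(x, -1, 1))
    if method == "summation":
        return np.cos(phi[:, None] + phi[None, :])       # GASF
    return np.sin(phi[:, None] - phi[None, :])           # GADF

def markov_transition_field(x, n_bins=8):
    """Markov Transition Field: quantile-bin the series, estimate the
    first-order transition matrix, and spread it over all (i, j) pairs."""
    x = np.asarray(x, float)
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1)[1:-1])
    q = np.digitize(x, edges)                            # quantile index per point
    W = np.zeros((n_bins, n_bins))
    for a, b in zip(q[:-1], q[1:]):
        W[a, b] += 1
    W /= np.clip(W.sum(axis=1, keepdims=True), 1, None)  # row-normalize
    return W[q[:, None], q[None, :]]

series = np.sin(np.linspace(0, 6 * np.pi, 96)) + 0.1 * np.random.randn(96)
gasf = gramian_angular_field(series)                     # (96, 96) image
mtf = markov_transition_field(series)
```
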
{
"docid": "7ffaedeabffcc9816d1eb83a4e4cdfd0",
"text": "In this paper, we propose a new method for calculating the output layer in neural machine translation systems. The method is based on predicting a binary code for each word and can reduce computation time/memory requirements of the output layer to be logarithmic in vocabulary size in the best case. In addition, we also introduce two advanced approaches to improve the robustness of the proposed model: using error-correcting codes and combining softmax and binary codes. Experiments on two English ↔ Japanese bidirectional translation tasks show proposed models achieve BLEU scores that approach the softmax, while reducing memory usage to the order of less than 1/10 and improving decoding speed on CPUs by x5 to x10.",
"title": ""
},
{
"docid": "503951e241d69d6ca21392807141ad45",
"text": "The authors examined the efficacy, speed, and incidence of symptom worsening for 3 treatments of posttraumatic stress disorder (PTSD): prolonged exposure, relaxation training, or eye movement desensitization and reprocessing (EMDR; N = 60). Treaments did not differ in attrition, in the incidence of symptom worsening, or in their effects on numbing and hyperarousal symptoms. Compared with EMDR and relaxation training, exposure therapy (a) produced significantly larger reductions in avoidance and reexperiencing symptoms, (b) tended to be faster at reducing avoidance, and (c) tended to yield a greater proportion of participants who no longer met criteria for PTSD after treatment. EMDR and relaxation did not differ from one another in speed or efficacy.",
"title": ""
},
{
"docid": "5da804fa4c1474e27a1c91fcf5682e20",
"text": "We present an overview of Candide, a system for automatic translat ion of French text to English text. Candide uses methods of information theory and statistics to develop a probabili ty model of the translation process. This model, which is made to accord as closely as possible with a large body of French and English sentence pairs, is then used to generate English translations of previously unseen French sentences. This paper provides a tutorial in these methods, discussions of the training and operation of the system, and a summary of test results. 1. I n t r o d u c t i o n Candide is an experimental computer program, now in its fifth year of development at IBM, for translation of French text to Enghsh text. Our goal is to perform fuRy-automatic, high-quality text totext translation. However, because we are still far from achieving this goal, the program can be used in both fully-automatic and translator 's-assistant modes. Our approach is founded upon the statistical analysis of language. Our chief tools axe the source-channel model of communication, parametric probabili ty models of language and translation, and an assortment of numerical algorithms for training such models from examples. This paper presents elementary expositions of each of these ideas, and explains how they have been assembled to produce Caadide. In Section 2 we introduce the necessary ideas from information theory and statistics. The reader is assumed to know elementary probabili ty theory at the level of [1]. In Sections 3 and 4 we discuss our language and translation models. In Section 5 we describe the operation of Candide as it translates a French document. In Section 6 we present results of our internal evaluations and the AB.PA Machine Translation Project evaluations. Section 7 is a summary and conclusion. 2 . Stat is t ical Trans la t ion Consider the problem of translating French text to English text. Given a French sentence f , we imagine that it was originally rendered as an equivalent Enghsh sentence e. To obtain the French, the Enghsh was t ransmit ted over a noisy communication channel, which has the curious property that English sentences sent into it emerge as their French translations. The central assumption of Candide's design is that the characteristics of this channel can be determined experimentally, and expressed mathematically. *Current address: Renaissance Technologies, Stony Brook, NY ~ English-to-French I f e Channel \" _[ French-to-English -] Decoder 6 Figure 1: The Source-Channel Formalism of Translation. Here f is the French text to be translated, e is the putat ive original English rendering, and 6 is the English translation. This formalism can be exploited to yield French-to-English translations as follows. Let us write P r (e I f ) for the probability that e was the original English rendering of the French f. Given a French sentence f, the problem of automatic translation reduces to finding the English sentence tha t maximizes P.r(e I f) . That is, we seek 6 = argmsx e Pr (e I f) . By virtue of Bayes' Theorem, we have = argmax Pr(e If ) = argmax Pr(f I e)Pr(e) (1) e e The term P r ( f l e ) models the probabili ty that f emerges from the channel when e is its input. We call this function the translation model; its domain is all pairs (f, e) of French and English word-strings. The term Pr (e ) models the a priori probability that e was supp led as the channel input. We call this function the language model. 
Each of these fac tors the translation model and the language model independent ly produces a score for a candidate English translat ion e. The translation model ensures that the words of e express the ideas of f, and the language model ensures that e is a grammatical sentence. Candide sehcts as its translat ion the e that maximizes their product. This discussion begs two impor tant questions. First , where do the models P r ( f [ e) and Pr (e ) come from? Second, even if we can get our hands on them, how can we search the set of all English strings to find 6? These questions are addressed in the next two sections. 2.1. P robab i l i ty Models We begin with a brief detour into probabili ty theory. A probability model is a mathematical formula that purports to express the chance of some observation. A parametric model is a probability model with adjustable parameters, which can be changed to make the model bet ter match some body of data. Let us write c for a body of da ta to be modeled, and 0 for a vector of parameters. The quanti ty Prs (c ) , computed according to some formula involving c and 0, is called the hkelihood 157 [Human Language Technology, Plainsboro, 1994]",
"title": ""
},
{
"docid": "e76a9cef74788905d3d8f5659c2bfca2",
"text": "In this paper, we present a novel configuration for realizing monolithic substrate integrated waveguide (SIW)-based phased antenna arrays using Ferrite low-temperature cofired ceramic (LTCC) technology. Unlike the current common schemes for realizing SIW phased arrays that rely on surface-mount component (p-i-n diodes, etc.) for controlling the phase of the individual antenna elements, here the phase is tuned by biasing of the ferrite filling of the SIW. This approach eliminates the need for mounting of any additional RF components and enables seamless monolithic integration of phase shifters and antennas in SIW technology. As a proof of concept, a two-element slotted SIW-based phased array is designed, fabricated, and measured. The prototype exhibits a gain of 4.9 dBi at 13.2 GHz and a maximum E-plane beam-scanning of ±28° using external windings for biasing the phase shifters. Moreover, the array can achieve a maximum beam-scanning of ±19° when biased with small windings that are embedded in the package. This demonstration marks the first time a fully monolithic SIW-based phased array is realized in Ferrite LTCC technology and paves the way for future larger size implementations.",
"title": ""
},
{
"docid": "dcc7f48a828556808dc435deda5c1281",
"text": "Object detection and segmentation represents the basis for many tasks in computer and machine vision. In biometric recognition systems the detection of the region-of-interest (ROI) is one of the most crucial steps in the overall processing pipeline, significantly impacting the performance of the entire recognition system. Existing approaches to ear detection, for example, are commonly susceptible to the presence of severe occlusions, ear accessories or variable illumination conditions and often deteriorate in their performance if applied on ear images captured in unconstrained settings. To address these shortcomings, we present in this paper a novel ear detection technique based on convolutional encoder-decoder networks (CEDs). For our technique, we formulate the problem of ear detection as a two-class segmentation problem and train a convolutional encoder-decoder network based on the SegNet architecture to distinguish between image-pixels belonging to either the ear or the non-ear class. The output of the network is then post-processed to further refine the segmentation result and return the final locations of the ears in the input image. Different from competing techniques from the literature, our approach does not simply return a bounding box around the detected ear, but provides detailed, pixel-wise information about the location of the ears in the image. Our experiments on a dataset gathered from the web (a.k.a. in the wild) show that the proposed technique ensures good detection results in the presence of various covariate factors and significantly outperforms the existing state-of-the-art.",
"title": ""
},
{
"docid": "92b4d9c69969c66a1d523c38fd0495a4",
"text": "A level designer typically creates the levels of a game to cater for a certain set of objectives, or mission. But in procedural content generation, it is common to treat the creation of missions and the generation of levels as two separate concerns. This often leads to generic levels that allow for various missions. However, this also creates a generic impression for the player, because the potential for synergy between the objectives and the level is not utilised. Following up on the mission-space generation concept, as described by Dormans [5], we explore the possibilities of procedurally generating a level from a designer-made mission. We use a generative grammar to transform a mission into a level in a mixed-initiative design setting. We provide two case studies, dungeon levels for a rogue-like game, and platformer levels for a metroidvania game. The generators differ in the way they use the mission to generate the space, but are created with the same tool for content generation based on model transformations. We discuss the differences between the two generation processes and compare it with a parameterized approach.",
"title": ""
},
{
"docid": "57c0db8c200b94baa28779ff4f47d630",
"text": "The development of the Web services lets many users easily provide their opinions recently. Automatic summarization of enormous sentiments has been expected. Intuitively, we can summarize a review with traditional document summarization methods. However, such methods have not well-discussed “aspects”. Basically, a review consists of sentiments with various aspects. We summarize reviews for each aspect so that the summary presents information without biasing to a specific topic. In this paper, we propose a method for multiaspects review summarization based on evaluative sentence extraction. We handle three features; ratings of aspects, the tf -idf value, and the number of mentions with a similar topic. For estimating the number of mentions, we apply a clustering algorithm. By integrating these features, we generate a more appropriate summary. The experiment results show the effectiveness of our method.",
"title": ""
},
{
"docid": "6ea91574db57616682cf2a9608b0ac0b",
"text": "METHODOLOGY AND PRINCIPAL FINDINGS\nOleuropein promoted cultured human follicle dermal papilla cell proliferation and induced LEF1 and Cyc-D1 mRNA expression and β-catenin protein expression in dermal papilla cells. Nuclear accumulation of β-catenin in dermal papilla cells was observed after oleuropein treatment. Topical application of oleuropein (0.4 mg/mouse/day) to C57BL/6N mice accelerated the hair-growth induction and increased the size of hair follicles in telogenic mouse skin. The oleuropein-treated mouse skin showed substantial upregulation of Wnt10b, FZDR1, LRP5, LEF1, Cyc-D1, IGF-1, KGF, HGF, and VEGF mRNA expression and β-catenin protein expression.\n\n\nCONCLUSIONS AND SIGNIFICANCE\nThese results demonstrate that topical oleuroepin administration induced anagenic hair growth in telogenic C57BL/6N mouse skin. The hair-growth promoting effect of oleuropein in mice appeared to be associated with the stimulation of the Wnt10b/β-catenin signaling pathway and the upregulation of IGF-1, KGF, HGF, and VEGF gene expression in mouse skin tissue.",
"title": ""
},
{
"docid": "b96a571e57a3121746d841bed4af4dbe",
"text": "The Open Provenance Model is a model of provenance that is designed to meet the following requirements: (1) To allow provenance information to be exchanged between systems, by means of a compatibility layer based on a shared provenance model. (2) To allow developers to build and share tools that operate on such a provenance model. (3) To define provenance in a precise, technology-agnostic manner. (4) To support a digital representation of provenance for any “thing”, whether produced by computer systems or not. (5) To allow multiple levels of description to coexist. (6) To define a core set of rules that identify the valid inferences that can be made on provenance representation. This document contains the specification of the Open Provenance Model (v1.1) resulting from a community effort to achieve inter-operability in the Provenance Challenge series.",
"title": ""
},
{
"docid": "6706ad68059944988c41ba96e6d67f7c",
"text": "This paper investigates the motives, behavior, and characteristics shaping mutual fund managers’ willingness to incorporate Environmental, Social and Governance (ESG) issues into investment decision making. Using survey evidence from fund managers from five different countries, we demonstrate that this predisposition is the stronger, the shorter their average forecasting horizon and the higher their level of reliance on business risk in portfolio management is. We also find that the propensity to incorporate ESG factors is positively related to an increasing level of risk aversion, an increasing importance of salary change and senior management approval/disapproval as motivating factors as well as length of professional experience in current fund and increasing significance of assessment by superiors in remuneration. Overall, our evidence suggests that ESG diligence among fund managers serves mainly as a method for mitigating risk and is typically motivated by herding; it is much less important as a tool for additional value creation. The prevalent use of ESG criteria in mitigating risk is in contrast with traditional approach, but it is in line with behavioral finance theory. Additionally, our results also show a strong difference in the length of the forecasting horizon between continental European and Anglo-Saxon fund managers.",
"title": ""
},
{
"docid": "cfff07dbbc363a3e64b94648e19f2e4b",
"text": "Nitrogen (N) starvation and excess have distinct effects on N uptake and metabolism in poplars, but the global transcriptomic changes underlying morphological and physiological acclimation to altered N availability are unknown. We found that N starvation stimulated the fine root length and surface area by 54 and 49%, respectively, decreased the net photosynthetic rate by 15% and reduced the concentrations of NH4+, NO3(-) and total free amino acids in the roots and leaves of Populus simonii Carr. in comparison with normal N supply, whereas N excess had the opposite effect in most cases. Global transcriptome analysis of roots and leaves elucidated the specific molecular responses to N starvation and excess. Under N starvation and excess, gene ontology (GO) terms related to ion transport and response to auxin stimulus were enriched in roots, whereas the GO term for response to abscisic acid stimulus was overrepresented in leaves. Common GO terms for all N treatments in roots and leaves were related to development, N metabolism, response to stress and hormone stimulus. Approximately 30-40% of the differentially expressed genes formed a transcriptomic regulatory network under each condition. These results suggest that global transcriptomic reprogramming plays a key role in the morphological and physiological acclimation of poplar roots and leaves to N starvation and excess.",
"title": ""
},
{
"docid": "a1915a869616b9c8c2547f66ec89de13",
"text": "The harvest yield in vineyards can vary significantly from year to year and also spatially within plots due to variations in climate, soil conditions and pests. Fine grained knowledge of crop yields can allow viticulturists to better manage their vineyards. The current industry practice for yield prediction is destructive, expensive and spatially sparse - during the growing season sparse samples are taken and extrapolated to determine overall yield. We present an automated method that uses computer vision to detect and count grape berries. The method could potentially be deployed across large vineyards taking measurements at every vine in a non-destructive manner. Our berry detection uses both shape and visual texture and we can demonstrate detection of green berries against a green leaf background. Berry detections are counted and the eventual harvest yield is predicted. Results are presented for 224 vines (over 450 meters) of two different grape varieties and compared against the actual harvest yield as groundtruth. We calibrate our berry count to yield and find that we can predict yield of individual vineyard rows to within 9.8% of actual crop weight.",
"title": ""
},
{
"docid": "d51f6c74b69716db5e748ad07577aba6",
"text": "Feature selection and feature weighting are useful techniques for improving the classification accuracy of K-nearest-neighbor (K-NN) rule. The term feature selection refers to algorithms that select the best subset of the input feature set. In feature weighting, each feature is multiplied by a weight value proportional to the ability of the feature to distinguish pattern classes. In this paper, a novel hybrid approach is proposed for simultaneous feature selection and feature weighting of K-NN rule based on Tabu Search (TS) heuristic. The proposed TS heuristic in combination with K-NN classifier is compared with several classifiers on various available data sets. The results have indicated a significant improvement in the performance in classification accuracy. The proposed TS heuristic is also compared with various feature selection algorithms. Experiments performed revealed that the proposed hybrid TS heuristic is superior to both simple TS and sequential search algorithms. We also present results for the classification of prostate cancer using multispectral images, an important problem in biomedicine. 2006 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "d8c04293459a66db4332696aea7791ce",
"text": "Techniques known as Nonlinear Set Membership prediction, Lipschitz Interpolation or Kinky Inference are approaches to machine learning that utilise presupposed Lipschitz properties to compute inferences over unobserved function values. Provided a bound on the true best Lipschitz constant of the target function is known a priori they offer convergence guarantees as well as bounds around the predictions. Considering a more general setting that builds on Hölder continuity relative to pseudo-metrics, we propose an online method for estimating the Hölder constant online from function value observations that possibly are corrupted by bounded observational errors. Utilising this to compute adaptive parameters within a kinky inference rule gives rise to a nonparametric machine learning method, for which we establish strong universal approximation guarantees. That is, we show that our prediction rule can learn any continuous function in the limit of increasingly dense data to within a worst-case error bound that depends on the level of observational uncertainty. We apply our method in the context of nonparametric model-reference adaptive control (MRAC). Across a range of simulated aircraft roll-dynamics and performance metrics our approach outperforms recently proposed alternatives that were based on Gaussian processes and RBF-neural networks. For discrete-time systems, we provide stability guarantees for our learning-based controllers both for the batch and the online learning setting.",
"title": ""
},
{
"docid": "c68c5df29702e797b758474f4e8b137e",
"text": "Abstract—A miniaturized printed log-periodic fractal dipole antenna is proposed. Tree fractal structure is introduced in an antenna design and evolves the traditional Euclidean log-periodic dipole array into the log-periodic second-iteration tree-dipole array (LPT2DA) for the first time. Main parameters and characteristics of the proposed antenna are discussed. A fabricated proof-of-concept prototype of the proposed antenna is etched on a FR4 substrate with a relative permittivity of 4.4 and volume of 490 mm × 245 mm × 1.5 mm. The impedance bandwidth (measured VSWR < 2) of the fabricated antenna with approximate 40% reduction of traditional log-periodic dipole antenna is from 0.37 to 3.55GHz with a ratio of about 9.59 : 1. Both numerical and experimental results show that the proposed antenna has stable directional radiation patterns and apparently miniaturized effect, which are suitable for various ultra-wideband applications.",
"title": ""
}
] |
scidocsrr
|
aa66fe37dc37e0a1c1ad3db4262e5c93
|
Decision support in intermodal transport: A new research agenda
|
[
{
"docid": "061c44266819248f38711a12b72d99cb",
"text": "Intermodal freight transport has received an increased attention due to problems of road congestion, environmental concerns and traffic safety. A growing recognition of the strategic importance of speed and agility in the supply chain is forcing firms to reconsider traditional logistic services. As a consequence, research interest in intermodal freight transportation problems is growing. This paper provides an overview of planning decisions in intermodal freight transport and solution methods proposed in scientific literature. Planning problems are classified according to type of decision maker and decision level. General conclusions are given and subjects for further research are identified.",
"title": ""
}
] |
[
{
"docid": "07a718d6e7136e90dbd35ea18d6a5f11",
"text": "We discuss the importance of understanding psychological aspects of phishing, and review some recent findings. Given these findings, we critique some commonly used security practices and suggest and review alternatives, including educational approaches. We suggest a few techniques that can be used to assess and remedy threats remotely, without requiring any user involvement. We conclude by discussing some approaches to anticipate the next wave of threats, based both on psychological and technical insights. 1 What Will Consumers Believe? There are several reasons why it is important to understand what consumers will find believable. First of all, it is crucial for service providers to know their vulnerabilities (and those of their clients) in order to assess their exposure to risks and the associated liabilities. Second, recognizing what the vulnerabilities are translates into knowing from where the attacks are likely to come; this allows for suitable technical security measures to be deployed to detect and protect against attacks of concern. It also allows for a proactive approach in which the expected vulnerabilities are minimized by the selection and deployment of appropriate email and web templates, and the use of appropriate manners of interaction. Finally, there are reasons for why understanding users is important that are not directly related to security: Knowing what consumers will believe—and will not believe—means a better ability to reach the consumers with information they do not expect, whether for reasons of advertising products or communicating alerts. Namely, given the mimicry techniques used by phishers, there is a risk that consumers incorrectly classify legitimate messages as attempts to attack them. Being aware of potential pitfalls may guide decisions that facilitate communication. While technically knowledgeable, specialists often make the mistake of believing that security measures that succeed in protecting them are sufficient to protect average consumers. For example, it was for a long time commonly held among security practitioners that the widespread deployment of SSL would eliminate phishing once consumers become aware of the risks and nature of phishing attacks. This, very clearly, has not been the case, as supported both by reallife observations and by experiments [48]. This can be ascribed to a lack of attention to security among typical users [47, 35], but also to inconsistent or inappropriate security education [12]— whether implicit or not. An example of a common procedure that indirectly educates user is the case of lock symbols. Many financial institutions place a lock symbol in the content portion of the login page to indicate that a secure connection will be established as the user submits his credentials. This is to benefit from the fact that users have been educated to equate an SSL lock with a higher level of security. However, attackers may also place lock icons in the content of the page, whether they intend to establish an SSL connection or not. Therefore, the use of the lock",
"title": ""
},
{
"docid": "cd094cc790b51c34ce315b59ae08b6d9",
"text": "We present a framework and supporting algorithms to automate the use of temporal data reprojection as a general tool for optimizing procedural shaders. Although the general strategy of caching and reusing expensive intermediate shading calculations across consecutive frames has previously been shown to provide an effective trade-off between speed and accuracy, the critical choices of what to reuse and at what rate to refresh cached entries have been left to a designer. The fact that these decisions require a deep understanding of a procedure's semantic structure makes it challenging to select optimal candidates among possibly hundreds of alternatives. Our automated approach relies on parametric models of the way possible caching decisions affect the shader's performance and visual fidelity. These models are trained using a sample rendering session and drive an interactive profiler in which the user can explore the error/performance trade-offs associated with incorporating temporal reprojection. We evaluate the proposed models and selection algorithm with a prototype system used to optimize several complex shaders and compare our approach to current alternatives.",
"title": ""
},
{
"docid": "277071a4a2dde56c13ca2be8abd4b73d",
"text": "Most state-of-the-art information extraction approaches rely on token-level labels to find the areas of interest in text. Unfortunately, these labels are time-consuming and costly to create, and consequently, not available for many real-life IE tasks. To make matters worse, token-level labels are usually not the desired output, but just an intermediary step. End-to-end (E2E) models, which take raw text as input and produce the desired output directly, need not depend on token-level labels. We propose an E2E model based on pointer networks, which can be trained directly on pairs of raw input and output text. We evaluate our model on the ATIS data set, MIT restaurant corpus and the MIT movie corpus and compare to neural baselines that do use token-level labels. We achieve competitive results, within a few percentage points of the baselines, showing the feasibility of E2E information extraction without the need for token-level labels. This opens up new possibilities, as for many tasks currently addressed by human extractors, raw input and output data are available, but not token-level labels.",
"title": ""
},
{
"docid": "93c928adef35a409acaa9b371a1498f3",
"text": "The acquisition of a new motor skill is characterized first by a short-term, fast learning stage in which performance improves rapidly, and subsequently by a long-term, slower learning stage in which additional performance gains are incremental. Previous functional imaging studies have suggested that distinct brain networks mediate these two stages of learning, but direct comparisons using the same task have not been performed. Here we used a task in which subjects learn to track a continuous 8-s sequence demanding variable isometric force development between the fingers and thumb of the dominant, right hand. Learning-associated changes in brain activation were characterized using functional MRI (fMRI) during short-term learning of a novel sequence, during short-term learning after prior, brief exposure to the sequence, and over long-term (3 wk) training in the task. Short-term learning was associated with decreases in activity in the dorsolateral prefrontal, anterior cingulate, posterior parietal, primary motor, and cerebellar cortex, and with increased activation in the right cerebellar dentate nucleus, the left putamen, and left thalamus. Prefrontal, parietal, and cerebellar cortical changes were not apparent with short-term learning after prior exposure to the sequence. With long-term learning, increases in activity were found in the left primary somatosensory and motor cortex and in the right putamen. Our observations extend previous work suggesting that distinguishable networks are recruited during the different phases of motor learning. While short-term motor skill learning seems associated primarily with activation in a cortical network specific for the learned movements, long-term learning involves increased activation of a bihemispheric cortical-subcortical network in a pattern suggesting \"plastic\" development of new representations for both motor output and somatosensory afferent information.",
"title": ""
},
{
"docid": "9eeadbdd055eb616aba72f29a89d25c1",
"text": "Warr's (1987) Vitamin Model was investigated in a representative sample of 1437 Dutch health care workers (i.e. nurses and nurses' aides). According to this model, it was hypothesized that three job characteristics (i.e. job demands, job autonomy, and workplace social support) are curvilinearly related with three key indicators of employee well-being (i.e. job satisfaction, job-related anxiety, and emotional exhaustion). Structural equation modelling (LISREL 8) was employed to test the comprehensive Vitamin Model. The results showed that the ®t of the non-linear model is superior to that of the linear model. Except for the relationship between job autonomy and emotional exhaustion, the curvilinear relationships followed the predicted U-shaped or inverted U-shaped curvilinear pattern. Moreover, it appeared that the three job characteristics are dierentially related with various indicators of employee well-being. In conclusion, this study partially supports the assertion of the Vitamin Model that non-linear relationships exist between job characteristics and employee well-being. # 1998 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "3f3a017d93588f19eb59a93ccd587902",
"text": "n this work we propose a novel Hough voting approach for the detection of free-form shapes in a 3D space, to be used for object recognition tasks in 3D scenes with a significant degree of occlusion and clutter. The proposed method relies on matching 3D features to accumulate evidence of the presence of the objects being sought in a 3D Hough space. We validate our proposal by presenting a quantitative experimental comparison with state-of-the-art methods as well as by showing how our method enables 3D object recognition from real-time stereo data.",
"title": ""
},
{
"docid": "e0d3a7e7e000c6704518763bf8dff8c8",
"text": "Integration of optical communication circuits directly into high-performance microprocessor chips can enable extremely powerful computer systems. A germanium photodetector that can be monolithically integrated with silicon transistor technology is viewed as a key element in connecting chip components with infrared optical signals. Such a device should have the capability to detect very-low-power optical signals at very high speed. Although germanium avalanche photodetectors (APD) using charge amplification close to avalanche breakdown can achieve high gain and thus detect low-power optical signals, they are universally considered to suffer from an intolerably high amplification noise characteristic of germanium. High gain with low excess noise has been demonstrated using a germanium layer only for detection of light signals, with amplification taking place in a separate silicon layer. However, the relatively thick semiconductor layers that are required in such structures limit APD speeds to about 10 GHz, and require excessively high bias voltages of around 25 V (ref. 12). Here we show how nanophotonic and nanoelectronic engineering aimed at shaping optical and electrical fields on the nanometre scale within a germanium amplification layer can overcome the otherwise intrinsically poor noise characteristics, achieving a dramatic reduction of amplification noise by over 70 per cent. By generating strongly non-uniform electric fields, the region of impact ionization in germanium is reduced to just 30 nm, allowing the device to benefit from the noise reduction effects that arise at these small distances. Furthermore, the smallness of the APDs means that a bias voltage of only 1.5 V is required to achieve an avalanche gain of over 10 dB with operational speeds exceeding 30 GHz. Monolithic integration of such a device into computer chips might enable applications beyond computer optical interconnects—in telecommunications, secure quantum key distribution, and subthreshold ultralow-power transistors.",
"title": ""
},
{
"docid": "67978cd2f94cabb45c1ea2c571cef4de",
"text": "Studies identifying oil shocks using structural vector autoregressions (VARs) reach different conclusions on the relative importance of supply and demand factors in explaining oil market fluctuations. This disagreement is due to different assumptions on the oil supply and demand elasticities that determine the identification of the oil shocks. We provide new estimates of oil-market elasticities by combining a narrative analysis of episodes of large drops in oil production with country-level instrumental variable regressions. When the estimated elasticities are embedded into a structural VAR, supply and demand shocks play an equally important role in explaining oil prices and oil quantities. Published by Elsevier B.V.",
"title": ""
},
{
"docid": "969c83b4880879f1137284f531c9f94a",
"text": "The extant literature on cross-national differences in approaches to corporate social responsibility (CSR) has mostly focused on developed countries. Instead, we offer two interrelated studies into corporate codes of conduct issued by developing country multinational enterprises (DMNEs). First, we analyse code adoption rates and code content through a mixed methods design. Second, we use multilevel analyses to examine country-level drivers of",
"title": ""
},
{
"docid": "f25afc147ceb24fb1aca320caa939f10",
"text": "Third party intervention is a typical response to destructive and persistent social conflict and comes in a number of different forms attended by a variety of issues. Mediation is a common form of intervention designed to facilitate a negotiated settlement on substantive issues between conflicting parties. Mediators are usually external to the parties and carry an identity, motives and competencies required to play a useful role in addressing the dispute. While impartiality is generally seen as an important prerequisite for effective intervention, biased mediators also appear to have a role to play. This article lays out the different forms of third-party intervention in a taxonomy of six methods, and proposes a contingency model which matches each type of intervention to the appropriate stage of conflict escalation. Interventions are then sequenced, in order to assist the parties in de-escalating and resolving the conflict. It must be pointed out, however, that the mixing of interventions with different power bases raises a number of ethical and moral questions about the use of reward and coercive power by third parties. The article then discusses several issues around the practice of intervention. It is essential to give these issues careful consideration if third-party methods are to play their proper and useful role in the wider process of conflict transformation. Psychology from the University of Saskatchewan and a Ph.D. in Social Psychology from the University of Michigan. He has provided training and consulting services to various organizations and international institutes in conflict management. His current interests include third party intervention, interactive conflict resolution, and reconciliation in situations of ethnopolitical conflict. A b s t r a c t A b o u t t h e C o n t r i b u t o r",
"title": ""
},
{
"docid": "c94400a5141e4bf5088ce7a79e3ee162",
"text": "Discourse Analysis Introduction. Discourse analysis is the study of language in use. It rests on the basic premise that language cannot be understood without reference to the context, both linguistic and extra-linguistic, in which it is used. It draws from the findings and methodologies of a wide range of fields, such as anthropology, philosophy, sociology, social and cognitive psychology, and artificial intelligence. It is itself a broad field comprised of a large number of linguistic subfields and approaches, including speech act theory, conversation analysis, pragmatics, and the ethnography of speaking. At the same time, the lines between certain linguistic subfields, in particular psycholinguistics, anthropological linguistics, and cognitive linguistics and discourse analysis overlap, and approaches to the study of discourse are informed by these subfields, and in many cases findings are independently corroborated. As a very interdisciplinary approach, the boundaries of this field are fuzzy. 1 The fundamental assumption underlying all approaches to discourse analysis is that language must be studied as it is used, in its context of production, and so the object of analysis is very rarely in the form of a sentence. Instead, written or spoken texts, usually larger than one sentence or one utterance, provide the data. In other words, the discourse analyst works with naturally occurring corpora, and with such corpora come a wide variety of features such as hesitations, non-standard forms, self-corrections, repetitions, incomplete clauses, words, and so—all linguistic material which would be relegated to performance by Chomsky (1965) and so stand outside the scope of analysis for many formal linguists. But for the discourse analyst, such \" performance \" data are 1 It is interesting, in this light, to compare the contents of several standard handbooks of discourse analysis. Brown and Yule (1986) focus heavily on pragmatics and information structure, while Schiffrin (1994) includes several chapters directly related to sociolinguistic methodologies (i.e. chapters on interactional sociolinguistics, ethnomethodology and variation analysis). Mey (1993) has three chapters on conversation analysis (a topic which Schiffrin also covers) and a chapter on \" societal pragmatics. \" Lenore Grenoble, Discourse Analysis 2 indeed relevant and may in fact be the focus of research. The focus on actual instances of language use also means that the analysis does not look at language only as an abstract system; this is a fundamental difference between formal work on syntax versus discourse analysis. This paper first provides an overview of discourse analysis and …",
"title": ""
},
{
"docid": "0a4fd637914f538a37830655f8c5df01",
"text": "Many children's books contain movable pictures with elements that can be physically opened, closed, pushed, pulled, spun, flipped, or swung. But these tangible, interactive reading experiences are inaccessible to children with visual impairments. This paper presents a set of 3D-printable models designed as building blocks for creating movable tactile pictures that can be touched, moved, and understood by children with visual impairments. Examples of these models are canvases, connectors, hinges, spinners, sliders, lifts, walls, and cutouts. They can be used to compose movable tactile pictures to convey a range of spatial concepts, such as in/out, up/down, and high/low. The design and development of these models were informed by three formative studies including 1) a survey on popular moving mechanisms in children's books and 3D-printed parts to implement them, 2) two workshops on the process creating movable tactile pictures by hand (e.g., Lego, Play-Doh), and 3) creation of wood-based prototypes and an informal testing on sighted preschoolers. Also, we propose a design language based on XML and CSS for specifying the content and structure of a movable tactile picture. Given a specification, our system can generate a 3D-printable model. We evaluate our approach by 1) transcribing six children's books, and 2) conducting six interviews on domain experts including four teachers for the visually impaired, one blind adult, two publishers at the National Braille Press, a renowned tactile artist, and a librarian.",
"title": ""
},
{
"docid": "fa5a07a89f8b52759585ea20124fb3cc",
"text": "Polycystic ovary syndrome (PCOS) is considered as a highly heterogeneous and complex disease. Dimethyldiguanide (DMBG) is widely used to improve the reproductive dysfunction in women with PCOS. However, the precise mechanism by which DMBG exerts its benefical effect on PCOS remains largely unknown. The present study was designed to explore the effects of DMBG on the changes of oxidative stress and the activation of nucleotide leukin rich polypeptide 3 (NLRP3) inflammasome in the ovaries during the development and treatment of PCOS. A letrozole-induced rat PCOS model was developed. The inflammatory status was examined by analyzing the serum high sensitive C-reactive protein (hsCRP) levels in ras. We found that DMBG treatment rescued PCOS rats, which is associated with the reduced chronic low grade inflammation in these rats. In PCOS rats, the NLRP3 and the adaptor protein apoptosis-associated speck-like protein (ASC) mRNA levels, caspase-1 activation, and IL-1β production were unregulated, which was markedly attenuated by DMBG treatment. Moreover, oxidative stress was enhanced in PCOS rats as shown by increased lipid peroxidation (LPO) and activity of superoxide dismutase (SOD) and catalase. DMBG significantly decreased LPO, while it had no effects on SOD and catalase activities. Together, these results indicate that DMBG treatment may rescue PCOS rats by suppressing oxidative stress and NLRP3 inflammasome activation in PCOS ovaries.",
"title": ""
},
{
"docid": "326493520ccb5c8db07362f412f57e62",
"text": "This paper introduces Rank-based Interactive Evolution (RIE) which is an alternative to interactive evolution driven by computational models of user preferences to generate personalized content. In RIE, the computational models are adapted to the preferences of users which, in turn, are used as fitness functions for the optimization of the generated content. The preference models are built via ranking-based preference learning, while the content is generated via evolutionary search. The proposed method is evaluated on the creation of strategy game maps, and its performance is tested using artificial agents. Results suggest that RIE is both faster and more robust than standard interactive evolution and outperforms other state-of-the-art interactive evolution approaches.",
"title": ""
},
{
"docid": "19fb05d84b3b0ac4a682a159f4935b76",
"text": "Expanding upon Simon's (1955) seminal theory, this investigation compared the choice-making strategies of maximizers and satisficers, finding that maximizing tendencies, although positively correlated with objectively better decision outcomes, are also associated with more negative subjective evaluations of these decision outcomes. Specifically, in the fall of their final year in school, students were administered a scale that measured maximizing tendencies and were then followed over the course of the year as they searched for jobs. Students with high maximizing tendencies secured jobs with 20% higher starting salaries than did students with low maximizing tendencies. However, maximizers were less satisfied than satisficers with the jobs they obtained, and experienced more negative affect throughout the job-search process. These effects were mediated by maximizers' greater reliance on external sources of information and their fixation on realized and unrealized options during the search and selection process.",
"title": ""
},
{
"docid": "6894ef6c25b77f0d3f734cd18ab45ef6",
"text": "A low-power receiver circuit in 32 nm SOI CMOS is presented, which is intended to be used in a source-synchronous link configuration. The design of the receiver was optimized for power owing to the assumption that a link protocol enables a periodic calibration during which the circuit does not have to deliver valid data. In addition, it is shown that the transceiver power and the effect of high-frequency transmit jitter can be reduced by implementing a linear equalizer only on the receive side and avoiding a transmit feed-forward equalizer (TX-FFE). On the circuit level, the receiver uses a switched-capacitor (SC) approach for the implementation of an 8-tap decision-feedback equalizer (DFE). The SC-DFE improves the timing margin relative to previous DFE implementations with current feedback, and leads to a digital-style circuit implementation with compact layout. The receiver was measured at data rates up to 13.5 Gb/s, where error free operation was verified with a PRBS-31 sequence and a channel with 32 dB attenuation at Nyquist. With the clock generation circuits amortized over eight lanes, the receiver circuit consumes 2.6 mW/Gbps from a 1.1 V supply while running at 12.5 Gb/s.",
"title": ""
},
{
"docid": "9dd3157c4c94c62e2577ace7f6c41629",
"text": "BACKGROUND\nThere is a growing concern over the addictiveness of Social Media use. Additional representative indicators of impaired control are needed in order to distinguish presumed social media addiction from normal use.\n\n\nAIMS\n(1) To examine the existence of time distortion during non-social media use tasks that involve social media cues among those who may be considered at-risk for social media addiction. (2) To examine the usefulness of this distortion for at-risk vs. low/no-risk classification.\n\n\nMETHOD\nWe used a task that prevented Facebook use and invoked Facebook reflections (survey on self-control strategies) and subsequently measured estimated vs. actual task completion time. We captured the level of addiction using the Bergen Facebook Addiction Scale in the survey, and we used a common cutoff criterion to classify people as at-risk vs. low/no-risk of Facebook addiction.\n\n\nRESULTS\nThe at-risk group presented significant upward time estimate bias and the low/no-risk group presented significant downward time estimate bias. The bias was positively correlated with Facebook addiction scores. It was efficacious, especially when combined with self-reported estimates of extent of Facebook use, in classifying people to the two categories.\n\n\nCONCLUSIONS\nOur study points to a novel, easy to obtain, and useful marker of at-risk for social media addiction, which may be considered for inclusion in diagnosis tools and procedures.",
"title": ""
},
{
"docid": "06372546b3dedb8a1af4324ce57d56f3",
"text": "Twitter1, the microblog site started in 2006, has become a social phenomenon. More than 340 million Tweets are sent out every day2. While a majority of posts are conversational or not particularly meaningful, about 3.6% of the posts concern topics of mainstream news3. Twitter has been credited with providing the most current news about many important events before traditional media, such as the attacks in Mumbai in November 2008. Twitter also played a prominent role in the unfolding of the troubles in Iran in 2009 subsequent to a disputed election, and the so-called Twitter Revolutions4 in Tunisia and Egypt in 2010-11. To help people who read Twitter posts or tweets, Twitter provides two interesting features: an API that allows users to search for posts that contain a topic phrase and a short list of popular topics called Trending Topics. A user can perform a search for a topic and retrieve a list of most recent posts that contain the topic phrase. The di culty in interpreting the results is that the returned posts are only sorted by recency, not relevancy. Therefore, the user is forced to manually read through the posts in order to understand what users are primarily saying about a particular topic. A website called WhatTheTrend5 attempts to provide definitions of trending topics by allowing users to manually enter descriptions of why a topic is trending. Here is an example of a definition from WhatTheTrend:",
"title": ""
},
{
"docid": "cf02d97cdcc1a4be51ed0af2af771b7d",
"text": "Bowen's disease is a squamous cell carcinoma in situ and has the potential to progress to a squamous cell carcinoma. The authors treated two female patients (a 39-year-old and a 41-year-old) with Bowen's disease in the vulva area using topical photodynamic therapy (PDT), involving the use of 5-aminolaevulinic acid and a light-emitting diode device. The light was administered at an intensity of 80 mW/cm(2) for a dose of 120 J/cm(2) biweekly for 6 cycles. The 39-year-old patient showed excellent clinical improvement, but the other patient achieved only a partial response. Even though one patient underwent a total excision 1 year later due to recurrence, both patients were satisfied with the cosmetic outcomes of this therapy and the partial improvement over time. The common side effect of PDT was a stinging sensation. PDT provides a relatively effective and useful alternative treatment for Bowen's disease in the vulva area.",
"title": ""
}
] |
scidocsrr
|
3d3c878f82854c985f6481fabe5de57f
|
Appraisals of Emotion-Eliciting Events : Testing a Theory of Discrete Emotions
|
[
{
"docid": "f71d0084ebb315a346b52c7630f36fb2",
"text": "A theory of motivation and emotion is proposed in which causal ascriptions play a key role. It is first documented that in achievement-related contexts there are a few dominant causal perceptions. The perceived causes of success and failure share three common properties: locus, stability, and controllability, with intentionality and globality as other possible causal structures. The perceived stability of causes influences changes in expectancy of success; all three dimensions of causality affect a variety of common emotional experiences, including anger, gratitude, guilt, hopelessness, pity, pride, and shame. Expectancy and affect, in turn, are presumed to guide motivated behavior. The theory therefore relates the structure of thinking to the dynamics of feeling and action. Analysis of a created motivational episode involving achievement strivings is offered, and numerous empirical observations are examined from this theoretical position. The strength of the empirical evidence, the capability of this theory to address prevalent human emotions, and the potential generality of the conception are stressed.",
"title": ""
}
] |
[
{
"docid": "8b46e6e341f4fdf4eb18e66f237c4000",
"text": "We present a general learning-based approach for phrase-level sentiment analysis that adopts an ordinal sentiment scale and is explicitly compositional in nature. Thus, we can model the compositional effects required for accurate assignment of phrase-level sentiment. For example, combining an adverb (e.g., “very”) with a positive polar adjective (e.g., “good”) produces a phrase (“very good”) with increased polarity over the adjective alone. Inspired by recent work on distributional approaches to compositionality, we model each word as a matrix and combine words using iterated matrix multiplication, which allows for the modeling of both additive and multiplicative semantic effects. Although the multiplication-based matrix-space framework has been shown to be a theoretically elegant way to model composition (Rudolph and Giesbrecht, 2010), training such models has to be done carefully: the optimization is nonconvex and requires a good initial starting point. This paper presents the first such algorithm for learning a matrix-space model for semantic composition. In the context of the phrase-level sentiment analysis task, our experimental results show statistically significant improvements in performance over a bagof-words model.",
"title": ""
},
{
"docid": "c385054322970c86d3f08b298aa811e2",
"text": "Recently, a small number of papers have appeared in which the authors implement stochastic search algorithms, such as evolutionary computation, to generate game content, such as levels, rules and weapons. We propose a taxonomy of such approaches, centring on what sort of content is generated, how the content is represented, and how the quality of the content is evaluated. The relation between search-based and other types of procedural content generation is described, as are some of the main research challenges in this new field. The paper ends with some successful examples of this approach.",
"title": ""
},
{
"docid": "67768b96aed92f645561c8d53357f765",
"text": "Recently, Massive Open Online Courses (MOOCs) have garnered a high level of interest in the media. With larger and larger numbers of students participating in each course, finding useful and informative threads in increasingly crowded course discussion forums becomes a challenging issue for students. In this work, we address this thread overload problem by taking advantage of an adaptive feature-based matrix factorization framework to make thread recommendations. A key component of our approach is a feature space design that effectively characterizes student behaviors in the forum in order to match threads and users. This effort includes content level modeling, social peer connections, and other forum activities. The results from our experiment conducted on one MOOC course show promise that our thread recommendation method has potential to direct students to threads they might be interested in.",
"title": ""
},
{
"docid": "b52a29cd426c5861dbb97aeb91efda4b",
"text": "In recent years, inexact computing has been increasingly regarded as one of the most promising approaches for slashing energy consumption in many applications that can tolerate a certain degree of inaccuracy. Driven by the principle of trading tolerable amounts of application accuracy in return for significant resource savings-the energy consumed, the (critical path) delay, and the (silicon) area-this approach has been limited to application-specified integrated circuits (ASICs) so far. These ASIC realizations have a narrow application scope and are often rigid in their tolerance to inaccuracy, as currently designed; the latter often determining the extent of resource savings we would achieve. In this paper, we propose to improve the application scope, error resilience and the energy savings of inexact computing by combining it with hardware neural networks. These neural networks are fast emerging as popular candidate accelerators for future heterogeneous multicore platforms and have flexible error resilience limits owing to their ability to be trained. Our results in 65-nm technology demonstrate that the proposed inexact neural network accelerator could achieve 1.78-2.67× savings in energy consumption (with corresponding delay and area savings being 1.23 and 1.46×, respectively) when compared to the existing baseline neural network implementation, at the cost of a small accuracy loss (mean squared error increases from 0.14 to 0.20 on average).",
"title": ""
},
{
"docid": "a82a658a8200285cf5a6eab8035a3fce",
"text": "This paper examines the magnitude of informational problems associated with the implementation and interpretation of simple monetary policy rules. Using Taylor’s rule as an example, I demonstrate that real-time policy recommendations differ considerably from those obtained with ex post revised data. Further, estimated policy reaction functions based on ex post revised data provide misleading descriptions of historical policy and obscure the behavior suggested by information available to the Federal Reserve in real time. These results indicate that reliance on the information actually available to policy makers in real time is essential for the analysis of monetary policy rules. (JEL E52, E58)",
"title": ""
},
{
"docid": "df0381c129339b1131897708fc00a96c",
"text": "We present a novel congestion control algorithm suitable for use with cumulative, layered data streams in the MBone. Our algorithm behaves similarly to TCP congestion control algorithms, and shares bandwidth fairly with other instances of the protocol and with TCP flows. It is entirely receiver driven and requires no per-receiver status at the sender, in order to scale to large numbers of receivers. It relies on standard functionalities of multicast routers, and is suitable for continuous stream and reliable bulk data transfer. In the paper we illustrate the algorithm, characterize its response to losses both analytically and by simulations, and analyse its behaviour using simulations and experiments in real networks. We also show how error recovery can be dealt with independently from congestion control by using FEC techniques, so as to provide reliable bulk data transfer.",
"title": ""
},
{
"docid": "ba4637dd5033fa39d1cb09edb42481ec",
"text": "In this paper we introduce a framework for best first search of minimax trees. Existing best first algorithms like SSS* and DUAL* are formulated as instances of this framework. The framework is built around the Alpha-Beta procedure. Its instances are highly practical, and readily implementable. Our reformulations of SSS* and DUAL* solve the perceived drawbacks of these algorithms. We prove their suitability for practical use by presenting test results with a tournament level chess program. In addition to reformulating old best first algorithms, we introduce an improved instance of the framework: MTD(ƒ). This new algorithm outperforms NegaScout, the current algorithm of choice of most chess programs. Again, these are not simulation results, but results of tests with an actual chess program, Phoenix.",
"title": ""
},
{
"docid": "94956e7075d0c918794c4bc1b30f50c6",
"text": "Studies have shown that patients who practice functional movements at home in conjunction with outpatient therapy show higher improvement in motor recovery. However, patients are not qualified to monitor or assess their own condition that must be reported back to the clinician. Therefore, there is a need to transmit physiological data to clinicians from patients in their home environment. This paper presents a review of wearable technology for in-home health monitoring, assessment, and rehabilitation of patients with brain and spinal cord injuries.",
"title": ""
},
{
"docid": "ac1302f482309273d9e61fdf0f093e01",
"text": "Retinal vessel segmentation is an indispensable step for automatic detection of retinal diseases with fundoscopic images. Though many approaches have been proposed, existing methods tend to miss fine vessels or allow false positives at terminal branches. Let alone undersegmentation, over-segmentation is also problematic when quantitative studies need to measure the precise width of vessels. In this paper, we present a method that generates the precise map of retinal vessels using generative adversarial training. Our methods achieve dice coefficient of 0.829 on DRIVE dataset and 0.834 on STARE dataset which is the state-of-the-art performance on both datasets.",
"title": ""
},
{
"docid": "81840452c52d61024ba5830437e6a2c4",
"text": "Motivated by a real world application, we study the multiple knapsack problem with assignment restrictions (MKAR). We are given a set of items, each with a positive real weight, and a set of knapsacks, each with a positive real capacity. In addition, for each item a set of knapsacks that can hold that item is specified. In a feasible assignment of items to knapsacks, each item is assigned to at most one knapsack, assignment restrictions are satisfied, and knapsack capacities are not exceeded. We consider the objectives of maximizing assigned weight and minimizing utilized capacity. We focus on obtaining approximate solutions in polynomial computational time. We show that simple greedy approaches yield 1/3-approximation algorithms for the objective of maximizing assigned weight. We give two different 1/2-approximation algorithms: the first one solves single knapsack problems successively and the second one is based on rounding the LP relaxation solution. For the bicriteria problem of minimizing utilized capacity subject to a minimum requirement on assigned weight, we give an (1/3,2)-approximation algorithm.",
"title": ""
},
{
"docid": "e0a8035f9e61c78a482f2e237f7422c6",
"text": "Aims: This paper introduces how substantial decision-making and leadership styles relates with each other. Decision-making styles are connected with leadership practices and institutional arrangements. Study Design: Qualitative research approach was adopted in this study. A semi structure interview was use to elicit data from the participants on both leadership styles and decision-making. Place and Duration of Study: Institute of Education international Islamic University",
"title": ""
},
{
"docid": "c1204402188570117563c6cf642e8833",
"text": "Human re-identification is defined as a requirement to determine whether a given individual has already appeared over a network of cameras. This problem is particularly hard by significant appearance changes across different camera views. In order to re-identify people a human signature should handle difference in illumination, pose and camera parameters. We propose a new appearance model combining information from multiple images to obtain highly discriminative human signature, called Mean Riemannian Covariance Grid (MRCG). The method is evaluated and compared with the state of the art using benchmark video sequences from the ETHZ and the i-LIDS datasets. We demonstrate that the proposed approach outperforms state of the art methods. Finally, the results of our approach are shown on two other more pertinent datasets.",
"title": ""
},
{
"docid": "5aef75aead029333a2e47a5d1ba52f2e",
"text": "Although we appreciate Kinney and Atwal’s interest in equitability and maximal information coefficient (MIC), we believe they misrepresent our work. We highlight a few of our main objections below. Regarding our original paper (1), Kinney and Atwal (2) state “MIC is said to satisfy not just the heuristic notion of equitability, but also the mathematical criterion of R equitability,” the latter being their formalization of the heuristic notion that we introduced. This statement is simply false. We were explicit in our paper that our claims regarding MIC’s performance were based on large-scale simulations: “We tested MIC’s equitability through simulations. . ..[These] show that, for a large collection of test functions with varied sample sizes, noise levels, and noise models, MIC roughly equals the coefficient of determination R relative to each respective noiseless function.” Although we mathematically proved several things about MIC, none of our claims imply that it satisfies Kinney and Atwal’s R equitability, which would require that MIC exactly equal R in the infinite data limit. Thus, their proof that no dependence measure can satisfy R equitability, although interesting, does not uncover any error in our work, and their suggestion that it does is a gross misrepresentation. Kinney and Atwal seem ready to toss out equitability as a useful criterion based on their theoretical result. We argue, however, that regardless of whether “perfect” equitability is possible, approximate notions of equitability remain the right goal for many data exploration settings. Just as the theory of NP completeness does not suggest we stop thinking about NP complete problems, but instead that we look for approximations and solutions in restricted cases, an impossibility result about perfect equitability provides focus for further research, but does not mean that useful solutions are unattainable. Similarly, as others have noted (3), Kinney and Atwal’s proof requires a highly permissive noise model, and so the attainability of R equitability under more limited noise models such as those in our work remains an open question. Finally, the authors argue that mutual information is more equitable than MIC. However, they provide as justification only a single noise model, only at limiting sample sizes ðn≥ 5;000Þ. As we’ve shown in followup work (4), which they themselves cite but fail to address, MIC is more equitable than mutual information estimation under many other realistic noise models even at a sample size of 5,000. Kinney and Atwal have stated, “. . .it matters how one defines noise” (5), and a useful statistic must indeed be robust to a wide range of noise models. Equally importantly, we’ve established in both our original and follow-up work that at sample size regimes less than 5,000, MIC is more equitable than mutual information estimates across all noise models tested. MIC’s superior equitability in these settings is not an “artifact” we neglected—as Kinney and Atwal suggest—but rather a weakness of mutual information estimation and an important consideration for practitioners. We expect that the understanding of equitability and MIC will improve over time and that better methods may arise. However, accurate representations of the work thus far will allow researchers in the area to most productively and collectively move forward.",
"title": ""
},
{
"docid": "99f22bc84690fc357df55484cb7c6e54",
"text": "This work presents a Text Segmentation algorithm called TopicTiling. This algorithm is based on the well-known TextTiling algorithm, and segments documents using the Latent Dirichlet Allocation (LDA) topic model. We show that using the mode topic ID assigned during the inference method of LDA, used to annotate unseen documents, improves performance by stabilizing the obtained topics. We show significant improvements over state of the art segmentation algorithms on two standard datasets. As an additional benefit, TopicTiling performs the segmentation in linear time and thus is computationally less expensive than other LDA-based segmentation methods.",
"title": ""
},
{
"docid": "2502fc02f09be72d138275a7ac41d8bc",
"text": "This manual describes the competition software for the Simulated Car Racing Championship, an international competition held at major conferences in the field of Evolutionary Computation and in the field of Computational Intelligence and Games. It provides an overview of the architecture, the instructions to install the software and to run the simple drivers provided in the package, the description of the sensors and the actuators.",
"title": ""
},
{
"docid": "02f62ec1ea8b7dba6d3a5d4ea08abe2d",
"text": "MicroRNAs (miRNAs) are short, 22–25 nucleotide long transcripts that may suppress entire signaling pathways by interacting with the 3’-untranslated region (3’-UTR) of coding mRNA targets, interrupting translation and inducing degradation of these targets. The long 3’-UTRs of brain transcripts compared to other tissues predict important roles for brain miRNAs. Supporting this notion, we found that brain miRNAs co-evolved with their target transcripts, that non-coding pseudogenes with miRNA recognition elements compete with brain coding mRNAs on their miRNA interactions, and that Single Nucleotide Polymorphisms (SNPs) on such pseudogenes are enriched in mental diseases including autism and schizophrenia, but not Alzheimer’s disease (AD). Focusing on evolutionarily conserved and primate-specifi c miRNA controllers of cholinergic signaling (‘CholinomiRs’), we fi nd modifi ed CholinomiR levels in the brain and/or nucleated blood cells of patients with AD and Parkinson’s disease, with treatment-related diff erences in their levels and prominent impact on the cognitive and anti-infl ammatory consequences of cholinergic signals. Examples include the acetylcholinesterase (AChE)-targeted evolutionarily conserved miR-132, whose levels decline drastically in the AD brain. Furthermore, we found that interruption of AChE mRNA’s interaction with the primatespecifi c CholinomiR-608 in carriers of a SNP in the AChE’s miR-608 binding site induces domino-like eff ects that reduce the levels of many other miR-608 targets. Young, healthy carriers of this SNP express 40% higher brain AChE activity than others, potentially aff ecting the responsiveness to AD’s anti-AChE therapeutics, and show elevated trait anxiety, infl ammation and hypertension. Non-coding regions aff ecting miRNA-target interactions in neurodegenerative brains thus merit special attention.",
"title": ""
},
{
"docid": "b0f13c59bb4ba0f81ebc86373ad80d81",
"text": "3D-stacked memory devices with processing logic can help alleviate the memory bandwidth bottleneck in GPUs. However, in order for such Near-Data Processing (NDP) memory stacks to be used for different GPU architectures, it is desirable to standardize the NDP architecture. Our proposal enables this standardization by allowing data to be spread across multiple memory stacks as is the norm in high-performance systems without an MMU on the NDP stack. The keys to this architecture are the ability to move data between memory stacks as required for computation, and a partitioned execution mechanism that offloads memory-intensive application segments onto the NDP stack and decouples address translation from DRAM accesses. By enhancing this system with a smart offload selection mechanism that is cognizant of the compute capability of the NDP and cache locality on the host processor, system performance and energy are improved by up to 66.8% and 37.6%, respectively.",
"title": ""
},
{
"docid": "561d8a130051ef2da6ad962eed110821",
"text": "In The Great Gatsby, Fitzgerald depicts the conflicts and contradictions between men and women about society, family, love, and money, literally mirroring the patriarchal society constantly challenged by feminism in the 1920s of America. This paper intends to compare the features of masculinism and feminism in three aspects: gender, society, and morality. Different identifications of gender role between men and women lead to female protests against male superiority and pursuits of individual liberation. Meanwhile, male unshaken egotism and gradually expanded individualism of women enable them both in lack of sound moral standards. But compared with the female, male moral pride drives them with much more proper moral judge, which reflects Fitzgerald’s support of the masculine society. Probing into the confrontation between masculinism and feminism, it is beneficial for further study on how to achieve equal coexistence and harmony between men and women.",
"title": ""
},
{
"docid": "ed83b7dc02e34894a3ab4a4325272a1c",
"text": "We present Optimal Transport GAN (OT-GAN), a variant of generative adversarial nets minimizing a new metric measuring the distance between the generator distribution and the data distribution. This metric, which we call mini-batch energy distance, combines optimal transport in primal form with an energy distance defined in an adversarially learned feature space, resulting in a highly discriminative distance function with unbiased mini-batch gradients. Experimentally we show OT-GAN to be highly stable when trained with large mini-batches, and we present state-of-the-art results on several popular benchmark problems for image generation.",
"title": ""
},
{
"docid": "ec2a377d643326c5e7f64f6f01f80a04",
"text": "October 2006 | Volume 3 | Issue 10 | e294 Cultural competency has become a fashionable term for clinicians and researchers. Yet no one can defi ne this term precisely enough to operationalize it in clinical training and best practices. It is clear that culture does matter in the clinic. Cultural factors are crucial to diagnosis, treatment, and care. They shape health-related beliefs, behaviors, and values [1,2]. But the large claims about the value of cultural competence for the art of professional care-giving around the world are simply not supported by robust evaluation research showing that systematic attention to culture really improves clinical services. This lack of evidence is a failure of outcome research to take culture seriously enough to routinely assess the cost-effectiveness of culturally informed therapeutic practices, not a lack of effort to introduce culturally informed strategies into clinical settings [3].",
"title": ""
}
] |
scidocsrr
|
2ab6be00d5d7b43ca0e224dc101d1672
|
DRAMA: Exploiting DRAM Addressing for Cross-CPU Attacks
|
[
{
"docid": "703cda264eddc139597b9ef9d4c0e977",
"text": "Multi-processor systems are becoming the de-facto standard across different computing domains, ranging from high-end multi-tenant cloud servers to low-power mobile platforms. The denser integration of CPUs creates an opportunity for great economic savings achieved by packing processes of multiple tenants or by bundling all kinds of tasks at various privilege levels to share the same platform. This level of sharing carries with it a serious risk of leaking sensitive information through the shared microarchitectural components. Microarchitectural attacks initially only exploited core-private resources, but were quickly generalized to resources shared within the CPU. We present the first fine grain side channel attack that works across processors. The attack does not require CPU co-location of the attacker and the victim. The novelty of the proposed work is that, for the first time the directory protocol of high efficiency CPU interconnects is targeted. The directory protocol is common to all modern multi-CPU systems. Examples include AMD's HyperTransport, Intel's Quickpath, and ARM's AMBA Coherent Interconnect. The proposed attack does not rely on any specific characteristic of the cache hierarchy, e.g. inclusiveness. Note that inclusiveness was assumed in all earlier works. Furthermore, the viability of the proposed covert channel is demonstrated with two new attacks: by recovering a full AES key in OpenSSL, and a full ElGamal key in libgcrypt within the range of seconds on a shared AMD Opteron server.",
"title": ""
},
{
"docid": "3d6c7d1c61afe784dc210df5f4454ce3",
"text": "Modern Intel processors use an undisclosed hash function to map memory lines into last-level cache slices. In this work we develop a technique for reverse-engineering the hash function. We apply the technique to a 6-core Intel processor and demonstrate that knowledge of this hash function can facilitate cache-based side channel attacks, reducing the amount of work required for profiling the cache by three orders of magnitude. We also show how using the hash function we can double the number of colours used for page-colouring techniques.",
"title": ""
}
] |
[
{
"docid": "46bbc38bc45d9998fcd517edd253c091",
"text": "Visual data analysis involves both open-ended and focused exploration. Manual chart specification tools support question answering, but are often tedious for early-stage exploration where systematic data coverage is needed. Visualization recommenders can encourage broad coverage, but irrelevant suggestions may distract users once they commit to specific questions. We present Voyager 2, a mixed-initiative system that blends manual and automated chart specification to help analysts engage in both open-ended exploration and targeted question answering. We contribute two partial specification interfaces: wildcards let users specify multiple charts in parallel, while related views suggest visualizations relevant to the currently specified chart. We present our interface design and applications of the CompassQL visualization query language to enable these interfaces. In a controlled study we find that Voyager 2 leads to increased data field coverage compared to a traditional specification tool, while still allowing analysts to flexibly drill-down and answer specific questions.",
"title": ""
},
{
"docid": "0343f1a0be08ff53e148ef2eb22aaf14",
"text": "Tables are a ubiquitous form of communication. While everyone seems to know what a table is, a precise, analytical definition of “tabularity” remains elusive because some bureaucratic forms, multicolumn text layouts, and schematic drawings share many characteristics of tables. There are significant differences between typeset tables, electronic files designed for display of tables, and tables in symbolic form intended for information retrieval. Most past research has addressed the extraction of low-level geometric information from raster images of tables scanned from printed documents, although there is growing interest in the processing of tables in electronic form as well. Recent research on table composition and table analysis has improved our understanding of the distinction between the logical and physical structures of tables, and has led to improved formalisms for modeling tables. This review, which is structured in terms of generalized paradigms for table processing, indicates that progress on half-a-dozen specific research issues would open the door to using existing paper and electronic tables for database update, tabular browsing, structured information retrieval through graphical and audio interfaces, multimedia table editing, and platform-independent display.",
"title": ""
},
{
"docid": "f717225fa7518383e0db362e673b9af4",
"text": "The web has become the world's largest repository of knowledge. Web usage mining is the process of discovering knowledge from the interactions generated by the user in the form of access logs, cookies, and user sessions data. Web Mining consists of three different categories, namely Web Content Mining, Web Structure Mining, and Web Usage Mining (is the process of discovering knowledge from the interaction generated by the users in the form of access logs, browser logs, proxy-server logs, user session data, cookies). Accurate web log mining results and efficient online navigational pattern prediction are undeniably crucial for tuning up websites and consequently helping in visitors’ retention. Like any other data mining task, web log mining starts with data cleaning and preparation and it ends up discovering some hidden knowledge which cannot be extracted using conventional methods. After applying web mining on web sessions we will get navigation patterns which are important for web users such that appropriate actions can be adopted. Due to huge data in web, discovery of patterns and there analysis for further improvement in website becomes a real time necessity. The main focus of this paper is using of hybrid prediction engine to classify users on the basis of discovered patterns from web logs. Our proposed framework is to overcome the problem arise due to using of any single algorithm, we will give results based on comparison of two different algorithms like Longest Common Sequence (LCS) algorithm and Frequent Pattern (Growth) algorithm. Keywords— Web Usage Mining, Navigation Pattern, Frequent Pattern (Growth) Algorithm. ________________________________________________________________________________________________________",
"title": ""
},
{
"docid": "a601abae0a3d54d4aa3ecbb4bd09755a",
"text": "Article history: Received 27 March 2008 Received in revised form 2 September 2008 Accepted 20 October 2008",
"title": ""
},
{
"docid": "fce21a54f6319bcc798914a6fc4a8125",
"text": "CRISPR-Cas systems have rapidly transitioned from intriguing prokaryotic defense systems to powerful and versatile biomolecular tools. This article reviews how these systems have been translated into technologies to manipulate bacterial genetics, physiology, and communities. Recent applications in bacteria have centered on multiplexed genome editing, programmable gene regulation, and sequence-specific antimicrobials, while future applications can build on advances in eukaryotes, the rich natural diversity of CRISPR-Cas systems, and the untapped potential of CRISPR-based DNA acquisition. Overall, these systems have formed the basis of an ever-expanding genetic toolbox and hold tremendous potential for our future understanding and engineering of the bacterial world.",
"title": ""
},
{
"docid": "1891bf842d446a7d323dc207b38ff5a9",
"text": "We use linear programming techniques to obtain new upper bounds on the maximal squared minimum distance of spherical codes with fixed cardinality. Functions Qj(n, s) are introduced with the property that Qj(n, s) < 0 for some j > m iff the Levenshtein bound Lm(n, s) on A(n, s) = max{|W | : W is an (n, |W |, s) code} can be improved by a polynomial of degree at least m+1. General conditions on the existence of new bounds are presented. We prove that for fixed dimension n ≥ 5 there exist a constant k = k(n) such that all Levenshtein bounds Lm(n, s) for m ≥ 2k− 1 can be improved. An algorithm for obtaining new bounds is proposed and discussed.",
"title": ""
},
{
"docid": "bd973cb5d5343293c9c68646dfbb1005",
"text": "The betweenness metric has always been intriguing and used in many analyses. Yet, it is one of the most computationally expensive kernels in graph mining. For that reason, making betweenness centrality computations faster is an important and well-studied problem. In this work, we propose the framework, BADIOS, which compresses a network and shatters it into pieces so that the centrality computation can be handled independently for each piece. Although BADIOS is designed and tuned for betweenness centrality, it can easily be adapted for other centrality metrics. Experimental results show that the proposed techniques can be a great arsenal to reduce the centrality computation time for various types and sizes of networks. In particular, it reduces the computation time of a 4.6 million edges graph from more than 5 days to less than 16 hours.",
"title": ""
},
{
"docid": "2e6c399dc08ed15446e2f4a67442ad11",
"text": "Anomaly detection is a critical step towards building a secure and trustworthy system. The primary purpose of a system log is to record system states and significant events at various critical points to help debug system failures and perform root cause analysis. Such log data is universally available in nearly all computer systems. Log data is an important and valuable resource for understanding system status and performance issues; therefore, the various system logs are naturally excellent source of information for online monitoring and anomaly detection. We propose DeepLog, a deep neural network model utilizing Long Short-Term Memory (LSTM), to model a system log as a natural language sequence. This allows DeepLog to automatically learn log patterns from normal execution, and detect anomalies when log patterns deviate from the model trained from log data under normal execution. In addition, we demonstrate how to incrementally update the DeepLog model in an online fashion so that it can adapt to new log patterns over time. Furthermore, DeepLog constructs workflows from the underlying system log so that once an anomaly is detected, users can diagnose the detected anomaly and perform root cause analysis effectively. Extensive experimental evaluations over large log data have shown that DeepLog has outperformed other existing log-based anomaly detection methods based on traditional data mining methodologies.",
"title": ""
},
{
"docid": "171d9acd0e2cb86a02d5ff56d4515f0d",
"text": "We explore two solutions to the problem of mistranslating rare words in neural machine translation. First, we argue that the standard output layer, which computes the inner product of a vector representing the context with all possible output word embeddings, rewards frequent words disproportionately, and we propose to fix the norms of both vectors to a constant value. Second, we integrate a simple lexical module which is jointly trained with the rest of the model. We evaluate our approaches on eight language pairs with data sizes ranging from 100k to 8M words, and achieve improvements of up to +4.3 BLEU, surpassing phrasebased translation in nearly all settings.1",
"title": ""
},
{
"docid": "4ea0ebac861cf5185721ec9dedf1e198",
"text": "Instagram is the fastest growing social network site globally. This study investigates motives for its use, and its relationship to contextual age and narcissism. A survey of 239 college students revealed that the main reasons for Instagram use are “Surveillance/Knowledge about others,” “Documentation,” “Coolness,” and “Creativity.” The next significant finding was a positive relationship between those who scored high in interpersonal interaction and using Instagram for coolness, creative purposes, and surveillance. Another interesting finding shows that there is a positive relationship between high levels of social activity (traveling, going to sporting events, visiting friends, etc.) and being motivated to use Instagram as a means of documentation. In reference to narcissism, there was a positive relationship between using Instagram to be cool and for surveillance. Theoretical contributions of this study relate to our understanding of uses and gratifications theory. This study uncovers new motives for social media use not identified in previous literature. © 2015 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "62f5feb738b59910f3e284d089145b17",
"text": "Haptic devices are dedicated to render virtual tactile stimulation. A limitation of these devices is the intrusiveness of their mechanical structures, i.e. the user need to hold or wear the device to interact with the environment. Here, we propose a concept of new tactile device named HAIR. The device is composed of a computer vision system, a mechatronic device and air jets that stimulate the skin. We designed a first prototype and conducted a preliminary experiment to validate our concept. The interface enables a tactile interaction without using physical contact with material devices, providing better freedom of movement and enhancing the interaction transparency.",
"title": ""
},
{
"docid": "a818a70bd263617eb3089cde9e9d1bb9",
"text": "The paper proposes identifying relevant information sources from the history of combined searching and browsing behavior of many Web users. While it has been previously shown that user interactions with search engines can be employed to improve document ranking, browsing behavior that occurs beyond search result pages has been largely overlooked in prior work. The paper demonstrates that users' post-search browsing activity strongly reflects implicit endorsement of visited pages, which allows estimating topical relevance of Web resources by mining large-scale datasets of search trails. We present heuristic and probabilistic algorithms that rely on such datasets for suggesting authoritative websites for search queries. Experimental evaluation shows that exploiting complete post-search browsing trails outperforms alternatives in isolation (e.g., clickthrough logs), and yields accuracy improvements when employed as a feature in learning to rank for Web search.",
"title": ""
},
{
"docid": "7d0badaeeb94658690f0809c134d3963",
"text": "Vascular tissue engineering is an area of regenerative medicine that attempts to create functional replacement tissue for defective segments of the vascular network. One approach to vascular tissue engineering utilizes seeding of biodegradable tubular scaffolds with stem (and/or progenitor) cells wherein the seeded cells initiate scaffold remodeling and prevent thrombosis through paracrine signaling to endogenous cells. Stem cells have received an abundance of attention in recent literature regarding the mechanism of their paracrine therapeutic effect. However, very little of this mechanistic research has been performed under the aegis of vascular tissue engineering. Therefore, the scope of this review includes the current state of TEVGs generated using the incorporation of stem cells in biodegradable scaffolds and potential cell-free directions for TEVGs based on stem cell secreted products. The current generation of stem cell-seeded vascular scaffolds are based on the premise that cells should be obtained from an autologous source. However, the reduced regenerative capacity of stem cells from certain patient groups limits the therapeutic potential of an autologous approach. This limitation prompts the need to investigate allogeneic stem cells or stem cell secreted products as therapeutic bases for TEVGs. The role of stem cell derived products, particularly extracellular vesicles (EVs), in vascular tissue engineering is exciting due to their potential use as a cell-free therapeutic base. EVs offer many benefits as a therapeutic base for functionalizing vascular scaffolds such as cell specific targeting, physiological delivery of cargo to target cells, reduced immunogenicity, and stability under physiological conditions. However, a number of points must be addressed prior to the effective translation of TEVG technologies that incorporate stem cell derived EVs such as standardizing stem cell culture conditions, EV isolation, scaffold functionalization with EVs, and establishing the therapeutic benefit of this combination treatment.",
"title": ""
},
{
"docid": "79f7d7dc109a9e8d2e5197de8f2d76e7",
"text": "Goal Oriented Requirements Engineering (GORE) is concerned with the identification of goals of the software according to the need of the stakeholders. In GORE, goals are the need of the stakeholders. These goals are refined and decomposed into sub-goals until the responsibility of the last goals are assigned to some agent or some software system. In literature different methods have been developed based on GORE concepts for the identification of software goals or software requirements like fuzzy attributed goal oriented software requirements analysis (FAGOSRA) method, knowledge acquisition for automated specifications (KAOS), i∗ framework, attributed goal oriented requirements analysis (AGORA) method, etc. In AGORA, decision makers use subjective values during the selection and the prioritization of software requirements. AGORA method can be extended by computing the objective values. These objective values can be obtained by using analytic hierarchy process (AHP). In AGORA, there is no support to check whether the values provided by the decision makers are consistent or not. Therefore, in order to address this issue we proposed a method for the prioritization of software requirements by applying the AHP in goal oriented requirements elicitation method. Finally, we consider an example to explain the proposed method.",
"title": ""
},
{
"docid": "e7ff760dddadf1de42cfc0553f286fe6",
"text": "Fluorine-containing amino acids are valuable probes for the biophysical characterization of proteins. Current methods for (19)F-labeled protein production involve time-consuming genetic manipulation, compromised expression systems and expensive reagents. We show that Escherichia coli BL21, the workhorse of protein production, can utilise fluoroindole for the biosynthesis of proteins containing (19)F-tryptophan.",
"title": ""
},
{
"docid": "01a215b6e55fbb41c01d7443a814b8dc",
"text": "This study aims to gather and analyze published articles regarding the influence of electronic word-ofmouth (eWOM) on the hotel industry. Articles published in the last five years appearing in six different academically recognized journals of tourism have been reviewed in the present study. Analysis of these articles has identified two main lines of research: review-generating factors (previous factors that cause consumers to write reviews) and impacts of eWOM (impacts caused by online reviews) from consumer perspective and company perspective. A summary of each study’s description, methodology and main results are outlined below, as well as an analysis of findings. This study also seeks to facilitate understanding and provide baseline information for future articles related to eWOM and hotels with the intention that researchers have a “snapshot” of previous research and the results achieved to date. © 2013 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "2354efe3a1533cd2fb23c2598d6b87c4",
"text": "It is hard to build robust systems: systems that have accept able behavior over a larger class of situations than was anticipated by their designers. The most robust systems are evolvable: they can be easily adapted to new situations with only minor mod ification. How can we design systems that are flexible in this way? Observations of biological systems tell us a great deal about how to make robust and evolvable systems. Techniques origi nally developed in support of symbolic Artificial Intelligence can be viewed as ways of enhancing robustness and evolvability in programs and other engineered systems. By contrast, common practice of computer science actively discourages the construc tion of robust systems.",
"title": ""
},
{
"docid": "b102ec1a14cda627b738fa3a2236846a",
"text": "The estimation of quality for real time services over telecommunication networks requires realistic models in order to generate impairments and failures during transmission. Starting with the classical Gilbert-Elliot model, we derive the second order statistics over arbitrary time scales and fit the parameters to match the packet loss pattern of traffic traces. The results show that simple Markov models are appropriate to capture the observed loss pattern.",
"title": ""
},
{
"docid": "3c7d25c85b837a3337c93ca2e1e54af4",
"text": "BACKGROUND\nThe treatment of acne scars with fractional CO(2) lasers is gaining increasing impact, but has so far not been compared side-by-side to untreated control skin.\n\n\nOBJECTIVE\nIn a randomized controlled study to examine efficacy and adverse effects of fractional CO(2) laser resurfacing for atrophic acne scars compared to no treatment.\n\n\nMETHODS\nPatients (n = 13) with atrophic acne scars in two intra-individual areas of similar sizes and appearances were randomized to (i) three monthly fractional CO(2) laser treatments (MedArt 610; 12-14 W, 48-56 mJ/pulse, 13% density) and (ii) no treatment. Blinded on-site evaluations were performed by three physicians on 10-point scales. Endpoints were change in scar texture and atrophy, adverse effects, and patient satisfaction.\n\n\nRESULTS\nPreoperatively, acne scars appeared with moderate to severe uneven texture (6.15 ± 1.23) and atrophy (5.72 ± 1.45) in both interventional and non-interventional control sites, P = 1. Postoperatively, lower scores of scar texture and atrophy were obtained at 1 month (scar texture 4.31 ± 1.33, P < 0.0001; atrophy 4.08 ± 1.38, P < 0.0001), at 3 months (scar texture 4.26 ± 1.97, P < 0.0001; atrophy 3.97 ± 2.08, P < 0.0001), and at 6 months (scar texture 3.89 ± 1.7, P < 0.0001; atrophy 3.56 ± 1.76, P < 0.0001). Patients were satisfied with treatments and evaluated scar texture to be mild or moderately improved. Adverse effects were minor.\n\n\nCONCLUSIONS\nIn this single-blinded randomized controlled trial we demonstrated that moderate to severe atrophic acne scars can be safely improved by ablative fractional CO(2) laser resurfacing. The use of higher energy levels might have improved the results and possibly also induced significant adverse effects.",
"title": ""
}
] |
scidocsrr
|
2c1506c5719c699dfb2d6720e7f6fae3
|
Multimodal emotion recognition from expressive faces, body gestures and speech
|
[
{
"docid": "113cf957b47a8b8e3bbd031aa9a28ff2",
"text": "We present an approach for the recognition of acted emotional states based on the analysis of body movement and gesture expressivity. According to research showing that distinct emotions are often associated with different qualities of body movement, we use nonpropositional movement qualities (e.g. amplitude, speed and fluidity of movement) to infer emotions, rather than trying to recognise different gesture shapes expressing specific emotions. We propose a method for the analysis of emotional behaviour based on both direct classification of time series and a model that provides indicators describing the dynamics of expressive motion cues. Finally we show and interpret the recognition rates for both proposals using different classification algorithms.",
"title": ""
},
{
"docid": "dadcecd178721cf1ea2b6bf51bc9d246",
"text": "8 Research on speech and emotion is moving from a period of exploratory research into one where there is a prospect 9 of substantial applications, notably in human–computer interaction. Progress in the area relies heavily on the devel10 opment of appropriate databases. This paper addresses four main issues that need to be considered in developing 11 databases of emotional speech: scope, naturalness, context and descriptors. The state of the art is reviewed. A good deal 12 has been done to address the key issues, but there is still a long way to go. The paper shows how the challenge of 13 developing appropriate databases is being addressed in three major recent projects––the Reading–Leeds project, the 14 Belfast project and the CREST–ESP project. From these and other studies the paper draws together the tools and 15 methods that have been developed, addresses the problems that arise and indicates the future directions for the de16 velopment of emotional speech databases. 2002 Published by Elsevier Science B.V.",
"title": ""
}
] |
[
{
"docid": "26d8f073cfe1e907183022564e6bde80",
"text": "With advances in computer hardware, 3D game worlds are becoming larger and more complex. Consequently the development of game worlds becomes increasingly time and resource intensive. This paper presents a framework for generation of entire virtual worlds using procedural generation. The approach is demonstrated with the example of a virtual city.",
"title": ""
},
{
"docid": "04cf981a76c74b198ebe4703d0039e36",
"text": "The acquisition of high-fidelity, long-term neural recordings in vivo is critically important to advance neuroscience and brain⁻machine interfaces. For decades, rigid materials such as metal microwires and micromachined silicon shanks were used as invasive electrophysiological interfaces to neurons, providing either single or multiple electrode recording sites. Extensive research has revealed that such rigid interfaces suffer from gradual recording quality degradation, in part stemming from tissue damage and the ensuing immune response arising from mechanical mismatch between the probe and brain. The development of \"soft\" neural probes constructed from polymer shanks has been enabled by advancements in microfabrication; this alternative has the potential to mitigate mismatch-related side effects and thus improve the quality of recordings. This review examines soft neural probe materials and their associated microfabrication techniques, the resulting soft neural probes, and their implementation including custom implantation and electrical packaging strategies. The use of soft materials necessitates careful consideration of surgical placement, often requiring the use of additional surgical shuttles or biodegradable coatings that impart temporary stiffness. Investigation of surgical implantation mechanics and histological evidence to support the use of soft probes will be presented. The review concludes with a critical discussion of the remaining technical challenges and future outlook.",
"title": ""
},
{
"docid": "0ce46853852a20e5e0ab9aacd3ec20c1",
"text": "In immunocompromised subjects, Epstein-Barr virus (EBV) infection of terminally differentiated oral keratinocytes may result in subclinical productive infection of the virus in the stratum spinosum and in the stratum granulosum with shedding of infectious virions into the oral fluid in the desquamating cells. In a minority of cases this productive infection with dysregulation of the cell cycle of terminally differentiated epithelial cells may manifest as oral hairy leukoplakia. This is a white, hyperkeratotic, benign lesion of low morbidity, affecting primarily the lateral border of the tongue. Factors that determine whether productive EBV replication within the oral epithelium will cause oral hairy leukoplakia include the fitness of local immune responses, the profile of EBV gene expression, and local environmental factors.",
"title": ""
},
{
"docid": "51c4dd282e85db5741b65ae4386f6c48",
"text": "In this paper, we present an end-to-end approach to simultaneously learn spatio-temporal features and corresponding similarity metric for video-based person re-identification. Given the video sequence of a person, features from each frame that are extracted from all levels of a deep convolutional network can preserve a higher spatial resolution from which we can model finer motion patterns. These lowlevel visual percepts are leveraged into a variant of recurrent model to characterize the temporal variation between time-steps. Features from all time-steps are then summarized using temporal pooling to produce an overall feature representation for the complete sequence. The deep convolutional network, recurrent layer, and the temporal pooling are jointly trained to extract comparable hidden-unit representations from input pair of time series to compute their corresponding similarity value. The proposed framework combines time series modeling and metric learning to jointly learn relevant features and a good similarity measure between time sequences of person. Experiments demonstrate that our approach achieves the state-of-the-art performance for video-based person re-identification on iLIDS-VID and PRID 2011, the two primary public datasets for this purpose.",
"title": ""
},
{
"docid": "c2f338aef785f0d6fee503bf0501a558",
"text": "Recognizing 3-D objects in cluttered scenes is a challenging task. Common approaches find potential feature correspondences between a scene and candidate models by matching sampled local shape descriptors and select a few correspondences with the highest descriptor similarity to identify models that appear in the scene. However, real scans contain various nuisances, such as noise, occlusion, and featureless object regions. This makes selected correspondences have a certain portion of false positives, requiring adopting the time-consuming model verification many times to ensure accurate recognition. This paper proposes a 3-D object recognition approach with three key components. First, we construct a Signature of Geometric Centroids descriptor that is descriptive and robust, and apply it to find high-quality potential feature correspondences. Second, we measure geometric compatibility between a pair of potential correspondences based on isometry and three angle-preserving components. Third, we perform effective correspondence selection by using both descriptor similarity and compatibility with an auxiliary set of “less” potential correspondences. Experiments on publicly available data sets demonstrate the robustness and/or efficiency of the descriptor, selection approach, and recognition framework. Comparisons with the state-of-the-arts validate the superiority of our recognition approach, especially under challenging scenarios.",
"title": ""
},
{
"docid": "3e9f98a1aa56e626e47a93b7973f999a",
"text": "This paper presents a sociocultural knowledge ontology (OntoSOC) modeling approach. OntoSOC modeling approach is based on Engeström‟s Human Activity Theory (HAT). That Theory allowed us to identify fundamental concepts and relationships between them. The top-down precess has been used to define differents sub-concepts. The modeled vocabulary permits us to organise data, to facilitate information retrieval by introducing a semantic layer in social web platform architecture, we project to implement. This platform can be considered as a « collective memory » and Participative and Distributed Information System (PDIS) which will allow Cameroonian communities to share an co-construct knowledge on permanent organized activities.",
"title": ""
},
{
"docid": "77d0845463db0f4e61864b37ec1259b7",
"text": "A new form of the variational autoencoder (VAE) is proposed, based on the symmetric KullbackLeibler divergence. It is demonstrated that learning of the resulting symmetric VAE (sVAE) has close connections to previously developed adversarial-learning methods. This relationship helps unify the previously distinct techniques of VAE and adversarially learning, and provides insights that allow us to ameliorate shortcomings with some previously developed adversarial methods. In addition to an analysis that motivates and explains the sVAE, an extensive set of experiments validate the utility of the approach.",
"title": ""
},
{
"docid": "d1f8ee3d6dbc7ddc76b84ad2b0bfdd16",
"text": "Cognitive radio technology addresses the limited availability of wireless spectrum and inefficiency of spectrum usage. Cognitive Radio (CR) devices sense their environment, detect spatially unused spectrum and opportunistically access available spectrum without creating harmful interference to the incumbents. In cellular systems with licensed spectrum, the efficient utilization of the spectrum as well as the protection of primary users is equally important, which imposes opportunities and challenges for the application of CR. This paper introduces an experimental framework for 5G cognitive radio access in current 4G LTE cellular systems. It can be used to study CR concepts in different scenarios, such as 4G to 5G system migrations, machine-type communications, device-to-device communications, and load balancing. Using our framework, selected measurement results are presented that compare Long Term Evolution (LTE) Orthogonal Frequency Division Multiplex (OFDM) with a candidate 5G waveform called Generalized Frequency Division Multiplexing (GFDM) and quantify the benefits of GFDM in CR scenarios.",
"title": ""
},
{
"docid": "1d935fd69bcc3aca58f03e5d34892076",
"text": "• Healthy behaviour interventions should be initiated in people newly diagnosed with type 2 diabetes. • In people with type 2 diabetes with A1C <1.5% above the person’s individualized target, antihyperglycemic pharmacotherapy should be added if glycemic targets are not achieved within 3 months of initiating healthy behaviour interventions. • In people with type 2 diabetes with A1C ≥1.5% above target, antihyperglycemic agents should be initiated concomitantly with healthy behaviour interventions, and consideration could be given to initiating combination therapy with 2 agents. • Insulin should be initiated immediately in individuals with metabolic decompensation and/or symptomatic hyperglycemia. • In the absence of metabolic decompensation, metformin should be the initial agent of choice in people with newly diagnosed type 2 diabetes, unless contraindicated. • Dose adjustments and/or additional agents should be instituted to achieve target A1C within 3 to 6 months. Choice of second-line antihyperglycemic agents should be made based on individual patient characteristics, patient preferences, any contraindications to the drug, glucose-lowering efficacy, risk of hypoglycemia, affordability/access, effect on body weight and other factors. • In people with clinical cardiovascular (CV) disease in whom A1C targets are not achieved with existing pharmacotherapy, an antihyperglycemic agent with demonstrated CV outcome benefit should be added to antihyperglycemic therapy to reduce CV risk. • In people without clinical CV disease in whom A1C target is not achieved with current therapy, if affordability and access are not barriers, people with type 2 diabetes and their providers who are concerned about hypoglycemia and weight gain may prefer an incretin agent (DPP-4 inhibitor or GLP-1 receptor agonist) and/or an SGLT2 inhibitor to other agents as they improve glycemic control with a low risk of hypoglycemia and weight gain. • In people receiving an antihyperglycemic regimen containing insulin, in whom glycemic targets are not achieved, the addition of a GLP-1 receptor agonist, DPP-4 inhibitor or SGLT2 inhibitor may be considered before adding or intensifying prandial insulin therapy to improve glycemic control with less weight gain and comparable or lower hypoglycemia risk.",
"title": ""
},
{
"docid": "409f3b2768a8adf488eaa6486d1025a2",
"text": "The aim of the study was to investigate prospectively the direction of the relationship between adolescent girls' body dissatisfaction and self-esteem. Participants were 242 female high school students who completed questionnaires at two points in time, separated by 2 years. The questionnaire contained measures of weight (BMI), body dissatisfaction (perceived overweight, figure dissatisfaction, weight satisfaction) and self-esteem. Initial body dissatisfaction predicted self-esteem at Time 1 and Time 2, and initial self-esteem predicted body dissatisfaction at Time 1 and Time 2. However, linear panel analysis (regression analyses controlling for Time 1 variables) found that aspects of Time 1 weight and body dissatisfaction predicted change in self-esteem, but not vice versa. It was concluded that young girls with heavier actual weight and perceptions of being overweight were particularly vulnerable to developing low self-esteem.",
"title": ""
},
{
"docid": "fc2a7c789f742dfed24599997845b604",
"text": "An axially symmetric power combiner, which utilizes a tapered conical impedance matching network to transform ten 50-Omega inputs to a central coaxial line over the X-band, is presented. The use of a conical line allows standard transverse electromagnetic design theory to be used, including tapered impedance matching networks. This, in turn, alleviates the problem of very low impedance levels at the common port of conical line combiners, which normally requires very high-precision manufacturing and assembly. The tapered conical line is joined to a tapered coaxial line for a completely smooth transmission line structure. Very few full-wave analyses are needed in the design process since circuit models are optimized to achieve a wide operating bandwidth. A ten-way prototype was developed at X-band with a 47% bandwidth, very low losses, and excellent agreement between simulated and measured results.",
"title": ""
},
{
"docid": "6006d2a032b60c93e525a8a28828cc7e",
"text": "Recent advances in genome engineering indicate that innovative crops developed by targeted genome modification (TGM) using site-specific nucleases (SSNs) have the potential to avoid the regulatory issues raised by genetically modified organisms. These powerful SSNs tools, comprising zinc-finger nucleases, transcription activator-like effector nucleases, and clustered regulatory interspaced short palindromic repeats/CRISPR-associated systems, enable precise genome engineering by introducing DNA double-strand breaks that subsequently trigger DNA repair pathways involving either non-homologous end-joining or homologous recombination. Here, we review developments in genome-editing tools, summarize their applications in crop organisms, and discuss future prospects. We also highlight the ability of these tools to create non-transgenic TGM plants for next-generation crop breeding.",
"title": ""
},
{
"docid": "98269ed4d72abecb6112c35e831fc727",
"text": "The goal of this article is to place the role that social media plays in collective action within a more general theoretical structure, using the events of the Arab Spring as a case study. The article presents two broad theoretical principles. The first is that one cannot understand the role of social media in collective action without first taking into account the political environment in which they operate. The second principle states that a significant increase in the use of the new media is much more likely to follow a significant amount of protest activity than to precede it. The study examines these two principles using political, media, and protest data from twenty Arab countries and the Palestinian Authority. The findings provide strong support for the validity of the claims.",
"title": ""
},
{
"docid": "2348652010d1dec37a563e3eed15c090",
"text": "This study firstly examines the current literature concerning ERP implementation problems during implementation phases and causes of ERP implementation failure. A multiple case study research methodology was adopted to understand “why” and “how” these ERP systems could not be implemented successfully. Different stakeholders (including top management, project manager, project team members and ERP consultants) from these case studies were interviewed, and ERP implementation documents were reviewed for triangulation. An ERP life cycle framework was applied to study the ERP implementation process and the associated problems in each phase of ERP implementation. Fourteen critical failure factors were identified and analyzed, and three common critical failure factors (poor consultant effectiveness, project management effectiveness and poo555îr quality of business process re-engineering) were examined and discussed. Future research on ERP implementation and critical failure factors is discussed. It is hoped that this research will help to bridge the current literature gap and provide practical advice for both academics and practitioners.",
"title": ""
},
{
"docid": "1ef814163a5c91155a2d7e1b4b19f4d7",
"text": "In this article, a frequency reconfigurable fractal patch antenna using pin diodes is proposed and studied. The antenna structure has been designed on FR-4 low-cost substrate material of relative permittivity εr = 4.4, with a compact volume of 30×30×0.8 mm3. The bandwidth and resonance frequency of the antenna design will be increased when we exploit the fractal iteration on the patch antenna. This antenna covers some service bands such as: WiMAX, m-WiMAX, WLAN, C-band and X band applications. The simulation of the proposed antenna is carried out using CST microwave studio. The radiation pattern and S parameter are further presented and discussed.",
"title": ""
},
{
"docid": "2c79e4e8563b3724014a645340b869ce",
"text": "Development of linguistic technologies and penetration of social media provide powerful possibilities to investigate users' moods and psychological states of people. In this paper we discussed possibility to improve accuracy of stock market indicators predictions by using data about psychological states of Twitter users. For analysis of psychological states we used lexicon-based approach, which allow us to evaluate presence of eight basic emotions in more than 755 million tweets. The application of Support Vectors Machine and Neural Networks algorithms to predict DJIA and S&P500 indicators are discussed.",
"title": ""
},
{
"docid": "fabcb243bff004279cfb5d522a7bed4b",
"text": "Vein pattern is the network of blood vessels beneath person’s skin. Vein patterns are sufficiently different across individuals, and they are stable unaffected by ageing and no significant changed in adults by observing. It is believed that the patterns of blood vein are unique to every individual, even among twins. Finger vein authentication technology has several important features that set it apart from other forms of biometrics as a highly secure and convenient means of personal authentication. This paper presents a finger-vein image matching method based on minutiae extraction and curve analysis. This proposed system is implemented in MATLAB. Experimental results show that the proposed method performs well in improving finger-vein matching accuracy.",
"title": ""
},
{
"docid": "6deab7156f09594f497806d6f6ad2a27",
"text": "The development of the Multidimensional Health Locus of Control scales is described. Scales have been developed to tap beliefs that the source of reinforcements for health-related behaviors is primarily internal, a matter of chance, or under the control of powerful others. These scales are based on earlier work with a general Health Locus of Control Scale, which, in turn, was developed from Rotter's social learning theory. Equivalent forms of the scales are presented along with initial internal consistency and validity data. Possible means of utilizing these scales are provided.",
"title": ""
},
{
"docid": "027e10898845955beb5c81518f243555",
"text": "As the field of Natural Language Processing has developed, research has progressed on ambitious semantic tasks like Recognizing Textual Entailment (RTE). Systems that approach these tasks may perform sophisticated inference between sentences, but often depend heavily on lexical resources like WordNet to provide critical information about relationships and entailments between lexical items. However, lexical resources are expensive to create and maintain, and are never fully comprehensive. Distributional Semantics has long provided a method to automatically induce meaning representations for lexical items from large corpora with little or no annotation efforts. The resulting representations are excellent as proxies of semantic similarity: words will have similar representations if their semantic meanings are similar. Yet, knowing two words are similar does not tell us their relationship or whether one entails the other. We present several models for identifying specific relationships and entailments from distributional representations of lexical semantics. Broadly, this work falls into two distinct but related areas: the first predicts specific ontology relations and entailment decisions between lexical items devoid of context; and the second predicts specific lexical paraphrases in complete sentences. We provide insight and analysis of how and why our models are able to generalize to novel lexical items and improve upon prior work. We propose several shortand long-term extensions to our work. In the short term, we propose applying one of our hypernymy-detection models to other relationships and evaluating our more recent work in an end-to-end RTE system. In the long-term, we propose adding consistency constraints to our lexical relationship prediction, better integration of context into our lexical paraphrase model, and new distributional models for improving word representations.",
"title": ""
},
{
"docid": "bffbc725b52468b41c53b156f6eadedb",
"text": "This paper presents the design and experimental evaluation of an underwater robot that is propelled by a pair of lateral undulatory fins, inspired by the locomotion of rays and cuttlefish. Each fin mechanism is comprised of three individually actuated fin rays, which are interconnected by an elastic membrane. An on-board microcontroller generates the rays’ motion pattern that result in the fins’ undulations, through which propulsion is generated. The prototype, which is fully untethered and energetically autonomous, also integrates an Inertial Measurement Unit for navigation purposes, a wireless communication module, and a video camera for recording underwater footage. Due to its small size and low manufacturing cost, the developed prototype can also serve as an educational platform for underwater robotics.",
"title": ""
}
] |
scidocsrr
|